\section{Introduction} \label{sec:introduction} Ever since the widespread application of computers in numerical mathematics and even before, finite difference methods have been successfully applied to differential equations. An important task is the development and investigation of stable and well-behaved numerical methods. While some general purpose methods can give satisfying results under certain circumstances, schemes that have been developed specifically for the target equation can be advantageous, e.g. if some properties of operators or solutions at the continuous level are mimicked discretely. This has been the goal of, e.g., geometric numerical integration methods for ordinary differential equations, cf. \cite{hairer2003geometric,hairer2006geometric}. In this regard, a well-developed theory of the problem at the continuous (differential equation) level is very important since it can be used as a guideline for the development of (semi-) discretisations. For linear systems of hyperbolic conservation laws, energy estimates play a fundamental role in the analysis of well-posedness \cite{gustafsson2013time}. An important technique is integration-by-parts. Thus, summation-by-parts (SBP) as a discrete analogue has been very successful, since manipulations at the continuous level can be mimicked discretely, yielding stability and conservation results, cf. \cite{kreiss1974finite,strand1994summation,carpenter1994time,carpenter1999stable}. Further references and results can be found in the review articles \cite{svard2014review,fernandez2014review}. Considering nonlinear equations such as scalar conservation laws, functions of locally bounded variation play an important role. Solutions of quasilinear equations can become discontinuous in finite time, even if smooth initial data and coefficients are given \cite{dafermos2010hyperbolic}. In his seminal work \cite{volpert1967spaces}, Vol'pert investigated functions of locally bounded variation and their application to conservation laws. Since such functions can be discontinuous, he developed a corresponding notion of derivatives as measures. Moreover, he investigated products and compositions of functions of locally bounded variation and developed corresponding product and chain rules. Furthermore, the concept of bounded variation is important for the analysis of numerical methods for conservation laws since it implies compactness properties (Helly's theorem), cf. \cite{harten1987nonlinearity}. The investigation of semidiscretisations satisfying a single entropy inequality has received much interest, cf. \cite{tadmor1987numerical,tadmor2003entropy,lefloch2002fully,sjogreen2010skew,fjordholm2012arbitrarily,fisher2013high,gassner2016well,wintermeyer2017entropy,gassner2016split,ranocha2017shallow,ranocha2017comparison,ranocha2018generalised}. For some conservation laws such as Burgers' equations, conservative corrections to the product rule can be used to obtain $L^2$ dissipative schemes, cf. \cite{gassner2013skew,ranocha2016summation,ranocha2017extended}. Therefore, it is interesting whether the chain and product rules for functions of bounded variation have discrete analogues. Furthermore, the investigation of numerical dissipation operators has received much interest, cf. \cite{vonneumann1950method,mattsson2004stable,svard2009shock,ranocha2018stability}. Such operators can be motivated by the vanishing viscosity approach to conservation laws, cf. \cite{bianchini2005vanishing}. 
For general entropies, the investigation of dissipation induced by such terms relies on the chain rule, cf. \cite[Proof of Theorem~I.3.4]{lefloch2002hyperbolic}. Thus, it is natural to investigate the entropy dissipation of difference approximations. This article is structured as follows. At first, functions of bounded variation are briefly reviewed in section~\ref{sec:BV}, focusing on the chain and product rules. Next, corresponding difference operators are investigated in section~\ref{sec:difference-operators}. It is proven that there are analogous product and chain rules for classical second order periodic and SBP operators (Lemma~\ref{lem:product-rule} and Lemma~\ref{lem:chain-rule}). Furthermore, it is proven that such analogues do not exist for higher order difference approximations of the first derivative (Theorem~\ref{thm:product-rule} and Theorem~\ref{thm:chain-rule}). Thereafter, dissipation operators approximating second derivatives with possibly varying coefficients are investigated in section~\ref{sec:laplace}. It is proven that certain second order difference operators are dissipative for every convex entropy (Theorem~\ref{thm:laplace-2nd-order}). Moreover, it is shown that such a result is impossible for discrete derivative operators with higher order of accuracy (Theorem~\ref{thm:laplace-higher-order}). Finally, a summary and discussion is given in section~\ref{sec:summary}. \section{Functions of Bounded Variation} \label{sec:BV} Functions of locally bounded variation, i.e. those locally integrable functions whose distributional first derivatives are Radon measures, play an important role in analysis, for example in the theory of scalar conservation laws as described in the seminal work of Vol'pert \cite{volpert1967spaces}. Further results about conservation laws and references can be found in the monograph \cite{dafermos2010hyperbolic}, e.g. Theorem~6.2.6 and chapter~XI. Some general results about functions of bounded variation can be found in \cite{volpert1985analysis,evans2015measure}. For functions of locally bounded variation, a product of a possibly discontinuous function and a measure occurs in both the chain rule and the product rule. If the function is integrable with respect to the measure, this product is well-defined as a measure, cf. \cite{volpert1967spaces}. In one space dimension, a function of bounded variation is continuous almost everywhere and the limits from the left and the right exist everywhere. If $u \in \operatorname{BV}_\mathrm{loc}([a,b]; \mathbb{R}^m)$ and $g\colon \mathbb{R}^m \to \mathbb{R}$ is (for simplicity) continuous, then Vol'pert \cite{volpert1967spaces} defined the averaged composition of $g$ and $u$ via \begin{equation} \widehat{g(u)}(x) := \int_0^1 g\bigl( u_- + s (u_+ - u_-) \bigr) \dif s, \end{equation} where $u_\pm = \lim_{\epsilon \searrow 0} u(x \pm \epsilon)$ are the unique limits of $u$ from the left and right hand side, respectively. With this definition, the following chain and product rules have been obtained in \cite[Section~13]{volpert1967spaces}. \begin{theorem} If $u \in \operatorname{BV}([a,b]; \mathbb{R}^m)$ and $f \in C^1(\mathbb{R}^m; \mathbb{R})$, the averaged composition $\widehat{\partial_{u_k} f(u)}$ is locally integrable with respect to the measure $\partial_x u_k$ for $k \in \set{1,\dots,m}$, $f(u) \in \operatorname{BV}_\mathrm{loc}$, and \begin{equation} \label{eq:BV-chain-rule} \partial_x f(u) = \sum_{k=1}^m \widehat{\partial_{u_k} f(u)} \, \partial_x u_k. 
\end{equation} In particular, for $\mathfrak{u}, \mathfrak{v} \in \operatorname{BV}[a,b]$, \begin{equation} \label{eq:BV-product-rule} \partial_x (\mathfrak{u} \mathfrak{v}) = \widehat{\mathfrak{u}} \partial_x \mathfrak{v} + \widehat{\mathfrak{v}} \partial_x \mathfrak{u}. \end{equation} \end{theorem} \begin{remark} \label{rem:scalar-vs-vector-valued} Sometimes, it might be useful to distinguish vector valued functions $u \in \operatorname{BV}([a,b]; \mathbb{R}^m)$, $m \geq 2$, and scalar valued functions $\mathfrak{u} \in \operatorname{BV}([a,b]; \mathbb{R}^1)$ explicitly. In this case, a Fraktur font will be used for scalar valued functions. Nevertheless, the case $m = 1$ is not excluded for vector valued functions $u \in \operatorname{BV}([a,b]; \mathbb{R}^m)$ if not stated otherwise. If it is clear from the context whether a function is scalar valued or may be vector valued, the usual font will be used for simplicity. \end{remark} \begin{remark} \label{rem:CR-as-PR} If $\mathfrak{u}, \mathfrak{v} \in \operatorname{BV}[a,b]$, define $u \in \operatorname{BV}([a,b]; \mathbb{R}^2)$ by $u(x) = (\mathfrak{u}(x), \mathfrak{v}(x))$. Considering the function $f \in C^1(\mathbb{R}^2; \mathbb{R})$, given by $f(u) = f(u_1, u_2) = u_1 u_2$, the chain rule \eqref{eq:BV-chain-rule} becomes \begin{equation} \partial_x (\mathfrak{u} \mathfrak{v}) = \partial_x f(u) = \sum_{k=1}^2 \widehat{\partial_{u_k} f(u)} \, \partial_x u_k = \widehat{\mathfrak{v}} \partial_x \mathfrak{u} + \widehat{\mathfrak{u}} \partial_x \mathfrak{v}. \end{equation} Thus, the product rule \eqref{eq:BV-product-rule} is indeed a special case of the chain rule \eqref{eq:BV-chain-rule}. \end{remark} The product rule \eqref{eq:BV-product-rule} is also proven in the monograph \cite[Section~6.4]{volpert1985analysis}. A generalisation of the corresponding definition of a possibly nonconservative product $f(u) \partial_x v$ has been developed and investigated by Dal~Maso, LeFloch, and Murat \cite{dalmaso1995definition}. See also \cite{raymond1996new,lefloch1999representation} for further studies. In order to illustrate the general theory described above and lay some foundations for the following comparison with discrete derivative operators, two examples using jump functions will be considered. If $u, v \in \operatorname{BV}[a,b]$ are (scalar valued) jump functions, \begin{equation} u(x) = \begin{cases} u_-, & x < 0, \\ u_+, & x > 0, \end{cases} \qquad v(x) = \begin{cases} v_-, & x < 0, \\ v_+, & x > 0, \end{cases} \end{equation} they are of bounded variation and their derivatives are Radon measures. In particular, the derivatives $\partial_x u$ and $\partial_x v$ are multiples of the Dirac measure centred at zero. Viewing such a measure as a function mapping (measurable) sets to real numbers, \begin{equation} (\partial_x u)(A) = \begin{cases} u_+ - u_-, & \text{if } 0 \in A, \\ 0, & \text{else}. \end{cases} \end{equation} Thus, the product rule \eqref{eq:BV-product-rule} becomes in this case \begin{equation} \label{eq:BV-product-rule-step-function} \begin{aligned} \bigl( \partial_x (u v) \bigr)\bigl( \set{0} \bigr) &= u_+ v_+ - u_- v_- \\ &= \frac{u_+ + u_-}{2} (v_+ - v_-) + \frac{v_+ + v_-}{2} (u_+ - u_-) \\ &= \bigl( \widehat{u} \partial_x v \bigr)\bigl( \set{0} \bigr) + \bigl( \widehat{v} \partial_x u \bigr)\bigl( \set{0} \bigr), \end{aligned} \end{equation} where the measures on both sides of \eqref{eq:BV-product-rule} have been applied to the set $\set{0}$ containing only the jump point. 
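The identity \eqref{eq:BV-product-rule-step-function} can be verified in a few lines of Python; a minimal sketch, with purely illustrative jump values (\texttt{um}, \texttt{up}, \texttt{vm}, \texttt{vp} denote the one-sided limits $u_\pm$, $v_\pm$):
\begin{verbatim}
# Check of the jump-function product rule: the jump of u*v equals the
# averaged compositions times the jumps of v and u, respectively.
um, up = -0.7, 1.2     # one-sided limits of u (illustrative values)
vm, vp = 0.3, -2.0     # one-sided limits of v

jump_uv = up * vp - um * vm                        # (d/dx (u v))({0})
avg_u, avg_v = 0.5 * (up + um), 0.5 * (vp + vm)    # averaged compositions
print(jump_uv)                                     # -2.19
print(avg_u * (vp - vm) + avg_v * (up - um))       # -2.19 as well
\end{verbatim}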
Similarly, for $f \in C^1$, the chain rule \eqref{eq:BV-chain-rule} becomes \begin{multline} \label{eq:BV-chain-rule-step-function} \bigl( \partial_x f(u) \bigr)\bigl(\! \set{0} \!\bigr) = f(u_+) - f(u_-) = \int_0^1 f'\bigl( u_- + s (u_+ - u_-) \bigr) \dif s \,\cdot (u_+ - u_-) \\ = \bigl( \widehat{f'(u)} \cdot \partial_x u \bigr)\bigl(\! \set{0} \!\bigr). \end{multline} Of course, the intermediate steps of \eqref{eq:BV-product-rule-step-function} and \eqref{eq:BV-chain-rule-step-function} can also be seen as proofs of the general product and chain rule in this special case. Interpreting the difference $u_+ - u_-$ as a discrete derivative, \eqref{eq:BV-product-rule-step-function} is a discrete product rule and \eqref{eq:BV-chain-rule-step-function} is a discrete chain rule. Both use averages instead of the usual point values occurring in the continuous analogues for differentiable functions. Thus, it is interesting whether this can be generalised. \section{Difference Operators} \label{sec:difference-operators} Consider a general discrete derivative/difference operator $D$, acting on grid functions $\vec{u} = (\vec{u}_i)_i = \bigl( u(x_i) \bigr)_i$ defined on a possibly non-uniform grid with nodes $x_i \in \mathbb{R}$ and $h := \min_i (x_{i+1} - x_i) > 0$. Note that this includes both classical finite difference operators and spectral collocation operators such as nodal discontinuous Galerkin ones. In practice, the grid function is represented by the vector of its point values and the discrete derivative operator by a matrix with entries $D_{ij}$. General nonlinear operations such as composition or multiplication are conducted pointwise, i.e. if $\vec{u}$ and $\vec{v}$ are two grid functions, their product $\vec{uv}$ is the grid function with components $(\vec{uv})_i = \vec{u}_i \vec{v}_i$. \subsection{Classical Second Order Derivative Operators} The classical second order finite difference operator on a uniform grid is given by \begin{equation} (D \vec{u})_i = \frac{\vec{u}_{i+1} - \vec{u}_{i-1}}{2 h} \approx u'(x_i). \end{equation} The corresponding summation-by-parts (SBP) operator uses this stencil in the interior and --- if the nodes $x_0, \dots, x_N$ are used --- the boundary closures \begin{equation} (D \vec{u})_0 = \frac{\vec{u}_1 - \vec{u}_0}{h} \approx u'(x_0), \qquad (D \vec{u})_N = \frac{\vec{u}_N - \vec{u}_{N-1}}{h} \approx u'(x_N). \end{equation} Analogously to the product rule \eqref{eq:BV-product-rule-step-function} for a step function of bounded variation, considering scalar valued grid functions $\vec{u}$ and $\vec{v}$, \begin{equation} \begin{aligned} (D \vec{u} \vec{v})_i &= \frac{\vec{u}_{i+1} \vec{v}_{i+1} - \vec{u}_{i-1} \vec{v}_{i-1}}{2 h} \\ &= \frac{\vec{u}_{i+1} + \vec{u}_{i-1}}{2} \frac{\vec{v}_{i+1} - \vec{v}_{i-1}}{2 h} + \frac{\vec{u}_{i+1} - \vec{u}_{i-1}}{2 h} \frac{\vec{v}_{i+1} + \vec{v}_{i-1}}{2} \\ &= (A \vec{u})_i (D \vec{v})_i + (D \vec{u})_i (A \vec{v})_i, \end{aligned} \end{equation} if the averaging operator $A$ is defined by \begin{equation} (A \vec{u})_i = \frac{\vec{u}_{i+1} + \vec{u}_{i-1}}{2} \approx u(x_i). 
\end{equation} For the corresponding SBP operator, the terms at the left boundary are \begin{equation} \begin{aligned} (D \vec{u} \vec{v})_0 &= \frac{\vec{u}_{1} \vec{v}_{1} - \vec{u}_{0} \vec{v}_{0}}{h} \\ &= \frac{\vec{u}_{1} + \vec{u}_{0}}{2} \frac{\vec{v}_{1} - \vec{v}_{0}}{h} + \frac{\vec{u}_{1} - \vec{u}_{0}}{h} \frac{\vec{v}_{1} + \vec{v}_{0}}{2} = (A \vec{u})_0 (D \vec{v})_0 + (D \vec{u})_0 (A \vec{v})_0, \end{aligned} \end{equation} if the boundary closures of $A$ are given by \begin{equation} (A \vec{u})_0 = \frac{\vec{u}_1 + \vec{u}_0}{2} \approx u(x_0), \qquad (A \vec{u})_N = \frac{\vec{u}_N + \vec{u}_{N-1}}{2} \approx u(x_N). \end{equation} The terms at the right boundary are similar. This is summed up in \begin{lemma} \label{lem:product-rule} The classical second order derivative operator $D$ (on a periodic grid or with boundary closures given above) fulfils the product rule \begin{equation} D (\vec{u} \vec{v}) = (A \vec{u}) (D \vec{v}) + (D \vec{u} ) (A \vec{v}), \end{equation} where the averaging operator $A$ defined above is of the same order of accuracy as the derivative operator $D$, i.e. it fulfils $(A \vec{u})_i = u(x_i) + \O(h^2)$ in the interior and $(A \vec{u})_{0,N} = u(x_{0,N}) + \O(h)$ at the boundaries for a smooth function $u$. \end{lemma} Similarly, a general chain rule as discrete analogue of \eqref{eq:BV-chain-rule-step-function} is satisfied. Indeed, if $f$ is continuously differentiable and $\vec{u}$ a possibly vector valued grid function, \begin{equation} \begin{aligned} \bigl( D f(\vec{u}) \bigr)_i &= \frac{f(\vec{u}_{i+1}) - f(\vec{u}_{i-1})}{2h} \\ &= \underbrace{\int_0^1 f'\bigl( \vec{u}_{i-1} + s (\vec{u}_{i+1} - \vec{u}_{i-1}) \bigr) \dif s}_{=:(A_{f'} \vec{u})_i} \cdot \frac{\vec{u}_{i+1} - \vec{u}_{i-1}}{2h} = (A_{f'} \vec{u})_i \cdot (D \vec{u})_i \end{aligned} \end{equation} for interior nodes, where the possibly nonlinear averaging operator $A_{f'}$ has been introduced. At the boundary nodes, it is given by \begin{equation} \begin{aligned} (A_{f'} \vec{u})_0 &= \int_0^1 f'\bigl( \vec{u}_{0} + s (\vec{u}_{1} - \vec{u}_{0}) \bigr) \dif s \approx f'\bigl( u(x_0) \bigr), \\ (A_{f'} \vec{u})_N &= \int_0^1 f'\bigl( \vec{u}_{N-1} + s (\vec{u}_{N} - \vec{u}_{N-1}) \bigr) \dif s \approx f'\bigl( u(x_N) \bigr). \end{aligned} \end{equation} This is summed up in \begin{lemma} \label{lem:chain-rule} The classical second order derivative operator $D$ (on a periodic grid or with boundary closures) satisfies the chain rule \begin{equation} D f(\vec{u}) = (A_{f'} \vec{u}) \cdot (D \vec{u}), \end{equation} where the averaging operator $A_{f'}$ defined above is of the same order of accuracy as the derivative operator $D$, i.e. it fulfils $(A_{f'} \vec{u})_i = f'\bigl( u(x_i) \bigr) + \O(h^2)$ in the interior and $(A_{f'} \vec{u})_{0,N} = f'\bigl( u(x_{0,N}) \bigr) + \O(h)$ at the boundaries for smooth (and possibly vector valued) functions $u$ and $f$. \end{lemma} \begin{remark} \label{rem:A-Af'} The averaging operator $A$ used for the product rule is a special case of the general averaging operator $A_{f'}$. Indeed, $A = A_{\operatorname{id}}$, where $\operatorname{id}$ is the identity mapping. \end{remark} \begin{remark} In general, $A_{f'}$ is neither a linear operator nor an averaging operator acting on $f'(\vec{u})$. Instead, it is a possibly nonlinear operator that uses intermediate values of $\vec{u}$ to average $f'$. It is linear if and only if $f'$ is linear, in particular in the case $f' = \operatorname{id}$, i.e. 
$A_{\operatorname{id}} = A$ discussed in Remark~\ref{rem:A-Af'}. \end{remark} \subsection{Higher Order Derivative Operators} The product and chain rules for second order derivative operators cannot be generalised to higher order derivative operators. In order to prove this, the asymptotic expansion of the error of the derivative operator will be used. \begin{lemma} \label{lem:asymptotic-expansion-D1} Assume that $D$ is a discrete derivative operator of order $p$, i.e. $(D \vec{u})_i = u'(x_i) + \O(h^p)$ or, equivalently, $D$ is exact for polynomials of degree $\leq p$, with $p$ maximal. If $u$ is a smooth scalar-valued function, \begin{equation} (D \vec{u})_i = u'(x_i) + u^{(p+1)}(x_i) C^D_i h^p + \O(h^{p+1}), \end{equation} where $C_i^D h^p = \O(h^p)$ depends only on the grid and the derivative operator. \end{lemma} \begin{proof} By Taylor expansion, using the exactness of $D$ for polynomials of degree $\leq p$, \begin{equation} \begin{aligned} &\phantom{=\;} (D \vec{u})_i = \sum_j D_{ij} \vec{u}_j = \sum_j D_{ij} u(x_j) \\ &= \sum_j D_{ij} \Bigl( u(x_i) + u'(x_i) (x_j - x_i) + \dots + \frac{1}{(p+1)!} u^{(p+1)}(x_i) (x_j - x_i)^{p+1} + \O(h^{p+2}) \Bigr) \\ &= u'(x_i) + u^{(p+1)}(x_i) \underbrace{\sum_j \frac{1}{(p+1)!} D_{ij} (x_j - x_i)^{p+1}}_{=: C^D_i h^p} + \O(h^{p+1}). \end{aligned} \end{equation} Here, $C^D_i h^p = \O(h^p)$, since $D$ scales as $h^{-1}$. \end{proof} This can be used to prove one of the main observations of this article. \begin{theorem} \label{thm:product-rule} If $D$ is a discrete derivative operator of order $p > 2$, there can be no averaging operator $A$ of order $q \in \mathbb{N}$ such that there is a product rule of the form $D (\vec{u} \vec{v}) = (A \vec{u}) (D \vec{v}) + (D \vec{u}) (A \vec{v})$. \end{theorem} \begin{proof} Consider the asymptotic expansions \begin{equation} \label{eq:expansion-Duv} \bigl( D (\vec{u} \vec{v}) \bigr)_i = (u v)'(x_i) + (u v)^{(p+1)}(x_i) C^D_i h^p + \O(h^{p+1}) \end{equation} and \begin{equation} \begin{aligned} (A \vec{u})_i (D \vec{v})_i &= \left( u(x_i) + C^A_i(u) h^q + \O(h^{q+1}) \right) \left( v'(x_i) + v^{(p+1)}(x_i) C^D_i h^p + \O(h^{p+1}) \right), \\ (D \vec{u})_i (A \vec{v})_i &= \left( u'(x_i) + u^{(p+1)}(x_i) C^D_i h^p + \O(h^{p+1}) \right) \left( v(x_i) + C^A_i(v) h^q + \O(h^{q+1}) \right), \end{aligned} \end{equation} where $C^A_i(u)$ is the leading order coefficient for $A$ and may depend on the function $u$ and its derivatives. There are three different cases: $q < p$, $q = p$, and $q > p$. If $q < p$, the product rule cannot hold, because the terms \begin{equation} \left( u'(x_i) C^A_i(v) + v'(x_i) C^A_i(u) \right) h^q \end{equation} involving $h^q$ are not matched by terms in \eqref{eq:expansion-Duv}. Basically, $D (\vec{u} \vec{v})$ is a $p$-th order approximation while $(A \vec{u}) (D \vec{v}) + (D \vec{u}) (A \vec{v})$ is only a $q$-th order approximation. If $q = p$, the product rule can only hold if the terms involving $h^p = h^q$ are equal, i.e. if \begin{multline} \label{eq:proof-1} (u v)^{(p+1)}(x_i) C^D_i \\ = \left( u(x_i) v^{(p+1)}(x_i) + u^{(p+1)}(x_i) v(x_i) \right) C^D_i + u'(x_i) C^A_i(v) + v'(x_i) C^A_i(u). \end{multline} Since \begin{equation} (u v)^{(p+1)}(x_i) = \sum_{k=0}^{p+1} \binom{p+1}{k} u^{(k)}(x_i) v^{(p+1-k)}(x_i), \end{equation} the terms with $k=0$ and $k=p+1$ match the braces on the right hand side of \eqref{eq:proof-1}, but the remaining terms can only match if $p \leq 2$, since the remaining sum cannot be factored as on the right hand side. 
Finally, if $q > p$, the terms involving $h^p$ do not match, because \begin{equation} (u v)^{(p+1)}(x_i) \neq u(x_i) v^{(p+1)}(x_i) + u^{(p+1)}(x_i) v(x_i) \end{equation} for $p \geq 1$ in general. \end{proof} \begin{remark} Using polynomial collocation methods on Lobatto Legendre or Gauss Legendre nodes in $[-1,1]$, a discrete product rule holds for $p = 1$, i.e. for two nodes, since they are of the same form as the classical finite difference derivative operator. However, for $p=2$, there can be no product rule. Indeed, for Lobatto nodes $\set{-1,0,1}$ and $u(x) = (1+x)^2 = v(x)$, the discrete derivatives of $u$ and $v$ at $-1$ are zero (since the derivative operator is exact for these quadratic polynomials), but the discrete derivative of $uv$ at $-1$ is \begin{equation} (D \vec{u} \vec{v})_{-1} = - \frac{3}{2} \vec{u}_{-1} \vec{v}_{-1} + 2 \vec{u}_0 \vec{v}_0 - \frac{1}{2} \vec{u}_1 \vec{v}_1 = 0 + 2 \cdot 1^2 - \frac{1}{2} \cdot 4^2 = - 6 \neq 0. \end{equation} A similar argument holds for Gauss Legendre nodes. \end{remark} Since the product rule is a special case of the chain rule with vector valued functions $u$ (cf. Remark~\ref{rem:CR-as-PR}), a general chain rule is also excluded for discrete derivative operators of higher order of accuracy. However, this argument does not forbid a chain rule for scalar valued functions. Nevertheless, this case is also excluded by the second main observation of this article. \begin{theorem} \label{thm:chain-rule} If $D$ is a discrete derivative operator of order $p > 2$, there can be no general averaging operator $A_{f'}$ of order $q \in \mathbb{N}$ such that there is a chain rule of the form $D\bigl( f(\vec{u}) \bigr) = (A_{f'} \vec{u}) \cdot (D \vec{u})$. \end{theorem} \begin{proof} By the argument above, it suffices to consider scalar valued functions. In this case, \begin{equation} \bigl(D f(\vec{u}) \bigr)_i = f'(\vec{u}_i) u'(x_i) + \bigl( f(u) \bigr)^{(p+1)}(x_i) C^D_i h^p + \O(h^{p+1}) \end{equation} and \begin{multline} (A_{f'} \vec{u})_i (D \vec{u})_i = \left( f'(\vec{u}_i) + C^A_i\bigl(f'(u)\bigr) h^q + \O(h^{q+1}) \right) \\ \left( u'(x_i) + u^{(p+1)}(x_i) C^D_i h^p + \O(h^{p+1}) \right). \end{multline} Again, there are three different cases: $q < p$, $q = p$, and $q > p$. If $q < p$, the chain rule cannot hold, because $D\bigl( f(\vec{u}) \bigr)$ is a $p$-th order approximation while $(A_{f'} \vec{u}) \cdot (D \vec{u})$ is only a $q$-th order approximation. If $q = p$, $\bigl( f(u) \bigr)^{(p+1)}(x_i)$ can be expressed using the formula of Faà di Bruno \cite[Lemma~II.2.8, simplified for the scalar case]{hairer2008solving}, \begin{equation} \bigl( f(u) \bigr)^{(p+1)}(x_i) = \sum_{\tau \in LS_{p+2}} f^{(m)}(\vec{u}_i) \, u^{(\delta_1)}(x_i) \dots u^{(\delta_m)}(x_i). \end{equation} Here, $LS_{p+2}$ is the set of special labelled trees of order $p+2$ which have no ramifications except at the root, $m$ is the number of branches of $\tau$ leaving the root, and $\delta_1, \dots, \delta_m$ are the numbers of nodes in each of these branches, see \cite[Lemma~II.2.8]{hairer2008solving}. Thus, it is clear that $u'(x_i)$ cannot be factored out of the remaining terms after subtracting $f'(\vec{u}_i) u^{(p+1)}(x_i)$ if $p > 2$. Finally, if $q > p$, the terms involving $h^p$ do not match, because \begin{equation} \bigl( f(u) \bigr)^{(p+1)}(x_i) \neq f'(\vec{u}_i) u^{(p+1)}(x_i) \end{equation} for $p \geq 1$ in general. 
\end{proof} \begin{remark} A product rule for classical difference operators with error term of the form \begin{equation} (D \vec{u} \vec{v})_i = \vec{u}_i (D \vec{v})_i + (\partial_x u)_i (A \vec{v})_i + e_i \end{equation} has been used in \cite[Lemma~3.1 and Lemma~3.2]{mishra2010stability}. If $u$ is smooth, $(\partial_x u)_i$ is the derivative at $x_i$ and $\norm{e} \leq C h \norm{\vec{v}}$ for some constant $C > 0$. The averaging operator $A$ is linear and of the same order of accuracy as the derivative operator $D$. \end{remark} \begin{remark} The investigation of discrete product and chain rules is also somewhat loosely related to the entropy stability and conservation theory initiated by Tadmor \cite{tadmor1987numerical,tadmor2003entropy}. Indeed, instead of a chain rule of the form $\partial_x f(u) = \widehat{f'(u)} \partial_x u$ \eqref{eq:BV-chain-rule}, a discrete version of $U'(u) \cdot \widetilde{\partial_x f(u)} = \widetilde{\partial_x F(u)}$ is used, where $U$ is the entropy fulfilling $U'(u) \cdot f'(u) = F'(u)$. Such approximations can be found for arbitrary order, cf. \cite{lefloch2002fully,sjogreen2010skew,fisher2013high,ranocha2017comparison,chen2017entropy}. Basically, schemes of lower order can be extrapolated if regular grids are used, cf. \cite[Section~3.2]{ranocha2018thesis}. Nevertheless, they can be used also on certain irregular grids. \end{remark} \section{Entropy Stability of Discrete Second Derivatives} \label{sec:laplace} In order to regularise a hyperbolic conservation law $\partial_t u + \partial_x f(u) = 0$, where $u$ are the conserved variables and $f(u)$ is the flux, a parabolic term can be added to the right-hand side, resulting in \begin{equation} \partial_t u(t,x) + \partial_x f\bigl( u(t,x) \bigr) = \partial_x \bigl( \epsilon(x) \partial_x u(t,x) \bigr), \end{equation} where $\epsilon \geq 0$ controls the amount of viscosity. An entropy is a convex function $U$ satisfying $U'(u) \cdot f'(u) = F'(u)$, where $F$ is the corresponding entropy flux. Thus, smooth solutions of the conservation law fulfil the additional conservation law \begin{equation} \partial_t U(u) = U'(u) \cdot \partial_t u = - U'(u) \cdot f'(u) \cdot \partial_x u = - \partial_x F(u) \end{equation} and an entropy inequality $\partial_t U + \partial_x F \leq 0$ is required for weak solutions, cf. \cite[Chapter~IV]{dafermos2010hyperbolic}. The viscosity term on the right-hand side induces a global entropy inequality for sufficiently smooth solutions. Indeed, in a periodic domain $\Omega$, \begin{equation} \label{eq:smooth-entropy-dissipation-CR} \int_\Omega U' \cdot \partial_x (\epsilon \partial_x u) \dif x = - \int_\Omega \epsilon (\partial_x U') \cdot \partial_x u \dif x = - \int_\Omega \epsilon (\partial_x u) \cdot U'' \cdot \partial_x u \dif x \leq 0, \end{equation} since $U$ is convex and $\epsilon \geq 0$. In a non-periodic domain $\Omega$, if $\epsilon$ vanishes on $\partial\Omega$, the same result holds. Otherwise, there will be additional boundary terms. The computation in \eqref{eq:smooth-entropy-dissipation-CR} relies on the chain rule. Thus, it might be conjectured that second order difference approximations of the Laplace operator (with possibly varying coefficients) are also dissipative for every entropy $U$ and that higher order difference approximations to the second derivative are not necessarily dissipative for every entropy $U$. 
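Both parts of this conjecture can be probed numerically. The following minimal sketch (Python; assuming a uniform periodic grid, the convex entropy $U(u) = u^4$ for the second order stencil, and, for the classical fourth order stencil, adversarial data of the kind used later in the proof of Theorem~\ref{thm:laplace-higher-order}) illustrates the expected signs:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 64; h = 1.0 / N
u = rng.normal(size=N)           # arbitrary periodic grid data
Up = 4 * u**3                    # U'(u) for the convex entropy U(u) = u^4

# second order stencil: dissipative for every convex entropy
D2u = (np.roll(u, -1) - 2*u + np.roll(u, 1)) / h**2
print(h**2 * np.sum(Up * D2u))   # <= 0 in every realisation

# fourth order stencil with adversarial data: entropy production
v = np.zeros(N); v[10] = 1e-3; v[12] = -1.0
Vp = (v > 0.5e-3).astype(float)  # U'(v) for U(v) = max(0, v - 5e-4)
D2v = (-np.roll(v, -2) + 16*np.roll(v, -1) - 30*v
       + 16*np.roll(v, 1) - np.roll(v, 2)) / (12 * h**2)
print(h**2 * np.sum(Vp * D2v))   # > 0
\end{verbatim}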
\subsection{Second Order Derivative Operators} In a periodic domain, the classical second order difference approximation to the Laplace operator is given by \begin{equation} (D_2 \vec{u})_i = \frac{\vec{u}_{i+1} - 2 \vec{u}_i + \vec{u}_{i-1}}{h^2}. \end{equation} Thus, multiplying pointwise by $U'(\vec{u}_i) = U'_i$ and summing up all terms yields, due to the periodicity of the domain, \begin{equation} \begin{aligned} h^2 \sum_i U'_i \cdot (D_2 \vec{u})_i &= \sum_i U'_i \cdot (\vec{u}_{i+1} - \vec{u}_i) - \sum_i U'_i \cdot (\vec{u}_i - \vec{u}_{i-1}) \\ &= -\sum_i (U'_{i+1} - U'_i) \cdot (\vec{u}_{i+1} - \vec{u}_i) \leq 0, \end{aligned} \end{equation} since $U'$ is monotone. If $u$ is scalar valued, $U'$ is monotonically increasing, since $U'' \geq 0$ due to the convexity of $U$. If $u$ is vector valued, the usual generalised definition of monotonicity is used, i.e. $(U'(u) - U'(v)) \cdot (u - v) \geq 0$, cf. \cite[section~II.2, p.~37]{showalter1997monotone}. Indeed, due to the convexity of $U$, \begin{multline} (\vec{u}_{i+1} - \vec{u}_i) \cdot \bigl( U'(\vec{u}_{i+1}) - U'(\vec{u}_i) \bigr) \\ = \int_0^1 (\vec{u}_{i+1} - \vec{u}_i) \cdot U''\bigl( \vec{u}_i + s (\vec{u}_{i+1} - \vec{u}_i) \bigr) \cdot (\vec{u}_{i+1} - \vec{u}_i) \dif s \geq 0. \end{multline} This is exactly the chain rule for classical difference approximations. If a variable coefficient $\epsilon \geq 0$ is considered in a periodic domain, a second order approximation to $\partial_x (\epsilon \partial_x u)$ is given by \begin{equation} \label{eq:SBP-var-coef-2-interior} (D_2^\epsilon \vec{u})_i = \frac{\epsilon_{i} + \epsilon_{i+1}}{2 h^2} \vec{u}_{i+1} - \frac{\epsilon_{i-1} + 2 \epsilon_{i} + \epsilon_{i+1}}{2 h^2} \vec{u}_i + \frac{\epsilon_{i-1} + \epsilon_{i}}{2 h^2} \vec{u}_{i-1}, \end{equation} cf. \cite{mattsson2012summation}. Using again the periodicity and the convexity of $U$, \begin{equation} \begin{aligned} h^2 \sum_i U'_i \cdot (D_2^\epsilon \vec{u})_i &= \sum_i \frac{\epsilon_{i} + \epsilon_{i+1}}{2} U'_i \cdot (\vec{u}_{i+1} - \vec{u}_i) - \sum_i \frac{\epsilon_{i-1} + \epsilon_{i}}{2} U'_i \cdot (\vec{u}_i - \vec{u}_{i-1}) \\ &= -\sum_i \frac{\epsilon_{i} + \epsilon_{i+1}}{2} (U'_{i+1} - U'_i) \cdot (\vec{u}_{i+1} - \vec{u}_i) \leq 0. \end{aligned} \end{equation} Summation-by-parts operators for second derivatives with variable coefficients have been developed in \cite{mattsson2012summation}. The second order discrete derivative in the interior is given by \eqref{eq:SBP-var-coef-2-interior} and equipped with the boundary closures \begin{equation} \begin{aligned} (D_2^\epsilon \vec{u})_0 &= \frac{1}{h^2} \bigl( (2 \epsilon_0 - \epsilon_1) \vec{u}_0 + (-3 \epsilon_0 + \epsilon_1) \vec{u}_1 + \epsilon_0 \vec{u}_2 \bigr), \\ (D_2^\epsilon \vec{u})_N &= \frac{1}{h^2} \bigl( (2 \epsilon_N - \epsilon_{N-1}) \vec{u}_N + (-3 \epsilon_N + \epsilon_{N-1}) \vec{u}_{N-1} + \epsilon_N \vec{u}_{N-2} \bigr). \end{aligned} \end{equation} If the variable coefficient $\epsilon$ vanishes at the boundary, i.e. if $\epsilon_0 = 0 = \epsilon_N$, these boundary closures become \begin{equation} (D_2^\epsilon \vec{u})_0 = \frac{\epsilon_1}{h^2} (\vec{u}_1 - \vec{u}_0), \qquad (D_2^\epsilon \vec{u})_N = -\frac{\epsilon_{N-1}}{h^2} (\vec{u}_N - \vec{u}_{N-1}). 
\end{equation} Since the discrete integral is given as a quadrature with the weights $h H_{ii}$, where $H = \diag{1/2, 1, \dots, 1, 1/2}$ is the (scaled) mass/norm matrix, the discrete equivalent of the integral $\int_\Omega U' \cdot \partial_x(\epsilon \partial_x u)$ is, up to the positive factor $1/h$, \begin{equation} \begin{aligned} &\phantom{=\;} h^2 \sum_{i=0}^N H_{ii} U'_i \cdot (D_2^\epsilon \vec{u})_i \\ &= \frac{1}{2} \epsilon_1 U'_0 \cdot (\vec{u}_1 - \vec{u}_0) - \frac{1}{2} \epsilon_{N-1} U'_N \cdot (\vec{u}_N - \vec{u}_{N-1}) \\&\quad + \sum_{i=1}^{N-1} \frac{\epsilon_{i} + \epsilon_{i+1}}{2} U'_i \cdot (\vec{u}_{i+1} - \vec{u}_{i}) - \sum_{i=1}^{N-1} \frac{\epsilon_{i-1} + \epsilon_{i}}{2} U'_i \cdot (\vec{u}_{i} - \vec{u}_{i-1}) \\ &= \frac{\epsilon_0 + \epsilon_1}{2} U'_0 \cdot (\vec{u}_1 - \vec{u}_0) - \frac{\epsilon_{N-1} + \epsilon_N}{2} U'_N \cdot (\vec{u}_N - \vec{u}_{N-1}) \\&\quad + \sum_{i=1}^{N-1} \frac{\epsilon_{i} + \epsilon_{i+1}}{2} U'_i \cdot (\vec{u}_{i+1} - \vec{u}_{i}) - \sum_{i=0}^{N-2} \frac{\epsilon_{i} + \epsilon_{i+1}}{2} U'_{i+1} \cdot (\vec{u}_{i+1} - \vec{u}_{i}) \\ &= - \sum_{i=0}^{N-1} \frac{\epsilon_{i} + \epsilon_{i+1}}{2} (U'_{i+1} - U'_i) \cdot (\vec{u}_{i+1} - \vec{u}_{i}) \\ &\leq 0. \end{aligned} \end{equation} This proves \begin{theorem} \label{thm:laplace-2nd-order} The discretisations $D_2^\epsilon$ of the second derivative operator $\partial_x (\epsilon \partial_x \cdot)$ with possibly varying coefficients $\epsilon \geq 0$ given above in periodic domains or on bounded domains with $\epsilon_0 = 0 = \epsilon_N$ are entropy dissipative for every convex entropy. \end{theorem} \begin{remark} \label{rem:laplace-2nd-order} The statement of Theorem~\ref{thm:laplace-2nd-order} holds for the second order summation-by-parts operator $D_2^\epsilon$ (and its interior stencil in periodic domains) mentioned above. It is not necessarily true for every second order approximation of $\partial_x (\epsilon \partial_x \cdot)$. Indeed, in a periodic domain, such an approximation is also given by \begin{equation} (\widetilde D_2^\epsilon \vec{u})_i = \epsilon_i \frac{\vec{u}_{i+1} - 2 \vec{u}_{i} + \vec{u}_{i-1}}{h^2} + \frac{\epsilon_{i+1} - \epsilon_{i-1}}{2 h} \frac{\vec{u}_{i+1} - \vec{u}_{i-1}}{2 h}. \end{equation} Choose the grid $x_i = i$, $i \in \set{0,1,2,3}$, with periodic boundary conditions, i.e. $\vec{u}_3 = \vec{u}_0$. Set $\vec{u} = (\vec{u}_0,\vec{u}_1,\vec{u}_2) = (0.6, 0.8, 0.2)$, $\epsilon = (0.4, 0.2, 0.8)$ and use the entropy given by $U(u) = u$. Then, $U'(u) = 1$ and \begin{equation} \sum_{i=0}^2 U'(\vec{u}_i) \cdot (\widetilde D_2^\epsilon \vec{u})_i = -0.17 - 0.20 + 0.79 = 0.42 > 0. \end{equation} While this does not prove that the SBP operator mentioned above is the only second order entropy dissipative approximation, it illustrates the good properties of this operator. \end{remark} \subsection{Higher Order Derivative Operators} Since there is no discrete chain rule for higher order difference approximations to the first derivative, it might be conjectured that discrete higher order second derivatives are in general not entropy dissipative. In order to prove this, it suffices to consider the case of constant coefficients. At the grid point $x_j$, a general (linear) discrete approximation of the second derivative can be written as \begin{equation} \label{eq:2nd-derivative-coefficients} (D_2 \vec{u})_j = \sum_k c_k \vec{u}_{j+k}. \end{equation}
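On a uniform grid, the interior coefficients $c_k$ of the classical fourth order stencil can be obtained directly from the Taylor (moment) conditions; a minimal sketch (Python, $h = 1$), whose output exhibits negative off-center coefficients:
\begin{verbatim}
import numpy as np

# Solve sum_k c_k (k h)^m = 2 delta_{m,2}, m = 0,...,4, for the weights
# of the classical five-point approximation of the second derivative.
h = 1.0
k = np.arange(-2, 3)                             # stencil offsets
A = np.array([(k * h)**m for m in range(5)], dtype=float)
b = np.array([0.0, 0.0, 2.0, 0.0, 0.0])
c = np.linalg.solve(A, b)
print(c * h**2)   # [-1/12, 4/3, -5/2, 4/3, -1/12]: c_{+-2} < 0
\end{verbatim}
The negative off-center coefficients $c_{\pm 2}$ are no accident. The following result will be used.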
\begin{lemma} \label{lem:2nd-derivative-coefficients} If \eqref{eq:2nd-derivative-coefficients} is an approximation of the second derivative with order of accuracy $p > 2$, there is a $k \neq 0$ such that $c_k < 0$. \end{lemma} \begin{proof} Using Taylor expansion, the order conditions for an order of accuracy $p = 3$ are \begin{equation} \begin{aligned} \sum_k c_k &= 0, \qquad & \sum_k c_k (x_{j+k} - x_j) &= 0, \qquad & \sum_k c_k (x_{j+k} - x_j)^2 &= 2, \\ \sum_k c_k (x_{j+k} - x_j)^3 &= 0, \qquad & \sum_k c_k (x_{j+k} - x_j)^4 &= 0. \end{aligned} \end{equation} Due to the last condition, at least one $c_k$ with $k \neq 0$ must be negative. \end{proof} \begin{example} The classical fourth order approximation to the second derivative on a periodic domain is given by \begin{equation} h^2 (D_2 \vec{u})_i = - \frac{1}{12} (\vec{u}_{i+2} + \vec{u}_{i-2}) + \frac{4}{3} (\vec{u}_{i+1} + \vec{u}_{i-1}) - \frac{5}{2} \vec{u}_i. \end{equation} \end{example} Lemma~\ref{lem:2nd-derivative-coefficients} can be used to prove the last main observation of this article. \begin{theorem} \label{thm:laplace-higher-order} If $D_2$ is a discrete derivative operator approximating the second derivative with order of accuracy $p > 2$, it is not dissipative for every entropy. \end{theorem} \begin{proof} Consider the grid point $x_j$. Writing the approximation of the second derivative as in \eqref{eq:2nd-derivative-coefficients}, there is some coefficient $c_k < 0$, $k \neq 0$, due to Lemma~\ref{lem:2nd-derivative-coefficients}. Fix $\vec{u}_{j+k} < 0$, say $\vec{u}_{j+k} = -1$. Set $\vec{u}_j = \epsilon > 0$, where $\epsilon > 0$ will be fixed later. Choose $\vec{u}_i = 0$ for $i \neq j, j+k$. Finally, consider the entropy $U(u) = \max\set{0, u - \epsilon/2}$. Then, $U'(u) = 0$ for $u < \epsilon/2$ and $U'(u) = 1$ for $u > \epsilon/2$. Thus, \begin{equation} \sum_i U'(\vec{u}_i) \cdot (D_2 \vec{u})_i = U'(\vec{u}_j) \cdot (D_2 \vec{u})_j = 1 \cdot \sum_l c_l \vec{u}_{j+l} = c_0 \underbrace{\vec{u}_j}_{= \epsilon} + \underbrace{c_k}_{< 0} \underbrace{\vec{u}_{j+k}}_{< 0} > 0, \end{equation} if $\epsilon > 0$ is chosen small enough. \end{proof} \begin{remark} The entropy $U$ used in the proof of Theorem~\ref{thm:laplace-higher-order} can be made smooth by suitable modifications around $u = \epsilon/2$. \end{remark} \begin{remark} Of course, higher order approximations to second derivatives that are dissipative for a specific entropy can be constructed. Classical difference operators are negative semidefinite, i.e. they are dissipative for the $L^2$ entropy $U(u) = \frac{1}{2} u^2$ with $U'(u) = u$. For a general entropy $U$, entropy dissipative second derivatives can be constructed by using the entropy variables $w := U'(u)$ instead of the conserved variables $u$, cf. \cite{fisher2013high}. \end{remark} \begin{remark} In periodic domains, the classical central finite difference approximations to the first derivative of higher order can be constructed via extrapolation from the second order operator, cf. \cite[Section~3.2]{ranocha2018thesis}. Thus, by enforcing positivity of the corresponding coefficients for the second derivative, entropy dissipative terms can be constructed similarly for higher order first derivative operators, as used in \cite{svard2009shock}. However, these are not higher order approximations of the second derivative. 
\end{remark} \section{Summary and Discussion} \label{sec:summary} In this article, product and chain rules using averaged compositions have been shown to hold for second order approximations to first order derivative operators, similarly to corresponding results for functions of bounded variation (Lemma~\ref{lem:product-rule} and Lemma~\ref{lem:chain-rule}). While such mimetic properties may have nice implications, it is proven that such results cannot hold for higher order approximations, independently of the grid or the exact form of the discrete derivative operator (Theorem~\ref{thm:product-rule} and Theorem~\ref{thm:chain-rule}). This result holds also for spectral collocation and nodal discontinuous Galerkin methods. Furthermore, the entropy dissipation induced by difference operators approximating second derivatives with varying coefficients is studied. While certain second order approximations are dissipative for all entropies (Theorem~\ref{thm:laplace-2nd-order}), such a result is not valid for higher order approximations (Theorem~\ref{thm:laplace-higher-order}). These results (Theorems~\ref{thm:product-rule}, \ref{thm:chain-rule}, \ref{thm:laplace-higher-order}) have been proven for linear difference operators. Indeed, they rely on Lemma~\ref{lem:asymptotic-expansion-D1} and Lemma~\ref{lem:2nd-derivative-coefficients}, which assume linearity. Thus, similar to classical results for (scalar) conservation laws \cite{harten1987nonlinearity}, it might be possible to construct nonlinear operators approximating the first and second derivative such that desirable properties can be obtained. While these results are interesting on their own, there are several connections with other results and open questions. It is well known that higher order schemes can be more efficient for certain problems than lower order ones \cite{kreiss1972comparison}. However, the numerical treatment of discontinuities in solutions to hyperbolic conservation laws has to be well-considered, especially for higher order schemes. Even though a single entropy inequality can be sufficient for genuinely nonlinear scalar conservation laws \cite{panov1994uniqueness,delellis2004minimal,krupa2017single}, general conservation laws pose additional challenges \cite{lefloch2002hyperbolic}. Since certain mimetic properties discussed in this article are limited to second order schemes, suitable detection of discontinuities and corresponding adaptations of the numerical methods may be inevitable. \section*{Acknowledgements} This work was supported by the German Research Foundation (DFG, Deutsche Forschungsgemeinschaft) under Grant SO~363/14-1. The author would like to thank the anonymous reviewers for their helpful comments and valuable suggestions to improve this article.
\section{Introduction} In the breakup of weakly-bound projectiles on heavy targets, a relevant phenomenon under investigation is the Coulomb-nuclear dynamics; considerable efforts have been made to understand the role of Coulomb-nuclear interference and the dynamics of fragment absorption in the breakup process. Established studies on this matter, with the corresponding most relevant works, can be found in Refs.~\cite{2003Suzuki,Thompson100,Chat10}. For other complementary studies done in the past two decades on the Coulomb-nuclear dynamics involving weakly-bound projectiles, of particular interest to our present investigation, we select Refs.~\cite{Thomp20,1999Th,2002Margueron,2003Capel,Tarutina10,Hussein20,2006Canto,2009Lubian,2009Canto}, as well as more recent works (among which we include contributions by some of us) in Refs.~\cite{Kucuk10,2014Capel,Hussein200,2015Lubian,Mukeru10,Mukeru20,2016Manjeet,Pierre100,2017Mukeru, MukeruPRC2020}. Despite the advances achieved in these studies, the question of how both Coulomb and nuclear forces interfere to produce a total breakup remains far from being fully established. Some of the challenges emanate from the fact that, in a Coulomb-dominated reaction, a small contribution of the nuclear breakup does not automatically imply insignificant Coulomb-nuclear interference~\cite{Nakamura10,Noc10,Aum10,Fukuda10,Abu10}. It could be interesting to verify what happens in nuclear-dominated reactions. In view of the long-range nature of Coulomb forces, a low breakup threshold is expected to lead to peripheral collisions, where the Coulomb breakup prevails over the nuclear breakup. In this peripheral region, the Coulomb breakup cross section depends on the projectile structure through the electromagnetic matrix elements of the projectile. Although not a general rule, according to the Coulomb dissociation method~\cite{Bertulani10,Winther10,Baur10,Baur20}, the breakup cross section is simply the product of the reaction parameters and the projectile dipole electric transition probability. As the binding energy decreases, the reaction becomes more peripheral, with the ratio of the Coulomb breakup cross section to its nuclear counterpart being expected to rise significantly, regardless of the target mass. Intuitively, in this case, one would expect that the total breakup cross section becomes comparable to the Coulomb one, owing to both dynamic and static breakup effects. Since a lower ground-state binding implies a longer tail of the associated wave function, the nuclear forces are stretched well beyond the projectile-target radius. Therefore, for a projectile with very weak binding energy, even the nuclear breakup can be assumed to be a peripheral phenomenon, with the Coulomb-nuclear interference becoming stronger in the peripheral region. The dependence of various reaction observables on the projectile ground-state binding energy has been studied recently in Refs.~\cite{Wang10,Rath100,2016Rangel,Lei100,Mukeru15,2020Mukeru}, in which different projectiles with different binding energies have been considered. One of the drawbacks is that the projectiles do not all have the same ground-state structure, mass, and charge. 
Among other ways to circumvent such shortcomings, at least theoretically, one could artificially consider different binding energies for the same projectile (i.e., with nucleon number $A$ and charge $Z$ unchanged), an approach that has been adopted, for instance, in Refs.~\cite{2016Rangel,Lei100,Mukeru15,2020Mukeru}. Even though a given nucleus has a fixed ground-state energy, this is a convenient theoretical approach to unambiguously establish the dependence of the reaction observables on the projectile binding energy. Another important aspect of the breakup dynamics concerns possible effects on other reaction observables, such as on fusion cross sections. While it is widely understood that the complete fusion suppression strongly depends on the projectile breakup threshold (see Ref.~\cite{2020Jha} for recent related studies), strong charge clustering has recently been identified as the main factor responsible for such suppression, in the breakup of $^8$Li on a heavy target \cite{Cook20}. Similar behavior in the breakup of this nucleus has also been reported in Refs.~\cite{Pak20,Gum20}. Unlike several other loosely bound nuclei (such as $^8$B, $^{6,7}$Li, and $^{11}$Be), not much has been reported on the breakup dynamics of the $^8{\rm Li}$ nucleus. In view of the above discussion, we are motivated to study the breakup of the $^8{\rm Li}$ nucleus, within a model in which a valence neutron (n) is loosely bound to the $^7{\rm Li}$ nucleus by a binding energy $\varepsilon_b=2.03$ MeV~\cite{Nut10}, considering the light and heavy targets $^{12}$C and $^{208}$Pb. The present study of the $^8$Li$+^{208}$Pb breakup reaction also extends a recent analysis of this reaction in Ref.~\cite{2020Mukeru}, where a critical angular momentum for complete fusion was also considered. We are particularly interested in analyzing the dependence of the resulting total, Coulomb and nuclear breakup cross sections, as well as the Coulomb-nuclear interference, on the projectile ground-state binding energy, in order to test the validity of the assumptions presented in the previous paragraphs. Within a more detailed investigation, we expect to show that for a much weaker projectile binding energy, the Coulomb breakup becomes dominant regardless of the target mass, and the nuclear breakup becomes relatively peripheral, leading to a peripheral Coulomb-nuclear interference. Since both Coulomb and nuclear breakup cross sections increase with the decrease of the binding energy, a clear separation of their effects is not a simple task. The choice of $^{12}$C and $^{208}$Pb as the targets is motivated by the fact that, in the former case, the reaction should be dominated by the nuclear breakup, whereas it is dominated by the Coulomb breakup in the latter case. In fact, $^{12}$C was also used in Ref.~\cite{Fukuda10} as a reference target when studying the $^{11}$Be Coulomb dissociation on a $^{208}$Pb target. In our approach to obtain the corresponding total, Coulomb and nuclear breakup cross sections, we adopt the Continuum Discretized Coupled Channels (CDCC) formalism~\cite{Aust100}, with the Fresco code~\cite{1988Thompson} being used for the numerical solutions. The next sections are organized as follows: Sect.~\ref{calculation} provides some details on the model approach, with a summary of the CDCC formalism. Sect.~\ref{results} contains the main results for elastic and breakup cross sections, together with our analysis of the Coulomb-nuclear interference and possible absorption contributions. 
Finally, Sect.~\ref{conclusion} presents a summary with our conclusions. \section{Formalism and computational approach} \label{calculation} \subsection{Brief description of the CDCC formalism} As mentioned in the introduction, in our numerical approach we use the CDCC formalism, in which we model the projectile $^8{\rm Li}$ as a $^7{\rm Li}$ core nucleus to which a neutron is loosely bound, with ground-state energy $\varepsilon_b=2.03\,{\rm MeV}$. This state is defined in the core-neutron centre-of-mass (c.m.) frame by the quantum numbers $n=1$, $\ell_0=1$, $\tilde{\j}_0^{\pi}=2^+$, where $n$ stands for the radial state, $\ell_0$ the orbital angular momentum and $\tilde{\j}_0^\pi$ the projectile total angular momentum with parity $\pi$. It is obtained by applying the usual spin-orbit coupling ${\bm j}_0={\bm \ell}_0+{\bf 1/2}$; $\tilde{\bm \j}_0={\bm j}_0+{\bm I}_c$, with the core spin $I_c={3}/{2}$. In addition to the ground state, an excited bound state with energy $\varepsilon_{\rm ex}=0.98\,{\rm MeV}$ (located in the $\tilde{\j}_0^{\pi}=1^+$ state~\cite{Nut10}) was also considered in our coupling scheme. We would like to emphasize that we are not considering possible core excitations in our calculations. In this formalism, we first consider the expansion of the three-body wave function in terms of the projectile internal states. After that, by introducing the three-body expansion into the corresponding Schr\"odinger equation, a set of coupled one-dimensional radial differential equations can be derived for the radial wave-function components $\chi_{\alpha}^{LJ}(R)$, in terms of the projectile-target c.m. coordinate $R$, which reads \begin{eqnarray}\label{coupled} &&\left[-\frac{\hbar^2}{2\mu_{pt}}\bigg(\frac{d^2}{dR^2}-\frac{L(L+1)}{R^2}\bigg)+ U_{\alpha\alpha}^{LLJ}(R)\right] \chi_{\alpha}^{LJ}(R)\nonumber\\ &&+\sum_{\alpha'L' (\alpha'\ne\alpha)} U_{\alpha\alpha'}^{LL'J}(R)\chi_{\alpha'}^{L'J}=(E-\varepsilon_\alpha)\chi_{\alpha}^{LJ}, \end{eqnarray} where $L$ is the orbital angular momentum associated with $R$, $J$ is the total angular momentum, and $\mu_{pt}$ the projectile-target ($pt$) reduced mass. The total energy is given by $E$, with $\varepsilon_{\alpha}$ being the projectile bin energies. The index $\alpha$ appearing in the equation represents the set of quantum numbers describing the projectile states, given by $\alpha\equiv (i,\ell,s,j,I_c,\tilde{\j})$, $i=0,1,2,\ldots,N_b$ ($N_b=$ number of bins). 
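For orientation, the structure of Eq.~(\ref{coupled}) may be illustrated by reducing it to a first-order system and integrating it outward. The following is a toy two-channel sketch in Python (all constants, thresholds and Gaussian couplings below are illustrative assumptions, not the actual $^8$Li couplings):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

h2m = 20.0                      # hbar^2/(2 mu_pt) in MeV fm^2 (toy value)
E   = 10.0                      # total energy (MeV)
eps = np.array([0.0, 2.0])      # channel (bin) energies epsilon_alpha
L   = np.array([0, 2])          # channel orbital angular momenta

def U(R):
    # toy symmetric coupling matrix U_{alpha alpha'}(R), in MeV
    g = np.exp(-(R / 3.0)**2)
    return np.array([[-40.0 * g,  -5.0 * g],
                     [ -5.0 * g, -40.0 * g]])

def rhs(R, y):                  # chi'' from Eq. (coupled)
    chi, dchi = y[:2], y[2:]
    cent = h2m * L * (L + 1) / R**2
    ddchi = (U(R) @ chi + (cent - (E - eps)) * chi) / h2m
    return np.concatenate([dchi, ddchi])

R0 = 1.0                        # start with regular behaviour chi ~ R^(L+1)
y0 = np.concatenate([R0**(L + 1.0), (L + 1.0) * R0**L])
sol = solve_ivp(rhs, (R0, 30.0), y0, rtol=1e-8, atol=1e-10)
print(sol.y[:2, -1])            # channel functions at the matching radius
\end{verbatim}
In an actual CDCC calculation, such regular solutions are matched to the asymptotic form discussed below to extract the S-matrix elements; this is handled internally by Fresco.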
With the projectile-target potential given as a sum of the core-target ($ct$) and neutron-target ($nt$) terms, i.e., $U_{pt}({\bm r}, {\bm R})=U_{ct}({\bm R}_{ct})+U_{nt}({\bm R}_{nt})$, where $ {\bm R}_{ct} \equiv{\bm R}+\frac{1}{8}{\bm r}$ and ${\bm R}_{nt}\equiv{\bm R}-\frac{7}{8}{\bm r}$ (with ${\bm r}$ being the projectile internal coordinate), the potential matrix elements $U_{\alpha\alpha'}^{LL'J}(R)$ in (\ref{coupled}) are given in terms of its Coulomb and nuclear parts, such that {\small \begin{eqnarray}\label{potcomp} U_{\alpha\alpha'}^{LL'J}(R) &=&\langle\mathcal{Y}_{\alpha L}({\bm r},\Omega_R)|V_{ct}^{Coul}({\bm R}_{ct}) |\mathcal{Y}_{\alpha' L'}({\bm r},\Omega_R)\rangle\nonumber\\ &+&\langle\mathcal{Y}_{\alpha L}({\bm r},\Omega_R)|U_{ct}^{nucl}({\bm R}_{ct}) |\mathcal{Y}_{\alpha' L'}({\bm r},\Omega_R)\rangle\\ &+&\langle\mathcal{Y}_{\alpha L}({\bm r},\Omega_R)|U_{nt}^{nucl}({\bm R}_{nt}) |\mathcal{Y}_{\alpha' L'}({\bm r},\Omega_R)\rangle\nonumber, \end{eqnarray} }where $\mathcal{Y}_{\alpha L}({\bm r},\Omega_R)\equiv[\hat\Phi_{\alpha}({\bm r})\otimes {\rm i}^LY_L^{\Lambda}(\Omega_R)]_{JM}$ is the direct product of the angular part of ${\bm R}$ with the projectile channel wave function, $\hat\Phi_{\alpha}({\bm r})$, which contains the square integrable discretized bin wave functions. The nuclear terms are sums of real and imaginary parts: the former are responsible for the nuclear dissociation, whereas the latter account for the nuclear absorption. These nuclear terms are, respectively, given by $U_{ct}^{nucl}({\bm R}_{ct})=V_{ct}^{nucl}({\bm R}_{ct})+{\rm i}W_{ct}^{nucl}({\bm R}_{ct})$ and $U_{nt}^{nucl}({\bm R}_{nt})=V_{nt}^{nucl}({\bm R}_{nt})+{\rm i}W_{nt}^{nucl}({\bm R}_{nt})$, with the Woods-Saxon shape being adopted for both components. The diagonal coupling matrix elements $U_{\alpha\alpha}^{LL J}(R)$ contain the monopole nuclear term in the projectile-target c.m., which we denote by $V_{\beta_0\beta_0}^{LJ}(R)=\langle \Phi_{\beta_0}({\bm r})|U_{ct}^{nucl}+U_{nt}^{nucl}|\Phi_{\beta_0}({\bm r})\rangle$, where $\beta_0$ represents the set of ground-state projectile quantum numbers, $\beta_0\equiv (k_0,\ell_0,s,j_0,I_c,\tilde{\j}_0)$. The imaginary part accounts for the absorption in the projectile-target c.m. motion. The separation of the Coulomb and nuclear interactions to obtain the Coulomb and nuclear breakup cross sections ($\sigma_{Coul}$ and $\sigma_{nucl}$, respectively) remains a challenge in present-day theories, making an accurate description of the Coulomb-nuclear interference a rather tricky task. For that, in this work we resort to an approximate approach, as follows. The nuclear breakup cross sections, defined as $\sigma_{nucl}$, are obtained by including in the coupling matrix elements the nuclear components of the $U_{ct}$ and $U_{nt}$ potentials, plus the diagonal monopole Coulomb potential. On the other hand, the Coulomb breakup cross sections, defined as $\sigma_{Coul}$, are obtained by including in the matrix elements the Coulomb component of the projectile-target potential, i.e., $V_{ct}^{Coul}(R_{ct})$ (as $V_{nt}^{Coul}=0$), plus the monopole nuclear potential. The total breakup cross sections $\sigma_{tot}$ are obtained by including the full $U_{pt}$ potential in the calculations. Since the early works on Coulomb and nuclear breakup studies~\cite{Thomp20,1999Th}, this approach has been widely adopted to study Coulomb and nuclear breakup cross sections, as one can follow from the review~\cite{2015Canto} (and references therein). 
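Schematically, the three calculations differ only in which pieces of the projectile-target potential enter the coupling matrix elements of Eq.~(\ref{potcomp}). A minimal bookkeeping sketch (Python; hypothetical names, not Fresco input):
\begin{verbatim}
def couplings(case, V_C, V_N, V_C_monopole, V_N_monopole):
    """Potential pieces retained in the coupling matrix for each run."""
    if case == "total":      # full potential: total breakup sigma_tot
        return V_C + V_N
    if case == "nuclear":    # sigma_nucl: nuclear couplings + monopole Coulomb
        return V_N + V_C_monopole
    if case == "coulomb":    # sigma_Coul: Coulomb couplings + monopole nuclear
        return V_C + V_N_monopole
    raise ValueError(case)
\end{verbatim}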
In Ref.~\cite{Pierre100}, where different methods are considered in order to decompose the total breakup into its Coulomb and nuclear components, this approach is also referred to as the {\it weak-coupling approximation}. Two methods emerged from their discussion, which they refer to as {\it method 1} and {\it method 2}. The weak-coupling approximation is very close to {\it method 1} for the nuclear breakup, and close to {\it method 2} for the Coulomb breakup. While this approximate procedure will not completely eliminate the ambiguities surrounding the separation of the total breakup cross section into its Coulomb and nuclear components (as also outlined in Ref.~\cite{Pierre100}), we believe that it is particularly justified in the present work, since by using the $^{12}$C target, the breakup is naturally dominated by nuclear dissociation, whereas by using the $^{208}$Pb target the breakup is dominated by Coulomb dissociation. Once the matrix elements (\ref{potcomp}) are computed, the coupled equations (\ref{coupled}) are solved with the usual asymptotic condition, which, for $k_{\alpha}\equiv\sqrt{{(2\mu_{pt}/\hbar^2)(E-\varepsilon_{\alpha})}}$, is given by \begin{eqnarray}\label{BC} \chi_{\alpha}^{LJ}(R)\stackrel{R\to\infty}\longrightarrow \frac{\rm i}{2}\left[H_{\alpha}^-(k_{\alpha}R)\delta_{\alpha\alpha'}- H_{\alpha}^+(k_{\alpha}R)S_{\alpha\alpha'}^{LL'J}\right], \end{eqnarray} where $H_{\alpha}^{\mp}(k_{\alpha}R)$ are the usual incoming ($-$) and outgoing ($+$) Coulomb Hankel functions~\cite{Abramo100}, with $S_{\alpha\alpha'}^{LL'J}$ being the scattering S-matrix elements. Due to the short-range nature of nuclear forces, the matrix elements corresponding to the nuclear interaction in Eq.~(\ref{potcomp}) will vanish at large distances, ${R\gg R_n}$, where \begin{equation} \label{potnucl} R_n\equiv r_0(A_p^{1/3}+A_t^{1/3}) +\delta_R(\varepsilon_b)\equiv R_0 + \delta_R(\varepsilon_b) \end{equation} determines the range of the nuclear forces ($r_0$ being the nucleon size, with $r_0A_{p}^{1/3}$ and $r_0A_{t}^{1/3}$ the projectile and target sizes, respectively). The function $\delta_R(\varepsilon_b)$ is introduced to take into account the well-known effect which occurs in weakly-bound systems (low breakup thresholds), as in halo nuclei, in which the nuclear forces can be stretched beyond $R_0=r_0(A_p^{1/3}+A_t^{1/3})$. The various breakup cross sections are obtained by using the relevant S-matrix, as outlined for example in Ref.~\cite{Thompson100}. At large distances ($R\to\infty$), Eq.~(\ref{potcomp}) contains only the Coulomb interaction, which can be expanded as~\cite{Hussein50} {\small \begin{eqnarray}\label{CE} V^{Coul}({\bm r},{\bm R})\stackrel{R\to\infty}\longrightarrow 4\pi Z_te\sum_{\lambda=0}^{\lambda_{\rm max}} \frac{\sqrt{2\lambda+1} }{R^{\lambda+1}}\left[\mathcal{O}_{\lambda}^{\epsilon}({\bm r})\otimes Y_{\lambda}(\Omega_R)\right]^{0}, \end{eqnarray} }where $Z_te$ is the target charge, with $\lambda$ the multipole order truncated at $\lambda_{\rm max}$. $\mathcal{O}_{\lambda}^{\epsilon}({\bm r})$ is the projectile electric operator, given by {\small \begin{eqnarray}\label{EO} \mathcal{O}_{\lambda\mu}^{\epsilon}({\bm r})&=& \left[Z_c e\left(-\frac{A_n}{A_p}\right)^{\lambda}\right]r^{\lambda}Y_{\lambda}^{\mu}(\Omega_r)= Z_{\lambda}r^{\lambda}Y_{\lambda}^{\mu}(\Omega_r), \end{eqnarray} }where $Z_ce$ is the charge of the projectile core, with $Z_\lambda$ being defined as the effective charge. 
The projectile electric transition probability for the transition from the projectile ground state to the continuum states can be obtained through $\mathcal{O}_{\lambda}^{\epsilon}({\bm r})$~\cite{Bertulani50}. For excitation energies $\varepsilon$, the corresponding variation of the electric transition probability $B(E\lambda)$ can be written as {\small \begin{eqnarray}\label{elec} \frac{dB(E\lambda)}{d\varepsilon}&=&\frac{\mu_{cn}}{\hbar^2 k}\sum_{\tilde{\j}}(2{\tilde{\j}}+1) \left|\langle \Phi_{\beta_0}({\bm r})|\mathcal{O}_{\lambda}^{\epsilon}({\bm r})|\Phi_{\beta}({\bm r})\rangle\right|^2, \end{eqnarray} }where $\beta$ refers to the set of quantum numbers of the continuum states, $\beta\equiv (k,\ell,s,j,I_c,\tilde{\j})$, with $k=\sqrt{2\mu_{cn}\varepsilon/\hbar^2}$, $k_0 = \sqrt{2\mu_{cn}\varepsilon_b/\hbar^2}$, and $\mu_{cn}$ the core-neutron reduced mass. By defining $\hat l\equiv \sqrt{2l+1}$ for general angular quantum numbers, from the above we obtain {\small \begin{eqnarray}\label{electric} \frac{dB(E\lambda)}{d\varepsilon}&=& \frac{\mu_{cn}}{\hbar^2 k}\sum_{\tilde{\j}}(2{\tilde{\j}}+1)|\mathcal{F}_{\lambda,{\tilde{\j}}}|^2, \;\; {\rm with}\\ \mathcal{F}_{\lambda,{\tilde{\j}}}&\equiv& \frac{1}{4\pi}Z_{\lambda}\hat\ell_0\hat\ell\hat\lambda^2\hat j_0\hat j (-1)^{\ell_0+\ell+s+j+j_0+I_c+{\tilde{\j}}}\nonumber\\ &\times & \left(\begin{array}{ccccc} \ell & \lambda & \ell_0 \\ 0 & 0& 0 \end{array} \right) \left(\begin{array}{ccccc} j & \lambda & j_0 \\ 0 & 0& 0 \end{array} \right) \left \{\ \begin{array}{cccc} s &\ell_0 & j_0 \\ \lambda & j & \ell \end{array} \right\}\ \nonumber\\ &\times & \left \{\ \begin{array}{cccc} I_c &j_0 & \tilde{\j}_0 \\ \lambda & \tilde{\j} & j \end{array} \right\}\ \int_0^{\infty}dr\, u_{k_0\ell_0}^{\tilde{\j}_0}(r)r^{\lambda}u_{k\ell}^{\tilde{\j}}(r),\nonumber \end{eqnarray} }where $u_{k_0\ell_0}^{\tilde{\j}_0}(r)$ and $u_{k\ell}^{\tilde{\j}}(r)$ are the ground-state and continuum radial wave functions. Eqs.~(\ref{CE})-(\ref{electric}) indicate how the Coulomb breakup is affected by the projectile structure. \subsection{Computational details} The energies and corresponding wave functions which appear in the set of coupled differential equations (\ref{coupled}), for the bound and continuum states of the $^7$Li+n system, are obtained by considering a two-body Woods-Saxon potential as input, whose parameters are the same as in Ref.~\cite{Moro200}. The depth $V_0$ of the central part of the potential was adjusted to reproduce the ground and excited bound-state energies. These parameters are summarized in Table~\ref{table1}. 
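As an illustration of this fitting procedure, the following minimal sketch (Python) integrates the $\ell = 1$ radial equation for the n$-^7$Li system outward and bisects the central depth $V_0$ until the state is bound by $\varepsilon_b = 2.03$\,MeV. The spin-orbit term is neglected here, so the resulting depth differs somewhat from the value quoted in Table~\ref{table1}; the geometry parameters are taken from the table and point masses are assumed.
\begin{verbatim}
import numpy as np

hbarc = 197.327                        # MeV fm
mu    = 931.494 * 1.0 * 7.0 / 8.0      # reduced n-7Li mass (MeV/c^2)
h2m   = hbarc**2 / (2.0 * mu)          # hbar^2/(2 mu) ~ 23.9 MeV fm^2
r0, a0, ell, E = 1.25, 0.52, 1, -2.03  # geometry, p wave, target energy
R0 = r0 * 7.0**(1.0 / 3.0)             # Woods-Saxon radius from the core mass

def u_at_rmax(V0, rmax=25.0, n=4000):
    """Outward integration of u'' = [(V + cent - E)/h2m] u; the value at
    rmax tracks the coefficient of the growing exponential exp(+kappa r),
    which changes sign when E coincides with the p-wave eigenvalue."""
    r = np.linspace(1e-4, rmax, n)
    dr = r[1] - r[0]
    u = np.zeros(n)
    u[0], u[1] = r[0]**(ell + 1), r[1]**(ell + 1)    # regular at the origin
    for i in range(1, n - 1):
        V = -V0 / (1.0 + np.exp((r[i] - R0) / a0))   # central Woods-Saxon
        k2 = (V + h2m * ell * (ell + 1) / r[i]**2 - E) / h2m
        u[i + 1] = 2.0 * u[i] - u[i - 1] + dr * dr * k2 * u[i]
    return u[-1]

Vlo, Vhi = 20.0, 80.0                  # bisection bracket for the depth
for _ in range(60):
    Vmid = 0.5 * (Vlo + Vhi)
    if u_at_rmax(Vlo) * u_at_rmax(Vmid) <= 0.0:
        Vhi = Vmid
    else:
        Vlo = Vmid
print("fitted central depth V0 ~ %.2f MeV" % Vmid)
\end{verbatim}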
\begin{table}[h] \caption{\label{table1} Woods-Saxon potential parameters for the projectile (n$-^7$Li) ground and excited bound-state energies.} \begin{tabular}{lcccccc} \hline\hline $\tilde{\j}^{\pi}$ & $V_0$ & $r_0$ & $a_0$ & $V_{\rm SO}$ & $r_{\rm SO}$ & $a_{\rm SO}$ \\ & (MeV) & (fm) & (fm) & (MeV/fm$^{2}$) & (fm) & (fm) \\ \hline $2^+$ & 37.22 & 1.25 & 0.52 & 4.89 & 1.25 & 0.52 \\ $1^+$ & 46.65 & 1.25 & 0.52 & 4.89 & 1.25 & 0.52 \\ \hline\hline \end{tabular} \end{table} Similarly, the other binding energies considered in this work are obtained by adjusting $V_0$. The same ground-state potential parameters are adopted to calculate the corresponding continuum wave functions. With these potential parameters, we first calculate the variation of the electric transition probability $B(E1)$ with the excitation energy $\varepsilon$, given by Eq.~(\ref{electric}), for transitions from the ground state to continuum $s$- plus $d$-states, for the binding energies $\varepsilon_b=0.01\,{\rm MeV}$, $1.0\,{\rm MeV}$, and $2.03\,{\rm MeV}$. The results are shown in the upper panel of Fig.~\ref{f01}. One notices that $B(E1)$ varies substantially for $\varepsilon_b=0.01\,{\rm MeV}$ as compared with the values obtained for larger $\varepsilon_b$. These results highlight the strong dependence of the Coulomb breakup on the projectile internal structure, particularly in the asymptotic region. In this regard, it is also instructive to verify how the projectile root-mean-square radius $\sqrt{\langle r^2\rangle}$ varies with the projectile ground-state binding energy. For that, the lower panel of Fig.~\ref{f01} shows the corresponding root-mean-square radii, obtained for the projectile ground state $\Phi_{\beta_0}(\bm r)$. As expected, the behavior of the root-mean-square radius reflects the large increase in the spatial extension of the wave function as the binding energy approaches zero. Also, for $\varepsilon_b=2.033$\,MeV, we obtain $\sqrt{\langle r^2\rangle}=2.39$\,fm, in very close agreement with the corresponding values reported in Refs.~\cite{2015Fan} and \cite{1988Tanihata} (respectively, $\sqrt{\langle r^2\rangle}=2.39\pm 0.05$\,fm and $\sqrt{\langle r^2\rangle}=2.37\pm 0.02$\,fm). \begin{figure}[h] \begin{center} \hspace{-1mm} \resizebox{75mm}{!}{\includegraphics{f01-ELECTRIC-RMS.png}} \end{center} \caption{\label{f01} In panel (a), considering three different $^7$Li-n ground-state binding energies $\varepsilon_b$, it is shown how the derivative of the electric transition probability, given by (\ref{electric}), varies with the excitation energy $\varepsilon$, for transitions from ground to continuum $s$- plus $d$-states. In panel (b), the root-mean-square radius is shown as a function of the binding energy $\varepsilon_b$.} \end{figure} In order to evaluate the coupling matrix elements of Eq.~(\ref{coupled}), fragment-target optical potentials are needed.
The $^7$Li$+^{12}$C optical potential parameters were taken from Ref.~\cite{Barioni100}, whereas the $^7$Li$+^{208}$Pb optical potential parameters were obtained from the $^7$Li global potential of Ref.~\cite{Cook300}, with the depth of the real part slightly modified to fit the elastic scattering experimental data. For the $n-$target optical potentials, we adopted the global potential of Ref.~\cite{1969Green}. The CDCC limiting values of the model space parameters, used for the numerical solution of Eq.~(\ref{coupled}), are listed in Table~\ref{table2} for the two targets we are considering, $^{12}$C and $^{208}$Pb, where $\ell_{\rm max}$ is the maximum angular momentum between $^7{\rm Li}$ and the neutron, $\lambda_{\rm max}$ is the maximum order of the potential multipole expansion, $\varepsilon_{\rm max}$ is the maximum bin energy, $r_{\rm max}$ is the maximum matching radius for bin potential integration, $L_{\rm max}$ is the maximum angular momentum of the relative c.m. motion, and $R_{\rm max}$ is the maximum matching radius of the integration for the coupled differential equations, with $\Delta R$ the corresponding $R-$step size. The main reported values are found to give sufficiently converged results for $\varepsilon_b\ge 0.4$\,MeV. However, as the projectile binding energy decreases, for $\varepsilon_b\le 0.08$\,MeV, we found it necessary to increase the maximum values of the projectile matching radius $r_{\rm max}$, the matching radius $R_{\rm max}$, and the relative angular momentum of the c.m. motion $L_{\rm max}$ for each target, in order to guarantee good convergence and precision of the results. These values for smaller $\varepsilon_b$ are shown within parentheses, below the respective values obtained for larger $\varepsilon_b$. The adopted bin widths were $\Delta\varepsilon=0.5\,{\rm MeV}$ for $s$- and $p$-states, $\Delta\varepsilon=1.0\,{\rm MeV}$ for $d$- and $f$-states, and $\Delta\varepsilon=1.5\,{\rm MeV}$ for $g$-states. {\small \begin{table}[h] \caption{\label{table2} Maximum model space parameters, for optimal numerical convergence of Eq.~(\ref{coupled}), for both $^{12}$C and $^{208}$Pb targets. The main reported values are for $\varepsilon_b\ge 0.4$\,MeV, with the corresponding ones within parentheses for $\varepsilon_b\le 0.08$\,MeV.} \begin{tabular}{cccccccc} \hline\hline Target &$\ell_{\rm max}$&$\lambda_{\rm max}$&$\varepsilon_{\rm max}$&$r_{\rm max}$&$L_{\rm max}$& $R_{\rm max}$&$\Delta R$\\ &($\hbar$) &{} &(MeV) &(fm) &($\hbar$) &(fm) &(fm)\\ \hline $^{12}{\rm C}$ &{3} &{3} & {6} & {80} & {300} & {300} & {0.08} \\ & & & & {(100)} & {(1000)} & {(500)}& \\ $^{208}{\rm Pb}$ &{4} &{4} & {10} & {80} & {1000} & {600} & {0.03}\\ & & & & {(100)} & {(10000)} & {(1000)} & \\ \hline\hline \end{tabular} \end{table}} \begin{figure}[t] \begin{center} \hspace{-0.8cm} \resizebox{75mm}{!}{\includegraphics{f02-SCATTERING12C}} \end{center} \caption{\label{f02} $^8$Li+$^{12}$C elastic scattering cross sections for the incident energies $E_{lab}=14$\,MeV and $E_{lab}=23.9$\,MeV. The model results are for different $^8$Li binding energies $\varepsilon_b$ (in MeV units), as indicated inside panel (a) for both panels. The available experimental data, converted to Rutherford $\sigma_R$ units, are from Refs.~\cite{1993Becchetti} [panel (a)] and \cite{Barioni100} [panel (b)], as indicated in the database reported in Ref.~\cite{Jinr20}.
} \end{figure} \begin{figure}[h] \begin{center} \resizebox{75mm}{!}{\includegraphics{f03-SCATTERING208Pb}} \end{center} \caption{\label{f03} $^8$Li+$^{208}$Pb elastic scattering cross sections (in units of the Rutherford $\sigma_R$), obtained for the incident energies $E_{lab}=36$\,MeV [panel (a)] and $E_{lab}=60$\,MeV [panel (b)]. As in Fig.~\ref{f02}, the results are for the same set of $\varepsilon_b$ (in MeV). From Ref.~\cite{2002Kolata}, we included in (a) the closest available experimental data, which are for $E_{lab}=30.6$\,MeV, as indicated in the database reported in Ref.~\cite{Jinr20}.} \end{figure} \section{Results and Discussion} \label{results} \subsection{Elastic scattering cross sections} \label{elastic} We start this section by analyzing the dependence of the elastic scattering cross sections on the projectile ground-state binding energy. These cross sections are displayed in Fig.~\ref{f02} for the $^{12}{\rm C}$ target and in Fig.~\ref{f03} for the $^{208}{\rm Pb}$ target, considering two incident energies in each case. In both cases, we assume different values of $\varepsilon_b$, from the experimental one down to 0.01 MeV. In the case of the $^{12}{\rm C}$ target, which is a nuclear-dominated reaction, from the results shown in Fig.~\ref{f02} one can observe a weak dependence on $\varepsilon_b$ in the range $0.4\,{\rm MeV}\le\varepsilon_b\le 2.03\,{\rm MeV}$, for both incident energies, $E_{lab}=14$\,MeV [panel (a)] and 24\,MeV [panel (b)]. However, the dependence becomes relatively significant for $\varepsilon_b\le 0.08\,{\rm MeV}$ [see panel (a)]. Figure~\ref{f02} also shows that the experimental data are well reproduced by the model for both incident energies. For the Coulomb-dominated reaction with $^{208}{\rm Pb}$, the results given in Fig.~\ref{f03} for $E_{lab}=36$\,MeV [panel (a)] and 60\,MeV [panel (b)] indicate a strong dependence of the elastic scattering cross sections on the binding energy at forward angles (asymptotic region), where the Coulomb breakup is particularly dominant. However, at backward angles (short distance), where the nuclear breakup is expected to provide meaningful effects, the elastic cross sections become almost independent of the binding energy. These results lead to the conclusion that, when the nuclear breakup is dominant or relatively significant, the effect of the binding energy on the elastic scattering cross section is rather small, whereas it is more pronounced when the Coulomb breakup is dominant. Therefore, since a relatively significant effect for the $^8{\rm Li}+{}^{12}{\rm C}$ reaction is observed when $\varepsilon_b\le 0.08\,{\rm MeV}$, it is possible that this reaction is already dominated by the Coulomb breakup in this regime. As the binding energy decreases, the Coulomb breakup becomes dominant over its nuclear counterpart, as anticipated. It also follows that the probability for the projectile to remain unbroken along the outgoing trajectory decreases, diminishing the corresponding elastic scattering cross section. In the next section, we look into this observation in more detail.
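Since the elastic data in Figs.~\ref{f02} and \ref{f03} are normalised to the Rutherford cross section, we include below a minimal Python sketch of the standard point-charge Rutherford formula used for such normalisation; the angle and energy chosen are purely illustrative.
\begin{verbatim}
import numpy as np

# Standard point-charge Rutherford cross section, used to normalise the
# elastic data (sigma/sigma_R). Illustrative example: 8Li+208Pb, 36 MeV.
def rutherford_mb_sr(theta_deg, Z1, Z2, Ecm):
    """dsigma/dOmega [mb/sr] at c.m. angle theta_deg, c.m. energy Ecm [MeV]."""
    e2 = 1.44                                   # e^2/(4 pi eps0) [MeV fm]
    half = np.radians(theta_deg) / 2.0
    dsdw_fm2 = (Z1 * Z2 * e2 / (4.0 * Ecm))**2 / np.sin(half)**4
    return dsdw_fm2 * 10.0                      # 1 fm^2/sr = 10 mb/sr

Ecm = 36.0 * 208.0 / (8.0 + 208.0)              # lab -> c.m. conversion
print(f"sigma_R(30 deg) = {rutherford_mb_sr(30.0, 3, 82, Ecm):.3e} mb/sr")
\end{verbatim}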
\subsection{Breakup cross sections}\label{breakup} \begin{figure}[h]\hspace{-5cm} \begin{center} \hspace{-5mm} \resizebox{85mm}{!}{ \includegraphics{f04-ANGDISTR12C14MeV} } \end{center} \caption{\label{f04} For incident energies $E_{lab}=14\,{\rm MeV}$ (left column) and $E_{lab}=24\,{\rm MeV}$ (right column), with different fixed $\varepsilon_b$ (shown inside the panels), the $^8$Li$+^{12}$C angular distributions for the total, Coulomb and nuclear differential breakup cross sections $d\sigma/d\Omega$ (identified inside the upper panels) are shown as functions of the c.m. angle $\theta$. } \end{figure} \begin{figure}[h] \begin{center} \resizebox{85mm}{!}{ \includegraphics{f05-Converge}} \end{center} \caption{\label{f05} Convergence sample results for the $^8$Li$+^{208}$Pb total (upper frames), Coulomb (middle frames) and nuclear (bottom frames) breakup angular distributions, $d\sigma/d\Omega$, at $E_{lab}=36\,{\rm MeV}$, considering different maximum projectile internal angular momenta $\ell_{\rm max}$ (indicated in the upper-left frame). The left set [(a)-(c)] is for $\varepsilon_b=0.01$\,MeV, with the right set [(d)-(f)] for $\varepsilon_b=2.03$\,MeV. } \end{figure} \begin{figure}[h] \begin{center} \resizebox{80mm}{!}{ \includegraphics{f06-ANGDISTRIBUTION208Pb36MeV} } \end{center} \caption{\label{f06} For incident energies $E_{lab}=36\,{\rm MeV}$ (left column) and $E_{lab}=60\,{\rm MeV}$ (right column), with different fixed $\varepsilon_b$ (shown inside the panels), the $^8{\rm Li}+{}^{208}{\rm Pb}$ angular distributions for the total, Coulomb and nuclear $d\sigma/d\Omega$ (identified in the upper panels) are shown as functions of the c.m. angle $\theta$. } \end{figure} The differential total, Coulomb and nuclear breakup cross sections for the $^{12}{\rm C}$ target are depicted in Fig.~\ref{f04}, for $E_{lab}=14$\,MeV [panels (a)-(e)] and $E_{lab}=24$\,MeV [panels (f)-(j)]. As anticipated, in the case of nuclear-dominated reactions, for both incident energies, $d\sigma_{nucl}/d\Omega \simeq d\sigma_{tot}/d\Omega \gg d\sigma_{Coul}/d\Omega$ as $\varepsilon_b\to 2.03\,{\rm MeV}$, with $d\sigma_{Coul}/d\Omega\to 0$. However, it is interesting to notice that, as $\varepsilon_b$ decreases, the Coulomb breakup increases rapidly, such that for $\varepsilon_b\to 0.01\,{\rm MeV}$, $d\sigma_{nucl}/d\Omega \ll d\sigma_{Coul}/d\Omega \simeq d\sigma_{tot}/d\Omega$ at forward angles, for both incident energies. In light of these results, it follows that, as the binding energy further decreases, the Coulomb breakup becomes more relevant, and comparable with the total breakup, even in such a naturally nuclear-dominated reaction. This can be attributed to the fact that the breakup becomes more peripheral as $\varepsilon_b$ decreases, in a region where only the Coulomb force is available. Hence, the importance of the Coulomb breakup in this case relies mainly on the long-range behavior of the Coulomb forces, and on its direct dependence on the electromagnetic transition matrix elements, in agreement with our assessment in Sect.~\ref{elastic}. Furthermore, these results show that the ``nuclear-dominated reaction'' concept may be relative to the projectile binding energy. As the projectile binding energy varies from 2.03 MeV down to 0.01 MeV, one may wonder how relevant higher-order partial waves ($\ell$) are in the breakup process for such very low binding energy, particularly for heavy targets.
In order to verify the importance of higher-order partial waves in this case, we performed a convergence test of the total, Coulomb and nuclear differential breakup cross sections for the $^{208}$Pb target at $E_{lab}=36$ MeV. The different breakup cross sections are shown in Fig.~\ref{f05}, as functions of the c.m. angle $\theta$, for different maximum projectile internal angular momenta $\ell_{\rm max}$, and only for the $\varepsilon_b=0.01$\,MeV and 2.03\,MeV binding energies. As evidenced by the results in this figure, there is no meaningful difference between $\ell_{\rm max}=4$ and $\ell_{\rm max}=7$, regardless of the binding energy. This implies that reducing the ground-state binding energy does not affect the convergence of the breakup cross sections with respect to the maximum core-neutron orbital angular momentum $\ell_{\rm max}$. Figure~\ref{f06} displays the total, Coulomb and nuclear breakup angular distributions as functions of the c.m. angle $\theta$, for the different binding energies $\varepsilon_b$, for the $^8$Li$+^{208}$Pb reaction. We first observe that, as $\varepsilon_b$ decreases, the peaks of $d\sigma_{tot}/d\Omega$ and $d\sigma_{Coul}/d\Omega$ are shifted to forward angles. In fact, for $\varepsilon_b\le 0.08$ MeV, the peaks are located close to zero degrees. This is a clear manifestation of the peripheral nature of the breakup process as $\varepsilon_b$ decreases. A careful look at this figure also indicates that, as $\varepsilon_b$ decreases, even the peak of $d\sigma_{nucl}/d\Omega$ is shifted to forward angles, which may suggest that even the nuclear breakup process becomes peripheral as $\varepsilon_b\to 0.01$ MeV. The peripherality of the nuclear breakup in this case can be understood by considering the function $\delta_R(\varepsilon_b)$, which appears in Eq.~(\ref{potnucl}). The nuclear breakup dynamics require that $\delta_R(\varepsilon_b)\to 0$ as $\varepsilon_b$ increases, implying that $R_n\to R_0$, due to the short-range nature of nuclear forces. However, as $\varepsilon_b\to 0$, $\delta_R(\varepsilon_b)$ increases and so does $R_n$, leading to a significant nuclear effect in the peripheral region. This is precisely the halo-type stretching of the nuclear forces beyond their usual range that the function $\delta_R(\varepsilon_b)$ was introduced to describe. Quantitatively, since this $^8$Li$+^{208}$Pb reaction is Coulomb-dominated, we observe that at forward angles both $d\sigma_{tot}/d\Omega$ and $d\sigma_{Coul}/d\Omega$ are substantially larger than $d\sigma_{nucl}/d\Omega$ (by about three orders of magnitude as $\varepsilon_b$ decreases). A further inspection of this figure shows that, for $E_{lab}=60$ MeV, the total and Coulomb breakup cross sections are more similar than at $E_{lab}=36$\,MeV, with the difference coming from the competition between the nuclear and Coulomb interactions above the barrier (for a discussion on the role of the diagonal Coulomb interaction, see also Ref.~\cite{MukeruPRC2020}).
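To make the peripherality argument above more quantitative, the sketch below estimates the exponential decay length $1/\kappa$ of the n$-^7$Li bound-state tail, with $\kappa=\sqrt{2\mu_{cn}\varepsilon_b}/\hbar$; this simple estimate motivates the growth of $\delta_R(\varepsilon_b)$ as $\varepsilon_b\to 0$, but it is not a parametrization used in our calculations.
\begin{verbatim}
import numpy as np

# Decay length 1/kappa of the n-7Li bound-state tail,
# kappa = sqrt(2*mu*eps_b)/hbar. Purely illustrative of the halo
# stretching; not a fit to delta_R(eps_b).
hbarc = 197.327                        # [MeV fm]
mn, mc = 939.565, 6535.4               # neutron and 7Li masses [MeV] (approx.)
mu = mn * mc / (mn + mc)               # reduced mass [MeV]

for eb in (2.033, 1.0, 0.4, 0.08, 0.01):    # binding energies [MeV]
    kappa = np.sqrt(2.0 * mu * eb) / hbarc  # [fm^-1]
    print(f"eps_b = {eb:5.3f} MeV -> 1/kappa = {1.0/kappa:6.1f} fm")
\end{verbatim}
The tail length grows from a few fm at the physical binding energy to several tens of fm at $\varepsilon_b=0.01$\,MeV, consistent with the nuclear breakup becoming peripheral in this limit.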
\begin{figure}[!t] \begin{center} \resizebox{70mm}{!}{\includegraphics{f07-BreakUpFusion}} \end{center} \caption{\label{f07} For the $^8$Li$+^{208}$Pb reaction, considering $E_{lab}=36$ MeV (a) and 60 MeV (b), we show the integrated breakup cross section (BU) (when both $W_{ct}^{nucl}$ and $W_{nt}^{nucl}$ contribute), the breakup cross section without absorption (NA) (when $W_{ct}^{nucl}=W_{nt}^{nucl}= 0$), and the total fusion cross section (TF), as functions of the projectile binding energy $\varepsilon_b$.} \end{figure} In order to better elucidate the importance of the nuclear absorption in the breakup process, we present in Fig.~\ref{f07}, for the $^8$Li+$^{208}$Pb reaction, the integrated total breakup cross section as well as the total fusion cross section as functions of $\varepsilon_b$. In this regard, we are extending a previous analysis done for this reaction in Ref.~\cite{2020Mukeru}, in which the total fusion cross sections are shown as functions of the incident energy for different projectile binding energies. The breakup cross section obtained in the presence of nuclear absorption (i.e., $W_{ct}^{nucl}\ne 0$, $W_{nt}^{nucl}\ne 0$) is indicated by the label ``BU''. The breakup cross section obtained in the absence of nuclear absorption (i.e., $W_{ct}^{nucl}=W_{nt}^{nucl}=0$) is indicated by ``NA''. The total fusion cross section is labeled as ``TF''. From this figure, it follows that, as $\varepsilon_b\to 2.03$ MeV, the nuclear absorption largely reduces the breakup cross section, by about one order of magnitude. However, we observe that the nuclear absorption plays a minor role on the breakup cross section for smaller binding energies, being negligible for $\varepsilon_b\to 0.01$ MeV, in particular at $E_{lab}=60$\,MeV. In this case, $\sigma_{\rm NA}\simeq\sigma_{\rm BU}\gg\sigma_{\rm TF}$ (where $\sigma_{\rm BU}$ is the breakup cross section including fragment absorption, and $\sigma_{\rm NA}$ is the breakup cross section without fragment absorption after breakup). The fact that the breakup cross section exceeds the total fusion cross section can be understood as follows: when the breakup occurs at distances where the classical trajectory is far from the target, the projectile fragments have no easy access to the absorption region, which significantly reduces the flux contributing to the fusion cross section. However, as expected, as $\varepsilon_b\to 2.03$ MeV, the breakup process occurs closer to the target, where the probability for the projectile fragments to survive absorption is significantly reduced, and we observe that $\sigma_{\rm BU}\ll \sigma_{\rm NA}<\sigma_{\rm TF}$. A weak dependence of the total fusion cross section on the binding energy, compared to the breakup cross section, is also observed. The energy region well above the Coulomb barrier is particularly dominated by the complete fusion process. As shown in Ref.~\cite{Lei100}, the complete fusion cross section depends only weakly on the projectile $\varepsilon_b$ for the $^7$Li$+^{209}$Bi reaction. We believe that these observations would be valid for any loosely bound projectile, and hence there is nothing unusual in the breakup of the $^8{\rm Li}$ nucleus. Concerning our approach to total fusion (TF) and absorption, let us clarify that, in the standard CDCC method, the optical potentials are chosen to describe the elastic scattering of the fragments by the target.
Their imaginary parts thus account for the absorption into fusion and other direct channels (surface reactions). Nevertheless, as direct reaction cross sections are expected to be small for the fragment-target interactions selected in this work, the TF cross section provides the major contribution to this absorption. \begin{figure}[h] \begin{center} \resizebox{70mm}{!}{\includegraphics{f08-All12C}} \end{center} \caption{\label{f08} For the $^8$Li+$^{12}$C breakup reaction, the angular-integrated total, Coulomb and nuclear breakup cross sections are given as functions of the projectile binding energy $\varepsilon_b$, for the incident energies $E_{lab}=14$ MeV (a) and 24 MeV (b).} \end{figure} \begin{figure}[h] \begin{center}\hspace{-.5cm} \resizebox{80mm}{!}{ \includegraphics{f09-All208Pb}} \end{center} \caption{\label{f09} The angular-integrated total, Coulomb and nuclear breakup cross sections are given for the $^8$Li+$^{208}$Pb breakup reaction as functions of $\varepsilon_b$, with nuclear absorption in panels (a) and (b), and without absorption in panels (c) and (d). As indicated, the incident energies are $E_{lab}=36$ MeV [panels (a) and (c)] and 60 MeV [panels (b) and (d)]. }\end{figure} For a better quantitative assessment of these results, we consider the integrated total ($\sigma_{tot}$), Coulomb ($\sigma_{Coul}$), and nuclear ($\sigma_{nucl}$) breakup cross sections, which are displayed as functions of $\varepsilon_b$ in Fig.~\ref{f08} (for the $^{12}{\rm C}$ target) and in Fig.~\ref{f09} (for the $^{208}{\rm Pb}$ target). The results in both figures confirm the conclusions already drawn from Figs.~\ref{f04} and \ref{f06}. For example, both panels of Fig.~\ref{f08} show that, as $\varepsilon_b\to 0.01$ MeV, $\sigma_{Coul}> \sigma_{nucl}$, with $\sigma_{Coul}\simeq\sigma_{tot}$, whereas $\sigma_{Coul} < \sigma_{nucl}\simeq\sigma_{tot}$ as $\varepsilon_b\to 2.03$ MeV. For the $^{208}{\rm Pb}$ target, the results are shown both in the presence and in the absence of nuclear absorption. When the nuclear absorption is taken into account [panels (a) and (b)], we notice that $\sigma_{Coul}\simeq\sigma_{tot}\gg \sigma_{nucl}$, independently of $\varepsilon_b$. In the absence of the nuclear absorption [panels (c) and (d)], while $\sigma_{Coul}\simeq\sigma_{tot}\gg \sigma_{nucl}$ remains valid for $\varepsilon_b\to 0.01$ MeV, it is noticed that $\sigma_{tot}\simeq \sigma_{nucl}>\sigma_{Coul}$ for $\varepsilon_b\to 2.03$ MeV, which further highlights the importance of the nuclear absorption for large binding energies. The results in this figure further support the fact that strong nuclear absorption in the inner region is the main factor that dictates the importance of the Coulomb breakup cross section over its nuclear counterpart. \begin{table*}[t] \caption{\label{table3} Coulomb, nuclear and interference cross sections for the $^8{\rm Li}+{}^{12}{\rm C}$ and $^8$Li$+^{208}$Pb reactions, considering n$-^7$Li binding energies $\varepsilon_b=0.01$ MeV and 2.03 MeV. For each target, we present our results, in terms of ratios, for two colliding energies. For the $^{208}$Pb target, the results with no nuclear absorption (NA) are shown within parentheses below the ones with absorption.
} \begin{center} \begin{tabular}{cc|ccccc|ccccc} \hline\hline Target &$E_{lab}$ & \multicolumn{5}{c|}{$\varepsilon_b=2.03$\,MeV} & \multicolumn{5}{c}{$\varepsilon_b=0.01$\,MeV}\\ &(MeV)& $\frac{\sigma_{Coul}}{\sigma_{tot}}$ & $\frac{\sigma_{nucl}}{\sigma_{tot}}$ & $\frac{\sigma_{Coul}}{\sigma_{nucl}}$ & $\frac{\sigma_{int}}{\sigma_{nucl}}$ & $\frac{\sigma_{int}}{\sigma_{tot}}$ & $\frac{\sigma_{Coul}}{\sigma_{tot}}$ & $\frac{\sigma_{nucl}}{\sigma_{tot}}$ & $\frac{\sigma_{Coul}}{\sigma_{nucl}}$ & $\frac{\sigma_{int}}{\sigma_{nucl}}$ & $\frac{\sigma_{int}}{\sigma_{tot}}$ \\ \hline\hline $^{12}{\rm C}$ & 14 &0.024&0.824&0.029&0.186&0.153 &0.836&0.268&3.123&-0.387& -0.104\\ & 24 &0.048&0.808&0.059&0.190&0.154 &0.701&0.339&2.069&-0.118& -0.040\\ \hline $^{208}{\rm Pb}$ &36&1.800&0.150&12.00&-6.333&-0.950 &1.033&0.012&90.16&-3.850& -0.044\\ &&(0.344) &(1.481)&(0.232)&(-0.557)&(-0.825)&(1.032)&(0.029)&(33.97)&(-2.061)&(-0.063)\\ &60&1.326&0.087&15.25&-4.750&-0.413&1.015&0.010&104.6&-2.540&-0.025\\ &&(0.198)&(0.783)&(0.253)&(0.025)&(0.019) & (1.000)&(0.059)&(17.03)&(-0.991)&(-0.058) \\ \hline\hline \end{tabular} \end{center} \end{table*} In Table~\ref{table3}, we provide more quantitative results, given as fractions of $\sigma_{tot}$ and $\sigma_{nucl}$, reflecting the competition between the different cross sections, for the two limiting binding energies we are studying, i.e., $\varepsilon_b=0.01$ MeV and $\varepsilon_b=2.03$ MeV. We also include $\sigma_{int}$, defined by \begin{eqnarray}\label{sigint} \sigma_{int}=\sigma_{tot}-(\sigma_{Coul}+\sigma_{nucl}), \end{eqnarray} which we naively regard as the Coulomb-nuclear interference and which will be discussed in the next subsection. From this table, it becomes evident that, when $\varepsilon_b$ decreases, $\sigma_{Coul}$ (approaching $\sigma_{tot}$) becomes substantially larger than $\sigma_{nucl}$. Also, for the light $^{12}$C target, at $E_{lab}=14$\,MeV and 24\,MeV, we note that $\sigma_{Coul}/\sigma_{nucl}$ grows rapidly when varying $\varepsilon_b$ from 2.03 MeV down to 0.01 MeV. As shown, in this binding-energy interval, $\sigma_{Coul}/\sigma_{nucl}$ increases from 0.03 to 3.12 for 14 MeV, and from 0.06 to 2.07 for 24 MeV. This indicates that, as the binding energy decreases, the $^8$Li$+^{12}$C reaction becomes like a ``Coulomb-dominated reaction'', with the emergence of a long-range behavior. Moreover, with the heavy target at $E_{lab}=36$ MeV, in the presence of nuclear absorption, $\sigma_{Coul}/\sigma_{nucl}=12$ for $\varepsilon_b=2.03$ MeV, whereas $\sigma_{Coul}/\sigma_{nucl}\simeq 90$ for $\varepsilon_b=0.01$ MeV. It is noticed in this case that this ratio is substantially affected in the absence of nuclear absorption (NA), becoming $\sigma_{Coul}/\sigma_{nucl}\simeq 0.23$ ($\varepsilon_b=2.03$ MeV) and $\sigma_{Coul}/\sigma_{nucl}\simeq 34$ ($\varepsilon_b=0.01$ MeV). \begin{figure}[!h] \begin{center} \resizebox{70mm}{!}{\includegraphics{f10-Interference-12C-ab}} \end{center} \caption{\label{f10} The $^8$Li+$^{12}$C integrated Coulomb-nuclear interference $\sigma_{int}$ [panel (a)], given by (\ref{sigint}), with the respective ratio $\sigma_{int}/\sigma_{tot}$ [panel (b)], are shown as functions of $\varepsilon_b$, for the colliding energies $E_{\rm lab}=14$ and 24 MeV.
} \end{figure} \begin{figure*}[t] \begin{center} \resizebox{140mm}{!} {\includegraphics{f11-Interf-208Pb}} \end{center} \caption{\label{f11} The $^8{\rm Li}+{}^{208}{\rm Pb}$ integrated Coulomb-nuclear interference $\sigma_{int}$ [panels (a) and (b)], given by (\ref{sigint}), with the respective ratios $\sigma_{int}/\sigma_{tot}$ [panels (c) and (d)], are shown as functions of $\varepsilon_b$, for $E_{\rm lab}=36$ and 60 MeV (upper and lower frames, respectively). $\sigma_{int}^{\rm WA}$ (solid lines) denotes the interference when the breakup is followed by nuclear absorption, with $\sigma_{int}^{\rm NA}$ (dot-dashed lines) denoting the interference with no nuclear absorption.} \end{figure*} \subsection{Coulomb-nuclear interference} \label{interf} It is well-known that the incoherent sum of the Coulomb and nuclear breakup cross sections $(\sigma_{Coul}+\sigma_{nucl})$ is always different from their coherent sum, $\sigma_{tot}$, due to the Coulomb-nuclear interference effect. To assess this effect in the context of very weak ground-state binding energies, we consider $\sigma_{int}$, as defined in Eq.~(\ref{sigint}), to estimate the Coulomb-nuclear interference. For the two limiting binding energies, the quantitative results for $\sigma_{int}$ are already furnished in Table~\ref{table3} as ratios with respect to $\sigma_{tot}$ and $\sigma_{nucl}$. In Figs.~\ref{f10} and \ref{f11} (respectively, for the $^{12}$C and $^{208}$Pb targets), we provide the exact $\sigma_{int}$ behaviors, together with the respective ratios $\sigma_{int}/\sigma_{tot}$, as functions of $\varepsilon_b$. These figures clarify that the differences between $\sigma_{tot}$ and $(\sigma_{Coul}+\sigma_{nucl})$ are quite large in both cases, with the magnitude varying with $E_{lab}$ ($|\sigma_{int}|$ decreasing with increasing $E_{lab}$). The Coulomb-nuclear interference is strongly dependent on $\varepsilon_b$. As one can notice, it appears to increase as $\varepsilon_b$ decreases, and becomes quite small as $\varepsilon_b\to 2.03$\,MeV. For the $^8$Li+$^{208}$Pb reaction, nuclear absorption, which was already shown to reduce the breakup cross section (Fig.~\ref{f07}), is expected to be even more relevant for the Coulomb-nuclear interference. The Coulomb-nuclear interference obtained when the breakup is followed by nuclear absorption (i.e., $W_{ct}^{nucl}\ne 0, W_{nt}^{nucl}\ne 0$) is denoted by $\sigma_{int}^{\rm WA}$ (WA standing for ``with absorption''), while $\sigma_{int}^{\rm NA}$ denotes the Coulomb-nuclear interference obtained when $W_{ct}^{nucl}=W_{nt}^{nucl}=0$. Therefore, in order to assess the relevance of the nuclear absorption on this interference, we compare $\sigma_{int}^{\rm WA}$ with $\sigma_{int}^{\rm NA}$. The results are presented in Fig.~\ref{f11}. In this figure, panels (a) and (b) show the exact $\sigma_{int}$ results, whereas panels (c) and (d) show the respective ratios $\sigma_{int}/\sigma_{tot}$. The upper panels are for $E_{lab}=36$ MeV, and the lower panels for $E_{lab}=60$ MeV. The absorption contribution to $\sigma_{int}$ is verified by the observed difference $|\sigma_{int}^{\rm NA}-\sigma_{int}^{\rm WA}|$, which is clearly visible for both $E_{lab}$ energies as $\varepsilon_b$ varies. Besides the fact that the Coulomb-nuclear interference is shown to be larger in the very small binding energy limit, such larger values may also be influenced by the large magnitudes of the total and Coulomb breakup cross sections, which are shown in Figs.~\ref{f09} and \ref{f10}.
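As a simple consistency check of Eq.~(\ref{sigint}) in ratio form, the short Python sketch below reproduces one Table~\ref{table3} entry; only values already quoted in the table are used.
\begin{verbatim}
# Consistency check of Eq. (sigint) in ratio form:
# sigma_int/sigma_tot = 1 - sigma_Coul/sigma_tot - sigma_nucl/sigma_tot.
def interference_ratio(coul_over_tot, nucl_over_tot):
    return 1.0 - coul_over_tot - nucl_over_tot

# Table 3 entry: 208Pb, E_lab = 36 MeV, eps_b = 2.03 MeV, with absorption
print(interference_ratio(1.800, 0.150))   # -> -0.95, as quoted in Table 3
\end{verbatim}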
However, as verified from Table~\ref{table3}, the ratios $\sigma_{Coul}/\sigma_{tot}$ for the smaller binding energy deviate sufficiently from one (when full absorption is considered, in the $^{208}$Pb case). Consistently, we also notice from the results given in Table~\ref{table3} that $\sigma_{int}/\sigma_{tot}$ is larger for $\varepsilon_b=2.03$\,MeV when the usual cross section values with absorption are considered. Further investigation may be required to clarify the $\varepsilon_b$ dependence of the Coulomb-nuclear interference, in support of the present results, which show an overall significant effect of the nuclear absorption. In the case of such weakly-bound projectiles, a better understanding of the function $\delta_R(\varepsilon_b)$, which appears in Eq.~(\ref{potnucl}), could shed more light on the complexity of the Coulomb-nuclear interference. In such cases, $R_n$ can significantly deviate from $R_0$, since the nuclear breakup dynamics requires that $\delta_R(\varepsilon_b)\to 0$ for larger values of $\varepsilon_b$. Particularly, the main characteristics of this function could show up in a study with charged projectiles, considering that strong Coulomb-nuclear interference has been observed for the reaction of the proton-halo nucleus $^8$B with a $^{58}$Ni target~\cite{2001Tostevin,2002Margueron,Tarutina10,2009Lubian}, in which one has a very weakly-bound projectile with a breakup threshold of 0.137 MeV. \section{Conclusion} \label{conclusion} We have presented a study on the breakup of the weakly-bound $^8$Li (n$-^7$Li) projectile on light and heavy targets, namely $^{12}$C and $^{208}$Pb. Our main objective was to investigate the dependence of the total, Coulomb and nuclear breakup cross sections on the $^8$Li ground-state binding energy $\varepsilon_b$, in order to study the peripherality of the total, Coulomb and nuclear breakup processes, which is associated with the weak binding of the projectile. To this end, apart from the experimentally-known ground-state binding energy of the n$-^7$Li system, $\varepsilon_b=2.03$\,MeV, we artificially considered four other binding energies below the experimental value, down to $\varepsilon_b=0.01$\,MeV. From our analysis, it is shown that the total, Coulomb and nuclear breakup processes become peripheral as $\varepsilon_b\to 0.01$\,MeV, regardless of the target mass. We argue that the peripherality of the nuclear breakup in this case is primarily related to the spatial extension of the corresponding ground-state wave function, which increases with decreasing binding energy. The peripheral region is determined by the range of the nuclear forces $R_0$ and the corresponding extension of the ground-state wave function, which is associated with the function $\delta_R(\varepsilon_b)$ entering $R_n$, defined in Eq.~(\ref{potnucl}). Taking into account that, close to the $n-$core $\varepsilon_b\to 0$ limit, a long-range interaction is expected to emerge between projectile and target (similar to the case of three-body halo-nuclei systems~\cite{2012Frederico}), the size of the associated wave function increases significantly in this limit. Therefore, a detailed investigation of the function $\delta_R(\varepsilon_b)$ (which should go to zero with increasing $\varepsilon_b$) can shed more light on the dynamics of nuclear breakup induced by loosely bound projectiles.
It is also noticed that the variation of $\varepsilon_b$ strongly affects the Coulomb breakup, as compared to the nuclear breakup, such that as $\varepsilon_b\to 0.01$ MeV, the Coulomb breakup becomes dominant even for the $^{12}{\rm C}$ target, which is known to be naturally dominated by nuclear breakup. Therefore, in view of this binding-energy dependence, one may infer that the expression ``naturally dominated by nuclear breakup'' may be relative to the projectile binding energy. It is also verified that the nuclear absorption has an insignificant effect on the total and nuclear breakup cross sections when the binding energy decreases to small values such as $\varepsilon_b\to 0.01$ MeV. In this small binding energy region, we found that the total breakup cross section is larger than the calculated total fusion cross section, while, as expected, the opposite is observed as $\varepsilon_b\to 2.03$\,MeV. \section*{Acknowledgements} We thank T. Frederico, B.V. Carlson and L. F. Canto for useful discussions. B.M. is also grateful to the South American Institute of Fundamental Research (ICTP-SAIFR) for local facilities. For partial support, we also thank Conselho Nacional de Desenvolvimento Cient\'\i fico e Tecnol\'ogico [INCT-FNA Proc.~464898/2014-5 (LT and JL), Proc.~304469/2019-0 (LT) and Proc.~306652/2017-0 (JL)], and Funda\c c\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo [Proj.~2017/05660-0 (LT)].
\section{Abstract} Bottom-up production of semiconductor nanomaterials is often accompanied by inhomogeneity, resulting in a spread in electronic properties which may be influenced by the nanoparticle geometry, crystal quality, stoichiometry or doping. Using photoluminescence spectroscopy of a population of more than 20,000 individual Zn-doped GaAs nanowires, we reveal inhomogeneity in, and correlation between, doping and nanowire diameter by use of a Bayesian statistical approach. Recombination of hot carriers is shown to be responsible for the photoluminescence lineshape; by exploiting lifetime variation across the population, we reveal hot-carrier dynamics at the sub-picosecond timescale showing interband electronic dynamics. High-throughput spectroscopy together with a Bayesian approach are shown to provide unique insight into an inhomogeneous nanomaterial population, and can reveal electronic dynamics otherwise requiring complex pump-probe experiments in highly non-equilibrium conditions. \section{Introduction} Bottom-up growth of nanomaterials is widely established as a highly scaleable methodology, capable of producing electronic materials from nanometre to micrometre lengthscales\cite{Thiruvengadathan2013NanomaterialApproaches}. However, this capability is tempered by the sensitivity of growth to local conditions, and variation in precursor concentration, ratio, or temperature can lead to significant variation in yield or functional performance\cite{Al-Abri2021}. Crucially, both ensemble and single-element characterisation are highly challenging for nanomaterials, creating a bottleneck for their exploitation; the former as it cannot measure inhomogeneity, and the latter because measurement of a local region may not represent the whole sample~\cite{Richman2009}. GaAs nanowires (NWs) have been widely studied for optoelectronic applications;\cite{Dasgupta2014, parkinson2007transient, gallo2011, wang2018gaas, li2020optical} they can be produced with high crystal quality\cite{Joyce2008, dhaka2012high} and provide a facile route to heterostructure design based on decades of experience in planar material growth.\cite{hyun2013nanowire} However, a large surface-to-volume ratio and a relatively high surface recombination velocity of $5.4\times10^{5}$\,cm/s (for a 50\,nm-thick GaAs NW\cite{joyce2013electronic}) give rise to a low quantum efficiency of the emission\cite{chang2012electrical, jiang2012long}. Photoluminescence quantum efficiencies as low as $0.1\%$ have been reported for uncapped GaAs NWs,\cite{parkinson2009carrier} compared to 50$\%$ for high-quality InP NWs.\cite{gao2014selective} For both emissive and photovoltaic applications,\cite{kim2021doping} it is crucial to maximise the radiative efficiency; therefore, several approaches have been developed to improve radiative emission. These include heavy doping, to dramatically increase the radiative recombination rate,\cite{zhang2016recombination,alanis2018optical} and passivating the NW surface with a higher-bandgap capping layer to decrease the non-radiative recombination originating from surface states.\cite{zhou2019epitaxial,couto2012effect} Measuring recombination rates provides a direct means to assess radiative and non-radiative processes\cite{Milot2015Temperature-DependentFilms}. At the shortest timescales relevant for high recombination rates, carrier cooling processes are often apparent\cite{Gierz2013SnapshotsGraphene,Bernardi2015AbGaAs,Bailey1990NumericalGaAs}.
This is particularly crucial in photovoltaics, as the efficiency of these devices depends on the electron-hole separation before the recombination process.\cite{fast2020hot} To obtain a comprehensive understanding of the carrier pathways, electronic properties such as the surface, radiative, and Auger recombination rates must be measured. However, inter-wire variation in geometry and doping results in an inherent spread and a potentially unknown distribution of the recombination values across a population, which could mask systematic trends. Where surface processes dominate, geometric inhomogeneity will dominate the spread; ensemble measurements will be biased towards larger nanowires or towards higher quantum efficiency subsets, and are therefore unreliable for assessing a given growth. A statistically rigorous analysis must be able to reflect the inhomogeneity of the material, informed by numerous measurements. For this particular type of problem, the Bayesian methodology can be used to model a distribution in properties such as doping, diameter or surface recombination velocity, and is highly suited to determine unknown parameters and put limits on their spread.\cite{gabbard2022bayesian} The Bayesian approach is based on representing all model parameters by probability distributions, which are refined from a prior distribution -- representing knowledge of the system before the data is considered -- to a posterior distribution, using the fit of the model to the data.\cite{thrane2019introduction} Bayesian approaches have been demonstrated in a wide range of domains such as astrophysics,\cite{smith2020massively,dobigeon2007joint} sensor analysis,\cite{mirsian2019new} and modelling NW growth given experimental data.\cite{huang2010physics} In this study, we demonstrate that automated high-throughput imaging and spectroscopy of a large population of single NWs with a range of diameters and doping can be used to investigate doping inhomogeneity and carrier cooling and recombination processes at the sub-picosecond timescale, without using pump-probe measurements, which induce highly non-equilibrium populations, and, crucially, with statistical confidence. Our high-throughput approach is less biased towards high-efficiency subsets of the NWs, which might otherwise have led to ensemble measurements underestimating the true range of recombination velocities. We show that for Zn-doped GaAs NWs with a median diameter of $76_{68}^{84}$\,nm, the surface recombination velocity has a median value of $1.17_{0.41}^{2.60}\times10^{6}$\,cm/s (upper and lower limits represent the interquartile range), consistent with the 5-13$\times10^{5}$\,cm/s previously measured for Zn-doped GaAs NWs.\cite{darbandi2016measurement,joyce2013electronic,Joyce_2017} Hole densities are inhomogeneous across the population, with an asymmetric distribution of $9.67_{5.08}^{17.76}\times10^{19}$\,cm$^{-3}$, which is not significantly correlated with NW diameter; this demonstrates that other factors, such as temperature or precursor availability during the growth process, more strongly determine doping inhomogeneity. Significantly, by linking effective carrier temperature to effective recombination lifetime, we observe that high doping plays a critical role in hot band-edge emission, and therefore emission temperature can be an accurate clock for understanding carrier dynamics on the sub-picosecond timescale.
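To put these median values in context, the following minimal Python sketch evaluates the surface recombination rate $4S/d$ (which enters Eqn.~\ref{eqn:pl} below) for the quoted medians; treating it as the only decay channel is a simplifying assumption made for this estimate.
\begin{verbatim}
# Order-of-magnitude estimate of the surface-limited lifetime, using the
# median values quoted above and assuming 4S/d is the only decay channel.
S = 1.17e6            # surface recombination velocity [cm/s] (median)
d = 76e-7             # NW diameter [cm] (median, 76 nm)
rate = 4.0 * S / d    # surface recombination rate [1/s]
print(f"4S/d = {rate:.2e} /s  ->  lifetime ~ {1e12/rate:.1f} ps")
\end{verbatim}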
\section{Discussion and Results} Zn-doped GaAs NWs were grown using the Aerotaxy method\cite{heurlin2012continuous, Sivakumar2020Aerotaxy:Nanostructures} and were deposited on silicon with native oxide, following a recipe described previously.\cite{yang2015zn} The NWs were heavily doped with atomic Zn with an ensemble average density of 2.32(14)$\times10^{21}$\,cm$^{-3}$, as measured using X-ray Photoelectron Spectroscopy (XPS) (details in the SI). A set of 20,000 NWs were initially located and investigated using automated micro-photoluminescence ($\mu$-PL) spectroscopy.\cite{church2022holistic} A continuous-wave HeNe laser of 632.8\,nm wavelength with circular polarisation (to avoid polarisation-dependent absorption effects\cite{Titova2006}) and a power density at the sample of 6.4\,kW cm$^{-2}$ was used (equivalent to around 30\,photons/picosecond/NW). Photoluminescence (PL) spectra and dark-field optical images were collected for each NW; the approximate length of each NW was extracted from the images. Scanning Electron Microscopy (SEM) was performed on the same sample to obtain a more accurate distribution of NW length and diameter from a subset of more than 50 wires, with exemplary images shown in Figure \ref{fig:LengthWidth}. SEM images of a region with a high density of NWs and of a typical single NW are illustrated in Figure \ref{fig:LengthWidth}a and b, respectively. It is observed that a subset of the objects studied may be clumps of multiple wires, as shown circled in Figure \ref{fig:LengthWidth}a, due to the high density of NW production by the Aerotaxy method.\cite{barrigon2017gaas} Analysis of the dark-field imagery and PL characteristics was used to exclude the majority of such objects, by filtering out photoluminescence fits outside the expected range of energy, temperature or intensity, as detailed in the SI. This approach eliminated clumps of wires (with high intensity) as well as surface contamination (with emission indistinguishable from background). Following this filtering process, around 46$\%$ of the initially identified objects were retained for study. This process may result in the removal of very weakly emitting single NWs; filtering produces a statistical bias which places a lower limit on the quantum efficiency of the NWs that we can study. We address this through the incorporation of weighted evidence in our Bayesian model, discussed below. Figure \ref{fig:LengthWidth}c and d depict the length and diameter distributions from SEM. For comparison, the length distribution measured using optical microscopy for the set of 11487~NWs used is shown in Figure \ref{fig:LengthWidth}c, which is consistent with the SEM results. The NW diameter cannot be reliably determined using optical microscopy, as it is far below the optical resolution limit. \begin{figure} \includegraphics[width=0.5\textwidth]{Figures/SEM.png} \caption{(a) SEM image of an ensemble and (b) a single typical Zn-doped GaAs NW on a silicon substrate. (c) Length distribution from SEM and filtered-NW length distribution from optical imaging. The vertical line indicates the median length from SEM (2.3\,$\mu$m). (d) Diameter distribution of the NWs obtained from the SEM, with the vertical line indicating the median diameter of the NWs from SEM (72\,nm).
Solid lines in c and d are kernel density estimates of the continuous probability distribution for these data-sets.} \label{fig:LengthWidth} \end{figure} Previous photoluminescence studies with thicker NWs (d$>$200\,nm) showed a single emission peak at the band-edge,\cite{alanis2018optical} while studies of highly doped thin NWs (d$<$100\,nm) tend to show a higher-energy peak\cite{yang2015zn} which has been associated with recombination from the conduction band to the split-off band, with a transition energy around 0.33\,eV above that of the band-edge.\cite{benz1977auger} A series of randomly selected PL spectra for individual NWs is shown in Figure \ref{fig:SpectraFit}a. These spectra show an emission peak below 1.405\,eV, attributed to red-shifted band-edge emission from GaAs, as well as a weaker second high-energy peak at around 1.73\,eV, which we attribute to recombination from the conduction band to the split-off band. \begin{figure} \includegraphics[width=0.45\textwidth]{Figures/Fit_to_data.png} \includegraphics[width=0.45\textwidth]{Figures/Histogram_of_energies.png} \caption{(a) A series of PL spectra of Zn-doped GaAs NWs with fits as described in the text. The spectra are selected randomly and ordered by their redshift in band-edge emission. Best-fit parameters are given in the title, and the relative contributions of band-edge emission (red dashed line) and split-off emission (green dashed lines) are shown. (b) Normalised distribution of the band-edge and split-off energies for 11487~NWs. The dotted vertical lines represent the undoped band-edge (1.405\,eV) and the median split-off band (1.73\,eV) energies of GaAs at room temperature, and the horizontal line represents the separation between emission bands.} \label{fig:SpectraFit} \end{figure} Each PL spectrum was fit using two models: one for the band-edge emission (BE) only, and one containing both band-edge and split-off peaks (BE+SO).\cite{alanis2017large} For the BE+SO model, the PL is fit with a linear sum of the two emissions, \begin{align} \label{eqn:model} I(E) &= \sum_{i=BE,SO}I_{i}(E) \\ &= \sum_{i=BE,SO}\left[\beta_{i} (B(E,E_{g,i},T) \otimes G(E,\sigma_{i}))\right] \end{align} where each contribution is a convolution of $B$, the product of a three-dimensional density of states for a transition with energy $E_g$ and an occupation described by a Maxwell-Boltzmann distribution function with effective carrier temperature $T$, \begin{equation} \label{dos} B(E,E_g,T) = \sqrt{(E-E_g)} e^{(-(E-E_g)/k_B T)}, \end{equation} with a simple Gaussian distribution $G(E,\sigma)$, \begin{equation} \label{gaussian} G(E,\sigma) = \beta e^{(-E^2/2\sigma^2)}, \end{equation} with width $\sigma$ representing all experimental sources of spectral broadening. $\beta$ is a scaling factor corresponding to the intensity of each component in the spectrum. We justify using a Maxwellian temperature for emission, as we expect scattering and thermalization to occur on fast (sub-100\,fs) timescales in GaAs\cite{Taylor1985UltrafastCompounds, Bailey1990NumericalGaAs}. The quality of fit was assessed for both models based on the reduced $\chi$-squared, and the most appropriate fit was selected for each spectrum for further analysis; around 1.5\% of spectra were better fit with band-edge emission only, with 98.5\% requiring two-peak fits. The data and code used for this analysis are provided in the SI and online. Figure \ref{fig:SpectraFit}b shows the distribution in modelled emission energy ($E_g$) for the band-edge and the split-off emission.
The band-edge energy shows a redshift with respect to intrinsic GaAs at room temperature (1.405\,eV, indicated by a vertical line), with a median value of 1.34\,eV. This redshift is attributed to the band-structure shift arising from heavy Zn doping.\cite{haggren2014effects,Alanis2019} The separation between the intrinsic band-edge energy and the median split-off band energy ($0.326_{0.312}^{0.341}$\,eV) is consistent with the reported split-off band separation for GaAs, within experimental uncertainty.\cite{benz1977auger,Zschauer1969AugerGaAs} The modelling above provides a number of parameters for each spectral fit, five of which provide insight into the recombination process: the band-edge emission energy $E_{g,BE}$, the effective carrier temperatures for band-edge $T_{BE}$ and split-off emission $T_{SO}$, the band-edge emission amplitude $\beta_{BE}$, and the split-off emission amplitude $\beta_{SO}$. The emission energy $E_{g,BE}$ has been shown to be strongly related to the hole density for p-type material, and the doping level can be calculated based on the energy shift such that\cite{Borghs1989,alanis2018optical} \begin{equation} \label{eqn:energy} E = E_{0}-Kp^{1/3}, \end{equation} where $E_{0}$ is the energy bandgap of intrinsic GaAs at room temperature, $p$ is the hole density, and $K$ is a constant determined for Zn-doped GaAs when using a 632.8\,nm excitation source, which we have previously measured to be 1.158$\times10^{-8}$\,eV\,cm using the same experimental system.\cite{Alanis2019} For very short emission lifetimes, we expect that carriers photogenerated with excess energy may not have cooled to the lattice temperature\cite{Aitchison1998EnhancedGaAs, Bernardi2015AbGaAs}. By approximating carrier cooling as a Newtonian process with a single effective cooling time constant $\tau_0$, the carrier lifetime $\tau$ can be obtained from the carrier temperature $T$ for each NW, \begin{equation} \label{eqn:temperature} T = T_0 e^{(-\tau/\tau_0)} + T_L, \end{equation} where $T_0$ is the initial temperature of electrons after photoexcitation, with an upper limit given by the excess energy following the photon absorption process, and $T_L$ is the lattice temperature. $\tau_0$ is the timescale of the dominant cooling process, and has been reported as approximately 0.2\,ps for GaAs,\cite{yang2016observation} related to the longitudinal optical phonon scattering rate.\cite{Scholz1998HolephononArsenide, Kash1989Carrier-carrierLuminescence} Using the emission temperature measured from PL as a proxy for the carrier recombination lifetime is valid under certain conditions: that the recombination process takes place on the sub-picosecond timescale between thermalization and cooling, that low-density excitation is used to avoid carrier-carrier effects, and that a single effective cooling process dominates. In the modelling presented, we use the split-off band emission temperature to calculate the lifetime. In this case, the electron excess energy following photoexcitation is insufficient to populate the L or X valley, simplifying the interpretation as discussed later. Finally, the total band-edge emission, given by the integral of the spectrum, can be used to understand the quantum efficiency of each NW.
The internal quantum efficiency ($IQE$) is related to the $PL$ intensity by a constant experimental scaling factor $\alpha$ and a photon absorption rate $A(d)$, which varies with NW width $d$ to account for both absorption cross-section and absorption depth, \begin{equation} \label{eqn:pl} PL = \alpha A(d) {\rm{IQE}} = \alpha A(d) \left(\frac{Bp}{Bp+Cp^2+4S/d} \right), \end{equation} where $Bp$, $Cp^2$, and $4S/d$ are the radiative, Auger, and surface recombination rates, respectively. The material parameters are: $B$, the radiative recombination coefficient, estimated between 10$^{-9}$ and 10$^{-10}$\,cm$^3$/s;\cite{zhang2022all,strauss1993auger} $C$, the Auger coefficient, estimated between 10$^{-26}$ and 10$^{-31}$\,cm$^6$/s;\cite{capizzi1984electron,mclean1986picosecond} and $S$, the surface recombination velocity, estimated to be between 5-13$\times10^{5}$\,cm/s for diameters between 100-300\,nm.\cite{darbandi2016measurement, joyce2013electronic, Joyce_2017} The experimental scaling factor $\alpha$ is related to conditions such as the laser power and spot size, microscope collection efficiency, and spectrometer quantum efficiency. More details on the scaling of $IQE$ to $PL$ are provided in the SI. We propose a model for recombination that maps the doping $p$ and diameter $d$ for each NW to a unique triplet of observables, namely emission energy $E$ (through Eqn.~\ref{eqn:energy}), emission temperature $T$ (Eqn.~\ref{eqn:temperature}) and emission intensity PL (Eqn.~\ref{eqn:pl}). This mapping relies on a number of material, sample, and experiment-specific model parameters, as listed in Table \ref{tab:priors}. These model parameters form a prior vector $\Psi$ which is used as an input within a Bayesian framework; they can be refined towards a posterior distribution by fitting the model to the data.\cite{Trotta2008BayesCosmology, Bolstad2016} For every parameter, we define a prior -- a nominal probability distribution representing likely values -- using physical knowledge and literature values, as summarised in Table \ref{tab:priors} (more details on the prior notation are found in the SI). We sample from possible values of the parameters and make use of Eqns.~\ref{eqn:energy}, \ref{eqn:temperature} and \ref{eqn:pl} to produce modelled three-dimensional distributions in $E'$, $T'$, and PL$'$. We performed Markov-Chain Monte-Carlo (MCMC) modelling with the Python \textit{emcee} package\cite{Foreman-Mackey2012Emcee:Hammer} to update the priors, maximising the probability of the model parameters given the experimental data, $P(\Psi | \rm{Data})$, as expressed in Bayes' formula \begin{equation} P(\Psi | \rm{Data}) = \frac{P_{\rm{likelihood}}(\rm{Data} | \Psi) P_{\rm{prior}}(\Psi)}{P_{\rm{evidence}}(\rm{Data})}. \end{equation} \begin{table*} \caption{Parameter description and prior distribution. Here $\mathcal{N}$, $G$, and $U$ denote normal, generalised normal, and uniform distributions (details in the SI).
The mean is denoted by $\mu$, the standard deviation by $\sigma$, and the shape parameter by $\beta$.\label{tab:priors}} \resizebox{\textwidth}{!}{\begin{tabular}{|l|l|l|l|l|} \hline \textbf{Symbol} & \textbf{Description} & \textbf{Origin} & \textbf{Prior} & \textbf{Source} \\ \hline \hline E$_{0}$ & Band-edge energy of intrinsic GaAs [eV] & Material & $\mathcal{N}$($\mu=1.405, \sigma=5\times10^{-3}$) & Ref[\citenum{kusch2014type}]\\ \hline K & Constant linking doping and redshift [eV.cm] & Material & $\mathcal{N}$($\mu=1.158\times10^{-8}, \sigma=0.1\times10^{-8}$) and positive & Ref [\citenum{alanis2018optical}]\\ \hline log(B) & Bimolecular radiative constant [cm$^3$/s] & Material & $G$($\mu=-10, \sigma=1, \beta=8$) and $U$(-12,-8) & Ref [\citenum{nelson1978minority}]\\ \hline log(C) & Auger constant [cm$^6$/s] & Material & $G$($\mu=-29, \sigma=1.5, \beta=8$) and $U$(-32,-26) & Ref [\citenum{ahrenkiel2001auger}]\\ \hline log($\tau_{0}$) & Carrier cooling time constant [s] & Material & $G$($\mu=-12.7, \sigma=0.5, \beta=8$) and $U$(-13.3,-11.5) & Ref[\citenum{yang2016observation}]\\ \hline log(p) & Hole density in log$_{10}$-scale [cm$^{-3}$] & Sample & $G$($\mu=20.4, \sigma=3.5, \beta=8$) & Ref[\citenum{Johansson2020CalculationNanowires}] \\ \hline d & NW diameter [nm] & Sample & $\mathcal{N}$($\mu=77, \sigma=12.5$) and $U$(35,100) & SEM\\ \hline log(S) & Surface recombination velocity in log$_{10}$ [cm/s] & Sample & $G$($\mu=6, \sigma=1.5, \beta=8$) and positive & Ref [\citenum{joyce2013electronic}]\\ \hline T$_{0}$ & Initial temperature after excitation [K] & Experimental & $G$($\mu=2000, \sigma=500, \beta=8$) & Derived\\ \hline log($\alpha$) & Scaling factor related to experimental conditions on log$_{10}$ scale & Experimental & $G$($\mu=-15.8, \sigma=2, \beta=8$) & Calculated\\ \hline \end{tabular}} \end{table*} As previously noted, by removing data-sets where the signal intensity is too small to be fit, we potentially introduce a bias into our model. While this eliminates known issues associated with clumping and sample contamination, it will also exclude wires with PL below our observable limit, which may be associated with particular regions of doping-diameter space. We can incorporate this into our analysis by assigning a reduced evidential weight ($P_{\rm{evidence}}(\rm{Data}) < 1$) to the regions which are removed during filtering; this approach means that we use the experimental data only to constrain the model within a parameter space where evidence exists. A piecewise evidence function is used: it is unity, representing unbiased measurement, above the 3rd percentile of the experimental distribution, and reduces linearly to 0.25 at the 0.5th percentile, to capture under-sampling at low emission intensities. While this choice of function is arbitrary, it is found that the modelled results are not highly sensitive to the cutoff values used. Figure \ref{fig:Distributions}a compares one-dimensional projections of the ($E',T', PL'$) distributions obtained from our optimised MCMC modelling, showing the experimental results, the full output of our model, and a modified output removing model points below the 1st percentile to more closely match our observed data. The full model distributions tend to reproduce the observed distributions, with a slight excess of blueshifted (low-doped) and weak-intensity NWs when compared with the experimental results, which is attributed to the bias induced by the filtering on low emission intensity and split-off temperature.
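For concreteness, the following simplified Python sketch illustrates the structure of such an MCMC inference for a single NW: it maps $(\log_{10}p, d)$ to $(E, T)$ through Eqns.~\ref{eqn:energy} and \ref{eqn:temperature}, assuming the effective lifetime is the inverse of the total recombination rate entering Eqn.~\ref{eqn:pl}, with all other parameters fixed at representative values quoted in the text. The observables and uncertainties are hypothetical; the full analysis additionally includes the PL intensity, the priors of Table~\ref{tab:priors}, and the evidence weighting.
\begin{verbatim}
import numpy as np
import emcee

# Hypothetical single-NW observables and assumed uncertainties
E_obs, T_obs = 1.34, 400.0          # band-edge energy [eV], SO temperature [K]
sig_E, sig_T = 0.01, 50.0

# Representative fixed parameters (values quoted in the text)
E0, K = 1.405, 1.158e-8             # [eV], [eV cm]
B, C, S = 1e-10, 1e-29, 2.6e6       # [cm^3/s], [cm^6/s], [cm/s]
T0, TL, tau0 = 1984.0, 300.0, 193e-15  # [K], [K], [s]

def forward(theta):
    """Map (log10 p, d [nm]) to (E, T); assumes tau = 1/(Bp + Cp^2 + 4S/d)."""
    logp, d = theta
    p, d_cm = 10.0**logp, d * 1e-7
    E = E0 - K * p**(1.0/3.0)                     # Eqn (4)
    tau = 1.0 / (B*p + C*p**2 + 4.0*S/d_cm)
    T = T0 * np.exp(-tau/tau0) + TL               # Eqn (5)
    return E, T

def log_prob(theta):
    logp, d = theta
    if not (18.0 < logp < 22.0 and 35.0 < d < 100.0):  # flat priors
        return -np.inf
    E, T = forward(theta)
    return -0.5 * (((E - E_obs)/sig_E)**2 + ((T - T_obs)/sig_T)**2)

nwalkers, ndim = 32, 2
p0 = np.array([20.0, 76.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
samples = sampler.get_chain(discard=500, flat=True)
print("posterior medians (log10 p, d):", np.median(samples, axis=0))
\end{verbatim}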
\begin{figure} \includegraphics[width=0.38\textwidth]{Figures/Bayesian_sampling.png} \includegraphics[width=0.6\textwidth]{Figures/Bayesian_results.png} \caption{(a) Normalised projection of experimentally observed $E_{BE}$, $T_{SO}$ and $PL$ and modified model-predicted $E'$, $T'$, and $PL'$ distributions. (b) Prior distribution (blue lines) and posterior histogram (red) for (i) hole density on log$_{10}$ scale and (ii) NW diameter $d$ with SEM results (blue histogram). The vertical line in (i) shows the Zn density calculated from XPS. (iii) Scatter plot for posterior $p$ and $d$ and a 2D kernel density estimation indicating the weak correlation evident from the data. (iv) A two-dimensional histogram of model-predicted $IQE$ as a function of $p$, with the black line indicating the modelled $IQE$ with median posterior values and the dashed lines depicting the $IQE$ interquartile range.} \label{fig:Distributions} \end{figure} We determine the distribution in doping and NW diameter from the posterior distributions, as shown in Figure \ref{fig:Distributions}b(i) and (ii). The posterior distributions determined from sampling versus diameter and doping are illustrated in the SI. From the predicted data, the probability distribution function for hole density is asymmetric, with values of $9.67_{5.08}^{17.76}\times10^{19}$\,cm$^{-3}$. The value for Zn-doping was obtained from XPS as 1.35(14)$\times10^{21}$\,cm$^{-3}$; while this is significantly higher, it is expected that the Zn would not be fully activated at such high concentrations, and effects such as clustering or interstitial doping may occur. The diameter distribution has values of $76_{68}^{84}$\,nm, which closely follows the prior distribution determined from SEM results. The MCMC modelling approach provides likely values for the full vector $\Psi$, which allows us to probe whether the data supports correlations between parameters. Notably, doping and diameter demonstrate weak correlation (two-sided Pearson correlation of $\rho=-0.003$), as illustrated in Figure \ref{fig:Distributions}b(iii) as a scatter plot and 2D-Kernel Density Estimate (KDE). This suggests that the dopant incorporation during growth is not primarily dependent on the NW diameter, and other factors are likely to be the dominant source of variation in doping across the NW population. We propose that this is due to the nature of the Aerotaxy growth mechanism, where the random growth trajectory of NWs may affect the precursor density or local temperature during growth.\cite{kim2021doping} Using our posterior estimates for diameter $d$ and $\alpha$, we are able to produce a model of IQE as a function of doping, as shown in Figure \ref{fig:Distributions}b(iv). It is noted that the model is produced from the median a-posteriori values and Equation \ref{eqn:pl} and not as a direct fit to the data. While there appears to be an offset between the data and model, we note that the experimental results undersample the low-IQE and low-diameter parameter space; indeed, there is no requirement for our model to reproduce the data, and the agreement within uncertainty limits is supportive of our approach. We can see two regimes around a switch point $p'=2.3\times10^{20}$\,cm$^{-3}$: low doping ($p<p'$), where non-radiative recombination $4S/d$ dominates and IQE increases linearly with doping due to the $Bp$ term in Eqn.~\ref{eqn:pl}, and high doping ($p>p'$), where Auger recombination $Cp^2$ dominates.
This is around $4\times$ larger than literature values for this transition determined at 77\,K\cite{Zschauer1969AugerGaAs}; the very large non-radiative rate in narrow GaAs NWs will shift the switching point $p'$ to higher values. The surface recombination velocity $S$ is determined as $1.17_{0.41}^{2.60}\times10^{6}$\,cm/s, which spans the values previously estimated for Zn-doped GaAs NWs with diameters between 50 and 300\,nm.\cite{darbandi2016measurement, joyce2013electronic, Joyce_2017} This high $S$ underlines the necessity of controlling non-radiative recombination to optimise the emission intensity\cite{jiang2012long,jiang2013enhanced}. We highlight that a significant variation in estimated IQE is observed as a function of $p$. This is in part due to the large spread in NW diameter, and hence the relative variation in non-radiative recombination; this can be further explored via the effective emission temperature. Our samples demonstrate hot-carrier emission with $T >$ 300\,K, significantly higher than observed in previous studies on thick ($d>$300\,nm) Zn-doped GaAs.\cite{alanis2018optical} Previous time-resolved studies have revealed carrier lifetimes of 1.5\,ps for unpassivated 50\,nm NWs\cite{parkinson2007transient} and 5-7\,ps for 300\,nm Zn-doped NWs\cite{parkinson2007transient, burgess2016doping, Alanis2019}, sufficient time for full thermalization of carriers. However, our present samples have significantly larger radiative and Auger decay channels when compared with undoped 50\,nm NWs and larger non-radiative surface-related recombination when compared with doped 300\,nm wires; we may naively expect at least a six-fold reduction in lifetime due to the reduction in diameter, to below 1\,ps, giving rise to the hot-carrier emission observed\cite{Wittenbecher2021, Bailey1990NumericalGaAs}. An additional complication can arise when interpreting cooling dynamics in degenerately doped semiconductors. Due to the spread in doping, there is a spread in the Fermi energy level shift, and at very high doping ($>1.5\times10^{20}$\,cm$^{-3}$) GaAs becomes degenerately doped (details in the SI). Photo-excited electrons can then recombine with a population of holes with energy below the valence band-edge, giving rise to a hotter effective emission ($>$1000\,K) than expected from a purely electron-dominated cooling process. Therefore, the effect of the Fermi temperature spread must be removed from the band-edge temperature to study electron cooling only. By subtracting the non-thermal contribution to the emission temperature that arises from degenerate doping, the effective electron temperatures from the band-edge emission ($T_{BE}$) and split-off emission ($T_{SO}$) become comparable, as shown in the SI; the gradient linking them is $0.91_{0.90}^{0.92}$, indicating that band-edge and split-off recombination, as well as the cooling processes, are likely to take place on the same timescale. Therefore, the split-off temperature can be exploited as a more accurate ``clock'' for the carrier dynamics. The emission lifetime is linked to the effective electron temperature from the split-off emission using the median posterior values $T_0=1984_{1741}^{2240}$\,K and $\tau_0=193_{112}^{347}$\,fs for each of the 11487~NW spectra, via \begin{equation}\label{eqn:tau-T} \tau(T) = -\tau_0\ln\left(\frac{T-T_L}{T_0}\right), \end{equation} where $T_L$ is the lattice temperature. Figure \ref{fig:TR}a provides a schematic of the proposed dynamic model.
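Eqn.~\ref{eqn:tau-T} is the inversion of an assumed exponential cooling law $T(t) = T_L + T_0\,e^{-t/\tau_0}$; a minimal numerical sketch, taking $T_L = 300$\,K and the median posterior values quoted above, reads:
\begin{verbatim}
import numpy as np

T_L = 300.0      # lattice temperature [K] (assumed ambient)
T_0 = 1984.0     # median posterior initial carrier temperature [K]
tau_0 = 193e-15  # median posterior characteristic cooling time [s]

def lifetime_from_temperature(T):
    # Map an effective emission temperature T [K] to an emission
    # time [s] via tau = -tau_0 * ln((T - T_L) / T_0).
    return -tau_0 * np.log((np.asarray(T, dtype=float) - T_L) / T_0)

# Example: a split-off temperature of 1000 K maps to ~0.2 ps.
print(lifetime_from_temperature(1000.0))
\end{verbatim}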
Following 1.96\,eV (633\,nm) photo-excitation, three hole populations will be formed at early times: 40\% in the heavy hole band, 40\% in the light hole band, and 20\% in the split-off band.\cite{Becker1988FemtosecondGaAs} Elsaesser and colleagues showed that at short times following the photo-excitation, split-off emission may be expected to dominate, as excitation from the split-off band can only create carriers in the $\Gamma$ valley, while excitation from the heavy- or light-hole bands has sufficient excess energy to populate the indirect $L$ or $X$ valleys, which reduces their recombination rate.\cite{elsaesser1991initial} The photo-excited electrons in the conduction band have two pathways to recombine: with holes in the band-edge ($\gamma_{BE}$), or with holes in the split-off band ($\gamma_{SO}$). To compare these two emission processes, we calculate the ratio of split-off emission to the total emission as a function of the recombination lifetime calculated from the temperature of each emission using Equation \ref{eqn:tau-T}, as shown in Figure \ref{fig:TR}b. This ratio is modelled with an exponential decay, revealing a short effective carrier lifetime (1.12$\pm$0.02\,ps), as expected for thin, heavily doped GaAs NWs,\cite{parkinson2007transient,burgess2016doping,Alanis2019} with around 20\% of the total emission arising from the split-off band, which falls with time over a picosecond. The rate of reduction in relative split-off emission may be driven and affected by different sub-picosecond processes such as: faster split-off recombination when compared to band-edge recombination, owing to stronger coupling as given by Fermi's Golden rule;\cite{ouguzman1995theoretical} hole scattering from the split-off band to the valence band on timescales from 0.35\,ps to 0.5\,ps, reducing the split-off band occupation;\cite{Bailey1990NumericalGaAs, Scholz1998HolephononArsenide} or inter-valley scattering from $L$, $X$ to $\Gamma$ on a timescale between 0.7\,ps and 2\,ps, increasing band-edge recombination.\cite{saeta1992intervalley} To give more insight into the full carrier dynamics, additional measurements on thinner nanowires or at higher doping densities are essential to probe carrier dynamics below 400\,fs. Figure \ref{fig:TR}c shows the ratio of emission processes as a function of calculated doping; here, the average split-off emission is relatively constant at 12\% for non-degenerate doping below $1.5\times10^{20}$\,cm$^{-3}$~(indicated by the red vertical line). However, in the case of degenerate doping, the band-edge emission grows due to an increased likelihood of non-geminate recombination to the large light-hole and heavy-hole population, reducing the effective split-off band signal. \begin{figure} \includegraphics[width=0.95\textwidth]{Figures/Dynamics.png} \caption{(a) A schematic of the carrier dynamics showing pathways of absorption from the band-edge $A_{BE}$ and split-off $A_{SO}$ bands and emission to the band-edge $\gamma_{BE}$ and split-off $\gamma_{SO}$ bands. The location of the labelled Fermi energy depends on doping. (b) Ratio of split-off to the total emission as a function of carrier lifetime, showing a short lifetime of 1.12$\pm$0.02\,ps~from the exponential fit. The red dotted line is an exponential fit to the data.
(c) Ratio of split-off to the total emission as a function of logarithmic doping; the vertical line indicates the onset of degenerate doping, where $\gamma_{BE}$ dominates, and a linear guide to the eye is given in red.} \label{fig:TR} \end{figure} \section{Conclusion} We present a large-scale optoelectronic study of a highly inhomogeneous NW population, performed at the single-wire level for Aerotaxy-grown Zn-doped GaAs NWs. Despite the large spread in doping and diameter obtained from the posterior distribution, our results demonstrate that inter-wire doping inhomogeneity in Aerotaxy-doped GaAs NWs is only weakly linked to the NW diameter, suggesting that other factors cause the doping inhomogeneity. The internal quantum efficiency of the NWs was determined from the surface, radiative, and Auger recombination processes, showing a maximum efficiency of $2\%$ at a doping of $2.3\times10^{20}$\,cm$^{-3}$ when modelled using median a-posteriori parameter values. We show that carrier lifetimes are related to the split-off temperature, where a single process dominates, revealing complex carrier dynamics at timescales of less than 1\,ps. Our data-driven methodology provides a statistically rigorous evaluation of material properties in the presence of inhomogeneity, as well as a novel approach to studying ultrafast dynamics. High-throughput spectroscopy combined with a Bayesian analytical framework is highly promising for studying bottom-up grown nanomaterials, identifying the origin of inhomogeneity, and providing statistical confidence across a range of material, sample-specific and experimental parameters that are otherwise challenging to access. \section{Author Contributions} \textbf{Ruqaiya Al-Abri}: Conceptualization, Methodology, Formal Analysis, Investigation (Optics and SEM), Writing - Original Draft, \textbf{Nawal Al Amairi}: Investigation (Absorption), \textbf{Stephen Church}: Methodology, Formal Analysis, Writing - Review and Editing, \textbf{Conor Byrne}: Investigation (XPS), Formal Analysis, \textbf{Sudhakar Sivakumar}: Resources, \textbf{Alex Walton}: Writing - Review and Editing, Supervision, Funding Acquisition, \textbf{Martin Magnusson}: Writing - Review and Editing, Supervision, Funding Acquisition, \textbf{Patrick Parkinson}: Conceptualization, Methodology, Formal Analysis, Writing - Review and Editing, Supervision, Project Administration, Funding Acquisition \section{Acknowledgements} PP acknowledges funding from UKRI under the Future Leaders Fellowship program (MR/T021519/1). RAA and NAA are in receipt of studentship funding from the Omani Government. Alex Walton acknowledges funding from the EPSRC (UK) (EP/S004335/1). MHM acknowledges support from NanoLund, the Swedish Research Council and from the Knut and Alice Wallenberg foundation; the NW growth was done at Lund Nano Lab within the MyFab cleanroom infrastructure. \section{Competing Interests} The Authors declare no Competing Financial or Non-Financial Interests. \section{Data Availability} All study data is available at [DOI to be determined on publication]. All analysis code is available at [DOI to be determined on publication].
\section{Supporting Information} Supporting information is available, providing X-ray photoelectron spectroscopy (XPS) measurements, filtering of nanowires based on fitting parameters of the PL model, information on scaling the internal quantum efficiency to the photoluminescence intensity using the $\alpha$ factor, NW absorption from COMSOL, details of the Markov-Chain Monte-Carlo model showing the prior distribution functions and posterior probability distributions, a summary of the Fermi energy shift dependence on doping, and a comparison of the effective carrier temperatures from the band-edge and split-off emission. \onecolumn \section{Supporting Information} \subsection{X-Ray Photoelectron Spectroscopy (XPS) Measurements} XPS measurements were carried out on the Zn-doped GaAs NWs to determine the atomic concentration of the Zn dopant. The NWs were characterised as-deposited on a silicon substrate with a native oxide. XPS measurements were performed with a SPECS XPS instrument, equipped with a SPECS Focus 500 monochromated Al K$\alpha$ X-ray source with a photon energy of 1486.6\,eV and an argon-ion sputtering source. Emitted photoelectrons were collected using a 150\,mm hemispherical energy analyser (SPECS Phoibos 150). Detailed scans were recorded for the Zn2p and Ga2p core levels at a pass energy of 30\,eV. The areas of the peaks were corrected for the known relative sensitivity factors to calculate Zn:Ga concentration ratios. The XPS spectrum is shown in Figure \ref{fig:XPScan}a. The sample shows photoemission peaks arising from Ga2p, As3s, and Zn2p, which are used to determine the material stoichiometry by dividing the peak areas by their respective relative sensitivity factors for Al K$\alpha$ X-rays. In addition, there are silicon and oxygen peaks associated with the silicon oxide substrate and carbon due to the ambient exposure. Figures \ref{fig:XPScan}b and c show magnified regions at high energy attributed to Ga2p and Zn2p photoemission. Table \ref{tab:sample_table} summarises the main findings, showing the nominal Zn flow, the ensemble average of the Zn level obtained from XPS, and the effective hole density derived from calculations based on the energy shift reported in the main text. The dopant level calculated using optical methods is noted to be around 20$\times$ lower than that determined from XPS. This may be due to an ensemble weighting effect, or more likely due to incomplete activation of Zn dopants at the high levels present, as discussed in the manuscript. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figures/XPS_data.png} \caption{(Top) XPS survey spectra for Zn-doped GaAs on a silicon oxide substrate.
(Bottom) High-resolution XPS scans of the (left) gallium Ga2p and (right) zinc Zn2p regions for both nominally undoped and highly zinc-doped nanowires.} \label{fig:XPScan} \end{figure} \begin{table*}[t] \resizebox{\textwidth}{!}{\begin{tabular}{|l|l|l|l|l|} \hline Sample & Zn Flow & XPS (Ga:Zn) & XPS Zn density (cm$^{-3}$) & Optical hole density (cm$^{-3}$) \\ \hline Undoped-GaAs & $\sim0$\,\% & 1:0.012 & 0.26(11)$\times10^{21}$ & - \\ Zn-GaAs & 1.5\,$\%$ & 1:0.060 & 1.32(11)$\times10^{21}$ & $9.67_{5.08}^{17.76}\times10^{19}$\,cm$^{-3}$ \\ \hline \end{tabular}} \caption{Summary of the reference and Zn-doped GaAs NWs studied, showing the Zn flow, the dopant level $p$ from XPS and the hole density from optical $\mu$-PL spectroscopy.\label{tab:sample_table}} \end{table*} \subsection{Post-location Filtering of Nanowires} There is variation in photoluminescence emission due to inhomogeneity between individual wires; however, SEM imagery (main text) indicates that we also anticipate emission from clumps of multiple NWs and the presence of non-emissive dust or dirt. Following the initial NW location using dark-field optical microscopy, we use a filtering process to remove spectra likely to be contaminants. The photoluminescence PL-model output is used to create a threshold above which wires are defined as clumps, which are removed before further analysis. In addition, wires with emission temperature or peak energy outside a reliable range, or representing physically unrealistic wires, were removed. Table \ref{tab:filtering} shows the thresholds against which the parameters were filtered and the number of wires removed. This results in the removal of around 54$\%$ of the wires, leaving 11487~wires. \begin{table*}[t] \resizebox{\textwidth}{!}{\begin{tabular}{|l|l|l|l|} \hline Parameter & Condition & Justification & Wires removed \\ \hline BE emission amplitude & $<6\times10^{-16}$ & Removing clumps & 48 \\ SO emission amplitude & $<1.19\times10^{-16}$ & Removing clumps & 37 \\ BE emission temperature & $<5450$\,K & Removing unphysical high temperatures & 103 \\ BE emission temperature & $>330$\,K & Removing unphysical low temperatures & 962 \\ SO emission temperature & $<700$\,K & Removing unphysical high temperatures & 1928 \\ SO emission temperature & $>305$\,K & Removing unphysical low temperatures & 3216 \\ BE peak energy & $<1.405$\,eV & Removing unphysical high energy & 144 \\ BE peak energy & $>1.301$\,eV & Removing unphysical low energy & 9601 \\ SO peak energy & $<1.779$\,eV & Range of SO emission is known & 1037 \\ SO peak energy & $>1.671$\,eV & Range of SO emission is known & 2893 \\ \hline \end{tabular}} \caption{Summary of the filtering conditions on the output PL-model parameters with the number of wires removed. Some wires are removed by multiple filters - the numbers do not sum to the total removed.\label{tab:filtering}} \end{table*} \subsection{Scaling of Internal Quantum Efficiency to Photoluminescence Intensity} The photoluminescence intensity $PL$ is related to the internal quantum efficiency $IQE$ by a scaling factor $\alpha$ and the diameter-dependent NW absorption $A(d)$, as mentioned in the main text. Quantifying these factors is crucial in setting a prior for the Bayesian analysis. \subsubsection{Experimental Factor $\alpha$} The scaling factor $\alpha$ is related to experimental conditions during the acquisition of the PL spectra, namely the laser power, laser spot size, objective lens collection efficiency, microscope throughput efficiency, and spectrometer quantum efficiency.
All of these conditions were approximated experimentally by an end-to-end calibration using laser reflection from a mirror in the sample position. \subsubsection{NW Absorption Modelling} The NW absorption as a function of diameter was determined using COMSOL simulations. In the simulation, the power loss density (PLD) was studied for a range of NW diameters from 10 to 200\,nm under simulated excitation conditions to obtain the absorption percentage. The PLD was found for p- and s-polarised incident light, with a light spot size of 1\,$\mu$m. The absorption was obtained by integrating the PLD across the NW length and normalising it to the incident power and spot diameter. Figure \ref{fig:Absorption}a depicts the NW absorption of light with a second-degree polynomial fit at different NW diameters; the shaded area indicates the range of diameters of interest. While a quadratic fit is naive -- reflecting the increasing absorption with increasing geometric area presented to the beam and increasing thickness in the sub-absorption-depth regime -- the model does not pass through all the data points. The NW model used in COMSOL reflects interference related to scattering and substrate interactions, which appears as oscillations. However, the error introduced is relatively small -- on the order of $<2\times$ -- and we choose to neglect this in our data analysis. An example of the NW absorption of light is shown in Figure \ref{fig:Absorption}b for s-polarised light at three different NW diameters. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{Figures/AbsorptionCurve.png} \includegraphics[width=0.8\textwidth]{Figures/AbsorptionImge.png} \caption{(a) Relation between NW absorption of light and diameter; the red line is a second-degree polynomial fit to the data. The shaded area illustrates the range of diameters of interest, 10-130\,nm. (b) Electric field distribution displaying light absorption of s-polarised light at three NW diameters: 50, 150, and 250\,nm.} \label{fig:Absorption} \end{figure} \subsection{Markov-Chain Monte-Carlo Model} \subsubsection{Prior Probability Distribution Functions} Bayesian inference is a process of updating prior knowledge with new evidence or data. The prior vector $\Psi$ is a list of probability distribution functions (PDFs) that represents our knowledge of each parameter before considering the data. In line with convention, we use analytical PDFs, with our choice of function depending on our confidence in each parameter; these are listed in the main text. The bandgap of GaAs $E_0$ is a well-known material parameter, hence the prior is set as a normal distribution $\mathcal{N}(\mu,\sigma)$ centred at 1.405\,eV with a narrow standard deviation of 5\,meV, reflecting our small uncertainty. Equation \ref{equ:normal} describes the normal distribution, where $\mu$ and $\sigma$ are the mean and standard deviation of the distribution, \begin{equation} \label{equ:normal} \mathcal{N}(\mu,\sigma) = e^{-0.5((x-\mu)/\sigma)^2}. \end{equation} In some cases, a range of values has been reported in the literature -- for instance for the radiative recombination coefficient $B$ -- or we base our prior on physical upper and lower limits -- for instance for the doping level $p$. In these cases, a generalised normal distribution $G(\mu,\sigma,\beta)$ is used, which broadens the normal distribution to reflect the higher uncertainty, \begin{equation} \label{equ:generalized} \mathcal{G}(\mu,\sigma,\beta) = e^{-0.5(|x-\mu|/\sigma)^\beta}. \end{equation} The parameter $\beta$ controls the shape (flatness) of the prior distribution.
Finally, in some cases hard upper and lower limits are known, such as for the initial carrier lifetime. However, we have little further a-priori insight, and the prior distribution is therefore uniform, \begin{equation} \label{equ:uniform} \mathcal{U}(x_{\min},x_{\max}) = \mathbb{1}\left[x_{\min} \leq x \leq x_{\max}\right]. \end{equation} In addition, these distributions may be combined or truncated to be positive to avoid physically unrealistic values. The distributions are schematically shown in Figure \ref{fig:PDFs}. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{Figures/PDFs.png} \caption{Schematic of the three probability distribution functions centred at $x=0$ with scale parameter $\sigma=1$.} \label{fig:PDFs} \end{figure} \subsubsection{Posterior Probability Distributions} Figure \ref{fig:Posteriors} shows the distributions and correlation plots between all model posterior distributions as a function of doping $\log(p)$ and diameter $d$. As described in the main text, posterior samples are separated into regions with experimental support (evidenced regions) and the full range returned by the model. Additionally, samples with very high values of $\alpha$ ($>2\times10^{-15}$) are removed to improve the visualization of the correlations, which removes $7\%$ of the sampling data-set. The figure illustrates no strong correlations between the parameters as a function of $d$ and $\log(p)$ before and after masking; the correlation between $\log(p)$ and $d$ is investigated in detail in the main text. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Figures/Posterior_DvsP.png} \caption{Pair posterior correlations for all the parameters determined using MCMC, including $K$, $E_0$, $\log(B)$, $\log(C)$, $\log(\tau_0)$, $\log(p)$, $d$, $\log(S)$, $T_0$, and $\log(\alpha$) versus $d$ and $\log(p)$.} \label{fig:Posteriors} \end{figure} \subsection{Fermi Energy Shift} The location of the Fermi level in a p-type semiconductor depends logarithmically on the hole density, as given by the following relation, \begin{equation} \label{equ:Fermi} \Delta E = K_{B}T \ln\left(\frac{N_{V}}{p}\right), \end{equation} where $\Delta E$ is the location of the Fermi level with respect to the valence band-edge, $K_{B}T$ is the thermal energy, taken as 25\,meV, $N_{V}$ is the density of states in the valence band, taken as $9\times10^{18}$\,cm$^{-3}$, and $p$ is the hole density. Given the spread in hole density, the shift of the Fermi level is calculated for each measured doping, as illustrated in Figure \ref{fig:Eexcc_Ef}a. Negative values indicate that the Fermi level is below the valence band-edge. For a Fermi level within the gap, no correction is needed. However, the doping regime above $1\times10^{19}$\,cm$^{-3}$ leads to degenerate doping, and a correction to the effective emission temperature is required. The electron excess energy $E_{exc}^{e}$ and hole excess energy $E_{exc}^{h}$ due to the incident photo-excitation energy $E_{inc}$ = 1.96\,eV are calculated using the hole-electron effective mass ratio $r_{m^{*}}$ = 0.118 and the spread in the band-edge energy $E_{g}$, that is \begin{align} \label{equ:Excess} E_{exc} = E_{inc}-E_{g} \\ E_{exc}^{h} = E_{exc}\times r_{m*} \\ E_{exc}^{e} = E_{exc}-E_{exc}^{h} \end{align} Figure \ref{fig:Eexcc_Ef}b depicts the Fermi energy and the electron and hole excess energies with respect to doping. The Fermi energy intersects the hole excess energy at a doping of $1.5\times10^{20}$\,cm$^{-3}$, revealing that above this doping level the absorption in GaAs NWs involving the valence band will drop.
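For illustration, the Fermi shift of Eqn.~\ref{equ:Fermi} and the excess energies can be evaluated over the doping range with a few lines of Python; the constants are those quoted above, while the band-edge energy is treated here as a single nominal value rather than the measured spread.
\begin{verbatim}
import numpy as np

kBT = 0.025    # thermal energy [eV]
N_V = 9e18     # valence-band density of states [cm^-3]
E_inc = 1.96   # excitation photon energy [eV]
r_m = 0.118    # hole-to-electron effective mass ratio

def fermi_shift(p):
    # Fermi level relative to the valence band-edge [eV]; negative
    # values place the Fermi level inside the valence band.
    return kBT * np.log(N_V / p)

def excess_energies(E_g=1.405):
    # Split the photo-excitation excess energy between holes and
    # electrons according to the effective mass ratio.
    E_exc = E_inc - E_g
    E_h = E_exc * r_m
    return E_exc - E_h, E_h  # (electron, hole) excess energies [eV]

p = np.logspace(18, 21, 100)          # hole densities [cm^-3]
degenerate = p[fermi_shift(p) < 0.0]  # dopings with E_F in the band
\end{verbatim}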
\begin{figure} \centering \includegraphics[width=0.95\textwidth]{Figures/Excess_Fermi_E.png} \caption{(a) Fermi energy shift as a function of logarithmic doping. (b) Hole excess energy and scaled electron excess energy versus doping compared to the Fermi energy. The vertical line indicates the doping level beyond which absorption in GaAs NWs involving the valence band is affected.} \label{fig:Eexcc_Ef} \end{figure} \subsubsection{Effective Carrier Temperature} Due to the spread in the Fermi energy level shift, the photo-excited electrons can recombine with holes below the valence band-edge, giving rise to hot emission. By eliminating the effect of the Fermi temperature, the effective electron temperatures from the band-edge and split-off emission become comparable, as shown in Figure \ref{fig:Effective_Temp}, with a gradient of $0.91_{0.90}^{0.92}$\,K/K. This indicates that both band-edge and split-off recombination processes are likely to take place on the same timescale. \begin{figure} \centering \includegraphics[width=0.65\textwidth]{Figures/Effective_T.png} \caption{Comparison of the effective electron temperature from the band-edge and the split-off temperature, showing a gradient of $0.91_{0.90}^{0.92}$\,K/K.} \label{fig:Effective_Temp} \end{figure} \end{document}
\section{Introduction} In recent years, flexible robots have been drawing more attention, as they might hold the key to significant problems in industry. These problems include increasing the load-to-mass ratio and making robots intrinsically safer to facilitate human-robot collaboration. The main reason why flexible robots have yet to be adopted is their flexibility, which causes oscillations and static deflections, complicating modeling and control. Flexible robots have an infinite number of \ac{DOF} and are governed by nonlinear \ac{PDE}. Discretization converts \ac{PDE} into \ac{ODE}, making them suitable for control and trajectory planning. There are three main methods for discretizing a flexible link: the \ac{AMM} \citep{book1984recursive, green2004LQR2link}, the \ac{FEM} \citep{sunada1981application, shabana2020dynamics} and the \ac{LPM} \citep{Yoshikawa1996, franke2009vibration, staufer2012ella, moberg2014modeling}. All the methods generally assume small deformations and use the linear theory of elasticity. The \ac{FEM} is the most accurate among the three but results in a higher number of differential equations. The \ac{AMM} is often used for one-\ac{DOF} flexible robots; for robots with higher \ac{DOF}, the choice of boundary conditions becomes nontrivial \citep{heckmann2010choice}. The \ac{LPM} is the simplest among the three, but tuning the parameters of such models takes much work. In this paper, we leverage the \ac{LPM} following the formulation defined in \cite{wittbrodt2007FREM}, where the method is called the modified rigid \ac{FEM} (MRFEM). Section \ref{sec:setup} describes the method in detail. Many control methods have been proposed for controlling flexible robots, including optimal control methods. \cite{green2004LQR2link} applied LQR for trajectory tracking control of a two-link flexible robot using linearized dynamics. \cite{silva2020implementable} used \ac{MPC} to control a single-link flexible robot, while \cite{boscariol2010model} used \ac{MPC} to control four-link flexible mechanisms. In both cases, the authors linearized the dynamics and used linear \ac{MPC}. Indeed, designing fast \ac{NMPC} is challenging given the high-dimensional dynamics of flexible robots. To make \ac{NMPC} available for a broader range of systems, there have been attempts in the last two decades to approximate \ac{NMPC} with \ac{NN} \cite{johansen2012explicit}. \cite{Nubert2020} proposed using supervised learning to approximate robust \ac{NMPC}, while \cite{carius_2020} proposed a policy search method guided by \ac{NMPC} without safety considerations. To ensure that the approximate \ac{NMPC} provides safe inputs (does not violate constraints), \cite{Nubert2020} leveraged a statistical validation technique to obtain safety guarantees. The authors reported that the validation process is time-consuming. In general, approximating \ac{NMPC} with \ac{NN} fits under the umbrella of \ac{IL}. \cite{brunke2022safe} discuss various methods for ensuring the safety of learning methods. One particular approach to guaranteeing the safety of a learned policy is the \ac{SF} \citep{Wabersich2021PredictiveSafetyFilter}. The \ac{SF} is an \ac{MPC} scheme that receives a candidate input and verifies whether the system can still be driven to a safe terminal set after applying the candidate input. If the answer is positive, the input is applied to the system; otherwise, it is modified as little as possible to ensure safety.
\cite{vinod2022safetyfilter} successfully used a \ac{SF} for a multi-agent drone setup that was trained with \ac{RL}. To the best of the authors' knowledge, this paper is the first that investigates whether \ac{IL} combined with a \ac{SF} can replace \ac{NMPC} for safe regulation and trajectory tracking control of flexible robots. We show that our particular implementation successfully speeds up the computation time of the controller and filters unsafe controls while operating close to the expert's performance. The paper is organized as follows: Section \ref{sec:setup} describes the setup, Section \ref{sec:nmpc} formulates the \ac{NMPC}, while Section \ref{sec:il} describes \ac{IL} as a tool to approximate \ac{NMPC}. Section \ref{sec:exper} presents simulation studies of several controllers for regulation and trajectory tracking and discusses the results. Finally, in Section \ref{sec:conc}, we make concluding remarks. \section{Setup} \label{sec:setup} The simulation setup is a three-\ac{DOF} serial manipulator, as shown in Fig. \ref{fig:model_discr}, which was inspired by the flexible robots TUDOR \citep{malzahn2010tudor} and ELLA \citep{staufer2012ella}. The first link is rigid; the second and third links are flexible and have the same dimensions and material properties. We assume that the joints are rigid and directly actuated, i.e., the actuators do not have a gearbox. Available measurements are the actuated joint positions and velocities, and the \ac{EE} position to monitor elastic deflections. \begin{figure}[t] \captionsetup{font=small} \centering \includegraphics[width=0.6\textwidth]{figures/fa_modeling.pdf} \caption{Schematic representation of the setup and of the discretization method.} \vspace{-0.35cm} \label{fig:model_discr} \end{figure} \subsection{Modeling} For modeling the robot, we utilize MRFEM: it first divides the flexible links into $n_{\mathrm{seg}}$ segments and lumps their spring and damping properties at one point, e.g., the geometric center of the segment. Then, MRFEM isolates so-called \ac{rfes} between massless passive joints (spring-damper elements), see Fig. \ref{fig:model_discr}. Mechanics textbooks contain ready-to-use formulas for computing the inertial properties of the \ac{rfes}, and the spring and damper coefficients, for simple geometries. For complex geometries, however, CAD software should be used. For deriving the equations of motion of a flexible manipulator using the Lagrange method \citep[Ch. 7]{sciavicco2001book}, let $\bm q = (\bm q_\mathrm{a}; \ \bm q_\mathrm{p})$ denote the vector of joint angles, with $\bm q_\mathrm{a} \in \mathbb{R}^3$ being the vector of active joint angles and $\bm q_\mathrm{p} \in \mathbb{R}^{2 n_{\mathrm{seg}}}$ being the vector of passive joint angles. Passive joints are generally chosen as spherical joints to represent compliance in all directions (two bending deformations and a torsional deformation). For some geometries, as in our case, compliance in one direction dominates compliance in the other directions. Reducing the model by modeling flexibility only along the most compliant direction is beneficial in such cases. In the setup, we only model the bending about the axes $Z_1$ and $Z_2$, as shown in Fig. \ref{fig:model_discr}.
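To illustrate the parameter-lumping step, the sketch below computes the segment mass and the bending spring and damper constants for a uniform link with a rectangular cross-section; the stiffness formula $EI/\Delta l$ per rotational spring follows the rigid \ac{FEM} literature, while the dimensions, material constants and damping coefficient are placeholders rather than the values of the setup considered here.
\begin{verbatim}
# Placeholder geometry and material for one uniform flexible link
L = 0.5             # link length [m]
w, h = 0.03, 0.003  # cross-section width and height [m]
rho = 2700.0        # density [kg/m^3] (aluminium)
E = 70e9            # Young's modulus [Pa]
n_seg = 2           # number of segments

dl = L / n_seg            # segment length
I_z = w * h**3 / 12.0     # second moment of area about bending axis
m_rfe = rho * w * h * dl  # mass of one rigid finite element

k_bend = E * I_z / dl     # rotational spring constant [Nm/rad]
eta = 1e-4                # stiffness-proportional damping (placeholder)
d_bend = eta * k_bend     # rotational damper constant [Nms/rad]

print(f"m_rfe = {m_rfe:.4f} kg, k_bend = {k_bend:.1f} Nm/rad")
\end{verbatim}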
The kinetic energy $K$, the potential energy $P$ and the dissipation function $D$ of the manipulator are: \begin{align} K(\bm q, \dot{\bm q}) = \frac{1}{2} \dot{\bm q}^\top \bm M( \bm q) \dot{\bm q},\ P(\bm q) = \frac{1}{2} \bm q^\top \bm K \bm q + \sum_{i=0}^{n_{\mathrm{rb}}} m_i \bm g_0^\top \bm p_{C_i},\ D(\dot{\bm q}) = \frac{1}{2} \dot{\bm q}^\top \bm D \dot{\bm q}, \end{align} where $n_{\mathrm{rb}} = 1 + 2 (n_{\mathrm{seg}} + 1)$ is the number of rigid bodies, $\bm M \in \mathbb{R}^{n_{\mathrm{rb}} \times n_{\mathrm{rb}}}$ is the symmetric inertia matrix; $\bm K \in \mathbb{R}^{n_{\mathrm{rb}} \times n_{\mathrm{rb}}}$ and $\bm D \in \mathbb{R}^{n_{\mathrm{rb}} \times n_{\mathrm{rb}}}$ are the constant diagonal stiffness and damping matrices, respectively; $m_i$ and $\bm p_{C_i} \in \mathbb{R}^{3}$ are the $i$-th link's mass and center of mass, respectively; $\bm g_0 = [0\ 0\ -9.81]^\top\ \mathrm{m}/\mathrm{s}^2$ is the gravity acceleration vector. Applying the Lagrange method yields the final expression for the flexible manipulator dynamics discretized using MRFEM \begin{align} \bm M( \bm q) \Ddot{\bm q} + \bm C(\bm q, \Dot{\bm q}) \Dot{\bm q} + \bm K \bm q + \bm D \Dot{\bm q} + \bm g(\bm q) = \bm B \bm \tau, \label{eq: rgb_dyn} \end{align} where $\bm C \in \mathbb{R}^{n_{\mathrm{rb}} \times n_{\mathrm{rb}}}$ is the matrix of centrifugal and Coriolis forces, $\bm g \in \mathbb{R}^{n_{\mathrm{rb}}}$ is the vector of gravitational forces, $\bm B \in \mathbb{R}^{n_{\mathrm{rb}} \times 3}$ is the constant control jacobian and $\bm \tau \in \mathbb{R}^{3}$ is the torque vector. The model is converted to state-space form by defining $\bm x := \left(\bm q; \dot{\bm q} \right)$ and $\bm u := \bm \tau$ \begin{align} \dot{\bm x} = \bm f( \bm x, \bm u, n_{\mathrm{seg}}) = \left(\dot{\bm q};\ \bm M(\bm q)^{-1}\{\bm B \bm \tau - \bm C(\bm q, \Dot{\bm q}) \Dot{\bm q} - \bm K \bm q - \bm D \Dot{\bm q} - \bm g(\bm q) \}\right) \label{eq:ss_dyn}. \end{align} The output map of the system is $\bm y = \bm h(\bm x) = (\bm q_\mathrm{a}; \dot{\bm q}_a; \bm p_{\mathrm{ee}})$, where $ \bm p_{\mathrm{ee}}$ is the \ac{EE} position. \begin{figure} \captionsetup{font=small} \centering \includegraphics[width=0.9\textwidth, trim={0cm 0.2cm 0cm 0cm},clip]{figures/discr.pdf} \caption{EE positions $x$ and $z$ simulated for a different number of segments $n_{\mathrm{seg}}$.} \vspace{-0.35cm} \label{fig:discr_sim} \end{figure} \subsection{Simulation and discretization} Roboticists favor MRFEM because existing efficient tools for rigid-body dynamics can be reused: the \ac{ABA} for the forward dynamics and the \ac{RNEA} for the inverse dynamics \citep{featherstone2014rigid}. In this paper, we use the \ac{ABA} for the forward dynamics and the forward path of the \ac{RNEA} for the forward kinematics, both generated by Pinocchio \citep{carpentier2019pinocchio} as CasADi \citep{Andersson2019casadi} functions. The \ac{ODE} \eqref{eq:ss_dyn} is stiff; commonly used explicit fixed-step integrators in optimal control, e.g., the 4th-order Runge-Kutta (RK4) integrator, quickly diverge. Implicit integrators, however, can accurately integrate \eqref{eq:ss_dyn} at a higher computational cost. In this paper, for simulation, we use the backward differentiation formula implemented in CVODES \citep{hindmarsh2005sundials} with specified absolute and relative tolerances. As the ground truth dynamics, we consider the model with $n_{\mathrm{seg}} = 10$. The critical aspect of any discretization method, including MRFEM, is the choice of $n_{\mathrm{seg}}$.
Finer discretization, on the one hand, yields a better approximation of the natural frequencies of the flexible links but, on the other hand, yields a high-dimensional state-space model of the form \eqref{eq:ss_dyn}. Figure \ref{fig:discr_sim} shows the comparison between models with different $n_{\mathrm{seg}}$ in terms of the $x$ and $z$ positions of the \ac{EE}. All the models were initialized with equivalent initial states and were excited with two square wave signals: first, joint three with an amplitude of $5\ \mathrm{Nm}$ and a duration of $0.03\ \mathrm{s}$; $0.42\ \mathrm{s}$ later, joint two with an amplitude of $10 \ \mathrm{Nm}$ and a duration of $0.04 \ \mathrm{s}$. The \ac{EE} positions predicted by the models with $n_{\mathrm{seg}} = 0$ (rigid body approximation) and $n_{\mathrm{seg}} = 1$ are noticeably different compared with the finer discretized models. The \ac{EE} position predictions of the model with just two segments approach the ``true'' \ac{EE} position (of the model with $n_{\mathrm{seg}} = 10$). This result is consistent with the validation of MRFEM from \cite{wittbrodt2007FREM}, Ch. 6, and indicates that a discretization as coarse as two segments might be sufficient for control purposes. Subsection \ref{subsec:model_complexity_nmpc} further investigates model complexity, but from the controller performance perspective. \section{Nonlinear model predictive control} \label{sec:nmpc} We use \ac{NMPC} to solve an optimal control problem for regulation around an \ac{EE} goal position and refer to it as the expert policy. We formulate it as a general \ac{NLP}, as detailed in the following section. \subsection{NLP Formulation} In order to formulate the \ac{NLP} for solving the \ac{NMPC} problem, we use multiple shooting: we discretize the trajectory into $N$ intervals, leading to the state decision variables $\bm X=[\bm x_0,\ldots,\bm x_N]\in\mathbb{R}^{n_x \times (N+1)}$ and the control decision variables $\bm U=[\bm u_0,\ldots,\bm u_{N-1}]\in\mathbb{R}^{n_u \times N}$. Additionally, we use algebraic variables $\bm Z=\left[\bm z_0,\ldots,\bm z_{N}\right]\in\mathbb{R}^{n_z \times (N+1)}$ for the \ac{EE} position of the robot. In the cost function, we penalize the deviation from the reference \ac{EE} positions $\bm z^\mathrm{ref}_0,\ldots,\bm z^\mathrm{ref}_N$, reference states $\bm x^\mathrm{ref}_0,\ldots,\bm x^\mathrm{ref}_N$ and reference torques $\bm u^\mathrm{ref}_0,\ldots,\bm u^\mathrm{ref}_{N-1}$ using the squared L2 norm. The weight matrices of the squared L2 norms are $\bm Q=\mathrm{diag}(\bm w)\in\mathbb{R}^{n_x \times n_x}$, $\bm Q_N\in\mathbb{R}^{n_x \times n_x}$, $\bm R=\mathrm{diag}(\bm r)\in\mathbb{R}^{n_u \times n_u}$, $\bm P=\mathrm{diag}(\bm p)\in\mathbb{R}^{n_z \times n_z}$ and $\bm P_N\in\mathbb{R}^{n_z \times n_z}$. The vector $\bm w$ contains the weights $\bm w_{q_\mathrm{a}}$ for the active joint positions, $\bm w_{q_\mathrm{p}}$ for the passive joint positions, $\bm w_{\dot{q}_\mathrm{a}}$ for the active joint velocities and $\bm w_{\dot{q}_\mathrm{p}}$ for the passive joint velocities. We formulate constraints for joint velocities and obstacles using slack variables $\bm \Sigma=[\bm \sigma_0,\ldots,\bm \sigma_N]\in\mathbb{R}^{2\times (N+1)}$, with $\bm \sigma_k=[\sigma_k^{\dot{q}}\quad\sigma_k^\mathrm{obs}]^\top\in \mathbb{R}^2$. The slack variables are penalized in the cost function by L1- and squared L2-norms (with weights $\bm s\in\mathbb{R}^2$ and $\bm S\in \mathbb{R}^{2\times2}$, respectively).
\begin{align} \bm L(\bm X,\bm U,\bm Z,\bm \Sigma)= &\sum_{k=0}^{N-1} \Big( \norm{\bm x_k-\bm x^\mathrm{ref}_{k}}_{\bm Q}^2 + \norm{\bm u_k-\bm u^\mathrm{ref}_k}_{\bm R}^2+ \norm{\bm z_k-\bm z^\mathrm{ref}_{k}}_{\bm P}^2 + \norm{\bm \sigma_k}_{\bm S}^2 + \norm{\bm \sigma_k}_{1,s} \Big) + \nonumber \\ &\norm{\bm x_N-\bm x^\mathrm{ref}_{N}}_{\bm Q_N}^2 + \norm{\bm z_N-\bm z^\mathrm{ref}_{N}}_{\bm P_N}^2 + \norm{\bm \sigma_N}_{\bm S}^2 + \norm{\bm \sigma_N}_{1,s}. \label{eq:objective} \end{align} We use a four-stage implicit RK method with a sampling time $\Delta t$ to discretize the continuous-time system dynamics \eqref{eq:ss_dyn}, leading to the equality constraints $\bm 0=\bm F(\bm x_{k+1}, \bm x_k, \bm u_k)$. We use the forward kinematics $\bm p_\mathrm{EE}=\bm P^\mathrm{fwd}(\bm x)$ to define the algebraic equation $\bm z_k=\bm P^\mathrm{fwd}(\bm x_k)$. We encode obstacle constraints as box constraints for the algebraic states with an upper bound $\ub{\bm z}^\mathrm{tr}$ and a lower bound $\lb{\bm z}^\mathrm{tr}$. Control upper bounds $\ub{\bm u}$, control lower bounds $\lb{\bm u}$, state upper bounds $\ub{\bm x}^\mathrm{tr}$ and state lower bounds $\lb{\bm x}^\mathrm{tr}$ are used to formulate box constraints on the input torques and joint velocities, respectively. To avoid constraint violation due to noisy measurements, we perform a simple heuristic-based constraint tightening. In particular, based on the knowledge about the measurement noise, we assume a safety margin $\bm \delta_x =[\bm \delta_q^\top \quad \bm \delta_{\dot{q}}^\top]^\top \in \mathbb{R}^{n_x}$ for the state constraints, $\ub{\bm x}=\ub{\bm x}^\mathrm{tr}-\bm \delta_x$, $\lb{\bm x}=\lb{\bm x}^\mathrm{tr}+\bm \delta_x$, and $\bm \delta_z$ for the \ac{EE} position constraints, $\ub{\bm z}=\ub{\bm z}^\mathrm{tr}-\bm\delta_z$, $\lb{\bm z}=\lb{\bm z}^\mathrm{tr}+\bm\delta_z$. By choosing generous safety margins, we could avoid a more sophisticated robust \ac{NMPC} formulation. Since the optimization problem can only be formulated for a finite horizon, a control invariant terminal set~$S^t(x,z)$ is included, in which we set the joint velocities to zero. With the estimated state $\hat{\bm x}$, the final nonlinear program reads as \begin{mini!} {\bm X,\bm U,\bm Z,\bm \Sigma} {\bm L(\bm X,\bm U,\bm Z,\bm \Sigma)} {\label{eq:final_mpc_policy}} {} \addConstraint{\bm x_0}{= \hat{\bm x},\quad \bm \Sigma \geq 0, \quad (\bm x_N,\bm z_N)\in S^t}{} \addConstraint{\bm 0}{= \bm F(\bm x_{k+1},\bm x_k,\bm u_k),}{\quad k=0,\ldots,N-1} \addConstraint{\bm z_k}{=\bm P^\mathrm{fwd}(\bm x_k),}{\quad k=0,\ldots,N} \addConstraint{\lb{\bm x}-\bm \sigma_k^{\dot{q}}}{\leq \bm x_k \leq \ub{\bm x}+\bm \sigma_k^{\dot{q}},}{\quad k=0,\ldots,N} \label{eq:final_mpc_policy_x} \addConstraint{\lb{\bm z}-\bm \sigma_k^\mathrm{obs}}{\leq \bm z_k \leq \ub{\bm z}+\bm \sigma_k^\mathrm{obs},}{\quad k=0,\ldots,N} \label{eq:final_mpc_policy_z} \addConstraint{\lb{\bm u}}{\leq \bm u_k \leq \ub{\bm u},}{\quad k=0,\ldots,N-1.} \label{eq:final_mpc_policy_u} \end{mini!} \subsection{Safety Filter} As we aim at approximating \ac{NMPC} with a \ac{NN}, safety w.r.t. the constraints in \eqref{eq:final_mpc_policy} cannot be guaranteed directly. \cite{Wabersich2021PredictiveSafetyFilter} propose an MPC-based policy~${\pi^\mathrm{s}:\mathbb{R}^{n_u}\rightarrow \mathbb{R}^{n_u}}$ that projects the NN output~$\bm u^\mathrm{NN}\in \mathbb{R}^{n_u}$ to a safe control~$\bm u^\mathrm{s}=\pi^\mathrm{s}(\bm x^\mathrm{s},\bm u^\mathrm{NN}) \in\mathcal{U}^\mathrm{s}\subseteq\mathbb{R}^{n_u}$.
The safe set~$\mathcal{U}^\mathrm{s}$ is defined as in \eqref{eq:final_mpc_policy_u} for a possibly simpler discrete-time system model~$\bm x^\mathrm{s}_{i+1}=\bm F^\mathrm{s}(\bm x^\mathrm{s}_i,\bm u^\mathrm{s}_i)$ with states~$\bm x^\mathrm{s}$ and controls~$\bm u^\mathrm{s}$. Constraint satisfaction is expressed via the set membership of the state variables~${\bm x^\mathrm{s}\in\mathcal{X^\mathrm{s}}}$ \eqref{eq:final_mpc_policy_x} and algebraic variables~$\bm z^\mathrm{s}\in\mathcal{Z^\mathrm{s}}$ \eqref{eq:final_mpc_policy_z}, and for the controls via ${\bm u^\mathrm{s}\in\mathcal{U}^\mathrm{s}}$. The safety filter solves the optimization problem \eqref{eq:final_mpc_policy} but with the cost function ${\norm{\bm u^\mathrm{s}_0 - \bm u^\mathrm{NN}}}^2_{\bm R^\mathrm{s}}$ and decision variables $ \bm X^\mathrm{s},\bm U^\mathrm{s},\bm Z^\mathrm{s},\bm \Sigma^\mathrm{s}$. It takes the first control~$\bm u^{\mathrm{s} *}_0$ of the optimal solution~$(\bm X^{\mathrm{s},*},\bm U^{\mathrm{s}*})$ as the filter output~$\bm u^\mathrm{s}:=\bm u^{\mathrm{s}*}_0$. \section{Imitation Learning} \label{sec:il} We aim at imitating the expert policy $\pi^*$, i.e., \ac{NMPC}, by training a \ac{NN}. Because sequential prediction problems violate the i.i.d. assumption, purely supervised learning performs poorly \citep{ross2011dagger}. One approach that addresses this problem and has shown superior performance in practice is named \emph{DAgger} (Data Aggregation \cite{ross2011dagger}). It alternates between roll-outs by the neural network to aggregate visited states into a dataset $\mathcal{D}$ and queries to the expert policy on those visited states. Thereafter, it performs supervised learning with an L2-loss function on the policy output (torques) to obtain a trained policy $\pi_i$ in iteration $i$. \cite{ross2011dagger} showed that, in this way, training samples for supervised learning are sampled efficiently. We use \emph{DAgger} for \ac{IL}, where we train for $M$ episodes and collect $n_0$ state-action pairs in an initial phase and $n_1$ state-action pairs in each episode thereafter. \section{Experiments} \label{sec:exper} \subsection{State estimation} Since the number of segments $n_{\mathrm{seg}}$ of the model used for simulation differs from the $n_{\mathrm{seg}}$ of the control model and we simulate measurement noise, a state estimator is needed to infer the states of the control model from the outputs $\bm y$ of the simulation model. To this end, we implemented a commonly used discrete-time \ac{EKF}, which uses a model of the form \eqref{eq:ss_dyn}. We discretized the estimation model (which is the same as the control model) using the fixed-step implicit Radau collocation method; the linearization of the model is performed through the automatic differentiation available in CasADi. \subsection{Model complexity comparison for NMPC} \label{subsec:model_complexity_nmpc} To analyze the influence of the model fidelity, i.e., the number of segments, on the controller performance, we compared the \ac{NMPC} for $n_\mathrm{seg}=\{0,1,2,3,5,10\}$. For $\{0,1\}$ segments, the solver did not converge, and for ten segments, the computation time is unacceptably long. We evaluate the performance of a controller by the path-length $d_{\mathcal{G}_{\epsilon}}$ of the \ac{EE} position and the time taken from an initial state to reach a goal region $\mathcal{G}_{\epsilon}(\bar{\epsilon})$ and stay within it.
We define $\mathcal{G}_{\epsilon}(\bar{\epsilon})=\{\bm z \in \mathbb{R}^{n_z} | (\bm z-\bm z_\mathrm{goal})^\top(\bm z-\bm z_\mathrm{goal})\leq \bar{\epsilon} \}$ as a ball of radius $\bar{\epsilon}$ around the goal \ac{EE} position. As shown in Tab.~\ref{tab:cntr_per}, the average mean computation time $\bar t_{\mathrm{MPC}}$ and the maximum computation time $t_{\mathrm{MPC}}^\mathrm{max}$ increase drastically with a higher number of segments $n_\mathrm{seg}=\{2,3,5\}$. However, the performance measured by the KPIs $t_{\mathcal{G}_{\epsilon}}$ and $d_{\mathcal{G}_{\epsilon}}$ does not significantly improve. Therefore, we selected $n_\mathrm{seg}=2$ as a compromise between performance and computation time for the \ac{MPC} for further use within the \ac{IL} framework. \subsection{Set-to-point motion} The setup (Fig.~\ref{fig:scetch_task_1}) of the first task includes a goal \ac{EE} position $\bm z_\mathrm{goal}$ that should be reached from an initial rest state sampled from an initial set \begin{equation} \mathcal{I}=\Bigg\{ \bm q \in \mathbb{R}^{n_x} \Bigg|\Big[-\frac{\pi}{2}-0.1,-\frac{\pi}{4},-\frac{\pi}{4}\Big]^\top \leq \bm q_\mathrm{a}^\top \leq \Big[-0.1,\frac{\pi}{4},\frac{\pi}{4}\Big]^\top,\ \bm q_\mathrm{p},\ \dot{\bm q} = \bm 0 \Bigg\}, \end{equation} where $\bm q_\mathrm{p}$ is the equilibrium configuration of the passive joints for a given configuration of the active joints ($\bm K \bm q + \bm g(\bm q) = \bm 0$). The initial set is a subset of the reachable set $\mathcal{F}$, which is constrained by an obstacle set $\mathcal{O}=\{\bm p_\mathrm{EE}\in \mathbb{R}^{n_z}| p_{y,\mathrm{EE}}\geq 0 \}$, where $p_{y,\mathrm{EE}}$ is the $y$-component of the \ac{EE} position. \begin{figure} \captionsetup{font=small} \centering \begin{minipage}{.45\textwidth} \centering \def.9\textwidth{.9\textwidth} \input{task1.pdf_tex} \end{minipage}% \begin{minipage}{.55\textwidth} \centering \includegraphics[width=1\textwidth]{figures/fa_render.pdf} \label{fig:sub2} \end{minipage} \caption{On the left, a sketch of the sets for the defined task is shown. The right image shows the rendered robot in the simulation environment with the goal location close to the safety-critical wall constraint.} \label{fig:scetch_task_1} \end{figure} We trained the \ac{NN} agent using \emph{DAgger} for $M=30$ episodes and collected $14 \cdot 10^4$ training samples. Fig.~\ref{fig:task1_training} shows the return -- the sum of rewards $r_i=\Delta t ||\bm z_\mathrm{goal}-\bm z_i||_2$, used purely for evaluation within training -- and the L2 loss. \begin{figure} \captionsetup{font=small} \centering \includegraphics[width=0.7\textwidth,trim={0.2cm 0.2cm 0.2cm 0.2cm},clip]{figures/training.pdf} \caption{L2 loss and evaluation return (sum of distances from the current \ac{EE} position to the goal \ac{EE} position) during \ac{IL} of the expert policy.} \label{fig:task1_training} \end{figure} The resulting trained \ac{NN} achieves a massive improvement in computation time compared with the expert \ac{NMPC}, as shown in Tab.~\ref{tab:task1_timings}. However, it violates constraints, since the learned expert's motion is \emph{sliding} along the constraints, making it difficult for the \ac{NN} to meet them precisely. The \ac{SF} yields constraint satisfaction at the cost of increased computation time. Nevertheless, the \ac{NN} with the \ac{SF} is still much faster than the expert. For comparison, we implemented \ac{MPC} with different horizons ($N=\{10,20,40,80\}$). Shorter horizons decrease the computation time but may not maintain stability.
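Before comparing the controllers quantitatively, we give for reference a minimal sketch of the \emph{DAgger} loop described in Section \ref{sec:il}; \texttt{env}, \texttt{expert\_mpc} and \texttt{policy} are hypothetical interfaces standing in for the simulator, the expert \ac{NMPC} and the \ac{NN}, and are not part of our actual code base.
\begin{verbatim}
import numpy as np

def dagger(env, expert_mpc, policy, M=30, n0=5000, n1=3000):
    # DAgger: roll out a controller, label every visited state with
    # the expert NMPC, aggregate the pairs and retrain the NN by
    # supervised learning with an L2 loss (policy.fit).
    states, actions = [], []

    def rollout(controller, n_samples):
        x = env.reset()
        for _ in range(n_samples):
            states.append(x)
            actions.append(expert_mpc(x))  # expert label on visited state
            x = env.step(controller(x))    # but follow the controller

    rollout(expert_mpc, n0)                # initial phase: expert rollout
    for _ in range(M):
        policy.fit(np.array(states), np.array(actions))
        rollout(policy, n1)                # aggregate NN-visited states
    policy.fit(np.array(states), np.array(actions))
    return policy
\end{verbatim}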
Tab.~\ref{tab:kpis} compares the performances of several controllers for different sizes $\epsilon$ of the goal region $\mathcal{G}_{\epsilon}$, as measured by selected KPIs. The expert's performance on randomized initial states is given in absolute values, whereas the other approaches are given relative to the expert's performance, including the standard deviation over the experiments. For a larger goal region, $\epsilon=0.1$\,m, all the controllers perform similarly, while for smaller values of $\epsilon$ the time to reach the goal for the \ac{NN} and \ac{SF}+\ac{NN} increases by up to $25\%$ compared with the expert, and the traveled path by up to $6\%$. The \ac{SF}+NN implementation performs slightly worse than the pure NN implementation, but yields a safety guarantee with respect to our chosen robust constraints. \begin{table} \captionsetup{font=small} \parbox[t]{.58\linewidth}{ \centering \scalebox{0.73}{ \begin{tabular}[t]{c|ccc|cc} \hline & \multicolumn{3}{c}{computation times $[\mathrm{ms}]$} & \multicolumn{2}{c}{max. constr. violations}\\ approach & mean & std. & max. & $\dot{\bm q}_a$ $[\frac{\mathrm{rad}}{\mathrm{s}}]$ & wall $[\mathrm{cm}]$ \\ \hline \\ expert MPC ($N=50$) & 11.790 & 0.239 & 13.775 & 0.00 & 0.00 \\ NN & 0.091 & 0.001 & 0.095 & 4.02 & 4.46 \\ NN + SF & \textbf{3.619} & 0.258 & 7.304 & \textbf{0.00} & \textbf{0.00} \\ MPC ($N=10$) & 2.701 & 0.098 & 3.511 & 0.00 & 3.51 \\ MPC ($N=20$) & 4.807 & 0.167 & 6.334 & 0.00 & 0.00 \\ MPC ($N=40$) & 9.540 & 0.247 & 11.679 & 0.00 & 0.00 \\ MPC ($N=80$) & 19.016 & 0.390 & 22.107 & 0.00 & 0.00 \\ \hline \end{tabular} } \caption{Computation times and constraint violations of MPC formulations with different horizon lengths, the trained \ac{NN} and the \ac{NN} including the \ac{SF}.} \label{tab:task1_timings} } \hfill \parbox[t]{.4\linewidth}{ \centering \scalebox{0.73}{ \begin{tabular}[t]{ccc} \hline \\ metric/KPI & $n_\mathrm{seg} = 3$ & $n_\mathrm{seg} = 5$ \\ \hline \\ $d_{{\mathcal{G}_{\epsilon}}}/d_{{\mathcal{G}_{\epsilon}}}^{*}$ & $1.008$ & $1.026$ \\ $t_{{\mathcal{G}_{\epsilon}}}/t_{{\mathcal{G}_{\epsilon}}}^{*}$ & $0.955$ & $0.974$ \\ $\bar t_{\mathrm{MPC}}/ \bar t_{\mathrm{MPC}}^{*}$ & $1.653$ & $2.95$ \\ $t_{\mathrm{MPC}}^\mathrm{max} / t_{\mathrm{MPC}}^{\mathrm{max},\ *}$ & $1.589$ & $2.940$ \\ \hline \end{tabular} } \caption{Performance comparison of \ac{NMPC} controllers with different models.
KPIs are measured with respect to the KPIs of the \ac{NMPC} with $n_{\mathrm{seg}}=2$, denoted with the superscript ${}^*$.} \label{tab:cntr_per} } \end{table} \begin{table}[t] \captionsetup{font=small} \centering \scalebox{0.73}{ \begin{tabular}{c|ccc|ccc|ccc} \hline & \multicolumn{3}{c|}{$\epsilon=0.1\ [\mathrm{m}]$}& \multicolumn{3}{c|}{$\epsilon=0.05\ [\mathrm{m}]$ }& \multicolumn{3}{c}{$\epsilon=0.025\ [\mathrm{m}]$}\\ & f $[\%]$ & $t_{\epsilon} [\mathrm{s}]$ & $d_{\epsilon} [\mathrm{m}]$ & f $[\%]$ & $t_{\epsilon} [\mathrm{s}]$ & $d_{\epsilon}$ $[\mathrm{m}]$ & f $[\%]$ & $t_{\epsilon} [\mathrm{s}]$ & $d_{\epsilon} [\mathrm{m}]$ \\ \hline \\ expert MPC ($N=50$) & 0 & 0.34$\pm$0.15 & 0.72$\pm$0.35 & 0 & 0.39$\pm$0.14 & 0.79$\pm$0.34 & 0 & 0.43$\pm$0.12 & 0.82$\pm$0.34\\ \hline \\ & & $\frac{t_{\epsilon}}{t^\mathrm{exp}_{\epsilon}}$ & $\frac{d_{\epsilon}}{d^\mathrm{exp}_{\epsilon}}$ & & $\frac{t_{\epsilon}}{t^\mathrm{exp}_{\epsilon}}$ & $\frac{d_{\epsilon}}{d^\mathrm{exp}_{\epsilon}}$& & $\frac{t_{\epsilon}}{t^\mathrm{exp}_{\epsilon}}$ & $\frac{d_{\epsilon}}{d^\mathrm{exp}_{\epsilon}}$ \\ \hline \\ NN & 0 & 0.98$\pm$0.13 & 1.01$\pm$0.05 & 0 & 1.06$\pm$0.14 & 1.04$\pm$0.07 & 5 & 1.25$\pm$0.18 & 1.06$\pm$0.07\\ NN + SF & 0 & 1.05$\pm$0.08 & 1.00$\pm$0.04 & 5 & 1.08$\pm$0.16 & 1.01$\pm$0.04 & 5 & 1.17$\pm$0.27 & 1.02$\pm$0.05\\ MPC ($N=10$) & 100 & nan & nan & 100 & nan & nan & 100 & nan & nan \\ MPC ($N=20$) & 0 & 0.99$\pm$0.04 & 0.99$\pm$0.03 & 80 & 1.76$\pm$1.10 & 1.11$\pm$0.07 & 100 & nan & nan \\ MPC ($N=40$) & 0 & 1.00$\pm$ 0.01 & 1.00$\pm$ 0.02 & 0 & 1.03$\pm$ 0.10 & 1.01$\pm$ 0.03 & 0 & 1.06$\pm$ 0.14 & 1.01$\pm$ 0.03\\ MPC ($N=80$) & 0 & 1.00$\pm$ 0.00 & 1.00$\pm$ 0.00 & 0 & 1.00$\pm$ 0.02 & 1.00$\pm$ 0.01 & 0 & 0.99$\pm$ 0.03 & 1.00$\pm$ 0.01\\ \hline \end{tabular} } \caption{Key performance indicators and the failure rate $f$ of the MPC formulations with different horizon lengths, the trained \ac{NN} and the \ac{NN} including the \ac{SF}.} \label{tab:kpis} \end{table} The time signals of a simulation run are plotted in Fig.~\ref{fig:task1_time_plots}, where the \ac{EE} goal position is reached in approximately 1\,s. The NN violates the joint velocity constraint and nearly violates the \ac{EE} $y$-position constraint. The \ac{SF} successfully projects the outputs of the same NN to safe controls, such that the constraints are respected. \begin{figure} \captionsetup{font=small} \centering \includegraphics[trim={0cm 0.2cm 0cm 0.2cm},clip]{figures/fig1.pdf} \caption{Simulation comparison of the expert MPC, the trained neural network (NN) and the trained NN with the addition of the safety filter (NN+SF). Despite the good performance of the NN, it violates the joint velocity constraint. The safety filter successfully avoids the constraint violation.} \label{fig:task1_time_plots} \end{figure} \section{Conclusion} \label{sec:conc} This work demonstrated that a three-degree-of-freedom flexible robot manipulator, modeled using the lumped parameter approach called the modified rigid finite element method, can be effectively controlled by \ac{NMPC}. We used a high-fidelity simulator and compared \ac{NMPC} formulations for different model complexities (discretizations of the flexible links) and horizon lengths. We alleviated the problem of the rather high computation time of \ac{NMPC} by approximating it with a NN using imitation learning, which reduced the computational load by a factor of 100. The NN, however, does not give any safety guarantees.
To recover safety, we combined the \ac{NN} with a ``safety filter'' formulated as a simple \ac{NMPC} that projects the \ac{NN} controls onto a safe control set. The proposed approach can be easily extended to trajectory tracking problems in flexible robotics and to control problems in soft robotics, where robots are even more compliant and require even more complicated models. \acks{ The authors want to thank Daniele Ronzani for his helpful feedback. This research was supported by DFG via Research Unit FOR 2401 and project 424107692 and by the EU via ELO-X 953348. This research was also supported by the Research Foundation Flanders (FWO-Vlaanderen) through SBO project Energy-efficient, Lightweight, safe Yet Strong manipulator Arm (ELYSA) for cobot applications (S001821N). }
\section{Introduction} Patterns in large-dimensional data are central to many research problems. They often reveal a hidden simplicity of the data, helping to infer useful models or to design more principled data analysis approaches. Developing inference methods and testing the ability of various algorithms to detect such patterns requires generative models that are able to produce many statistically similar datasets with varying types of patterns. In this work we construct a generative probabilistic matrix model, which, through tuning of a handful of parameters, can generate data with multiple distinct, commonly studied statistical structures. We focus on models where the data is large-dimensional, with $N$ variables (degrees of freedom, such as neuronal activities or mutations) measured over $T$ observations. Generally $T\sim N$, but modern experiments often push us to $N\gg T$. We further focus on data that are intrinsically lower-dimensional, in which the number of \emph{latent features} (a.k.a.\ collective degrees of freedom, which can be imposed or emergent) present in each sample is much smaller than $N$. Such lower-dimensional latent structures are common in modern experimental data spanning different domains of biological physics~\cite{stephens2008dimensionality,halabi2009protein,Pandarinath_etal_2018,Morrell:2021hk,nieh2021geometry}. Apart from dimensionality, the structure of data is determined by the mixing of latent features in each sample. Here we only explore {\em linear} mixing of features. Nonetheless, even with linear mixing, many different types of data can be observed, from clusters~\cite{halabi2009protein} and overlapping clusters~\cite{eisen1998cluster}, to lower-dimensional manifolds in the data space, and to sparse models~\cite{olshausen1996emergence}. The richness of low-dimensional latent feature models is further increased since some types of data have constrained values: e.~g., activities are often non-negative~\cite{lee1999learning}. Ideally, one generative model should be able to reproduce as many of these diverse data types as possible. Additionally, a useful generative model should also be amenable to theoretical analysis, allowing one to investigate analytically the ability of various data analysis methods to detect features in the data. A realistic way to achieve this is to rely on Random Matrix Theory (RMT), which affords well-developed tools for calculating analytical properties of data that can be represented in terms of $N\times T$ random matrices taken from different probability distributions~\cite{Potters_Bouchaud_2020}. A number of such probabilistic matrix models have been described in a variety of contexts~\cite{goldt2020modeling,mignacco2020role,Fleig_Nemenman_2022,Potters_Bouchaud_2020,Grosse_etal2012}. However, to the best of our knowledge, no model exists that captures the diversity of patterns in the data described above. Here, we present a simple probabilistic matrix model able to generate diverse data patterns by mixing linear latent features. The diversity is achieved by introducing statistically dependent mixing of the features, which gives rise to distinct data patterns. We provide a qualitative discussion of the correlation distribution and eigenvalue density in the different regimes of the model. Furthermore, we investigate the effect of feature structure on these distributions. Finally, we discuss how the model can be employed for generating structured training data for neural networks.
\section{The model} We consider data matrices $\mathbf X$ with $N$ variables (columns) recorded $T$ times (rows). We focus on models where $\mathbf X$ is a product of two random matrices, \begin{align} \mathbf X &= \mathbf V\mathbf U, \quad\mbox{or, for the matrix elements,}\\ x_{tn}&=\sum_{\mu=1}^\ld v_{t\mu} u_{\mu n}\,. \label{eq:model} \end{align} We call $\mathbf V$ the {\em latent feature matrix} (dimensions $T\times \ld$). Each column of the matrix contains the values of one of the $\ld$ latent features in each of the $T$ samples. Further, $\mathbf U$ is the {\em coefficient matrix} (dimensions $\ld\times N$). It describes how each of the $N$ observable variables is obtained by mixing of the $\ld$ latent features. We will assume that $N,T\gg 1$. In~\cite{Fleig_Nemenman_2022} we considered such a model for the case when the elements of $\mathbf U$ and $\mathbf V$ are i.i.d.\ Gaussian random variables, so that correlations between observed variables are introduced by mixing of the $\ld$ latent features. Here we drop the condition of statistical independence of the matrix components. Specifically, we introduce statistical dependence through the Dirichlet distribution \begin{align} Dir(\{u_\mu\}; \beta)=\frac{1}{B(\beta)}\prod_{\mu=1}^\ld u_\mu^{\beta-1}\,,\quad \sum_\mu u_\mu=1\,, \end{align} where $\beta\in\mathbb R_{>0}$ is a hyperparameter, $B(\beta)$ is \begin{align} B(\beta)= \frac{ \Gamma^{\ld}(\beta)}{\Gamma(\ld\beta)}\,, \end{align} and $\Gamma(\beta)$ is the usual Gamma-function of its argument. The Dirichlet distribution generates random vectors of weights $\{u_\mu\}$, $\mu=1,\ldots,\ld$, that all sum to 1. In other words, in the model, Eq.~(\ref{eq:model}), each sample is a linear combination of $\ld$ features, and the weights are sampled randomly themselves. The mean, the variance, and the covariance of components in the vector sampled from the Dirichlet distribution are~\cite{DirichletWiki} \begin{align} \mu_u\equiv\bar{u}_\mu&=\frac{1}{\ld}\,,\\ \sigma_u^2\equiv\text{var}\,{u_\mu}&=\frac{1-\ld^{-1}}{\ld(1+\ld \beta)}\,,\label{eq:Dirichlet_mean&variance}\\ \text{cov}(u_\mu,u_\nu)&=\frac{\delta_{\mu\nu}-\ld^{-1}}{\ld(1+\ld\beta)}\label{eq:Dirichlet_covariance}\,. \end{align} To develop some intuition for what follows, in Fig.~\ref{fig:fig1}(a), we show typical draws from the Dirichlet distribution with $\ld=5$ for three values of $\beta$. There are two extreme limits. For $\beta\ll1/\ld$ (left), the entire draw is concentrated on a single weight ($\mu=3$ in this particular case), where it is nearly 1. In the other extreme, for $\beta\gg1$ (right), the sample is nearly uniform with each weight almost $1/\ld$. For intermediate values of $\beta$ (middle), the Dirichlet distribution produces weights between these two extremes. Roughly, when $\beta\in[1/\ld,1]$, then $\ld \beta$ is the typical number of weights that are large ($\ld\beta=2$ in this example), while other weights are nearly zero. In this work, we exploit the Dirichlet distribution in two ways. First, we use coefficients $\mathbf U$ drawn from the Dirichlet distribution for statistically dependent mixing of model features. For this, we define a \emph{Dirichlet-Gaussian} (DG) model, where the elements of the latent feature matrix $\mathbf V$ are i.i.d.\ random Gaussian numbers. Second, we use the Dirichlet distribution to define non-negative latent features themselves. For this, we define the \emph{Dirichlet-Dirichlet} (DD) model, where both $\mathbf U$ and $\mathbf V$ are samples from the Dirichlet distributions.
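For concreteness, both models can be sampled in a few lines of NumPy. The following sketch is ours rather than part of a released implementation; the function names are chosen for illustration, and the parameter values in the last two lines match the regimes shown in Figs.~\ref{fig:fig1} and~\ref{fig:fig2}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_dg(T, N, m, beta_U, mu_V=0.0, sigma_V=1.0):
    """Dirichlet-Gaussian model: X = V U with Dirichlet columns of U."""
    V = rng.normal(mu_V, sigma_V, size=(T, m))       # i.i.d. Gaussian features
    U = rng.dirichlet(np.full(m, beta_U), size=N).T  # each column sums to 1
    return V @ U                                     # T x N data matrix

def sample_dd(T, N, m, beta_U, beta_V):
    """Dirichlet-Dirichlet model: columns of both U and V are Dirichlet draws."""
    V = rng.dirichlet(np.full(T, beta_V), size=m).T  # T x m, columns sum to 1
    U = rng.dirichlet(np.full(m, beta_U), size=N).T  # m x N, columns sum to 1
    return V @ U

X_dg = sample_dg(T=100, N=30, m=5, beta_U=0.4)   # overlapping-clusters regime
X_dd = sample_dd(T=1000, N=60, m=20, beta_U=0.15, beta_V=0.03)  # sparse regime
\end{verbatim}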
We note parenthetically that both of these models are fully stochastic, so that both the features $\mathbf V$ and their mixing weights $\mathbf U$ are random variables. This is in contrast, for example, with spiked covariance models, which contain deterministic components~\cite{Baik_etal_2005,Potters_Bouchaud_2020}. In data analysis, since different measured variables may have widely different units, one often uses empirically standardised data $\widetilde{\mathbf X}$, where each variable (columns of the data matrix $\mathbf X$) is independently normalized to zero mean and scaled to unit variance. One then makes inferences about the data based on the empirical correlation matrix (ECM) \begin{align}\label{eq:corr_matrix} \mathbf C=\frac1T\widetilde{\mathbf X}^T\widetilde{\mathbf X} \end{align} and its eigenvalue spectrum. The empirically standardised data approximate the theoretically standardised random variables \begin{align}\label{eq:x_standardised} \widetilde x \equiv \frac{x-\mu_x}{\sigma_x}\,, \end{align} where the theoretical mean $\mu_x$ and the standard deviation $\sigma_x$ of each element $x$ of $\mathbf X$ are obtained from Eq.~(\ref{eq:model}): \begin{align} \mu_x&=\mathbb E[x]=\sum_\mu\mathbb E[u_\mu v_\mu]=\sum_\mu\mathbb E[u_\mu]\mathbb E[v_\mu]\,,\label{eq:mean_x}\\ \sigma^2_x&=\mathbb E[(x-\mu_x)^2]=\sum_\mu\text{var}(u_\mu v_\mu)+\sum_{\mu\neq\nu}\text{cov}\left(u_\mu v_\mu,u_\nu v_\nu\right)\,.\label{eq:var_x} \end{align} Above we used the fact that $u_\mu$ and $v_\mu$ are statistically independent. For both the DG and the DD models, we describe in this article the structure of $\mathbf X$ that they generate in different regimes of the parameters $\ld$ and $\beta$. We evaluate the theoretically standardised random variables $\widetilde{x}$ and discuss qualitatively their correlations and the eigenvalue densities of the correlation matrix. \section{Dirichlet-Gaussian (DG) model} \begin{figure*} \includegraphics[width=16.4cm]{fig1_01Dec2022.pdf} \caption{{\bf Qualitatively different correlation patterns emerge in the Dirichlet-Gaussian model in different regimes of the parameter $\beta_U$:} We illustrate this for $T=100$, $N=30$, $\ld=5$, and $\sigma_V^2=1$. (a) Typical samples of the Dirichlet distribution for different $\beta_U$. (b) Pairwise correlation matrices and dendrograms obtained from hierarchical clustering of the ECM. (c) Probability densities of correlation coefficients for a range of $\beta_U$ values. For reference, the gray curve shows the Gaussian distribution with the variance $1/T$, as expected for correlations of pure Gaussian noise data. (d) Eigenvalue densities for the same $\beta_U$ values. Inset: zoom in for $\beta_U=4\times 10^{-1}$, which is in the overlapping clusters regime. The vertical bars show the positions of eigenvalues for the single realization shown in (b, middle); the height of bars is not meaningful. Note a single large eigenvalue separated from the rest by a gap. In this Figure, averages are computed from $12\times 10^3$ independent realizations of the model for the correlation densities and $2\times 10^4$ realizations for the eigenvalue densities.}\label{fig:fig1} \end{figure*} Above we defined the Dirichlet-Gaussian model as \begin{align}\label{eq:DG_model} U_n\sim Dir(\{u_\mu\};\beta_U)\,,\;\text{ and }\;V_{t\mu}\sim \mathcal N(\mu_V,\sigma_V^2)\,, \end{align} where $U_n$ is a vector of length $\ld$ giving the $n$'th column of the $\ld\times N$ coefficient matrix $\mathbf U$.
Each vector $U_n$ is drawn from the Dirichlet distribution. The entries of the feature matrix $\mathbf V$ are i.i.d.\ Gaussian random variables. For convenience, we will set $\mu_V=0$ and $\sigma_V^2=1$ in the following, unless explicitly stated otherwise. \subsection{Data patterns} We give a description of the data patterns obtained in different regimes of the parameter $\beta_U$ and consider both the case where $\ld\sim O(1)$ and the case $\ld\sim O(N)$. In Figs.~\ref{fig:fig1} and~\ref{fig:fig2}(b) we visualise data patterns by presenting the pairwise correlations of the variables. We use the centroid clustering algorithm in Python's SciPy library \cite{Scipy_centroid} to group variables hierarchically based on their correlations, and we plot the correlation matrices with variables ordered accordingly. The dendrograms above the correlation matrices detail the structure of the hierarchical clusters. The following are the typical patterns observed in the data. \emph{Clusters}, Fig.~\ref{fig:fig1} (left column), emerge for $\ld\sim O(1)$ and $\beta_U\ll 1$. Here each column of the coefficient matrix $\mathbf U$ is concentrated on a single weight. Thus, out of $\ld$ features, a single feature is assigned to each variable, and each feature defines a separate cluster. The clusters are clearly visible in the correlation matrix. The dendrogram shows that the clusters are clearly separated, and each includes multiple variables. \emph{Overlapping clusters}, Fig.~\ref{fig:fig1} (middle), are visible for $\ld\sim O(1)$ and $\ld\beta_U\sim O (1)$. Here multiple features contribute to each variable. The correlation structure between variables becomes fuzzy, and variables may belong to more than one cluster. The dendrogram reveals groups of variables of different sizes and varying cluster distances. \emph{Uniform mixing}, Fig.~\ref{fig:fig1} (right), describes the regime $\ld\sim O(1)$ and $\beta_U\gg 1$. All of the $\ld$ latent features contribute approximately uniformly to each variable, and all the variables are, therefore, highly correlated. The dendrogram confirms that there is effectively a single cluster of variables (note that the scale of correlation distances is $10^{-3}$ in this subplot). \emph{Sparse mixing}, Fig.~\ref{fig:fig2} (left), is observed for $\ld\gg 1$ and $\ld\beta_U\ll \ld$ ($\ld\beta_U$ can be $O(1)$ or $\gg1$). In this regime, the number of features available is much larger than the number of features that contribute to any given variable. For example, in Fig.~\ref{fig:fig2} (left), we chose $\ld=N/3$ and, on average, there are $\ld\beta_U=3$ features per variable. From the dendrogram, Fig.~\ref{fig:fig2}(b), we see that there are no clear clusters of more than two variables. \subsection{Observable distributions} Next, we qualitatively discuss the distribution of correlation coefficients and the spectral density of the ECM, Eq.~(\ref{eq:corr_matrix}). We leave a detailed analytical treatment of these distributions for future investigation. For simulations, the pairwise correlations between variables are computed from empirically standardized data. These approximate the theoretically standardized random variables in Eq.~(\ref{eq:x_standardised}), whose means and variances we now compute. The mean of a product of a zero-mean Gaussian and a Dirichlet random variable vanishes. Hence, by Eq.~(\ref{eq:mean_x}), the mean of the elements $x$ of the data matrix also vanishes, \begin{align} \mu_x=\sum_\mu \mathbb E[v_\mu]\mathbb E[u_\mu]=0\,.
\end{align} In Appendix~\ref{app:variance_GaussDirichlet} we evaluate Eq.~(\ref{eq:var_x}) to find the variance: \begin{align} \sigma^2_x=\ld(\sigma_u^2+\ld^{-2})\sigma_V^2=\left(\frac{1-\ld^{-1}}{1+\ld\beta_U}+\frac{1}{\ld}\right)\sigma_V^2\,. \end{align} We verify this expression numerically in Fig.~\ref{fig:S1}(a), and note the following limits: when $\ld\gg 1$, while $\ld\beta_U$ is finite, the variance approaches $\sigma^2_x\approx \sigma_V^2/(1+\ld\beta_U)$; when $\beta_U$ approaches zero, $\sigma^2_x\approx \sigma_V^2$; and when $\beta_U$ approaches infinity, $\sigma^2_x\approx \sigma_V^2/\ld$. \subsubsection{Correlations} The family of correlation distributions in Fig.~\ref{fig:fig1}(c) reflects the transition between the two extreme limits (from pure clusters to uniform mixing). In the clusters limit (black curve), the density of correlations has a nearly delta-function peak at 1, corresponding to the highly correlated variables within clusters, and an approximately Gaussian distribution of many small correlation values between the clusters, which is the result of finite sampling. In the opposite limit of uniform mixing (yellow curve), all small correlations have disappeared, and the density approaches a delta function concentrated at 1 because all variables approximately behave as a single cluster. For the regime of overlapping clusters between the two extremes, we observe intermediate correlation values between zero and one, signifying the existence of variables that can be attributed to more than one cluster. In the sparse mixing regime, the correlation density (orange curve in Fig.~\ref{fig:fig2}(c)) has a large peak around small, positive correlations and a few large correlation values. This is due to the sparse selection from a large number of features, making strong correlations among variables unlikely. For reference, we also show the correlation density of a Gaussian-Gaussian (GG) latent feature model (gray curve). In the GG model, the feature and the coefficient matrix have i.i.d.\ zero-mean, unit-variance Gaussian entries. The density of the GG model was analyzed in detail in~\cite{Fleig_Nemenman_2022}. \subsubsection{Eigenvalues} The shape of the eigenvalue density depends on the model parameters $T$, $N$, $\ld$ and $\beta_U$. From RMT, we know that the eigenvalue densities of random matrices often converge to their limiting form when $T$, $N$, etc.\ tend to infinity, while their ratios, such as the sampling ratio $N/T$ or the ratio of the number of latent features to the number of observables, $\ld/N$, remain fixed. This is known as the thermodynamic or large-$N$ limit, in which the fixed ratios control the shape of the density~\cite{Potters_Bouchaud_2020,Fleig_Nemenman_2022}. We expect these ratios to control the shape of the eigenvalue distributions of the DG model as well. However, here we focus on the qualitative description of eigenvalue densities, and we leave a detailed analytic description for the future. When $\ld\sim O(1)$, the rank of the ECM satisfies $\mathrm{rank}(\mathbf C)<N,T$, implying that there are trivial zero eigenvalues. Beyond these, analogous to the correlations, the eigenvalue densities in Fig.~\ref{fig:fig1}(d) reflect the transition between the two limits of the model. First, in the clusters limit (black curve), the eigenvalue density has a broad, `spiky' bump. The spikes result from the interaction of (on average) $\ld$ dominant eigenvalues of a similar magnitude.
In Fig.~\ref{fig:S2}, we show how the resolution of the delta spikes increases with better sampling (smaller $N/T$ ratio). In the opposite limit of uniform mixing (yellow curve), the spectrum becomes degenerate with two delta peaks. One peak is for a single eigenvalue at $\lambda=N$, corresponding to all variables being in one cluster, and the other peak has $N-1$ eigenvalues at zero. For the regime of overlapping clusters, in between the two limits, the density has two bumps (inset in Fig.~\ref{fig:fig1} (d)). The bumps move in opposite directions as $\beta_U$ increases, and they develop into delta peaks in the extreme limit. To understand the origin of the two bumps, we show the positions of the eigenvalues of a particular realization of the overlapping clusters model from Fig.~\ref{fig:fig1} (b) by vertical bars in Fig.~\ref{fig:fig1} (d, inset). The right density bump is due to a single, top-ranked eigenvalue, and the left bump is due to the remaining $\ld-1$ non-trivial eigenvalues. The top-ranked eigenvalue is due to a baseline correlation across all variables that comes from the non-zero, positive mean of the correlation density. Its origin can be further understood from analysing the eigenmode associated with the top-ranked eigenvalue. For this, we decompose the correlation matrix into two contributions, as shown in Fig.~\ref{fig:S3} (a). The first contribution is the projection of the correlation matrix onto the top-ranked eigenmode, and the second contribution is the projection on all other eigenmodes. The decomposition is shown in Fig.~\ref{fig:S3} (b), and the distributions of the matrix entries of each contribution are shown on the right. The mean values of the correlation density and the top-ranked contribution are the same, and the mean of the remaining contribution vanishes. Thus the top-ranked mode can be considered as a baseline contribution, similar to the `market mode' found in the analysis of financial data, which captures the collective up-and-down movement of the stock market~\cite{Laloux_etal_1999}. As $\beta_U$ grows, more and more of the total variance goes into this mode, until all variables are strongly correlated. The remaining $\ld-1$ modes capture correlations between specific subgroups of variables. As $\beta_U$ is increased, less and less of the total variance belongs to these modes. Finally, in Fig.~\ref{fig:fig2}(d) we show the eigenvalue density in the {\em sparse} mixing regime (orange curve), and for reference we also show the eigenvalue density of the GG model (gray curve) \cite{Fleig_Nemenman_2022}. The density of the GG model has a single peak with upper and lower eigenvalue bounds given by $\lambda_\pm=(1\pm \sqrt{N/\ld})^2\approx7.46,\,0.54$, respectively, as was derived in \cite{Fleig_Nemenman_2022}. In contrast, the eigenvalue density of the sparse mixing model has two peaks. The qualitative understanding of the origin of the two peaks is analogous to the one just described for the overlapping clusters regime. The eigenvalues of a specific realization of the sparse DG model from Fig.~\ref{fig:fig2}(b) are shown by the orange vertical bars: the right and the left bumps are due to a single top-ranked eigenvalue and to the $\ld-1$ remaining non-trivial eigenvalues, respectively. The decomposition of the sparse DG correlation matrix into the two contributions is shown in Fig.~\ref{fig:S3} (c), together with the contributions' densities. For the computation of the sparse model we have selected parameters such that the density is well sampled.
This is the case when both $T$ and $N$ are large. In particular, $N$ has to be large enough to ensure that the sampling of the different ways of sparsely selecting features is sufficient. In other sampling limits, the form of the eigenvalue density is more complicated, and a detailed analysis will be needed to understand its structure. \section{Dirichlet-Dirichlet (DD) model} \begin{figure*} \includegraphics[width=16.4cm]{fig2_03Dec2022.pdf} \caption{{\bf Sparse mixing regime of the Dirichlet-Gaussian and non-negative Dirichlet-Dirichlet model.} The correlation and eigenvalue densities are qualitatively similar. We use $T=1000$, $N=60$, $\ld=20$, $\sigma_V^2=1$, and $\beta_V=3\times10^{-2}$. (a) Typical samples of the Dirichlet distribution for the specified $\beta$-values. (b) Pairwise correlation matrices and dendrograms obtained from hierarchical clustering of the ECM: sparse DG (left) and sparse DD (right). (c) Density of pairwise correlations of the DG model (orange) and the DD model (purple). The densities show a small difference (see inset). The corresponding density of the GG model (gray) is shown for reference. (d) Eigenvalue densities of the same models. Inset: comparison of the DG and the DD densities. Vertical bars show positions of eigenvalues for the single realization of the DG model in (b, left); the height of bars is not meaningful. Note a single large eigenvalue separated from the rest by a gap. Averages are computed from $12\times 10^3$ independent realizations of the model for the correlation densities and $2\times 10^4$ realizations for the eigenvalue densities.}\label{fig:fig2} \end{figure*} We define the Dirichlet-Dirichlet model by \begin{align}\label{eq:DD_model} U_n\sim Dir(\{u_\mu\};\beta_U)\,,\;\text{ and }\;V_\mu\sim Dir(\{v_t\};\beta_V)\,, \end{align} where $V_\mu$ is a vector of length $T$ giving the $\mu$'th column of the feature matrix $\mathbf V$. It is important to note that the matrix elements $v_{t\mu}$, $v_{t\nu}$ with $\mu\neq\nu$ are statistically independent from each other, while elements $v_{t\mu}$, $v_{s\mu}$ with $t\neq s$ are statistically dependent. The columns of $\mathbf U$ and $\mathbf V$ are drawn from Dirichlet distributions with parameters $\beta_U$ and $\beta_V$, respectively. By definition, the model yields non-negative data, admits sparse mixing of features, and also allows the features themselves to have a sparse structure. These properties are characteristically found in non-negative matrix factorization of images~\cite{lee1999learning}, and hence the DD model may be used as a generative model for this type of data. \subsection{Data patterns} We focus on sparse mixing of features, where $\ld$ is large (potentially $\sim O(N)$) and $\beta_U$ is chosen such that $\ld\beta_U\ll\ld$. In our specific example, we set $\ld=20$ and $\beta_U=15\times 10^{-2}$, such that $\ld\beta_U=3$. We investigate the effect of the non-negativity of the data and of a statistically dependent structure of the features on the observable distributions. The structure of the features depends on the choice of $\beta_V$ in relation to the magnitude of $T$. We obtain \emph{sparse features} for $T\beta_V\ll T$, \emph{dense features} for $\beta_V<1$ but not too small, and \emph{approximately uniform features} for $\beta_V>1$. In Fig.~\ref{fig:fig2}, we show a typical correlation pattern of the DD model with sparse latent features ($T=1000$, $\beta_V=3\times 10^{-2}$, and thus $T\beta_V=30$).
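The clustered correlation matrices, dendrograms, and spectra shown in Figs.~\ref{fig:fig1} and~\ref{fig:fig2} can be reproduced along the following lines, using the SciPy centroid clustering mentioned above. This is a minimal sketch of ours; in particular, the conversion of correlations into the distance $1-C_{ij}$ is our choice, as the exact distance underlying the dendrograms is not specified in the text:
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

def ecm_and_clustering(X):
    """Standardise X, form the ECM, its spectrum, and a centroid dendrogram."""
    T = X.shape[0]
    Xt = (X - X.mean(axis=0)) / X.std(axis=0)   # empirical standardisation
    C = Xt.T @ Xt / T                           # empirical correlation matrix
    evals = np.linalg.eigvalsh(C)               # eigenvalue spectrum of the ECM
    dist = squareform(1.0 - C, checks=False)    # condensed correlation distance
    Z = linkage(dist, method="centroid")        # SciPy centroid clustering
    order = dendrogram(Z, no_plot=True)["leaves"]
    return C[np.ix_(order, order)], evals, Z    # reordered ECM, spectrum, linkage
\end{verbatim}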
\subsection{Observable distributions} We compute the theoretically standardised random variable in Eq.~(\ref{eq:x_standardised}). The mean of $x$ is given by \begin{align} \mu_x=\sum_\mu \mathbb E[v_\mu]\mathbb E[u_\mu]=T^{-1}\,. \end{align} In Appendix~\ref{app:variance_DirichletDirichlet}, we evaluate Eq.~(\ref{eq:var_x}) for the variance and find \begin{align} &\sigma^2_x=\ld\,\text{var}\left(u_\mu v_\mu\right)+(\ld^2-\ld)\,\text{cov}\left(u_\mu v_\mu,u_\nu v_\nu\right)\label{eq:sigma2x_DD} \end{align} with \begin{align} \text{var}\left(u_\mu v_\mu\right)&=(\sigma_u^2+\ld^{-2})(\sigma_v^2+T^{-2})-(\ld T)^{-2}\,,\nonumber\\ \text{cov}\left(u_\mu v_\mu,u_\nu v_\nu\right)&=(\text{cov}(u_\mu,u_\nu)+\ld^{-2})T^{-2}-(\ld T)^{-2}\,, \end{align} where $\sigma_u^2$, $\sigma_v^2$ and the covariance on the right-hand sides are given in Eqs.~(\ref{eq:Dirichlet_mean&variance}) and~(\ref{eq:Dirichlet_covariance}), respectively. We verify Eq.~(\ref{eq:sigma2x_DD}) numerically in Fig.~\ref{fig:S1} (b). \subsubsection{Effect of the feature structure} We now explore the effect that non-negativity and a statistically dependent feature structure have on the distributions of observables. In Fig.~\ref{fig:S4} (a), we show the distributions of the theoretically standardised data values and find large qualitative differences between distributions with different feature structures (sparse, dense, and approximately uniform). Notably, the distributions are also qualitatively different from the DG model distribution. In Figs.~\ref{fig:fig2} (c) and~\ref{fig:S4} (b), we show the distributions of correlations. For a sparse feature structure of the DD model, we find a small, visible difference with the DG model. However, as the DD feature structure becomes dense and approaches uniformity, this difference vanishes. Similarly, in Figs.~\ref{fig:fig2} (d) and~\ref{fig:S4} (c), we show the eigenvalue densities. Even for a sparse feature structure in the DD model, the difference with the DG model is very small and vanishes as the feature structure becomes dense. The fact that the effect of the feature structure on the correlation and eigenvalue distributions is small, and even vanishes for dense features, is likely due to averaging effects over the feature structure when computing correlations. We expect that this can be understood with a generalization of the central limit theorem to statistically dependent random variables, which we leave for future work. \section{Discussion} We have presented generative probabilistic matrix models for structured data, based on linear mixing of latent features. A key ingredient of our model is the statistical dependence of the mixing coefficients. In different parameter regimes, the models give rise to distinct, commonly observed data patterns. Based on these models, there are multiple lines of further investigation. It would be desirable to obtain a better analytic understanding of the models' observable distributions shown in Figs.~\ref{fig:fig1} (c,d) and~\ref{fig:fig2} (c,d), paralleling similar work on the GG model~\cite{Fleig_Nemenman_2022}. We believe that, at least in certain parameter regimes, this is possible. Intriguingly, in our numerical investigation, we have found that the effect that the feature structure has on the correlation and eigenvalue distributions in the DD model is not very strong, cf.~Figs.~\ref{fig:fig1},~\ref{fig:fig2} and~\ref{fig:S4} (c,d).
This means that one may be unable to identify the model that generated the data from such data observables, and hence we may be limited in our ability to correctly select a learning machine to learn from such data. Understanding the causes and the effects of this limitation should be one of the first applications of analytical studies. A possible application of our model is in generating training data for supervised learning. For example, the \emph{hidden manifold model} (HMM) was introduced in~\cite{goldt2020modeling} as a method for generating structured low-dimensional, non-linear training data, which is complex enough to be interesting, but simple enough for analytical investigations. The HMM training data $\mathbf X^*$ was computed by `folding' the data of a linear latent feature mixing model $\mathbf V\mathbf U$ by a non-linear function $f$, which acts component-wise, \begin{align} \mathbf X^*=f(\mathbf V\mathbf U/\sigma_x)\,, \end{align} where $\sigma_x$ is included for normalisation. The product $\mathbf V\mathbf U$ is the GG model, which we analyzed at length in \cite{Fleig_Nemenman_2022}. Reference~\cite{goldt2020modeling} also considered a Hadamard latent feature matrix. In all these cases, the data lie on a continuous $\ld$-dimensional manifold (possibly with some noise), embedded in the high-dimensional data $\mathbf X^*$, and the training labels for such data are assigned based only on the position of the sample within this $\ld$-dimensional manifold. It is clear that real-world data may have other types of low-dimensional structure, such as (overlapping) clusters or constrained (non-negative) manifolds, or they may not be low-dimensional but sparse. Our DG and DD models, in their different regimes, cover many such possibilities. This allows, at least in principle, analytical studies of the performance of different learning machines on such qualitatively different data. \begin{acknowledgments} IN was supported in part by the Simons Foundation Investigator award, NSF grant PHY/201052, and NIH grants 1R01NS099375 and 2R01NS084844. \end{acknowledgments}
\section{Introduction} The demand for deploying DNN models on edge devices (e.g., mobile phones, robots, and self-driving cars) is expanding rapidly. However, the increasing memory and computing power requirements of DNNs make their deployment on edge devices a grand challenge. Thus, various custom-made DNN models have been introduced by experts to fit reasonably accurate models onto mobile devices~\cite{howard2019searching,tan2019efficientnet,zhang2018shufflenet,ma2018shufflenetv2,mehta2020dicenet,huang2018condensenet}. In addition to mobile-friendly deep networks, model optimization methods such as network pruning~\cite{han2015deep,he2018amc}, factorization~\cite{Sainath2013factorization}, knowledge distillation~\cite{hinton2015distilling}, and parameter quantization~\cite{han2015deep} help to shrink the DNN model size down to the target hardware capabilities. Among such methods, network pruning has been shown to be considerably useful for model compression by introducing sparsity or eliminating channels or filters, yet it requires extensive knowledge and effort to find the perfect balance between accuracy and model size. The main challenge of network pruning is to find the best pruning schedule or strategy for the layers of a network. Furthermore, a pruning strategy for a given DNN cannot be used for other networks due to their different structures. Thus, each network demands a customized pruning strategy. Recently, He et al.~\cite{he2018amc} leveraged reinforcement learning (RL) to automatically find the best pruning strategy. However, they used manually defined rules, such as the number of input/output channels, parameter size, and FLOPs, for the RL environment state vectors, and ignored the rich structural information within the DNN. Yu et al.~\cite{yu2020agmc} were the first to model a given DNN as a hierarchical graph and proposed a GNN-based encoder-decoder to embed DNN layers. However, their method learns the topology indirectly and does not consider topology changes during model compression. Moreover, existing RL-based model compression methods require manually defined pruning ratios to reach the desired model size reduction. Although the model accuracy is used within the RL agent's reward function, there is a negative correlation between the compression ratio and the reward. Thus, without any constraint, the RL agent tends to search for a tiny compression ratio to get a better reward. Deep neural networks are already being represented as computational graphs in deep-learning frameworks, such as TensorFlow\cite{Abadi2016TensorFlow} and PyTorch\cite{Paszke2019pytorch}. Such a representation contains various patterns (a.k.a.\ motifs) repeated throughout the network topology. For instance, MobileNetV2~\cite{Sandler2018mobileNetv2} involves 17 blocks, each following a similar graph and operation structure. The topology of the blocks can represent their states, allowing us to exploit their redundancy and importance and search for a suitable compression policy. Such structural characteristics within DNNs inspired us to model them as hierarchical computational graphs and learn the compression policy from them. In a nutshell, we model a given DNN as a hierarchical computational graph and propose multi-stage graph neural networks (m-GNN) to embed DNNs. Additionally, we equip m-GNN with a reinforcement learning agent (GNN-RL) to automatically search for the compression policy (e.g., pruning ratios).
To avoid tiny compression ratios due to the negative correlation between the compression ratio and the RL agent's reward, we created a DNN-Graph environment for the GNN-RL agent. Such an environment allows the agent to continuously compress the DNN until the compressed DNN satisfies the model size constraint. For each step of the compression, the DNN-Graph environment converts the compressed DNN to a graph. The graph is the environment state input to the GNN-RL agent. Once the compressed DNN satisfies the desired model size, the DNN-Graph environment ends the search episode and uses the pruned DNN's accuracy as a reward for the agent. In essence, this paper makes the following contributions: \begin{itemize} \item A novel method for modeling DNNs as hierarchical graphs to exploit their topological information for network pruning. \item An efficient multi-stage GNN and a learning-based pooling method to learn hierarchical graph embeddings. \item A topology-aware solution based on GNN and RL for automatic network pruning. \item State-of-the-art model compression results on various DNN models. \end{itemize} \section{Related Work} Within the context of this paper, researchers have already proposed various methods to compress DNN models, such as architecture design, network pruning, and quantization. Graph neural networks are also gaining momentum among these research fields. In the following, we will review these methods. \textbf{Model Compression.} Extensive works focus on model compression and efficient deployment of DNNs, such as network pruning~\cite{han2015deep,he2018amc}, knowledge distillation~\cite{hinton2015distilling}, and network quantization~\cite{han2015deep,courbariaux2016binarized,rastegari2016xnor}. Within the scope of this paper, we mainly consider network pruning. Structured~\cite{Anwar2017Structured} and unstructured pruning~\cite{zhang2018unstructured,Guo2016unstructured} evaluate the importance of model parameters and remove those with a lower rank. Unstructured pruning promises a higher compression ratio through tensor sparsification. However, the potential speedup is only attainable on specialized AI-accelerators. On the other hand, structured pruning attempts to eliminate filters or channels and benefits all hardware platforms. For instance, the uniform, shallow, and deep empirical pruning policies~\cite{he2017handcraft_channel,Li2016handcraft} and the hand-crafted structured pruning methods, such as SPP~\cite{wang2017SPP}, FP~\cite{Li2016handcraft}, and RNP~\cite{Lin2017RNP}, fall into the structured pruning category. SPP analyzes each layer and measures a reconstruction error to determine the pruning ratio. FP evaluates the performance of single-layer pruning, ranks the importance of the layers, and prunes low-ranked layers aggressively. RNP groups all convolutional channels into sets and trains an RL agent to decide on the sets. However, handcrafted pruning policies often fail to be extended to new models and might lead to sub-optimal performance. Recently, researchers have tended to leverage reinforcement learning to search for pruning policies automatically. Liu et al.~\cite{liu2020AutoCompress} proposed an ADMM-based~\cite{Boyd2011ADMM} structured weight pruning method and an innovative additional purification step for further weight reduction. He et al.~\cite{he2018amc} proposed AMC for network pruning and leveraged reinforcement learning to predict each hidden layer's compression policy.
However, they manually defined the DNN's embeddings and ignored the network's essential structural information. Yu et al.~\cite{yu2020agmc} were the first to model DNNs as graphs and introduced a GNN-based graph encoder-decoder to embed DNNs' hidden layers. Nevertheless, their RL agent learns the topology information indirectly and is insensitive to the structural changes of DNNs while they are being pruned. \textbf{Graph Neural Networks (GNN).} GNN and its variants~\cite{kipf2017gcn,Schlichtkrull2018rgcn} can learn graph embeddings and have been successfully used for link prediction~\cite{Nowell2007linkprediction} and node classification. However, these methods are mainly focused on node embedding and are inherently flat, which is inefficient for dealing with hierarchical data. In this paper, we aim to learn the global topology information of DNNs. Thus, we propose the multi-stage GNN (m-GNN), which takes advantage of the repetitive motifs available in DNNs. m-GNN considers the edge features and has a novel learning-based pooling strategy to learn the global graph embedding. \textbf{Graph-based Neural Architecture Search (NAS).} Although this paper is not directly related to NAS, it is an active area of research wherein computationally expensive operations are replaced with more efficient alternatives. Particularly, graph-based NAS methods apply GNN and use graph-based neural architecture encoding schemes to exploit the neural network's topology. They model neural architecture search spaces as graphs and aim to find the best-performing neural network structure~\cite{Guo2019NAS_NAT,Han2020NAS_oneshot,Dudziak2021BPR_NAS}. Such methods inspired us to derive compression policies from the topology information of DNNs. \section{Approach} To prune a given DNN, the user provides the model size constraint~(e.g., a FLOPs constraint). The DNN-Graph environment receives the constraint, takes the DNN's hierarchical computational graph as the environment state, and leverages the GNN-RL agent to search for a compression policy. Figure~\ref{fig:2} depicts a high-level overview of our method. The DNN-Graph environment episode is essentially a model compression iteration. As the red arrows show, the process starts from the original DNN. The model size evaluator first evaluates the size of the DNN. If the constraint is not satisfied, the graph generator converts the DNN to a hierarchical computational graph. Then the GNN-RL agent leverages m-GNN to learn pruning ratios (the compression policy) from the graph. The pruner prunes the DNN with the pruning ratios and begins the next iteration from the compressed DNN. Each compression step changes the DNN's topology. Thus, the DNN-Graph environment reconstructs a new hierarchical computational graph for the GNN-RL agent corresponding to the current compression state. Once the compressed DNN satisfies the size constraint, the evaluator ends the episode, and the accuracy evaluator assesses the pruned DNN's accuracy as an episode reward for the GNN-RL agent. As opposed to the existing RL-based methods~\cite{he2018amc,yu2020agmc,liu2020AutoCompress}, with the DNN-Graph environment, GNN-RL can automatically learn to reach the desired model size. Hence, it avoids manual adjustments and prevents tiny compression ratios. In the following, we will explain the details of the m-GNN and RL agent within our approach.
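Before detailing these components, the overall search loop of the DNN-Graph environment can be summarised in schematic Python. This sketch is ours, and the helper functions (\texttt{flops}, \texttt{build\_hierarchical\_graph}, \texttt{prune}, \texttt{top1\_error}) are placeholders for the components named above rather than functions from a released codebase:
\begin{verbatim}
def dnn_graph_episode(model, flops_constraint, agent, validation_set):
    """One DNN-Graph episode: compress until the size constraint is met,
    then reward the agent with the pruned model's validation accuracy."""
    while flops(model) > flops_constraint:       # model-size evaluator
        state = build_hierarchical_graph(model)  # graph generator
        ratios = agent.act(state)                # per-layer pruning ratios in [0, 1)
        model = prune(model, ratios)             # structured channel/filter pruning
    reward = -top1_error(model, validation_set)  # R_err = -Error
    return model, reward
\end{verbatim}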
\input{contents/f02_figure2} \subsection{Hierarchical graph representation} \label{sec:hierarchicalgraph} The representation of neural networks as computational graphs in deep-learning frameworks, such as TensorFlow and PyTorch, contains rich topology information. However, it may involve billions of operations~\cite{he2016ResNet}, which makes the computational graph bloated. Nevertheless, computational graphs often contain repetitive sub-graphs (a.k.a.\ motifs), such as 3$\times$3 convolutions or custom blocks in state-of-the-art networks. We can simplify the computational graphs by extracting the motifs and modeling them as hierarchical computational graphs. Additionally, we can make the graph coarser by replacing primitive operations such as \textit{add}, \textit{multiply}, and \textit{subtract} with high-level machine-learning operations (e.g., convolution, pooling, etc.). Formally, we model the DNN as an $l$-layer hierarchical computational graph, such that at the $l^{th}$ layer (the top layer) we have the hierarchical computational graph set $\mathcal{G}^{l} = \{G^l\}$, where each item is a computational graph $G^l = (V^l,\mathcal{E}^l,\mathcal{G}^{l-1})$. Here, $V^l$ is the set of graph nodes corresponding to hidden states, and $\mathcal{E}^l$ is the set of directed edges, each with a specific edge type associated with an operation. Lastly, $\mathcal{G}^{l-1} = \{G^{l-1}_0,G^{l-1}_1,...\}$ is the computational graph set at the $(l-1)^{th}$ layer and the operation set at layer $l$. Within the first layer, we manually choose commonly used machine learning operations as the primitive operations for $\mathcal{G}^{0}$. As an example, Figure \ref{fig:1} illustrates the idea behind generating hierarchical computational graphs using a sample graph $G$, where the edges are operations and the nodes are hidden states. In the input graph, we choose three primitive operations $\mathcal{G}^{0} = $ \{1$\times$1 conv, 3$\times$3 conv, 3$\times$3 max-pooling\} corresponding to the three edge types. Then, we extract the repetitive subgraphs (i.e., $G^1_1$, $G^1_2$ and $G^1_3$), each denoting a compound operation, and decompose the graph $G$ into two hierarchical levels, as shown in Figure \ref{fig:1}~(b) and (c). The level-1 computational graphs are motifs that correspond to the edges within the level-2 computational graph. \input{contents/f01_figure1} The hierarchical computational graph's size depends on the primitive operations we choose in $\mathcal{G}^{0}$. In our experiments, we choose the commonly used operations in machine learning as primitive operations (e.g., convolution, pooling, etc.).
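To make this construction concrete, a two-level hierarchy like the one in Figure~\ref{fig:1} could be encoded with plain Python containers. The following structure is purely illustrative, with names and indices of our choosing rather than from an implementation:
\begin{verbatim}
# Level 0: manually chosen primitive operations (the edge types of level-1 graphs).
G0 = ["conv1x1", "conv3x3", "maxpool3x3"]

# Level 1: motifs. Each edge (u, v, k) carries an index k into G0 as its type.
G1 = {"motif_a": {"nodes": [0, 1, 2], "edges": [(0, 1, 0), (1, 2, 1)]},
      "motif_b": {"nodes": [0, 1],    "edges": [(0, 1, 2)]}}

# Level 2 (top): the whole network. Edge types now name motifs in G1, so each
# edge stands for an entire sub-graph, and its feature is that motif's embedding.
G2 = {"nodes": [0, 1, 2, 3],
      "edges": [(0, 1, "motif_a"), (1, 2, "motif_b"), (2, 3, "motif_a")]}
\end{verbatim}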
\subsection{Network pruning using GNN and RL} \subsubsection{Multi-stage GNN} Standard GNN and its variants~\cite{kipf2017gcn} are inherently flat~\cite{Ying2018DiffPool}. Since we model a given DNN as an $l$-layer hierarchical computational graph (see Section~\ref{sec:hierarchicalgraph}), we propose a multi-stage GNN~(m-GNN), which embeds the hierarchical graph in $l$ stages according to its hierarchical levels and analyzes the motifs. As depicted in Figure~\ref{fig:1}, m-GNN initially learns the lower-level embeddings and uses them as the corresponding edge features in the higher-level computational graphs. Instead of learning node embeddings, m-GNN aims to learn the global graph representation. We further introduce a novel learning-based pooling strategy for every stage of embedding. With m-GNN, we only need to embed each motif of the computational graph once, which is much more efficient and uses less memory than embedding a flat computational graph with a standard GNN. \textbf{Multi-stage Embedding.} For the computational graphs $\mathcal{G}^{t} = \{G^{t}_0,G^{t}_1,...,G^{t}_{N_t}\}$ in the $t^{th}$ hierarchical layer, we embed the computational graph $G^t_i = (V^t_i,\mathcal{E}^t_i,\mathcal{G}^{t-1}), i=\{1,2,...,N_t\}$, as \begin{equation} e^t_i = EncoderGNN_t(G^t_i, E_{t-1})\,, \end{equation} where $e^t_i$ is the embedding vector of $G^{t}_i$, and $E_{t-1} = \{e^{t-1}_j\}, j=\{1,2,...,N_{t-1}\}$, is the set of embeddings of the computational graphs at level $t-1$, which serve as the edge features (types) at level $t$. For layer 1, $E_{0}$ contains the initial features (e.g., one-hot or random standard features) of the primitive operations $\mathcal{G}^{0}$ that we manually select. In the hierarchical computational graphs, each edge corresponds to a computational graph of the previous level and uses its graph embedding as the edge feature. Furthermore, the graphs at the same hierarchical level share the GNN's parameters. At the top layer (the $l^{th}$ layer) of the hierarchical graph $\mathcal{G}^{l} = \{G^l\}$, we only have one computational graph, and its embedding is the DNN's final embedding $g$: \begin{equation} g = EncoderGNN_l(G^l, E_{l-1}) \end{equation} \textbf{Message passing.} In the multi-stage hierarchical embedding, we consider the edge features. In contrast, the standard graph convolutional network (GCN)~\cite{kipf2017gcn} only passes node features, and its message-passing function can be formulated as \begin{equation} h^{l+1}_i = \sum_{j\in N_i}\frac{1}{c_i}W^l h^l_j\,, \end{equation} where $h$ denotes the nodes' hidden states, $c_i$ is a constant coefficient, $N_i$ is the set of node $i$'s neighbors, and $W^l$ is the GNN's learnable weight matrix. Instead of standard message passing, in the multi-stage GNN, we add the edge features: \begin{equation} h^{l+1}_i = \sum_{j\in N_i}\frac{1}{c_i}W^l (h^l_j\circ e^{l-1}_k)\,, \end{equation} where $e^{l-1}_k$ is the feature of edge $(i,j)$, i.e., the embedding of the $k^{th}$ graph at layer $l-1$, such that the edge $(i,j)$ corresponds to the operation $G^{l-1}_k$. The operation $\circ$ denotes the element-wise product, which we selected for the convenience of multi-stage message passing; other choices are possible. \textbf{Learning-based pooling.} A standard GNN aims to learn the node embeddings of a graph~(e.g., learn node representations and perform node classification). However, our goal is to learn the graph representation of a given DNN. Thus, we introduce a learning-based pooling method for the multi-stage GNN to pool node embeddings and learn the graph embedding. We define the graph embedding $e$ as \begin{equation} e = \sum_{i\in N}\alpha_i h_i\,, \end{equation} where $N$ is the set of nodes, $h_i$ is the $i$-th node embedding, and $\alpha_i$ is the learnable weight coefficient for $h_i$. In the multi-stage GNN, the computational graphs at the same hierarchical level share the GNN's parameters, but in the pooling, each computational graph has its own learnable pooling parameters $\alpha$.
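As an illustration of the two operations above (edge-feature message passing and learning-based pooling), a minimal PyTorch sketch might look as follows. This is our own schematic, using a plain loop instead of an optimised scatter operation:
\begin{verbatim}
import torch

class MStageLayer(torch.nn.Module):
    """One m-GNN message-passing step with edge features (sketch)."""
    def __init__(self, dim):
        super().__init__()
        self.W = torch.nn.Linear(dim, dim, bias=False)

    def forward(self, h, edges, edge_feat):
        # h: (num_nodes, dim); edges: list of (j, i); edge_feat: (num_edges, dim)
        out = torch.zeros_like(h)
        deg = torch.zeros(h.shape[0])
        for e, (j, i) in enumerate(edges):          # message j -> i along edge e
            out[i] = out[i] + self.W(h[j] * edge_feat[e])  # element-wise product
            deg[i] += 1.0
        return out / deg.clamp(min=1.0).unsqueeze(1)       # 1/c_i normalisation

def pool(h, alpha):
    """Learning-based pooling: graph embedding e = sum_i alpha_i h_i,
    with alpha a learnable weight vector specific to each graph."""
    return (alpha.unsqueeze(1) * h).sum(dim=0)
\end{verbatim}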
\subsubsection{Reinforcement learning} We use the generated hierarchical computational graph $\mathcal{G}^{l}$ to represent the DNN's state, which serves as the RL agent's environment state. Since pruning the model causes its underlying graph topology to change, we constantly update the graph $\mathcal{G}^{l}$ after each pruning step to help the RL agent find the pruning policy on the current states. We employ deep deterministic policy gradient (DDPG) RL~\cite{lillicrap2016ddpg} together with m-GNN~(GNN-RL) to learn the compression policy directly from the topology states. The actor and critic networks within the GNN-RL agent each contain an m-GNN graph encoder and a multi-layer perceptron. The graph encoder is used to learn the graph embedding, and the multi-layer perceptron projects the embedding into the action space~(i.e., the compression policy). The actor's output layer applies the sigmoid function to bound the actions within $(0,1)$. Specifically, we perform FLOPs-constrained model compression using structured channel pruning (filter pruning) on the DNN's convolutional layers, which are the most computationally intensive. Thus, the GNN-RL agent's action space $A\in \mathbb{R}^{N \times 1}$, where $N$ is the number of layers to prune, consists of the pruning ratios for the hidden layers: $A=\{a_i\}$, $i = 1,2,...,N$, where $a_i \in [0,1)$ is the pruning ratio for the $i^{th}$ layer. The GNN-RL agent derives its actions directly from the topology states: \begin{equation} g = GraphEncoder(\mathcal{G}^{l}) \label{eq:graphencoder} \end{equation} \begin{equation} A = MLP(g)\,, \label{eq:mlp} \end{equation} where $\mathcal{G}^{l}$ is the environment state, $g$ is the graph representation, and MLP is a multi-layer perceptron. The graph encoder learns the topology embedding, and the MLP projects the embedding into the hidden layers' pruning ratios. The reward function is defined in Equation~\ref{eq:reward}, \begin{equation} R_{err} = -Error\,, \label{eq:reward} \end{equation} where \textit{Error} is the compressed DNN's top-1 error on the validation set. \section{Experiments} To show the effectiveness of GNN-RL, we evaluate our approach on over-parameterized DNNs (e.g., ResNet-20/32/44/56/110\cite{he2016ResNet} and VGG-16\cite{Simonyan2015VGG}) and mobile-friendly DNNs (e.g., MobileNet\cite{Andrew2017MobileNetv1,Sandler2018mobileNetv2} and ShuffleNet\cite{ma2018shufflenetv2,zhang2018shufflenet}). Additionally, to demonstrate the superiority of our proposed method, we compare GNN-RL with three sets of methods: \begin{itemize} \item Uniform, shallow, and deep empirical policies~\cite{he2017handcraft_channel,Li2016handcraft}. \item The handcrafted channel reduction methods, such as SPP\cite{wang2017SPP}, FP~\cite{Li2016handcraft}, and RNP~\cite{Lin2017RNP}. \item The state-of-the-art RL-based AutoML methods, such as AMC~\cite{he2018amc}, AGMC~\cite{yu2020agmc}, and random search (RS) with RL. \end{itemize} We use a soft target update rate of $\tau = 0.01$ for the GNN-RL updates. In the first $30$ episodes, we warm up the agent with random actions. The agent then explores for 150 episodes with exponentially decayed noise, and we train the networks with a batch size of 64 and a replay buffer size of 2000. The experiments involve multiple datasets, including CIFAR-10/100~\cite{Krizhevsky2009Cifar} and ImageNet~\cite{Olga2015ImageNet}. In the CIFAR-10/100 datasets, we sample $5K$ images from the test set as the validation set. In ILSVRC-2012, we split off $10K$ images from the test set as the validation set. When searching, the DNN-Graph environment uses the compressed model's $R_{err}$ on the validation set as the GNN-RL agent's reward. \subsection{Over-parameterized DNNs} \input{contents/t01_table1} \input{contents/f04_figure4} We evaluate the effectiveness of GNN-RL on ResNet-20/32/44/56/110~\cite{he2016ResNet} and VGG-16~\cite{Simonyan2015VGG}, which fall into the over-parameterized networks category.
With its residual connections, ResNet avoids gradient vanishing and allows efficient training of its deep layers. However, its deep neural structure and billions of parameters make ResNet a challenging network to deploy on edge devices. Similarly, the VGG-16 network contains dense convolutional layers, some with hundreds of filters, leading to a giant model size (528 MB GPU memory for VGG-16). To compress these over-parameterized DNNs, we perform FLOPs-constrained channel pruning (filter pruning) on their convolutional layers. We trained the ResNet-20/32/44/56/110 and VGG-16 models on the CIFAR-10~\cite{Krizhevsky2009Cifar} and ImageNet~\cite{Olga2015ImageNet} datasets, respectively. The validation accuracy on the ImageNet dataset is sensitive to the compression ratio: with high compression ratios, the accuracy drops considerably without fine-tuning (in some cases, the pruned model has less than $1\%$ validation accuracy without fine-tuning). We therefore applied a one-epoch fine-tuning process in each RL search episode to ensure that GNN-RL gets a meaningful reward when pruning VGG-16. When pruning ResNet-20/32/44/56/110, we share the pruning indices between residual connection layers to avoid channel mismatch. Table~\ref{table_1} shows the top-1 test accuracy of the pruned models. We set a $50\%$ FLOPs constraint, and all the RL-based methods use $R_{err}$ as the reward. After pruning, we fine-tuned the DNNs for 100 epochs and only updated the pruned layers' parameters. Results show that GNN-RL outperforms all the baselines and achieves higher test accuracy and compression ratios. For the ResNet-110/56/44 models, the models pruned by GNN-RL even achieve higher test accuracy than the original models. After further investigation, we believe that this is due to the over-fitting of ResNet-110/56/44, as the accuracy on the training set was 100\%. To verify our assumption, we performed a further experiment to explore the relationship between the FLOPs constraints and the accuracy of DNNs. Figure~\ref{fig:5} shows that FLOPs ratios between 0.4 and 0.6 (relative to the original model's FLOPs) yield the highest test accuracy on ResNet-110. When the FLOPs reduction ratio exceeds 0.6, the test accuracy drops sharply. \input{contents/f05_figure5} In addition to the experiments above, we further analyzed the redundancy and the importance of each layer. Figure~\ref{fig:4} shows the hidden layers' pruning ratios on ResNet-110 and ResNet-56. ResNet contains residual connection layers, which transfer hidden states directly from previous residual layers. Thus, the residual connection layers are more redundant and informative, since they contain the information of both the current layer's hidden states and the previous layers'. The GNN-RL agent automatically learns that the residual connection layers are more redundant and prunes them more aggressively. Another insight from Figure~\ref{fig:4} is that the GNN-RL agent applies more pruning on layers 45 to 65 within ResNet-110. Similarly, layers 23 to 35 of ResNet-56 have been pruned more. Such an insight shows that the middle layers have less impact on model accuracy. \subsection{Mobile-friendly DNNs} \input{contents/f06_figure6} We evaluated GNN-RL on MobileNet-v1/v2~\cite{Andrew2017MobileNetv1,Sandler2018mobileNetv2} and ShuffleNet-v1/v2~\cite{zhang2018shufflenet,ma2018shufflenetv2}, which are more suitable for devices with limited resources.
Instead of using traditional convolutional operations, MobileNet-v1/v2 and ShuffleNet-v1/v2 use more efficient, custom-designed convolutional blocks. To maintain the characteristics and high efficiency of those custom-designed blocks, we have developed specific pruning strategies for them. \subsubsection{Pruning strategy} \textbf{MobileNet-v1.} The MobileNet-v1 block separates the convolution into depth-wise and point-wise convolutions~\cite{Andrew2017MobileNetv1}. Each depth-wise filter only operates on one channel of the feature maps. On the other hand, the point-wise operations are $1\times1$ convolutions, which operate on the feature maps processed by the depth-wise convolutions. In our experiments, applying regular filter pruning on such layers causes information loss. As depicted in Figure~\ref{fig:6}, pruning the filter painted in grey causes its corresponding channel (the green one) to be deleted as well. To handle this, instead of pruning depth-wise and point-wise filters separately, we only prune the point-wise filters within MobileNet-v1 blocks. \textbf{MobileNet-v2.} MobileNet-v2 is principally designed based on MobileNet-v1 blocks with an additional linear expansion layer. The linear expansion layers are 1$\times$1 convolutions without non-linear activation. Residual shortcuts connect every two linear expansion layers, linking MobileNet-v1 blocks. Similar to MobileNet-v1, here we prune the linear expansion layers and the point-wise convolutional layers. Since the residual connections are between linear expansion layers, we share the linear expansion layers' pruning ratio. \textbf{ShuffleNet-v1/v2.} The ShuffleNet model uses blocks containing depth-wise and point-wise convolutions, channel shuffle, linear expansion, and residual connections. To avoid dimension mismatch when downsampling, we consider each ShuffleNet block as a whole and perform channel pruning inside the blocks. In a ShuffleNet block, we do not prune the expansion layer (the output layer of the block), which preserves the number of output channels and keeps the feature map dimensions when downsampling. \subsubsection{Results} \input{contents/t02_table2} Table~\ref{table_2} shows the FLOPs-constrained channel pruning results with 60\% and 80\% FLOPs ratios for ShuffleNet and MobileNet, respectively. We have compared GNN-RL with AGMC~\cite{yu2020agmc} and random search (RS) with RL. We did not include AMC and the handcrafted methods since we designed specific pruning strategies for mobile-friendly DNNs; we believe that these strategies are incompatible with AMC layer embeddings and handcrafted rules, which would lead to an unfair comparison. The MobileNet-v1/v2 and ShuffleNet-v1/v2 are pre-trained on CIFAR-100~\cite{Krizhevsky2009Cifar}. After pruning, we fine-tuned the compressed DNNs for 150 epochs. Our approach outperformed all the baselines. Although these networks are already very compact, with a $20\%$ FLOPs reduction on MobileNet-v2, GNN-RL increases the top-1 accuracy by $0.19\%$. \subsection{Inference acceleration and memory saving} \input{contents/t03_table3} The inference latency and memory usage of compressed DNNs are essential metrics for determining whether a DNN can be deployed on a given platform. Thus, we evaluated the pruned models' inference latency using PyTorch 1.7.1 on an Nvidia GTX 1080Ti GPU and recorded the GPU memory usage. The ResNet-110/56/44/32/20 are measured on the CIFAR-10 test set with batch size 32. The VGG-16 is evaluated on the ImageNet test set with batch size 32.
Table~\ref{table_3} shows the inference accelerations and memory savings on our GPU. All the models pruned by GNN-RL achieve noteworthy inference acceleration and GPU memory reductions. In particular, for VGG-16, the original model's GPU memory usage is 528 MB, since its large dense layers contribute little to the FLOPs but lead to an extensive memory requirement. GNN-RL prunes the convolutional layers and significantly reduces the feature map sizes, so the pruned model consumes 141 MB less memory than the original version. The inference acceleration on VGG-16 is also noticeable, with a $1.38\times$ speed-up on ImageNet. The inference acceleration for mobile-friendly DNNs may seem relatively insignificant. However, such models are designed for deployment on mobile devices, and we believe that our test GPU, with its extensive resources, does not benefit from the mobile-friendly design. \section{Conclusion} This paper proposed a network compression approach called GNN-RL, which utilizes a graph neural network and a reinforcement learning agent to learn a topology-aware compression policy. We introduced a DNN-Graph environment that converts compression states into a topology-changing process and allows GNN-RL to learn the desired compression ratio without human intervention. To efficiently embed DNNs and take advantage of motifs, we introduced m-GNN, a new multi-stage graph embedding method. In our experiments, GNN-RL was validated on both over-parameterized and mobile-friendly networks. For the over-parameterized models ResNet-110/56/44, the networks pruned by GNN-RL even outperformed the original models in test accuracy, i.e. $+0.63\%$ on ResNet-110, $+0.1\%$ on ResNet-56 and $+0.13\%$ on ResNet-44. For mobile-friendly DNNs, the $79\%$ FLOPs MobileNet-v2 pruned by GNN-RL increased the test accuracy by $0.19\%$ compared to the original model. Additionally, all the pruned models accelerated inference and saved a considerable amount of memory.
\section{Introduction} \label{sec:introduction} Deep learning has shown great successes with end-to-end learned representations replacing hand-crafted features in various machine perception fields, including computer vision, natural language processing and machine listening, especially in the supervised learning paradigm. However, unlike ImageNet for computer vision, which contains millions of labeled images, human-annotated datasets for machine listening are usually small \cite{choi2016automatic}. Therefore, learning from limited labeled data \cite{kim2020one} is especially important. Existing approaches include transfer learning \cite{choi2017transfer} and domain adaptation, where models learned from different tasks with larger datasets are transferred and fine-tuned to another task/domain, and unsupervised learning \cite{wulfing2012unsupervised, schneider2019wav2vec, baevski2019vq}, such as generative models \cite{oord2016wavenet, kumar2019melgan}, where the data distribution is often learned through reconstruction of the signal. Self-supervised learning \cite{cramer2019look, chen2020simple, chen2020big, gfeller2020spice}, a sub-field of unsupervised learning, exploits the structure of the input data to provide supervision signals. It has become more popular in recent years, showing good improvements in multiple fields. In self-supervised learning, raw signals are transformed, and models are optimized with reconstruction or contrastive losses against the original signals, where preservation of temporal or spatial consistency is assumed to yield meaningful representations. These representations have proven useful for generalizing to and solving downstream tasks. On the other hand, multi-task learning \cite{hung2019multitask} improves generality by solving multiple tasks jointly during training, where the weighting mechanism among the per-task losses is crucial \cite{kendall2018multi, gong2019comparison}. Self-supervised and multi-task learning techniques have been combined and applied to the speech domain with success in \cite{pascual2019learning, ravanelli2020multi}, where reconstruction of various hand-crafted features is used for pre-training, and the learned representations are evaluated on downstream emotion recognition and automatic speech recognition (ASR) tasks. Similar to speech, music is a highly structured audio signal, and many hand-crafted features have been designed specifically for music to solve various music information retrieval (MIR) tasks. In this paper, we are interested in applying self-supervised and multi-task learning methods to pre-training music encoders. We explore various design choices, including encoder architectures, weighting mechanisms to combine losses from pretext tasks, and worker selections to reconstruct various music-specific hand-crafted features, such as Mel-frequency cepstral coefficients (MFCCs) for timbral \cite{de2012enhancing}, Chroma for harmonic \cite{ellis2007classifying}, and Tempogram \cite{grosche2010cyclic} for rhythmic attributes. Our main contributions are to 1. provide suggestions on the best design choices among all the variations in our experiments, and 2. investigate how different selections of pretext tasks interact with the performance of downstream music classification tasks, including instrument, rhythm and genre classification.
\section{Method} \label{sec:method} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{diagram.pdf} \caption{Diagram of multi-task self-supervised encoder pre-training and downstream music classification evaluation.} \label{fig:diagram} \end{figure} A two-stage approach, involving unsupervised or self-supervised pre-training followed by supervised training for evaluation on downstream tasks, is commonly adopted in the recent literature \cite{cramer2019look, chen2020simple, pascual2019learning, ravanelli2020multi}, especially in the context of limited labeled data, where representation learning is key. To evaluate the effectiveness of the pre-training, simple linear or multi-layer perceptron (MLP) classifiers are usually used, so that the pre-trained encoders must capture meaningful representations to perform well on such linear-separation evaluation tasks. \subsection{Multi-task self-supervised pre-training} As shown in Figure \ref{fig:diagram}, we combine self-supervised and multi-task learning ideas for pre-training. Raw audio inputs are passed through multiple encoding layers, and the outputs are two-dimensional representations with temporal information. These encoded representations are then used to solve pretext tasks via workers, including waveform reconstruction and prediction of various popular hand-crafted MIR features, which jointly guide the learning. \subsection{Downstream task training scenarios} \label{subsec:scenarios} After pre-training, we remove the workers and feed the encoder outputs to MLP classifiers for downstream tasks. We adopt three training scenarios proposed in \cite{pascual2019learning}: 1. \textbf{Supervised}: Initialize the encoder weights randomly and train from scratch on the downstream datasets directly. 2. \textbf{Frozen}: Treat the pre-trained encoder as a feature extractor with frozen weights, concatenate the feature extractor with trainable MLP classifiers, and only optimize the classifier weights. 3. \textbf{Fine-tuned}: Initialize the encoder with pre-trained weights and fine-tune the encoder together with the downstream classifiers. \section{Experimental Design} \label{sec:experimental_design} We experiment with various design choices during pre-training, including 1. encoder architectures, 2. pretext tasks for worker selections, and 3. weighting mechanisms for the losses from the pretext tasks. We provide more details on the downstream evaluations and the data used for both pre-training and downstream tasks in sections \ref{subsec:downstream_eval} and \ref{subsec:data}. \subsection{Encoder architectures} We compare two encoder architectures proposed in two relevant studies in the speech domain which inspired our work. We refer to the two encoder architectures as PASE \cite{pascual2019learning} and PASE+ \cite{ravanelli2020multi}, respectively. 1. \textbf{PASE}: We use the same encoder architecture as the original PASE work \cite{pascual2019learning} and its source code implementation\footnote{https://github.com/santi-pdp/pase}. The first layer is based on SincNet \cite{ravanelli2018speaker}, where the raw input waveform is convolved with a set of parameterized Sinc functions implementing rectangular band-pass filters. The authors claim that SincNet has fewer parameters and provides better interpretability. The SincNet layer is followed by 7 one-dimensional convolutional blocks with batch normalization \cite{ioffe2015batch} and multi-parametric rectified linear unit activation \cite{he2015delving}. We use the same model parameters as provided in the original work, including kernel widths, numbers of filters, and strides. The set of parameters for the convolutional layers emulates a 10ms sliding window. 2. \textbf{PASE+}: PASE+ \cite{ravanelli2020multi} improves upon PASE \cite{pascual2019learning} by adding skip connections and Quasi-Recurrent Neural Network (QRNN) \cite{bradbury2016quasi} layers to capture longer-term contextual information. QRNN layers consist of convolutional layers interleaved with RNN layers to speed up training via parallel computation, while maintaining comparable performance.
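To make the block structure concrete, here is a minimal PyTorch sketch of one such one-dimensional convolutional block (our reading of the description above, not the official PASE code; the channel counts, kernel widths and strides are placeholders chosen only so that the overall stride is 160 samples, i.e. a 10ms hop at 16kHz):
\begin{verbatim}
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    # Conv1d -> BatchNorm1d -> PReLU, as described above.
    def __init__(self, c_in, c_out, kernel, stride):
        super().__init__()
        self.conv = nn.Conv1d(c_in, c_out, kernel, stride=stride,
                              padding=kernel // 2)
        self.bn = nn.BatchNorm1d(c_out)
        self.act = nn.PReLU(c_out)  # one learnable slope per channel

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

# Hypothetical stack; strides multiply to 160 (= 10 ms at 16 kHz).
encoder = nn.Sequential(
    ConvBlock(1, 64, kernel=21, stride=10),
    ConvBlock(64, 128, kernel=11, stride=4),
    ConvBlock(128, 512, kernel=11, stride=4),
)
wav = torch.randn(8, 1, 16000)  # a batch of 8 one-second waveforms
print(encoder(wav).shape)       # torch.Size([8, 512, 100]): 10 ms frames
\end{verbatim}
The 512-channel output matches the embedding size used by the downstream classifiers described below.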
\subsection{Pretext tasks worker selections} Inspired by the original PASE work \cite{pascual2019learning}, we select waveform reconstruction, log power spectrum (LPS) and prosody features as baseline workers. We then choose three popular hand-crafted features from the MIR field, namely MFCC, Chroma, and Tempogram, as mixed-in workers. For waveform reconstruction, the encoder layers are applied in reverse order to decode the embeddings, optimized with a mean absolute error (MAE) loss. For all the other workers, we use MLPs with convolutional layers and a mean squared error (MSE) loss. Waveform, LPS, and MFCC are commonly used in machine listening. Chroma is inspired by Western 12-tone theory, in which frequencies are folded into 12 bins forming one octave. Tempogram \cite{grosche2010cyclic} takes the local auto-correlation of the onset strength envelope. As in \cite{pascual2019learning}, the prosody features include zero crossing rate (ZCR), energy, voice/unvoice probability and fundamental frequency (F0) estimation, resulting in 4 features concatenated along the temporal dimension. For LPS, MFCC, Chroma, Tempogram and prosody, we use librosa\footnote{https://github.com/librosa/librosa} implementations with hop\_length = 160, n\_fft = 2048, sr = 16000, so that each hop aligns to 10ms and matches the encoder parameters, with all other parameters left at their defaults.
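These worker targets can be reproduced roughly as follows (a sketch of the librosa calls under the parameters just stated, not our exact extraction script; the test tone is a placeholder input):
\begin{verbatim}
import numpy as np
import librosa

sr, n_fft, hop = 16000, 2048, 160             # 160 samples = 10 ms hop
y = librosa.tone(440.0, sr=sr, duration=2.0)  # placeholder audio signal

# Log power spectrum (LPS) target.
stft = librosa.stft(y, n_fft=n_fft, hop_length=hop)
lps = np.log(np.abs(stft) ** 2 + 1e-10)

# Timbre, harmony and rhythm targets.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_fft=n_fft, hop_length=hop)
chroma = librosa.feature.chroma_stft(y=y, sr=sr, n_fft=n_fft,
                                     hop_length=hop)
tempo = librosa.feature.tempogram(y=y, sr=sr, hop_length=hop)

# One of the prosody features (ZCR); F0 could come from librosa.yin.
zcr = librosa.feature.zero_crossing_rate(y, frame_length=n_fft,
                                         hop_length=hop)
print(lps.shape, mfcc.shape, chroma.shape, tempo.shape, zcr.shape)
\end{verbatim}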
\subsection{Weighting mechanisms} We explore two weighting mechanisms to combine the losses from each worker during pre-training. 1. \textbf{Equal weighted}: simply sum up the losses from the different workers for backpropagation. 2. \textbf{Re-weighted}: take the validation losses per worker over the first 10 epochs of equal-weighted training, average the loss per worker, take the reciprocal as the new weight, and apply those weights to retrain from scratch. The intuition is that the losses from each worker will then contribute more equally during the backpropagation optimization. \subsection{Downstream evaluation} \label{subsec:downstream_eval} After pre-training, we remove the workers for the pretext tasks and concatenate the output of the encoder with a simple MLP classifier. The input layer of the MLP takes mean pooling across the temporal dimension, resulting in one 512-dimensional embedding, followed by 1 fully connected layer that adapts the output dimension to the number of classes of each downstream dataset. We train with the three scenarios discussed in section \ref{subsec:scenarios}, including supervised, frozen and fine-tuned, all with the same hyper-parameters: the Adam optimizer \cite{kingma2014adam} with an initial learning rate of 0.001 and an early stopping criterion with a patience value of 10 on the validation loss. We run 10 trials for each experiment in this paper to get statistically meaningful results. \subsection{Data} \label{subsec:data} \subsubsection{AudioSet for pre-training} We use clips in AudioSet \cite{gemmeke2017audio} with the ``Music'' label for pre-training. We are able to acquire \~{}2M clips (97\% of the original AudioSet data), within which \~{}980k clips are labeled with ``Music''. We randomly select 100k of them for pre-training, resulting in \~{}83 hours of data. \subsubsection{Datasets for downstream evaluation} OpenMIC \cite{humphrey2018openmic}, Extended Ballroom \cite{marchand2016extended} and FMA Small (FMA) \cite{defferrard2017fma}, three publicly available classification datasets, are used for downstream evaluation as representative samples of well-known MIR tasks. These datasets differ in the number of clips, clip duration, and number of classes. For all three datasets, we report macro F1 scores as shown in the figures. \begin{enumerate} \item OpenMIC \cite{humphrey2018openmic}: OpenMIC is a multi-label instrument classification dataset containing 15k samples in total, with provided train/valid/test splits as well as masks for strong positive and negative examples for each class. We follow a setup similar to the official baseline\footnote{https://github.com/cosmir/openmic-2018} by training 20 binary classifiers. \item Extended Ballroom \cite{marchand2016extended}: Extended Ballroom (4k samples) is a multi-class dance genre classification dataset. We follow the same setup as \cite{jeong2017dlr} by removing 4 categories due to dataset imbalance, resulting in 9 categories. \item FMA Small \cite{defferrard2017fma}: FMA Small (8k samples) is a multi-class music genre classification dataset with 8 genre categories. \end{enumerate} \section{Results and discussions} \label{sec:results} We first show results for the encoder choices and whether pre-training helps, using all workers (waveform (W), LPS (L), prosody (P), MFCC (M), Chroma (C) and Tempogram (T), where WLP is also referred to as the baseline) and the frozen scenario. We then dive deeper into the effects of the different weighting mechanisms and an ablation study of worker selections, for which we also report results in the frozen scenario. Finally, we investigate whether fine-tuning further improves performance. \subsection{Encoder architectures} \label{subsec:encoder_choices} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{info_max.png} \caption{Comparisons of encoder architectures (PASE vs PASE+). The left, center, and right figures show Macro F1 metrics on the different downstream tasks in the frozen scenario. Red and green dotted lines represent the PASE and PASE+ encoders with supervised training (scenario 1) directly on the downstream dataset from scratch.} \label{fig:info_max} \end{figure} From Figure \ref{fig:info_max}, we observe that for all three downstream tasks, PASE+ outperforms PASE. This is not surprising, as PASE+ is a more powerful encoder with \~{}8M parameters, skip connections and a QRNN layer, while PASE has only \~{}6M parameters and basic convolutional layers. This confirms the findings of the original PASE+ work \cite{ravanelli2020multi} on speech data. The dotted lines are trained supervisedly (scenario 1) from scratch directly on the downstream tasks with random weight initialization. They show that pre-training in general helps to initialize the encoder weights better, resulting in better performance on downstream tasks. One exception is PASE on OpenMIC; we hypothesize that this is because OpenMIC already contains enough data to train the PASE encoder (with its smaller capacity) from scratch, which is not the case for PASE+. This shows that pre-training for encoders with larger capacities is especially helpful when evaluating on downstream tasks with limited labeled data. We conduct experiments using PASE+ throughout the rest of the paper, as it is the better encoder for our tasks.
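The re-weighting rule described in the weighting mechanisms subsection above is only a few lines in practice; the sketch below (with made-up loss values purely for illustration) shows the computation:
\begin{verbatim}
# Mean validation loss per worker over the first 10 epochs of
# equal-weighted training (numbers are hypothetical).
mean_val_loss = {"waveform": 0.70, "lps": 8.5, "prosody": 0.30,
                 "mfcc": 1.2, "chroma": 0.45, "tempogram": 0.07}

# New weights: reciprocal of the mean loss, so dominant losses
# (e.g. LPS) are scaled down and small ones (e.g. Tempogram) up.
weights = {w: 1.0 / l for w, l in mean_val_loss.items()}

# During the re-weighted run, the total pre-training loss becomes
# total = sum(weights[w] * loss[w] for w in workers).
print({w: round(v, 3) for w, v in weights.items()})
\end{verbatim}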
\subsection{Weighting mechanisms} \begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{all.png} \caption{Comparisons of equal weighted vs re-weighted for different worker selections on all downstream tasks. The PASE+ encoder architecture is used in the frozen scenario. The y-axis shows Macro F1 classification metrics. The x-axis is labeled with WLP (waveform, LPS, and prosody), M (MFCC), C (Chroma), and T (Tempogram). Unfilled and filled colors represent the equal weighted and re-weighted mechanisms, respectively. Over all trials, circles represent the mean while the length of the bar represents the standard deviation.} \label{fig:weighted} \end{figure} In Figure \ref{fig:weighted}, we show results comparing the equal weighted and re-weighted mechanisms with different worker selections during pre-training. We see that the re-weighted mechanism (filled color) generally boosts the influence of the various workers on the performance of the downstream tasks. For Extended Ballroom on the right especially, we see clearly that results with workers containing Tempogram are improved by a large margin. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{weights.pdf} \caption{Log loss per worker for the first 20 epochs. The x-axis is the number of epochs. On the left is equal weighted. On the right is re-weighted, where loss weights are balanced using the reciprocal of the mean loss per worker from equal weighted pre-training.} \label{fig:loss_weight} \end{figure} We further examine the losses per worker during pre-training, as shown in Figure \ref{fig:loss_weight}. We can see that with equal weighting (left), LPS (L) almost dominates all losses and the Tempogram (T) worker loss contributes the least, being two orders of magnitude smaller, whereas with re-weighting (right), each worker contributes more equally. \subsection{Pretext tasks worker selections} Figure \ref{fig:ablation} shows the relative difference in accuracy from including different workers over the WLP baseline. We observe that different worker selections affect different downstream tasks in different ways. Tempogram helps the most across all combinations, especially for Extended Ballroom. MFCC is usually important for most of the downstream tasks, as it captures the low-level attributes differentiating instrument and genre. Chroma is, however, at a disadvantage, especially for OpenMIC, since Chroma is designed to normalize away timbre, which is important for instrumentation. MFCC only hurts slightly on Extended Ballroom, as it brings together different dance genres with similar timbre and separates music from the same dance genre that changes in timbre. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{ablation.png} \caption{Relative improvement (\%) of different additional music-specific workers included during pre-training compared to WLP on different downstream tasks.} \label{fig:ablation} \end{figure} These variations can be further compensated, showing improvement across all tasks, by using all workers, as shown in the rightmost part of each subplot in Figure \ref{fig:ablation}. We observe relative improvements from adding all workers compared to the WLP baseline of 1.9\%, 4.5\% and 14\% on the OpenMIC, FMA and Extended Ballroom datasets respectively.
This indicates that the workers complement each other, and the encoders are able to use signals from diversified workers to generalize better to various downstream tasks. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{cm_extended_ballroom.png} \caption{Confusion matrices of Extended Ballroom. On the left is the WLP baseline. On the right are the differences between WLP+T and WLP, and WLP+MCT and WLP+T. Red and blue colors indicate positive and negative changes, respectively.} \label{fig:cm_extended_ballroom} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{cm_fma.png} \caption{Confusion matrices of FMA. On the left is the WLP baseline. On the right are the differences between WLP+M and WLP, WLP+T and WLP+M, and WLP+MT and WLP+T. Red and blue colors indicate positive and negative changes, respectively.} \label{fig:cm_fma} \end{figure} We then show the confusion matrices of Extended Ballroom and FMA in Figures \ref{fig:cm_extended_ballroom} and \ref{fig:cm_fma}. In Figure \ref{fig:cm_extended_ballroom}, we show the difference between WLP+T and WLP, and observe that adding Tempogram helps differentiate Chacha from Jive and Samba, which differ in rhythm and tempo, as well as Foxtrot from Quickstep, and Viennesewaltz from Waltz, as these two pairs of dance genres originate from similar music played at different speeds. Adding MFCC and Chroma further helps differentiate Foxtrot from Rumba and Viennesewaltz, as additional timbre cues are provided. In Figure \ref{fig:cm_fma}, we observe that even adding MFCC (WLP+M - WLP) helps in general, as hypothesized; however, it confuses Electronic with Hip-Hop and International, and Pop with Hip-Hop and Rock, as similar instruments may be used in these genres, resulting in similar timbre. Adding Tempogram (WLP+T - WLP+M) corrects the mistakes made on the Electronic and Pop genres, but misclassifies International as Folk and Instrumental. Finally, adding both workers (WLP+MT - WLP+T) provides further improvements over MFCC or Tempogram alone. In general, we observe improvements, with positive values (red) on the diagonal and negative values (blue) off the diagonal. \subsection{Frozen versus fine-tuned} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{train_num.png} \caption{Comparisons of frozen and fine-tuned over the \# of training samples for different downstream tasks.} \label{fig:train_num} \end{figure} In Figure \ref{fig:train_num}, we plot frozen (filled) versus fine-tuned (unfilled) with the re-weighted mechanism and all workers used during pre-training. Using all available training examples, both Extended Ballroom (2.8k) and OpenMIC (11k) show further improvement with fine-tuning, while FMA does not. We hypothesize that this is because each downstream task requires a different number of samples for fine-tuning to work well; for FMA, there are simply not enough training samples. We further reduce the number of samples used for training on OpenMIC and Extended Ballroom, as shown in Figure \ref{fig:train_num}, where we see a clear reversal around 8k (OpenMIC) and 1k (Extended Ballroom) samples, below which fine-tuning stops outperforming frozen. \section{Conclusion} \label{sec:conclusion} In this paper, we explore different design choices for pre-training music encoders with multi-task and self-supervised learning techniques, and show that this method, when combined with different encoder architectures, generally benefits downstream tasks.
The improvement is clearer and more stable when the ratio (\# unlabeled data / \# labeled data) is larger. We also show that each type of pretext task provides different and complementary information, that the re-weighted mechanism helps the encoder to better learn the different cues provided by each task, and that fine-tuning can further improve performance. For future work, we are interested in applying this pre-training technique to various encoders, adding more audio-specific features, and exploring other unsupervised and self-supervised learning ideas, such as wav2vec \cite{schneider2019wav2vec}, as pretext tasks. We are also interested in including more diverse downstream tasks for evaluation, such as music tagging and chord recognition (Chroma should be more effective for the latter). We think that this pre-training technique can be applied to a large variety of music encoders and generalize to different downstream music tasks, especially those with limited labeled data. \vfill\pagebreak \bibliographystyle{IEEEbib}
\section{Introduction} \subsection{Background} The affine quantum group, denoted by $\mathbf U$, admits two presentations: the Serre presentation introduced by Drinfeld-Jimbo, and the current presentation, also known as the Drinfeld presentation \cite{Dr88}. The isomorphism between these two presentations was stated by Drinfeld, and a detailed proof was supplied by Beck \cite{Be94} and Damiani \cite{Da12, Da15}. This current presentation has been shown to be crucial in the representation theory of affine quantum groups; see the survey paper \cite{CH10} for partial references. \par Quantum symmetric pairs $(\mathbf U,\U^\imath_{\boldsymbol{\varsigma}})$ were introduced by Letzter in \cite{Let99} for finite type and generalized to Kac-Moody type by Kolb \cite{Ko14}. The \emph{$\imath$quantum group} $\U^\imath_{\boldsymbol{\varsigma}}$ arising from quantum symmetric pairs is a coideal subalgebra of $\mathbf U$ associated with an involution on the underlying root system. The \textit{universal $\imath$quantum group} $\widetilde{{\mathbf U}}^\imath$ \cite{LW19a} is a coideal subalgebra of the Drinfeld double quantum group $\widetilde{\U}$, and $\U^\imath_{{\boldsymbol{\varsigma}}}$ can be recovered from $\widetilde{{\mathbf U}}^\imath$ by a central reduction. The version $\widetilde{{\mathbf U}}^\imath$ naturally arises in an $\imath$Hall algebra realization of $\imath$quantum groups \cite{LW19a}, and a braid group action on $\widetilde{{\mathbf U}}^\imath$ is realized via the reflection functors in this approach \cite{LW21a}. \par Both affine quantum groups and affine $\imath$quantum groups are closely related to quantum integrable systems. The affine quantum groups, which give rise to trigonometric solutions of the quantum Yang-Baxter equation (QYBE), arise in the quantum inverse scattering method \cite[\S 11,13]{Dr87}. The affine $\imath$quantum groups, which give rise to solutions of the reflection equation ($=$ boundary QYBE), appear in the framework of quantum integrable systems with certain boundary conditions. The $\imath$quantum group of split affine rank one is known as the $q$-Onsager algebra, which plays a crucial role in the study of the XXZ open spin chain \cite{BB13}. The $\imath$quantum groups of higher ranks are known as the ``generalized $q$-Onsager algebras'', whose applications in affine Toda field theory have been developed in \cite{BB10}. \par According to \cite{BW18}, various algebraic, geometric, and categorical results for quantum groups are expected to have corresponding analogues for $\imath$quantum groups. In particular, it is natural to ask whether there is a current presentation for $\U^\imath_{\boldsymbol{\varsigma}}$, analogous to the current presentation for $\mathbf U$. There have been several early attempts to construct a current presentation for the $q$-Onsager algebra in \cite{BS10,BK20}. A major step toward answering this question is the recent work by Lu and Wang \cite{LW20b}, who formulated a Drinfeld type presentation for $\widetilde{{\mathbf U}}^\imath$ of split affine ADE type. In the rank one case, Lu-Wang's Drinfeld type presentation was built on the construction of root vectors and relations by Baseilhac and Kolb \cite{BK20}.
The braid group action on $\widetilde{{\mathbf U}}^\imath$ plays an essential role in Lu and Wang's construction, just as for affine quantum groups \cite{Da93, Be94}.\par \subsection{Main results} The goal of this paper is to generalize the results in \cite{LW20b} to arbitrary split (untwisted) affine types, namely, to provide a Drinfeld type presentation for the split $\imath$quantum group $\widetilde{{\mathbf U}}^\imath=\widetilde{{\mathbf U}}^\imath(\widehat{\mathfrak{g}})$, where $\mathfrak{g}$ is any simple Lie algebra and $\widehat{\mathfrak{g}}$ is the corresponding untwisted affine Lie algebra. Generalizing from split ADE types to split BCFG types requires new ideas, which will be explained below; in the meantime, our proofs of several relations in the general setting also simplify Lu-Wang's original proofs for split ADE types. \par Let us explain our approach in detail. Let $(c_{ij})_{i,j\in\mathbb{I}_0}$ denote the Cartan matrix of $\mathfrak{g}$. We start by recalling the Serre presentation of $\widetilde{{\mathbf U}}^\imath$ in Definition~\ref{def:tUi}, following \cite{LW19a} and \cite{LW20a}. Such a presentation is obtained by modifying the Serre presentation for $\U^\imath_{{\boldsymbol{\varsigma}}}$ given in \cite[Theorem 3.1]{CLW18} (built on earlier work by Kolb and Letzter for $\U^\imath_{{\boldsymbol{\varsigma}}}$). The braid group action on $\widetilde{{\mathbf U}}^\imath$ (see Lemma \ref{lem:Ti}) arises from $\imath$Hall algebras \cite{CLW21, LW21a, LW21b}, and it recovers the braid group action established in \cite{KP11, BK20} for $\U^\imath_{{\boldsymbol{\varsigma}}}$ (who worked with specific parameters). \par We shall use similar definitions and notations \eqref{Bik}-\eqref{Thim} for root vectors in $\widetilde{{\mathbf U}}^\imath$ (i.e. generators in the Drinfeld type presentation) as in \cite[(3.28)-(3.30)]{LW20b}; that is, we shall use $B_{i,k}$ for the real root vectors, $\acute{\Theta}_{i,m}$ for the imaginary root vectors constructed in \cite{BK20} in the rank one case and adapted to our general case, and $\Theta_{i,m}$ for the alternative imaginary root vectors originating from the $\imath$Hall algebra approach \cite{LRW20}. We shall focus on $\Theta_{i,m}$ instead of $\acute{\Theta}_{i,m}$ throughout this paper. \par Let us explain how we formulate the relations \eqref{iDRG0}-\eqref{iDRG6'} for $B_{i,k},\Theta_{i,m}$ which are used in a current presentation for $\widetilde{{\mathbf U}}^\imath$. Relations \eqref{iDRG0}-\eqref{iDRG34} are natural generalizations of \cite[(3.33)-(3.37)]{LW20b} to the case when $c_{ij}$ is arbitrary. \par The remaining task is to obtain a suitable formulation of the Serre relations (in the current presentation). For $c_{ij}=-1$, there are two (equivalent) formulations of the Serre relations available: one is the general version formulated in \cite[(3.32),(3.38)]{LW20b}, and the other is the equal-index version \eqref{iDRG4'} (which is a special case of the general version); the latter is much simpler than the general version and is obtained by applying degree shift automorphisms to the corresponding finite type Serre relation \eqref{eq:S2}. As for their generalizations to $c_{ij}<-1$, the general versions of the Serre relations are going to be extremely complicated, as Lu and Wang's formulation suggests. However, the equal-index versions \eqref{iDRG5'}-\eqref{iDRG6'} can be obtained relatively easily. Hence, we choose to use relations \eqref{iDRG4'}-\eqref{iDRG6'} in our current presentation.
\par There are two supporting examples for the use of this equal-index version of the Serre relations. For affine quantum groups, Damiani in \cite[Theorem 11.18]{Da12} showed that the Drinfeld presentation $ ^{\mathrm{Dr}}\mathbf U$ is equivalent to a reduced current presentation $ ^{\mathrm{Dr}}\mathbf U_{red}$, where the Serre relations are replaced by the corresponding equal-index version (see Proposition~\ref{prop:Da}). Moreover, Lu and Wang showed in \cite[\S 4.7-4.8]{LW20b} that their general Serre relations can be derived from the other defining relations and the equal-index Serre relations \eqref{iDRG4'}. In other words, for $\widetilde{{\mathbf U}}^\imath$ of split ADE type, if we replace the Serre relation in the current presentation formulated in \cite[Definition 3.10]{LW20b} by the equal-index version \eqref{iDRG4'} of itself, we obtain an equivalent presentation. \par Generalizing these phenomena, we define an algebra ${}^{\mathrm{Dr}}\tUi_{red}$ in Definition~\ref{def:Dpr} with defining relations \eqref{iDRG0}-\eqref{iDRG6'}. We shall show that ${}^{\mathrm{Dr}}\tUi_{red}$ is isomorphic to $\widetilde{{\mathbf U}}^\imath$ in Theorem~\ref{DprADE}. We call ${}^{\mathrm{Dr}}\tUi_{red}$ a reduced Drinfeld type presentation. \par In the proof of this isomorphism, most defining relations of ${}^{\mathrm{Dr}}\tUi_{red}$ are verified in $\widetilde{{\mathbf U}}^\imath$ in a similar way as in \cite[\S 4]{LW20b}. A major exception is the relation \eqref{iDRG1'} for $i\neq j$, whose original proof for $c_{ij}=-1$ in \cite[\S 4.7-4.8]{LW20b} is no longer effective when $c_{ij}<-1$, since it requires the help of the general Serre relation. In this paper, we provide a new inductive proof of the relation \eqref{iDRG1'} for $i\neq j$, which is based on a recursive formula \eqref{recur} established uniformly for arbitrary $c_{ij}$ in \S \ref{Verf1}. \par Surprisingly, the base case of the induction is much more challenging, and we establish it case by case depending on $c_{ij}$ in \S \ref{Verf2}. For $c_{ij}=-1$, the base case is verified using the finite type Serre relation. For $c_{ij}=-2$, the base case is derived from formulas for the braid group action. For $c_{ij}=-3$, it turns out that we need both the finite type Serre relation and formulas for the braid group action to prove the base case. \par Two techniques are widely used in our proof of \eqref{iDRG1'} and later in the proof of the Serre relations, analogous to \cite{Da12}. One is $q$-brackets, which allow us to write Serre relations and formulas for the braid group action in compact forms (e.g. \eqref{Se:brkt}, \eqref{Br:brkt1}, etc.) and then deal with them efficiently. The other is the degree shift automorphism $\texttt{\rm T}_{\omega_i}$, coming from the braid group action, which sends $B_{i,k}$ to $B_{i,k-1}$ and fixes $B_{j,l}$ for all $j\neq i$. These degree shift automorphisms allow us to recover a general relation from a more basic version and thus minimize the required amount of work. \par It is still desirable to have general Serre relations, which we shall provide when $c_{ij}=-2$. We next explain our approach toward them. \par For $c_{ij}=-1$, the general Serre relation \eqref{iDRG5} was first formulated in \cite[(3.38), (5.6)]{LW20b}. We offer a more direct proof in terms of generating functions in \S \ref{SeADE}, compared with \cite[\S 4.7,4.8]{LW20b}. This new proof also offers a method to formulate general Serre relations which admits a natural generalization to the cases $c_{ij}<-1$.
\par For $c_{ij}=-2$, we generalize this method and formulate a general Serre relation \eqref{iDRG6} in terms of generating functions. Details of the proof of \eqref{iDRG6} are included in \S \ref{symfun}-\ref{genfor}. Such a formulation has several remarkable features (similar features also hold for $c_{ij}=-1$): \begin{enumerate} \item Each component of its RHS is a finite sum, and the constant term is the same as in the corresponding finite type Serre relation \eqref{eq:S3}; see Remark \ref{rmk:com}. \item It can be viewed as a deformation of the Serre relation in the original Drinfeld presentation; see Remark \ref{rmk:spe}. \end{enumerate} Adding the general Serre relations \eqref{iDRG5}-\eqref{iDRG6} to the reduced presentation, a (complete) Drinfeld type presentation for $\widetilde{{\mathbf U}}^\imath$ of split affine type BCF is formulated and proved in Theorem \ref{DprBCF}. \par For $c_{ij}=-3$, while it is still possible to formulate a version of the general Serre relation, the computation becomes much more involved, and we will skip it. For practical purposes, such as developing the representation theory of $\widetilde{{\mathbf U}}^\imath$, we do not need this. \subsection{Organization} This paper is organized as follows. In Section \ref{Prel}, we set up notation and review the basic theory of affine quantum groups and affine $\imath$quantum groups. In Section \ref{ADE}, we formulate a current presentation for $\widetilde{{\mathbf U}}^\imath$ of arbitrary split affine type in Definition \ref{def:Dpr} and Theorem \ref{DprADE}. In Section \ref{BCF}, we establish a Drinfeld type presentation in terms of generating functions for $\widetilde{{\mathbf U}}^\imath$ in Theorem \ref{DprBCF}.\par In Section \ref{verf}, we verify the relation \eqref{iDRG1'} in the current presentation by induction. We establish a recursive formula for the induction in Section \ref{Verf1} and check the base cases in Section \ref{Verf2}. In Section \ref{Serre}, we verify the general Serre relations: the one for the $c_{ij}=-1$ case is verified in Section \ref{SeADE}, and the one for the $c_{ij}=-2$ case is verified in Sections \ref{symfun}-\ref{genfor}. \vspace{2mm} \noindent {\bf Acknowledgement.} The author would like to thank Ming Lu and his advisor Weiqiang Wang for sharing their work at an early stage and for many helpful discussions and advice. This work is partially supported by the GRA fellowship of Wang's NSF grant DMS-2001351. \section{Preliminaries}\label{Prel} \subsection{Affine Weyl groups} Set $\mathbb{I}_0=\{1,\ldots,n\}$. Let $\mathfrak{g}$ be a simple Lie algebra with Cartan matrix $(c_{ij})_{i,j\in\mathbb{I}_0}$. Let $d_i$ be relatively prime positive integers such that $(d_i c_{ij})_{i,j\in \mathbb{I}_0}$ is a symmetric matrix. Let $\mathcal {R}_0$ denote the root system of $\mathfrak{g}$. Fix a set of simple roots $\{\alpha_i \mid i\in \mathbb{I}_0\}$ for $\mathcal {R}_0$ and denote the corresponding positive system by $\mathcal {R}_0^+$. Let $Q=\bigoplus_{i\in \mathbb{I}_0}\mathbb{Z} \alpha_i$ be the root lattice of $\mathfrak{g}$ and $P$ be the dual lattice of $Q$. The bilinear pairing between $P$ and $Q$ is denoted by $\langle \cdot, \cdot \rangle: P\times Q \rightarrow \mathbb{Z}$. The lattice $P$ is known as the weight lattice of $\mathfrak{g}$, and $P=\bigoplus_{i\in \mathbb{I}_0}\mathbb{Z} \omega_i$, where the $\omega_i$ are the fundamental weights of $\mathfrak{g}$ given by $\langle \omega_i, \alpha_j \rangle =\delta_{i,j}$. We identify $Q$ as a sublattice of $P$ via $\langle \alpha_i,\alpha_j\rangle=d_i c_{ij}$.
Let $\theta$ be the highest root of $\mathfrak{g}$. \par Set $\mathbb{I}=\mathbb{I}_0\cup\{0\}$. Let $\widehat{\mathfrak{g}}$ be the untwisted affine Lie algebra associated to $\mathfrak{g}$, with affine Cartan matrix $(c_{ij})_{i,j\in\mathbb{I}}$. Extend $Q$ to the affine root lattice $\widetilde{Q}:=Q\oplus \mathbb{Z}\alpha_0$. It is known that the element $\delta=\alpha_0+\theta\in \widetilde{Q}$ satisfies $\langle \alpha_i, \delta\rangle =0$ for all $i\in \mathbb{I}$. The root system $\mathcal {R}$ and the set of positive roots $\mathcal {R}^+$ of $\widehat{\mathfrak{g}}$ are defined to be \begin{align} \mathcal {R} &=\{\pm (\beta + k \delta) \mid \beta \in \mathcal {R}_0^+, k \in \mathbb{Z}\} \cup \{m \delta \mid m \in \mathbb{Z}\backslash \{0\} \}, \label{eq:roots} \\ \mathcal {R}^+ &= \{k \delta +\beta \mid \beta \in \mathcal {R}_0^+, k \ge 0\} \cup \{k \delta -\beta \mid \beta \in \mathcal {R}_0^+, k > 0\} \cup \{m \delta \mid m \ge 1\}. \label{eq:roots+} \end{align} Let $s_i$ be the reflection acting on $\widetilde{Q}$ by $s_i(x)=x-\langle x, \alpha_i \rangle\alpha_i$ for $i \in \mathbb{I}$. The Weyl group $W_0$ of $\mathfrak{g}$ and the affine Weyl group $W$ of $\widehat{\mathfrak{g}}$ are the subgroups of $\mathrm{Aut}(\widetilde{Q})$ generated by $s_i,i\in \mathbb{I}_0$, and by $s_i,i\in \mathbb{I}$, respectively.\par The extended affine Weyl group $\widetilde{W}$ is the semi-direct product $W_0 \ltimes P$. It is known that $W \cong W_0 \ltimes Q$, and thus $W$ is identified with a subgroup of $\widetilde{W}$. For $\omega\in P$, write $\omega$ for the element $(1,\omega)\in \widetilde{W}$. For $s\in W_0$, write $s$ for the element $(s,0)\in \widetilde{W}$. \par There is a $\widetilde{W}$-action on $\widetilde{Q}$ extending the $W_0$-action on $\widetilde{Q}$ such that $\omega(\alpha_i)=\alpha_i-\langle \omega, \alpha_i \rangle \delta$ for $\omega\in P,i\in \mathbb{I}$. We identify $P/Q$ with a finite group $\Omega$ of Dynkin diagram automorphisms, and thus $\widetilde{W} \cong\Omega \ltimes W$. The length function $l$ on $W$ extends to $\widetilde{W}$ by setting $l(\tau w)=l(w)$ for $\tau \in \Omega, w\in W$. \subsection{Drinfeld presentation for affine quantum groups} Let $v$ be the quantum parameter and $v_i=v^{d_i}$. Define, for $n\in \mathbb{Z}$ and $a\in \mathbb{Q}(v)$, \[ [n]_{a}=\frac{a^n-a^{-n}}{a-a^{-1}}, \qquad [n]_a!=[n]_a[n-1]_a\cdots [1]_a,\qquad \qbinom{n}{s}_a=\frac{[n]_a!}{[s]_a![n-s]_a!}. \] Write $[A,B]=AB-BA$ and $[A,B]_a=AB-aBA$. \par Let $\mathbf U$ be the Drinfeld-Jimbo quantum group associated to $\widehat{\mathfrak{g}}$ with Chevalley generators $\{E_i,F_i,K_i^{\pm1} \mid i\in \mathbb{I}\}$.
Let $\mathbf U^-$ be the subalgebra of $\mathbf U$ generated by $F_i,i\in \mathbb{I}$.\par It was formulated in \cite{Dr88} that $\mathbf U$ is isomorphic to $^{\mathrm{Dr}}\mathbf U$, where $^{\mathrm{Dr}}\mathbf U$ is the $\mathbb{Q}(v)$-algebra generated by $x_{i k}^{\pm}$, $h_{i l}$, $K_i^{\pm 1}$, $C^{\pm \frac12}$, for $i\in\mathbb{I}_0$, $k\in\mathbb{Z}$, and $l\in\mathbb{Z}\backslash\{0\}$, subject to the following relations: \begin{align} &C^{\frac{1}{2}}, C^{- \frac12} \text{ are central,}\label{Dr1} \\ [K_i,K_j] & = [K_i,h_{j l}] =0, \quad K_i K_i^{-1} =C^{\frac12} C^{- \frac12} =1, \\ [h_{ik},h_{jl}] &= \delta_{k, -l} \frac{[k c_{ij}]_{v_i}}{k} \frac{C^k -C^{-k}}{v_j -v_j^{-1}},\label{Dr3} \\ K_ix_{jk}^{\pm} K_i^{-1} &=v_i^{\pm c_{ij}} x_{jk}^{\pm}, \\ [h_{i k},x_{j l}^{\pm}] &=\pm\frac{[kc_{ij}]_{v_i}}{k} C^{\mp \frac{|k|}2} x_{j,k+l}^{\pm},\label{Dr5} \\ [x_{i k}^+,x_{j l}^-] &=\delta_{ij} {(C^{\frac{k-l}2} K_i\psi_{i,k+l} - C^{\frac{l-k}2} K_i^{-1} \varphi_{i,k+l})}, \\ x_{i,k+1}^{\pm} x_{j,l}^{\pm}-v_i^{\pm c_{ij}} x_{j,l}^{\pm} x_{i,k+1}^{\pm} &=v_i^{\pm c_{ij}} x_{i,k}^{\pm} x_{j,l+1}^{\pm}- x_{j,l+1}^{\pm} x_{i,k}^{\pm},\label{Dr7} \\ \operatorname{Sym}\nolimits_{k_1,\dots,k_r}\sum_{t=0}^{r} (-1)^t \qbinom{r}{t}_{v_i} & x_{i,k_1}^{\pm}\cdots x_{i,k_t}^{\pm} x_{j,l}^{\pm} x_{i,k_{t+1}}^{\pm} \cdots x_{i,k_r}^{\pm} =0, \text{ for } r= 1-c_{ij}\; (i\neq j),\label{Dr8} \end{align} where $\operatorname{Sym}\nolimits_{k_1,\dots,k_r}$ denotes the symmetrization with respect to the indices $k_1,\dots,k_r$, and $\psi_{i,k}$ and $\varphi_{i,k}$ are defined by the following functional equations: \begin{align*} 1+ \sum_{m\geq 1} (v_i-v_i^{-1})\psi_{i,m}u^m &= \exp\Big((v_i -v_i^{-1}) \sum_{m\ge 1} h_{i,m}u^m\Big), \\ 1+ \sum_{m\geq1 } (v_i-v_i^{-1}) \varphi_{i, -m}u^{-m} & = \exp \Big((v_i -v_i^{-1}) \sum_{m\ge 1} h_{i,-m}u^{-m}\Big). \end{align*} We refer to \cite{Be94} for a proof of the isomorphism $\phi\colon {}^{\mathrm{Dr}}\mathbf U \to\mathbf U$.\par In \cite[\S 11]{Da12}, the general Serre relation \eqref{Dr8} is proved to be redundant except in the case $k_1=k_2=\cdots=k_{1-c_{ij}}$. Let $^{\mathrm{Dr}}\mathbf U_{red}$ (the subscript $red$ stands for the reduced presentation) denote the $\mathbb{Q}(v)$-algebra generated by $x_{i k}^{\pm}$, $h_{i l}$, $K_i^{\pm 1}$, $C^{\pm \frac12}$ subject to relations \eqref{Dr1}-\eqref{Dr7} and \begin{equation} \sum_{t=0}^{r} (-1)^t \qbinom{r}{t}_{v_i} \big(x_{i,k }^{\pm}\big)^{r-t} x_{j,l}^{\pm} \big( x_{i,k }^{\pm}\big)^t =0, \text{ for } r= 1-c_{ij},\; k,l\in \mathbb{Z}\; (i\neq j). \end{equation} \begin{proposition}[\text{\cite[Theorem 11.18]{Da12}}]\label{prop:Da} $^{\mathrm{Dr}}\mathbf U$ is isomorphic to $^{\mathrm{Dr}}\mathbf U_{red} $ by sending the generators $x_{i k}^{\pm}$, $h_{i l}$, $K_i^{\pm 1}$, $C^{\pm \frac12}$ to those with the same names. \end{proposition} Note that Damiani's original result is stronger than the one stated in Proposition \ref{prop:Da}, since it also involves a reduction of relations \eqref{Dr3} and \eqref{Dr5}, but this version is sufficient for our purpose.\par Composing the isomorphism in Proposition \ref{prop:Da} with $\phi$, we obtain an isomorphism \begin{equation}\label{phired} \phi_{red}:\,^{\mathrm{Dr}}\mathbf U_{red}\longrightarrow \mathbf U. \end{equation} \subsection{Universal $\imath$quantum groups of split affine type} We recall the definition of the universal $\imath$quantum group of split affine type via its Serre presentation, following \cite[\S 3.3]{LW20b}.
\begin{definition}\label{def:tUi} The universal (split) affine $\imath$quantum group $\widetilde{{\mathbf U}}^\imath:=\widetilde{{\mathbf U}}^\imath(\widehat{\mathfrak{g}})$ associated to $\widehat{\mathfrak{g}}$ is the $\mathbb{Q}(v)$-algebra generated by $B_i,\mathbb{K}_i^{\pm 1}, i\in \mathbb{I}$, subject to \begin{align} \mathbb{K}_i\mathbb{K}_i^{-1} =\mathbb{K}_i^{-1}\mathbb{K}_i=1, & \quad \mathbb{K}_i \text{ is central}, \\ B_iB_j -B_j B_i&=0, \quad \qquad\qquad\qquad\qquad\qquad \text{ if } c_{ij}=0, \label{eq:S1} \\ B_i^2 B_j -[2]_{v_i} B_i B_j B_i +B_j B_i^2 &= - v_i^{-1} B_j \mathbb{K}_i, \quad\qquad\qquad\qquad \text{ if }c_{ij}=-1, \label{eq:S2} \\ \sum_{r=0}^3 (-1)^r \qbinom{3}{r}_{v_i} B_i^{3-r} B_j B_i^{r} &= -v_i^{-1} [2]_{v_i}^2 (B_iB_j-B_jB_i) \mathbb{K}_i, \; \text{ if }c_{ij}=-2, \label{eq:S3} \\ \label{eq:S4} \sum_{s=0}^4(-1)^s \qbinom{4}{s}_{v_i} B_{i}^{4-s}B_{j} B_{i}^s &= -v_i^{-1}(1+[3]_{v_i}^2)( B_{j} B_{i}^2+ B_{i}^2 B_{j})\mathbb{K}_i \\\notag &\quad+v_i^{-1}[4]_{v_i} (1+[2]_{v_i}^2) B_{i} B_{j} B_{i} \mathbb{K}_i \\\notag & \quad-v_i^{-2}[3]^2_{v_i} B_{j} \mathbb{K}_i^2, \qquad\qquad\quad \text{ if } c_{ij}=-3. \end{align} \end{definition} \begin{remark}\label{rmk:cred} For any ${{\boldsymbol{\varsigma}}}=(\varsigma_i)_{i\in\mathbb{I}} \in (\mathbb{Q}^\times)^\mathbb{I} $, an affine $\imath$quantum group $\U^\imath_{{\boldsymbol{\varsigma}}}$ with parameters is introduced in \cite{Ko14}, generalizing G. Letzter's work for finite type. $\U^\imath_{{\boldsymbol{\varsigma}}}$ admits a Serre presentation formulated in \cite[Theorem 7.1]{Ko14} and also in \cite[Theorem 3.1]{CLW18}. \par The presentation for $\widetilde{{\mathbf U}}^\imath$ in Definition \ref{def:tUi} can be obtained by replacing the parameter $-v_i^2\varsigma_i$ in the Serre presentation of $\U^\imath_{{\boldsymbol{\varsigma}}}$ formulated in \cite[Theorem 3.1]{CLW18} by a central element $\mathbb{K}_i$ for $i\in \mathbb{I}$. (set $\tau=id$ there for split type) Hence, $\U^\imath_{{\boldsymbol{\varsigma}}}$ is related to $\widetilde{{\mathbf U}}^\imath$ by a central reduction $\U^\imath_{{\boldsymbol{\varsigma}}}:= \widetilde{{\mathbf U}}^\imath/(\mathbb{K}_i + v_i^2 \varsigma_i| i\in \mathbb{I}) $. \end{remark} \begin{remark} A Serre presentation for $\widetilde{{\mathbf U}}^\imath$ is also formulated with generators $B_i,\widetilde{k}_i,i\in \mathbb{I}$ in \cite[Proposition 6.4]{LW19a} for finite ADE type and in \cite[Theorem 4.2]{LW20a} for symmetric Kac-Moody type. The central element $\mathbb{K}_i$ is related to $\widetilde{k}_i$ by $\mathbb{K}_i=-v_i^2\widetilde{k}_i$. We are following notations in \cite{LW20b} in this paper. \end{remark} \begin{remark}\label{deg} $\widetilde{{\mathbf U}}^\imath$ has a $\mathbb{Z} \mathbb{I}$-grading by setting \[ \mathrm{wt}(B_i) = \alpha_i, \mathrm{wt}(\mathbb{K}_i)=2\alpha_i,\qquad i\in \mathbb{I}. \] We say that $B_i$ has weight $\alpha_i$. \end{remark} \begin{remark}\label{filt} $\widetilde{{\mathbf U}}^\imath$ has a natural filtered algebra structure by setting \[ \widetilde{\U}^{\imath,m}= \mathbb{Q}(v)\text{-span}\{B_{i_1} B_{i_2} \cdots B_{i_r}\mathbb{K}_\mu| \mu\in \mathbb{Z} \mathbb{I}, r\leq m , i_k\in\mathbb{I}\}. 
\] According to \cite{Let02,Ko14}, the associated graded algebra with respect to this filtration is \begin{equation} \mathrm{gr} \widetilde{{\mathbf U}}^\imath \cong \mathbf U^- \otimes \mathbb{Q}(v)[\mathbb{K}_i^\pm | i\in \mathbb{I}], \qquad \overline{B_i}\mapsto F_i, \quad \overline{\mathbb{K}}_i \mapsto \mathbb{K}_i \; (i\in \mathbb{I}). \end{equation} \end{remark} The following formulas for the braid group action on $\widetilde{{\mathbf U}}^\imath$ of finite ADE type were obtained in \cite{LW21a}; their generalization to Kac-Moody types is conjectured in \cite[Conjecture 6.5]{CLW21} and proved in \cite{LW21b}. \begin{lemma}[ \text{\cite[Lemma 5.1]{LW21a}, \cite[Conjecture 6.5]{CLW21}, \cite{LW21b}}] \label{lem:Ti} For $i\in \mathbb{I}$, there exists an automorphism $\texttt{\rm T}_i$ of the $\mathbb{Q}(v)$-algebra $\widetilde{{\mathbf U}}^\imath$ such that $\texttt{\rm T}_i(\mathbb{K}_\mu) =\mathbb{K}_{s_i\mu}$, and \[ \texttt{\rm T}_i(B_j)= \begin{cases} B_i \mathbb{K}_i^{-1}, &\text{ if }j=i,\\ B_j, &\text{ if } c_{ij}=0, \\ B_jB_i-v_i B_iB_j, & \text{ if }c_{ij}=-1, \\ [2]_{v_i}^{-1} \big(B_jB_i^{2} -v_i[2]_{v_i} B_i B_jB_i +v_i^2 B_i^{2} B_j \big) + B_j\mathbb{K}_i, & \text{ if }c_{ij}=-2,\\ [3]_{v_i}^{-1}[2]_{v_i}^{-1} \big(B_jB_i^{3} -v_i[3]_{v_i} B_i B_jB_i^2 +v_i^2[3]_{v_i} B_i^{2} B_jB_i & \\ -v_i^3 B_i^3 B_j + v_i^{-1}[B_j,B_i]_{v_i^3}\mathbb{K}_i \big)+[B_j,B_i]_{v_i}\mathbb{K}_i, & \text{ if }c_{ij}=-3. \end{cases} \] for $\mu\in \mathbb{Z}\mathbb{I}$ and $j\in \mathbb{I}$. Moreover, $\texttt{\rm T}_i$ $(i\in \mathbb{I})$ satisfy the braid relations, i.e., $\texttt{\rm T}_i \texttt{\rm T}_j =\texttt{\rm T}_j \texttt{\rm T}_i$ if $c_{ij}=0$, and $\texttt{\rm T}_i \texttt{\rm T}_j \texttt{\rm T}_i =\texttt{\rm T}_j \texttt{\rm T}_i \texttt{\rm T}_j$ if $c_{ij}c_{ji}=1$, and $\texttt{\rm T}_i \texttt{\rm T}_j \texttt{\rm T}_i \texttt{\rm T}_j=\texttt{\rm T}_j \texttt{\rm T}_i \texttt{\rm T}_j\texttt{\rm T}_i$ if $c_{ij}c_{ji}=2$, and $\texttt{\rm T}_i \texttt{\rm T}_j \texttt{\rm T}_i \texttt{\rm T}_j \texttt{\rm T}_i \texttt{\rm T}_j=\texttt{\rm T}_j \texttt{\rm T}_i \texttt{\rm T}_j\texttt{\rm T}_i \texttt{\rm T}_j \texttt{\rm T}_i $ if $c_{ij}c_{ji}=3$. \end{lemma} Its inverse $\texttt{\rm T}_i^{-1}$ is explicitly given by $\texttt{\rm T}_i^{-1} (\mathbb{K}_\mu) =\mathbb{K}_{s_i\mu}$, and \[ \texttt{\rm T}_i^{-1} (B_j)= \begin{cases} B_i \mathbb{K}_i^{-1} , &\text{ if }j=i,\\ B_j, &\text{ if } c_{ij}=0, \\ B_iB_j -v_i B_jB_i, & \text{ if }c_{ij}=-1, \\ {[}2]_{v_i}^{-1} \big( B_i^{2}B_j-v_i[2]_{v_i} B_i B_j B_i+v_i^2 B_j B_i^{2} \big) +B_j\mathbb{K}_i, & \text{ if }c_{ij}=-2,\\ [3]_{v_i}^{-1}[2]_{v_i}^{-1} \big(B_i^{3}B_j -v_i[3]_{v_i} B_i^2 B_j B_i +v_i^2[3]_{v_i} B_i B_j B_i^{2}&\\ -v_i^3 B_j B_i^3 + v_i^{-1}[ B_i,B_j]_{v_i^3}\mathbb{K}_i \big)+[B_i,B_j]_{v_i}\mathbb{K}_i, & \text{ if }c_{ij}=-3. \end{cases} \] \begin{remark} For the specific parameters ${\boldsymbol{\varsigma}}=(\varsigma_i)_{i\in \mathbb{I}}$ with $\varsigma_i=-v_i^{-2}$, $i\in \mathbb{I}$, a braid group action on $\U^\imath_{{\boldsymbol{\varsigma}}}$ of split finite type is constructed in \cite[Theorem 3.3]{KP11}. By taking the central reduction in Remark \ref{rmk:cred}, $\texttt{\rm T}_i$ descends to an automorphism of $\U^\imath_{{\boldsymbol{\varsigma}}}$, which recovers Kolb and Pellegrini's braid group action.\par However, for general parameters ${\boldsymbol{\varsigma}}$, $\texttt{\rm T}_i$ fails to descend to an automorphism of $\U^\imath_{{\boldsymbol{\varsigma}}}$ via the central reduction.
(A quick way to see this: since $\texttt{\rm T}_i(\mathbb{K}_i)=\mathbb{K}_i^{-1}$, if $\texttt{\rm T}_i$ reduces to an automorphism on $\U^\imath_{{\boldsymbol{\varsigma}}}$, then the image of $\mathbb{K}_i$, as a scalar in $\U^\imath_{{\boldsymbol{\varsigma}}}$, must be $\pm 1$ and thus $\varsigma_i=\pm v_i^{-2}$.)\par A natural generalization of Kolb and Pellegrini's braid group action to the split affine rank one case is formulated in \cite[\S 2]{BK20} for equal parameters ${\boldsymbol{\varsigma}}=(\varsigma_0,\varsigma_1),\varsigma_0=\varsigma_1$. \end{remark} For $w\in \widetilde{W}$ with a reduced expression $w =\sigma s_{i_1} \ldots s_{i_r},\sigma \in \Omega$, we define $\texttt{\rm T}_w = \sigma \texttt{\rm T}_{i_1} \ldots \texttt{\rm T}_{i_r}$, where $\sigma$ acts on $\widetilde{{\mathbf U}}^\imath$ by $\sigma(B_i) =B_{\sigma i}, \sigma(\mathbb{K}_i) =\mathbb{K}_{\sigma i}$, for all $i\in \mathbb{I}$. By Lemma~\ref{lem:Ti}, $\texttt{\rm T}_w$ is independent of the choice of reduced expressions for $w$. \par The first property for this braid group action can be obtained by adapting \cite[\S2.7]{Lus89} to our setting. \begin{lemma} \label{lem:Lus} Let $x\in P$, $i, j \in \mathbb{I}_0$. \begin{itemize} \item[(a)] If $s_i x=x s_i$, then $\texttt{\rm T}_i \texttt{\rm T}_x=\texttt{\rm T}_x \texttt{\rm T}_i$. \item[(b)] If $s_i x s_i= \alpha_i^{-1}x=\prod_{k\in \mathbb{I}_0} \omega_k^{a_k}$, then we have $\texttt{\rm T}_i^{-1}\texttt{\rm T}_x\texttt{\rm T}_i^{-1}=\prod_{k\in \mathbb{I}_0} \texttt{\rm T}_{\omega_k}^{a_k}$, in particular, $\texttt{\rm T}_i^{-1}\texttt{\rm T}_{\omega_i}\texttt{\rm T}_i^{-1}=\texttt{\rm T}_{\omega_i}^{-1}\prod_{k\neq i}\texttt{\rm T}_{\omega_k}^{-c_{ik}}$. \item[(c)] $\texttt{\rm T}_{\omega_i} \texttt{\rm T}_{\omega_j} = \texttt{\rm T}_{\omega_j} \texttt{\rm T}_{\omega_i}.$ \end{itemize} \end{lemma} For $i \in \mathbb{I}$, just as in \cite[\S 3]{Be94}, let $\omega_i'= \omega_i s_i$ and $\widetilde{{\mathbf U}}^\imath_{[i]}$ be the subalgebra of $\widetilde{{\mathbf U}}^\imath$ generated by $B_i, \texttt{\rm T}_{\omega'_i}(B_i), \mathbb{K}_i, \mathbb{K}_{\delta-\alpha_i}.$ Since $l(\omega'_i)=l(\omega_i)-1$, we have \begin{equation} \texttt{\rm T}_{\omega_i}=\texttt{\rm T}_{\omega'_i} \texttt{\rm T}_i. \end{equation} The following properties for this braid group action on $\widetilde{{\mathbf U}}^\imath$ are natural generalizations of corresponding results formulated in \cite[\S 3.3]{LW20b}. \begin{lemma}[ \text{\cite[Lemma 3.5-3.6 and Proposition 3.9]{LW20b}} ]\label{lem:bra} Let $i\in \mathbb{I}$. \begin{itemize} \item[(a)] We have $\texttt{\rm T}_w (B_i) = B_{w i}$, for any $w \in W$ such that $wi \in \mathbb{I}$. \item[(b)] We have $\texttt{\rm T}_{\omega_j}(x)=x$, for any $j\neq i$ and $x\in\widetilde{{\mathbf U}}^\imath_{[i]}$. \item[(c)] There exists a $\mathbb{Q}(v)$-algebra isomorphism $\aleph_i: \widetilde{{\mathbf U}}^\imath(\widehat{\mathfrak{sl}}_2) \rightarrow \widetilde{{\mathbf U}}^\imath_{[i]}$, which sends $B_1 \mapsto B_i, B_0 \mapsto \texttt{\rm T}_{\omega_i'} (B_i), \mathbb{K}_1 \mapsto \mathbb{K}_i, \mathbb{K}_0 \mapsto \mathbb{K}_\delta \mathbb{K}_i^{-1}$. \end{itemize} \end{lemma} \section{Drinfeld type presentations for affine $\imath$quantum groups}\label{Dpr} \subsection{A reduced Drinfeld type presentation for $\widetilde{{\mathbf U}}^\imath$ of split affine type}\label{ADE} New generators $B_{i,k},\Theta_{i,m}$ are introduced in \cite[(3.28)-(3.30)]{LW20b} for $\widetilde{{\mathbf U}}^\imath$ of split affine ADE type. 
We define the elements $B_{i,k},\acute{\Theta}_{i,m},\Theta_{i,m}$ in essentially the same way for $\widetilde{{\mathbf U}}^\imath$ of arbitrary split affine type. \par Define a sign function \[ o(\cdot): \mathbb{I} \longrightarrow \{\pm 1\}, \] such that $o(i) o(j)=-1$ whenever $c_{ij} <0$.\par Define elements $B_{i,k},\acute{\Theta}_{i,m},\Theta_{i,m}$ in $\widetilde{{\mathbf U}}^\imath$ for $i\in \mathbb{I}_0$, $k\in \mathbb{Z}$ and $m\ge 1$ by \begin{align} B_{i,k} &= o(i)^k \texttt{\rm T}_{\omega_i}^{-k} (B_i), \label{Bik} \\ \acute{\Theta}_{i,m} &= o(i)^m \Big(-B_{i,m-1} \texttt{\rm T}_{\omega_i'} (B_i) +v_i^{2} \texttt{\rm T}_{\omega_i'} (B_i) B_{i,m-1} \label{Thim1} \\ & \qquad\qquad\qquad\qquad + (v_i^{2}-1)\sum_{p=0}^{m-2} B_{i,p} B_{i,m-p-2} \mathbb{K}_{i}^{-1}\mathbb{K}_{\delta} \Big), \notag \\ \Theta_{i,m} &=\acute{\Theta}_{i,m} - \sum\limits_{a=1}^{\lfloor\frac{m-1}{2}\rfloor}(v_i^2-1) v_i^{-2a} \acute{\Theta}_{i,m-2a}\mathbb{K}_{a\delta} -\delta_{m,ev} v_i^{1-m} \mathbb{K}_{\frac{m}{2}\delta}. \label{Thim} \end{align} In particular, $B_{i,0}=B_i$. The elements $B_{i,k},\Theta_{i,l}$ are homogeneous with respect to the $\mathbb{Z} \mathbb{I}$-grading on $\widetilde{{\mathbf U}}^\imath$, with weights \[ \mathrm{wt} (B_{i,k})=\alpha_i + k\delta,\qquad \mathrm{wt}(\Theta_{i,l})=l\delta. \] Set ${\Theta}_{i,0} =(v_i-v_i^{-1})^{-1}$ and ${\Theta}_{i,m} =0$ for $m<0$. With the root vectors defined above, a Drinfeld type presentation for the affine $\imath$quantum group of split ADE type was introduced in \cite[\S 3.4]{LW20b}. By replacing $v$ by $v_i$ and adding the equal-index version of the Serre relations, a current presentation for $\widetilde{{\mathbf U}}^\imath$ of arbitrary split affine type is given in Definition \ref{def:Dpr}. We call it a reduced Drinfeld type presentation for $\widetilde{{\mathbf U}}^\imath$, since it is an $\imath$analogue of the reduced Drinfeld presentation $^{\mathrm{Dr}}\mathbf U_{red}$ for affine quantum groups.
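For the reader's convenience, let us record the first few cases of \eqref{Thim1}-\eqref{Thim} explicitly (a routine unwinding of the definitions, in which the sums are empty except where shown): \begin{align*} \Theta_{i,1} &= \acute{\Theta}_{i,1} = o(i)\big(-B_{i,0} \texttt{\rm T}_{\omega_i'} (B_i) +v_i^{2} \texttt{\rm T}_{\omega_i'} (B_i) B_{i,0}\big), \\ \Theta_{i,2} &= \acute{\Theta}_{i,2} - v_i^{-1} \mathbb{K}_{\delta}, \qquad \Theta_{i,3} = \acute{\Theta}_{i,3} - (v_i^2-1) v_i^{-2} \acute{\Theta}_{i,1}\mathbb{K}_{\delta}. \end{align*}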
\begin{definition}\label{def:Dpr} Let $ {}^{\mathrm{Dr}}\tUi_{red} $ be the $\mathbb{Q}(v)$-algebra generated by $\mathbb{K}_{i}^{\pm1}$, $C^{\pm1}$, $H_{i,m}$ and $B_{i,l}$, where $i\in \mathbb{I}_0$, $m \in \mathbb{Z}_{\geq 1}$, $l\in\mathbb{Z}$, subject to the following relations, for $m,n \in \mathbb{Z}_{\geq1}$ and $k,l\in \mathbb{Z}$: \begin{align} & \mathbb{K}_i, C \text{ are central, }\quad [H_{i,m},H_{j,n}]=0, \quad \mathbb{K}_i\mathbb{K}_i^{-1}=1, \;\; C C^{-1}=1,\label{iDRG0} \\ &[H_{i,m},B_{j,l}]=\frac{[mc_{ij}]_{v_i}}{m} B_{j,l+m}-\frac{[mc_{ij}]_{v_i}}{m} B_{j,l-m}C^m,\label{iDRG1'} \\ &[B_{i,k}, B_{j,l+1}]_{v_i^{-c_{ij}}} -v_i^{-c_{ij}} [B_{i,k+1}, B_{j,l}]_{v_i^{c_{ij}}}=0, \text{ if }i\neq j,\label{iDRG2'} \\ &[B_{i,k}, B_{i,l+1}]_{v_i^{-2}} -v_i^{-2} [B_{i,k+1}, B_{i,l}]_{v_i^{2}} =v_i^{-2}\Theta_{i,l-k+1} C^k \mathbb{K}_i-v_i^{-4}\Theta_{i,l-k-1} C^{k+1} \mathbb{K}_i\label{iDRG3'} \\ &\qquad\qquad\qquad\qquad\qquad\qquad\quad\quad\quad +v_i^{-2}\Theta_{i,k-l+1} C^l \mathbb{K}_i-v_i^{-4}\Theta_{i,k-l-1} C^{l+1} \mathbb{K}_i, \notag \\\label{iDRG34} &[B_{i,k} ,B_{j,l}]=0,\qquad \text{ if }c_{ij}=0, \\ & \sum_{s=0}^2(-1)^s \qbinom{2}{s}_{v_i} B_{i,k}^{2-s}B_{j,l} B_{i,k}^s =-v_i^{-1} B_{j,l} \mathbb{K}_i C^k,\qquad\text{ if } c_{ij}=-1,\label{iDRG4'} \\ \label{iDRG5'} &\sum_{s=0}^3(-1)^s \qbinom{3}{s}_{v_i} B_{i,k}^{3-s}B_{j,l} B_{i,k}^s =-v_i^{-1} [2]_{v_i}^2 (B_{i,k} B_{j,l}-B_{j,l} B_{i,k})\mathbb{K}_i C^k,\qquad\text{ if } c_{ij}=-2, \\\label{iDRG6'} &\sum_{s=0}^4(-1)^s \qbinom{4}{s}_{v_i} B_{i,k}^{4-s}B_{j,l} B_{i,k}^s = -v_i^{-1}(1+[3]_{v_i}^2)( B_{j,l} B_{i,k}^2+ B_{i,k}^2 B_{j,l})\mathbb{K}_i C^k\\\notag &\qquad +v_i^{-1}[4]_{v_i} (1+[2]_{v_i}^2) B_{i,k} B_{j,l} B_{i,k} \mathbb{K}_i C^k-v_i^{-2}[3]^2_{v_i} B_{j,l} \mathbb{K}_i^2 C^{2k} ,\qquad\text{ if } c_{ij}=-3, \end{align} where $\Theta_{i,m}$ are related to $H_{i,m}$ by the following functional equation: \begin{align}\label{H-TH} 1+ \sum_{m\geq 1} (v_i-v_i^{-1})\Theta_{i,m} u^m = \exp\Big( (v_i-v_i^{-1}) \sum_{m\geq 1} H_{i,m} u^m \Big). \end{align} \end{definition} \begin{theorem}\label{DprADE} There is a $\mathbb{Q}(v)$-algebra isomorphism $\Phi_{red}: {}^{\mathrm{Dr}}\tUi_{red} \to \widetilde{{\mathbf U}}^\imath$, which sends \begin{align*} B_{i,k} \mapsto B_{i,k},\quad H_{i,m} \mapsto H_{i,m},\quad \Theta_{i,m}\mapsto \Theta_{i,m},\quad \mathbb{K}_i \mapsto \mathbb{K}_i, \quad C \mapsto \mathbb{K}_\delta, \end{align*} for $i\in \mathbb{I}_0$, $k\in \mathbb{Z}$ and $m\geq 1$. \end{theorem} \begin{proof} Most defining relations of ${}^{\mathrm{Dr}}\tUi_{red}$ are verified in $\widetilde{{\mathbf U}}^\imath$ in similar ways as in \cite{LW20b}, except the relation \eqref{iDRG1'} for $i\neq j$; we postpone the details of the proof of this relation to Section \ref{verf}.\par Relations \eqref{iDRG0}-\eqref{iDRG1'} for $i=j$ and the relation \eqref{iDRG3'} follow from Lemma \ref{lem:bra}(c) and the rank one computation offered by Lu and Wang; see \cite[Theorem 2.16]{LW20b} for a summary.\par The relation \eqref{iDRG2'} is proved as follows. Since $d_i c_{ij}=d_j c_{ji}$, we have $v_i^{c_{ij}}=v_j^{c_{ji}}$. Then the LHS of the relation \eqref{iDRG2'} is symmetric with respect to $(i,k)$ and $(j,l)$, i.e., we have \[ [B_{i,k}, B_{j,l+1}]_{v_i^{-c_{ij}}} - v_i^{-c_{ij}} [B_{i,k+1}, B_{j,l}]_{v_i^{c_{ij}}}= [B_{j,l}, B_{i,k+1}]_{v_j^{-c_{ji}}} - v_j^{-c_{ji}} [B_{j,l+1}, B_{i,k}]_{v_j^{c_{j i}}}. \] Hence, it suffices to prove the relation \eqref{iDRG2'} for $c_{ij}=-1,0$.
For these two cases, \eqref{iDRG2'} is verified in the same way as \cite[\S 4.2]{LW20b} with the help of Lemma \ref{lem:Lus} and Lemma \ref{lem:bra}(b).\par The verification of the relation \eqref{iDRG1'} for $i\neq j$ is given in Section \ref{verf}, using the other defining relations \eqref{iDRG2'}-\eqref{iDRG3'}; note that the proofs of these two relations, as provided above, do not rely on the relation \eqref{iDRG1'}, so the argument is not circular. \par The relation \eqref{iDRG0} for $i\neq j$ is verified using \eqref{iDRG1'} and \eqref{iDRG3'} in a similar way as in \cite[\S 4.5]{LW20b}. Relations \eqref{iDRG34}-\eqref{iDRG6'} are obtained by applying $\texttt{\rm T}_{\omega_i}^{-k} \texttt{\rm T}_{\omega_j}^{-l}$ to the finite type Serre relations \eqref{eq:S1}-\eqref{eq:S4} respectively. Hence, $\Phi_{red}$ is a well-defined homomorphism.\par The surjectivity and injectivity of $\Phi_{red}$ follow by arguments similar to those in \cite[proof of Theorem 3.13]{LW20b}. (For surjectivity, one needs to replace all $ ^{\mathrm{Dr}}\widetilde{{\mathbf U}}^\imath$ and $ ^{\mathrm{Dr}}\mathbf U$ in their arguments by ${}^{\mathrm{Dr}}\tUi_{red}$ and $ ^{\mathrm{Dr}}\mathbf U_{red} $ respectively, and follow similar arguments there.) \end{proof} Define generating functions \begin{align}\label{eq:Genfun} \begin{cases} {\mathbf B }_{i}(z) =\sum_{k\in\mathbb{Z}} B_{i,k}z^{k}, \\ \boldsymbol{\Theta}_{i}(z) =1+ \sum_{m > 0}(v_i-v_i^{-1})\Theta_{i,m}z^{m}, \\ {\mathbf H}_i(u)=\sum_{m\geq 1} H_{i,m} u^m, \\ \boldsymbol{\Delta}(z)=\sum_{k\in\mathbb{Z}} C^k z^k. \end{cases} \end{align} Then \eqref{iDRG1'} can be written in terms of generating functions as \begin{align}\label{H-TH2} &(v_i-v_i^{-1})\Big[z\frac{\partial}{\partial z}{\mathbf H}_i(z),{\mathbf B }_j(w)\Big]\\\notag = &\left(\frac{1}{1-v_i^{c_{ij}}zw^{-1}}-\frac{1}{1-v_i^{-c_{ij}}zw^{-1}}-\frac{1}{1-v_i^{c_{ij}} z wC}+\frac{1}{1-v_i^{-c_{ij}}zwC}\right){\mathbf B }_j(w), \end{align} and \eqref{H-TH} can be written as \begin{equation} \boldsymbol{\Theta}_i(z)=\exp((v_i-v_i^{-1}) {\mathbf H}_i(z)). \end{equation} We now conjugate \eqref{H-TH2} by $ \boldsymbol{\Theta}_i(z)$. Since $(v_i-v_i^{-1}) \boldsymbol{\Theta}_i(z)\,z\frac{\partial}{\partial z}{\mathbf H}_i(z)=z\frac{\partial}{\partial z} \boldsymbol{\Theta}_i(z) $, we have \begin{align*} &z\frac{\partial}{\partial z}\big( \boldsymbol{\Theta}_i(z) {\mathbf B }_j(w) \boldsymbol{\Theta}_i^{-1}(z)\big) \\ = &\left(\frac{1}{1-v_i^{c_{ij}}zw^{-1}}-\frac{1}{1-v_i^{-c_{ij}}zw^{-1}}-\frac{1}{1-v_i^{c_{ij}} z wC}+\frac{1}{1-v_i^{-c_{ij}}zwC}\right) \boldsymbol{\Theta}_i(z){\mathbf B }_j(w) \boldsymbol{\Theta}_i(z)^{-1}. \end{align*} Dividing both sides by $z$ and integrating with respect to $z$, we obtain the following equivalent formulation of \eqref{H-TH2}: \begin{equation}\label{H-TH3} \boldsymbol{\Theta}_i(z) {\mathbf B }_j(w)=\left(\frac{1 -v_i^{-c_{ij}}zw^{-1}}{1 -v_i^{c_{ij}}zw^{-1}} \cdot \frac{1 -v_i^{c_{ij}} zw C}{1 -v_i^{-c_{ij}}zw C}\right){\mathbf B }_j(w) \boldsymbol{\Theta}_i(z). \end{equation} Hence, \eqref{H-TH2} is equivalent to \eqref{H-TH3}. Relation \eqref{H-TH3} can be written componentwise as the following relation: \begin{equation}\label{iDRG1''} [\Theta_{i,k},B_{j,l}]+[\Theta_{i,k-2},B_{j,l}]C=v_i^{c_{ij}}[\Theta_{i,k-1},B_{j,l+1}]_{v_i^{-2c_{ij}}}+v_i^{-c_{ij}}[\Theta_{i,k-1},B_{j,l-1}]_{v_i^{2c_{ij}}}C. \end{equation} Thus, \eqref{iDRG1'} is equivalent to \eqref{iDRG1''}. See also \cite[Proposition 2.8]{LW20b} for the rank one case, and \cite[Proposition 3.12]{LW20b} for the general case.
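As an illustration of \eqref{H-TH}, comparing coefficients of $u,u^2,u^3$ on both sides (the $H_{i,m}$ pairwise commute by \eqref{iDRG0}, so no ordering issues arise) gives \[ \Theta_{i,1}=H_{i,1},\qquad \Theta_{i,2}=H_{i,2}+\frac{v_i-v_i^{-1}}{2}H_{i,1}^2,\qquad \Theta_{i,3}=H_{i,3}+(v_i-v_i^{-1})H_{i,1}H_{i,2}+\frac{(v_i-v_i^{-1})^2}{6}H_{i,1}^3. \]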
\begin{corollary} There exists a $\mathbb{Q}(v)$-algebra antiautomorphism $\Psi: {}^{\mathrm{Dr}}\tUi_{red} \to {}^{\mathrm{Dr}}\tUi_{red}$ given by \begin{align*} B_{i,k} &\mapsto B_{i,-k}, \qquad H_{i,l} \mapsto C^{-l}H_{i,l},\qquad \Theta_{i,r} \mapsto C^{-r} \Theta_{i,r},\\ C &\mapsto C^{-1}, \qquad \mathbb{K}_i \mapsto \mathbb{K}_i, \end{align*} for $k\in \mathbb{Z}, r,l>0,i\in \mathbb{I}_0$. \end{corollary} \begin{proof} It is straightforward to check that $\Psi$ preserves the defining relations \eqref{iDRG1'}-\eqref{iDRG6'} of ${}^{\mathrm{Dr}}\tUi_{red}$. Hence, $\Psi$ is a well-defined antiautomorphism of ${}^{\mathrm{Dr}}\tUi_{red}$; note that $\Psi^2=1$. \end{proof} By computing weights in the sense of Remark \ref{deg}, we can regard $\Psi$ as the reflection $\alpha+k\delta \mapsto \alpha-k\delta$, $\alpha\in \mathcal {R}_0$, $k\in \mathbb{Z}$, on the affine root system $\mathcal {R}$. The effect of $\Psi$ can be written in terms of generating functions as \begin{align*} \Psi({\mathbf B }_i(w))={\mathbf B }_i(w^{-1}),\qquad \Psi\big( \boldsymbol{\Delta}(wz) \boldsymbol{\Theta}_i(z)\big)= \boldsymbol{\Delta}\big((zw)^{-1}\big) \boldsymbol{\Theta}_i(w^{-1}). \end{align*} Let $\Theta_i(s,r)$ denote the following combination of imaginary root vectors, namely $-v_i^{2}$ times the RHS of \eqref{iDRG3'} with $(k,l)$ renamed $(r,s)$: \begin{align}\label{eq:Th1} \Theta_i(s,r):=(-\Theta_{i,s-r+1} C^r +v_i^{-2}\Theta_{i,s-r-1} C^{r+1} -\Theta_{i,r-s+1} C^s +v_i^{-2}\Theta_{i,r-s-1} C^{s+1} )\mathbb{K}_i. \end{align} Note that $\Theta_i(s,r)=\Theta_i(r,s)$. By a direct computation, we have \begin{align} \Psi\big(\Theta_i(s,r)\big) &=\Theta_i(-r-1,-s-1). \end{align} \subsection{A Drinfeld type presentation for $\widetilde{{\mathbf U}}^\imath$ of split affine BCF type}\label{BCF} In this section, we add the general version of the Serre relations to the current presentation given in Definition \ref{def:Dpr}; we then provide a Drinfeld type presentation in terms of generating functions in Theorem \ref{DprBCF} for $\widetilde{{\mathbf U}}^\imath(\widehat{\mathfrak{g}})$, where $\mathfrak{g}$ is of any finite type except $G_2$.\par Recall the generating functions defined in \eqref{eq:Genfun}. Define $\mathbb{S}(w_1,w_2,w_3|z;i,j)$ to be the following expression \begin{align} \operatorname{Sym}\nolimits_{w_1,w_2,w_3}\left\{\sum_{r=0}^3(-1)^{3-r}\qbinom{3}{r}_{v_i}{\mathbf B }_{i}(w_1)\cdots{\mathbf B }_{i}(w_{r}){\mathbf B }_{j}(z){\mathbf B }_{i}(w_{r+1})\cdots{\mathbf B }_{i}(w_3)\right\}\label{eq:BCF}, \end{align} and similarly define $\mathbb{S}(w_1,w_2|z;i,j)$ to be the following expression \begin{align} \operatorname{Sym}\nolimits_{w_1,w_2}\left\{\sum_{r=0}^2(-1)^{ r}\qbinom{2}{r}_{v_i}{\mathbf B }_{i}(w_1)\cdots{\mathbf B }_{i}(w_{r}){\mathbf B }_{j}(z){\mathbf B }_{i}(w_{r+1})\cdots{\mathbf B }_{i}(w_2)\right\}.\label{eq:ADE} \end{align} Denote \begin{align*} \phi_i(w_1,w_2,w_3)&=\frac{v_i^{-2 }w_2^{2}w_3^{-1}-w_2}{1+w_2 w_1^{-1}+w_1 w_3^{-1}+ w_2^2 w_3^{-2}+ w_2^2 w_1^{-1} w_3^{-1}+w_1 w_2 w_3^{-2}-([3]^2_{v_i}-3) w_2 w_3^{-1} },\\ \psi_i(w_1,w_2,w_3)&=\frac{-1-(1-v_i^{-2})w_2 w_3^{-1}+v_i^{-2}w_2^2 w_3^{-2}- w_1 w_3^{-1}+v_i^{-2}w_1 w_2 w_3^{-1} }{1+w_2 w_1^{-1}+w_1 w_3^{-1}+ w_2^2 w_3^{-2}+ w_2^2 w_1^{-2} w_3^{-1}+w_1 w_2 w_3^{-2}-([3]^2_{v_i}-3) w_2 w_3^{-1}}. \end{align*} Details of the proofs of the following general Serre relations \eqref{iDRG5} and \eqref{iDRG6} are given in Section \ref{Serre}.
\begin{theorem}\label{DprBCF} The universal affine $\imath$quantum group $\widetilde{{\mathbf U}}^\imath$ is isomorphic to the $\mathbb{Q}(v)$-algebra $ {}^{\mathrm{Dr}}\tUi $ which is defined by generators $\mathbb{K}_{i}^{\pm1}$, $C^{\pm1}$, $\Theta_{i,m},B_{i,k}$ $(i\in \mathbb{I}_0$, $m\geq 1$, $k\in\mathbb{Z})$, subject to the following defining relations, for $i, j \in \mathbb{I}_0$: \begin{align} &\mathbb{K}_i,C \text{ are central, }\qquad \boldsymbol{\Theta}_i(z) \boldsymbol{\Theta}_j(w) = \boldsymbol{\Theta}_j(w) \boldsymbol{\Theta}_i(z), \label{iDRG1} \\ & {\mathbf B }_j(w) \boldsymbol{\Theta}_i(z) = \left( \frac{1 -v_i^{c_{ij}}zw^{-1}}{1 -v_i^{-c_{ij}}zw^{-1}} \cdot \frac{1 -v_i^{-c_{ij}}zw C}{1 -v_i^{c_{ij}} zw C} \right) \boldsymbol{\Theta}_i(z) {\mathbf B }_j(w), \label{iDRG2} \\ &(v_i^{c_{ij}}z -w) {\mathbf B }_i(z) {\mathbf B }_j(w) +(v_i^{c_{ij}}w-z) {\mathbf B }_j(w) {\mathbf B }_i(z)=0, \qquad \text{ if }i\neq j,\label{iDRG3a} \\ &(v_i^2z-w) {\mathbf B }_i(z) {\mathbf B }_i(w) +(v_i^{2}w-z) {\mathbf B }_i(w) {\mathbf B }_i(z)\label{iDRG3b} \\\notag =&v_i^{-2} \frac{ \boldsymbol{\Delta}(zw)}{v_i-v_i^{-1}} \big( (v_i^2z-w) \boldsymbol{\Theta}_i(w) +(v_i^2w-z) \boldsymbol{\Theta}_i(z) \big)\mathbb{K}_{i}, \\ \label{iDRG4}&{\mathbf B }_i(w){\mathbf B }_j(z)-{\mathbf B }_j(z){\mathbf B }_i(w)=0, \qquad\text{ if }c_{ij}=0, \\ \label{iDRG5}&\mathbb{S}(w_1,w_2|z;i,j) \\\notag =& -\operatorname{Sym}\nolimits_{w_1,w_2} \frac{ \boldsymbol{\Delta}(w_1w_2)}{v_i-v_i^{-1}}\frac{[2]_{v_i} z w_1^{-1} }{1 -v_i^{2}w_2w_1^{-1}}[ \boldsymbol{\Theta}_i(w_2),{\mathbf B }_j(z)]_{v_i^{-2}}\mathbb{K}_i\\\notag & -\operatorname{Sym}\nolimits_{w_1,w_2} \frac{ \boldsymbol{\Delta}(w_1w_2)}{v_i-v_i^{-1}}\frac{1 +w_2w_1^{-1}}{1 -v_i^{2}w_2w_1^{-1}}[{\mathbf B }_j(z), \boldsymbol{\Theta}_i(w_2)]_{v_i^{-2}}\mathbb{K}_i, \\ &\text{ if }c_{ij}=-1, \notag \\ & \label{iDRG6}\mathbb{S}(w_1,w_2,w_3|z ;i,j) \\\notag =& v_i[2]_{v_i}[3]_{v_i}z^{-1}\operatorname{Sym}\nolimits_{w_1,w_2,w_3} \frac{ \boldsymbol{\Delta}(w_2w_3)}{v_i-v_i^{-1}} \phi_i(w_1,w_2,w_3) \big[{\mathbf B }_i(w_1),[{\mathbf B }_j(z), \boldsymbol{\Theta}_i(w_2)]_{v_i^{-4}}\big]\mathbb{K}_i\\\notag &-[3]_{v_i}z^{-1}\operatorname{Sym}\nolimits_{w_1,w_2,w_3} \frac{ \boldsymbol{\Delta}(w_2w_3)}{v_i-v_i^{-1}} \phi_i(w_1,w_2,w_3) \big[[ {\mathbf B }_j(z),{\mathbf B }_i(w_1)]_{v_i^{-2}}, \boldsymbol{\Theta}_i(w_2)\big]\mathbb{K}_i\\\notag &-v_i[2]_{v_i}\operatorname{Sym}\nolimits_{w_1,w_2,w_3} \frac{ \boldsymbol{\Delta}(w_2w_3)}{v_i-v_i^{-1}} \psi_i(w_1,w_2,w_3) \big[[ \boldsymbol{\Theta}_i(w_2),{\mathbf B }_j(z)]_{v_i^{-4}}, {\mathbf B }_i(w_1)\big]\mathbb{K}_i\\\notag &+\operatorname{Sym}\nolimits_{w_1,w_2,w_3} \frac{ \boldsymbol{\Delta}(w_2w_3)}{v_i-v_i^{-1}} \psi_i(w_1,w_2,w_3)\big[ \boldsymbol{\Theta}_i(w_2),[{\mathbf B }_i(w_1), {\mathbf B }_j(z)]_{v_i^{-2}}\big]\mathbb{K}_i, \\\notag & \text{ if } c_{ij}=-2, \end{align} where $\phi_i(w_1,w_2,w_3),\psi_i(w_1,w_2,w_3)$ are defined above. \end{theorem} \begin{proof} By Theorem \ref{DprADE}, it suffices to show that ${}^{\mathrm{Dr}}\tUi_{red}$ is isomorphic to ${}^{\mathrm{Dr}}\tUi$. The componentwise versions of relations \eqref{iDRG3a}-\eqref{iDRG4} are the same as relations \eqref{iDRG2'}-\eqref{iDRG34}, and the componentwise version of the relation \eqref{iDRG2} is the relation \eqref{iDRG1''}. One can find a proof for this in \cite[Theorem 5.1]{LW20b}.
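For instance, comparing coefficients of $z^{k+1}w^{l+1}$ in \eqref{iDRG3a} and multiplying by $v_i^{-c_{ij}}$ gives \[ [B_{i,k}, B_{j,l+1}]_{v_i^{-c_{ij}}} -v_i^{-c_{ij}} [B_{i,k+1}, B_{j,l}]_{v_i^{c_{ij}}}=0, \] which is exactly \eqref{iDRG2'}.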
By a direct computation, \eqref{iDRG4'} is the $w_1^k w_2^k z^l$ component of \eqref{iDRG5}, and \eqref{iDRG5'} is the $w_1^k w_2^k w_3^k z^l$ component of \eqref{iDRG6}. Hence, the map $\Phi_{red}:{}^{\mathrm{Dr}}\tUi_{red}\longrightarrow {}^{\mathrm{Dr}}\tUi$ sending the generators $\mathbb{K}_{i}^{\pm1}$, $C^{\pm1}$, $\Theta_{i,m},B_{i,k}$ to those with the same names is a well-defined homomorphism. \par We will show in Section \ref{Serre} that relations \eqref{iDRG5} and \eqref{iDRG6} can be derived from the defining relations of $ {}^{\mathrm{Dr}}\tUi_{red}$. Thus, the inverse of $\Phi_{red}$ constructed in the obvious way is well-defined, which implies that $\Phi_{red}$ is an isomorphism. \end{proof} \begin{remark} As pointed out in the proof, when $\mathfrak{g}$ is of ADE type, this presentation is identical to the one in \cite[\S 3.4]{LW20b}, and thus Theorem \ref{DprBCF} can be viewed as a generalization of their work. \end{remark} \begin{remark} As originally formulated in \cite[(3.32)(3.38)(5.6)]{LW20b}, \eqref{iDRG5} can be written componentwise as \begin{equation} \mathbb{S}(k_1,k_2|l;i,j)=\mathbb{R}(k_1,k_2|l; i,j), \end{equation} where \begin{align} \label{eq:Skk}\mathbb{S}(k_1,k_2|l;i,j) &=\operatorname{Sym}\nolimits_{k_1,k_2} \big( B_{i,k_1} B_{i,k_2} B_{j,l} -[2]_{v_i} B_{i,k_1} B_{j,l} B_{i,k_2} + B_{j,l} B_{i,k_1} B_{i,k_2}\big), \\ \label{eq:Rkk}\mathbb{R}(k_1,k_2|l; i,j) &=\operatorname{Sym}\nolimits_{k_1,k_2}\mathbb{K}_i C^{k_1} \Big(-\sum_{p\geq 0} v_i^{2p} [2]_{v_i} [\Theta _{i,k_2-k_1-2p-1},B_{j,l-1}]_{v_i^{-2}}C^{p+1} \\\notag &\qquad\qquad -\sum_{p\geq 1} v_i^{2p-1} [2]_{v_i} [B_{j,l},\Theta _{i,k_2-k_1-2p}]_{v_i^{-2}} C^{p} - [B_{j,l}, \Theta _{i,k_2-k_1}]_{v_i^{-2}} \Big). \end{align} \end{remark} \begin{remark}\label{rmk:com} One can obtain the componentwise formulas of \eqref{iDRG6} by expanding the denominators of $\phi_i(w_1,w_2,w_3)$ and $\psi_i(w_1,w_2,w_3)$. Note that, after rewriting $w_3^{-1}$ as $ w_2 C$ using $ \boldsymbol{\Delta}(w_2 w_3)$, the denominators of $\phi_i(w_1,w_2,w_3),\psi_i(w_1,w_2,w_3)$ have the form $1+A$, where $w_3$ and nonpositive powers of $w_2$ do not appear in $A$. Hence, once we expand the denominators, each component of the RHS will be a finite sum. \par The constant component of \eqref{iDRG6} is the same as \eqref{eq:S3}. The general componentwise formula of \eqref{iDRG6} is, however, too complicated to write down. \end{remark} \begin{remark} Relations \eqref{iDRG1}-\eqref{iDRG4} are homogeneous, by direct inspection of their componentwise formulas. Relations \eqref{iDRG5} and \eqref{iDRG6} are also homogeneous, since they can be derived from relations \eqref{iDRG1}-\eqref{iDRG4}, as shown in Section \ref{Serre}. \end{remark} \begin{remark}\label{rmk:spe} Recall the filtration and $\widetilde{\U}^{\imath,m}$ in Remark \ref{deg}. For any $\beta=\sum_{i\in \mathbb{I}} n_i \alpha_i\in \mathcal {R}^+$, define its height to be \[ \mathrm{ht}^+(\beta)=\sum_{i\in \mathbb{I}} n_i. \] Let $d=\mathrm{ht}^+(\delta)$. By arguments similar to those in \cite[Proposition 4.4]{BK20}, \[ B_{i,k}\in \widetilde{\U}^{\imath, 1+k d}\setminus \widetilde{\U}^{\imath, k d},\quad \Theta_{i,l} \in \widetilde{\U}^{\imath,ld}\setminus \widetilde{\U}^{\imath, ld-1},\quad H_{i,l} \in \widetilde{\U}^{\imath,ld}\setminus \widetilde{\U}^{\imath, ld-1} , \] and the images of $B_{i,k},\Theta_{i,l},H_{i,l}$ in $\mathrm{gr}\widetilde{{\mathbf U}}^\imath$ are, up to a $\mathbb{Q}(v)[\mathbb{K}_i^{\pm 1}]$ multiple, the Drinfeld generators $x_{i,-k}^-,\varphi_{i,-l},h_{i,-l}$ of $ \mathbf U^- $ respectively, for $k\geq 0,l > 0$.
Since $B_{i,-k}=-\frac{1}{[2]_{v_i}}[H_{i,k},B_i]C^{-1}+B_{i,k}C^{-1}$ for $k>0$, we have \[ B_{i,-k}\in \widetilde{\U}^{\imath, 1+kd}. \]\par We claim that the $w_1^{k_1} w_2^{k_2} w_3^{k_3} z^l$ component of \eqref{iDRG6} for $k_1, k_2, k_3, l \geq 0$ reduces to the Serre relation \eqref{Dr8} in $\mathrm{gr}\widetilde{{\mathbf U}}^\imath\cong \mathbf U^-\otimes \mathbb{Q}(v)[\mathbb{K}_i^\pm|i\in \mathbb{I}]$. Observe that this component has the form \begin{align} \label{iDRG6''}&\operatorname{Sym}\nolimits_{k_1,k_2,k_3}\sum_{t=0}^{3} (-1)^t \qbinom{3}{t}_{v_i} B_{i,k_1}\cdots B_{i,k_t} B_{j,l} B_{i,k_{t+1}} \cdots B_{i,k_3}\\\notag = \operatorname{Sym}\nolimits_{k_1,k_2,k_3}\bigg(&\sum (*)\Theta_{i,k_2-s}B_{j,l'}B_{i,k_{1}+t}+\sum (*)\Theta_{i,k_2-s}B_{i,k_{1}+t}B_{j,l'}+\sum (*)B_{j,l'}\Theta_{i,k_2-s}B_{i,k_{1}+t}\\\notag +&\sum (*)B_{i,k_{1}+t}\Theta_{i,k_2-s}B_{j,l'}+\sum (*)B_{j,l'}B_{i,k_{1}+t}\Theta_{i,k_2-s}+\sum (*)B_{i,k_{1}+t}B_{j,l'}\Theta_{i,k_2 -s}\bigg), \end{align} where the coefficients $(*)$ lie in $\mathbb{Q}(v)[\mathbb{K}_i^\pm|i\in \mathbb{I}]$ and each sum ranges over $0\leq s \leq k_2$, $-s\leq t \leq s$, $l'\in\{l,l+1\}$. By a direct computation of heights, the RHS lies in $\widetilde{\U}^{\imath,3+(k_1+k_2+k_3+l)d}$, while the LHS lies in $\widetilde{\U}^{\imath, 4+(k_1+k_2+k_3+l)d}\setminus\widetilde{\U}^{\imath, 3+(k_1+k_2+k_3+l)d}$. Hence, the RHS of \eqref{iDRG6''} vanishes in $\mathrm{gr}\widetilde{{\mathbf U}}^\imath$, and thus the componentwise version of \eqref{iDRG6} reduces to the Serre relation \eqref{Dr8} in the original Drinfeld presentation. \end{remark} \section{Verification of the relation \eqref{iDRG1'}}\label{verf} In this section, we establish the relation \eqref{iDRG1'} for $i\neq j$ in $\widetilde{{\mathbf U}}^\imath$ and complete the proof of Theorem \ref{DprADE}. \par Recall that \eqref{iDRG1'} is equivalent to \eqref{iDRG1''}. Hence, it suffices to show that \eqref{iDRG1''} for $i\neq j$ holds in $\widetilde{{\mathbf U}}^\imath$. Fix $i\neq j\in \mathbb{I}_0$ and denote \begin{equation}\label{Yk} Y_{k,l}=[\Theta_{i,k},B_{j,l}]+[\Theta_{i,k-2},B_{j,l}]C-v_i^{c_{ij}}[\Theta_{i,k-1},B_{j,l+1}]_{v_i^{-2c_{ij}}}-v_i^{-c_{ij}}[\Theta_{i,k-1},B_{j,l-1}]_{v_i^{2c_{ij}}}C. \end{equation} Since $\Theta_{i,0}=\frac{1}{v_i-v_i^{-1}}$ and $\Theta_{i,k}=0$ for all $k<0$ by our convention, $Y_{k,l}=0$ if $k\leq 0$, and the relation \eqref{iDRG1''} is equivalent to $Y_{k,l}=0$. \par In this section, we will show that $Y_{k,l}=0$ for $k>0,l\in \mathbb{Z}$ in $\widetilde{{\mathbf U}}^\imath$, in order to verify the relation \eqref{iDRG1''}. The other two defining relations \eqref{iDRG2'} and \eqref{iDRG3'} of ${}^{\mathrm{Dr}}\tUi_{red}$ may be used in this section, since their proofs do not rely on \eqref{iDRG1'}. \par We first recall some basic properties of $q$-brackets, which will be used heavily in the computations of this and the remaining sections. \begin{lemma}[\text{\cite[Remark 4.17]{Da12}, also \cite[Introduction]{Ji98}}]\label{lem:jac}Let $a,b,c\in \widetilde{{\mathbf U}}^\imath$ and let $u,v,w$ be nonzero scalars. We have \begin{enumerate} \item $[a,b]_u=-u [b,a]_{u^{-1}}$, \item $\big[[a,b]_u,b\big]_v=\big[[a,b]_v,b\big]_u$, \item $\big[[a,b]_u,c\big]_v=\big[a,[b,c]_{v/w}\big]_{uw}-u\big[b,[a,c]_w\big]_{v/uw}.$ \end{enumerate} \end{lemma} \subsection{An induction on $k$}\label{Verf1} Since the index $l$ of $Y_{k,l}$ can be shifted using $\texttt{\rm T}_{\omega_j}$, it suffices to focus on $k$.
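Before doing so, we note that under the convention $[a,b]_u=ab-uba$ (which we assume throughout, matching the cited sources), the identities in Lemma \ref{lem:jac} follow by direct expansion; for example, both sides of (3) expand to \[ abc-u\,bac-v\,cab+uv\,cba, \] as one checks by expanding $\big[a,[b,c]_{v/w}\big]_{uw}-u\big[b,[a,c]_w\big]_{v/uw}$ term by term.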
We first establish an inductive formula on $k$, which relates $Y_{k,l}$ and $Y_{k+2,l}.$ Such an induction is partially inspired by Damiani's reduction for the relation \eqref{Dr5} in the affine quantum group \cite[Proposition 7.15]{Da12}. \begin{proposition}\label{prop:recur} Let $k=0$ or $k>1$. We have, for $l\in \mathbb{Z}$, \begin{equation}\label{recur} Y_{k+2,l}=v_i^{-2}Y_{k,l}C. \end{equation} We also have $Y_{3,l}=(1-v_i^{-2})Y_{1,l}C$ for $l\in \mathbb{Z}$. \end{proposition} \begin{proof} Write $Y_{k+2,l}\mathbb{K}_i-v_i^{-2}Y_{k,l}C\mathbb{K}_i=\Sigma_1+\Sigma_2+\Sigma_3+\Sigma_4$ for $k=0$ or $k>1$, where each summand $\Sigma_i$ is defined and rewritten as follows: \begin{align}\label{verf1} \Sigma_1:=&\big[\Theta_{i,k+2}-v_i^{-2}\Theta_{i,k}C , B_{j,l}\big]\mathbb{K}_i\\\notag =&-\big[[B_{i,k+2},B_i]_{v_i^2} , B_{j,l}\big]-\big[[B_{i,1},B_{i,k+1}]_{v_i^2} , B_{j,l}\big],\\ \label{verf2}\Sigma_2:=&[\Theta_{i,k}-v_i^{-2}\Theta_{i,k-2}C , B_{j,l}] \mathbb{K}_i C\\\notag =& -\big[[B_{i,k+1},B_{i,1}]_{v_i^2} , B_{j,l}\big]-\big[[B_{i,2},B_{i,k}]_{v_i^2} , B_{j,l}\big], \\ \label{verf3}v_i^{-c_{ij}}\Sigma_3:=&-[\Theta_{i,k+1}-v_i^{-2}\Theta_{i,k-1}C,B_{j,l+1}]_{v_i^{-2c_{ij}}}\mathbb{K}_i\\\notag =& \big[[B_{i,k+1},B_i]_{v_i^2} , B_{j,l+1}\big]_{v_i^{-2c_{ij}}}+\big[[B_{i,1},B_{i,k}]_{v_i^2} , B_{j,l+1}\big]_{v_i^{-2c_{ij}}}\\\notag =& \big[B_{i,k+1},[B_i , B_{j,l+1}]_{v_i^{-c_{ij}}}\big]_{v_i^{2-c_{ij}}}-v_i^2 \big[B_{i},[B_{i,k+1} , B_{j,l+1}]_{v_i^{-c_{ij}}}\big]_{v_i^{-2-c_{ij}}}\\\notag & +\big[B_{i,1},[B_{i,k} , B_{j,l+1}]_{v_i^{-c_{ij}}}\big]_{v_i^{2-c_{ij}}}-v_i^2 \big[B_{i,k},[B_{i,1} , B_{j,l+1}]_{v_i^{-c_{ij}}}\big]_{v_i^{-2-c_{ij}}}\\\notag =& - \big[B_{i,k+1},[ B_{j,l} ,B_{i,1}]_{v_i^{-c_{ij}}}\big]_{v_i^{2-c_{ij}}}+v_i^2 \big[B_{i},[B_{j,l} ,B_{i,k+2}]_{v_i^{-c_{ij}}}\big]_{v_i^{-2-c_{ij}}}\\\notag & -\big[B_{i,1},[ B_{j,l},B_{i,k+1}]_{v_i^{-c_{ij}}}\big]_{v_i^{2-c_{ij}}}+v_i^2 \big[B_{i,k},[ B_{j,l},B_{i,2}]_{v_i^{-c_{ij}}}\big]_{v_i^{-2-c_{ij}}}, \\ \label{verf4}v_i^{c_{ij}}\Sigma_4:=&-[\Theta_{i,k+1}-v_i^{-2}\Theta_{i,k-1}C,B_{j,l-1}]_{v_i^{2c_{ij}}}C\mathbb{K}_i \\\notag =& \big[[B_{i,k+2},B_{i,1}]_{v_i^2} , B_{j,l-1}\big]_{v_i^{2c_{ij}}}+\big[[B_{i,2},B_{i,k+1}]_{v_i^2} , B_{j,l-1}\big]_{v_i^{2c_{ij}}}\\\notag =& \big[B_{i,k+2},[B_{i,1} , B_{j,l-1}]_{v_i^{c_{ij}}}\big]_{v_i^{2+c_{ij}}}-v_i^2 \big[B_{i,1},[B_{i,k+2} , B_{j,l-1}]_{v_i^{c_{ij}}}\big]_{v_i^{-2+c_{ij}}}\\\notag & +\big[B_{i,2},[B_{i,k+1} , B_{j,l-1}]_{v_i^{c_{ij}}}\big]_{v_i^{2+c_{ij}}}-v_i^2 \big[B_{i,k+1},[B_{i,2} , B_{j,l-1}]_{v_i^{c_{ij}}}\big]_{v_i^{-2+c_{ij}}}\\\notag =& -\big[B_{i,k+2},[B_{j,l} ,B_{i}]_{v_i^{c_{ij}}}\big]_{v_i^{2+c_{ij}}}+v_i^2 \big[B_{i,1},[ B_{j,l}, B_{i,k+1}]_{v_i^{c_{ij}}}\big]_{v_i^{-2+c_{ij}}}\\\notag & -\big[B_{i,2},[B_{j,l} , B_{i,k+1}]_{v_i^{c_{ij}}}\big]_{v_i^{2+c_{ij}}}+v_i^2 \big[B_{i,k+1},[B_{j,l} , B_{i,1}]_{v_i^{c_{ij}}}\big]_{v_i^{-2+c_{ij}}}, \end{align} where relation \eqref{iDRG3'} is used in the first equality in each of \eqref{verf1}-\eqref{verf4}, and relation \eqref{iDRG2'} is used in the last equality of \eqref{verf3}-\eqref{verf4}.
Now, adding \eqref{verf1}-\eqref{verf4} together, we have \begin{align*} &Y_{k+2,l}\mathbb{K}_i-v_i^{-2}Y_{k,l}C\mathbb{K}_i=\Sigma_1+\Sigma_2+\Sigma_3+\Sigma_4\\ =&-\big[[B_{i,k+2},B_i]_{v_i^2} , B_{j,l}\big]-\big[[B_{i,1},B_{i,k+1}]_{v_i^2} , B_{j,l}\big]\\ &-\big[[B_{i,k+1},B_{i,1}]_{v_i^2} , B_{j,l}\big]-\big[[B_{i,2},B_{i,k}]_{v_i^2} , B_{j,l}\big]\\ &-v_i^{c_{ij}}\big[B_{i,k+1},[ B_{j,l} ,B_{i,1}]_{v_i^{-c_{ij}}}\big]_{v_i^{2-c_{ij}}}+v_i^{2+c_{ij}} \big[B_{i},[B_{j,l} ,B_{i,k+2}]_{v_i^{-c_{ij}}}\big]_{v_i^{-2-c_{ij}}}\\ &-v_i^{c_{ij}}\big[B_{i,1},[ B_{j,l},B_{i,k+1}]_{v_i^{-c_{ij}}}\big]_{v_i^{2-c_{ij}}}+v_i^{2+c_{ij}} \big[B_{i,k},[ B_{j,l},B_{i,2}]_{v_i^{-c_{ij}}}\big]_{v_i^{-2-c_{ij}}}\\ &-v_i^{-c_{ij}}\big[B_{i,k+2},[B_{j,l} ,B_{i}]_{v_i^{c_{ij}}}\big]_{v_i^{2+c_{ij}}}+v_i^{2-c_{ij}} \big[B_{i,1},[ B_{j,l}, B_{i,k+1}]_{v_i^{c_{ij}}}\big]_{v_i^{-2+c_{ij}}}\\ &-v_i^{-c_{ij}}\big[B_{i,2},[B_{j,l} , B_{i,k+1}]_{v_i^{c_{ij}}}\big]_{v_i^{2+c_{ij}}}+v_i^{2-c_{ij}} \big[B_{i,k+1},[B_{j,l} , B_{i,1}]_{v_i^{c_{ij}}}\big]_{v_i^{-2+c_{ij}}}\\ =&0, \end{align*} where the last step follows by a direct computation using Lemma \ref{lem:jac}. Hence, $Y_{k+2,l}=v_i^{-2}Y_{k,l}C$ for $k=0$ or $k>1$. For $k=1$, using a similar method, we have $Y_{3,l}=(1-v_i^{-2})Y_{1,l}C$. \end{proof} \subsection{Base cases}\label{Verf2} By Proposition \ref{prop:recur}, $Y_{2m,l}$ is a scalar multiple of $Y_{0,l}$ and $Y_{2m-1,l}$ is a scalar multiple of $Y_{1,l}$ for $m>0,l\in \mathbb{Z}$. Since $Y_{0,l}=0$, as discussed at the beginning of Section \ref{verf}, it remains to show that $Y_{1,l}=0$ for $l\in \mathbb{Z}$. \par We explain the underlying idea of the proof of the base case $Y_{1,l}=0$, since the details of the proof are quite technical. By the definition \eqref{Yk}, we have \begin{equation}\label{Y1} Y_{1,l}=[\Theta_{i,1},B_{j,l}]-[c_{ij}]_{v_i} B_{j,l+1} + [c_{ij}]_{v_i} B_{j,l-1}C. \end{equation} Since $\Theta_{i,1}=-[B_{i,1},B_i]_{v_i^2}$ by \eqref{iDRG3'}, we replace $\Theta_{i,1}$ in \eqref{Y1} by this $q$-bracket of real root vectors and obtain \begin{equation}\label{Y1'} Y_{1,l}=-\big[[B_{i,1},B_i]_{v_i^2},B_{j,l}\big]-[c_{ij}]_{v_i} B_{j,l+1} + [c_{ij}]_{v_i} B_{j,l-1}C. \end{equation} We prove that the RHS of \eqref{Y1'} equals $0$ in separate cases depending on $c_{ij}$, $i,j\in\mathbb{I}_0$. For $c_{ij}=-1,$ we use the finite type Serre relation \eqref{eq:S2}. For $c_{ij}=-2$, we use the formulas of $\texttt{\rm T}_i,\texttt{\rm T}_i^{-1}$ in Lemma \ref{lem:Ti}. For $c_{ij}=-3$, we use both the finite type Serre relation \eqref{eq:S4} and the formulas of $\texttt{\rm T}_i,\texttt{\rm T}_i^{-1}$. \par We also recall that, by Lemma \ref{lem:bra}(b) and the construction of real root vectors, $\texttt{\rm T}_{\omega_j}$ fixes $B_{i,k}$ for any $j\neq i$, $k\in \mathbb{Z}$, while $\texttt{\rm T}_{\omega_i}$ sends $B_{i,k}$ to $o(i)B_{i,k-1}$. \par We now prove $Y_{1,l}=0$ case by case.\par (1) $c_{ij}=c_{ji}=0$. In this case, since both $B_{i}, B_{i,1}$ commute with $B_{j,l}$, $\Theta_{i,1}$ commutes with $B_{j,l}$ for $l\in \mathbb{Z}$. Hence, $Y_{1,l}=0$. \par (2) $c_{ij}=c_{ji}=-1$. We rewrite the finite type Serre relation \eqref{eq:S2} in terms of $q$-brackets as \begin{equation} \big[B_i,[B_i, B_j]_{v_i} \big]_{v_i^{-1}}=-v_i^{-1} B_{j} \mathbb{K}_i,\qquad \big[[ B_j, B_i]_{v_i},B_i \big]_{v_i^{-1}}=-v_i^{-1} B_{j} \mathbb{K}_i, \end{equation} i.e.,
each of these two relations is equivalent to \eqref{eq:S2}.\par Applying $o(j)^l\texttt{\rm T}_{\omega_j}^{-l}\texttt{\rm T}_{\omega_i}^{-k}$ to them, for $k,l\in \mathbb{Z}$, we have \begin{equation}\label{Se:brkt} \big[B_{i,k},[B_{i,k}, B_{j,l}]_{v_i} \big]_{v_i^{-1}}=-v_i^{-1} B_{j,l} \mathbb{K}_i C^k,\qquad \big[[ B_{j,l}, B_{i,k}]_{v_i},B_{i,k} \big]_{v_i^{-1}}=-v_i^{-1} B_{j,l} \mathbb{K}_i C^k. \end{equation} We now compute \begin{align*} [\Theta_{i,1},B_j]\mathbb{K}_i&\overset{\eqref{iDRG3'}}{=}-\big[[B_{i,1},B_i]_{v_i^2},B_j\big]\\ &\overset{\qquad}{=}-\big[B_{i,1},[B_i,B_j]_{v_i }\big]_{v_i}+v_i^2 \big[B_i,[B_{i,1},B_j]_{v_i^{-1}}\big]_{v_i^{-1}}\\ &\overset{\eqref{iDRG2'}}{=} \big[B_{i,1},[ B_{j,-1},B_{i,1}]_{v_i }\big]_{v_i}-v_i^2 \big[B_i,[B_{j,1},B_i]_{v_i^{-1}}\big]_{v_i^{-1}}\\ &\overset{\eqref{Se:brkt}}{=} B_{j,-1} \mathbb{K}_iC -B_{j,1}\mathbb{K}_i. \end{align*} Hence, $Y_{1,0}=0$, and by applying $\texttt{\rm T}_{\omega_j}^{-l}$, we get $Y_{1,l}=0$.\par (3) $c_{ij}=-2$, $c_{ji}=-1$. We first write $\texttt{\rm T}_i(B_j)$ defined in Lemma \ref{lem:Ti} in terms of $q$-brackets as \begin{align} [2]_{v_i}\texttt{\rm T}_i(B_j) = \big[ [B_j, B_i]_{v_i^2}, B_i\big] + [2]_{v_i} B_j \mathbb{K}_i,\\ [2]_{v_i}\texttt{\rm T}^{-1}_i(B_j) = \big[ B_i, [B_i, B_j ]_{v_i^2}\big] + [2]_{v_i} B_j \mathbb{K}_i. \end{align} Since $\texttt{\rm T}_{ i}, \texttt{\rm T}_{\omega_j}$ commute by Lemma \ref{lem:Lus}(a), applying $o(j)^l\texttt{\rm T}_{\omega_j}^{-l}$ to these equalities, we have, for $l\in \mathbb{Z}$, \begin{align}\label{Br:brkt1} [2]_{v_i}\texttt{\rm T}_i(B_{j,l}) = \big[ [B_{j,l}, B_i]_{v_i^2}, B_i\big] + [2]_{v_i} B_{j,l} \mathbb{K}_i,\\\label{Br:brkt2} [2]_{v_i}\texttt{\rm T}^{-1}_i(B_{j,l}) = \big[ B_i, [B_i, B_{j,l} ]_{v_i^2}\big] + [2]_{v_i} B_{j,l} \mathbb{K}_i. \end{align} Applying $\texttt{\rm T}_{\omega_i}^{-1}$ to \eqref{Br:brkt1}, we have \begin{equation}\label{Br:brkt3} [2]_{v_i}\texttt{\rm T}_{\omega_i}^{-1}\texttt{\rm T}_i(B_{j,l}) = \big[ [B_{j,l}, B_{i,1}]_{v_i^2}, B_{i,1}\big] + [2]_{v_i} B_{j,l} \mathbb{K}_i C. \end{equation} We now compute \begin{align*} [\Theta_{i,1},B_j]\mathbb{K}_i&\overset{\eqref{iDRG3'}}{=}-\big[[B_{i,1},B_i]_{v_i^2},B_j\big]\\ &\overset{\qquad}{=}-\big[B_{i,1},[B_i,B_j]_{v_i^2}\big]+v_i^2 \big[B_i,[B_{i,1},B_j]_{v_i^{-2}}\big]\\ &\overset{\eqref{iDRG2'}}{=} \big[B_{i,1},[ B_{j,-1},B_{i,1}]_{v_i^2}\big]-v_i^2 \big[B_i,[B_{j,1},B_{i}]_{v_i^{-2}}\big] \\ &\overset{\eqref{Br:brkt2}}{=}\big[B_{i,1},[ B_{j,-1},B_{i,1}]_{v_i^2}\big] + [2]_{v_i} \texttt{\rm T}_i^{-1}(B_{j,1})-[2]_{v_i} B_{j,1}\mathbb{K}_i \\ &\overset{\eqref{Br:brkt3}}{=}-[2]_{v_i} \texttt{\rm T}_{\omega_i}^{-1} \texttt{\rm T}_i(B_{j,-1})+[2]_{v_i}B_{j,-1}\mathbb{K}_i C + [2]_{v_i} \texttt{\rm T}_i^{-1}(B_{j,1})-[2]_{v_i} B_{j,1}\mathbb{K}_i \\ &\overset{\qquad}{=} [2]_{v_i}B_{j,-1}C\mathbb{K}_i-[2]_{v_i} B_{j,1}\mathbb{K}_i-[2]_{v_i}\big(\texttt{\rm T}_{\omega_i}^{-1} \texttt{\rm T}_i(B_{j,-1})-\texttt{\rm T}_i^{-1}(B_{j,1})\big)\\ &\overset{\qquad}{=}[2]_{v_i}B_{j,-1}C\mathbb{K}_i-[2]_{v_i} B_{j,1}\mathbb{K}_i, \end{align*} where the last step follows from $\texttt{\rm T}_i \texttt{\rm T}_{\omega_i}^{-1} \texttt{\rm T}_i= \texttt{\rm T}_{\omega_i} \texttt{\rm T}_{\omega_j}^{-2}\prod_{k\neq i,j} \texttt{\rm T}^{c_{ik}}_{\omega_k}$, which is given by Lemma \ref{lem:Lus}(b). Hence, $Y_{1,0}=0$, and by applying $\texttt{\rm T}_{\omega_j}^{-l}$, we get $Y_{1,l}=0$.\par (4) $c_{ij}=-3$, $c_{ji}=-1$. Without loss of generality, assume $o(i)=1$.
In this case, we rewrite $\texttt{\rm T}_i(B_j)$ in Lemma \ref{lem:Ti} as \begin{align} \texttt{\rm T}_i(B_j)&=\frac{1}{[3]_{v_i}!}\bigg[\big[[B_j,B_i]_{v_i^3} , B_i\big]_{v_i}, B_i \bigg]_{v_i^{-1}}+\frac{1}{[3]_{v_i}!}v_i^{-1}[B_j,B_i]_{v_i^3} \mathbb{K}_i+ [B_j,B_i]_{v_i}\mathbb{K}_i,\\ \texttt{\rm T}_i^{-1}(B_j)&=\frac{1}{[3]_{v_i}!}\bigg[ B_i,\big[ B_i, [B_i, B_j]_{v_i^3}\big]_{v_i} \bigg]_{v_i^{-1}}+\frac{1}{[3]_{v_i}!}v_i^{-1}[ B_i,B_j]_{v_i^3} \mathbb{K}_i+ [B_i,B_j]_{v_i}\mathbb{K}_i. \end{align} Since $\texttt{\rm T}_i,\texttt{\rm T}_{\omega_j}$ commute, applying $o(j)^l\texttt{\rm T}_{\omega_j}^{-l}$ for $l\in \mathbb{Z}$ to these equalities, we have \begin{align} \texttt{\rm T}_i(B_{j,l})&=\frac{1}{[3]_{v_i}!}\bigg[\big[[B_{j,l},B_i]_{v_i^3} , B_i\big]_{v_i}, B_i \bigg]_{v_i^{-1}}+\frac{1}{[3]_{v_i}!}v_i^{-1}[B_{j,l},B_i]_{v_i^3} \mathbb{K}_i+ [B_{j,l},B_i]_{v_i}\mathbb{K}_i,\\ \texttt{\rm T}_i^{-1}(B_{j,l})&=\frac{1}{[3]_{v_i}!}\bigg[ B_i,\big[ B_i, [B_i, B_{j,l}]_{v_i^3}\big]_{v_i} \bigg]_{v_i^{-1}}+\frac{1}{[3]_{v_i}!}v_i^{-1}[ B_i,B_{j,l}]_{v_i^3} \mathbb{K}_i+ [B_i,B_{j,l}]_{v_i}\mathbb{K}_i. \end{align} In particular, for $l=-1$ and $1$ respectively, we have \begin{align} \label{Br:brkt4}[\texttt{\rm T}_i(B_{j,-1}),B_i]_{v_i^{-3}}&=\frac{1}{[3]_{v_i}!}\bigg[\bigg[\big[[B_{j,-1},B_i]_{v_i^3} , B_i\big]_{v_i}, B_i \bigg]_{v_i^{-1}},B_i\bigg]_{v_i^{-3}}\\\notag &+\frac{1}{[3]_{v_i}!}v_i^{-1}\big[[B_{j,-1},B_i]_{v_i^3},B_i\big]_{v_i^{-3}} \mathbb{K}_i+ \big[[B_{j,-1},B_i]_{v_i}, B_i\big]_{v_i^{-3}}\mathbb{K}_i,\\ \label{Br:brkt5}[B_i,\texttt{\rm T}_i^{-1}(B_{j,1})]_{v_i^{-3}}&=\frac{1}{[3]_{v_i}!}\bigg[B_i,\bigg[ B_i,\big[ B_i, [B_i, B_{j,1}]_{v_i^3}\big]_{v_i} \bigg]_{v_i^{-1}}\bigg]_{v_i^{-3}}\\\notag &+\frac{1}{[3]_{v_i}!}v_i^{-1}\big[ B_i, [ B_i,B_{j,1}]_{v_i^3}\big]_{v_i^{-3}} \mathbb{K}_i+\big[B_i, [B_i,B_{j,1}]_{v_i}\big]_{v_i^{-3}}\mathbb{K}_i. \end{align} Applying $\texttt{\rm T}_{\omega_i}^{-1}$ to \eqref{Br:brkt4}, and using $\texttt{\rm T}_{\omega_i}^{-1}\texttt{\rm T}_i(B_i)=\texttt{\rm T}_{\omega_i}^{-1} (B_i \mathbb{K}_i^{-1})=B_{i,1}\mathbb{K}_i^{-1}C^{-1}$, we have \begin{align} \label{Br:brkt6}\texttt{\rm T}_{\omega_i}^{-1}\texttt{\rm T}_i\big([B_{j,-1},B_{i }]_{v_i^{-3}}\big)\mathbb{K}_iC =&\frac{1}{[3]_{v_i}!}\bigg[\bigg[\big[[B_{j,-1},B_{i,1}]_{v_i^3} , B_{i,1}\big]_{v_i}, B_{i,1} \bigg]_{v_i^{-1}},B_{i,1}\bigg]_{v_i^{-3}}\\\notag &+\frac{1}{[3]_{v_i}!}v_i^{-1}\big[[B_{j,-1},B_{i,1}]_{v_i^3},B_{i,1}\big]_{v_i^{-3}} \mathbb{K}_i C\\\notag &+ \big[[B_{j,-1},B_{i,1}]_{v_i}, B_{i,1}\big]_{v_i^{-3}}\mathbb{K}_iC. \end{align} On the other hand, we also rewrite the finite type Serre relation \eqref{eq:S4} as \begin{align}\label{Se:brkt2} \bigg[\bigg[\big[[B_{j},B_i]_{v_i^3} , B_i\big]_{v_i}, B_i \bigg]_{v_i^{-1}},B_i\bigg]_{v_i^{-3}}&=-v_i^{-1}(1+[3]_{v_i}^2)( B_{j} B_{i}^2+ B_{i}^2 B_{j})\mathbb{K}_i \\\notag & +v_i^{-1}[4]_{v_i} (1+[2]_{v_i}^2) B_{i} B_{j} B_{i} \mathbb{K}_i -v_i^{-2}[3]^2_{v_i} B_{j} \mathbb{K}_i^2. \end{align} Applying $o(j)\texttt{\rm T}^{-1}_{\omega_j}$, we obtain \begin{align}\label{Se:brkt3} \bigg[\bigg[\big[[B_{j,1},B_i]_{v_i^3} , B_i\big]_{v_i}, B_i \bigg]_{v_i^{-1}},B_i\bigg]_{v_i^{-3}}&=-v_i^{-1}(1+[3]_{v_i}^2)( B_{j,1} B_{i}^2+ B_{i}^2 B_{j,1})\mathbb{K}_i \\\notag & +v_i^{-1}[4]_{v_i} (1+[2]_{v_i}^2) B_{i} B_{j,1} B_{i} \mathbb{K}_i -v_i^{-2}[3]^2_{v_i} B_{j,1} \mathbb{K}_i^2. \end{align} Note that the leading term (of degree $5$) on the RHS of \eqref{Br:brkt5} coincides with the LHS of \eqref{Se:brkt3}.
We substitute it using \eqref{Se:brkt3} and simplify to \begin{align}\label{Se:br1} [3]_{v_i}[2]_{v_i}[B_i,\texttt{\rm T}_i^{-1}(B_{j,1})]_{v_i^{-3}}&=[3]_{v_i} \big[B_i,[B_{j,1},B_i]_{v_i^{-3}}\big]_{v_i} \mathbb{K}_i -v_i^{-2}[3]^2_{v_i} B_{j,1} \mathbb{K}_i^2. \end{align} Similarly, applying $\texttt{\rm T}_{\omega_j} \texttt{\rm T}_{\omega_i}^{-1}$ to \eqref{Se:brkt2} and using Lemma \ref{lem:jac}(1), we have \begin{align}\notag \bigg[B_{i,1},\bigg[B_{i,1} ,\big[ B_{i,1},&[B_{i,1},B_{j,-1}]_{v_i^3}\big]_{v_i} \bigg]_{v_i^{-1}}\bigg]_{v_i^{-3}}=-v_i^{-1}(1+[3]_{v_i}^2)( B_{j,-1} B_{i,1}^2+ B_{i,1}^2 B_{j,-1})\mathbb{K}_i \\\label{Se:brkt4} & +v_i^{-1}[4]_{v_i} (1+[2]_{v_i}^2) B_{i,1} B_{j,-1} B_{i,1} \mathbb{K}_i C-v_i^{-2}[3]^2_{v_i} B_{j,-1} \mathbb{K}_i^2 C^2, \end{align} and then we substitute the leading term of the RHS of \eqref{Br:brkt6} using \eqref{Se:brkt4}. We obtain \begin{align*} [3]_{v_i}[2]_{v_i}\texttt{\rm T}_{\omega_i}^{-1}\texttt{\rm T}_i [B_{j,-1} ,B_{i}]_{v_i^{-3}} \mathbb{K}_i C&=[3]_{v_i} \big[[B_{i,1},B_{j,-1}]_{v_i^{-3}},B_{i,1}\big]_{v_i} \mathbb{K}_i C -v_i^{-2}[3]^2_{v_i} B_{j,-1} \mathbb{K}_i^2 C^2, \end{align*} which can be simplified to \begin{align}\label{Se:br2} [2]_{v_i}\texttt{\rm T}_{\omega_i}^{-1}\texttt{\rm T}_i [B_{j,-1} ,B_{i}]_{v_i^{-3}} &= \big[[B_{i,1},B_{j,-1}]_{v_i^{-3}},B_{i,1}\big]_{v_i} -v_i^{-2}[3]_{v_i} B_{j,-1} \mathbb{K}_i C. \end{align} We now compute $Y_{1,0}$ in this case: \begin{align*} [\Theta_{i,1},B_j]\mathbb{K}_i&\overset{\eqref{iDRG3'}}{=}-\big[[B_{i,1},B_i]_{v_i^2},B_j\big]\\ &\overset{\qquad}{=}-\big[B_{i,1},[B_i,B_j]_{v_i^3 }\big]_{v_i^{-1}}+v_i^2 \big[B_i,[B_{i,1},B_j]_{v_i^{-3}}\big]_{v_i }\\ &\overset{\eqref{iDRG2'}}{=} \big[B_{i,1},[ B_{j,-1},B_{i,1}]_{v_i^3 }\big]_{v_i^{-1}}-v_i^2 \big[B_i,[B_{j,1},B_i]_{v_i^{-3}}\big]_{v_i}\\ &\overset{\qquad }{=}v_i^2 \big[[B_{i,1},B_{j,-1}]_{v_i^{-3}},B_{i,1}\big]_{v_i}-v_i^2 \big[B_i,[B_{j,1},B_i]_{v_i^{-3}}\big]_{v_i}\\ &\overset{\ \,(*)\,\ }{=}[3]_{v_i} B_{j,-1} \mathbb{K}_iC -[3]_{v_i}B_{j,1}\mathbb{K}_i\\ &\qquad +v_i^2[2]_{v_i}\bigg(\texttt{\rm T}_{\omega_i}^{-1}\texttt{\rm T}_i [B_{j,-1} ,B_{i}]_{v_i^{-3}} -[B_i,\texttt{\rm T}_i^{-1}(B_{j,1})]_{v_i^{-3}}\mathbb{K}_i^{-1}\bigg)\\ &\overset{\qquad}{=}[3]_{v_i} B_{j,-1} \mathbb{K}_iC -[3]_{v_i}B_{j,1}\mathbb{K}_i+v_i^2[2]_{v_i}\texttt{\rm T}_i^{-1}\bigg(\texttt{\rm T}_i \texttt{\rm T}_{\omega_i}^{-1}\texttt{\rm T}_i [B_{j,-1} ,B_{i}]_{v_i^{-3}} -[B_i, B_{j,1} ]_{v_i^{-3}}\bigg)\\ &\overset{\ (**)\ }{=}[3]_{v_i} B_{j,-1} \mathbb{K}_iC -[3]_{v_i}B_{j,1}\mathbb{K}_i+v_i^2[2]_{v_i}\texttt{\rm T}_i^{-1}\bigg({\color{red}-}[B_{j,2} ,B_{i,-1}]_{v_i^{-3}} -[B_i, B_{j,1} ]_{v_i^{-3}} \bigg)\\ &\overset{\eqref{iDRG2'}}{=}[3]_{v_i} B_{j,-1} \mathbb{K}_iC -[3]_{v_i}B_{j,1}\mathbb{K}_i, \end{align*} where step (*) follows by applying \eqref{Se:br2} to the first term and \eqref{Se:br1} to the second term, and step (**) follows from $\texttt{\rm T}_i \texttt{\rm T}_{\omega_i}^{-1} \texttt{\rm T}_i= \texttt{\rm T}_{\omega_i} \texttt{\rm T}_{\omega_j}^{-3} $, given in Lemma \ref{lem:Lus}(b) (the additional sign, displayed in red, comes from $o(j)=-1$ in this step). Hence, $Y_{1,0}=0$, and by applying $\texttt{\rm T}_{\omega_j}^{-l}$ we have $Y_{1,l}=0$ for $l\in \mathbb{Z}.$ \section{Verification of Serre relations}\label{Serre} The goal of this section is to establish the general Serre relations \eqref{iDRG5}-\eqref{iDRG6} in ${}^{\mathrm{Dr}}\tUi_{red}$.
We first recover the general Serre relation \eqref{iDRG5} formulated in \cite{LW20b} for $c_{ij}=-1$, using a more straightforward approach than the original one. We then generalize this approach and offer several formulations of the Serre relation for $c_{ij}=-2$. We obtain two symmetric formulations in \S \ref{symfun}. Using these symmetric formulations, we derive the relation \eqref{iDRG6} in terms of generating functions and finish the proof of Theorem \ref{DprBCF} in \S \ref{genfor}. \subsection{Serre relation for $c_{ij}=-1$}\label{SeADE} Let $c_{ij}=-1$. Recall the notation $\mathbb{S}(k_1,k_2|l;i,j)$ introduced in \eqref{eq:Skk} and denote it by $\mathbb{S}(k_1,k_2|l)$ for short. The Serre relation \eqref{iDRG4'}, together with the relation~\eqref{iDRG1'}, is verified in \cite[\S 4.7-4.8]{LW20b} using a spiral induction. In this section, we give a new proof of the Serre relation \eqref{iDRG4'}, without the help of the relation \eqref{iDRG1'}. To begin with, we recall two technical lemmas from \cite{LW20b}. \begin{lemma}[\text{\cite[Lemma 4.13]{LW20b}}]\label{lem:SSS} For $k_1, k_2, l \in \mathbb{Z}$, we have \begin{align*} & \mathbb{S}(k_1,k_2+1 |l) + \mathbb{S}(k_1+1,k_2|l) -[2]_{v_i} \mathbb{S}(k_1,k_2|l+1) \\ &=\operatorname{Sym}\nolimits_{k_1,k_2} \Big( -[\Theta_{i, k_2-k_1+1}, B_{jl}]_{v_i^{-2}} C^{k_1} +v_i^{-2} [\Theta_{i, k_2-k_1-1}, B_{jl}]_{v_i^{-2}} C^{k_1+1} \Big) \mathbb{K}_i . \end{align*} \end{lemma} \begin{lemma}[\text{\cite[Lemma 4.9]{LW20b}}] \label{lem:SS} For $k_1, k_2, l \in \mathbb{Z}$, we have \begin{align*} & \mathbb{S}(k_1,k_2+1 |l) + \mathbb{S}(k_1+1,k_2|l) -[2]_{v_i} \mathbb{S}(k_1+1,k_2+1|l-1) \\ &= \operatorname{Sym}\nolimits_{k_1,k_2}\Big( -[B_{jl},\Theta_{i, k_2-k_1+1} ]_{v_i^{-2}} C^{k_1} +v_i^{-2} [B_{jl},\Theta_{i, k_2-k_1-1}]_{v_i^{-2}} C^{k_1+1} \Big) \mathbb{K}_i . \end{align*} \end{lemma} Denote \begin{align*} \mathbb{S}(w_1,w_2|z)=\operatorname{Sym}\nolimits_{w_1,w_2}\big\{{\mathbf B }_{i}(w_1){\mathbf B }_{i}(w_2){\mathbf B }_{j}(z) -[2]_{v_i}{\mathbf B }_{i}(w_1){\mathbf B }_{j}(z){\mathbf B }_{i}(w_2)+{\mathbf B }_{j}(z){\mathbf B }_{i}(w_1){\mathbf B }_{i}(w_2)\big\}. \end{align*} Lemmas \ref{lem:SSS} and \ref{lem:SS} can be written in terms of generating functions, respectively, as \begin{align} \label{gen7}&(w_1^{-1}+w_2^{-1}-[2]_{v_i}z^{-1})\mathbb{S}(w_1,w_2|z) \\\notag =&\operatorname{Sym}\nolimits_{w_1,w_2} \frac{ \boldsymbol{\Delta}(w_1 w_2)}{v_i-v_i^{-1}} (v_i^{-2} w_1^{-1}-w_2^{-1})[ \boldsymbol{\Theta}_i(w_2),{\mathbf B }_{j}(z)]_{v_i^{-2}}\mathbb{K}_i, \end{align} and \begin{align} \label{gen6}&(w_1+w_2-[2]_{v_i}z)\mathbb{S}(w_1,w_2|z) \\\notag =&\operatorname{Sym}\nolimits_{w_1,w_2} \frac{ \boldsymbol{\Delta}(w_1w_2)}{v_i-v_i^{-1}} (v_i^{-2} w_2-w_1)[{\mathbf B }_{j}(z), \boldsymbol{\Theta}_i(w_2)]_{v_i^{-2}}\mathbb{K}_i. \end{align} Then we calculate $\eqref{gen7}\times [2]_{v_i}z+\eqref{gen6} \times (w_1^{-1}+w_2^{-1})$ and obtain \begin{align}\notag &(w_1-v_i^2 w_2)(w_2^{-1}-v_i^{-2}w_1^{-1})\mathbb{S}(w_1,w_2|z) \\\label{gen8} =&[2]_{v_i} z\operatorname{Sym}\nolimits_{w_1,w_2} \frac{ \boldsymbol{\Delta}(w_1w_2)}{v_i-v_i^{-1}}(v_i^{-2} w_1^{-1}-w_2^{-1})[ \boldsymbol{\Theta}_i(w_2),{\mathbf B }_{j}(z)]_{v_i^{-2}}\mathbb{K}_i \\\notag &+\operatorname{Sym}\nolimits_{w_1,w_2} \frac{ \boldsymbol{\Delta}(w_1w_2)}{v_i-v_i^{-1}} (v_i^{-2} w_2-w_1)(w_1^{-1}+w_2^{-1})[{\mathbf B }_{j}(z), \boldsymbol{\Theta}_i(w_2)]_{v_i^{-2}}\mathbb{K}_i.
\end{align} We also calculate $\eqref{gen7} \times (w_1+w_2)+\eqref{gen6}\times [2]_{v_i} z^{-1}$ and obtain \begin{align}\notag &(w_1-v_i^2 w_2)(w_2^{-1}-v_i^{-2}w_1^{-1})\mathbb{S}(w_1,w_2|z) \\\label{gen8'} =&\operatorname{Sym}\nolimits_{w_1,w_2} \frac{ \boldsymbol{\Delta}(w_1w_2)}{v_i-v_i^{-1}}(v_i^{-2} w_1^{-1}-w_2^{-1})(w_1+w_2)[ \boldsymbol{\Theta}_i(w_2),{\mathbf B }_{j}(z)]_{v_i^{-2}}\mathbb{K}_i \\\notag &+[2]_{v_i} z^{-1}\operatorname{Sym}\nolimits_{w_1,w_2} \frac{ \boldsymbol{\Delta}(w_1w_2)}{v_i-v_i^{-1}} (v_i^{-2} w_2-w_1)[{\mathbf B }_{j}(z), \boldsymbol{\Theta}_i(w_2)]_{v_i^{-2}}\mathbb{K}_i. \end{align} Simplifying \eqref{gen8}, we obtain \begin{align} \mathbb{S}(w_1,w_2|z) \label{gen9} =&-\operatorname{Sym}\nolimits_{w_1,w_2} \frac{ \boldsymbol{\Delta}(w_1w_2)}{v_i-v_i^{-1}}\frac{[2]_{v_i} z}{w_1-v_i^2 w_2}[ \boldsymbol{\Theta}_i(w_2),{\mathbf B }_{j}(z)]_{v_i^{-2}}\mathbb{K}_i \\\notag &-\operatorname{Sym}\nolimits_{w_1,w_2} \frac{ \boldsymbol{\Delta}(w_1w_2)}{v_i-v_i^{-1}} \frac{w_1+w_2}{w_1-v_i^2 w_2}[{\mathbf B }_{j}(z), \boldsymbol{\Theta}_i(w_2)]_{v_i^{-2}}\mathbb{K}_i, \end{align} which is exactly \eqref{iDRG5}. \subsection{Symmetric formulation}\label{symfun} We now turn to the case $c_{ij}=-2$. Two symmetric formulations, \eqref{ind6} and \eqref{ind7}, generalizing Lemmas \ref{lem:SSS} and \ref{lem:SS}, are formulated and verified in this section. Denote \begin{align*} S(k_1,k_2,k_3|l)=\operatorname{Sym}\nolimits_{k_1,k_2,k_3}\sum_{s=0}^3(-1)^s \qbinom{3}{s}_{v_i} B_{i,k_1}\cdots B_{i,k_s}B_{j,l} B_{i,k_{s+1}} \cdots B_{i,k_3}. \end{align*} Note that $S(k_1,k_2,k_3|l)$ is symmetric with respect to the first three components. \begin{proposition}\label{symS} We have, for any $k_1,k_2,k_3$ and $l$, \begin{align} \label{ind6}&S(k_1,k_2,k_3+1|l)+S(k_1+1,k_2,k_3|l)+S(k_1,k_2+1,k_3|l)-[3]_{v_i}S(k_1,k_2,k_3|l+1)\\\notag =&\frac{1}{2}\operatorname{Sym}\nolimits_{k_1,k_2,k_3}\Big(-v_i^{-1}[2]_{v_i}\big[[\Theta_i(k_2,k_3),B_{j,l}]_{v_i^{-2}},B_{i,k_1}\big]_{v_i^{2}}+\big[\Theta_i(k_2,k_3),[B_{i,k_1},B_{j,l}]_{v_i^2}\big]_{v_i^{-4}}\Big). \end{align} \end{proposition} The following relation can be obtained from \eqref{ind6} by applying $\Psi$: \begin{align}\notag &S(k_1-1,k_2,k_3|l)+S(k_1 ,k_2-1,k_3 |l)+S(k_1 ,k_2 ,k_3-1|l)-[3]_{v_i} S(k_1 ,k_2,k_3|l-1)\\ \label{ind7}=&\frac{1}{2}v_i^{-1}[2]_{v_i}\operatorname{Sym}\nolimits_{k_1,k_2,k_3}\big[B_{i,k_3},[B_{j,l},\Theta_i(k_1-1,k_2-1)]_{v_i^{-2}}\big]_{v_i^2} \\\notag &- \frac{1}{2}\operatorname{Sym}\nolimits_{k_1,k_2,k_3} \big[[B_{j,l},B_{i,k_3 }]_{v_i^2},\Theta_i(k_1-1,k_2-1)\big]_{v_i^{-4}}. \end{align} \begin{proof}[Proof of Proposition \ref{symS}] Recall the definition of $\Theta_i(s,r)$ from \eqref{eq:Th1}. Since $\Theta_i(s,r)=\Theta_i(r,s)$, we have $\operatorname{Sym}\nolimits_{r,s} \Theta_i(s,r)=2\Theta_i(s,r)$. We rewrite \eqref{ind6} in the following equivalent form: \begin{align}\notag &S(k_1,k_2,k_3+1|l)+S(k_1+1,k_2,k_3|l)+S(k_1,k_2+1,k_3|l)-[3]_{v_i}S(k_1,k_2,k_3|l+1)\\\label{ind5-1} =&-v_i^{-1}[2]_{v_i}\big[[\Theta_i(k_2,k_3),B_{j,l}]_{v_i^{-2}},B_{i,k_1}\big]_{v_i^{2}}+\big[\Theta_i(k_2,k_3),[B_{i,k_1},B_{j,l}]_{v_i^2}\big]_{v_i^{-4}} \\\notag &-v_i^{-1}[2]_{v_i}\big[[\Theta_i(k_1,k_3),B_{j,l}]_{v_i^{-2}},B_{i,k_2}\big]_{v_i^{2}}+\big[\Theta_i(k_1,k_3),[B_{i,k_2},B_{j,l}]_{v_i^2}\big]_{v_i^{-4}} \\\notag &-v_i^{-1}[2]_{v_i}\big[[\Theta_i(k_1,k_2),B_{j,l}]_{v_i^{-2}},B_{i,k_3}\big]_{v_i^{2}}+\big[\Theta_i(k_1,k_2),[B_{i,k_3},B_{j,l}]_{v_i^2}\big]_{v_i^{-4}}.
\end{align} Denote \begin{align}\notag R(k_1,k_2,k_3|l)=\operatorname{Sym}\nolimits_{k_1,k_2} \bigg(B_{i,k_1}B_{i,k_2}[B_{i,k_3},B_{j,l}]_{v_i^{2}} &-v_i^{-1}[2]_{v_i}B_{i,k_1}[B_{i,k_3},B_{j,l}]_{v_i^{2}}B_{i,k_2}\\ &\quad+v_i^{-2}[B_{i,k_3},B_{j,l}]_{v_i^{2}}B_{i,k_1}B_{i,k_2}\bigg). \end{align} Note that $R(k_1,k_2,k_3|l)$ is only symmetric with respect to its first two components. In fact, $R(k_1,k_2,k_3|l)$ plays the role of breaking the symmetry of $S(k_1,k_2,k_3|l)$, as it satisfies \begin{equation}\label{ind0} S(k_1,k_2,k_3|l)=R(k_1,k_2,k_3|l)+R(k_1,k_3,k_2|l)+R(k_2,k_3,k_1|l). \end{equation} We compute \begin{align}\notag &S(k_1,k_2,k_3+1|l)-[3]_{v_i} R(k_1,k_2,k_3|l+1)\\\label{ind1} =&\bigg\{(1+v_i^2)B_{i,k_1}[B_{i,k_3+1},B_{i,k_2}]_{v_i^2}B_{j,l}+[B_{i,k_3+1},B_{i,k_1}]_{v^2_i}B_{i,k_2}B_{j,l}\\\notag &-[3]_{v_i}[B_{i,k_3+1},B_{i,k_1}]_{v_i^2}B_{j,l} B_{i,k_2}-v_i^{-2}[3]_{v_i}B_{i,k_1} B_{j,l} [B_{i,k_3+1},B_{i,k_2}]_{v_i^2}\\\notag &+v_i^{-2}B_{j,l} B_{i,k_1} [B_{i,k_3+1},B_{i,k_2}]_{v_i^2}+(v_i^{-2}+v_i^{-4})B_{j,l}[B_{i,k_3+1},B_{i,k_1}]_{v_i^2}B_{i,k_2}\bigg\}\\\notag &+\{k_1\leftrightarrow k_2\}, \end{align} where $\{k_1\leftrightarrow k_2\}$ represents the element obtained by swapping $k_1,k_2$ in the first curly brackets.\par We rewrite \eqref{ind1} using the symmetrizer as \begin{align}\notag &S(k_1,k_2,k_3+1|l)-[3]_{v_i} R(k_1,k_2,k_3|l+1)\\\label{ind2} =&\bigg\{(1+v_i^2)B_{i,k_1}[B_{i,k_3+1},B_{i,k_2}]_{v_i^2}B_{j,l}+[B_{i,k_3+1},B_{i,k_2}]_{v^2_i}B_{i,k_1}B_{j,l}\\\notag &-[3]_{v_i}[B_{i,k_3+1},B_{i,k_2}]_{v_i^2}B_{j,l}B_{i,k_1}-v_i^{-2}[3]_{v_i}B_{i,k_1} B_{j,l} [B_{i,k_3+1},B_{i,k_2}]_{v_i^2}\\\notag &+v_i^{-2}B_{j,l} B_{i,k_1} [B_{i,k_3+1},B_{i,k_2}]_{v_i^2}+(v_i^{-2}+v_i^{-4})B_{j,l}[B_{i,k_3+1},B_{i,k_2}]_{v_i^2}B_{i,k_1}\bigg\}\\\notag &+\{k_1\leftrightarrow k_2\}. \end{align} On the other hand, the relation \eqref{iDRG3'} implies that \begin{align}\label{ind21} [B_{i,k_3+1},B_{i,k_1}]_{v_i^2}&=\Theta_i(k_1,k_3)-[B_{i,k_1+1},B_{i,k_3}]_{v_i^2},\\\label{ind22} [B_{i,k_3+1},B_{i,k_2}]_{v_i^2}&=\Theta_i(k_2,k_3)-[B_{i,k_2+1},B_{i,k_3}]_{v_i^2}. \end{align} Substituting \eqref{ind21} and \eqref{ind22} into \eqref{ind2}, we have \begin{align}\notag &S(k_1,k_2,k_3+1|l)-[3]_{v_i} R(k_1,k_2,k_3|l+1)\\\label{ind3} =&-\bigg\{(1+v_i^2)B_{i,k_1}[B_{i,k_2+1},B_{i,k_3}]_{v_i^2}B_{j,l}+[B_{i,k_2+1},B_{i,k_3}]_{v_i^2}B_{i,k_1}B_{j,l}\\\notag &-[3]_{v_i}[B_{i,k_2+1},B_{i,k_3}]_{v_i^2}B_{j,l}B_{i,k_1}-v_i^{-2}[3]_{v_i}B_{i,k_1} B_{j,l} [B_{i,k_2+1},B_{i,k_3}]_{v_i^2}\\\notag &+v_i^{-2}B_{j,l} B_{i,k_1} [B_{i,k_2+1},B_{i,k_3}]_{v_i^2}+(v_i^{-2}+v_i^{-4})B_{j,l}[B_{i,k_2+1},B_{i,k_3}]_{v_i^2}B_{i,k_1}\bigg\}\\\notag &-\{k_1\leftrightarrow k_2\}\\\notag &+Q_{1,2}, \end{align} where $Q_{1,2}$ denotes all terms involving the imaginary root vectors: \begin{align*} Q_{1,2}=&-v_i^{-1}[2]_{v_i}\big[[\Theta_i(k_2,k_3),B_{j,l}]_{v_i^{-2}},B_{i,k_1}\big]_{v_i^{2}}+\big[\Theta_i(k_2,k_3),[B_{i,k_1},B_{j,l}]_{v_i^2}\big]_{v_i^{-4}} \\ &-v_i^{-1}[2]_{v_i}\big[[\Theta_i(k_1,k_3),B_{j,l}]_{v_i^{-2}},B_{i,k_2}\big]_{v_i^{2}}+\big[\Theta_i(k_1,k_3),[B_{i,k_2},B_{j,l}]_{v_i^2}\big]_{v_i^{-4}}. \end{align*} We recognize that the first $\bigg\{\ \bigg\}$-term on the RHS of \eqref{ind3} is the same as the $\bigg\{\ \bigg\}$-term on the RHS of \eqref{ind2}, up to a swap of the indices $k_2\leftrightarrow k_3$. Hence, we replace the one in \eqref{ind3} using \eqref{ind2}. We do the same for the term $\{k_1\leftrightarrow k_2\}$ in \eqref{ind3}.
\begin{align}\notag &S(k_1,k_2,k_3+1|l)-[3]_{v_i}R(k_1,k_2,k_3|l+1)\\\notag =&-\bigg(S(k_1,k_3,k_2+1|l)-[3]_{v_i} R(k_1,k_3,k_2|l+1)\bigg)\\\notag &+\bigg\{(1+v_i^2)B_{i,k_3}[B_{i,k_2+1},B_{i,k_1}]_{v_i^2}B_{j,l}+[B_{i,k_2+1},B_{i,k_1}]_{v_i^2}B_{i,k_3}B_{j,l}\\\label{ind4} &-[3]_{v_i}[B_{i,k_2+1},B_{i,k_1}]_{v_i^2}B_{j,l}B_{i,k_3}-v_i^{-2}[3]_{v_i}B_{i,k_3} B_{j,l} [B_{i,k_2+1},B_{i,k_1}]_{v_i^2}\\\notag &+v_i^{-2}B_{j,l} B_{i,k_3} [B_{i,k_2+1},B_{i,k_1}]_{v_i^2}+(v_i^{-2}+v_i^{-4})B_{j,l}[B_{i,k_2+1},B_{i,k_1}]_{v_i^2}B_{i,k_3}\bigg\}\\\notag &-\bigg(S(k_2,k_3,k_1+1|l)-[3]_{v_i} R(k_2,k_3,k_1|l+1)\bigg)+\bigg\{k_1\leftrightarrow k_2\bigg\}\\\notag &+Q_{1,2}. \end{align} By \eqref{iDRG3'}, we have the following relation: \begin{equation} [B_{i,k_2+1},B_{i,k_1}]_{v_i^2}+[B_{i,k_1+1},B_{i,k_2}]_{v_i^2}=\Theta_i(k_1,k_2). \end{equation} Now we can apply the above relation to those two $\bigg\{\ \bigg\}$-terms in \eqref{ind4} and obtain \begin{align}\notag &S(k_1,k_2,k_3+1|l)+S(k_1+1,k_2,k_3|l)+S(k_1,k_2+1,k_3|l) \\\notag &-[3]_{v_i}\big( R(k_1,k_2,k_3|l+1)+ R(k_1,k_3,k_2|l+1)+ R(k_2,k_3,k_1|l+1)\big) \\ \label{ind5}=&-v_i^{-1}[2]_{v_i}\big[[\Theta_i(k_2,k_3),B_{j,l}]_{v_i^{-2}},B_{i,k_1}\big]_{v_i^{2}}+\big[\Theta_i(k_2,k_3),[B_{i,k_1},B_{j,l}]_{v_i^2}\big]_{v_i^{-4}} \\\notag &-v_i^{-1}[2]_{v_i}\big[[\Theta_i(k_1,k_3),B_{j,l}]_{v_i^{-2}},B_{i,k_2}\big]_{v_i^{2}}+\big[\Theta_i(k_1,k_3),[B_{i,k_2},B_{j,l}]_{v_i^2}\big]_{v_i^{-4}} \\\notag &-v_i^{-1}[2]_{v_i}\big[[\Theta_i(k_1,k_2),B_{j,l}]_{v_i^{-2}},B_{i,k_3}\big]_{v_i^{2}}+\big[\Theta_i(k_1,k_2),[B_{i,k_3},B_{j,l}]_{v_i^2}\big]_{v_i^{-4}}. \end{align} Finally, by \eqref{ind0}, we obtain \eqref{ind5-1} from \eqref{ind5}, as desired. \end{proof} \subsection{Generating function formulation}\label{genfor} By taking suitable linear combinations of the two symmetric formulations, we derive the Serre relation \eqref{iDRG6} and thus finish the proof of Theorem \ref{DprBCF}.\par Fix $i,j\in \mathbb{I}_0$ such that $c_{ij}=-2$. Recall the notation $\mathbb{S}(w_1,w_2,w_3|z;i,j)$ from \eqref{eq:BCF} and denote it by $\mathbb{S}(w_1,w_2,w_3|z)$ for short.\par We can rewrite \eqref{ind6} in terms of generating functions as \begin{align}\notag &(w_1^{-1}+w_2^{-1}+w_3^{-1}-[3]_{v_i}z^{-1})\mathbb{S}(w_1,w_2,w_3|z)\\ \label{gen1}=&-v_i[2]_{v_i}\operatorname{Sym}\nolimits_{w_1,w_2,w_3} \frac{ \boldsymbol{\Delta}(w_2w_3)}{v_i-v_i^{-1}}(v_i^{-2}w_3^{-1}-w_2^{-1})\big[[ \boldsymbol{\Theta}_i(w_2),{\mathbf B }_j(z)]_{v_i^{-4}}, {\mathbf B }_i(w_1)\big]\mathbb{K}_i\\\notag &+\operatorname{Sym}\nolimits_{w_1,w_2,w_3} \frac{ \boldsymbol{\Delta}(w_2w_3)}{v_i-v_i^{-1}}(v_i^{-2}w_3^{-1}-w_2^{-1})\big[ \boldsymbol{\Theta}_i(w_2),[{\mathbf B }_i(w_1), {\mathbf B }_j(z)]_{v_i^{-2}}\big]\mathbb{K}_i, \end{align} and rewrite \eqref{ind7} in terms of generating functions as \begin{align}\notag &(w_1+w_2+w_3-[3]_{v_i}z)\mathbb{S}(w_1,w_2,w_3|z)\\ \label{gen2}=&v_i[2]_{v_i}\operatorname{Sym}\nolimits_{w_1,w_2,w_3} \frac{ \boldsymbol{\Delta}(w_2w_3)}{v_i-v_i^{-1}}(v_i^{-2}w_2-w_3)\big[{\mathbf B }_i(w_1),[{\mathbf B }_j(z), \boldsymbol{\Theta}_i(w_2)]_{v_i^{-4}}\big]\mathbb{K}_i\\\notag &-\operatorname{Sym}\nolimits_{w_1,w_2,w_3} \frac{ \boldsymbol{\Delta}(w_2w_3)}{v_i-v_i^{-1}}(v_i^{-2}w_2-w_3)\big[[ {\mathbf B }_j(z),{\mathbf B }_i(w_1)]_{v_i^{-2}}, \boldsymbol{\Theta}_i(w_2)\big]\mathbb{K}_i.
\end{align} We calculate $\eqref{gen2}\times [3]_{v_i}z^{-1}+\eqref{gen1}\times (w_1+w_2+w_3)$ and obtain \begin{align} \label{gen4'}&\big((w_1+w_2+w_3)(w_1^{-1}+w_2^{-1}+w_3^{-1})-[3]^2_{v_i}\big)\mathbb{S}(w_1,w_2,w_3|z)\mathbb{K}_i^{-1}\\\notag =&[3]_{v_i}z^{-1}\operatorname{Sym}\nolimits_{w_1,w_2,w_3} \frac{ \boldsymbol{\Delta}(w_2w_3)}{v_i-v_i^{-1}}(v_i^{-2}w_2-w_3)\\\notag &\quad\times\bigg(v_i[2]_{v_i}\big[{\mathbf B }_i(w_1),[{\mathbf B }_j(z), \boldsymbol{\Theta}_i(w_2)]_{v_i^{-4}}\big]-\big[[ {\mathbf B }_j(z),{\mathbf B }_i(w_1)]_{v_i^{-2}}, \boldsymbol{\Theta}_i(w_2)\big]\bigg)\\\notag &+\operatorname{Sym}\nolimits_{w_1,w_2,w_3} \frac{ \boldsymbol{\Delta}(w_2w_3)}{v_i-v_i^{-1}}(v_i^{-2}w_3^{-1}-w_2^{-1})(w_1+w_2+w_3)\\\notag &\quad\times\bigg(-v_i[2]_{v_i}\big[[ \boldsymbol{\Theta}_i(w_2),{\mathbf B }_j(z)]_{v_i^{-4}}, {\mathbf B }_i(w_1)\big]+\big[ \boldsymbol{\Theta}_i(w_2),[{\mathbf B }_i(w_1), {\mathbf B }_j(z)]_{v_i^{-2}}\big]\bigg). \end{align} Dividing both sides of \eqref{gen4'} by the coefficient of $\mathbb{S}(w_1,w_2,w_3|z)$, we obtain the defining relation \eqref{iDRG6} of ${}^{\mathrm{Dr}}\tUi$. \begin{remark}\label{double} Our formulation \eqref{iDRG6} of the Serre relation for $c_{ij}=-2$ is not unique since an alternative formulation can be derived as follows. We calculate $\eqref{gen2}\times(w_1^{-1}+w_2^{-1}+w_3^{-1}) + \eqref{gen1}\times [3]_{v_i}z$ and obtain a variant of \eqref{gen4'} as \begin{align} \label{gen4}&\big((w_1+w_2+w_3)(w_1^{-1}+w_2^{-1}+w_3^{-1})-[3]^2_{v_i}\big)\mathbb{S}(w_1,w_2,w_3|z)\mathbb{K}_i^{-1}\\\notag =&\operatorname{Sym}\nolimits_{w_1,w_2,w_3} \frac{ \boldsymbol{\Delta}(w_2w_3)}{v_i-v_i^{-1}}(v_i^{-2}w_2-w_3)(w_1^{-1}+w_2^{-1}+w_3^{-1})\\\notag &\quad\times \bigg(v_i[2]_{v_i}\big[{\mathbf B }_i(w_1),[{\mathbf B }_j(z), \boldsymbol{\Theta}_i(w_2)]_{v_i^{-4}}\big]- \big[[ {\mathbf B }_j(z),{\mathbf B }_i(w_1)]_{v_i^{-2}}, \boldsymbol{\Theta}_i(w_2)\big]\bigg)\\\notag &+[3]_{v_i}z\operatorname{Sym}\nolimits_{w_1,w_2,w_3} \frac{ \boldsymbol{\Delta}(w_2w_3)}{v_i-v_i^{-1}}(v_i^{-2}w_3^{-1}-w_2^{-1})\\\notag &\quad\times\bigg(-v_i[2]_{v_i}\big[[ \boldsymbol{\Theta}_i(w_2),{\mathbf B }_j(z)]_{v_i^{-4}},{\mathbf B }_i(w_1)\big]+\big[ \boldsymbol{\Theta}_i(w_2),[{\mathbf B }_i(w_1), {\mathbf B }_j(z)]_{v_i^{-2}}\big]\bigg). \end{align} Dividing both sides of \eqref{gen4} by the coefficient of $\mathbb{S}(w_1,w_2,w_3|z)$, we get an alternative formulation for the Serre relation, which looks different from \eqref{iDRG6}. In fact, \eqref{gen4} can be obtained from \eqref{gen4'} by applying $\Psi$, and thus the alternative formulation of the Serre relation can also be obtained from \eqref{iDRG6} by applying $\Psi$. \end{remark}
\section{Introduction and Background} Sound design is the process of using a synthesizer and audio effects to craft a desired output sound, typically by leveraging virtual studio technology (VST) on a computer. Often, the audio effects applied to the synthesizer play the biggest role in producing a desired sound. Sound design for the music industry is a very difficult task done by professionals with years of experience. Educational tools are limited, and beginners are usually forced to learn via trial and error or from online resources created by others more experienced who typically also learned in a similar way. Prior work leveraging AI to program audio VSTs uses genetic algorithms [1; 2; 3], genetic programming [3], k-means clustering + tree-search [4], and deep convolutional neural networks [2; 5] to achieve this objective. There has also been research on using deep learning to model or apply audio effects directly [6; 7; 8; 9]. These systems typically suffer from one or more of the following problems: they are applied to toy VSTs with little practical use, they are incompatible with existing VSTs, their inference time is prohibitively long, or they are black boxes with uninterpretable results. When using an AI-assisted system, the user's sense of ownership over their work should be preserved. Our system is inspired by white-box automatic image post-processing systems [10] and collaborative production tools [11; 12] that can educate and augment a user rather than aiming to replace them. \section{System Overview} Our system iteratively nudges an input audio towards the timbre of a desired target audio and provides interpretable intermediate steps. It uses, to the best of our knowledge, a novel approach consisting of an ensemble of models working together: a recurrent neural network (RNN) to select which effect to apply next and then a collection of convolutional neural networks (CNN), one per supported effect, to apply the correct effect parameters. An example sequence of spectrograms and steps output by our system is shown in Figure 1. We collect training data using five timbre-changing audio effects from Serum's effects rack: multi-band compression, distortion, equalizer (EQ), phaser, and hall reverb. We also use 12 different synthesizer presets split into three groups: \emph{basic shapes} (sine, triangle, saw, and square wave), \emph{advanced shapes} (four presets), and \emph{advanced modulating shapes} (four presets). Since our system focuses on reproducing a desired timbre, we represent input audio as power dB Mel spectrograms. Using a modified automated VST rendering tool [13], \textasciitilde120k one-second-long audio clips are generated for each synthesizer preset, sampled from all possible combinations of the five supported effects. The CNN effect models take as input two spectrograms stacked together (two channels total): the target spectrogram and the input spectrogram. Their outputs vary depending on the effect they are modeling, but consist of some combination of binary, categorical, and continuous outputs. A Cartesian product is used for selecting current and target spectrograms to train on, resulting in \textasciitilde1.2M available training data points for each effect. The RNN model takes as input an arbitrary-length sequence consisting of the CNN effect model spectrogram input and a sequence of one-hot vectors of the same length representing the effects used.
Its output is a 5-dimensional softmax layer indicating the probability of the next effect to be applied. More details about data collection, model architectures, and training can be found in supplemental Figures 2 and 3 and supplemental Table 2. \section{Evaluation and Discussion} \begin{table}[h] \caption{Mean errors and $\Delta$s for input and output audio from the \emph{basic shapes} preset group.} \centering \begin{tabular}{lllllllll} \toprule & \multicolumn{3}{c}{Mean Error against Target Audio} & \multicolumn{5}{c}{Mean Error $\Delta$ per Step} \\ \cmidrule(r){2-4} \cmidrule(r){5-9} Metric & Init. Audio & Final Audio & $\Delta$ & 1 & 2 & 3 & 4 & 5 \\ \midrule MSE & 0.055 & 0.012 & -0.043 & -0.024 & -0.013 & -0.008 & -0.004 & -0.002 \\ MAE & 0.172 & 0.074 & -0.098 & -0.052 & -0.034 & -0.020 & -0.008 & -0.004 \\ MFCC & 157.15 & 70.06 & -87.09 & -45.97 & -28.64 & -20.13 & -7.30 & -4.28 \\ LSD & 16.16 & 7.62 & -8.54 & -4.62 & -3.04 & -1.64 & -0.60 & -0.32 \\ \bottomrule \end{tabular} \end{table} The CNN effect models are evaluated individually on their parameter reconstruction ability and on how closely their output matches the target audio. Audio similarity is measured via four different metrics: the MAE and MSE between the two power dB spectrograms, the mean Euclidean distance between the first 20 Mel frequency cepstral coefficients (MFCC), and the log-spectral distance (LSD) between the two power spectrograms. The RNN model is evaluated on its prediction accuracy for the next effect, and the entire ensemble of models is evaluated on changes in audio similarity as steps are taken by the system. Evaluation results for our entire system are shown in Table 1, and additional results can be found in supplemental Tables 3, 4, 5, and 6. The results indicate that our system is consistently able to produce intermediate steps that bring the input audio significantly closer to the target audio.\footnote{Audio examples can be listened to at \url{https://bit.ly/serum_rnn}} Our system also provides near real-time, quantitative feedback about which effects are the most important. The user can pick and choose which intermediate steps they would like to use and can feed tweaked versions back into the system for additional fine-tuning or to learn more. We also noticed fun creative applications when our system produced unexpected results or was given significantly out-of-domain target audio. \section{Ethical Implications} Combining artificial intelligence with creativity carries with it various ethical considerations, one of which is a potential future decrease in demand for professional audio producers due to an increasing ability to replace them with technology. We believe the best approach to this is to build systems that are collaborative and can augment people rather than replacing them entirely. While we believe our research is just the tip of the iceberg for building AI-powered sound design tools, we can imagine a future where tools like ours might be able to find more efficient and simpler methods of creating sounds, thus educating students more effectively and democratizing sound design. We compare this to a similar situation that occurred when beat-matching technology was invented and added to DJ systems (to the disgruntlement of some DJing "purists").
\section{Ethical Implications} Combining artificial intelligence with creativity carries various ethical considerations, one of which is a potential future decrease in demand for professional audio producers due to an increasing ability to replace them with technology. We believe the best approach to this is to build systems that are collaborative and can augment people rather than replacing them entirely. While we believe our research is just the tip of the iceberg for building AI-powered sound design tools, we can imagine a future where tools like ours might be able to find more efficient and simpler methods of creating sounds, thus educating students more effectively and democratizing sound design. We compare this to a similar situation that occurred when beat-matching technology was invented and added to DJ systems (to the disgruntlement of some DJing "purists"). However, this sometimes controversial technology democratized DJing and enabled a new generation of artists to focus on new creative applications, thus progressing the community as a whole.

\section{Acknowledgements} We would like to thank Erwin Wu for providing additional computing resources.

\section{References} \small

[1] Tatar, K., M. Macret, and P. Pasquier. "Automatic Synthesizer Preset Generation with PresetGen." \emph{Journal of New Music Research 45} (2016).

[2] Yee-King, M., L. Fedden, and M. d'Inverno. "Automatic Programming of VST Sound Synthesizers Using Deep Networks and Other Techniques." \emph{IEEE Transactions on Emerging Topics in Computational Intelligence 2} (2018).

[3] Macret, M., and P. Pasquier. "Automatic design of sound synthesizers as pure data patches using coevolutionary mixed-typed cartesian genetic programming." \emph{GECCO '14} (2014).

[4] Cáceres, J.-P. "Sound Design Learning for Frequency Modulation Synthesis Parameters." (2007).

[5] Barkan, O., D. Tsiris, O. Katz, and N. Koenigstein. "InverSynth: Deep Estimation of Synthesizer Parameter Configurations From Audio Signals." \emph{IEEE/ACM Transactions on Audio, Speech, and Language Processing 27} (2019).

[6] Ramírez, M. A. M., and J. Reiss. "End-to-end equalization with convolutional neural networks." \emph{International Conference on Digital Audio Effects} (2018).

[7] Damskägg, E.-P., L. Juvela, E. Thuillier, and V. Välimäki. "Deep Learning for Tube Amplifier Emulation." \emph{ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)} (2019).

[8] Sheng, D., and G. Fazekas. "A Feature Learning Siamese Model for Intelligent Control of the Dynamic Range Compressor." \emph{2019 International Joint Conference on Neural Networks (IJCNN)} (2019).

[9] Engel, J., L. Hantrakul, C. Gu, and A. Roberts. "DDSP: Differentiable Digital Signal Processing." \emph{2020 International Conference on Learning Representations (ICLR)} (2020).

[10] Hu, Y., H. He, C. Xu, B. Wang, and S. Lin. "Exposure: A White-Box Photo Post-Processing Framework." \emph{ACM Transactions on Graphics} (2017).

[11] Sommer, N., and A. Ralescu. "Developing a Machine Learning Approach to Controlling Musical Synthesizer Parameters in Real-Time Live Performance." \emph{MAICS} (2014).

[12] Thio, V., and C. Donahue. "Neural Loops." \emph{2019 NeurIPS Workshop on Machine Learning for Creativity and Design} (2019).

[13] Fedden, L. "RenderMan." GitHub. \url{https://github.com/fedden/RenderMan} (accessed 2020).
\pagebreak \normalsize \section{Supplemental Material} \subsection{Data Collection}

\begin{table}[h]
\caption{Parameters sampled from the Serum VST synthesizer.}
\centering
\begin{tabular}{llll}
\toprule
Effect & Parameter Name & Type & Sampled Values \\
\midrule
Compressor & Low-band Compression & Continuous & [0.0, 1.0] \\
Compressor & Mid-band Compression & Continuous & [0.0, 1.0] \\
Compressor & High-band Compression & Continuous & [0.0, 1.0] \\
Distortion & Mode & Categorical & 12 classes \\
Distortion & Drive & Continuous & [0.3, 1.0] \\
Equalizer & High Frequency Cutoff & Continuous & [0.50, 0.95] \\
Equalizer & High Frequency Resonance & Continuous & [0.0, 1.0] \\
Equalizer & High Frequency Gain & Continuous & [0.0, 0.4] and [0.6, 1.0] \\
Phaser & LFO Depth & Continuous & [0.0, 1.0] \\
Phaser & Frequency & Continuous & [0.0, 1.0] \\
Phaser & Feedback & Continuous & [0.0, 1.0] \\
Hall Reverb & Mix & Continuous & [0.3, 0.7] \\
Hall Reverb & Low Frequency Cutoff & Continuous & [0.0, 1.0] \\
Hall Reverb & High Frequency Cutoff & Continuous & [0.0, 1.0] \\
\bottomrule
\end{tabular}
\end{table}

Data collection and processing systems represent a significant portion of the software engineering effort required for this project. Table 2 summarizes which Serum synthesizer parameters are sampled for each supported effect. Parameter sampling value ranges are occasionally limited to lie within practical, everyday use regions. The \emph{basic shapes} preset group consists of the single oscillator sine, triangle, saw, and square wave default Serum presets. The \emph{advanced shapes} preset group consists of the dry (no effects) dual oscillator \texttt{"LD Power 5ths"}, \texttt{"SY Mtron Saw"}, \texttt{"SY Shot Dirt Stab"}, and \texttt{"SY Vintage Bells"} default Serum presets. The \emph{advanced modulating shapes} preset group consists of the dry dual oscillator \texttt{"LD Iheardulike5ths"}, \texttt{"LD Postmodern Talking"}, \texttt{"SQ Busy Lines"}, and \texttt{"SY Runtheharm"} default Serum presets. All of these presets also use intense time-varying modulations. Audio samples are played and rendered for one second using a MIDI pitch of C4, maximum velocity, and a sampling rate of 44100 Hz. In the future we would like to include audio pitch, attack, decay, sustain, and release features directly into our system by modifying these values. Mel spectrograms are calculated using a hop length of 512 samples, an FFT window length of 4096, and 128 Mel filters.

\subsection{Modeling}

\begin{figure}[ht] \centering \includegraphics[width=0.85\linewidth]{images/cnn_2x.png} \caption{CNN effect model architecture (not to scale).} \end{figure}

All five CNN effect models use ELU activations and a 50\% dropout rate for each of their fully connected (FC) layers. Their architecture is shown in Figure 2. They are trained using a batch size of 128, mean squared error loss for continuous parameters, binary cross-entropy loss for binary parameters, and categorical cross-entropy loss for categorical parameters.

\begin{figure}[ht] \centering \includegraphics[width=0.72\linewidth]{images/rnn_cnn.png} \caption{CNN model architecture used in the RNN next effect prediction model (not to scale).} \end{figure}

The RNN model consists of a bi-directional, 128-dimensional LSTM layer followed by a 128-dimensional FC layer and lastly the 5-dimensional softmax output layer. The FC layer uses ELU activation units and a 50\% dropout rate. Features are extracted from the Mel spectrogram sequence input using a smaller, time-distributed CNN with an architecture displayed in Figure 3. This sequence of extracted, 128-dimensional Mel spectrogram features is concatenated with the sequence of one-hot vectors representing which effects have been used and is then fed as input to the LSTM layer. The RNN model is trained with a batch size of 32. All models are trained for 100 epochs with early stopping and a validation and test split of 0.10 and 0.05 respectively. The Adam optimizer is used with a learning rate of 0.001.
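A minimal sketch of this next-effect model in Keras follows; the feature-extractor layer sizes are simplified stand-ins for the CNN of Figure 3, and all names are illustrative assumptions rather than our exact training code:

\begin{verbatim}
# Sketch of the next-effect RNN: a time-distributed CNN feature extractor,
# concatenation with the one-hot effect sequence, a bi-directional LSTM,
# and a 5-way softmax output.
import tensorflow as tf
from tensorflow.keras import layers, models

N_MELS, N_FRAMES, N_EFFECTS = 128, 87, 5  # ~87 frames per second at hop 512

feat_cnn = models.Sequential([            # stand-in for the CNN in Figure 3
    layers.Input(shape=(N_MELS, N_FRAMES, 2)),
    layers.Conv2D(32, 3, activation="elu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation="elu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="elu"),  # 128-dim spectrogram features
])

spec_seq = layers.Input(shape=(None, N_MELS, N_FRAMES, 2))  # spectrogram pairs
effect_seq = layers.Input(shape=(None, N_EFFECTS))          # one-hot used effects
x = layers.TimeDistributed(feat_cnn)(spec_seq)
x = layers.Concatenate()([x, effect_seq])
x = layers.Bidirectional(layers.LSTM(128))(x)
x = layers.Dropout(0.5)(layers.Dense(128, activation="elu")(x))
out = layers.Dense(N_EFFECTS, activation="softmax")(x)      # next-effect probs

model = models.Model([spec_seq, effect_seq], out)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
\end{verbatim}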
\subsection{Evaluation}

\begin{table}[h]
\caption{CNN effect models mean error reduction for all three preset groups.}
\centering
\begin{tabular}{llrrr}
\toprule
&& \multicolumn{3}{c}{Mean Error $\Delta$ against Target Audio} \\
\cmidrule(r){3-5}
Effect & Metric & \emph{Basic Shapes} & \emph{Adv. Shapes} & \emph{Adv. Mod. Shapes} \\
\midrule
Compressor & MSE & -0.012 & -0.013 & -0.007 \\
Compressor & MAE & -0.050 & -0.049 & -0.030 \\
Compressor & MFCC & -53.19 & -54.66 & -37.42 \\
Compressor & LSD & -4.20 & -4.15 & -2.42 \\
\midrule
Distortion & MSE & -0.036 & -0.019 & -0.037 \\
Distortion & MAE & -0.062 & -0.056 & -0.082 \\
Distortion & MFCC & -60.50 & -60.70 & -88.40 \\
Distortion & LSD & -5.30 & -5.11 & -7.18 \\
\midrule
Equalizer & MSE & -0.004 & -0.009 & -0.005 \\
Equalizer & MAE & -0.018 & -0.038 & -0.019 \\
Equalizer & MFCC & -24.63 & -45.20 & -29.93 \\
Equalizer & LSD & -1.31 & -3.27 & -1.54 \\
\midrule
Phaser* & MSE & 0.002 & 0.000 & 0.002 \\
Phaser* & MAE & 0.005 & -0.002 & 0.008 \\
Phaser* & MFCC & 1.23 & -5.98 & 1.96 \\
Phaser* & LSD & 0.64 & -0.11 & 0.99 \\
\midrule
Hall Reverb & MSE & -0.016 & -0.005 & -0.007 \\
Hall Reverb & MAE & -0.064 & -0.029 & -0.033 \\
Hall Reverb & MFCC & -47.16 & -26.61 & -31.59 \\
Hall Reverb & LSD & -6.27 & -2.46 & -3.10 \\
\bottomrule
\end{tabular}
\end{table}

* It is important to note that Mel spectrograms do not represent phase information well. As a result, the error metrics used are less representative of the phaser effect's error. A positive error $\Delta$ may occur even when the predicted audio sample clearly sounds much closer to the target audio sample when compared to the initial audio sample. We plan to include phase information in future iterations of our system.

\bigskip \bigskip

\begin{table}[h]
\caption{RNN model next effect prediction accuracy for all three preset groups.}
\centering
\begin{tabular}{lrrr}
\toprule
& \multicolumn{3}{c}{Mean Next Effect Prediction Accuracy} \\
\cmidrule(r){2-4}
Step & \emph{Basic Shapes} & \emph{Adv. Shapes} & \emph{Adv. Mod. Shapes} \\
\midrule
1 & 0.997 & 0.993 & 0.997 \\
2 & 0.983 & 0.985 & 0.989 \\
3 & 0.981 & 0.979 & 0.983 \\
4 & 0.973 & 0.988 & 0.977 \\
5 & 0.999 & 1.000 & 0.997 \\
\midrule
All & 0.983 & 0.985 & 0.986 \\
\bottomrule
\end{tabular}
\end{table}

\bigskip \bigskip

\begin{table}[h]
\caption{Mean errors and $\Delta$s for input and output audio from the \emph{advanced shapes} preset group.}
\centering
\begin{tabular}{lrrrrrrrr}
\toprule
& \multicolumn{3}{c}{Mean Error against Target Audio} & \multicolumn{5}{c}{Mean Error $\Delta$ per Step} \\
\cmidrule(r){2-4} \cmidrule(r){5-9}
Metric & Init. Audio & Final Audio & $\Delta$ & 1 & 2 & 3 & 4 & 5 \\
\midrule
MSE  & 0.039  & 0.009 & -0.030 & -0.017 & -0.010 & -0.007 & -0.003 & 0.000  \\
MAE  & 0.150  & 0.067 & -0.083 & -0.042 & -0.030 & -0.022 & -0.009 & -0.001 \\
MFCC & 146.79 & 62.25 & -84.54 & -43.45 & -30.36 & -21.87 & -9.00  & -0.40  \\
LSD  & 14.13  & 6.71  & -7.42  & -3.91  & -2.66  & -1.83  & -0.70  & -0.05  \\
\bottomrule
\end{tabular}
\end{table}

\bigskip \bigskip

\begin{table}[h]
\caption{Mean errors and $\Delta$s for input and output audio from the \emph{advanced mod. shapes} preset group.}
\centering
\begin{tabular}{lrrrrrrrr}
\toprule
& \multicolumn{3}{c}{Mean Error against Target Audio} & \multicolumn{5}{c}{Mean Error $\Delta$ per Step} \\
\cmidrule(r){2-4} \cmidrule(r){5-9}
Metric & Init. Audio & Final Audio & $\Delta$ & 1 & 2 & 3 & 4 & 5 \\
\midrule
MSE  & 0.049  & 0.013 & -0.036  & -0.022 & -0.010 & -0.008 & -0.000 & -0.002 \\
MAE  & 0.181  & 0.077 & -0.104  & -0.070 & -0.026 & -0.018 & -0.004 & -0.004 \\
MFCC & 176.37 & 72.52 & -103.85 & -68.97 & -24.92 & -18.50 & -4.26  & -3.49  \\
LSD  & 16.90  & 7.74  & -9.16   & -6.17  & -2.22  & -1.49  & -0.33  & -0.32  \\
\bottomrule
\end{tabular}
\end{table}

\end{document}
\section{Introduction} Gravitational waves (GWs) are now a ripe testing ground for many aspects of gravitational physics \cite{PhysRevLett.116.061102,LIGOScientific:2016lio,LIGOScientific:2019fpa,Abbott:2020jks}. One of the principal foundations of General Relativity is the Einstein Equivalence Principle, which includes the universality of freefall and the spacetime-symmetry principle of the local Lorentz invariance of physics \cite{Will:2014kxa}. The latter principle has seen a boom in tests in the last 20+ years \cite{datatables}, owing primarily to the motivation that, in a fundamental unified theory of physics, local Lorentz invariance may be broken \cite{ksstring89,gp99,chkl01}. The development of an effective field theory framework that describes spacetime-symmetry violations makes comparisons between vastly different kinds of tests possible, generalizing older kinematical test frameworks with a modern viewpoint \cite{ck97,ck98,k04}. The specific consequences of local Lorentz-symmetry breaking for GWs have been studied in several works, within a general effective field theory framework \cite{bk06,km16,km18,Xu:2019fyt,Xu:2021dcw,nascimento21}, and in specific models \cite{yunes16,Berti:2018cxi,Amarilo:2019lfq,ferrari07,tso16,Wang_2020,Qiao_2019}. In particular, the effects on propagation have been determined for generic Lorentz-violating terms in the linearized gravity limit \cite{Mewes:2019}, which is the focus in this work. Examples of searches for Lorentz violation in gravity include table-top tests like gravimetry \cite{Muller:2007es,Chung:2009rm,Hohensee:2011wt,Hohensee:2013hra,Flowers:2016ctv,Shao:2017bgz,Ivanov:2019ouz}, short-range gravity tests \cite{Long:2014swa,Shao:2016cjk,Shao:2018lsx}, near-Earth tests \cite{Bourgoin:2016ynf,Bourgoin:2017fpo,Bourgoin:2020ckq,Bars:2019lek}, solar system planetary tests \cite{Iorio:2012gr,Hees:2013iv,Poncin-Lafitte:2016nqd}, and astrophysical tests with pulsars \cite{Shao:2014oha,Shao:2018vul,Wex:2020ald}. Measurements of simultaneous gravitational and electromagnetic radiation have yielded limits on certain types of Lorentz violation in gravity versus light \cite{Abbott_2017}. Recent work has begun to look at the available GW catalog to place constraints on coefficients describing Lorentz violation for gravity \cite{Liu:2020slm, shao20, wang21}. Additionally, closely-related searches for parameterizations of deviations from General Relativity have been completed \cite{LIGOScientific:2019fpa,Abbott:2020jks,Wang_2020,Wang:2020cub}. In this article, we discuss the derivation of the polarization-dependent dispersion of GWs due to Lorentz and Charge-Parity-Time reversal (CPT) symmetry breaking. We describe the implementation of the modified GW strain in the LIGO-Virgo \cite{LIGOScientific:2016lio, LIGOScientific:2019fpa} algorithm library, \texttt{LALSuite} \cite{lalsuite}, as well as the statistical method used to infer the posterior probability of the coefficients for symmetry breaking. In order to link the theoretical derivation to the analysis of astrophysical signals, we provide detailed explanations of the steps necessary to measure the coefficients for CPT and Lorentz violation, alongside simulations of the modified signals and studies of the sensitivity of current GW interferometers for parameter inference. The layout of the article is as follows.
In Section \ref{theory}, we describe the theoretical methodology for effective field theory descriptions of local Lorentz violation, including a scalar field example and an effective quadratic action for the spacetime metric fluctuations. This Section also includes the discussion of the modified plane wave solutions, and the conversion of various expressions to Syst\`eme International (SI) units. Following this, in Section \ref{lalsuite} we describe the implementation of the modified GW signals and the statistical method used for the inference of the coefficients controlling the Lorentz- and CPT-breaking effects on propagation. Section \ref{simulation} includes simulations for a particular subset of the possible forms of Lorentz and CPT violation. A summary and outlook is included in Section \ref{conclusion}. For the bulk of the paper, we work in natural units where $\hbar=c=1$ and Newton's gravitational constant is $G_N \neq 1$, except when we explicitly write some expressions in SI units. Our convention is to work with Greek letters for spacetime indices and Latin letters for spatial indices. The flat spacetime metric signature aligns with the common General Relativity convention $-+++$.

\section{Theoretical Framework} \label{theory} \subsection{Background and General Relativity} \label{background} As in the typical gravitational wave scenario, we expand the spacetime metric $g_{\mu\nu}$ around a flat Minkowski background $\eta_{\mu\nu}$ as \begin{equation} g_{\mu \nu} = \eta_{\mu \nu}+h_{\mu \nu}. \label{metric} \end{equation} Far from the source at the detectors, GWs are treated as perturbations $h_{\mu\nu}$ around the Minkowski metric with $|h_{\mu\nu}| \ll 1$ (e.g., components on the order of $10^{-21}$). However, one does not assume $h_{\mu\nu}$ is small compared to unity in all regions. In particular, in solving for the complete solution in the far radiation zone, one needs to solve in the near zone as well, for example in a post-Newtonian series \cite{pw00,pw02}. In standard General Relativity, one solves the Einstein field equations for the metric; the form in \rf{metric} is a rewritten form, not yet a specific solution. The full Einstein field equations can be written in the ``relaxed'' form, as \begin{equation} (G_L)^{\mu\nu} =\kappa [ (T_M)^{\mu\nu} +\tau^{\mu\nu} ], \label{relaxed} \end{equation} where $(T_M)^{\mu\nu}$ is the matter stress-energy tensor, $\tau^{\mu\nu}$ is the energy-momentum pseudotensor \cite{pw14}, and $\kappa=8 \pi G_N$. Note that in this equation, $(G_L)^{\mu\nu}$ is the Einstein tensor linearized in $h_{\mu\nu}$. In the wave zone, where the gravitational fields are weak, the equation \rf{relaxed} becomes simply the ``vacuum'' equations $(G_L)^{\mu\nu} =0$, which admit wave solutions with two transverse degrees of freedom after choosing a gauge. The transverse-traceless gauge (TT-gauge) is used to describe the propagation of GWs; in this gauge, GR predicts two linearly independent polarizations labelled ``$+$'' and ``$\times$'', whose polarization axes are rotated by $\pi/4$ with respect to each other, \begin{equation} h_{\mu\nu} = \begin{pmatrix} 0 & 0 & 0 & 0\\ 0 & h_{+} & h_{\times} & 0\\ 0 & h_{\times} & -h_{+} & 0\\ 0 & 0 & 0 & 0 \end{pmatrix}.
\label{TT} \end{equation} The observable signal comes from the LIGO and Virgo detector responses to the incoming GW, \begin{equation} S_{A}(t,\theta,\phi,\psi) = F_{A,+}(\theta,\phi,\psi)\,h_{+}(t,\theta,\phi,\psi,\tau)\,+\,F_{A,\times}(\theta,\phi,\psi)\,h_{\times}(t,\theta,\phi,\psi,\tau), \label{antenna} \end{equation} where $F_{A,*}$ are the antenna response patterns of the detectors. The angles $\theta$ and $\phi$ specify the source sky location, $\tau$ is the time delay between detectors receiving the signal, and $\psi$ is the GW frame rotation with respect to the detectors' frame. Note the individual polarization terms above are not gauge independent, as they depend on $\psi$, yet the entire observed signal is gauge independent. We return to this point below when discussing Lorentz violation effects for GWs.

\subsection{Spacetime-symmetry breaking scenario} \label{spacetime} We consider the observable effects on gravitational wave propagation subject to Lorentz- and CPT-breaking terms in an effective field theory framework known as the Standard-Model Extension (SME) \cite{ck97,ck98,k04,bk06,kt11,bkx15,km16}. \subsubsection{Scalar field example} \label{scalar} To help understand this framework, and how the action is developed, we consider first a scalar field action in flat spacetime. A free, massless scalar field $\Phi$ is described by the action \begin{equation} I_{\rm sc}= -\frac 12 \int d^4x \, \eta^{\mu\nu} (\partial_\mu \Phi) (\partial_\nu \Phi ). \label{scalar1} \end{equation} When varying this action $\Phi \rightarrow \Phi + \delta \Phi$ one obtains, to first order in $\delta \Phi$, and applying the Leibniz rule, \begin{eqnarray} \delta I_{\rm sc} &=& -\int d^4x \, \eta^{\mu\nu} (\partial_\mu \delta \Phi ) (\partial_\nu \Phi ) \nonumber\\ &=& -\int d^4x \, [ \partial_\mu \left( \delta \Phi (\partial^\mu \Phi ) \right) - \delta \Phi \partial_\mu \partial^\mu \Phi ]. \label{scalar2} \end{eqnarray} The first term is a total four-divergence and hence is normally treated as a surface term, to be evaluated on the three-dimensional hypersurface $\Sigma$ bounding the volume of spacetime considered. Since the variational principle in field theory normally assumes the variation $\delta \Phi$ vanishes on the boundary, this term vanishes. What is left is proportional to the arbitrary variation $\delta \Phi$, and therefore if $\delta I = 0 $ is imposed we obtain the field equations: \begin{equation} \Box \Phi = 0, \label{scalareq1} \end{equation} where $\Box=\partial_\alpha \partial^\alpha$. In the effective field theory framework description of Lorentz violation, terms are added to the action \rf{scalar1} that are formed from contractions of general background coefficients with arbitrary numbers of indices $k^{\mu\nu\lambda...}$ and terms involving the scalar field like $\partial_\mu \Phi \partial_\nu \Phi$. This is based on the premise that any form of Lorentz violation can be described by the coupling of known matter fields to a fixed background field $k^{\mu\nu\lambda...}$ \cite{ck97,ck98}. Under {\it particle} Lorentz transformations, the matter fields transform as tensors, while the background field remains fixed. On the other hand, under {\it observer} transformations, both background and matter fields transform. The latter condition reflects the idea that physics should be independent of coordinates. These concepts are detailed in the literature.
Most notably, see references \cite{Bertschinger:2013jla,bertschinger19} for illustrations in classical mechanics contexts. There are several treatments of the origin of Lorentz violation that can play a role in the phenomenology of the effective field theory test framework (SME). The Lorentz violation can be explicit, in which case the coefficients are prescribed, {\it a priori} unknown background fields, unaccompanied by additional dynamical modes. On the other hand, a more elegant mechanism of spontaneous Lorentz-symmetry breaking can be considered. In this latter case, the underlying action for the model is Lorentz invariant, but through a dynamical process, nonzero vacuum expectation values for tensor fields can arise \cite{ksstring89}. Other scenarios with alternative geometries like Riemann-Finsler geometry have been explored \cite{Kostelecky:2011qz,Lammerzahl:2012kw,AlanKostelecky:2012yjr,Javaloyes:2013ika,Schreck:2014hga,SILVA201474}. Much theoretical discussion of these topics exists in the literature \cite{k04,PhysRevD.82.044020,Arraut:2015nqa,bk05,bkx08,bluhm15,kostelecky:2021xhb}, but we do not delve into details here. As a simple start, one might consider trying to add a vector coupled to a first derivative of the scalar to \rf{scalar1}, as in \begin{equation} \Delta I_{\rm sc}=\int d^4x \, k_\nu ( \Phi \partial^{\nu}\Phi ) \label{scalarLV1} \end{equation} for an arbitrary background vector $k_\nu$ (we assume the explicit symmetry breaking case for the moment). However, this can be shown to be equivalent to a surface term: \begin{eqnarray} \Delta I_{\rm sc}&=&\frac{1}{2}\int d^4x \, k_{\nu}\partial^{\nu}(\Phi^2) \nonumber\\ &=&\frac{1}{2}\int d^3 \Sigma_\nu \, k^{\nu}\,\Phi^2 , \label{surfacek} \end{eqnarray} where $d^3\Sigma_\nu$ is the hypersurface ``area'' element. Since the variation $\delta \Phi$ is assumed to vanish on the hypersurface, this contribution will vanish from the field equations. Alternatively, variation of \rf{scalarLV1} yields a null result more directly: \begin{eqnarray} \delta \Delta I_{\rm sc} &=& \int d^4x \, k_{\nu}\left[\delta \Phi \, \partial^{\nu}\Phi +\Phi \,\delta(\partial^{\nu}\Phi)\right] \nonumber\\ &=&\int d^4x \, k_{\nu}(\partial^{\nu}\Phi - \partial^{\nu}\Phi )\, \delta \Phi, \end{eqnarray} where the last line vanishes identically. To obtain Lorentz-violating terms that yield physical results we modify the action in \rf{scalar1} as \begin{equation} I_{\rm sc}= -\frac 12 \int d^4x \, \left( \eta^{\mu\nu} (\partial_\mu \Phi ) (\partial_\nu \Phi ) + (\partial_\mu \Phi ) k^{\mu\nu} (\partial_\nu \Phi ) \right), \label{scalarLV2} \end{equation} where $k^{\mu\nu}$ are the coefficients for Lorentz violation \cite{ck98,Edwards:2018lsn}, containing $10$ independent components describing the degree of Lorentz violation. Note that we assume here that the coefficients are constants in the chosen coordinate system (i.e., the partials vanish, $\partial_\alpha k^{\mu\nu} = 0$). Upon variation, as in \rf{scalar1}, we obtain the modified field equations \begin{equation} \Box \Phi + k^{\mu\nu} \partial_\mu \partial_\nu \Phi=0. \label{scalareqn2} \end{equation} To complete the discussion here we also consider the plane wave solutions to \rf{scalareqn2}. This is achieved by assuming $\Phi$ takes the form $\Phi = A e^{i p_\mu x^\mu}$, where $x^\mu$ is the spacetime position and $p^\mu = (\omega, \vec p )$ is the four-momentum for the plane wave. This yields the momentum-space equation \begin{equation} p_\mu p^\mu + k_{\mu\nu} p^\mu p^\nu = 0.
\label{disp1} \end{equation} Using the definition of the four-momentum we can write this out in a space and time decomposed form: \begin{equation} \omega^2 (1-k_{00}) -2 k_{0j} p^j \omega - k_{ij} p^i p^j - \vec p^2 = 0. \label{disp2} \end{equation} We can solve for the dispersion relation $\omega (\vec p)$ and then expand the result to leading order in the coefficients $k_{\mu\nu}$. We obtain \begin{equation} \omega \approx |\vec p| \left( 1+ \frac 12 (k_{00} + 2 k_{0j} {\hat p}^j + k_{ij} {\hat p}^i {\hat p}^j ) \right) \label{disp3}. \end{equation} This dispersion would modify the propagation of the scalar mode; in particular, its speed $v=\omega/|\vec p|$ can be written as \begin{equation} v \approx 1+ \frac 12 (k_{00} + 2 k_{0j} {\hat p}^j + k_{ij} {\hat p}^i {\hat p}^j ). \label{speed} \end{equation} Note the directional dependence of the speed due to the anisotropic coefficients $k_{0j}$ and $k_{ij}$. Even the isotropic limit, where only $k_{00}$ appears, is a special feature of a particular observer frame, due to the observer Lorentz covariance. For example, when viewed by an observer boosted by a small $\beta^j$, anisotropic terms will arise (e.g., $(k^\prime)_{0j} \sim -\beta^j k_{00}$). In the typical effective field theory treatment of searches for Lorentz violation, additional, ``higher order'', terms are also included \cite{km09}. Thus the action takes the form \begin{equation} I_{\rm sc} = -\frac 12 \int d^4x \, \left( \eta^{\mu\nu} (\partial_\mu \Phi ) (\partial_\nu \Phi ) + (\partial_\mu \Phi ) \sum_d (k^{(d)})^{\mu\nu\lambda...} (\partial_\nu \partial_\lambda ...\Phi ) \right), \label{scalarLV3} \end{equation} where now the coefficients are labeled by $d$, the mass dimension of the term in the action, with the scalar field itself having mass dimension $1$ and each derivative contributing one unit of mass dimension. Thus, the result in \rf{scalarLV2} is the $d=4$ limit and the coefficients $k^{\mu\nu}$ are dimensionless. In general the coefficients $(k^{(d)})^{\mu\nu\lambda...} $ have mass dimension $M^{4-d}$.
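The expansion \rf{disp3} is straightforward to verify symbolically. The following sketch (our own check, with propagation taken along $z$ for brevity and a bookkeeping parameter $\epsilon$ marking the smallness of the coefficients) reproduces the first-order result:

\begin{verbatim}
# Verify Eq. (disp3): solve the quadratic dispersion relation (disp2)
# for omega and expand to first order in the coefficients.
import sympy as sp

w = sp.symbols('omega')
p = sp.symbols('p', positive=True)
eps = sp.symbols('epsilon', positive=True)   # bookkeeping small parameter
k00, k03, k33 = sp.symbols('k_00 k_03 k_33')

# Eq. (disp2) for propagation along z, with every coefficient scaled by eps:
disp = w**2*(1 - eps*k00) - 2*eps*k03*p*w - eps*k33*p**2 - p**2

# pick the root that reduces to omega = p when the coefficients vanish
root = [r for r in sp.solve(disp, w) if r.subs(eps, 0) == p][0]
print(sp.expand(sp.series(root, eps, 0, 2).removeO()))
# -> p + eps*(k_00*p/2 + k_03*p + k_33*p/2), matching Eq. (disp3) for phat = zhat
\end{verbatim}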
\subsubsection{Gravity sector case} \label{gravity sector case} The action from the gravity sector that includes both linearized Lorentz-invariant and Lorentz-violating terms can be constructed similarly to the scalar case. However, with a multicomponent field, the details of the tensor algebra are more complicated. First we note that linearized General Relativity can be derived from the action \begin{equation} I_{GR} = -\frac {1}{4\kappa} \int d^4x \, h_{\mu\nu} G^{\mu\nu}, \label{GR} \end{equation} where the Einstein tensor is expressed in linearized form with terms of order $h^2$ and higher discarded. Note that an action quadratic in $h_{\mu\nu}$ yields field equations linear in $h_{\mu\nu}$. We now explain in some detail the construction outlined in Ref.\ \cite{km16}. The starting point for an action that generalizes \rf{GR} is \begin{equation} I =\frac {1}{8\kappa} \int d^4x \, h_{\mu\nu} \hat{K}^{(d)\mu\nu\rho\sigma} h_{\rho\sigma}, \label{gravaction} \end{equation} where $\hat{K}^{(d)\mu\nu\rho\sigma}$ is an operator given by \begin{equation} \hat{K}^{(d)\mu\nu\rho\sigma} = K^{(d)\mu\nu\rho\sigma \epsilon_1 ...\epsilon_{d-2}}\partial_{\epsilon_1}... \partial_{\epsilon_{d-2}}. \label{operators} \end{equation} The operator contains partial derivatives that act on the gravitational field fluctuations $h_{\mu\nu}$; the $K^{(d)\mu\nu\rho\sigma \epsilon_1 ...\epsilon_{d-2}}$ are a set of constants in the chosen coordinates. The mass dimension label $d$ refers to the natural units of mass that each term has. At this stage, the nature of these constants is unknown, and in what follows we explain the conditions applied to constrain them. One derives the field equations via variation of the action with respect to the fields, similar to the scalar example above. Varying the action \rf{gravaction} with respect to the metric fluctuations $h_{\mu\nu}$ yields \begin{eqnarray} \delta I &=& \frac {1}{8\kappa} \int d^4x \, [ \delta h_{\mu\nu}\, K^{(d)\mu\nu\rho\sigma\epsilon_1 ...\epsilon_{d-2}} \partial_{\epsilon_1}...\partial_{\epsilon_{d-2}} \,\, h_{\rho\sigma} \nonumber\\ && \pt{\frac 18 \int d^4 x} + h_{\mu\nu}\, K^{(d)\mu\nu\rho\sigma\epsilon_1 ...\epsilon_{d-2}} \partial_{\epsilon_1}...\partial_{\epsilon_{d-2}} \, \delta h_{\rho\sigma} ]. \label{var1} \end{eqnarray} In order to completely factor out the variation of the metric field $\delta h_{\mu\nu}$, integration by parts is performed on the second term. (Note that in doing the integration by parts, we discard surface terms with derivatives of the fluctuations, which is a nontrivial step reflecting the fact that the action contains an arbitrary number of derivatives, going beyond the usual first-order derivative form of conventional dynamics.) When $d$ is even, the integration by parts is done an even number of times, creating an overall positive sign for the term; if $d$ is odd, the overall sign is negative. We can represent this with $(-1)^d$, and then obtain \begin{equation} \delta I = \frac {1}{8\kappa} \int d^4x \delta h_{\alpha\beta} \left[K^{(d)(\alpha \beta)(\mu\nu)\epsilon_1...\epsilon_{d-2}}+(-1)^d K^{(d)(\mu\nu)(\alpha\beta) \epsilon_1...\epsilon_{d-2}}\right]\partial_{\epsilon_1}...\partial_{\epsilon_{d-2}}h_{\mu\nu}. \label{vary2} \end{equation} Since $h_{\mu\nu}$ is symmetric, we can indicate the symmetry with parentheses in $\hat{K}^{(d)(\mu\nu)(\alpha\beta)}$. There are two considerations in \rf{vary2} to investigate, the first being that only terms contributing to the field equations should survive. Thus we must have \begin{equation} K^{(d)(\alpha\beta)(\mu\nu)\epsilon_1...\epsilon_{d-2}}+(-1)^d \,K^{(d)(\mu\nu)(\alpha\beta)\epsilon_1...\epsilon_{d-2}}\neq 0. \label{cond1} \end{equation} The second consideration is the imposition of the linearized gauge symmetry, i.e., $h_{\mu\nu}\rightarrow h_{\mu\nu}-\partial_{\mu}\xi_{\nu}-\partial_{\nu}\xi_{\mu}$, where $\xi^\mu$ is an arbitrary vector.\footnote{General gauge-breaking terms are considered in Ref.\ \cite{km18}.} If we apply this transformation on the metric within the action \rf{gravaction}, i.e., $\delta_{\xi}h_{\mu\nu}=-\partial_{\mu}\xi_{\nu}-\partial_{\nu}\xi_{\mu}$, we obtain, from \rf{vary2}, \begin{eqnarray} \delta_\xi I &=& \frac {1}{8\kappa} \int d^4x \, \partial_\alpha \xi_\beta \left[(-1)^d \hat{K}^{(d)(\mu\nu)(\alpha\beta)}+\hat{K}^{(d)(\alpha \beta)(\mu\nu)} \right] h_{\mu\nu}, \nonumber\\ &=& -\frac {1}{8\kappa} \int d^4x \, \xi_{\nu} \left[(-1)^d \hat{K}^{(d)(\rho\sigma)(\mu\nu)}+\hat{K}^{(d)(\mu\nu)(\rho\sigma)} \right]\,\partial_{\mu}h_{\rho\sigma}. \label{diffvary} \end{eqnarray} Since $\xi_{\mu}$ is arbitrary and derivatives of $h_{\rho\sigma}$ are not necessarily zero, the second condition becomes \begin{equation} \left[ (-1)^d \hat{K}^{(d)(\rho\sigma)(\mu\nu)}+\hat{K}^{(d)(\mu\nu)(\rho\sigma)} \right]\partial_{\mu}=0. \label{cond2} \end{equation} Under these two conditions \rf{cond1} and \rf{cond2}, there are three categories of coefficients.
These categories are based in part on discrete spacetime symmetry properties of the terms in the action: their behavior under CPT transformations, for which they can be even or odd. The possible tensor index symmetries also categorize these coefficients \cite{km16}. The three types of $\hat{K}^{(d)\mu\nu\rho\sigma}$ ``hat'' operators are written as \begin{eqnarray} \hat{s}^{\mu\rho\nu\sigma}&=&s^{(d)\mu\rho\epsilon_1\nu\sigma\epsilon_2...\epsilon_{d-2}}\partial_{\epsilon_1}...\partial_{\epsilon_{d-2}}, \nonumber\\ \hat{q}^{\mu\rho\nu\sigma}&=&q^{(d)\mu\rho\epsilon_1\nu\epsilon_2\sigma\epsilon_3...\epsilon_{d-2}}\partial_{\epsilon_1}...\partial_{\epsilon_{d-2}}, \nonumber\\ \hat{k}^{\mu\nu\rho\sigma}&=&k^{(d)\mu\epsilon_1\nu\epsilon_2\rho\epsilon_3\sigma\epsilon_4...\epsilon_{d-2}}\partial_{\epsilon_1}...\partial_{\epsilon_{d-2}}. \label{sqk} \end{eqnarray} The $\hat{s}$ operators have even CPT and mass dimension $d \geq 4$; $\hat{q}$ operators have odd CPT and mass dimension $d\geq 5$; $\hat{k}$ operators have even CPT and mass dimension $d\geq 6$. The process also reproduces the GR terms. The Lagrange density is then \begin{eqnarray} {\cal L} &=& \frac{1}{8\kappa} \epsilon^{\mu\rho\alpha\kappa}\epsilon^{\nu\sigma\beta\lambda}\eta_{\kappa\lambda}h_{\mu\nu}\partial_{\alpha}\partial_{\beta}h_{\rho\sigma} \nonumber\\ &&+\frac{1}{8\kappa} h_{\mu\nu}(\hat{s}^{\mu\rho\nu\sigma}+\hat{q}^{\mu\rho\nu\sigma} +\hat{k}^{\mu\rho\nu\sigma})h_{\rho\sigma}, \label{gravlag} \end{eqnarray} where the first term is an equivalent way of writing the standard GR Lagrange density using the totally antisymmetric Levi-Civita tensor density $\epsilon^{\mu\rho\alpha\kappa}$ (equivalent to \rf{GR}). It should be remarked at this point that the Lagrange density in \rf{gravlag} is the most general one constructed purely from the metric fluctuations $h_{\mu\nu}$ and taken to quadratic order only. While it includes only constant coefficients in \rf{sqk}, it maintains linearized gauge symmetry. Terms in this Lagrange density can arise in spontaneous-symmetry breaking models, when the additional fluctuations (including possible Nambu-Goldstone and massive modes around the vacuum values) have been ``integrated out'' or ``decoupled'' \cite{bk06,abk10,seifert09,seifert18}.\footnote{Discussion of the SME framework including the fluctuations more generally can be found in Refs. \cite{kl21,b21}.} On the other hand, examples exist where the quadratic-order Lagrange density in \rf{gravlag} can arise from models with explicit symmetry breaking. In either scenario, one is then left with an ``effective'' Lagrange density, quadratic in the metric fluctuations around a flat background, in which the extra fluctuations do not appear. Proceeding, the resulting vacuum field equations from \rf{gravlag} are \begin{equation} 0 = \, G^{\mu\nu} \, -[\frac{1}{4}(\hat{s}^{\mu\rho\nu\sigma}+\hat{s}^{\mu\sigma\nu\rho}) +\frac{1}{2}\hat{k}^{\mu\nu\rho\sigma} +\frac{1}{8}(\hat{q}^{\mu\rho\nu\sigma} +\hat{q}^{\nu\rho\mu\sigma}+\hat{q}^{\mu\sigma\nu\rho}+\hat{q}^{\nu\sigma\mu\rho})]\, h_{\rho\sigma}. \label{eom1} \end{equation} In the absence of Lorentz violation, the field equations \rf{eom1} reduce to $G^{\mu\nu}=0$. In the Lorentz gauge, this reduces to $\Box \bar{h}^{\mu\nu} =0$, where $\bar{h}^{\mu\nu} = h^{\mu\nu} - (1/2) \eta^{\mu\nu} h^{\alpha}{}_{\alpha}$ and $\partial_\mu \bar{h}^{\mu\nu}=0$. For plane wave solutions $\bar{h}_{\mu\nu} = A_{\mu\nu} e^{-i p_\alpha x^\alpha}$, this yields $p^2 = p^\alpha p_\alpha=0$.
This provides the dispersion relation for GR, \begin{equation} \omega = |\vec p|, \label{dispGR} \end{equation} the momentum-space equation of motion describing the propagation of GWs. Using the residual gauge freedom in this limit, the number of independent components of the plane wave solutions can be reduced to $2$, and the solutions take the form of \rf{TT} in the transverse-traceless gauge. To find the dispersion relation for the modified equations \rf{eom1}, one again assumes the plane wave form above. There are then at least two approaches to solving the resulting equations, where the components of $h_{\mu\nu}$ appear highly coupled with one another due to the extra symmetry-breaking terms in \rf{eom1}. The equations \rf{eom1} retain the usual gauge freedom, and so one can proceed by choosing a gauge condition and then decomposing the resulting equations into time and space components. For example, using a temporal-type gauge $h_{0\mu}=0$ and a helicity basis for the spatial components, one can show that to first order in the coefficients for Lorentz violation, still only $2$ degrees of freedom remain \cite{Mewes:2019}. Alternatively, a gauge-independent method for deriving the dispersion relation that uses differential forms exists \cite{km09}. Despite the fact that only two physical propagating degrees of freedom remain in the leading-order Lorentz violation case, the two modes generally travel at different speeds in the vacuum, resulting in birefringence, and the modes are generally dispersive. (Note that in contrast, for the scalar field example in \eqref{disp3}, there is no birefringence effect because there is only one scalar mode whose propagation is modified.) With a helicity basis choice of spatial coordinates, the two propagating modes can be shown to lie in the $+2$ and $-2$ helicity projections of the spatial components of the metric fluctuations $h^{ij}$. The modified dispersion relation can be written as \begin{equation} \omega = |\vec p| \, \left( 1-\zeta^0 \pm |\vec{\zeta}| \right), \label{dispEq} \end{equation} where \begin{equation} |\vec{\zeta}|=\sqrt{(\zeta^1)^2 + (\zeta^2)^2 +(\zeta^3)^2} \end{equation} and \begin{eqnarray} \zeta^0 &=& \frac{1}{4 |{\vec p}|^2} \left(-\hat{s}^{\mu\nu}\,_{\mu\nu}+\frac{1}{2}\hat{k}^{\mu\nu}\,_{\mu\nu}\right), \nonumber\\ (\zeta^1)^2+(\zeta^2)^2 &=& \frac{1}{8 |{\vec p}|^4} \left(\hat{k}^{\mu\nu\rho\sigma} \hat{k}_{\mu\nu\rho\sigma}-\hat{k}^{\mu\rho}\,_{\nu\rho}\,\hat{k}_{\mu\sigma}\,^{\nu\sigma}+ \frac{1}{8}\hat{k}^{\mu\nu}\,_{\mu\nu}\,\hat{k}^{\rho\sigma}\,_{\rho\sigma} \right), \nonumber\\ (\zeta^3)^2 &=&\frac{1}{16|{\vec p}|^4} \left(-\frac{1}{2}\hat{q}^{\mu\rho\nu\sigma}\,\hat{q}_{\mu\rho\nu\sigma} -\hat{q}^{\mu\nu\rho\sigma}\,\hat{q}_{\mu\nu\rho\sigma} +(\hat{q}^{\mu\rho\nu} \,_{\rho} +\hat{q}^{\nu\rho\mu}\,_{\rho})\hat{q}_{\mu\sigma\nu}\,^{\sigma} \right). \label{zetas} \end{eqnarray} All of the derivative factors $\partial_\mu$ from \rf{sqk} are replaced with momenta $\partial_\mu \rightarrow i p_\mu$. The plus and minus signs indicate the different dispersion relations for each propagating mode in vacuum (birefringence). Note that the dispersion and birefringence effects depend on the arrival direction of the plane wave $\hat p$, revealing this to be a fundamentally anisotropic effect that differs from kinematical isotropic descriptions of symmetry breaking \cite{myw12}.
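To make the structure of \rf{dispEq} concrete, consider the simple limiting case (our own illustration) in which only $\zeta^3$ is nonzero. The two helicity modes then travel with speeds
\begin{equation}
v_{\pm} = \frac{\omega}{|\vec p|} = 1 \pm \zeta^3 ,
\end{equation}
so that two modes emitted simultaneously by a source at a distance $L$ arrive with a time difference
\begin{equation}
\Delta t = \frac{L}{v_-} - \frac{L}{v_+} \approx 2 L \zeta^3
\end{equation}
to first order in the coefficients (in natural units, and ignoring the cosmological corrections treated below).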
\subsubsection{Gravitational wave signals} \label{signals} Since the terms involving the coefficients in \rf{dispEq} are already at leading order, they can be evaluated with the zeroth-order solution (e.g., $p^\mu = \omega (1, \hat p) = |\vec p| (1, \hat p)$). This reveals that any effects associated with arriving plane waves should depend on angular functions of the unit vector $\hat p$. Further, since LIGO-Virgo analyses use angular sky map coordinate systems, it is advantageous to use the machinery of spherical harmonics and spherical tensors. We can decompose the above coefficients into spherical harmonic form, \begin{eqnarray} \zeta^0 &=& \sum\limits_{djm} \omega^{d-4} \, Y_{jm}(\hat{\textbf{n}})\,k^{(d)}_{(I)jm}, \label{spherical1}\\ \zeta^1 \mp i\,\zeta^2 &=& \sum\limits_{djm} \omega^{d-4}\, _{\pm 4}Y_{jm}(\hat{\textbf{n}})\left(k^{(d)}_{(E)jm}\pm i k^{(d)}_{(B)jm} \right), \label{spherical2} \\ \zeta^3 &=& \sum\limits_{djm} \omega^{d-4} \, Y_{jm}(\hat{\textbf{n}})\,k^{(d)}_{(V)jm}. \label{spherical3} \end{eqnarray} In these expressions, $Y_{jm} (\hat{\textbf{n}}) $ are the usual spherical harmonics with ${\hat n}=-{\hat p}$, while $_{\pm 4}Y_{jm}(\hat{\textbf{n}})$ are spin-weighted spherical harmonics. The coefficients, formerly in Cartesian tensor form in \rf{dispEq}, are expressed in spherical form $k^{(d)}_{(I)jm}$, $k^{(d)}_{(E)jm}$, $k^{(d)}_{(B)jm}$, and $k^{(d)}_{(V)jm}$, where $j =0,1,...,d-2$ and $-j \leq m \leq j$. The meaning of the subscripts $I,E,B,V$ and the relation between the two forms of the coefficients is determined by whether the terms are CPT odd or even and which mass dimensions they encompass, as detailed in Refs.\ \cite{km09,km16,km17}. In GR, there is no difference in speed between the gravitational wave polarizations; both travel at the speed of light (i.e., $v=\omega / |\vec p| =1$). In the case of Lorentz violation of the form in \rf{dispEq}, the speed of the waves is given by \begin{equation} v= 1-\zeta^0 \pm |\vec{\zeta}|. \label{LVspeed} \end{equation} Given enough propagation distance from source to detector, a difference in arrival times may be detectable even for small Lorentz violation, a feature that has been used for photon tests of Lorentz invariance \cite{Kostelecky:2001mb,Kostelecky:2006ta,Kostelecky:2007zz,Kostelecky:2008be,Kostelecky:2013rv,Kislat:2017kbh,Friedman:2020bxa}. Using LVC data, we can test for these effects by looking for a phase deviation from GR via polarization comparisons. If Lorentz violation effects are not resolvable given current precision, we can then provide constraints on the coefficients for Lorentz violation. Modifications in the analysis code use the expressions for the gravitational wave strain polarizations. The plane wave solutions will have a phase shift $\delta \psi_{\pm}$ due to the terms in \rf{LVspeed} or \rf{dispEq}. Consider first the strain \begin{equation} h \sim e^{-i (\omega t -kl)} \end{equation} where $l$ is the distance travelled and $k$ is the wave number. The difference in phase grows in magnitude the farther the gravitational wave travels from the source to the detectors. On cosmological scales, it is important to include effects on propagation time from the expanding universe using luminosity distance.
Noting $k \sim |\vec{p}| = \omega / v$, inputting \rf{LVspeed}, and including distance and frequency alterations from cosmology, one finds the phase shift expression \begin{equation} \delta \psi_{\pm}=\omega_{obs}\int^z_0\,dz' \frac{(-\zeta^0\pm |\vec{\zeta}|)}{H(z')}, \end{equation} where $H(z)$ is the Hubble parameter with redshift $z$ and the observed frequency is related to that emitted via $\omega_{obs}(1+z)=\omega_{emit}$. For each mode, the modified phase shift can be written as \begin{equation} \delta \psi_{\pm} =- \delta \pm \beta, \label{delpsi} \end{equation} where \begin{align} \delta &= \omega^{d-3} \tau \zeta^{(d)0}, \nonumber\\ \beta &= \omega^{d-3} \tau |\vec{\zeta}^{(d)}|,\nonumber\\ \tau &= \int_0^z dz' \frac{(1+z')^{d-4}}{H(z')} \label{quant1} \end{align} and $ |\vec{\zeta}| = \omega^{d-4} |\vec{\zeta}^{(d)}|$ and $ \zeta^{0}= \omega^{d-4}\zeta^{(d)0}$. Here $\tau$ is the effective propagation time due to the cosmological redshift $z$. It is useful to rewrite the coefficients in terms of effective angles $\vartheta$ and $\varphi$ defined by \begin{equation} \sin\,\vartheta=\frac{|\zeta^1\mp i\zeta^2|}{|\vec{\zeta}|}, \pt{30} \cos\vartheta =\frac{\zeta^3}{|\vec{\zeta}|}, \pt{30} e^{\mp i \varphi}=\frac{\zeta^1\mp i \zeta^2}{\sqrt{(\zeta^1)^2+(\zeta^2)^2}}. \end{equation} Note that these angles are not the sky location angles $\theta$ and $\phi$. Using the plus and cross polarizations \rf{TT}, the modified gravitational wave solutions can be written in terms of the Lorentz-invariant solutions as \begin{eqnarray} h_{(+)} &=& e^{i\delta} (\cos \beta - i \sin \vartheta \cos \varphi \sin \beta )\, h^{LI}_{(+)} \nonumber\\ && - e^{i\delta}\sin \beta (\cos \vartheta + i \sin \vartheta \sin \varphi ) \, h^{LI}_{(\times)} \nonumber\\ h_{(\times)} &=& e^{i\delta} (\cos \beta +i \sin \vartheta \cos \varphi \sin \beta )\, h^{LI}_{(\times)} \nonumber\\ && + e^{i\delta}\sin \beta(\cos \vartheta - i \sin \vartheta \sin \varphi ) \, h^{LI}_{(+)}. \label{plcr} \end{eqnarray} The $h^{LI}_{(+,\times)}$ are the Lorentz-invariant gravitational waveforms of standard GR; one retrieves GR as the limiting case $\beta \rightarrow 0$ and $\delta \rightarrow 0$. The measured signal at a given detector can be obtained from an equation of the form \rf{antenna}. It is standard in the literature to adopt a Sun-centered Celestial-Equatorial coordinate system (or SCF frame) for reporting measurements of the components of the coefficients for Lorentz violation, either in the form $s^{TXY...},...$ or in spherical tensor form $k^{(d)}_{(I)10},...$ \cite{datatables}. Under observer coordinate transformations, the coefficients transform as tensors. In many cases, these transformations can be implemented as global Lorentz transformations on the coefficients. In the present case, we want to ensure the coefficients in the expression for the measured strain are all expressed in terms of the SCF coefficients, thereby leaving any sky location dependence in the relevant angular variables. Thus, when analyzing data, the signal generically will have extra isotropy-breaking dependence on the sky angles. This differs significantly from the GR case.
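Equation \rf{plcr} is the expression implemented in the analysis code described below. For orientation, a minimal sketch of this polarization mixing (our own illustrative helper, not the \texttt{LALSuite} API) is:

\begin{verbatim}
# Apply the phase shift and mixing of Eq. (plcr) to frequency-domain
# GR polarizations hp, hc, given delta, beta and the effective angles.
import numpy as np

def modified_polarizations(hp, hc, delta, beta, vartheta, varphi):
    ph = np.exp(1j * delta)
    cb, sb = np.cos(beta), np.sin(beta)
    st, ct = np.sin(vartheta), np.cos(vartheta)
    hp_lv = ph * ((cb - 1j * st * np.cos(varphi) * sb) * hp
                  - sb * (ct + 1j * st * np.sin(varphi)) * hc)
    hc_lv = ph * ((cb + 1j * st * np.cos(varphi) * sb) * hc
                  + sb * (ct - 1j * st * np.sin(varphi)) * hp)
    return hp_lv, hc_lv
\end{verbatim}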
\subsection{Unit changes and dimension} \label{units} For applications below, it becomes essential to convert from natural units to SI units when implementing modifications into analysis code. We note here several useful unit substitutions that can be used for this, along with various key equations discussed previously. Recall natural units are based on $\hbar = c =1$. In these units, quantities can have dimensions of energy, typically expressed in terms of electron volts, e.g., $\rm{GeV}=10^{9}$ $\rm{eV}$. For instance, mass dimension $d$ coefficients for Lorentz violation have units of $M^{4-d}$. To convert various quantities to SI units, we assume that the starting action has units of joule meters ($\rm{Jm}$).\footnote{Alternatively one can choose $\rm{Js}$ to match classical mechanics.} For instance, the full Einstein-Hilbert action in SI units can be written as \begin{equation} I_{EH} = \frac{c^4}{16 \pi G} \int d^4 x \sqrt{-g} R, \label{EH} \end{equation} or, for the quadratic action limit of equation \rf{GR}, one simply multiplies by $c^4$. Units of $\rm{kg\,m\,s^{-2}}$ come from the factor $\frac{c^4}{G}$, $\rm{m^4}$ from $d^4x$, and $\rm{m^{-2}}$ from the derivatives contained within the Einstein tensor. Implicit here is the assumption that the metric tensor $g_{\mu\nu}$ is dimensionless (the Minkowski metric retains its form $\eta_{\mu\nu}={\rm diag} (-1,1,1,1)$). Likewise, the Lorentz-violating action \eqref{gravaction} contains operators with SI units $\rm{m^{-2}}$, and thus from \eqref{operators}, when introducing higher derivatives, the units of the coefficients compensate; the coefficients have units $\rm{m^{d-4}}$. When converting the field equations \eqref{eom1} from position to momentum space, every partial derivative contributes a factor of Planck's constant, i.e., $\partial_\alpha \rightarrow \frac{i}{\hbar}p_\alpha$. Schematically, the position-space equation has the form \begin{equation} \partial \partial\, h\,\, + s^{(4)}\,\partial\partial\,h\,\, +q^{(5)}\,\partial \partial \partial\,h \,\,+\,...=0, \label{SIposition} \end{equation} where, e.g., for $d=4$ a term involving the $\hat{s}$ operators contains coefficients $s^{(4)}$ coupled to two derivatives that act on $h$. In momentum space, \begin{equation} (\frac{i}{\hbar})^2 pp\, h\,\,+ (\frac{i}{\hbar})^2s^{(4)}\,pp\,h\,\, + (\frac{i}{\hbar})^3q^{(5)}\,ppp\,h \,\,+ \,...=0, \label{SImomentum} \end{equation} where the operators $\hat{s}$, $\hat{q}$, and $\hat{k}$ now contain $(\frac{i}{\hbar})^{(d-2)}\,p_{\alpha_1}...p_{\alpha_{d-2}}$ in place of partials. The units for the coefficients are unchanged, i.e., $\rm{m^{d-4}}$. One must also keep track of the corrected time-component factors in the four-momenta, $p_{\alpha}=(-\frac{\hbar}{c}\omega,\, \vec{p})$. For instance, the wave speed, via the dispersion relation, becomes \begin{equation} v_{\pm}=\hbar\,\omega/|\vec{p}|=c\left(1+c^2(-\zeta^0 \pm |\vec{\zeta}|\,\,)\right) \label{vSI} \end{equation} where each $\zeta$ quantity in \rf{zetas} inherits a factor of $(\frac{\hbar}{c})^2$. To ensure the coefficients $k^{(d)}_{(I)jm}$, $k^{(d)}_{(E)jm}$, $k^{(d)}_{(B)jm}$, and $k^{(d)}_{(V)jm}$ have SI units of $\rm{m^{d-4}}$, we redefine the equations \rf{spherical1}-\rf{spherical3} by implementing a factor of $c^{(2-d)}$, e.g., \begin{equation} \zeta^0= c^{(2-d)}\sum\limits_{djm} \omega^{d-4} \, Y_{jm}(\hat{\textbf{n}})\,k^{(d)}_{(I)jm}, \label{zetaSI} \end{equation} with similar SI factors for $\zeta^1, \zeta^2, \zeta^3$.
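As a quick numerical illustration (our addition), a $d=5$ coefficient quoted in natural units of $\rm{GeV^{-1}}$ converts to the SI units of $\rm{m}$ used here through a factor of $\hbar c$:

\begin{verbatim}
# Convert a d=5 coefficient from natural units (GeV^-1) to meters:
# in SI, k^(5) carries units m^(d-4) = m, and 1 GeV^-1 corresponds to
# hbar*c / GeV ~ 1.97e-16 m.
HBARC_GEV_M = 1.973269804e-16  # hbar*c in GeV*m

def k5_natural_to_si(k_gev_inv):
    return k_gev_inv * HBARC_GEV_M  # coefficient in meters
\end{verbatim}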
\section{Analysis method} \label{lalsuite} The coefficients for Lorentz and CPT violation can be measured from the comparison of the speed of gravitational and electromagnetic waves, an analysis that has been performed with the gravitational-wave event GW170817 and the associated counterpart gamma-ray burst (GRB) GRB170817A to constrain coefficients of mass dimension 4 with improved accuracy~\cite{Abbott_2017}. Using GW signals only, limits on mass dimension 5 and 6 coefficients have been obtained from the non-observation of a delay between the arrival times of the $h_{+}$ and $h_{\times}$ polarizations in the LIGO and Virgo interferometers~\cite{km16,shao20,wang21}. The constraints on the birefringence parameters are obtained from posterior samples inferred under the assumption of no symmetry breaking, and are limited by the detector resolution in determining the waveform peak frequency, focusing on information from signals at higher frequencies. We aim to complement prior work by analysing the LVC interferometer strain data directly, in order to bypass the reliance on posterior parameters inferred under a GR model. Our analysis therefore fully takes into account the correlation between the SME coefficients and the source parameters, including dispersion or birefringence effects, during the inference process. We have implemented the modification of the GW strain obtained in \rf{plcr} to estimate the coefficients for symmetry breaking from the morphology of the signals. As the dispersive and birefringent effects are degenerate with the source properties (e.g., the luminosity distance, due to the additional energy loss during the propagation), we perform a joint estimation of the source parameters and the coefficients for Lorentz and CPT violation, taking into account the modifications at all frequencies of the waveform. We implement the Bayesian analysis into a version of the LIGO Algorithm Library suite \texttt{LALSuite} modified for our purposes as described below \cite{lalsuite}. \subsection{Implementation of the modified waveform} \label{sec:lalsim} The joint measurement of source and beyond-GR parameters has been performed for a variety of new-physics parameterizations, including modifications of the GW generation and propagation~\cite{Abbott:2020jks}. Following a similar methodology, we implement the modifications of the GW signals derived from the SME framework in the GW simulation package of \texttt{LALSuite}. Such deformations can be anisotropic, as can be inferred from the appearance of ${\hat n}$ in Eq.\ \rf{plcr} via $\beta$ and $\delta$. Here we focus on the simplest coefficients that produce dispersion and birefringence via Lorentz- and CPT-violating effects, i.e., those of mass dimension 5. These coefficients are contained within $\beta$ in \rf{plcr} and obey the complex conjugate relation $k^{(d)*}_{jm}=(-1)^m k_{j(-m)}^{(d)}$, for $j=0,1,2,3$, $-j\leq m \leq j$. There are 16 {\it a priori} independent coefficients in this set of terms \cite{Mewes:2019}. We display the first terms within $\beta$ in SI units: \begin{equation} \beta^{(5)}=\frac{\omega^2\tau^{(5)}}{2\sqrt{\pi}c} \, \bigl\lvert k^{(5)}_{(V)00}- \sqrt{\frac{3}{2}}\sin \theta \left(e^{i\phi} \,k^{(5)}_{(V)11}+e^{-i\phi} \,k^{(5)*}_{(V)11}\right)+ \sqrt{3}\cos \theta\, k^{(5)}_{(V)10}+... \bigr\rvert, \label{eq:beta_kv5} \end{equation} where the sky location of the source ($\theta$, $\phi$) appears. The coefficients are taken as expressed in the SCF in this expression.
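A small numerical sketch of the leading ($j\leq 1$) terms of \rf{eq:beta_kv5} follows; the function is our own illustrative helper, not part of \texttt{LALSuite}:

\begin{verbatim}
# Evaluate the j <= 1 terms of Eq. (eq:beta_kv5); kv11 is complex and
# kv00, kv10 are real; omega in rad/s, tau5 the effective propagation time.
import numpy as np

C = 299792458.0  # speed of light in m/s

def beta5(omega, tau5, kv00, kv10, kv11, theta, phi):
    # e^{i phi} k11 + e^{-i phi} k11* = 2 Re(e^{i phi} k11)
    angular = (kv00
               - np.sqrt(1.5) * np.sin(theta) * 2.0
                 * np.real(np.exp(1j * phi) * kv11)
               + np.sqrt(3.0) * np.cos(theta) * kv10)
    return omega**2 * tau5 / (2.0 * np.sqrt(np.pi) * C) * np.abs(angular)
\end{verbatim}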
The general form of the signal observed in the interferometer is \begin{equation} S_A = F_{(+)} h_{(+)}+F_{(\times)}h_{(\times)},\label{strainang1} \end{equation} where $h_{(+,\times)}$ are the expressions \rf{plcr}, and $F_{(+,\times)}$ are the (standard) detector response functions. The expressions for $F_{(+,\times)}$ include the rotation angles relating different frames, e.g., the merger frame and the detector frame as defined in the \texttt{LALSuite} software. The effective propagation time $\tau$ appearing in equation \rf{eq:beta_kv5} is defined in equation \rf{quant1} as an integral function of the redshift. Since it needs to be evaluated for every value of the SME coefficients being tested, for computational feasibility we instead probe the effective coefficient $(k^{(5)}_{(V)jm})_{eff} = \tau \ k^{(5)}_{(V)jm}$. The value of the SME coefficient is recovered after convergence of the inference process described further in the following subsection. Finally, we note that transformations of the coefficients under observer boosts are also computable. This would be important should it become necessary to include the motion of the Earth, the interferometers, or the motion of a source system's center of mass relative to the SCF. Currently, it appears the strain measurements are not sensitive to boosts at this nonrelativistic level (e.g., $v/c=10^{-4}$). \subsection{Bayesian analysis} After implementing the modification of the strain, we include the SME coefficients in \texttt{LALInference}, the parameter estimation package of \texttt{LALSuite}~\cite{Veitch:2014wba}. \texttt{LALInference} performs Bayesian inference of the posterior probability of the GW source parameters with the inclusion of the systematic uncertainties due to the detector resolutions. The vector set of GR parameters, $\vec{\theta}_{GR}$, includes intrinsic parameters describing the binary system (e.g., the black hole masses and spins) as well as extrinsic parameters placing it in the astrophysical environment (e.g., the sky location, distance, inclination). We add to the preexisting parameters the SME coefficients $(k^{(5)}_{(V)jm})_{eff}$ described in Section~\ref{sec:lalsim} for the mass dimension 5 case, contained within $\vec{\theta}_{SME}$. In order to include the correlation between the GR parameters and the SME coefficients, we perform a simultaneous inference of all the parameters, obtaining the joint posterior probability: \begin{eqnarray} P(\vec{\theta}_{GR},\vec{\theta}_{SME}|d,I) = \frac{ P(d|\vec{\theta}_{GR},\vec{\theta}_{SME},I)\,\,P(\vec{\theta}_{GR},\vec{\theta}_{SME}|I)}{P(d|I)}, \label{eq:bayes} \end{eqnarray} where $P(\vec{\theta}_{GR},\vec{\theta}_{SME}|d,I)$ is the posterior probability, $P(d|\vec{\theta}_{GR},\vec{\theta}_{SME},I)$ the likelihood, $P(\vec{\theta}_{GR},\vec{\theta}_{SME}|I)$ the prior probability, and $P(d|I)$ the evidence; any pertinent background information is included in $I$. We set a flat prior probability for $(k^{(5)}_{(V)jm})_{eff}$, bounded by $|(k^{(5)}_{(V)jm})_{eff}| \in [0, 10^{-10}]$, with maximal value well above the existing constraints on the order of $10^{-15}$~\cite{shao20}.
The likelihood is computed in the frequency domain: \begin{eqnarray} P(d|\vec{\theta}_{GR},\vec{\theta}_{SME},I) = \exp \sum_i \left[ - \frac{2 | \tilde{h}_i (\vec{\theta}_{GR},\vec{\theta}_{SME}) - \tilde{d}_i|^2 }{T S_n(f_i)} - \frac{1}{2} \log \left( \frac{\pi T S_n(f_i)}{2} \right) \right], \label{eq:llh} \end{eqnarray} where $\tilde{h}_i$ is the frequency-domain template signal, $\tilde{d}_i$ are the data observed by the interferometers, $T$ is the duration of the signal, and $S_n$ is the power spectral density of the detector noise. Due to the large number of parameters describing the GW emitted by the coalescence of binary systems, the posterior probability is inferred with Markov Chain (MC) methods. The chains perform semi-random walks in the parameter space, where the density of the recorded steps is proportional to the quantity in Eq.\ \rf{eq:bayes}. Different algorithms have been shown to be able to perform the parameter inference, of which Markov Chain Monte-Carlo (MCMC) with parallel tempering and nested sampling are implemented in the LVC algorithm library. The method returns joint posterior probabilities of the GR parameters and the SME coefficients. From this we extract the marginalised posterior probability on a subset of parameters by integrating over the distribution of the other variables. The credible intervals are finally obtained by summing the volume of the posterior probability corresponding to the desired fraction of confidence. We present the results of Bayesian inference on simulated signals in Section~\ref{simulation}, and will provide the results of the ongoing analysis of LVC detections in a separate publication.
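For reference, a compact sketch of the per-detector log-likelihood of Eq.\ \rf{eq:llh} (our own illustration, not the \texttt{LALInference} implementation) is:

\begin{verbatim}
# Frequency-domain Gaussian log-likelihood of Eq. (eq:llh) for one detector.
import numpy as np

def log_likelihood(h_tilde, d_tilde, psd, T):
    # h_tilde, d_tilde: complex template and data at frequencies f_i;
    # psd: one-sided noise power spectral density S_n(f_i); T: duration.
    resid = np.abs(h_tilde - d_tilde) ** 2
    return np.sum(-2.0 * resid / (T * psd)
                  - 0.5 * np.log(np.pi * T * psd / 2.0))
\end{verbatim}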
We simulate a GW emitted by a non-spinning binary system of black holes with equal masses of $50 M_\odot$ located at 5 Gpc, where the dispersion is controlled by one coefficient set to a value of $k^{(5)}_{(V)00} = 10^{-14}$. Figure \ref{fig:kv5_post} shows the posterior probability on the luminosity distance and the coefficient, where both are recovered around the simulated values. The $1 \sigma$ credible interval shows a constraint on $k^{(5)}_{(V)00}$ where the zero value is excluded, showing that the coefficient can be measured with a single event provided that it is relatively large. The $k^{(5)}_{(V)00}$ posterior probability density marginalised over the source and systematic uncertainties is shown in the violin plot. \begin{figure}[H] \centering \includegraphics[height=5cm]{figures/kv5_contour_violin.png} \caption{Posterior probability density on the $k^{(5)}_{(V)00}$ coefficient for a simulated coalescence of a non-spinning binary system of black holes with $m_1=m_2=50 M_\odot$ located at a luminosity distance of 5 Gpc. The left figure shows the 1$\sigma$ and 90\% credible intervals in the $D_L - k^{(5)}_{(V)00}$ plane; the right figure shows the posterior probability of $k^{(5)}_{(V)00}$ marginalised over the source and systematic uncertainty parameters. } \label{fig:kv5_post} \end{figure} These results, obtained with a single event, present encouraging prospects towards the measurement of the coefficients for symmetry-breaking with the current generation of GW interferometers. The second catalog of GW detections, encompassing the first two observing runs as well as the first half of the third observing run of the LVC, contains 50 events from the coalescence of binary systems of astrophysical compact objects, of which 46 are consistent with black hole systems~\cite{PhysRevX.11.021053}. Comparing our results with measurements of the mass of the graviton, which also induces a modified dispersion of the GW signal, we note that the constraint has been improved by one order of magnitude in going from a single event to the analysis of a larger population of GW detections. The constraint from GW150914 was $m_g \leq 1.2 \cdot 10^{-22} \textrm{ eV/c}^2$, while it is now $m_g \leq 1.76 \cdot 10^{-23} \textrm{ eV/c}^2$ when analysing 33 events from the second GW catalog~\cite{LIGOScientific:2016lio, Abbott:2020jks}. Based on such results, we can conjecture that the constraints on the SME coefficients from the full catalog of GW detections will provide more stringent measurements than the preliminary sensitivity study shown here. The robustness of such measurements with respect to the waveform modelling approximant has been explored in~\cite{LIGOScientific:2019fpa}, showing that those systematic uncertainties do not lead to a large bias in, nor a re-estimation of, the constraints at the current detector sensitivity. Other studies show that transient noise may impact the measurement~\cite{Kwok:2021zny} by mimicking a GR deviation, an effect that we mitigate by using the LVC-released power spectral densities and frequency ranges that exclude the presence of glitches in the strain data. \section{Conclusions and Future Work} \label{conclusion} We describe the implementation of an effective field theory framework for testing Lorentz and CPT symmetry into a version of the LIGO-Virgo Algorithm Library suite \texttt{LALSuite}. The Lorentz- and CPT-violating modifications include the coefficients controlling birefringence and dispersion effects on the gravitational wave polarizations.
This work does not rely on posterior results inferred by the LVC that assume no deviations from standard GR; we implement the modifications due to dispersion directly at the level of the templates used for the Bayesian inference of the GW source and propagation parameters, in order to incorporate the full information provided by the signal morphology. Initially, one starts with the action in the effective field theory framework that is quadratic in the metric fluctuations $h_{\mu\nu}$, \rf{gravaction}, and after imposing theoretical constraints including gauge invariance, we arrive at the general result in \rf{gravlag}. From the field equations \rf{eom1}, a dispersion relation is derived, both in terms of the coefficients from the Lagrange density \eqref{dispEq}, and in terms of spherical coefficients in a special observer frame \eqref{spherical1}-\eqref{spherical3}. The result shows birefringence and dispersion for the two propagating modes; moreover, these effects will vary with the sky location of the source. Then, considering the expression for the propagating modes and applying a modified phase shift, including cosmological considerations, one can rewrite the expressions for the plus and cross polarizations \eqref{plcr}, which are directly implemented within the modified \texttt{LALSuite} package. Through Bayesian inference, we can perform a parameter estimation to constrain the coefficients for Lorentz violation. Samples of visible effects are shown in the sensitivity plots in Section \ref{simulation}. The theoretical derivations and sensitivity studies presented in this article precede the measurement of SME coefficients with the events detected by the LVC. This computationally intensive analysis is currently ongoing and the results will be reported in a future publication, where we aim to complete our analysis for coefficients for Lorentz and CPT violation of mass dimension five and six, with a global analysis. In such a global analysis, the availability of what is now a plethora of GW sources across the sky has the potential to disentangle measurements for a large set of coefficients and thereby obtain an exhaustive search for signals of new physics. \authorcontributions{Conceptualization, Q.G.B., K.O-A., L.H.; methodology, K.O-A. and L.H.; software, K.O-A., L.H., T.D., and J.T.; validation, Q.G.B., K.O-A., L.H., T.D., and J.T.; formal analysis, Q.G.B., K.O-A., L.H., T.D., and J.T.; investigation, Q.G.B., K.O-A., L.H., T.D., and J.T.; resources, K.O-A., L.H., T.D., and J.T.; data curation, K.O-A., L.H., T.D., and J.T.; writing---original draft preparation, Q.G.B., K.O-A., L.H., and J.T.; writing---review and editing, Q.G.B., K.O-A., L.H., and J.T.; visualization, K.O-A., L.H., T.D. All authors have read and agreed to the published version of the manuscript.} \funding{This work was supported in part by the United States National Science Foundation (NSF) grants: Q.G.B.\ and K.O-A.\ are supported by grant number 1806871 and J.T.\ is supported by grant number 1806990. L.H.\ is supported by the Swiss National Science Foundation grant 199307. The authors would like to acknowledge the contribution of the COST Actions CA16104 and CA18108.
Computational resources were provided through the support of the NSF, STFC, INFN and CNRS, and LIGO Lab (CIT) supported by National Science Foundation Grants PHY-0757058 and PHY-0823459.} \institutionalreview{Not applicable.} \informedconsent{Not applicable.} \acknowledgments{ The authors gratefully acknowledge the support of the NSF for the construction and operation of the LIGO Laboratory and Advanced LIGO as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council, and further support from the Italian Istituto Nazionale di Fisica Nucleare (INFN), the French Centre National de la Recherche Scientifique (CNRS) and the Netherlands Organization for Scientific Research for the construction and operation of the Virgo detector and the creation and support of the EGO consortium. The authors also thank two anonymous referees and Javier M.\ Antelis for valuable critiques of the manuscript.} \conflictsofinterest{The authors declare no conflict of interest.} \end{paracol} \reftitle{References} \externalbibliography{yes}
\section{Introduction} Throughout the paper we consider a dataset or sample $S$ given as an $n \times d$ matrix, where each row pertains to an individual in the sample, and $d$ variables are measured for each of the $n$ participants. The sample $S$ is held by some agency and an analyst is interested in a vector function $f(S) =(f_1(S),\ldots,f_k(S)) \in \mathbb{R}^k$ of the data, to be called a query. Thus, a query consists of $k$ functions of the data to be posed to the agency by the analyst. We consider throughout the case $k>1$. We assume that, for privacy considerations, the agency releases the response to the query $f(S)$ with noise, using a standard Gaussian mechanism that adds independent $N(0,\sigma^2)$ noise to each coordinate of $f(S)$. The distribution of the added noise is always assumed to be known to the analyst, a standard assumption in the differential privacy literature. Two samples $S$ and $S'$ are said to be neighbors, denoted by $S \sim S'$, if they differ by a single individual, i.e., a single row. See, e.g., \cite{Dwork_2014_book} for all needed details on Differential Privacy (henceforth DP). When we consider $S$ and $S'$ together we always assume that they are neighbors. More generally, consider a noise mechanism applied to $S$ via a query $h(S) \in \mathbb{R}^k$ of the form ${\cal M}_h(S)= h(S)+U \in \mathbb{R}^k$, where $U$ is a random vector. A mechanism ${\cal M}_h$ is said to be DP$(\varepsilon,\delta)$ if for all (measurable) sets $E$ we have \begin{equation}\label{eq:DPDP} P({\cal M}_h(S) \in E) \le e^\varepsilon P({\cal M}_h(S') \in E)+\delta\end{equation} for all $S \sim S'\in {\mathfrak D}$, where the probability refers to the randomness of $U$, and ${\mathfrak D}$ is the universe of potential datasets. For example, if $S$ is a sample of a given size $n$ from some population, then ${\mathfrak D}$ is the universe of all samples that could have been drawn and considered for dissemination. The standard definition of DP takes ${\mathfrak D}$ to be a product $C^n$ where $C$ consists of all possible rows. Our results hold for any given $\varepsilon>0$ and $\delta \in (0,1)$, which we fix for the rest of this paper. Our goal is to describe some simple natural examples where posing a linear transformation of the query $f(S)$, getting the agency's response via a mechanism that guarantees DP, and inverting the response to obtain the required information on $f(S)$ yields better inference on $f(S)$ when $S$ is a given fixed dataset, and on the model that generates $f(S)$ when $S$ is a random sample. \textbf{Some related work}: The principle of modifying queries for better results is not new. The Matrix Mechanism (MM) is put forward in a line of work that started with \cite{OptHisQue}. Further literature includes \cite{edmonds2020power, MM, li2015matrix, HDMM} and numerous references therein. For given queries, MM linearly modifies the original data by applying a matrix that depends on the queries to be answered. The modified data is released with noise, and the answer to the original queries is computed. The above literature studies algorithms for finding optimal modifying matrices that minimize the distance between the original queries and the mechanism's output relative to different utility metrics. The difference between the above papers and ours is fourfold.
First, we provide simple explicit transformations for the situations we consider rather than a numerical optimization algorithm; second, we consider continuous data while MM is directed mostly toward frequency tables and counting queries. Applying MM to continuous data leads to large and sparse tables of counts (contingency tables) and high complexity of the algorithm; third, our transformations are aimed toward specific statistical goals, rather than standard norms (metrics); and fourth, we consider also random datasets where inference is on the data-generating process. For certain specific utility metrics and queries, optimal noise mechanisms have been found; see, e.g., \cite{ghosh2012universally, gupte2010universally}, but in general many researchers consider simple mechanisms with a well-known distribution (e.g., addition of Laplace or Gaussian iid noise) without considering optimality. In \cite{R2DP} a noise mechanism is proposed where the variance of the Laplace noise is random. Given a query and its sensitivity, an algorithm for optimal choice of the distribution of the Laplace variance from a certain class of distributions is provided. Optimality is with respect to the expected distance (metric) between the original query and the output of the mechanism. When a dataset is randomly generated by some assumed distribution, it is well known that the analyst has to adjust the statistical procedure to the distribution of the observed data, taking the distribution of the added noise into account; see, e.g., \cite{solea2014differentially, rogers2017new, wang2018statistical, canonne2019structure, gaboardi2016differentially} and references therein. Most of these results are asymptotic. \section{Fixed (non-random) datasets}\label{sec:fixedNR} Consider a dataset $S$ held by an agency and an analyst who poses a query $f(S)$ in terms of measurement units of his choosing. For example, the components of $f(S)$ could be average age, average years of schooling, and median income in the sample $S$. The observed response is given with noise through a privacy mechanism applied by the data-holding agency. The analyst's goals are to construct a confidence region for $f(S)$ and to test simple hypotheses about it. For any given level $\varepsilon,\delta$ of DP, we show that instead of posing the query $f(S)$, the analyst can obtain a smaller confidence region for $f(S)$ by computing it from a query of the form $f_\xi(S)= Diag(\xi)^{1/2}f(S)$ for a suitable $\xi \in \mathbb{R}_{\ge 0}^k$ (a vector having nonnegative coordinates), where $Diag(\xi)$ is a diagonal matrix whose diagonal elements form the vector $\xi$. For the goal of testing hypotheses, it turns out that a different choice of $\xi$ maximizes the power of the standard likelihood-ratio test. Thus, the analyst can achieve better inference by adjusting his queries to the planned statistical procedure. Consider a row $(x_1,\ldots,x_d)$ in the sample $S$. For simplicity we assume that $x_i \in C_i$ for $i=1,\ldots,d$ for suitable sets $C_i$. In this case each row is in the Cartesian product $C := {C}_1\times\ldots\times {C}_d$ and we set ${\mathfrak D}:=C^n$. We can also write ${\mathfrak D} \coloneqq {\mathcal C}_1\times\ldots\times {\mathcal C}_d$, where ${\mathcal C}_i$ is the set of $n$-vectors whose coordinates are all in $C_i$. We assume that the agency releases data under DP$(\varepsilon,\delta)$ relative to this universe ${\mathfrak D}$, which is known to both the agency and the analyst. 
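As a minimal sketch (ours; names are illustrative), the standard Gaussian mechanism used throughout can be written as:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def gaussian_mechanism(query_value, sigma):
    # Release the k-dimensional query f(S) with independent
    # N(0, sigma^2) noise added to each coordinate; sigma is
    # calibrated to the desired DP(epsilon, delta) level
    # (see Lemma le:BW below).
    query_value = np.asarray(query_value, dtype=float)
    return query_value + rng.normal(0.0, sigma, size=query_value.shape)
\end{verbatim}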
In Section \ref{sec:fixedNR} we assume that the components $f_i$ of the vector query $f= \left(f_1,\ldots,f_k\right)$ are functions of disjoint sets of columns of $S$. This assumption is not needed in Section \ref{sec:randomdata}. The quantity $$\Delta(f):=\operatorname*{max}_{S\sim S' \in {\mathfrak D}}||f(S)-f(S')||,$$ where $|| \cdot ||$ denotes the $L_2$ norm, is known as the sensitivity of $f$; higher sensitivity requires more noise for DP. Under simple assumptions on the functions $f_i$, such as monotonicity, the agency can readily compute $\Delta(f)$, as well as \begin{equation}\label{eq:tildeS} (\widetilde{S},\widetilde{S}') := \operatorname*{argmax}_{S\sim S' \in {\mathfrak D}}||f(S)-f(S')||; \end{equation} see Lemma \ref{le:ontildeS}, where it is shown that the maximization can be done separately for each coordinate of $f$. In general, the maximum in \eqref{eq:tildeS} is not unique, in which case $\arg\max$ is a set of pairs. The agency plans to release a response to the query $f(S)$ via a standard Gaussian mechanism; that is, the response is given by $${\cal M}(S) = f(S)+U \text{ where } U \sim N(0, \sigma^2I)$$ and $I$ is the $k \times k$ identity matrix. The variance $\sigma^2$ is the minimal variance such that the mechanism satisfies DP$(\varepsilon,\delta)$ for given $\varepsilon,\delta$; it can be determined by Lemma \ref{le:BW} below, which appears in \cite{balle2018improving}. This variance depends on $\Delta(f)$; however, here $f$ is fixed and hence suppressed. Consider a family of queries adjusted by $Diag(\xi)$: $$f_\xi(S):=Diag(\xi)^{1/2}f(S)=\big(\xi_1^{1/2}f_{1}(S), \ldots,\xi_k^{1/2}f_{k}(S) \big).$$ In particular, for the vector $\xi$ whose components are all equal to one we have $f_{\xi=1}=f$. Given a query from this family, the agency returns a perturbed response using a Gaussian mechanism ${\cal M}_\xi$ by adding to $f_\xi$ a Gaussian vector $U \in\mathbb{R}^k$ where $U \sim N(0,\sigma^2 I)$, that is, $${\cal M}_\xi(S) = f_\xi(S) + U.$$ It is easy to see, directly or from Lemma \ref{le:BW}, that we can fix $\sigma^2$ and guarantee a given level of DP$(\varepsilon,\delta)$ by choosing $\xi$ appropriately. Hence fixing $\sigma^2$ does not result in loss of generality. This is explained immediately following Theorem \ref{th: CIfix}. \subsection{Confidence regions} \label{sec:cifix} The following discussion concerns the choice of $\xi \in \mathbb{R}^k_{>0}$ such that the standard confidence region $CR^t_\xi$ for $\mu^*:=f(S)$ given in formula \eqref{eq:conf_regi_ellips_adjusted} below, which is based on the observed ${\cal M}_\xi(S)$, has the smallest volume. It is easy to see that allowing the variance of $U$ in ${\cal M}_\xi$ to depend on $\xi$ does not lead to smaller volumes. The idea is simple: intuitively it appears efficient to add more noise to the more variable components of $f(S)$ rather than ``waste noise'' on components with low variability. Note that ``more variable'' depends on both the population being measured and the chosen units of measurement. Instead of asking the agency to adjust the noise to different components, we adjust the query, and thus the agency can use a standard Gaussian mechanism. This intuition, like the whole paper, is relevant only for $k>1$. The analyst observes ${\cal M}_\xi(S) = f_\xi(S) + U$, where \begin{equation*} {\cal M}_\xi(S) = \big(Diag(\xi)^{1/2}f(S) + U\big) \sim N(Diag(\xi)^{1/2}f(S), \sigma^2 I).
\end{equation*} Thus, \begin{equation*} Diag(\xi)^{-1/2}{\cal M}_\xi(S) = \big( f(S) + Diag(\xi)^{-1/2}U\big) \sim N(\mu^*, Diag(\xi)^{-1}\sigma^2), \end{equation*} where $\mu^*:=f(S)$. The standard confidence region for $\mu_x$ based on $X \sim N(\mu_x, \Sigma)$ is $\{\mu:(X-\mu)^T\Sigma^{-1}(X-\mu)\le t\}$; see, e.g., \cite{anderson1962introduction}, p. 79. Thus, the confidence region for $\mu^*=f(S)$ based on $Diag(\xi)^{-1/2}{\cal M}_\xi(S)$ becomes \begin{equation}\label{eq:conf_regi_ellips_adjusted} CR^t_\xi = \{ \mu \in \mathbb{R}^k: \big(Diag(\xi)^{-1/2}{\cal M}_\xi(S)-\mu \big)^T (Diag(\xi)\sigma^{-2}) \big(Diag(\xi)^{-1/2}{\cal M}_\xi(S)-\mu \big) \leq t \}. \end{equation} For any $\xi \in \mathbb{R}_{>0}^k$ and any $\mu^* \in \mathbb{R}^k$, the coverage probability $P(\mu^* \in CR^t_\xi)=P(Y\le t)$, where $Y \sim {\cal X}^2_k$ (the chi-square distribution with $k$ degrees of freedom), and thus all the regions $CR^t_\xi$ have the same confidence (coverage) level. We denote the volume by $Vol( CR^t_{\xi})$. For a discussion of the volume as a measure of utility of confidence regions see, e.g., \cite{efron2006minimum}. We now need the notation \begin{equation*}\label{eq:tilda}\psi \coloneqq f(\widetilde{S})-f(\widetilde{S}')=\big(f_1(\widetilde{S})-f_1(\widetilde{S}'),\ldots,f_k(\widetilde{S})-f_k(\widetilde{S}') \big), \end{equation*} where $(\widetilde{S},\widetilde{S}')$ is any pair in the set defined in \eqref{eq:tildeS}, and we assume that $\psi_i^2>0$ for all $i$. \begin{theorem} \leavevmode \label{th: CIfix} {\rm({\bf 1})} For any fixed $t$, all confidence regions $CR^t_\xi$ defined in Equation \eqref{eq:conf_regi_ellips_adjusted} have the same confidence level; that is, the probability $P(\mu^* \in CR^t_\xi)$ depends only on $t$ (and not on $\xi$). {\rm({\bf 2})} Set $$\Lambda(\xi)=\frac{\sqrt{\psi^T Diag(\xi) \psi}}{\sigma}.$$ If for two vectors $\xi_a$ and $\xi_b$ the mechanisms ${\cal M}_{\xi_a}$ and ${\cal M}_{\xi_b}$ have the same level of DP (that is, the same $\varepsilon$ and $\delta$) then $\Lambda(\xi_a)=\Lambda(\xi_b)$. {\rm({\bf 3})} The choice $\xi = \xi^* := c\,(1/\psi_1^2,\ldots,1/\psi_k^2)$ with $c=||\psi||^2/k$ minimizes $ Vol( CR^t_\xi)$ for any $t>0$ over all vectors $\xi \in \mathbb{R}_{>0}^k$ and associated mechanisms $\cal M_\xi$ having the same DP level. In particular, $$Vol( CR^t_{\xi^*}) \leq Vol( CR^t_{\xi=1}),$$ with strict inequality when $\max_i(\psi_i^2) \neq \min_i(\psi_i^2)$. The right-hand side of the inequality pertains to the query $f$. \end{theorem} Fix $\sigma^2$ to be the smallest variance such that the Gaussian mechanism ${\cal M}_{\xi=1}(S)$ guarantees $DP(\varepsilon, \delta)$ for the query $f$. Part (2) of Theorem \ref{th: CIfix} or of Theorem \ref{th:LRfix} below shows that for any ${\cal M}_\xi$ to have the same DP level as ${\cal M}_{\xi=1}$ we must have $\frac{\sqrt{\psi^T Diag(\xi) \psi}}{\sigma}=\frac{\sqrt{\psi^T \psi}}{\sigma}$. For the proof we need two lemmas, which are given first. \begin{lemma} \label{le:ontildeS} For any $\xi \in \mathbb{R}_{>0}^k$, \begin{equation*} \Delta(f_\xi) \equiv \operatorname*{max}_{S\sim S'\in {\mathfrak D}}||f_\xi(S)-f_\xi(S')|| = ||f_\xi(\widetilde{S})-f_\xi(\widetilde{S}')||, \end{equation*} where the pair $(\widetilde{S},\widetilde{S}')$ is defined in Equation \eqref{eq:tildeS}. \end{lemma} The proof is given in the Appendix. For an agency willing to release the query $f$, releasing $f_\xi$ under the mechanism $\cal M_\xi$ with the same DP level does not add any complications.
The agency needs to compute the sensitivity defined by $\Delta({f_\xi}) \equiv \operatorname*{max}_{S\sim S'\in {\mathfrak D}}||f_\xi(S)-f_\xi(S')||$. By Lemma \ref{le:ontildeS}, this amounts to computing $||f_\xi(\widetilde{S})-f_\xi(\widetilde{S}')||$ using the pair $(\widetilde{S},\widetilde{S}')$ from Equation \eqref{eq:tildeS}, which is needed to compute the sensitivity of $f$. In particular, $\Delta({f_\xi})= \sqrt{\psi^T Diag(\xi) \psi},$ and we shall see below that the quantity $\Delta({f_\xi})/{\sigma}$ is determined by the DP level. Given a DP$(\varepsilon,\delta)$ level, the agency guarantees it by choosing $\sigma$ using Lemma \ref{le:BW} below. The proof of Theorem \ref{th: CIfix} (and all other theorems) relies on the next lemma; it can be obtained readily from the results of \cite{balle2018improving}, which hold for any query $f$. \begin{lemma}\label{le:BW} Let ${\cal M}(S)=f(S)+U$ be a Gaussian mechanism with $U\sim N(0, \sigma^2I)$, and for given datasets $S$ and $S'$ set $D := D_{S,S'} = ||f(S)-f(S')|| $. {\rm({\bf 1})} If \begin{equation}\label{ana_gaussi_requir} \Phi \left(\frac{D}{2\sigma}-\frac{\varepsilon\sigma}{D}\right) - e^{\varepsilon}\Phi \left(-\frac{D}{2\sigma}-\frac{\varepsilon\sigma}{D}\right) \leq \delta, \end{equation} then for all $E \subseteq \mathbb{R}^k$, \begin{equation}\label{condition_rdp} \mathbb{P}({\cal M} (S) \in E)\le e^\varepsilon \mathbb{P}({\cal M}(S') \in E)+\delta. \end{equation} {\rm({\bf 2})} Setting $\widetilde D := \Delta(f) = ||f(\widetilde S)-f(\widetilde S')||$, with $(\widetilde S,\widetilde S')$ given in Equation \eqref{eq:tildeS}, Equation \eqref{ana_gaussi_requir} holds with $D$ replaced by $\widetilde D$ if and only if the inequality \eqref{condition_rdp} holds for all $S \sim S'$ and $E \subseteq \mathbb{R}^k$, that is, if and only if DP$(\varepsilon,\delta)$ holds. \end{lemma} Part (2) of Lemma \ref{le:BW} coincides with Theorem 8 of \cite{balle2018improving}, and the first part follows from their method of proof. \textit{Proof} of Theorem \ref{th: CIfix}. Part (1) follows from the fact mentioned above that all these regions have confidence level $P(Y\le t)$ where $Y\sim {\cal X}^2_k$. Part (2) is obtained by replacing $f$ of Part (2) of Lemma \ref{le:BW} by $f_\xi$; then $(\widetilde D/\sigma)$ becomes $\frac{\sqrt{\psi^T Diag(\xi) \psi}}{\sigma}$ and the result follows. To prove Part (3), note that the confidence region for the adjusted query given in Equation \eqref{eq:conf_regi_ellips_adjusted} is an ellipsoid whose volume is given by \begin{equation}\label{volume_adjusted_fixed} Vol(CR^t_\xi) = V_k\cdot (\sigma^2 t)^{k/2} \big({det[Diag(\xi)]}\big)^{-1/2}\,, \end{equation} where $V_k$ is the volume of the unit ball in $k$ dimensions. By Part (2), we have to minimize the volume as a function of $\xi$ subject to the constraint ${\psi^T Diag(\xi)\psi}= {\psi^T\psi}$, which we do by using Lagrange multipliers. See the Appendix for details. \qed Given a DP level, the volume is minimized by choosing $\xi_i$ proportional to $1/\psi^2_i$. Multiplying $\xi$ by a suitable constant guarantees the desired DP level. It is easy to compute the ratio of the volumes of the optimal region and the one based on the original query $f$: \begin{equation*}\label{improv_diff_vol} \frac{Vol(CR^t_{\xi^*})}{Vol(CR^t_{\xi=1})} = \left( \frac{\big(\prod_{i=1}^k \psi_i^2\big)^{1/k}}{\frac{1}{k}\sum_{i=1}^k \psi_i^2} \right)^{k/2}.
\end{equation*} Clearly the ratio is bounded by one, as can be seen from the arithmetic-geometric mean inequality. Also, if one of the coordinates $\psi_i$ tends to zero, so does the ratio, implying the possibility of a substantial reduction in the volume obtained by using the optimally adjusted query $f_{\xi^*}$. We remark that the ratio is decreasing in the partial order of majorization applied to $(\psi^2_1,\ldots,\psi^2_k)$; see \cite{MOA}. \subsection{Testing hypotheses: Likelihood-ratio test} As in Section \ref{sec:cifix}, consider a query $f(S) \in \mathbb{R}^k$, which is observed with noise via a Gaussian privacy mechanism. Now the analyst's goal is to test the simple hypotheses $ H_0: f(S)=0, \quad H_1: f(S)=\eta$. The null hypothesis is set at zero without loss of generality by a straightforward translation. For any $\xi \in \mathbb{R}^k_{\ge 0}$ (a vector with nonnegative components), let $f_\xi(S)=Diag(\xi)^{1/2}f(S)$ and let ${\cal M}_\xi(S)=f_\xi(S)+U$, where $U \sim N(0,\sigma^2 I)$ and $\sigma^2$ is the smallest variance such that the Gaussian mechanism ${\cal M}_{\xi=1}(S)$ guarantees $DP(\varepsilon, \delta)$ for the query $f$. Let $h_{\xi i}$ denote the density of ${\cal M}_\xi(S)$ under the hypothesis $H_i$, $i=0, 1$. The log-likelihood ratio based on the observed ${\cal M}_\xi(S)$, ${\log}\big\{\frac{h_{\xi 1}({\cal M}_\xi(S))}{h_{\xi 0}({\cal M}_\xi(S))}\big\}$, is proportional to $\frac{{\cal M}_\xi(S)^T Diag(\xi)^{1/2} \eta}{\sigma^2}$, which under $H_0$ has the $N(0, \frac{{\eta^{T}Diag(\xi)\eta}}{\sigma^2})$ distribution. The likelihood-ratio test (which by the Neyman--Pearson lemma has a well-known optimality property) rejects $H_0$ when the likelihood ratio is large. For a given significance level $\alpha$, the rejection region has the form \begin{equation} \label{eq:rejreg} R_\xi=\Big\{ {\cal M}_\xi(S): \frac{{\cal M}_\xi(S)^T Diag(\xi)^{1/2} \eta}{\sigma^2} > t\,\,\Big\}, \ \textnormal{where} \ t=\Phi^{-1}(1-\alpha) \frac{\sqrt{\eta^{T}Diag(\xi)\eta}}{\sigma}. \end{equation} Let $\pi(R_\xi):=P_{H_1}({\cal M}_\xi(S) \in R_\xi)$ denote the power associated with the region $R_\xi$. \begin{theorem}\label{th:LRfix}\leavevmode {\rm({\bf 1})} For any fixed $\alpha$ and for all $\xi \in \mathbb{R}^k_{\ge 0}$, the rejection regions $R_\xi$ defined in \eqref{eq:rejreg} have significance level $\alpha$, that is, $P_{H_0}(R_\xi)=\alpha$. {\rm({\bf 2})} Assume that for two vectors $\xi_a$ and $\xi_b$ the mechanisms ${\cal M}_{\xi_a}$ and ${\cal M}_{\xi_b}$ have the same level of DP (same $\varepsilon$ and $\delta$); then $\varLambda(\xi_a)=\varLambda(\xi_b)$, where $\varLambda(\xi)=\frac{\sqrt{\psi^T Diag(\xi) \psi}}{\sigma}$. {\rm({\bf 3})} Let $ j^* = \arg\max_i ( \eta_i^2/\psi_i^2),$ and define $\xi^*$ by $\xi_{j^*}^*=||\psi||^2/\psi^2_{j^*}$ and $\xi^*_i = 0 \ \ \forall\, i\neq j^*$; then the choice $\xi=\xi^*$ maximizes the power $\pi(R_{\xi})$ over all vectors $\xi \in \mathbb{R}^k_{\ge 0}$ and the associated mechanisms $\cal M_\xi$ having the same DP level, and in particular $\pi(R_{\xi^*}) \geq \pi(R_{\xi=1})$, with strict inequality unless $\max_i( \eta_i^2/\psi_i^2) = \min_i ( \eta_i^2/\psi_i^2 ).$ \end{theorem} The right-hand side of the latter inequality pertains to the original query $f$; thus a query of just one coordinate of $f$, namely the one having the largest (loosely speaking) signal-to-noise ratio $\eta_i^2/\psi_i^2$, maximizes the power of the test.
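To make the two optimal adjustments concrete, the following sketch (ours, assuming \texttt{numpy} and \texttt{scipy}; function names are illustrative) computes the volume-minimizing $\xi^*$ of Theorem \ref{th: CIfix}, the power-maximizing $\xi^*$ of Theorem \ref{th:LRfix}, and the power implied by the rejection region \eqref{eq:rejreg}, namely $\pi(R_\xi)=\Phi\big(\sqrt{\eta^T Diag(\xi)\eta}/\sigma-\Phi^{-1}(1-\alpha)\big)$:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def xi_volume(psi):
    # Theorem "CIfix": xi*_i proportional to 1/psi_i^2, scaled so
    # that psi^T Diag(xi) psi = psi^T psi (same DP level).
    psi = np.asarray(psi, dtype=float)
    return (np.sum(psi**2)/len(psi))/psi**2

def xi_power(psi, eta):
    # Theorem "LRfix": all weight on the coordinate j* with the
    # largest eta_j^2/psi_j^2, scaled to keep the DP level fixed.
    psi = np.asarray(psi, dtype=float)
    eta = np.asarray(eta, dtype=float)
    j = int(np.argmax(eta**2/psi**2))
    xi = np.zeros_like(psi)
    xi[j] = np.sum(psi**2)/psi[j]**2
    return xi

def power(xi, eta, sigma, alpha=0.05):
    # Power of the likelihood-ratio test with region (eq:rejreg).
    s = np.sqrt(np.sum(np.asarray(xi)*np.asarray(eta, dtype=float)**2))/sigma
    return norm.cdf(s - norm.ppf(1.0 - alpha))
\end{verbatim}
For instance, comparing \texttt{power(xi\_power(psi, eta), eta, sigma)} with \texttt{power(np.ones(len(eta)), eta, sigma)} quantifies the gain over the unadjusted query.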
Note the difference between the optimal query of Theorem \ref{th:LRfix} and that of Theorem \ref{th: CIfix}, which uses all coordinates of $f$. \textit{Proof} of Theorem \ref{th:LRfix}. Part (1) follows from \eqref{eq:rejreg} and the discussion preceding it with standard calculations; Part (2) is similar to that of Theorem \ref{th: CIfix}. The proof of Part (3) is given in the Appendix. \section{Random, normally distributed data} \label{sec:randomdata} So far the dataset $S$ was considered fixed, that is, nonrandom. Statisticians often view the data as random, model the data-generating process, and study the model's parameters. Accordingly, we now assume that the dataset, denoted by $T$, is randomly generated as follows: the rows of $T$, $T_1,\ldots,T_n$, are iid, where each row $T_\ell \in \mathbb{R}^d$ represents $d$ measurements of an individual in the random sample $T$. We also assume that $f$ is a linear query, that is, $$ f(T) = \left( f_1(T),...,f_k(T) \right) = \Big( \frac{1}{n}\sum_{\ell=1}^n q_1(T_\ell),...,\frac{1}{n}\sum_{\ell=1}^n q_k(T_\ell)\Big) $$ for some functions $q_1,\ldots,q_k$. In addition, we assume that $q(T_\ell) := \big(q_1(T_\ell),\ldots,q_k(T_\ell)\big) \sim N(\mu^*, \Sigma)$ for some unknown $\mu^*$ and a known covariance matrix $\Sigma$. The normality assumption holds when the entries of $T$ are themselves normal, and the $q_i$ are linear functions. Assuming normality, possibly after transformation of the data, and iid observations is quite common in statistical analysis. It follows that $f(T)\sim N(\mu^*, \Sigma_n)$, where $\Sigma_n=\Sigma/n$. This may hold approximately by the central limit theorem even if normality of the dataset is not assumed. Here we assume that $\Sigma$ is known; the case where it is obtained via a privatized query is beyond the scope of this paper. Assuming that $\Sigma$ is known is sometimes natural. For example, when we test hypotheses on means of subpopulations, we sometimes use the covariance matrix estimated from the general population. In annual economic surveys, for example, the focus is on the change between consecutive years in, say, average income or unemployment rate; however, one can assume that the past years' covariance matrix is roughly unchanged. In Section \ref{sec:numst} we give an example of blood-test data, where we use a correlation matrix estimated from the general population. Since the observed data will depend only on $q(T_\ell)$, we now redefine the dataset to be $S$, consisting of the $n$ iid rows $S_\ell :=q(T_\ell)$, \, $\ell=1, \ldots,n$. The assumption $q(T_\ell)\sim N(\mu^*, \Sigma)$ implies that these rows can take any value in $C:=\mathbb{R}^k$. The universe of all such matrices $S$ is $\mathfrak D := C^n =\mathbb{R}^{n \times k}$. Our goal is to construct a confidence region for the model parameter $\mu^*$ and test hypotheses about it. This can be done via the query $f(S)=\frac{1}{n}\sum_{\ell=1}^nS_\ell$ having the distribution $N(\mu^*, \Sigma_n)$; however, we show that posing the query $g(S):=\Sigma_n^{-1/2}f(S)$ under the same Random Differential Privacy parameters (RDP, to be defined below) results in smaller confidence regions. We also compare the powers of certain tests of hypotheses. We say that a query $f$ is invariant if $f(S)$ is invariant under permutations of the rows of $S$. This happens trivially when $f$ is a linear query as defined above. If $f$ is invariant then the distribution of the output of any mechanism that operates on $f$ is obviously unchanged by permutations of rows.
In this case it suffices to consider neighbors $S \sim S'$ of the form $ S=(S_1,\ldots,S_{n-1},S_n), \ \ \ S'=(S_1,\ldots,S_{n-1},S_{n+1}).$ We assume that $S_1,\ldots,S_{n+1}$ are iid rows having some distribution $Q$. \begin{definition} \label{def:RDP} $(\varepsilon, \delta, \gamma$)-{\rm Random Differential Privacy (\cite{hall2013random})}. A random perturbation mechanism $\cal M$ whose distribution is invariant under permutations of rows is said to be ($\varepsilon, \delta, \gamma$)-{Randomly Differentially Private}, denoted by RDP$(\varepsilon, \delta, \gamma)$, if \begin{equation*} {P}_{S_1,\ldots,S_{n+1}} \Big(\forall \ E \subseteq \mathbb{R}^k, \, \, {P}({\cal M}(S) \in E|S) \le e^\varepsilon {P}({\cal M}(S') \in E|S')+\delta \Big) \geq 1-\gamma,\end{equation*} where $S$ and $S'$ are neighbors as above, the probability ${P}_{S_1,\ldots,S_{n+1}}$ is with respect to $S_1,\ldots,S_{n+1}\overset{iid}{\sim} Q$, and the probability ${P}({\cal M}(S) \in E|S)$ refers to the noise $U$ after conditioning on $S$. \end{definition} In words, instead of requiring the condition of differential privacy to hold for all $S\sim S' \in {\mathfrak D}$, we require that there be a ``privacy set'' in which any two random neighboring datasets satisfy the DP condition, and its probability is bounded below by $1-\gamma$. An objection to this notion may arise from the fact that under RDP ``extreme'' participants, who are indeed rare, are not protected, even though they may be the ones who need privacy protection the most. Since RDP is not in the worst-case analysis spirit of DP, we remark that DP can be obtained if, instead of ignoring worst cases having small probability as in RDP, the agency trims them by either removing them from the dataset or by projecting them to a given ball (that is independent of the dataset) which determines the sensitivity. Such trimming, if its probability is indeed small, corresponding to a small $\gamma$, will not overly harm the data analysis. To define a mechanism ${\cal M}_h(S)=h(S) +U$ (see \eqref{eq:DPDP}) that satisfies RDP$(\varepsilon, \delta, \gamma)$, we need to define a ``privacy set'' $H$, which is a subset of $\mathfrak D \times \mathfrak D$ consisting of neighboring pairs $(S,S')$, that satisfies two conditions. \textbf{(A)}: $P((S,S')\in H)=1-\gamma$, where the probability $P$ is ${P}_{S_1,\ldots,S_{n+1}}$ of Definition \ref{def:RDP}, and \textbf{(B)}: Equation \eqref{eq:DPDP} holds for all $E$, and any pair of neighboring datasets $(S,S') \in H$. We then say that ${\cal M}_h^H(S)$ is RDP$(\varepsilon, \delta, \gamma)$ with respect to the privacy set $H$ and the query $h$. To construct a suitable $H$ satisfying condition \textbf{(A)}, note that \begin{equation}\label{eq:disff} f(S)-f(S')= \frac{1}{n}[S_n-S_{n+1}] \sim N(0, 2\Sigma/n^2) \end{equation} and $$ ||g(S)-g(S')\,||^2=||\Sigma_n^{-1/2}[f(S)-f(S')]||^2 =||\Sigma^{-1/2}[S_n-S_{n+1}]\,||^2/n \sim 2{\cal X}_k^2/n.
$$ Thus, if $Y \sim {\cal X}^2_k$ satisfies $P(Y \le r^2)=1-\gamma$, then $P\big(||g(S)-g(S')\,||^2 \le 2r^2/n \big)= 1-\gamma$, and we can choose the set $H$ to be $$H_g:=\big\{(S,S') \in \mathfrak{D}\times \mathfrak{D} \,:\,||g(S)-g(S')\,||^2 \le 2r^2/n\big\}.$$ Also, by \eqref{eq:disff}, we have that $||f(S)-f(S')\,||^2$ is distributed as $(2/n)Z^T\Sigma_n Z$ with $Z \sim N(0,I)$, and it is well known (by diagonalizing $\Sigma_n$) that the latter expression has the $(2/n)\sum_{i=1}^k \lambda_i X_i$ distribution, where the $\lambda_i$ denote the eigenvalues of $\Sigma_n$ and the $X_i$ are iid ${\cal X}^2_1$. Another privacy set we consider is given by $$H_f:=\big\{(S,S')\in \mathfrak{D}\times \mathfrak{D}\,:\,||f(S)-f(S')\,||^2 \le C^2\big\},$$ where $C$ is such that $P((S,S')\in H_f)=1-\gamma$; by \eqref{eq:disff} the constant $C$ depends on $\Sigma$ and $n$. We consider three Gaussian mechanisms: \begin{align*} {\cal M}^{H_g}_g(S)&=g(S)+U, \text{ where } U\sim N(0, \sigma_g^2 I), \\ {\cal M}^{H_g}_f(S)&=f(S)+U \text{ with } U \sim N(0,\sigma_{fg}^2 I),\\ {\cal M}^{H_f}_f(S)&=f(S)+U \text{ with } U \sim N(0,\sigma_f^2 I), \end{align*} where the first two are with respect to the privacy set $H_g$, and the third is with respect to $H_f$. For each of the three, an appropriate noise variance $\sigma^2_g$, $\sigma^2_{fg}$, and $\sigma_f^2$ has to be computed, given the privacy set and the RDP parameters, so that condition \textbf{(B)} above holds. To determine the noise variance we have to compute the sensitivity of the query $g$ on the set $H_g$ and the sensitivity of $f$ on both $H_g$ and $H_f$. Define the sensitivity of $f$ and $g$ on $H_g$, denoted by $D(fg)$ and ${D}(g)$, respectively, and the sensitivity of $f$ on $H_f$, denoted by $D(f)$, as follows: \begin{align}\label{eq:DfDg} &D(fg) := \operatorname*{max}_{\ (S, S') \in H_g} ||f(S)-f(S')||, \nonumber\\ &{D}(g) := \operatorname*{max}_{\ {(S, S')} \in H_g} ||g(S)-g(S')|| =\sqrt{2}\, r/\sqrt{n}\,,\\ &D(f):= \operatorname*{max}_{\ {(S, S')} \in H_f} ||f(S)-f(S')|| =C. \nonumber \end{align} We compare the above three mechanisms with the same RDP level in terms of the volume of confidence regions and the power of tests of hypotheses for the model parameter $\mu^*$, computed from data given by these mechanisms. We shall prove in Sections \ref{sec:CIrand} and \ref{sec:testrand} that the mechanism ${\cal M}^{H_g}_g(S)$ is better than ${\cal M}^{H_g}_f(S)$ in terms of the volumes of confidence regions and the power of tests. It is easy to see that $D(f) \le D(fg)$, and we show below that this implies that ${\cal M}^{H_f}_f(S)$ is better than ${\cal M}^{H_g}_f(S)$ both in terms of the volume of confidence regions and the power of tests of simple hypotheses. We shall also show that for small $\gamma$ we have $D(g) \le D(f)$, and that ${\cal M}^{H_g}_g(S)$ is better than ${\cal M}^{H_f}_f(S)$ in terms of the volume of confidence regions. The latter mechanism is discussed in Section \ref{sec:fHf}. By the definition of RDP, the mechanism ${\cal M}^{H_g}_f(S)$ satisfies $RDP(\varepsilon, \delta, \gamma)$ when \eqref{ana_gaussi_requir} holds with $D=D(fg)$ and $\sigma=\sigma_{fg}$, as does the mechanism ${\cal M}^{H_g}_g(S)$ with $D=D(g)$ and $\sigma=\sigma_g$, and likewise the mechanism ${\cal M}^{H_f}_f(S)$ with $D=D(f)$ and $\sigma=\sigma_f$. \begin{lemma}\label{le:RDPlemma} If ${\cal M}^{H_g}_g$, ${\cal M}^{H_g}_f$, and ${\cal M}^{H_f}_f$ have the same RDP level, then $D(g)/\sigma_g = D(fg)/\sigma_{fg} = D(f)/\sigma_f$.
The first equality is equivalent to $\sigma_{fg}^2=\lambda_{max}(\Sigma_n)\sigma^2_g$, where $\lambda_{max}(\Sigma_n)$ denotes the largest eigenvalue of $\Sigma_n$. \end{lemma} \begin{proof} The first part follows from Lemma \ref{le:BW} and the above discussion. For the second part it suffices to prove that $[D(fg)]^2=\lambda_{max}(\Sigma_n)[D(g)]^2$. To see this, note that the maximization in \eqref{eq:DfDg} is equivalent to maximizing $(g(S)-g(S'))^T \Sigma_n (g(S)-g(S'))$ subject to $||g(S)-g(S')||^2=[D(g)]^2$, and the result follows readily from Rayleigh's theorem; see, e.g., \cite{horn2012matrix}, Chapter 4. \end{proof} \subsection{Confidence regions}\label{sec:CIrand} We have \begin{equation*}\Sigma_n^{1/2}{\cal M}^{H_g}_g(S) \sim N \left(\mu^*, \Sigma_n(1+\sigma_g^2) \right), \ \ {\cal M}^{H_g}_f(S) \sim N(\mu^*, \Sigma_n+\sigma^2_{fg} I), \ \ {\cal M}^{H_f}_f(S) \sim N(\mu^*, \Sigma_n+\sigma_f^2 I).\end{equation*} The standard confidence regions for $\mu^*:=E[f(S)]$ based on ${\cal M}^{H_g}_g(S)$, ${\cal M}^{H_g}_f(S)$, and ${\cal M}^{H_f}_f(S)$ are \begin{align*}\label{eq:conf_regi_ellips_per} CR^t_{g} &= \Big\{ \mu \in \mathbb{R}^k: \big(\Sigma_n^{1/2}{\cal M}^{H_g}_g(S)-\mu \big)^T (\Sigma_n(1+\sigma_g^2))^{-1} \big(\Sigma_n^{1/2}{\cal M}^{H_g}_g(S)-\mu \big) \leq t \Big\},\\ CR^t_{fg} &= \Big\{ \mu \in \mathbb{R}^k: \big({\cal M}^{H_g}_f(S)-\mu \big)^T (\Sigma_n+\sigma_{fg}^2 I)^{-1} \big({\cal M}^{H_g}_f(S)-\mu \big) \leq t \Big\},\\ CR^t_{f} &= \Big\{ \mu \in \mathbb{R}^k: \big({\cal M}^{H_f}_f(S)-\mu \big)^T (\Sigma_n+\sigma_f^2 I)^{-1} \big({\cal M}^{H_f}_f(S)-\mu \big) \leq t \Big\}. \end{align*} The next theorem shows that confidence regions based on ${\cal M}^{H_g}_g$ have a smaller volume than those based on ${\cal M}^{H_g}_f$, and, for $\gamma$ sufficiently small, also than those based on ${\cal M}^{H_f}_f$. Thus, of the three natural candidates we consider, ${\cal M}^{H_g}_g$ is the best mechanism for small $\gamma$. \begin{theorem} \leavevmode\label{th: CIrnd} {\rm({\bf 1})} For any fixed $t$, the confidence regions $CR^t_{g}$, $CR^t_{fg}$, and $CR^t_{f}$ have the same confidence level; that is, for any $\mu^*$ we have $P(\mu^* \in CR^t_{g})=P(\mu^* \in CR^t_{fg})=P(\mu^* \in CR^t_{f})$. {\rm({\bf 2})} If the mechanisms ${\cal M}^{H_g}_g$, ${\cal M}^{H_g}_f$, and ${\cal M}^{H_f}_f$ have the same level of $RDP(\varepsilon, \delta, \gamma)$ then $D(g)/\sigma_g = D(fg)/\sigma_{fg}=D(f)/\sigma_f$. {\rm({\bf 3})} $Vol( CR^t_{g}) \leq Vol( CR^t_{fg}),$ with strict inequality unless all the eigenvalues of $\Sigma_n$ are equal. {\rm({\bf 4})} For sufficiently small $\gamma$, $Vol( CR^t_{g}) \leq Vol( CR^t_{f}),$ with strict inequality unless all the eigenvalues of $\Sigma_n$ are equal. \end{theorem} \textit{Proof}. \,\, Part (1) holds as in Theorem \ref{th: CIfix}, and Part (2) holds by Lemma \ref{le:RDPlemma}. The proof of Part (3), given in the Appendix, uses the relation $\sigma_{fg}^2=\lambda_{max}(\Sigma_n)\sigma^2_g$ of Lemma \ref{le:RDPlemma} and a straightforward eigenvalue comparison. The proof of Part (4) is somewhat more involved. It uses a comparison of distribution functions of weighted sums of independent gamma random variables and a majorization argument. Details and references are given in the Appendix. \subsection{Testing hypotheses: Likelihood-ratio test}\label{sec:testrand} With $E[f(S)]=\mu^*$ we consider the hypotheses $ H_0: \mu^*=0$ and $H_1: \mu^*=\eta$, and the mechanisms ${\cal M}^{H_g}_f(S)$ and ${\cal M}^{H_g}_g(S)$ defined above.
If ${\cal M}^{H_g}_f(S)$ is observed then the rejection region $R_{fg}$ of the likelihood-ratio test with significance level $\alpha$ has the form \begin{equation*}\label{eq:optim_rej_regio_gaus_data} R_{fg} =\big\{ {\cal M}^{H_g}_f(S): {\cal M}^{H_g}_f(S)^T(\Sigma_n+\sigma_{fg}^2 I)^{-1}\eta > t \,\,\big\}, \ \textnormal{where} \ t=\Phi^{-1}(1-\alpha) \sqrt{\eta^T(\Sigma_n+\sigma_{fg}^2 I)^{-1}\eta}. \end{equation*} If ${\cal M}^{H_g}_g(S)$ is observed then the testing problem becomes $H_0: E[g(S)]=0$ vs. $H_1: E[g(S)]=\Sigma_n^{-1/2}\eta$, and the rejection region $R_g$ of the likelihood-ratio test with significance level $\alpha$ has the form \begin{equation*} R_g =\big\{ {\cal M}^{H_g}_g(S): {\cal M}^{H_g}_g(S)^T[(1+\sigma_g^2)I]^{-1}\Sigma_n^{-1/2}\eta > t \big\}, \ \textnormal{where} \ t=\Phi^{-1}(1-\alpha) \sqrt{\frac{\eta^T\Sigma_n^{-1}\eta}{\sigma_g^2+1}}. \end{equation*} \begin{theorem} \leavevmode \label{th:LRRnd} {\rm({\bf 1})} The rejection regions $R_{fg}$ and $R_g$ have the same significance level $\alpha$. {\rm({\bf 2})} If both mechanisms ${\cal M}^{H_g}_g$ and ${\cal M}^{H_g}_f$ have the same level of $RDP(\varepsilon, \delta, \gamma)$ then $D(g)/\sigma_g = D(fg)/\sigma_{fg}$. {\rm({\bf 3})} Let $\pi(R_g)$ and $\pi(R_{fg})$ denote the power associated with the rejection regions $R_g$ and $R_{fg}$, respectively; then $ \pi(R_g) \geq \pi(R_{fg})$, with strict inequality unless all the eigenvalues of $\Sigma_n$ are equal. \end{theorem} \begin{proof} Part (1) is similar to Part (1) of Theorem \ref{th:LRfix}. Part (2) is already given in Theorem \ref{th: CIrnd}. The proof of Part (3), given in the Appendix, involves a simultaneous diagonalization argument and a comparison of eigenvalues using $\sigma_{fg}^2=\lambda_{max}(\Sigma_n)\sigma^2_g$. \end{proof} \subsection{The mechanism ${\cal M}^{H_f}_f(S)$}\label{sec:fHf} It is easy to see that the sensitivity $D(f)$ (on $H_f$) satisfies $D(f) \le D(fg)$. An equal RDP level implies by Lemma \ref{le:RDPlemma} that $D(fg)/\sigma_{fg}=D(f)/\sigma_f$ and therefore $\sigma_{fg} \ge \sigma_f$. The power of the likelihood-ratio test based on ${\cal M}^{H_f}_f(S)$ is the same as in \eqref{eq:opt_pow_gaus_data}, with $\sigma_{fg}$ now replaced by $\sigma_f$. This power is easily seen to be decreasing in $\sigma$, and therefore the likelihood-ratio test based on ${\cal M}^{H_f}_f(S)$ has a higher power than the test based on ${\cal M}^{H_g}_f(S)$. On the other hand, an extensive computational study shows that the power of the test based on ${\cal M}^{H_g}_g(S)$ may be higher or lower than that based on ${\cal M}^{H_f}_f(S)$, depending on the parameters involved. In the case of Figure 1 we see that the test based on ${\cal M}^{H_g}_g(S)$ has a higher power than that based on ${\cal M}^{H_f}_f(S)$. \subsection{Four degrees of naivet\'{e}} We consider the random query $f(S)\sim N(\mu^*, \Sigma_n)$. The discussion below applies in principle to other distributions. We discuss four ways of testing hypotheses on $\mu^*$ in the presence of perturbation noise having a known distribution. The discussion pertains either to a DP$(\varepsilon,\delta)$ mechanism ${\cal M}_f$, see \eqref{eq:DPDP}, or to an RDP$(\varepsilon,\delta,\gamma)$ mechanism ${\cal M}^H_f$ with some privacy set $H$, which together with the RDP parameters determines the variance of the added noise.
\begin{enumerate} \item The most naive approach to analyzing perturbed data is to ignore the added noise altogether and determine rejection regions and significance levels as if $f(S)$ were observed without noise. In this case the test may not be optimal, and the significance level will be wrong. We call this approach \textit{super-naive analysis}. \item A less naive approach is to choose a test based, as above, on the wrong assumption that $f(S)$ is observed without noise, but to set its critical value $t$, which determines the significance level, according to the correct distribution of the observed data, taking the noise into account. In Figure 1 we depict the power curve (denoted by $[\textbf{a}]$ in the example below) of this approach, where RDP is with respect to the privacy set $H_f$. We call this approach \textit{naive analysis}. \item An even better approach is to choose the test optimally by computing the likelihood-ratio test and the significance level using the correct distribution of the observed response, taking the Gaussian noise into account. For simple queries and Gaussian noise this approach is feasible analytically. In Figure 1 we show two curves of the power under this approach, associated with ${\cal M}^H_f$ when RDP is with respect to the privacy set $H_g$ (denoted by $[\textbf{b}]$) and $H_f$ (denoted by $[\textbf{c}]$), respectively. We call this approach \textit{optimal analysis}. \item In this paper we propose adjusting the query to the statistical goals of the analyst and then using the optimal rejection or confidence regions based on the observed response to the adjusted query, and on its correct distribution, taking the adjustment and the noise into account. This is accomplished by the mechanism ${\cal M}^{H_g}_g(S)$ discussed above. Its power curve is denoted by $[\textbf{d}]$ in Figure 1. We call this approach \textit{adjusted optimal analysis}. \end{enumerate} The first two approaches are sometimes used by practitioners; they may be acceptable for very large samples. Their properties have been studied asymptotically. \section{A numerical example}\label{sec:numst} We provide a simple data example. The privacy of medical data is of utmost importance. Consider a dataset consisting of blood-test results. A standard blood test contains 30--40 variables measured in different units, with ranges and variances that are very different. Some of these variables are highly correlated. Of the many blood-test measurements, we chose for the sake of our examples to focus on six variables: Cholesterol, High Density Lipoprotein, Apo Protein A-1, Low Density Lipoprotein, Total Lipid, and Glucose (all having the same units, MG/DL). In this order, their covariance matrix $\Sigma$ is given below. It is based on data from \cite{qureshi2017application, castelli1977distribution, blum1985relationship}, and other Internet medical sources. Clearly, our goal is to provide a simple example to make our point, not to study how to protect blood-test data. $$ {\Sigma} = \begin{pmatrix*}[r] 1600\, & -160\, & -400\, & 840\, & 800\, & -40\\ *\, & 400\, & 160\, & -175\, & -200\, & 0\\ * & * & 1600\, & 280\, & 600\, & 0\\ * & * & * & 1225\, & 700\, & -35\\ * & * & * &* & 2500 & -50\\ * & * & * & * & * & 100 \\ \end{pmatrix*} \quad $$ We consider the release of averages of the above six variables over a sample of size $n$, which will vary in our examples.
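A small sketch (ours, assuming \texttt{numpy} and \texttt{scipy}) builds this covariance matrix (the entries marked $*$ follow by symmetry) and computes the sensitivities $D(g)=\sqrt{2}\,r/\sqrt{n}$ and $D(f)=C$ of Equation \eqref{eq:DfDg}, the latter by a rough Monte Carlo estimate of the $(1-\gamma)$-quantile of $(2/n)Z^T\Sigma_n Z$:
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

# Upper triangle of the covariance matrix of the six blood-test
# variables (MG/DL); the full matrix is obtained by symmetry.
U = np.array([
    [1600, -160, -400,  840,  800, -40],
    [   0,  400,  160, -175, -200,   0],
    [   0,    0, 1600,  280,  600,   0],
    [   0,    0,    0, 1225,  700, -35],
    [   0,    0,    0,    0, 2500, -50],
    [   0,    0,    0,    0,    0, 100]], dtype=float)
Sigma = U + U.T - np.diag(np.diag(U))

n, k, gamma = 50, 6, 1e-4   # parameters as in Example 1
rng = np.random.default_rng(0)

# D(g): choose r with P(chi2_k <= r^2) = 1 - gamma.
D_g = np.sqrt(2.0*chi2.ppf(1.0 - gamma, k)/n)

# D(f) = C: (1-gamma)-quantile of ||f(S)-f(S')||^2, distributed as
# (2/n) Z^T Sigma_n Z = (2/n^2) Z^T Sigma Z (crude for tiny gamma).
Z = rng.standard_normal((1_000_000, k))
q = (2.0/n**2)*np.einsum('ij,jk,ik->i', Z, Sigma, Z)
D_f = np.sqrt(np.quantile(q, 1.0 - gamma))
\end{verbatim}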
It is quite standard for statisticians to assume (with justification by the central limit theorem) that such vectors of averages are multivariate normal. We assume the agency releases data under $RDP(\varepsilon, \delta, \gamma)$. If instead of RDP we use DP and trimming as described in Section \ref{sec:randomdata}, we obtain essentially the same results, with DP$(4\varepsilon, \delta)$. In the examples below we consider various parameters and alternatives. \begin{table}[htbp] \caption{Examples 1--4} \label{tab:freq} \begin{center} \resizebox{0.55\textwidth}{!}{ \begin{tabular}{ccccc} \textbf{Example} &\textbf{ $\eta$} & \textbf{$n$} & \textbf{$\delta$} & \textbf{$\gamma$}\\ \hline (1)&(10, 5, 10, 8.75, 12.5, 2.5)&50&$0.0200$&$10^{-4}$\\ (2)&(10, 5, 10, 8.75, 12.5, 2.5)&50&$0.0004$&$10^{-6}$\\ (3)&(0, 0, 20, 0, 25, 5)&50&$0.0004$&$10^{-6}$\\ (4)&(0, 0, 20, 0, 25, 5)&100&$0.0001$&$10^{-6}$\\ \end{tabular}} \end{center} \end{table} \vspace{-0.5cm} \begin{figure}[H] \centering \includegraphics[width=0.55\linewidth]{Figure1} \caption{Comparison of power of the likelihood-ratio test with significance level $\alpha=0.05$ as a function of\, $\varepsilon$\, for naive, optimal, and adjusted optimal analyses.} \label{fig:fig1} \end{figure} \section*{Acknowledgments} This work was supported in part by a gift to the McCourt School of Public Policy and Georgetown University, Simons Foundation Collaboration 733792, and Israel Science Foundation (ISF) grants 1044/16 and 2861/20. We are grateful to Katrina Ligett and Moshe Shenfeld for very useful discussions and suggestions. \bibliographystyle{plain}
\section{Introduction} Despite the enormous and extraordinary success of Quantum Mechanics (QM) in all fields of physics and in an unlimited set of technical applications, it is a widespread opinion that there are still open problems in its foundation and in the description that it provides for natural phenomena. These problems are directly or indirectly connected with what has been called the "measurement problem", and they can be classified into three categories. Here we refer only to what is called "strong measurements". In standard QM a physical system is described by the associated wave function (w.f.), which evolves according to the proper wave equation, which in the non-relativistic regime is the Schr\"odinger equation. However, in a generic process which involves some type of measurement, e.g. the detection of a particle, a different type of evolution occurs, which is usually referred to as the "reduction" of the wave function. As discussed in the next section, it is well known that this reduction cannot be described by the same wave equation, and its dynamics is essentially unknown. A possible way out of this discrepancy is to assume that QM is able to describe the measurements only statistically, i.e. the frequencies of the outputs. In any case QM appears in some way incomplete, since it is unable to describe the physical processes that occur during a single measurement. This statement is also compatible with the so-called "Copenhagen interpretation" of QM, since in this case one excludes {\it a priori} the possibility of a complete description of a generic experiment, and one should refer only to the arrangement of the macroscopic apparatus. Here we take the attitude that this impossibility should and can be overcome. Along the same lines, we consider it legitimate to describe quantum processes in terms of wave functions and their evolution. \par The second category refers to the so-called "null result" measurements, or interaction-free measurements. In some cases it looks as if the w.f. reduction can occur without any interaction with the apparatus that is considered as the detecting system. This renders the reduction process even less understandable. \par Finally, one can mention the so-called Einstein-Podolsky-Rosen (EPR) "paradox", i.e. the action at distance between the measurements on two systems that have interacted or were produced together in the past. In this case there is not a real problem, and QM predictions have been tested with great accuracy. However, the action at distance appears to violate relativity, and it is legitimate to ask for the physical reason and the possible physical mechanism that are at the basis of this intriguing theoretical and experimental result. This action at distance is usually referred to as "entanglement". \par In this paper we present a model that is intended to complete QM and offers a coherent picture of an extended QM which covers all physical situations where ordinary QM is unable to give an answer or is ambiguous. The model is based on the extension of the standard three dimensional real space ${\bf R}^3$ to the nonstandard real space ${\bf R^*}^3$ as support of the w.f., and on the introduction of a minimal infinitesimal length for the space distance. Under these hypotheses, and additional natural assumptions, new degrees of freedom appear, which give the possibility to complete quantum mechanics.\par The paper is organized as follows. In Sec.
2 we discuss the measurement problem by recalling some elementary examples, which are well known, but which allow us to fix the framework of the problems that we want to solve. The set of examples considered is surely not exhaustive, but it is used to illustrate the different facets of the measurement problem. \par In Sec. 3 the model is introduced and developed, with a few subsections as intermediate steps. The treatment here does not follow a rigorous mathematical language, but uses well-known results of nonstandard analysis. Section 4 is devoted to the general discussion of the meaning and applications of the results. The discussion is further developed in Secs. 5 and 6. Conclusions are drawn in Sec. 7. A few Appendixes contain details of the calculations. \section{Sketch of the 'problem of measurement'.} In the following we recall some of the features of the wave function reduction process, by means of some thought or real experiments. They are quite a small selection of all the possible cases and of the numerous more sophisticated experiments that have been performed. They are well known, but we present them in a form suitable for the later discussion. The treatment is at the elementary level. \subsection{Measurements without interaction. \label{sec:null}} \subsubsection{On a modified ideal Stern-Gerlach experiment.\label{sec:SG}} Let us consider a Stern-Gerlach experiment with a modified arrangement. As is well known, in this experiment one measures the spin component of a particle (usually a neutral atom) by means of an inhomogeneous magnetic field, which splits the initial wave packet into two parts, each one corresponding to one of the two values of the spin projection. The two parts of the wave packet move in different directions and separate. Then the wave packet is detected at a screen, and the detection of the particle occurs at one of two possible positions, each one corresponding to one of the two values of the spin projection. If the particle initially does not have a definite spin, each detection will occur alternatively at one position or the other, according to the probabilities in the initial wave packet. It is essential to consider an initial wave packet which contains a coherent mixture of the spin components, which is preserved after the splitting. The detection necessarily includes the reduction of the wave packet to one of the two components. Let us now consider a slightly modified arrangement of the experiment. Instead of a single screen that detects the particle, one considers two half-screens, one for each direction (spin up, spin down), but at different distances. If the initial state contains an equal population of spins, the probability to detect a particle at each screen is 1/2 (ideal detector). We can detect a particle at the more distant half-screen, spin up say, or at the closer one. Suppose that we detect a particle at the distant screen. One can expect that the evolution of the system between the arrival of the wave packet at the position of the first screen and the arrival at the second should in general involve the linear coherent superposition of a state corresponding to the detection (and localization) of the particle at the first screen and a state corresponding to the undetected particle, moving freely toward the second screen. It looks natural to assume that the reduction of the wave function has not occurred before the arrival at the second screen, since no detection (interaction) was happening.
But this would imply that the component localized at the first screen, including the activated detector itself, `disappears' at the moment of detection at the second screen. This looks like an action at a distance on the wave function at the macroscopic level. However it is not clear when the wave function reduction takes place in a measuring process of any sort. It could be that the w.f. reduction happens at the first screen, even if the particle is not detected there. This last assumption is the usual interpretation of the Renninger paradox, the analogous thought experiment devised with a radioactive nucleus. \subsubsection{The Renninger paradox.\label{sec:Renni}} Following refs. \cite{renn1,renn2}, let us consider a radioactive nucleus which emits alpha particles in s-wave, i.e. with a spherically symmetric wave function, inside a spherical surface covered by a detector (e.g. a scintillator). Each emitted alpha particle will produce a spot at a certain position on the surface. The spherical wave of each alpha particle is thus reduced to a localized wave function, corresponding to a given angular direction. A collection of a large number of decay events will show an isotropic distribution all along the detecting surface. Let us now divide the sphere into two hemispheres with two different radii, with a large enough difference in size, as schematically depicted in Fig. \ref{fig:renn}. If a single alpha particle is emitted, it will hit the surface of one of the two hemispheres. It can happen that after a time much longer than the time necessary for the alpha particle to reach the inner surface, but still smaller than the one necessary to reach the larger surface, no detection has yet occurred. This would mean that the alpha particle has been emitted toward the larger hemisphere, and therefore the initial spherical wave has been reduced to a smaller size inside the larger hemisphere. This reduction has occurred without any interaction. Ultimately the alpha particle wave function will be further reduced by the hit on the larger detecting surface. \begin{center} \begin{figure} \includegraphics[width=1.0\textwidth]{fig_Renninger.ps} \vskip -12 cm \caption{Schematic representation of the thought experiment due to Renninger \cite{renn1,renn2}.} \label{fig:renn} \end{figure} \end{center} Also in this case the wave function reduction occurs without interaction with the detector. Notice that if the original sphere (or, equally, the two hemispheres) is implemented as a cloud chamber (i.e. filled with a supersaturated vapor), each decay event will produce a straight line track. In this case the reduction of the alpha particle wave function occurs at the first interaction with the molecules of the vapor, and after that the alpha particle, i.e. its wave packet, moves along a straight line trajectory. This last arrangement suggests that the reduction process has some physical basis, since it breaks the spherical symmetry and determines the whole subsequent dynamics of the alpha particle and of the cloud chamber. \subsection{Coherent `attenuation' of the wave function.\label{sec:attenuation}} Besides the paradoxical cases of wave function reduction without interaction, it is usually assumed that in Quantum Mechanics the coherence among the different components of the wave function remains until the detection. Here we recapitulate, in a simplified version, the interesting experiment performed by Summhammer et al.
\cite{Summa}, in order to stress this salient feature of the process of reduction corresponding to a definite measurement. We can describe it as a two-slit experiment with neutrons, where one of the slits has a variable degree of (stochastic) absorption, or alternatively a variable degree of interruption by a chopper (as the chopper size is varied). The intensity is such that only one neutron at a time is present in the apparatus. \par\noindent A) Case of absorption. \par Let us first consider only the slit (labeled by $R$) with a so-called semi-transparent `absorber'. Since such a device acts as a splitter, after the slit the initial wave function $\Phi_I$ will evolve into the function $\Phi_R$, which is a superposition of a wave packet $\psi_R$ that has passed undisturbed in the forward direction and a wave function of a neutron scattered out of the beam. Here we neglect possible reaction processes at the absorber. Then \vskip 0.2 cm \begin{equation} \Phi_I \, \rightarrow \, \Phi_R \,=\, \sqrt{a}\, \psi_R\times R_0 \,+\, \sqrt{1-a} \sum_i \psi_n(i) \times R_{abs}(i) \end{equation} \par\noindent $a$ : probability of passing undisturbed through the slit (and of being detected at the distant detector). \par\noindent $1-a$ : probability of being scattered away, with $\psi_n$ the corresponding scattering wave functions. \par\noindent $R_0$ : ground state wave function of the absorber. \par\noindent $R_{abs}$ : wave functions of the absorber. In fact any state of the absorber plus the particle can be written as a sum of product wave functions. Notice that each particle wave function $\psi_n(i)$ is a scattering state, while $R_{abs}$ differs from $R_0$ only by the effect of the recoil, since only elastic scattering is considered. \par\noindent The probability to detect the neutron at the distant detector is $a$, the probability that it is not detected is $1-a$. This is similar to a Stern-Gerlach experiment with an initial state corresponding to a coherent mixture of spin up and spin down, where the emerging wave packet is a superposition of two spatially (angularly) separated components. According to the standard rules of quantum mechanics, both components must be present until the neutron can be considered ``detected''. Only the undisturbed component is explicitly detected, which means that the wave function of this component is `attenuated' by the factor $\sqrt{a}$. One can actually check whether the coherent superposition of the two components (absorbed and non-absorbed) is still present at the moment of detection at the screen. \vskip 0.2 cm If we open the other slit (without any absorber), labeled by $L$, the (non-normalized) wave function $\Phi_R'$ after the two slits will be \vskip 0.2 cm \begin{equation} \Phi_R' = \psi_L\times R_0 + \Phi_R \end{equation} \par\noindent where $\psi_L$ is the free undisturbed wave packet emerging from the slit $L$. \par At the final detector (putting for simplicity $R_0 = 1$) \begin{equation} | \Phi_R'(x)|^2 \,=\, | \psi_L + \Phi_R |^2 \,=\, |\psi_L(x)|^2 + | \Phi_R |^2 + \psi_L \times \Phi_R^* + c.c.
\end{equation} \par\noindent Notice that on the detector \begin{equation} \psi_n(i) = 0 \ \ \ \mathrm{(spatial\ separation)} \end{equation} \par\noindent so that \begin{equation} | \Phi_R|^2 = a | \psi_R|^2 \end{equation} \par\noindent \begin{equation} \psi_L(x) \times \Phi_R^* = \sqrt{a}\, \psi_L \times \psi_R^* \end{equation} \par\noindent \begin{equation} \psi_L(x)^* \times \Phi_R = \sqrt{a}\, \psi_L^* \times \psi_R \end{equation} \par\noindent Finally \begin{equation} | \psi(x)|^2 = | \psi_L(x)|^2 + a | \psi_R(x)|^2 + \sqrt{a}\, (\psi_L(x) \times \psi_R^* + c.c.) \end{equation} \par\noindent Putting $\psi_R = \psi_L \times \exp(\imath\alpha)$ one gets \begin{equation} |\psi(x)|^2 = |\psi_L|^2 \big( 1 + a + 2\sqrt{a} \cos(\alpha) \big) \end{equation} \par\noindent The oscillating intensity (i.e. the fringes) at the screen has a magnitude proportional to $\sqrt{a}$. \vskip 0.2 cm \par\noindent B) Case of the chopper. \par In this case the two possibilities correspond to an unattenuated free wave packet and to a wave packet scattered away or localized at the chopper. However they are alternately present, and no superposition between them is possible. Then one has \begin{equation} |\psi(x)|^2 = (1-a) |\psi_L|^2 + a |\psi_L + \psi_R|^2 = |\psi_L|^2 \big( 1 + a + 2a \cos(\alpha) \big) \end{equation} \par\noindent and the oscillating intensity is proportional to $a$, with $a$ the probability of not being intercepted by the chopper. \vskip 0.2 cm \par The experiments confirm that the fringe contrast is proportional to $\sqrt{a}$ in case A and to $a$ in case B. \par In case A the fact that the oscillating intensity is proportional to $\sqrt{a}$ is a direct consequence of the assumption of coherence between the two components of the wave function before the detection. An alternative could be \begin{equation} |\psi|^2 = a|\psi_L + \psi_R(x)|^2 + (1-a) \Big|\sum_i \psi_n(i) R_{abs}(i)\Big|^2 \ \rightarrow \ a\,|\psi_L + \psi_R(x)|^2 \ \ \mathrm{(at\ the\ detector)} \end{equation} \par\noindent which corresponds to a density matrix with lower intensity but no attenuation of the fringe contrast. This is contradicted by the experiment; therefore no reduction of the w.f. takes place at the semi-transparent slit. In other words the semi-transparent slit acts as a splitter rather than as an imperfect detector. \subsection{Indeterminism of the wave function reduction and irreversibility.} \par Take a linearly polarized photon impinging on a polarizer tilted at 45 degrees with respect to the direction of the polarization. The photon has a 50\% probability of being transmitted and a 50\% probability of being absorbed. Then for a collection of many photons, 50\% of them will be transmitted and 50\% will be absorbed. This means that for the same initial condition there can be two different outputs. This is indeterminism. \par This example can be generalized to a generic experiment. Let us consider the measurement of a physical quantity $A$, with eigenvalues $a_n$ and corresponding eigenstates $\phi_n$. If the initial state $\Psi_I$ is a mixture of eigenstates \begin{equation} \Psi_I \,=\, \sum_n c_n \phi_n \end{equation} \noindent the result of each measurement will be one of the eigenvalues $a_n$, with a frequency proportional to $|c_n|^2$. Then again, for the same initial condition $\Psi_I$ different outputs can be obtained. \par If the initial state coincides with one of the eigenstates, the output will be the same eigenstate and the corresponding eigenvalue of $A$. This means that different initial conditions can produce the same output.
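\par As a simple numerical illustration of these statistics, the sketch below (in Python) samples repeated ``measurements'' on identically prepared states according to the frequencies $|c_n|^2$; the amplitudes used are purely illustrative placeholders, and the sketch is only a caricature of the statistics, not of the reduction dynamics itself:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state example: c_1 = c_2 = 1/sqrt(2), as for the
# photon on the 45-degree polarizer discussed above.
c = np.array([1.0, 1.0]) / np.sqrt(2.0)
probs = np.abs(c) ** 2

# Repeated "measurements" on identically prepared initial states:
# the same initial condition produces different outputs.
outcomes = rng.choice(len(c), size=100_000, p=probs)
print(np.bincount(outcomes) / outcomes.size)   # ~ [0.5, 0.5]

# Eigenstate input, c = (1, 0): the output is always a_1, so
# different initial conditions can produce the same output.
\end{verbatim}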
From these statistical properties one can deduce another feature of the wave function reduction process. In fact, given an eigenstate $\phi_n$, obtained after the process of reduction on a generic initial state, one can imagine applying a time reversal operation, possibly including the apparatus. In a generic case this will be a stationary state and therefore it will not evolve, apart from a possible phase. In any case, since there are many initial states that can produce $\phi_n$ as output of the reduction, no deterministic time reversal evolution can recover the initial state. This indicates that the process of reduction is irreversible, and therefore it must include some statistical or stochastic elements. \subsection{Non linearity of the reduction process.} \par To elucidate one of the aspects of the measurement process, let us consider the case of a selective measurement. If one considers as initial state any one of the eigenstates of the physical quantity $A$, the output of the measurement process will be the state of the apparatus+system corresponding to that eigenstate \begin{equation} \Psi_I \,=\, \phi_n \Phi_0 \, \rightarrow \, \phi_n \Phi_n \, \equiv \, \Psi_n \end{equation} \noindent where $\Phi_0$ is the initial state of the apparatus and $\Phi_n$ the final state of the apparatus, indicating that the result of the measurement is $a_n$. The final state $\Psi_n$ could be more complex, but in any case it must be different and distinguishable for each one of the initial eigenstates. This is the description of this type of measurement, and it is usually referred to as a ``strong measurement''. If the initial state of the system is a coherent mixture of eigenstates, then a linear evolution operator would produce the corresponding mixture of the $\Psi_n$ \begin{equation} \Big(\ \sum_n c_n \phi_n\ \Big) \Phi_0 \, \rightarrow \, \sum_n c_n \Psi_n \label{eq:vonN} \end{equation} \noindent On the contrary, if the apparatus is a proper one, the output must be one of the states $\Psi_n$ only. Therefore linearity is not tenable, and one must adopt a non linear evolution in the description of the reduction process. \par The same reasoning is at the basis of the Schr\"odinger cat paradox. \subsection{Size of the detector and time sequence of the reduction.} \par It is usually stated that a detector has to be of macroscopic size to work properly. It is not clear in which sense this is true. Let us consider for instance a scintillator in planar geometry and a charged particle hitting it. The scintillator can be considered a detector that emits a certain number of photons from the spot corresponding to the position of the particle detection; it thus measures, with a certain uncertainty, the position of the particle on the plane. The photons are emitted from a definite position by a certain number of excited molecules, not necessarily very large. \par If the energy of the particle is lowered so much that only a few molecules can be excited, the detector can still be considered to operate properly. If the initial wave packet of the particle is large enough to cover a region much larger than the size of a generic spot, one has to conclude that a reduction of the wave function occurs at each detection, despite the small number of molecules involved. \par Besides that, each spot at a given position should be similar to the one that occurs if the initial wave packet is localized around the same position.
In the simplest picture, therefore, one can imagine that the large wave packet is reduced to the size of a generic spot during the coherent excitation of the molecules in the screen. One is then led to conclude that the process of reduction is induced by the molecules that overlap with the large wave packet even if they are not finally excited, except for the few ones at the spot. The same conclusion would be obtained if the w.f. reduction occurred after the final process of excitation of the detector molecules, but then one would have to assume that the reduction process is an additional process, disconnected from the interaction. \subsection{Non locality and entanglement.} \par Non locality is an intrinsic feature of QM. This was stressed by Einstein well before the formulation of the well known Einstein-Podolsky-Rosen (EPR) ``paradox'' \cite{EPR,Bohm}. One can go back to the 1927 Solvay Conference, whose Proceedings are reported e.g. in the book by Bacciagaluppi and Valentini \cite{Bag}. In one of the discussion sessions Einstein presented a thought, but feasible, experiment, which we briefly summarize here. He schematized the experiment with a drawing similar to the one in Fig. 2. An electron beam is sent toward a screen with a hole small enough that the outgoing electrons are spread in all directions in an almost isotropic way. The hole acts as a source of electrons emitted in all directions in the outgoing hemisphere. On a distant photographic screen that covers the hemisphere, the electrons are detected at one of the possible positions, indicated by (A), along the screen. If the beam has a low enough intensity, only one electron at a time is present in the region of the experiment. According to QM each electron can be described by a wave packet with an angular spread covering the hemisphere, and at the detector screen the electron is localized at (A). Before detection the wave packet is spread over all portions of the screen, so potentially it could be detected at any other point (B). The detection at (A) deletes this possibility instantaneously, so something has happened at (B). This suggests some sort of non locality, but of course it cannot be tested, since the detections at (A) and at (B) are mutually exclusive. In any case this way of thinking is directly related to the finite size of the wave packet and to the indivisibility of the wave function. The state of the particle is embodied in the whole wave function. It seems \cite{Bag} that nobody in the audience understood what Einstein wanted to say, in particular N. Bohr. Apart from that, one can see that the discussion of this thought experiment contains some suggestions for the EPR paradox. In fact, if we now consider two particles instead of only one, this non local effect can be tested if the two particles are entangled, as shown by J. Bell with his famous inequality and confirmed experimentally. The relevance of the indivisibility of the wave function is especially apparent if the wave packet is split into two wave packets traveling in different directions. If a detection then occurs at one of the two wave packets, the other wave packet must immediately `disappear'. Think of the case of a tunneling experiment \cite{Stein}. After the tunneling process has occurred, the wave packet is split into a reflected part and a transmitted part, moving in opposite directions.
Then if the transmitted part is detected, the reflected one must `disappear', at whatever distance it may be, since the particle is described by a single wave function.\par \begin{center} \begin{figure} \includegraphics[width=1.0\textwidth]{fig_Einstein.ps} \vskip -12 cm \caption{Schematic representation of the thought experiment illustrated by A. Einstein, as reported in ref. \cite{Bag}.} \end{figure} \end{center} \par \section{Extension of ordinary quantum mechanics.} \par The main conclusion that can be drawn from the discussion in the last section is that the formalism based on the Schr\"odinger equation is invalid during the reduction process. Indeed the Schr\"odinger equation has the following basic properties. \par\noindent i) It entails a deterministic dynamics: for a given initial state, i.e. wave function, the final state after a certain amount of time is unique. \par\noindent ii) It is reversible, at least for systems that do not contain an interaction with explicit violation of time reversal symmetry. \par\noindent iii) It is linear, and so is the corresponding dynamics. \par\noindent iv) No perturbation of a physical system, i.e. of its wave function, can occur without an external interaction. In other words the whole evolution is determined solely by the hamiltonian. \par\noindent In the reduction process all these properties fail. To complete ordinary Quantum Mechanics with the inclusion of the reduction process, one needs to extend the formalism beyond the Schr\"odinger equation framework. \subsection{A schematic introduction to nonstandard analysis\label{nons}} \indent We are all familiar with physical quantities that are assumed to be `measurable' or `observable'. The results of these operations are real numbers; take as examples the energy, the momentum, the positions, and so on. In reality the result of any measurement is a number with a finite set of digits, i.e. a rational number, however large the set of digits can be. It is tacitly assumed that the `actual' value that a physical quantity can assume is a real number, that is a number that can be irrational or even transcendental (e.g. $\pi$), to which more and more refined measurements would tend. This is surely the case for the position, or better the coordinate along a given axis, which is supposed to be the support of a particle wave function. Although in Quantum Mechanics the position of a particle cannot be determined with arbitrary accuracy, because of the uncertainty principle, one assumes that physical space is a real manifold of suitable dimension. However this can be considered an unnecessary restriction. Nonstandard analysis (NSA) has shown rigorously that the set of real numbers can be enriched by the introduction of additional numbers, usually indicated indeed as nonstandard (NS) numbers. In this section we give a very limited introduction to nonstandard analysis, in order to provide some explanation of the terminology and to summarize some basic properties. The presentation is restricted mainly to the features of nonstandard analysis which will be used in the paper. The language is not in a rigorous mathematical style, but rather appeals to intuition, as is appropriate for a physicist reader. Of course no proofs are presented.\par A possible way of introducing nonstandard analysis is the axiomatic one \cite{Keisler,Diener,Nelson,Loeb,Benci}.
Let us recall that the real axis is a complete ordered field, which means schematically that for its elements the operations of addition and multiplication are well defined, and moreover a relation of ordering is given, i.e. for any pair of real numbers it is possible to establish which one is larger, or possibly that they are equal. The completeness property refers to the fact that any bounded set in \textbf{R} has a least upper bound. Following refs. \cite{Keisler,Benci} we first introduce two axioms for the nonstandard real axis \textbf{R}$^*$. \par\noindent A) \textbf{R}$^*$ is an ordered field, a proper extension of \textbf{R}. \par\noindent B) \textbf{R}$^*$ has a positive infinitesimal $\epsilon$, which is non-zero but smaller than any positive real $r$ of \textbf{R}. \par\noindent The extension is a proper one, in the sense that there are elements belonging to \textbf{R}$^*$ but not to \textbf{R}. Notice that \textbf{R}$^*$ is not complete; it turns out that no proper extension of \textbf{R} can be complete. This means that not all bounded sets have a least upper bound, which is the case for the set of infinitesimals. If $\epsilon > 0$ is an infinitesimal, $H \,=\, 1/\epsilon$ is larger than any real number in \textbf{R}. Such a number will be indicated as unlimited. The set of unlimited numbers has no greatest lower bound belonging to \textbf{R}$^*$ or \textbf{R}. It follows that the set of unlimited numbers is strictly disjoint from \textbf{R}. From the physical point of view this means that all processes which can be considered as occurring within this set can be assumed to be due to additional degrees of freedom with respect to the ones occurring in \textbf{R}. All this can be trivially extended to the negative sector of the real axis. Summarizing, an element $x$ of \textbf{R}$^*$ is infinitesimal if $ |x| < r $ for any positive real $r$ of \textbf{R}, finite if $ |x| < r $ for some real $r$ of \textbf{R}, and unlimited if $ |x| > r $ for any positive real $r$ of \textbf{R}. Notice that a finite element, by this definition, is not infinitesimal. The expectations for the addition and multiplication of two elements $x,y$ of \textbf{R}$^*$ can be summarized as follows. \par $x,y$ infinitesimal : $x+y$ and $xy$ infinitesimal \par $x$ infinitesimal and $y$ finite : $x+y$ finite, $xy$ infinitesimal \par $x,y$ finite : $x+y$ and $xy$ finite \par $x$ finite, $y$ unlimited : $x+y$ and $xy$ unlimited \par $x,y$ unlimited : $x+y$ and $xy$ unlimited \par $x$ infinitesimal, $y$ unlimited : $x+y$ unlimited, $xy$ not unique \par\noindent The last line can be understood from the following examples \par $\epsilon \cdot 1/\epsilon \,=\, $ finite \par $\epsilon \cdot 1/\sqrt{\epsilon} \,=\, $ infinitesimal \par $\epsilon \cdot 1/\epsilon^2 \,=\, $ unlimited \par \noindent Each one of these products is between an infinitesimal and an unlimited number, but the result depends on the relative `size' of the unlimited number. \par One of the key nonstandard objects is the monad of a finite element. Two elements $x,y$ of \textbf{R}$^*$ are said to be infinitely close if $x - y$ is an infinitesimal, and we write $x \sim y$. Then, given an element $x$ of \textbf{R}$^*$, the monad of $x$, monad($x$), is formed by all the elements infinitely close to $x$. In particular each element of \textbf{R} has a monad. It turns out that any finite element $x$ of \textbf{R}$^*$ can be written as $x = r + \epsilon$, where $r$ is an element of \textbf{R} and $\epsilon$ is infinitesimal, i.e. each monad of a finite element contains a real number $r$ of \textbf{R}.
One of the key properties of monads is that if two of them are not disjoint then they coincide, i.e. two monads cannot have elements in common unless they coincide. From these properties it follows that to each finite element $x$ of \textbf{R}$^*$ one can associate a unique element $r$ of \textbf{R}, which is called the standard part of $x$, and one writes \begin{equation} x \,=\, r \,+\, \epsilon \ \rightarrow \ r \,=\, st(x) \end{equation} \noindent For an unlimited number the standard part does not exist. This means that \textbf{R} is properly embedded in \textbf{R}$^*$, provided we identify the standard part of any number of \textbf{R}$^*$ with the corresponding number of \textbf{R}. As in the case of the set of unlimited numbers, the additional nonstandard numbers that do not coincide with elements of \textbf{R} can be viewed physically as additional degrees of freedom. \par To each element $x$ of \textbf{R}$^*$ one can also associate what is called a galaxy, which is the set of numbers $y$ such that $y\, -\, x$ is finite; this set is indicated as gal($x$).\par To complete the extension of \textbf{R} to \textbf{R}$^*$ one needs two other axioms \cite{Keisler}. The first one has to do with functions, and it asserts that each standard function $f$ in \textbf{R} has an extension, called the canonical extension, to a nonstandard function $^*f$. A nonstandard function (in one variable) is just a correspondence that assigns to each element of a set of nonstandard numbers a unique element of another set of nonstandard numbers. The generalization to functions of several variables is trivial. Once nonstandard functions have been introduced, various operations in \textbf{R} can be extended to \textbf{R}$^*$, notably the operations of differentiation and integration; for details see e.g. \cite{Keisler,Diener,Nelson,Benci}. The last axiom \cite{Keisler} is the so-called Transfer Principle, which connects, loosely speaking, the validity of a statement in \textbf{R} to the validity of the same statement in \textbf{R}$^*$, when the different elements of the statement have been extended. Since the principle will not be explicitly used, we refer the reader to the above quoted literature. Finally we notice that the extension of \textbf{R} to \textbf{R}$^*$ entails that the set of natural numbers \textbf{N} can be extended to the nonstandard set \textbf{N}$^*$, which includes unlimited integer numbers. \par Another method to introduce nonstandard analysis is the construction of an explicit model of \textbf{R}$^*$, which satisfies the axioms and eliminates any doubt about the existence of infinitesimals and unlimited numbers as introduced axiomatically. This is developed in the celebrated paper and textbook of Robinson \cite{Robinson1,Robinson2}. \par The method is similar to the one used in elementary analysis for introducing the field of real numbers from the field of rationals. We sketch here the procedure for future purposes. As is well known, in standard analysis the Cauchy sequences of rational numbers can be used to define the real numbers. Furthermore, as the field of real numbers \textbf{R} is complete, a Cauchy sequence of real numbers converges to a real number. It is possible to construct a new field by considering the set of general sequences of standard real numbers, which is called an `ultraproduct'. The arithmetic operations are defined by applying them term by term to the sequences; the set becomes a well defined field once the equivalence relation discussed below is imposed.
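\par Before turning to the ordering and the equivalence relation, the termwise arithmetic can be illustrated very roughly with truncated sequences; the following sketch (in Python) is only suggestive, since the ultrafilter, which settles ordering and equivalence, is completely glossed over:
\begin{verbatim}
import numpy as np

# Truncated sequences as a crude stand-in for ultraproduct elements.
# All operations act term by term, as described above.
n = np.arange(1, 10**6 + 1, dtype=float)

eps = 1.0 / n     # a sequence tending to 0: an "infinitesimal"
H = n             # its termwise reciprocal: an "unlimited" number

# The three products quoted in the text (tails of the sequences):
print((eps * H)[-3:])               # constant 1      -> finite
print((eps / np.sqrt(eps))[-3:])    # 1/sqrt(n) -> 0  -> infinitesimal
print((eps * H**2)[-3:])            # n -> infinity   -> unlimited
\end{verbatim}
\noindent The `size' of a product is thus read off from the asymptotic behavior of the corresponding sequence.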
To obtain an ordered and well defined field, however, it is not possible to introduce the operation of comparison, i.e. larger or smaller, term by term. Furthermore one needs an equivalence relation within the set of sequences. For all that it is necessary to select the terms of the sequences to which the comparison and the equivalence can be applied. Such a selection is coherently performed by introducing what is called an `ultrafilter', which indeed defines the set of terms that has to be used for the comparison. Since the terms of a sequence are labeled by an integer number, the ultrafilter is a family of subsets of the set of integer numbers \textbf{N}. Once this is done the field is completely ordered, and one can identify each equivalence class of sequences with a number. This new field embeds the standard real numbers: each real number $r$ can simply be identified with the sequence whose terms are all equal to $r$. We are not going to define what an ultrafilter is, which can be chosen in many (but equivalent) ways, but just assume that it exists. Of course the new field \textbf{R}$^*$ contains numbers which have no correspondence in \textbf{R}, i.e. all the sequences not equivalent to the above mentioned constant ones. It is then possible to identify infinitesimal and unlimited numbers. To exemplify the case of an infinitesimal, let us consider a sequence in \textbf{R} converging to zero. It turns out that, for any given positive real number, the indexes of the terms of the sequence smaller than that number form a set belonging to the ultrafilter, so that the number in \textbf{R}$^*$ defined by the sequence is smaller than any positive real number. \par Within the model it is possible to extend in a constructive way many standard mathematical objects to \textbf{R}$^*$ and to show that the above mentioned axioms are indeed satisfied. We consider the case of the extension of a standard function $f(x)$, which will appear several times in the sequel. To extend the definition of $f$ to a given nonstandard point $x^*$, we consider the sequence $\{ x_n \}$ which defines $x^*$ and the corresponding sequence $\{ f(x_n) \} $. The latter defines in general a nonstandard number $f^*$. The correspondence between $x^*$ and $f^*$ defines a nonstandard function $^*f$ \begin{equation} \{ x_n \} \ \rightarrow \ \{ f(x_n) \} \,=\, f^* \,\equiv\, ^*f(x^*) \end{equation} \noindent This extension will be referred to as the `canonical extension'. Vice versa, given a nonstandard function, one can introduce its standard part \begin{equation} ^*f(x^*) \ \rightarrow \ st(^*f(st(x^*))) \,\equiv\, f(x) \end{equation} \noindent It has to be noticed that the standard part of a canonically extended function $^*f$ does not necessarily coincide with the original standard function $f$, i.e. the two operations are not the `inverse' of each other. However, for smooth enough (finite) functions this is the case. For an extended exposition of the ultrafilter model and several applications we refer to \cite{Goldblatt,Albeverio}. \subsection{Basic assumptions.} \par Following refs. \cite{Sousa,Benci}, we introduce in \textbf{R}$^*$\ the so-called hyperfinite real line ${\rm \Pi}$. Given an unlimited integer number $N$, that is a number belonging to \textbf{N}$^*$ - \textbf{N}, the quantity \begin{equation} d \,=\, 1/N \end{equation} \noindent is an infinitesimal. Here the number $1$ in the numerator carries some unit of length. We then consider the numbers \begin{equation} x_j \,=\, \frac{j}{N} \,\equiv\, j\times d \ \ \ ; \ \ \ j \,=\, -M, -M+1, \ldots, M-1, M \end{equation}
\noindent with $ M \,=\, N^2 $. These $2M + 1$ numbers are a set of equidistant points in \textbf{R}$^*$~, which contains both infinitesimal and unlimited numbers. The standard parts of the finite numbers contained in this set can be embedded in \textbf{R}~. It is essential to realize that ${\rm \Pi}$ is not the often used discrete version of the real axis \textbf{R}~; indeed its cardinality cannot be smaller than that of \textbf{R}$^*$~. In fact the monad of each real number in \textbf{R}~ contains an unlimited number of points belonging to the hyperfinite line, as we now show by arguments that will be useful in the sequel.\par Given a real number $x \in \textbf{R}$, let us consider the points $x_j$ of ${\rm \Pi}$ such that $st(x_j) \,=\, x$. The set of these numbers is the so-called ${\rm \Pi}$-monad of $x$, which will be indicated as $monad_{\Pi}$(x). Let us consider the unique point $x_h$ belonging to the ${\rm \Pi}$-monad such that $|x_h \,-\, x|\, \leq \,d\, $ and the points \begin{equation} x_j \,=\, x_h \,+ j\times d \ \ \ ; \ \ \ j \,=\, \pm 1, \pm 2, \ldots, \pm K \label{eq:points} \end{equation} \noindent Let us take e.g. $ K \,=\, [N^\alpha] $, where $[A]$ indicates the largest integer smaller than $A$ and $0 < \alpha < 1$. The number $K/N$ is an infinitesimal, so all the $x_j$ belong to monad($x$), since they differ from $x$ by an infinitesimal. The number $K$ is an unlimited one, which proves the statement. \par The basic assumption is that space is made of these infinitesimal grains, such that no space distance can be smaller than $d$. We stress again that $d$ is an infinitesimal quantity that does not need to be further specified, provided it is infinitesimal; it does not fix any scale at the standard level. The hyperfinite real line is then just a basis on which phenomena can be described. At the nonstandard level translational symmetry is broken, and in three dimensions rotational symmetry is also broken. However we will see that the physical effects at the standard level do not depend on the particular lattice that is used. It has to be stressed that the use of the nonstandard hyperfinite line is essential for the construction of the model in many respects, as will be apparent in the sequel. The use of a standard grid for the real line, i.e. a finite step size, would result in a trivial discretization of ordinary quantum mechanics, and the model simply could not be formulated. \par Let us consider the propagation (in one dimension) of a wave $W$ along ${\rm \Pi}$. If all wavelengths are allowed, down to 2$d$, one needs to consider the discrete version of the wave equation \begin{equation} \frac{\partial^2 W_j(t)}{\partial t^2} \,-\, c^2 \frac{1}{d^2} (W_{j+1} - 2 W_{j} + W_{j-1}) \,=\, 0 \label{eq:w1} \end{equation} \par\noindent where $W_j = W(x_j)$ and $c$ is the wave phase velocity. In the following we put $c \,=\, 1$. The index $j$ runs from $-M +1$ to $M - 1$, for a total of $2M - 1$ equations. The values of $W_{-M}$ and $W_M$ must be fixed by the boundary conditions. The spatial part of Eq. (\ref{eq:w1}) is just the Laplacian in discrete form, but with an infinitesimal step size. Eq. (\ref{eq:w1}) can be viewed for instance as the wave equation for a photon, or for a neutral massless (and spinless) particle. The Klein-Gordon, Dirac and Schr\"odinger equations will be discussed later. \par Eq.
(\ref{eq:w1}) defines a band matrix $A$ \begin{equation} A_{j,j'} \,=\, \frac{1}{d^2} (\delta_{j,j'-1} - 2\delta_{j,j'} + \delta_{j,j'+1} ) \label{eq:bm} \end{equation} \par\noindent with $j,j'$ running again from $-M+1$ to $M-1$. Then the wave equation can be written for the vector $W \,=\, (W_{-M}, \ldots, W_{M})$ \begin{equation} \frac{\partial^2 W(t)}{\partial t^2} \,-\, A W \,=\, 0 \label{eq:mat} \end{equation} \par\noindent supplemented by the boundary conditions. Notice that the time variable $t$ has not been put on a hyperfinite line. One can equally well consider that time also varies in infinitesimal steps $ d_T $, but of higher order than $d$, i.e. $ d_T/d \, \sim \, 0 $. This will be tacitly assumed in the sequel. The matrix (\ref{eq:bm}) can be diagonalized in closed form. We follow the treatment of ref. \cite{Delf}, extended to the hyperfinite line and to complex solutions with periodic boundary conditions. As shown in Appendix A, the eigenvectors of the matrix are \begin{equation} \phi_k(x_j) \,=\, \frac{1}{\sqrt{2M+1}} \exp\Big(\frac{\imath k j \pi}{M}\Big) \ \ \ ; \ \ \ k \,=\, 1, 2, \ldots, M \label{eq:ev} \end{equation} \noindent with eigenfrequencies \begin{equation} \omega(k) \,=\, \pm \frac{2}{d} \sin\Big(\frac{k\pi}{2M}\Big) \label{eq:ef} \end{equation} The eigenvectors (\ref{eq:ev}) are orthonormal according to the ``usual'' scalar product, which for two generic vectors $u,v$ reads \begin{equation} (u,v) \,=\, \sum_j\, u(x_j)^* v(x_j) \label{eq:sp} \end{equation} \noindent Notice that the summation over $j$ is over a set with the cardinality of the continuum. It follows that the wave equation (\ref{eq:w1}) has the set of solutions \begin{equation} \psi_k(t,x_j) \,=\, \frac{1}{\sqrt{2M+1}} \exp\Big(\imath\big(\frac{k j \pi}{M} - \omega(k) t\big) \Big) \label{eq:pw} \end{equation} \noindent It can be verified that for smooth standard functions the scalar product (\ref{eq:sp}) can be directly related to the usual scalar product in the functional Hilbert space, i.e. the integral of the product of the two functions, provided one introduces the integration step $d$ \begin{equation} st\Big(d\sum_j\, u(x_j)^* v(x_j)\Big) \longrightarrow \int u(x)^* v(x) dx \label{eq:sps} \end{equation} \noindent where $st()$ stands for the standard part. This is a result of nonstandard analysis. The explicit indication of the standard part will often be omitted in the following, if there is no ambiguity. The general expression of Eq. (\ref{eq:pw}) is a well defined nonstandard number, and it will be sufficient for the development of the paper. \par In the spectrum of frequencies (\ref{eq:ef}) one has to distinguish essentially two regions. If $ k $ is of the form $ q\times N $, or smaller, with $ q $ a finite number, then the argument of the sine function is an infinitesimal and one gets \begin{equation} \omega(k) \,=\, \frac{k\pi}{M d} \,=\, \frac{k\pi}{N} \,=\, q\pi \end{equation} \noindent where $ q\pi $ can be identified with the wave number $ p $ of the radiation and $ \lambda \,=\, 2\pi/p $ is the corresponding wavelength. If we multiply $ \omega $ by $ \hbar $ we get the usual energy spectrum linear in momentum, as e.g. for photons. However if $ k $ is still of the form $ q\times N $, but $ q $ is an unlimited nonstandard number, with necessarily $ q < N $, then the argument of the sine function is finite and cannot be expanded. The frequency spectrum reaches nonstandard large values, but it also bends at increasing values of $ k $ (i.e.
of $q$) and reaches a maximum (in absolute value) at $ q \,=\, N $, where the argument of the sine function is $ \pi/2 $. The direct connection of the energy with the momentum is then lost. On the other hand the wavelength still has its usual meaning. \par Another deep consequence of this feature is the behavior of the group velocity in this region. Although $ \hbar p $ cannot be identified with a momentum, in the propagation of a wave packet which contains nonstandard frequencies the group velocity $ v(p) $ can be identified with the (nonstandard) derivative of the frequency with respect to the wave number \begin{equation} v(p) \,=\, \frac{\partial \omega}{\partial p} \,=\, \cos\Big(\frac{p}{2N}\Big) \,=\, \cos\Big(\frac{q}{N}\frac{\pi}{2}\Big) \label{eq:gv} \end{equation} \noindent Near the maximum of the spectrum, where the nonstandard frequencies are present, the group velocity is vanishingly small, and it tends to zero at the maximum. One can see that the group velocity is infinitesimal in a full region below the endpoint at $ q \,=\, N$. Take $ q_0 \,=\, N \,-\, N_0 $, where $ N_0 \,=\, [N^\alpha] $, with $ \alpha < 1 $. Then for any $ q_0 < q < N $ \begin{equation} v(p) \,=\, \cos\Big(\frac{\pi}{2}\frac{q}{N}\Big) \,<\, \cos\Big(\frac{\pi}{2}(1 - \frac{N_0}{N})\Big) \,=\, O\Big( \frac{N_0}{N} \Big) \,\sim\, 0 \end{equation} \noindent These modes correspond to standing waves of unlimited frequency. One cannot associate an energy with them, as we will see later on the basis of other arguments. \par We now go back to the wave equation (\ref{eq:w1}). We separate the functional space of the wave functions into two subspaces. One is spanned by the smooth standard wave functions, which can be characterized by standard finite values of wavelengths and frequencies; it can be embedded in the usual Hilbert (or Dirac) space, or whatever space can be used to formulate ordinary Quantum Mechanics. The other subspace is spanned by the wave functions that are characterized by nonstandard unlimited frequencies and infinitesimal wavelengths. We extend Quantum Mechanics by assuming that a generic wave function $ \Psi $ contains components in each one of the two subspaces, so that it can be written as the sum of two functions \begin{equation} \Psi \,=\, \Psi_S \,+\, \Psi_{NS} \label{eq:SNS} \end{equation} \noindent By construction the component $ \Psi_S $ is a nonstandard function, and to recover ordinary QM one needs a standard function defined over \textbf{R}~. This can be obtained by taking the standard part of the function $ \Psi_S $ \cite{Sousa}. If the function $ \Psi_S $ is smooth enough, its standard part does exist and is defined over the standard part of ${\rm \Pi}$, which is the standard real axis, with standard real values \cite{Sousa}. More precisely, the value of $ \Psi_S$ at the standard point $ x $ can actually be taken at any point of the ${\rm \Pi}$-monad of $ x $. Furthermore the discrete derivatives appearing in (\ref{eq:w1}) equal the ordinary derivatives of $ st(\Psi) $. Then one recovers the Laplacian, and eq. (\ref{eq:w1}) is just the standard free wave equation. From now on we will assume, without further specification, that the component $\Psi_S$ is the canonical extension of a standard smooth function.
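\par The two spectral regions described above can be visualized numerically. The following sketch (in Python) evaluates Eqs. (\ref{eq:ef}) and (\ref{eq:gv}) for a large but finite $N$; this is of course only a caricature, since in the model $N$ is unlimited, but it shows the linear, photon-like regime at small $q/N$ and the bending of the spectrum with vanishing group velocity near the endpoint $q = N$:
\begin{verbatim}
import numpy as np

# Finite-N caricature of the dispersion relation: d = 1/N, M = N^2,
# wave numbers k = q*N parametrized by u = q/N in (0, 1].
N = 10**6
u = np.linspace(1e-6, 1.0, 2000)

omega = 2.0 * N * np.sin(np.pi * u / 2.0)   # Eq. (ef), with c = 1
v_group = np.cos(np.pi * u / 2.0)           # Eq. (gv)

# u -> 0 (q finite): omega ~ q*pi (linear regime), v_group ~ 1.
print(omega[0] / (np.pi * u[0] * N))        # ~ 1
# u -> 1 (q -> N): omega bends to its maximum 2/d = 2N and the group
# velocity vanishes: the slow, standing-wave region of the spectrum.
print(v_group[-1])                          # ~ 0
\end{verbatim}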
Here and in the following we distinguish between the term ``standard component'', which refers to $\Psi_{S}$, which is still a NS function, and the term ``standard part'', which corresponds to the extraction of the standard part of a nonstandard quantity, a well defined operation in nonstandard analysis. \subsection{Origin and form of nonstandard waves.\label{sec:nsw}} \noindent In this subsection we analyze how the nonstandard waves can appear and in which way they are associated with free wave packet propagation.\par We start with a basic example adapted from ref. \cite{Delf}. Let us consider as initial condition for the wave $ \Psi(t,x_j) $ an infinitely narrow wave packet, i.e. \begin{equation} \Psi(0,x_j) \,=\, \left\{ \begin{array}{ll} \sqrt{N} & \mathrm{ if}\ x_j = 0\\ 0 & \mathrm{ if}\ x_j \neq 0 \end{array} \right. \end{equation} \noindent with zero initial velocity, $ \partial_t\Psi(0,x_j) = 0 $. It looks like the square root of a delta function, so that the modulus squared $|\Psi|^2$ is normalized to 1. It is clearly a quite discontinuous function of $ x_j $. The time evolution of the initial wave packet can be obtained by expanding in the plane waves of eq. (\ref{eq:pw}). Because of symmetry the initial wave packet splits into two parts moving in opposite directions. One gets \begin{equation} \Psi(t,x_j) \,=\, \frac{1}{2N^{3/2}} \sum_n \Big[ \exp\big(\imath\omega_n t -\imath p_n x_j\big) \,+\, \exp\big(\imath\omega_n t +\imath p_n x_j\big) \Big] \label{eq:delf} \end{equation} \noindent with $ \omega_n \,=\, \omega(p_n) $, $ p_n \,=\, n\pi/N$. If one took for $ \omega(p) $ a spectrum linear in $ p $ for all standard and nonstandard values of the momentum $ p $, then the initial narrow wave packet would split into two identical narrow wave packets of height $ 1/2 $ each, moving in opposite directions along the $ x $ axis. \par However along the hyperfinite line the time evolution is rather different. As shown in Appendix B, one can calculate the hyper-summation over $ n $ in Eq. (\ref{eq:delf}) by means of the stationary phase method, which is well justified in the unbounded part of the frequency spectrum. One gets \begin{equation} \Psi(t,x_j) \,=\, \frac{1}{\sqrt{\pi}}\, \frac{e^{\imath \alpha}}{[t^2 \,-\, x_j^2]^{1/4}} \label{eq:tail} \end{equation} \noindent where $ \alpha $ is a phase which depends on $ t $ and $ |x_j| $, given in Appendix B, and $ |x_j| \,<\, t $. It has to be noted that the phase function belongs to the nonstandard sector. The expression is in principle valid only not too close to the points $ |x_j| \,=\, t $, where this wave function diverges. However the region of invalidity is vanishingly small, and indeed one can verify that (Appendix B) \begin{equation} \sum_j d\, |\Psi(t,x_j)|^2 \,=\, 1 \end{equation} \noindent independently of $ t $. \par From Eq. (\ref{eq:tail}) one can see that the wave packet at a generic time $ t $ extends symmetrically between $x_j \,\sim\, -t$ and $ x_j \,\sim\, t $. The two peaks near $ x_j \,\sim\, \pm t$ move in opposite directions but leave a tail that joins them at all times. This is due to the vanishingly small group velocity that characterizes the last region of the spectrum, at the highest values of the wave vectors. \par The lesson one can draw from this very simple example is that, as expected, the nonstandard waves arise from the possible discontinuities of the wave functions, and that their propagation follows the group velocity of the nonstandard part of the frequency spectrum.
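\par This behavior can be checked in a finite-lattice caricature of the example above (unit spacing instead of the infinitesimal $d$, a ring of sites, and the corresponding discrete dispersion relation). In the sketch below (in Python) the printed products are roughly constant, i.e. the envelope of the tail indeed decays as $(t^2 - x_j^2)^{-1/4}$, in agreement with Eq. (\ref{eq:tail}):
\begin{verbatim}
import numpy as np

# Finite-lattice caricature of the narrow initial packet: L sites on a
# ring with unit spacing, exact mode expansion of the discrete wave
# equation (dispersion w = 2|sin(p/2)| for d = c = 1).
L, t = 4001, 800.0
p = 2.0 * np.pi * np.arange(L) / L
w = 2.0 * np.abs(np.sin(p / 2.0))

# Delta at j = 0 with zero initial velocity:
#   W_j(t) = (1/L) sum_k cos(w_k t) exp(i p_k j)
for x in (100, 300, 500, 700):          # points inside the cone |j| < t
    jj = np.arange(x - 20, x + 21)
    W = (np.cos(w * t) @ np.exp(1j * np.outer(p, jj))).real / L
    # local envelope maximum times the predicted growth factor:
    print(x, np.abs(W).max() * (t**2 - x**2) ** 0.25)
\end{verbatim}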
Finally, the behavior just at the endpoints of the wave packet is not necessarily a divergence, which is probably an artifact of our extrapolation of the stationary phase method, but some degree of singularity (a discontinuity or other) will surely persist there. Notice that an initial full $\delta$-function in \textbf{R}\ would produce two $\delta$-functions with weight 1/2, propagating undisturbed in opposite directions; however the w.f. would not be square integrable. \par A general wave packet which presents some discontinuities of the type just discussed, on top of a smooth behavior, can be expected to propagate keeping some type of singularity and leaving a ``tail'' along the hyperfinite real line. The nonstandard part of the wave function can then be associated with a distribution of singularities, i.e. it will be a superposition of wave functions like Eq. (\ref{eq:tail}), which form a wave packet overlapping with the standard wave packet corresponding to a given quantum state. This nonstandard wave packet can be viewed as ``dual'' to the standard one. \par For massless particles, in the standard part of the spectrum phase velocity and group velocity coincide. This is not the case for massive particles. If, for simplicity, we adopt the Klein-Gordon wave equation, to a given standard wave packet one can still associate a dual nonstandard wave packet characterized by the same group velocity or a smaller one, in analogy with Eq. (\ref{eq:tail}). As shown in Appendix B, the dual wave packet is a superposition of wave functions of the form \begin{equation} \Psi(t,x_j) \,=\, \frac{1}{\sqrt{\pi }}\, \frac{e^{\imath \alpha}}{[t^2 \,-\, x_j^2]^{1/4}}\ \ \ \ \ \ \ \ \ ; \ \ \ \ \ \ \ x_j \,<\, vt \label{eq:tail1} \end{equation} \noindent where $v\ (< 1)$ is a given group velocity. In particular the dual wave packet can include a distribution of group velocities similar or even equal to that of the associated standard wave packet. \par A generic single particle free state will thus be represented by a standard wave packet together with the associated nonstandard wave packet, which is a superposition of wave functions of the form of Eq. (\ref{eq:tail}) or (\ref{eq:tail1}). \par The particular linear superposition which forms the dual wave packet can be fixed only on the basis of physical considerations, as will be discussed in the following. \subsection{Bound states\label{sec:bound}} Let us consider a particle bound in a (finite) potential well $v$ and look for the possible stationary states. Any matrix element between the standard and nonstandard components vanishes. Indeed, if the two components are expanded in plane waves, they contain different sets of wave numbers and, in agreement with the orthogonality of the states of Eq. (\ref{eq:ev}), their scalar product vanishes. For the same reason, the matrix elements of the potential between the two components vanish (assuming that the potential is the canonical extension of a smooth standard function). One can then solve the wave equation separately for the two components. It is clear that the two components cannot both be in a stationary state with a common eigenfrequency, since the standard and nonstandard parts of the frequency spectrum of Eq. (\ref{eq:ef}) are separated. The physical spectrum of the eigenfrequencies belongs to the standard part, and therefore we are forced to look for the stationary solutions of the standard wave equation and to let the nonstandard part of the wave function be non-stationary.
This is a basic point of the model, which has essential consequences. It is also the fundamental reason why the nonstandard part of the frequency spectrum cannot be related to an energy spectrum. The standard part of the wave function can be identified with the canonical extension of a given eigenfunction of ordinary QM, and the corresponding eigenfrequency with the eigenvalue of the bound state. It is possible to find the general form of the non-stationary nonstandard part of the wave function corresponding to one such bound state. Let us call $\phi_S(t,x_j)$ a stationary solution of the standard wave equation, belonging to a given eigenfrequency $\omega_B$. It is shown in Appendix C that the corresponding nonstandard part can be written in general as \begin{equation} \phi_{NS}(t,x_j) \,=\, G(t,x_j)\, \phi_S(t,x_j) \label{eq:ns} \end{equation} \noindent with \begin{equation} G(t,x_j) \,=\, \sum_p g(p) \exp\big(-\imath(\omega(p)t\, -\, p x_j)\big)/\sqrt{2M+1} \label{eq:G} \end{equation} \noindent an arbitrary linear combination of the free nonstandard plane waves of Eq. (\ref{eq:pw}), i.e. the wave numbers $p$ run over the nonstandard sector. The nonstandard function $g(p)$ is to a large extent arbitrary, because the wave equation cannot fix its particular form. As in the case of the propagating particle state, the function $g(p)$ can be specified only on the basis of physical considerations, to be discussed in the sequel. It has to be strongly stressed that the form of Eqs. (\ref{eq:ns}),(\ref{eq:G}) for the nonstandard part is not determined by a direct interaction with the binding potential. As is apparent in the derivation of Appendix C, the potential $v$ is in fact acting only on the standard part, and indeed Eq. (\ref{eq:ns}) indicates that the nonstandard part remains in a free state, ``modulated'' by the standard part of the bound state wave function. This effect is in agreement with the fact that the two components of the wave function represent a single physical state. The total wave function of a bound state can then be written \begin{equation} \phi(t,x_j) \,=\, \big[ 1 \,+\, G(t,x_j) \big] \phi_S(t,x_j) \label{eq:tot} \end{equation} \noindent which indicates that the standard part is modified by nonstandard and dynamical distortions. The latter correspond, in a generic case, to a set of singular ``spikes'' in dynamical evolution. In fact a generic function $g(p)$ has an unlimited support in wave numbers, and therefore an infinitesimal width in coordinate space. In general many of these singularities can be present. Finally it is important to notice that in the expansion of Eq. (\ref{eq:G}) both positive and negative frequencies, as well as positive and negative wave vectors, are present, and therefore standing waves are possible. \par The next question concerns the physical effects that the nonstandard part can produce. It does not affect the energy of the bound state, but one can ask whether it is measurable or whether it affects other physical quantities or physical processes. We will now argue that it cannot be detected directly but, as we will see later, it can indirectly produce fundamental effects on the evolution of the wave function during a measuring process.\par We consider an external probe represented by a field $\zeta(x)$ with support in \textbf{R}~, i.e. an external field coupled to the standard part of the wave function of an external system.
The nonstandard component of a generic wave function can be written \begin{equation} \Psi_{NS}(x_l) \,=\, \sum_p f(p) \exp (-\imath p x_l)/\sqrt{2M+1} \label{eq:expa} \end{equation} \noindent with $\sum_p |f(p)|^2$ finite or unlimited, which simply states that it belongs to the NS functional space of nonstandard momenta. As such it is orthogonal to any standard function, and therefore any matrix element of a standard interaction between $ \Psi_{NS} $ and a standard wave function vanishes. Notice that the considered matrix elements are strictly zero, not just infinitesimal. This means that the nonstandard part of the wave function cannot be probed by means of the standard part of the w.f. of an external system. This result does not apply in the case of a possible interaction between the nonstandard components of the wave functions of two physical systems. In this case the interaction can be canonically extended to \textbf{R}$^*$\ and the matrix elements will not vanish in general. However, we will see that these interaction matrix elements are relevant only for the wave function reduction process. \subsection{Setting the functional space.\label{sec:sfs}} In the previous sections we have introduced the general form and properties of a possible wave packet that includes both standard and nonstandard components. Since the wave function must represent a single physical state, one has to restrict the functional space to a set of permissible functions, each one corresponding to a physical wave function. In particular one has to assume a definite connection between the standard and nonstandard components, to avoid an unphysical redundancy of the functional space. Let us first consider a single particle bound state. In this case the (time dependent) nonstandard component can include only standing waves, so finally the form of the function $ G(t,x_j) $ of Eqs. (\ref{eq:G}),(\ref{eq:tot}) is chosen such that \begin{equation} \Psi_{NS}(t,x_j) \,=\, \sum_{r} \cos(\omega_r t) \exp(\imath p_r x_j)\, ^*\!\phi_S(t,x_j) \label{eq:bound} \end{equation} \noindent where $^*\phi_S(t,x_j) $ is the canonical extension of the stationary standard component. The summation is over a set of $ \lambda $ nonstandard momenta, and $ \omega_r \,=\, \omega(p_r) $. It remains to fix this particular set. We will only assume that for $ r \neq r^{\prime} $ the difference $ p_r - p_{r^{\prime}} $ is an unlimited NS number and that the number $ \lambda $ of momenta is also unlimited. For instance, if on average $ p_r - p_{r^{\prime}} \approx N^\alpha$, with $ 0 < \alpha < 1 $, then $ \lambda \approx N^{1 - \alpha} $, which is also unlimited. Then the total wave function will be \begin{equation} \psi(t,x_j) \,=\, \big[1 + G(t,x_j)\big]\,^*\!\!\phi_S(t,x_j) \label{eq:bt} \end{equation} \noindent Notice that for definiteness we have chosen the cosine functions in Eq. (\ref{eq:bound}). The particular set of momenta is assumed to be characteristic of a given physical system. With the form of Eq. (\ref{eq:bound}) and the above restriction on the momentum set, the scalar product between the NS components of two states $ \psi, \psi' $ turns out to be (Appendix C) \begin{equation} d \sum_j \overline{\Psi_{NS}(t,x_j)}\, \Psi'_{NS}(t,x_j) \,=\, <\, \phi_S\, |\, \phi_S' \,> \sum_r \cos^2(\omega_r t) \label{eq:spNS} \end{equation} \noindent i.e. it is proportional to the ordinary scalar product between the standard parts of the two wave functions. To get this result the form of Eq.
(\ref{eq:bound}) is essential, so there is not much freedom if the overlap between two states is to be unique. For a generic spectrum of momenta the above restriction on $ p_r - p_{r'} $ is almost surely satisfied, provided the number of nonstandard momenta is infinitesimal with respect to the total number, as in the previous example. One has to mention that the form (\ref{eq:bound}) corresponds to the presence of a singularity at $ x_j \approx 0 $; it can be generalized by distributing a set of singularities all around the wave packet. Then the nonstandard w.f. component can be written \begin{equation} \Psi_{NS}(t,x_j) \,=\, \frac{1}{\sqrt{n}} \sum_l \sum_{r} \cos(\omega_r t) \exp\big(\imath p_r ( x_j - X_l )\big)\, ^*\!\!\phi_S(x_j) \label{eq:rphase} \end{equation} \noindent where the $ X_l $ are the positions of the singularities and $ n $ is their number. In the discussion we will keep for simplicity the form (\ref{eq:bound}), but when necessary we will refer to (\ref{eq:rphase}). \par Once the functional space for single particle states has been defined, one needs the extension to many-particle systems. Let us first consider two independent particles, with wave functions $\Psi^1, \Psi^2$, each including a standard and a nonstandard part \begin{equation} \Psi^1 \,=\, \Psi_S^1 \,+\, \Psi_{NS}^1 \ \ \ \ \ \ \ ; \ \ \ \ \ \ \Psi^2 \,=\, \Psi_S^2 \,+\, \Psi_{NS}^2 \label{eq:twopar} \end{equation} \noindent To be consistent with the general scheme, the two particle wave function $\Psi^{(1,2)} $ must have the form \begin{equation} \Psi^{(1,2)} \,=\, \Psi_S^1 \Psi_S^2 \,+\, \Psi_{NS}^1 \Psi_{NS}^2 \label{eq:compos} \end{equation} \noindent which is still a solution of the wave equations for two particles, but does not contain any cross product between standard and nonstandard parts. In simple words, the total wave function is the sum of the products of the standard and nonstandard parts, respectively, and not the product of the two wave functions, each one with its standard and nonstandard part. This is a fundamental prescription, which is necessary in order to avoid contradictions. The prescription extends immediately to many-particle systems, with the appropriate symmetrizations where needed. \par The separation between standard and nonstandard components of the w.f. has to be extended to the density matrix $ \rho(t,x_j,x_{j'}) $, to avoid overlap between the two components. This means that the nonstandard component $ \rho_{NS}(t,x_j,x_{j'})$ must contain only nonstandard frequencies, which implies the expression \begin{equation} \rho_{NS}(t,x_j,x_{j'}) \,=\, \sum_{r\neq r'} \cos(\omega_r t)\cos(\omega_{r'} t) \exp\big(\imath (p_r - p_{r'}) x_j \big)\, ^*\!\rho_S(t,x_j,x_{j'}) \label{eq:rons} \end{equation} \noindent since otherwise a standard contribution would be present. In particular the nonstandard density, corresponding to $x_j = x_{j'}$, is time dependent, with zero mean value and with nonstandard frequencies. This is another reason why the nonstandard density cannot be detected by a standard probe. Furthermore, the strict connection between the scalar product (\ref{eq:spNS}) and the density or density matrix is lost. \par The choice of the normalization in Eq. (\ref{eq:rons}), which is in line with Eq. (\ref{eq:spNS}), is dictated by the requirement of instantaneous reduction, at least for macroscopic systems, as seems to be suggested by phenomenology. This is shown in Appendix D and in the discussion below. \par For the subsequent developments we need to specify further the spectrum of the wave numbers $ p_r $.
For a given physical system the spectrum is assumed to include an unlimited set of definite nonstandard differences $ p_r - p_{r'}$, each one appearing an unlimited number of times along the spectrum. Furthermore all the possible differences are assumed to be unlimited. This is possible because of the unlimited range of the nonstandard spectrum. In summary, each state of a system is represented by a standard component and a nonstandard component with a wave number spectrum with these properties. Further specifications are not necessary for the sequel. \par A similar structure can be adopted for propagating free wave packets. For a wave packet moving in the direction of the positive x-axis, starting at $ t \,=\, 0$, the nonstandard part is then assumed to be of the form \begin{equation} \psi_{NS}(t,x_j) \,=\, \sum_{l} \phi_S(0,x_l) \Psi(t,x_j-X_l) \label{eq:prop} \end{equation} \noindent where the function $\Psi$ is defined in Eq. (\ref{eq:tail}) or Eq. (\ref{eq:tail1}), with the restriction $ x_j > 0$. These terms represent the propagation of a set of singularities located at $ t - X_l$. However, because this is the NS part of the wave function, the summation in their definition, Eq. (\ref{eq:delf}), must be restricted to NS values of the momenta. This modifies the singularities only by an infinitesimal quantity, and their explicit expression of Eq. (\ref{eq:tail}) is still valid, except at an infinitesimal distance from $ t $. The singularities are weighted by the standard wave packet. The latter is normalized to 1 with respect to the summation, i.e. the usual standard wave packet is replaced by \begin{equation} \phi_S(0,x_l) \, \rightarrow \, \phi_S(0,x_l)/\big[\, \sum_{l^\prime} | \phi_S(0,x_{l^\prime}) |^2 \big]^{\frac{1}{2}} \label{eq:norm} \end{equation} \noindent If the coordinates $ X_l $ are allowed to run over \textbf{R}~, then the scalar product between the nonstandard components of two states equals the scalar product between their standard components \begin{equation} <\, \psi_{NS} \,|\, \psi_{NS}^\prime \,> \,=\, <\, \psi_{S} \,|\, \psi_{S}^\prime \,> \label{eq:spsc} \end{equation} \noindent as shown in Appendix C. In general the time $t$ is counted from a reference time, which depends on the history of the particle and has no influence on the results. The fact that the nonstandard component cannot be directly detected by a standard external probe is not in contradiction with the results of Eq. (\ref{eq:spsc}) and Eq. (\ref{eq:spNS}), since the matrix element here is between nonstandard components. In the case of Eq. (\ref{eq:tail1}), i.e. a massive particle, the functions $\Psi$ can be taken to have the same group velocity as the standard part, so that, if one neglects the spreading of the standard wave packet, the discontinuities will follow its motion. If the standard component $\phi_S$ corresponds to a wave packet that at $t \,=\, 0$ splits into two parts moving in opposite directions, as in the example of Sec. \ref{sec:nsw}, then the same expression still holds, with no restriction on $x_j$. Notice that in this case $ x_j $ can also be interpreted as the relative coordinate between the two pieces of the wave packet. \par The set of functions introduced by means of Eqs. (\ref{eq:bt}) and (\ref{eq:prop}) defines the basis for the physical wave functions. Notice that, according to the previous discussion, the energy associated with these states is that of their standard component.
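The mechanism behind the scalar products (\ref{eq:spNS}) and (\ref{eq:spsc}) can be illustrated by a minimal sketch (the full computation is given in Appendix C; here we only assume, as in Eq. (\ref{eq:expa}), a lattice of $2M+1$ sites with spacing $d$, standard parts varying on a standard scale, and unlimited momentum differences $p_r - p_{r'}$ for $r \neq r'$). The lattice sum of a pure phase then collapses to a Kronecker delta,
\begin{equation}
d \sum_j \exp\big(\imath (p_r - p_{r'}) x_j\big)\, \overline{^*\phi_S(t,x_j)}\, ^*\phi_S'(t,x_j) \,\sim\, \delta_{r r'} <\, \phi_S\, |\, \phi_S' \,>
\end{equation}
\noindent since for $r \neq r'$ the unlimited phase oscillates within every monad and the sum is infinitesimal, while for $r = r'$ one recovers the ordinary overlap. Summing the surviving diagonal terms, each weighted by $\cos^2(\omega_r t)$, reproduces Eq. (\ref{eq:spNS}).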
A physical state will be represented by a wave function that can be written as a linear combination of these basis functions. The coefficients of the expansion will be determined only by the standard components, since the nonstandard components are then automatically defined. In other words, if one expands \begin{equation} \Psi \,=\, \sum_l \, C_l \, \psi_l \label{eq:expan} \end{equation} \noindent with $ \psi_l$ a basis state, then \begin{equation} C_l \,=\, < \psi_l^S | \Psi^S > \label{eq:coef} \end{equation} \noindent with the usual meaning of probability amplitude. How this works in applications will be described in the following sections. Generalization to many-particle states is straightforward. \section{General scheme for the equations of motion.\label{sec:EM}} \subsection{The interaction matrix elements.\label{sec:mel}} \par We illustrate the process of measurement and the corresponding reduction of the w.f. within the extended Quantum Mechanics outlined in the previous sections. For definiteness one can have in mind the measurement of the position on a screen of a particle initially described by a free wave packet. The particle initially does not have a definite position, while at the end of the interaction with the screen its position is concentrated around a definite place. The final (approximate) position is indicated by some macroscopic modification or signal from the screen. This is reduction. In general the initial state is a superposition of eigenstates of a given operator, which is usually considered as corresponding to a given "observable". For an ideal apparatus, at the end of the interaction (alias "measurement") the state of the system is an eigenstate of the observable, while the apparatus is in a macroscopic state corresponding to that eigenstate. \par Along the time axis the system to be measured starts to interact with the apparatus. If the evolution remains linear and coherent, i.e. according to the standard part of the Schr\"odinger equation (or its relativistic versions), the total wave function will be a superposition of states, each obtained by evolving a single initial eigenstate. This means that each component of the wave function is evolving towards a macroscopic state, distinct enough if the apparatus is to work properly. At a certain stage this superposition should be reduced to one single component. This process of reduction is triggered by the interaction between the nonstandard parts of the component wave functions. To see this, let us consider the interaction matrix elements between the nonstandard parts of the w.f. of two components. In general we can write the matrix element of the interaction $V$ between component $l$ and component $m$ as the sum of a standard and a nonstandard contribution \begin{equation} < \Psi_l | V | \Psi_m > \,=\, < \Psi_l | V^S | \Psi_m > \,+\, < \Psi_l | V^{NS} | \Psi_m > \label{eq:int} \end{equation} \noindent where $ V^S $ and $ V^{NS} $ are determined by the standard and nonstandard parts of the density matrices, respectively. Eq. (\ref{eq:int}) is just the matrix element of the interaction between the states $ \Psi_l $ and $ \Psi_m $. Only the standard component of the wave functions contributes to the first term, while only the nonstandard one contributes to the second. The cross terms between standard and nonstandard components can be neglected. Because the nonstandard components are explicitly time dependent, this second term contains an additional time dependence besides the one coming from the evolution of the physical states.
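A heuristic one-line argument for neglecting the cross terms (the systematic version is the orthogonality argument of Sec. \ref{sec:sfs}) is that any pairing of a standard quantity with a nonstandard component involves a phase with unlimited frequency or momentum, whose average over any standard scale is infinitesimal. Schematically, for a smooth standard function $f$ and an unlimited frequency $\Omega$,
\begin{equation}
\frac{1}{T} \int_0^{T} \exp(\imath \Omega t)\, f(t)\, dt \,\sim\, 0
\end{equation}
\noindent for any standard interval $T$, so the standard--nonstandard cross contributions do not survive at the standard level.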
To be specific we assume that the apparatus remains in a bound state. In general the interaction between the two w.f. can be obtained as a sum of two-particle interactions, which in this case involves unlimited values of the transferred wave numbers. It can be expected therefore that the interaction is of infinitesimal strength, as well as of infinitesimal range. Of course we do not actually know the interaction in this extreme regime, and it will be schematized as \begin{equation} v^{NS} \,=\, v_0 \Delta(x_j - x_{j'}) \label{eq:vNS} \end{equation} \noindent where $ v_0 $ is an infinitesimal and $ \Delta $ the nonstandard delta function. Under this assumption, as shown in Appendix D, the nonstandard term of the matrix element can be written \begin{equation} < \Psi_l | V^{NS} | \Psi_m > \,\equiv \xi_{lm}(t) \,=\, \sum_s A_{lm}^{(s)}\, \exp (\imath \Omega_s t) \label{eq:nsme} \end{equation} \noindent where the summation is over an unlimited set of nonstandard frequencies $ \Omega_s $, while the amplitudes $ A_{lm}^{(s)} $ are complex numbers, to be specified below. This general expression holds also in three dimensions (Appendix D). As expected, the matrix element is a nonstandard function of time, with support on \textbf{R}$^*$ along the time axis. Within a monad of a given standard value $t$ of time, the function (\ref{eq:nsme}) takes an infinite set of values, each one belonging to the hyperfinite time axis $\rm \Pi_T $. If for each monad of the standard time axis we pick one point, the resulting time sequence of values of this matrix element can be considered a function of the standard time variable. For each selection of points along the nonstandard hyperfinite time axis, we get a different function of the standard variable $t$. Since the function (\ref{eq:nsme}) is rapidly oscillating, each of these functions will be quite irregular. All this suggests that the set of functions so constructed can be representative of some sort of stochastic process in \textbf{R} . To see to what extent this is actually the case, we first need to fix the rule for the choice of the point in each monad. We will adopt the most natural one, i.e. the points are chosen with equal frequency. The set of choices will be the ensemble of 'events' or 'trajectories', over which we have to average the stochastic evolution. Notice that for a smooth function, in particular a standard function like the standard matrix element, this procedure just reproduces the function itself (once the standard part of each function value is taken), and no stochasticity can be present. \par Further, we have to check first that the mean value of the supposed stochastic process is zero at each standard time value, and second that the correlation function has the proper structure.\par As to the first requirement, we consider the average of each exponential function over the set of points of the $\rm \Pi_T$-monad belonging to a generic time $ t $. These points can be written as \begin{equation} t_j \,=\, t_h \,+\, j \times h_T \ \ \ \ \ \ \ \ \ \ \ \ \ j \,=\, -n_0,\ \ -n_0+1, \cdots\cdots n_0-1,\ \ n_0 \label{eq:tmon} \end{equation} \noindent with $ t_h $ the point of $\rm \Pi_T $ closest to $t$ and $ n_0 \times h_T $ still an infinitesimal.
Then one gets \begin{eqnarray} <\, \exp(\imath \Omega_s t)\, > & = & \frac{1}{2n_0} \sum_{l = -n_0}^{n_0} \exp \big[\imath \Omega_s ( t_h + l h_T )\big] \nonumber\\ & = & \exp \big( \imath \Omega_s (t + \eta) \big)\, \frac{\sin\big(\Omega_s \eta\big)}{\Omega_s \eta} \,\sim\, 0 \end{eqnarray} \noindent where $ \eta = n_0 h_T $ is still an infinitesimal, while $ \Omega_s \eta $ is unlimited. In fact, if the set of frequencies has a lower bound $ \Omega_0 \,\sim\, N^{\alpha} $, with $ 0 < \alpha < 1 $, we can always choose $ \eta \,\sim\, N^{ -\beta } $, with $ 0 < \beta < \alpha $. The summation over $ s $ involves the rapidly oscillating sine function above, and then indeed \begin{equation} <\, \xi_{lm} (t)\, > \,\sim\, 0 \label{eq:av0} \end{equation} \noindent This result indicates only that the average value of $ \xi $ is infinitesimal with respect to the average of its absolute value, which is enough for the subsequent development. \par Next, let us consider the correlation function of the (stochastic) variable $ \xi $ at two different times. According to the construction above, the correlation function is defined as \begin{equation} \Xi_{l'm',lm}(t,t+\tau) = < \overline{\xi}_{l'm'}(t) \xi_{lm}(t+\tau) > = \frac{1}{2n_0}\sum_j \overline{\xi}_{l'm'}(t_j) \xi_{lm}(t_j+\tau) \end{equation} \noindent where the over-line indicates complex conjugation and the $ t_j $ are as in (\ref{eq:tmon}). Following the same procedure as above, one sees that the dominant contribution to $ \Xi $ comes from the terms with $ s = s'$, while the others are negligibly small. Furthermore, it turns out that the summation includes random phases if $ (l',m') \neq (l,m) $, and the correlation vanishes. Finally one gets \begin{equation} \Xi_{l'm',lm}(t,t+\tau) \,=\, \delta_{ll'}\delta_{mm'} \sum_s |\, A_{lm}^{(s)}\, |^2 \cos( \Omega_s \tau ) \label{eq:corr} \end{equation} \noindent For each index $s$ the summation includes both $\Omega_s$ and $-\Omega_s$, so that the exponential can be replaced by the cosine function. We now analyze the properties of this correlation function. First of all, for $ \tau \,=\, 0 $ one gets \begin{equation} \Xi_{lm,lm}(t,t) \,=\, \sum_s |\, A_{lm}^{(s)}\, |^2 \,=\, \Xi_0^{lm} \label{eq:chi0} \end{equation} \noindent which is a definite number in \textbf{R}$^*$. For an infinitesimal interval belonging to monad(0) the correlation function remains close to $\Xi_0$. More precisely, if $ |\tau| < \pi/2\Omega_M $, where $\Omega_M$ is the largest value of the frequency set, then all the cosine functions remain close to 1. On the other hand, if $ |\tau| > \pi/2\Omega_m $, where $\Omega_m$ is the lowest value of the frequency set (which is still unlimited), then the arguments of the cosine functions vary so rapidly that the correlation function vanishes. This is indeed the case if the distribution of frequency values has no regularity, i.e. it is essentially random. Notice that the set of frequencies was assumed to be unlimited, and therefore the phases should be distributed evenly over the whole interval $(0,2\pi)$, if they have a random distribution. This is our assumption. From its behavior, $\Xi(\tau)$ can be identified with a pre-delta function, as defined e.g. in ref. \cite{Hoskins}. The simplest pre-delta function can be defined as follows \begin{equation} \theta(\tau) = \Bigg\{ \begin{array}{ll} K & \ {\rm if} \ \ \ |\tau| < 1/2K \\ \ & \\ 0 & \ {\rm otherwise} \end{array} \label{eq:pred} \end{equation} \noindent where $ K $ is an unlimited number.
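As a quick arithmetic check, note that $\theta$ carries unit mass,
\begin{equation}
^*\!\!\int d\tau\, \theta(\tau) \,=\, K \times \frac{1}{K} \,=\, 1
\end{equation}
\noindent which is what allows the sampling property below to hold with a finite proportionality constant.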
The correlation function $ \Xi(\tau) $ can be considered a "smoothed" version of $\theta(\tau)$, if properly re-normalized. The pre-delta function has the sampling property, which can be used as its definition \begin{equation} st\Bigg(\ ^*\!\!\!\int d\tau \theta(\tau) ^*g(\tau)\Bigg) \,=\, g(0) \,=\, \int d\tau \delta(\tau) g(\tau) \end{equation} \noindent where the integral on the left hand side is the nonstandard one, $^*g(\tau)$ is the canonical extension of a smooth standard function $g(\tau)$ and $\delta(\tau)$ the usual Dirac delta function. In a similar way one has \begin{equation} st\Bigg(\ ^*\!\!\!\int d\tau \Xi(\tau) ^*g(\tau)\Bigg) \,=\, C_0 g(0) \,=\, C_0 \int d\tau \delta(\tau) g(\tau) \label{eq:sample} \end{equation} \noindent where $ C_0 \,=\, \pi \Xi_0/2\Omega_M $. Therefore the standard part of $\Xi(\tau)$ in \textbf{R}~ is proportional to a Dirac delta function. \par This result, together with the property (\ref{eq:av0}), entails that the nonstandard part of the matrix element can be described as an Ito stochastic process \cite{stoch}. More precisely, the matrix element is the formal derivative of a stochastic process $B$ \begin{equation} \xi_{lm} \,=\, d B_{lm}/d \tau \label{eq:derB} \end{equation} \noindent Of course the derivative in Eq. (\ref{eq:derB}) is ill-defined and has only a symbolic meaning. The correct statement is that the evolution equation of the quantum state cannot be an ordinary differential equation but must contain the contribution of a stochastic differential. In any case, from Eq. (\ref{eq:sample}) the stochastic process $ B $ is characterized by the variance of the differentials \begin{equation} d\overline{B}_{lm}(t) d B_{l'm'}(t) \,=\, \delta_{ll'}\delta_{mm'} \sigma_{lm}^2 dt \label{eq:var} \end{equation} \noindent where the variance $ \sigma_{lm} $ is directly related to the matrix elements $A_{lm}^{(s)}$, as specified in the sequel. \subsection{Stochastic and non-stochastic evolution. \label{sec:stoc}} Let us consider the interaction of a particle with a system of $A$ particles, which is taken as representative of a possible "detector". At some initial time $ t = 0 $ the particle is described by a wave packet $ \Psi $ moving freely towards the system. One can expand the wave packet in a set of states, in particular a set of more localized wave packets $ \Psi_l $ \begin{equation} | \Psi > \,=\, \sum_l | \Psi_l > C_{l} \label{eq:inexp} \end{equation} \noindent where $ C_{l} $ is the overlap between the standard parts \begin{equation} C_{l} \, =\, < \Psi_l^S | \Psi^S > \label{eq:over} \end{equation} \noindent For a single localized wave packet the particle will interact with the apparatus and a definite time dependent state will be evolving. For the case of an initial superposition of wave packets, if the evolution is linear and coherent, the time dependent state will be the linear combination of the evolved components. As indicated in the previous section, the nonstandard interaction between these components produces a stochastic evolution beyond the standard one. One can use the interaction representation to include implicitly the dynamics produced by the standard part only, i.e. one can introduce the evolution operator $ U_S(t)$ determined by the standard interaction only and the amplitudes of the overlap between the state $\Psi$ and the evolved basis states \begin{equation} C_{l}(t) \,=\, < \Psi_l | U_S^\dag(t) | \Psi > \label{eq:amint} \end{equation} \noindent The remaining evolution is determined by the nonstandard interaction.
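In this representation the standard dynamics is absorbed in $U_S(t)$, and the residual time dependence of the amplitudes comes only from the nonstandard coupling; schematically (a formal expression, since the right hand side is ill-defined for the reasons discussed after Eq. (\ref{eq:derB})),
\begin{equation}
\imath\, \frac{d C_l(t)}{dt} \,=\, \sum_m < \Psi_l |\, U_S^\dag(t)\, V^{NS}\, U_S(t)\, | \Psi_m >\, C_m(t)
\end{equation}
\noindent and it is precisely this ill-defined derivative that the stochastic differential introduced below replaces.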
In view of the remarks above, one could write \begin{equation} \imath d C_l(t) \,=\, \sum_m d B_{lm} C_m(t) \label{eq:stochev} \end{equation} \noindent where now $ d B_{lm} $ is determined by the matrix elements of the nonstandard interaction in the interaction representation. However, in view of Eq. (\ref{eq:var}), the square of the stochastic component can give a contribution proportional to the differential $dt$, and therefore the evolution equation we are looking for must be written in general as \begin{equation} \imath d C_l(t) \,=\, \sum_m d B_{lm} C_m(t) + \imath\sum_m D_{lm} C_m dt \label{eq:rightone} \end{equation} \noindent where the matrix $ D_{lm} $ has to be determined. It will be argued that this stochastic part of the evolution is relevant only for the w.f. reduction process.\par Stochastic evolution of the amplitudes has been considered in the literature by several authors. Here we are proposing the physical origin of the stochastic part of the dynamics. One of the most common features of stochastic processes is the "martingale" property, which in the present model acquires a special physical meaning, as will be apparent in the sequel. This feature is characterized by the constancy of the mean of the squared amplitudes, in our case of $ | C_l(t) |^2 $, along the trajectories of the stochastic process. We will then choose the matrix $ D_{lm} $ of Eq. (\ref{eq:rightone}) by demanding the fulfillment of the martingale property. To this purpose let us consider the conjugate of Eq. (\ref{eq:rightone}) \begin{equation} \imath\, d \overline{C}_l(t) \,=\, - \sum_m d \overline{B}_{lm} \overline{C}_m(t) + \imath\sum_m \overline{D}_{lm} \overline{C}_m dt \label{eq:rightonec} \end{equation} \noindent and apply the Ito algebra for the calculation of the stochastic differential of a composite variable. One gets \begin{equation} \begin{array}{ll} d(\overline{C}_l C_l) &\,=\, \overline{C}_l dC_l \,+\, C_l d\overline{C}_l \,+\, d\overline{C}_l dC_l \\ \ &\,=\, \overline{C}_l \big( - \imath \sum_m d B_{lm} C_m \,+\, \sum_m D_{lm} C_m dt \big) \\ \ &\ \ \ \,+\, C_l \big( \imath \sum_m d \overline{B}_{lm} \overline{C}_m \,+\, \sum_m \overline{D}_{lm} \overline{C}_m dt \big) \\ \ &\ \ \ \,+\, \sum_{m m'} d \overline{B}_{lm}\, d B_{lm'}\, \overline{C}_m C_{m'} \,+\, O(dt^2) \end{array} \label{eq:martin} \end{equation} \noindent If we now consider the average of this equation and take into account Eqs. (\ref{eq:av0},\ref{eq:derB},\ref{eq:var}), the martingale requirement implies \begin{equation} \overline{C}_l \sum_m D_{lm} C_m \,+\, C_l \sum_m \overline{D}_{lm} \overline{C}_m \,=\, - \sum_m \sigma_{lm}^2 | C_m |^2 \label{eq:martineq} \end{equation} \noindent Because the right hand side of the equation is independent of the actual value of $ C_l $, we must have \begin{equation} D_{lm} \,=\, - \frac{1}{2\overline{C}_l} \sigma_{lm}^2 \overline{C}_m \label{eq:Dlm} \end{equation} \noindent Notice that the martingale property refers to the average over the stochastic trajectories. However, once this choice for the matrix $ D_{lm} $ is made, on the right hand side of Eq. (\ref{eq:martin}) only the terms linear in $ dB_{lm} $ will remain. It follows that in each stochastic trajectory one has \begin{equation} d \sum_l | C_l |^2 \,=\, 0 \label{eq:sumsq} \end{equation} \noindent which means that if at the initial time this summation is 1, it will remain 1 during the whole stochastic evolution.
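As a consistency check, one can verify directly that the choice (\ref{eq:Dlm}) satisfies the martingale condition (assuming, as implied by Eq. (\ref{eq:corr}), that the $\sigma_{lm}^2$ are real):
\begin{equation}
\overline{C}_l \sum_m D_{lm} C_m \,+\, C_l \sum_m \overline{D}_{lm} \overline{C}_m \,=\,
- \frac{1}{2} \sum_m \sigma_{lm}^2\, \overline{C}_m C_m \,-\, \frac{1}{2} \sum_m \sigma_{lm}^2\, C_m \overline{C}_m \,=\,
- \sum_m \sigma_{lm}^2 | C_m |^2
\end{equation}
\noindent which is precisely Eq. (\ref{eq:martineq}).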
\par Another fundamental property that follows from the martingale requirement is the evolution of the product of two squared amplitudes averaged over the stochastic trajectories, which can again be derived within the Ito algebra. One gets, for $ l \neq m $, \begin{equation} d < | C_l(t) |^2 | C_m(t) |^2 > \,=\, -\sigma_{lm}^2 < | C_l(t) |^2 | C_m(t) |^2 > dt \label{eq:prod} \end{equation} \noindent which implies an exponential decay to zero for each pair of distinct states. This finally means that at most one of the amplitudes can be different from zero at the end of the stochastic process. Indeed this property of the average is automatically transferred to each trajectory, since the products are non-negative and therefore none of them can deviate from these particular final averages.\par If we separate the amplitudes into modulus and phase, the corresponding formalism is equivalent to the one of ref. \cite{Pearle}. The present model can be considered an explicit realization of that general scheme. Notice that the martingale requirement alone is enough to obtain all the results just described, and that the additional stochastic contribution is a natural consequence of the assumptions of the model, i.e. the introduction of the nonstandard hyperfinite real axis, and the corresponding nonstandard frequencies. In this way, as will be apparent in the sequel, stochasticity and reduction are put in a general physical framework. Furthermore the meaning of the stochastic process will acquire features specific to the model, which are essential for the description of the wave function reduction. The assumption of the martingale property can be justified if the reduction process is instantaneous, since then the properties of the system cannot change during the infinitesimal reduction time. \par The results can be summarized as follows. If the initial state is a superposition of a set of states \begin{equation} | \Psi (0) > \,=\, \sum_l | \Psi_l > C_l(0) \label{eq:insup} \end{equation} then \vskip 0.2 cm \par\noindent 1. The summation of the squared amplitudes $ \sum_l | C_l(t) |^2 $ remains constant during the measurement process, so if it is initially equal to 1, it remains 1. \vskip 0.3 cm \par\noindent 2. The final amplitudes are all equal to zero except one (with a modulus square equal to 1), i.e. a single state $ | \Psi_l > $ is reached. \vskip 0.3 cm \par\noindent 3. Each state $ | \Psi_l > $ is reached with a relative frequency equal to the initial modulus square $ | C_l(0) |^2 $ (Born's rule). \vskip 0.2 cm \par\noindent To get this result one has to assume that the dynamics can be described within the considered set of states. \par All this reproduces the fundamental properties of what is usually assumed to occur in the Q.M. measurement process, whatever meaning is given to this word. \par Some comments are needed for point 2 above. In agreement with ref. \cite{Pearle}, this property is a consequence of the relation (\ref{eq:prod}), which can be derived from the coupled equations (\ref{eq:rightone},\ref{eq:rightonec}). This relation implies the asymptotic vanishing of all the $ | C_l(t) |^2 $ except at most one. This is reduction, at least for large enough time. For unlimited $ \sigma_{lm}^2 $ one gets the instantaneous reduction mentioned above. To be more precise, the reduction in this case occurs in an arbitrarily small (standard) time interval.
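To make points 2 and 3 explicit, Eq. (\ref{eq:prod}) integrates to
\begin{equation}
<\, | C_l(t) |^2 | C_m(t) |^2 \,> \,=\, | C_l(0) |^2 | C_m(0) |^2\, \exp ( - \sigma_{lm}^2 t )
\end{equation}
\noindent so every product of distinct squared amplitudes decays exponentially with rate $\sigma_{lm}^2$, and asymptotically each $ | C_l |^2 $ can take only the values 0 or 1 on each trajectory. Combining this with the martingale property, $ < | C_l(t) |^2 > \,=\, | C_l(0) |^2 $ at all times, the fraction of trajectories ending in the state $ | \Psi_l > $ must equal $ | C_l(0) |^2 $, which is point 3; this step, spelled out here for clarity, follows the argument of ref. \cite{Pearle}.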
\par From this derivation of the reduction process it appears that all the states $l, m, \cdots$ must be interconnected among each other by the stochastic dynamics, namely none of the $ \sigma_{lm} $ should vanish. The meaning of the vanishing of e.g. $ \sigma_{lm} $ can be clarified by considering the relation for the Ito stochastic differentials \begin{equation} \begin{array}{ll} d < | C_l(t) |^2 | C_m(t) |^2 > &\,=\, < | C_l(t) |^2\, d | C_m(t) |^2 > \,+\, < | C_m(t) |^2\, d | C_l(t) |^2 > \\ &\ \ \ \,+\, < d | C_l(t) |^2\, d | C_m(t) |^2 > \end{array} \label{eq:uncor} \end{equation} \noindent The first two terms on the r.h.s. vanish because of the martingale property, and the vanishing of $ \sigma_{lm}^2 $ implies that the two stochastic differentials are uncorrelated. \par At this point we need to specify the variances $ \sigma_{lm} $. In Appendix D the matrix elements $ A_{lm}^{(s)} $ are estimated in the particular case of the position measurement of a particle, but some features are expected to be general. In that particular case the square of the variance can be written \begin{equation} \sigma_{lm}^2 \,=\, f_{lm}\, v_0^2\ \lambda^{2(A - 1)} \label{eq:sigma} \end{equation} \noindent where $ f_{lm} $ is a finite factor, $ v_0 $ is the infinitesimal strength of the nonstandard interaction as introduced in Eq. (\ref{eq:vNS}) and $\lambda $ is the unlimited number of nonstandard frequencies. In this way the matrix elements and the variances scale with the number of degrees of freedom. The crucial point is that the matrix elements, and therefore $ \sigma_{lm}^2 $, depend on the size of the system, in this case on the number $ A $ of the particles involved in the evolved components. This number can be considered as evolving adiabatically in time as the interaction process develops. In the considered example the number of excited degrees of freedom in the apparatus increases with time coherently in each component, toward a macroscopic size, as it must for a proper detector. For $ A = 1 $ the variance is therefore infinitesimal, which means that the stochastic interaction has no effect on the time evolution, which is then determined only by the standard component. It follows that in elementary interaction processes between two particles no wave function reduction is possible. For $ A > 1 $ the variance can diverge, provided $ \lambda \sim N^\gamma $, with $ \gamma > 1/(A - 1) $. For a macroscopic system this lower limit of the exponent $ \gamma $ is quite small, and therefore the $ \sigma_{lm} $ are likely to become unlimited at a certain stage of the evolution. We interpret this as an indication that the reduction process is 'instantaneous'. As mentioned above, this means that the reduction in this case occurs in an arbitrarily small (standard) time interval. Although this result could appear unphysical, it has been the standard assumption in ordinary quantum mechanics that the "quantum jumps" are indeed instantaneous \cite{jumps}. However this property of the reduction is a consequence of the assumption of an instantaneous interaction. Therefore the result actually implies that the reduction time is limited from below only by the interaction time which characterizes the considered physical case. For simple systems some indication of a finite transition time has been found, as we will discuss later. \par The picture of the measurement process that arises from the development above can be described as follows.
As the interaction between system and apparatus starts to evolve according to the scheme of Eq. (\ref{eq:vonN}), the total wave function can be written \begin{equation} \Psi(t) \,=\, \sum_n C_n \phi_n(t) \label{eq:start} \end{equation} \noindent where $ \phi_n $ is the wave function that the apparatus+system would follow if the initial state of the system had been $ n $, i.e. according to the Schr\"odinger equation. When the evolution has developed enough, the set of states $ \phi_n $ can be identified with the states $ l,m,\cdots $ of the stochastic process, which will select only one of the $ \phi_n $, i.e. only one of the $ C_n $ will be nonzero, with modulus equal to 1. The stochastic process starts because the nonstandard parts of the component wave functions have a non-vanishing overlap, and eventually it will be instantaneous, in the sense mentioned above. The instant at which the stochastic process occurs lies within the full interaction time, but its exact position cannot be fixed by the present approach. It has to be connected with the degree of evolution of each one of the possible components towards the final output, as we will specify later. Notice that it is the apparatus that fixes the basis states of the quantum superposition. Indeed the apparatus must be sensitive to a specific quantum number of the detected particle and insensitive to other quantum numbers if it is to work properly. This property, which an ideal apparatus should have, is what fixes the particular evolution of the measurement process. This means that the system to be detected must be able to interact at the standard level with the apparatus in order that the reduction can occur. \par The above picture is valid for a detection which is due to the interaction of a system with an apparatus, where the evolution toward a macroscopic response of the apparatus induces a stochastic reduction of the wave function. This has to be considered distinct from the case where the apparatus only splits the incoming wave function coherently into different components, for which no stochastic process can occur. A typical case is a Stern-Gerlach experiment at the moment of the splitting of the wave packet into two separate components, corresponding to the two values of the spin projection. In this case the reduction occurs only at the moment of the particle detection. In fact after the splitting the w.f. is still a coherent superposition of the two components, as clearly indicated by the results of the experiment discussed in Sec. \ref{sec:attenuation}. \par In the case of the modified Stern-Gerlach experiment, discussed in Section \ref{sec:SG}, the reduction occurs at the first half-screen, if it indeed acts as a detector. In fact the splitting at the magnet produces two separate components of the initial wave packet, which are joined by a nonstandard component of the w.f., as described in Section \ref{sec:nsw}. The latter interacts with the nonstandard component of the screen w.f., which produces the reduction to one of the two wave packets of the superposition. Accordingly, the particle will be detected or it will continue towards the last detection at the distant screen. This example will be further discussed in the next section. \par Another example to be mentioned is the two-slit experiment. In this case a first reduction occurs at the screen bearing the two slits.
The stochastic process selects between the two possibilities: the detection of the particle at one of the positions outside the slits, or the wave packet corresponding to the coherent exit of the particle from the two slits. The final reduction occurs at the second screen, where the interference pattern is detected. \par In the same ref. \cite{Pearle} the problem of superluminal transmission, which seems to be present in this approach, is discussed. For this purpose two entangled identical experiments are considered. During the reduction process one experiment can influence the other, and if the experiments are far apart this seems to imply superluminal transmission. The conclusion one can draw is that the original formulation of Eq. (\ref{eq:rightone}) violates relativity. However one can notice that entanglement is also at the basis of the EPR-type experiments, where Bell's inequality is violated. They are therefore incompatible with relativity. Quantum mechanics is in perfect agreement with the measured correlations, and the procedure to calculate the correlations indeed assumes instantaneous reduction of the wave function. This is also suggested by the detection in coincidence of the two entangled particles with an apparatus of two identical arms. Although entanglement cannot be used to transmit superluminal messages \cite{Ghir}, no relativistically invariant theory can predict these instantaneous quantum mechanical correlations, and indeed the correlation is not relativistically invariant \cite{Friis1,Friis2}. Therefore we are forced to accept violation of relativity in the wave function reduction process, whatever theory is adopted for its description, or alternatively we have to replace the reduction with some other, unknown process. \par To clarify the mechanism of the wave function reduction, it can be useful to discuss a relevant phenomenon related to the particle detection process, which underlies the discussion above. Let us consider the beam deflection from a diffraction lattice. When a particle wave packet moves towards the surface of the lattice, it is split into a part that is deflected and a part that penetrates the lattice. The whole wave packet then undergoes the reduction process, due to the coupling of its nonstandard component with that of the scattering centers (e.g. ions or nuclei). The wave function is therefore reduced to the one penetrating the lattice or to the deflected one. The latter, if realized, will contribute to the diffraction pattern. From the point of view of the standard part of the incoming wave packet, i.e. ordinary quantum mechanics, the process corresponds to the two possibilities of an elastic deflection or an inelastic interaction of the particle with the lattice. The latter is equivalent to a detection of the particle around a given place in the lattice, i.e. a position measurement. The strengths of the reflected and transmitted components are determined by the standard part of the wave function, i.e. ordinary quantum mechanics, which assumes that the eventual wave function reduction occurs when the wave packet reaches an external detector, where the diffraction pattern is recorded. Once the nonstandard components are included, the reduction takes place at the diffraction lattice, with relative strengths that follow the prediction of quantum mechanics. \par To reinforce this point one can mention the experiment of ref. \cite{Rempe}, where the diffraction is produced on a Rb atom beam by a lattice of light standing waves.
In this case the lattice produces a splitting of the incoming beam into reflected and transmitted components and no reduction takes place at the lattice. Then a second lattice is used to produce different paths, which show interference patterns. In this arrangement the 'which way' question, typical of QM, can be elucidated. Leaving this issue apart, as it has been debated for a few decades \cite{Quach}, we only notice that in this case the coherence is never lost until the final detection. In general, the 'absence of knowledge' of which way the system has followed simply means that the wave function is not completely reduced to only one of the possible paths that the system can follow, until the final detection. \subsection{The physics of the wave function reduction.\label{sec:redu}} \par In this sub-section we discuss some essential points about the reduction process. To be specific, let us consider again the process of the detection of a particle to measure its position. As the particle wave packet interacts with the detector, each component starts to develop according to the Schr\"odinger equation, and the state of the whole system particle+detector will be the superposition of these components. At a certain stage the coupling between the nonstandard parts of the components produces its instantaneous reduction. The condition for the detector to work properly is that its response is sufficiently different for each different localized wave packet. We can assume that all these possible localized wave packets are degenerate in energy. Notice that the reduction acts on the total wave function, which of course entails the reduction of the particle wave packet. For a general reduction process, the initial wave function will be a superposition of different eigenfunctions of a given observable (momentum, angular momentum, spin, energy, and so on). An ideal reduction will select one of the eigenfunctions, and it is enough that the detector has a different response for each eigenvalue to have a proper apparatus for the measurement of the considered observable. Of course the response of the detector must be readable at the macroscopic classical level. The reading of the response could require other reduction processes at the macroscopic level, but the selection of the eigenvalue is in any case performed.\par As discussed in the previous subsection, another condition for a proper detection apparatus is that it is essentially macroscopic, because it must have a macroscopic final response for each initial eigenvalue. It is this property that identifies the basis states among which the stochastic process can take place, i.e. it fixes the proper superposition of states. \par It has to be stressed that the reduction process occurs also in the situation where the response of the apparatus is of different character for each component of the initial wave function. This is the case for the modified Stern-Gerlach experiment of section \ref{sec:SG}. In this arrangement one of the standard components of the split wave packet overlaps with the half-screen, while the other moves freely. Then the former component develops initially into an excited many-body state, while the latter remains in the free state. The splitting has produced a nonstandard "tail", through which the two components interact.
In this case one evolved component is in a bound state, the other is free, but the nonstandard interaction can still produce a stochastic process, which ultimately produces the reduction, since one of the two components contains a macroscopic number of degrees of freedom.\par It has to be noticed that if the first half-screen does not act as a proper detector, e.g. too few degrees of freedom are excited (think of a single atom), the reduction cannot take place, and the half-screen will remain entangled with the wave packet, until the final detection occurs. For a semi-transparent half-screen the situation will be similar to the experiment of Sec. \ref{sec:attenuation}, which corresponds to a reduction between the detection state and the coherent superposition of the two wave packets. \par Unfortunately the scheme does not allow one to specify more precisely when a given physical object can be considered 'macroscopic', except for the somewhat trivial condition that it has to include at least two particles. It is compatible with this picture that the reduction occurs at a universal value of the number of excited degrees of freedom in the detector, but there is no obvious way to check this. \par One can remark from this discussion that the separation between observer and physical object in the observation process is based on a chain of reduction processes, where the coherent evolution is in general interrupted several times by a stochastic evolution, until a macroscopically perceptible level is reached. It has to be kept in mind that the quantum representation of the physical objects is never abandoned, even at the macroscopic level, where, of course, the underlying quantum nature of the object can no longer be apparent. \section{EPR, null experiments, decay and cats.\label{sec:para}} In this section we consider a set of experiments or thought experiments where the reduction process is in action, and discuss how they can be interpreted and how the so-called 'paradoxes' can be resolved.\par \subsection{Entanglement.} Let us start with the EPR experiments, for definiteness in the case of spin entanglement. In this case the wave function of the two particles can be written, assuming for simplicity Gaussian wave packets, \begin{equation} \Psi \,=\, \Phi_{CM} \big[ \phi(r,\uparrow\downarrow) \,-\, \phi(r,\downarrow\uparrow) \big] \label{eq:entang} \end{equation} \noindent where $ \Phi_{CM} $ is the wave function of the center of mass, $ \phi $ the relative wave function and the arrows indicate the orientations of the spins. The wave function contains the superposition of two different orientations of the spins, which corresponds to total spin zero. The relative wave function contains a nonstandard part, in the form discussed in Sec. \ref{sec:nsw}. It corresponds to a set of singularities that propagate in opposite directions, and therefore this nonstandard wave function extends from the position of one particle to the other, as determined by the standard parts of the two wave functions. The latter correspond to two free localized wave packets which move with their group velocity, and therefore the relative wave function $ \phi(r) $ is peaked around the distance between these two wave packets. Let us consider the measurement of the spin of one of the two particles, i.e. at one side of the experimental arrangement. The coupling between the detector and the particle must be characterized by a spin-dependent response, and therefore the nonstandard part includes a matrix element between the two spin response w.f.
$ \psi_R $ \begin{equation} V_{EPR} \,=\, < \psi_R \uparrow\downarrow | V_{NS} | \psi_R \downarrow\uparrow > \label{eq:EPRmat} \end{equation} \noindent According to the general discussion on the wave function reduction of Sec. \ref{sec:stoc}, this induces a stochastic evolution, which finally reduces the wave function to one of the two configurations, with equal probability. This reduction process is equivalent to a strict correlation between the spins of the two particles that are far apart. This correlation looks like a natural consequence of the fact that the nonstandard part of the wave function extends all the way between the two particles, which also gives a vivid picture of the entanglement. Of course what produces the spin correlation is the finite extension of the wave function and its indivisibility, i.e. a modification of any part of the wave function introduces a new state vector of the system. It has to be stressed that the nonstandard part, described above, is present for two generic wave packets, entangled or not. \par \subsection{Null experiments.} As in the examples of Sec. \ref{sec:null}, a null experiment is one where a "measurement" on a physical system is performed without apparent interaction with it. In terms of the wave function, it appears that in such experiments the reduction takes place without interaction with the system. Besides the cases of Sec. \ref{sec:null}, one of the most frequently reported examples is the possibility of discovering the presence of a physical object despite the certainty that no interaction with it could have occurred. The original version of this case was a thought experiment with an optical Mach-Zehnder interferometer proposed in refs. \cite{EV,V}. In the thought arrangement an object is inserted in one of the paths of the interferometer, which was initially set to the "dark" condition, i.e. destructive interference, in one of the final branches. If one then observes a photon in this dark branch, one can be sure that it has not been absorbed by the object and has moved along the other path. This means that we can know that an object was present even though no interaction between the photon and the object could have occurred. In other terms, the reduction of the wave function has taken place without any apparent interaction, a typical null experiment. To make the situation dramatic, the original proposal was to use as object a bomb sensitive to the absorption of a photon. An experimental realization (without bomb) of this proposal was performed in ref. \cite{Kwi} with a Michelson interferometer and a special device to handle single-photon states. Asymmetric beam splitters were also considered, and the results are in agreement with quantum mechanical predictions. This demonstrates that the wave function reduction can indeed occur without apparent interaction, and makes the non-local character of the reduction apparent. As stressed in ref. \cite{Kwi}, one of the main underlying assumptions at the basis of this quantum mechanical behavior is the indivisibility of the wave function. \par If one extends Quantum Mechanics by including the nonstandard part of the wave functions, then the reduction process takes place at the object because of the coupling between the nonstandard part of the photon wave function and that of the object. The corresponding explicit expression of the matrix elements which enter the stochastic evolution is given in Appendix D.
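It may be useful to spell out the simplest two-component instance of the scheme of Sec. \ref{sec:stoc}, which covers both the EPR configuration above and the interferometer discussed below (a minimal sketch, with labels $1,2$ for the two configurations). For equal initial weights, $ | C_1(0) |^2 = | C_2(0) |^2 = 1/2 $, Eq. (\ref{eq:prod}) gives
\begin{equation}
<\, | C_1(t) |^2 | C_2(t) |^2 \,> \,=\, \frac{1}{4}\, \exp ( - \sigma_{12}^2 t )
\end{equation}
\noindent so the product of the two squared amplitudes is driven to zero, while the martingale property keeps $ < | C_1(t) |^2 > \,=\, < | C_2(t) |^2 > \,=\, 1/2 $: each of the two outcomes is selected with probability 1/2, as stated above.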
In the Mach-Zehnder arrangement the incoming photon wave packet is divided into two wave packets traveling along the two paths of the apparatus. Assuming equal weight of the two parts, at the arrival of one of them at the object position the total wave packet has probability 1/2 of being reduced to the absorbed state, i.e. object excitation, and probability 1/2 of being reduced to the position at the other branch. In this second case the photon will arrive at the detector undisturbed (ideally) and we will know that the object was present. The nonstandard components and the indivisibility of the particle wave function are at the basis of this description. This physical case is quite similar to the modified Stern-Gerlach experiment of Sec. \ref{sec:SG}, which indeed can give rise to a null experiment. As in that case, the object in the Mach-Zehnder interferometer could fail to reduce the wave function, and the entanglement between the excited state of the object and the wave packet will persist until the final detection. A semi-transparent object can again be treated as in the Stern-Gerlach experiment, discussed in Sec. \ref{sec:redu}. Similar considerations apply to the Renninger paradox, discussed in Sec. \ref{sec:Renni}. The reduction takes place when the nonstandard part of the wave function reaches the smaller hemisphere, where, with probability 1/2, the wave packet is projected inside the smaller or the larger hemisphere. In the latter case the particle will be detected later, at the larger detecting surface. \subsection{Decay and Zeno effect.\label{sec:decay}} Another phenomenon where the wave function reduction can be present is the decay of a system. Let us first focus on the case of an unstable system, e.g. the radioactive decay of alpha particles. A simplified version of the process is a particle trapped by a potential barrier, which can escape by tunneling. The escape time can be quite large if the potential presents a resonance with a small width, which simulates a radioactive decay process. Assuming that the particle is initially in a state confined inside the potential well, the survival probability as a function of time has been calculated in a rigorous way in ref. \cite{Calderon} for a potential of compact support. If $ \Psi_0 $ is the initial state, this quantity is defined as \begin{equation} P(t) \,=\, | < \Psi_0 | U(t) | \Psi_0 > |^2 \label{eq:surv} \end{equation} \noindent where $ U(t) $ is the evolution operator. The probability $ P(t) $ shows the expected exponential decay law, with some deviations at very short and very large times. Both have been observed in some atomic systems \cite{Fish,Roth}. The evolved wave function contains a confined component, since $ P(t) $ is different from zero at all times, and a component extended outside the potential well. The latter is a wave packet that originates from the tunneling region and moves toward infinity. This wave packet also expands, but at a smaller rate than its center of mass motion. For an isolated system the evolution will keep the superposition between the localized and the outgoing components, until eventually the particle is measured at some distant detector, where the reduction should take place. However, according to the view of a strictly orthodox quantum mechanics, the coherent linear evolution, as dictated by the Schr\"odinger equation, never stops, since even the detector should be considered a quantum system. All this is at the origin of the so-called ''Schr\"odinger cat paradox".
As is well known, in a thought experiment, devised indeed by Schr\"odinger, a cat is enclosed in a container, where a particle emitted by a radioactive source is able to operate a gun that kills the poor cat. As noticed above, the wave function of the emitted particle contains a localized and an outgoing component, and therefore if the evolution of the whole system follows the linear Schr\"odinger equation, at a certain stage the cat will be in a superposition of dead and alive states. This is already a paradoxical situation. Even worse, only the opening of the container and our observation will reduce the wave function, i.e. kill the cat or keep it alive. The situation is a direct consequence of keeping the coherent linear evolution at all times, with no possibility of wave function reduction at any stage. The paradox is overcome if one assumes that any macroscopic object can act as a detector, where the linear evolution ceases to be applicable, which is indeed one of the consequences that comes from the presence of the nonstandard components. \par All this also suggests that in the presence of detectors the decay rate could be altered. This is in fact what occurs in the so-called ''quantum Zeno effect", which has been observed in particular atomic systems \cite{Zeno1,Fish}. However it has to be kept in mind that in the usual Zeno effect the measurement is the projection on a single non-decayed state rather than on the decay product.\par For the decay from complex many-body systems, the wave function reduction could happen even at the moment of emission, since the rest of the system, other than the particle, could act as a detector. In other words the strength of the Zeno effect should depend on the complexity of the system, besides its structure. \par To be more specific, let us consider a simplified model for the decay of an alpha-radioactive nucleus, which can be assumed initially to be in a resonant state. The survival probability of Eq. (\ref{eq:surv}) at short times $t$ can be expanded as ( $ \hbar $ = 1 ) \begin{equation} P(t) \,=\, 1 \,-\, \Delta E^2 t^2 \,+\, \cdots \label{eq:quadr} \end{equation} \noindent where $ \Delta E $ is the width of the resonance, \begin{equation} \Delta E^2 \,=\, < \Psi_0 | H^2 | \Psi_0 > \,-\, ( < \Psi_0 | H | \Psi_0 > )^2 \label{eq:width} \end{equation} \noindent with $ H $ the Hamiltonian of the nucleus. The quantity $ \Delta E^2 t^2 $ is just the probability that the nucleus is in a state different from the localized resonant state, and eventually at some time $ t $ the wave function will correspond to the formation of an alpha particle in a state with a small component outside the nuclear barrier. The latter is actually produced by the nucleus itself, and it can act as a detector, if the residual nucleus is large enough. One can assume that the reduction takes place when the number of excited degrees of freedom is large enough to trigger the stochastic evolution, which can be associated with a critical value $ p_c$ of the strength of the non-resonant component. According to Eq. (\ref{eq:quadr}) the time necessary to reach the value $ p_c $ is \begin{equation} \delta t \,=\, \sqrt{p_c} / \Delta E \label{eq:pc} \end{equation} \noindent The wave function will be reduced to the external decay component with probability $ p_c $ or to the localized component with probability $ 1 \,-\, p_c $. In the latter case the evolution of Eq. (\ref{eq:quadr}) will restart.
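For completeness, the quadratic law (\ref{eq:quadr}) can be obtained by expanding the evolution operator at short times, $ U(t) = \exp(-\imath H t) \simeq 1 - \imath H t - \frac{1}{2} H^2 t^2 $, in Eq. (\ref{eq:surv}):
\begin{equation}
P(t) \,=\, \Big|\, 1 \,-\, \imath < H > t \,-\, \frac{1}{2} < H^2 > t^2 \,\Big|^2 + O(t^3)
\,=\, 1 \,-\, \big( < H^2 > - < H >^2 \big)\, t^2 \,+\, \cdots
\end{equation}
\noindent where the expectation values are taken in $ \Psi_0 $, in agreement with the definition (\ref{eq:width}) of $ \Delta E^2 $.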
The probability for the alpha particle to remain localized is then \begin{equation} P_l \,=\, ( 1 \,-\, p_c )^n \label{eq:loca} \end{equation} \noindent where $ n $ is the number of times the wave function has been reduced, i.e. the number of trials of the alpha particle to escape from the nucleus. At time $ t $ this number is given by $ n \,=\, t/\delta t $, so that $ P_l(t) = (1 - p_c)^{t/\delta t} = \exp\big[ (t/\delta t) \ln (1 - p_c) \big] $, and the probability to remain in a non-decayed state will be \begin{equation} P_l(t) \,=\, \exp ( - t/\tau ) \label{eq:nod} \end{equation} \noindent with \begin{equation} \tau \,=\, - \frac{1}{\Delta E} \sqrt{p_c}/\ln (1 - p_c ) \label{eq:tau} \end{equation} \noindent This relation connects the lifetime of the nucleus with the resonance width in the usual way, e.g. for a resonance in a single-particle potential well. In this picture the alpha particle is emitted at discrete time steps of size $ \delta t$. However the localized state can be different at each repeated reduction of the wave function. This means that the time $ \delta t $ fluctuates at each trial. To take this fluctuation into account, at least approximately, one can average the factor in Eq. (\ref{eq:tau}) over the value of $ p_c $ in the interval between 0 and 1. One can check that the resulting average is close to 1. In this way one gets the continuous version of Eqs. (\ref{eq:loca},\ref{eq:nod}). The procedure leading to Eq. (\ref{eq:nod}) looks similar to the formulation of the quantum Zeno effect \cite{Kof}. However here the reduction of the wave function is not on a single initial state, and therefore it occurs at non-equally spaced time intervals, and of course no real detector is actually present. The recovery of the exponential decay is in line with ref. \cite{Fish}, where it is shown that repeated measurements induce an exponential decay in a tunneling experiment with trapped ions. Also in this case the analogy can be misleading, since in the experiment the probing was on a specific observable (the number of non-tunneled particles). \par In any case the main conclusion from the model is that the coherent linear evolution of the decaying nucleus is interrupted just at the moment of the alpha particle emission, which then prevents the use of a radioactive decay for the Schr\"odinger cat paradox. Another conclusion from this simplified model is that for the radioactive alpha decay of heavy nuclei the non-exponential decay evolution should be quite restricted in time. \subsection{Photon emission and statistics. \label{sec:qs}} The exact stage at which the reduction occurs along the photon emission process has been debated for several years. According to refs. \cite{Pegg1,Cohen,Pegg2,jumps} the decay process proceeds until the photon is detected, where the reduction takes place. In other words, the superposition between the non-decayed component and the decayed one of the atomic system plus photon is indeed reduced at the moment of the photon detection. In this sense the photon detection "causes" the atomic system to decay to its ground state. This is actually in agreement with the picture of ordinary quantum mechanics, since in the absence of interaction the quantum superposition remains intact due to the linearity of the free equation of motion. This position was criticized in ref. \cite{Carmi}, where it was argued that the decay of the photon into the vacuum or environment (in particular a cavity) is the irreversible process that produces the wave function reduction. The latter therefore takes place at the moment of the decay.
In this way one introduces energy dissipation in an open system. This picture was formalized and developed in detail in ref. \cite{Carmib}, by introducing a Born-Markoff approximation in the general equation of motion for the atomic system plus the photon field and the corresponding picture in terms of ''stochastic wave functions" and ''quantum trajectories". The theory has gained support from a recent experiment \cite{latency} on an essentially three-level atomic system under the influence of a photon bath. One of the predictions of the theory is the appearance of a waiting time (latency) before a quantum jump occurs, a phenomenon that seems to be confirmed by the experiment. It is not clear if the wave function reduction indeed plays some role in this particular experiment with such a relatively simple system. In any case the general theory introduces a stochastic mechanism through the Born-Markoff prescription, by which the reduction process is introduced in the dynamical evolution of the system. In the master equation that describes the evolution of the whole system this introduces an irreversible process. However the spontaneous emission of a photon is by itself reversible, since time-reversal invariance is fulfilled in the process, and therefore some other element must be at work to get an irreversible evolution. This can be a stochastic process based on a probability space in place of a deterministic evolution, such as the one depicted in Sec. \ref{sec:stoc}. For this type of physical system the formalism should be generalized to a relativistic framework, since we deal here with particle creation and annihilation processes. However one can use the Fock space and consider the reduction between the atomic excited state and the ground state plus a photon. For a single isolated atom only the spontaneous emission can occur. Ordinary quantum mechanics cannot predict the time at which the single excited atom will spontaneously emit the photon; the emission is considered completely random, but with a constant probability per unit time, which gives the exponential survival probability. According to refs. \cite{Carmi,Carmib} the reduction takes place because of the ''irreversible coupling with the vacuum''. Even in the case of a single atom this coupling must produce the random process of the decay. It is suggestive that the randomness could originate from the nonstandard interaction of the photon with the rest of the atomic system at the moment of emission, where the superposition between the non-decayed state and the decayed state (atom g.s. + photon) is reduced to one of the two components. In other words the reduction could take place when the nonstandard part of the outgoing photon wave packet is large enough to be coupled with the underlying atomic system. The physical picture would then be similar to the case of alpha particle decay (even if there is no barrier). \par Partial reduction is also possible. This seems to be suggested by the thought experiment proposed in ref. \cite{Scully}, and realized under a different arrangement in ref. \cite{Brasil}, which are actually aimed at checking the complementarity principle without resorting to the indeterminacy principle. Without going into details, in the experiment a single excited atom, after having passed a two-slit arrangement, emits a photon within two cavities, each one corresponding to the direction selected by the slits.
After exiting the cavities the atom is with certainty in its ground state, because the photon has been absorbed by the proper mode of the cavities, and therefore the wave function describing the atom and the photon has been reduced. However the atom and the cavities are still entangled, i.e. the atom position is correlated with the mode of each cavity. The atom is then detected at a distant screen. By choosing the state onto which the mode is detected (i.e. projected), one observes that the signal at the screen can be changed. In particular one can have a distribution with or without interference fringes. This means that the entanglement between the atom and the cavities has persisted until the final mode detection, i.e. no reduction of their wave function has occurred before the completion of the 'measurements'. As discussed in the same ref. \cite{Scully}, this picture of the experiment also explains the so-called delayed-choice quantum eraser 'paradox', which turns out to be actually a 'post-selection' procedure. For a recent discussion on the subject see e.g. ref. \cite{Qure}. Generally speaking one can expect that in a process or experiment a chain of reductions occurs. \section{General remarks and 'philosophical' questions. \label{sec:phil}} \par The general scheme presented in this paper is not intended to question standard Quantum Mechanics but to complete it. The underlying 'philosophy' rests on the assumption that it is legitimate to consider the wave function in order to describe the physical phenomena. In this way one confers on the wave function some sort of 'reality', in line with the Pusey-Barrett-Rudolph theorem \cite{PBR}. However this does not necessarily imply that it can be viewed as some type of field, but simply that it is the right tool to represent the physical objects. Further specifications are not possible and the point is left intentionally with some degree of vagueness. \par Another point that has been discussed is the violation of relativity in the reduction process, which is just the phenomenon at the basis of the completion of Quantum Mechanics. Formally the violation is implied by Bell's inequality, which is violated in EPR-type experiments where entanglement is essentially involved. As we have stressed, one has to accept the violation once it is assumed that the wave function reduction can be considered instantaneous. This is confirmed by EPR experiments where the two opposite arms are very long but of equal size. This violation is at the level of correlations only, while messages cannot be sent using the entanglement, since the correlation is probabilistic and each detection event cannot be predicted \cite{Ghir}. \par The starting point of the scheme is the introduction of the hyperfinite real line for the space dimensions. This implies a violation of rotational and translational invariance, but the violation cannot be detected at the standard level for the three-dimensional space. Furthermore the only effect of the nonstandard part of the space points is the reduction process, which is independent of the lattice orientation. What remains ambiguous is the interpretation of the lattice, i.e. whether the grains of space are indeed irreducible elements of space or only a limitation of the space coordinates. This is a 'philosophical' issue which does not affect the formalism of the scheme.\par One of the main characteristics of the nonstandard part of the wave function is its mandatory time dependence: no static nonstandard component is possible.
Here we touch one of the mysteries of physics, the reason for the flow of time, in contrast with the space dimensions. A daring philosophical speculation could be to reverse the point of view, that is to consider the flow of time as a 'consequence' of the presence of the nonstandard components. Instead of considering the phase of the nonstandard components as evolving in time, one could consider the time as a measure of this phase. In this way one gets a picture where microscopic clocks are present at each point of the physical space, against which the physical processes have to be compared. They look in fact like the hypothetical clocks that one usually associates with each reference frame in special and general relativity. This is indeed a philosophical issue and we refrain from insisting on it. \section{Summary and conclusions.\label{sec:conc}} \par In this paper we have taken the point of view that quantum mechanics is incomplete. This assumption is suggested by some phenomenological facts, if they are considered in the framework of the so-called 'realistic' attitude. The main starting point is the characteristics of any measurement on quantum objects. Independently of what the word 'measurement' can mean, nobody can question that the result of a single event in an experiment cannot in general be predicted (except in special cases) and that the value of whatever physical quantity is obtained is random, with a well defined probability distribution. This is in contrast with the linear evolution of the Schr\"odinger equation (at the non-relativistic level), which is then violated in all the countless quantum experiments that are daily performed around the world. If one requires that physical theories must be able to describe explicitly all natural phenomena, one is forced to implement Quantum Mechanics with an additional dynamical equation that must have the appearance of a stochastic evolution. The scheme presented in the paper is aimed at this purpose. The stochastic evolution is obtained by introducing the hyperfinite real space axis, which is a lattice of points of infinitesimal step size in the sense of nonstandard analysis. A similar hyperfinite real line is introduced for the time axis, but with a step infinitesimal of higher order with respect to the space step. The stochastic trajectories are the possible sequences of nonstandard time points, one for each monad of the real standard axis. The probability distribution is then chosen as uniform. The stochastic process generated in this way determines the stochastic trajectories of the interaction matrix elements between the nonstandard components and ultimately produces the wave function reduction. This is one of the facets of the implementation of Quantum Mechanics. The uniform probability distribution on the possible time sequences is considered the fundamental element that introduces stochasticity in the physical processes, and by construction it cannot be analyzed further. The origin of the stochastic evolution is the interaction between the nonstandard parts of the wave functions, and therefore it is at variance with the spontaneous collapse model by Ghirardi and Weber \cite{Ghir}, where the reduction is a basic process by itself and does not develop among the components of the wave function during the measurement process. \par Another consequence of the introduction of nonstandard space-time points is the appearance of unlimited frequencies and wave vectors, which however cannot be associated with energies and momenta.
The group velocity of the corresponding plane wave is smaller than that of the standard plane wave and tends to zero for the maximum allowed wave vector and frequency. Among the consequences of this peculiar behavior is the presence of nonstandard components of the wave functions, appearing as a tail of a generic standard free wave packet. Furthermore, for two separating particles the relative wave function is characterized by a nonstandard component that joins the two corresponding standard wave packets, which gives a natural visualization of the entanglement. Within this picture entanglement is just a direct consequence of the finite extension of the w.f. and its obvious indivisibility. Of course the question of why a w.f. of finite extension is needed to describe physical objects remains unanswered. \par In this framework, besides the description of the physical processes related to the 'paradoxes' of ordinary Quantum Mechanics, a physical link is apparent between measurement and entanglement. In this sense these two peculiar features of QM turn out to have a common physical origin. \par The outcomes of the model are in line with a recent paper \cite{Griff}, where it is argued, within the 'Consistent Quantum Mechanics' approach \cite{Gribook}, that non-locality is inconsistent with the completeness of ordinary QM. In the same reference it is also shown that Bell's inequality can be relaxed if the (local) hidden variables are not classical but quantum. The additional degrees of freedom, introduced in the present model for completing QM, can be viewed as a sort of quantum hidden variables, even if they are not used to determine the quantum states. \par The wave function reduction, as introduced in the paper, has to be considered as an additional stochastic process which can be present in general, and which complements the Schr\"odinger equation. It is not necessarily associated with measurement processes, whatever the possible meaning of such an expression may be. \par \section*{Appendix A \label{sec:appendixA}} \par Let us consider the discrete (in space) wave equation for spinless and massless particles \begin{equation} \frac{\partial^2 W_j(t)}{\partial t^2} \,-\, c^2 \frac{1}{d^2} (W_{j+1} - 2 W_{j} + W_{j-1}) \,=\, 0 \label{eq:ww1} \end{equation} \par\noindent where $W_j(t) = W(t,x_j)$, with $x_j = jd = j/N$ the $j$-th nonstandard point and $c$ the wave phase velocity. Here $j$ runs from $-M +1$ to $M - 1$, for a total of $2M - 1$ equations, with $ M = N^2 $. The values of $W_{-M}$ and $W_M$ must be fixed by the boundary conditions. \par\noindent Eq. (\ref{eq:ww1}) defines a band matrix $A$ \begin{equation} A_{j,j'} \,=\, \frac{1}{d^2} (\delta_{j,j'-1} - 2\delta_{j,j'} + \delta_{j,j'+1} ) \label{eq:bbm} \end{equation} \par\noindent with $j,j'$ running from $-M+1$ to $M-1$. Then the wave equation can be written for the vector $W \,=\, (W_{-M}, \ldots, W_{M})$ \begin{equation} \frac{\partial^2 W(t)}{\partial t^2} \,-\, c^2 A W \,=\, 0 \end{equation} \par\noindent supplemented by the boundary conditions. For any value of $M$ (finite or hyperfinite) the matrix $A$ can be diagonalized.
In fact if we take \begin{equation} \phi_k(x_j) \,=\, \frac{1}{\sqrt{2M+1}} \exp(\imath kj\pi/M) \label{eq:sol} \end{equation} \noindent one gets \begin{equation} \begin{array}{ll} \big(A\phi_k\big)(x_j) &\,=\, \frac{1}{d^2} \frac{1}{\sqrt{2M+1}} \big[ \exp(\imath k(j+1)\pi/M) + \exp(\imath k(j-1)\pi/M) - 2\exp(\imath kj\pi/M) \big] \\[2mm] &\,=\, \frac{1}{d^2} \frac{1}{\sqrt{2M+1}} \exp(\imath kj\pi/M) \big[ \exp(\imath k\pi/M) + \exp(-\imath k\pi/M) - 2 \big] \\[2mm] &\,=\, - \frac{4}{d^2} \sin^2(k\pi/2M)\, \phi_k(x_j) \end{array} \end{equation} \noindent Here $ k = 0, \pm 1, \pm 2, \ldots, \pm M $. The eigenfrequencies are therefore (in units where $c = 1$) \begin{equation} \omega(k) \,=\, \pm \frac{2}{d} \sin(k\pi/2M) \end{equation} \noindent and the solutions of the wave equation read \begin{equation} \phi_k(t,x_j) \,=\, \phi_k(x_j) \exp(\imath \omega(k) t) \label{eq:phit} \end{equation} \par The solutions of Eq. (\ref{eq:sol}) correspond to periodic boundary conditions. For massive particles the wave equation includes a mass term and Eq. (\ref{eq:ww1}) becomes the Klein-Gordon equation. The solutions can still be written as in (\ref{eq:phit}), with frequency $\omega_m$ given by ($\hbar = c = 1$) \begin{equation} \omega_m(k) \,=\, \pm \sqrt{m^2 \,+\, \omega(k)^2} \end{equation} \noindent For the standard part of the w.f. these expressions of the frequencies merge into the usual ones. Furthermore in the low wave number limit the Klein-Gordon equation becomes the Schr\"odinger equation. However for the nonstandard part the wave equation (\ref{eq:ww1}) is the appropriate one, since the frequencies are unlimited and one has to work in the ultra-relativistic regime.\par For particles with spin the Dirac equation and its ultra-relativistic limit have to be used, and the formalism can be easily extended to this case. \par \section*{Appendix B\label{sec:appendixB}} \par In this Appendix the time evolution of a nonstandard singularity is presented in terms of the solutions of the wave equation and its specific form is developed on the basis of the stationary phase approximation. Let us consider the following singular function in \textbf{R}$^*$ \begin{displaymath} \psi(0,x_j) = \left\{ \begin{array}{ll} \sqrt{N} & \mathrm{if}\ \ x_j \,=\, 0 \\ 0 & \mathrm{if}\ \ x_j \,\neq\, 0 \end{array} \right. \end{displaymath} \noindent This function can be expanded in \textbf{R}$^*$ with the plane waves of Appendix A. The coefficients of the expansion are \begin{equation} \tilde{\psi}(0,p) \,=\, \frac{1}{\sqrt{2M+1}} \sum_j \psi(0,x_j) \exp( \imath p x_j ) \,=\, \frac{1}{\sqrt{2N}} \label{eq:coef1} \end{equation} \noindent that is, they are constant. Here $ p = k\pi/N $, with $ k $ a hyperreal integer, as defined in the text. The time evolution of the w.f. with initial condition $ \psi(0,x_j) $ can be written \begin{equation} \psi(t,x_j) \,=\, \frac{1}{2} N^{-3/2} \sum_k \exp\big( \imath(\omega(k)\, t \,-\, p x_j)\big) \label{eq:sumk} \end{equation} \par We are now going to evaluate the summation within the stationary phase approximation. This is perfectly justified if the stationary point is in the range of unlimited $ k $ values, since then the phase is rapidly oscillating. Furthermore the variation of the phase for a unit step of $ k $, i.e.
$ \Delta k = 1 $, is infinitesimal \begin{equation} \Delta (\omega(k)\, t \,-\, p x_j) \,=\, \frac{\pi}{N} [\, t\cos(k\pi/2M) \,-\, x_j\,] \,\in\, mon(0) \end{equation} \noindent and therefore the summation can be replaced by an integral \begin{equation} \psi(t,x_j) \,=\, \frac{1}{2} N^{-3/2}\ \ ^*\!\!\!\int dk \exp\big( \imath(\omega(k)\, t \,-\, p x_j)\big) \end{equation} \noindent The possible stationary point of the phase is the solution of the equation \begin{equation} t \cos(k\pi/2M) \,-\, x_j \,=\, 0 \end{equation} \noindent For each value of $ x_j $ this fixes the corresponding stationary point at \begin{equation} k(x_j) \,=\, \pm \frac{2M}{\pi} \arccos(x_j/t) \end{equation} \noindent which is indeed unlimited, except for $ x_j $ at infinitesimal distance from $ \pm t $. If we exclude this infinitesimal interval of $ x_j $ values, the summation in Eq. (\ref{eq:sumk}) can be restricted to the unlimited $ k $ region. For definiteness let us take the point at positive $ x_j $. With these restrictions, the stationary phase formula can be applied. The second derivative of the phase $ \phi(k) $ in (\ref{eq:sumk}) gives \begin{equation} \phi''(k(x_j)) \,=\, - (\pi^2 t/2 N^3) \sin (k(x_j)\pi/2 M) \,=\, -(\pi^2/2 N^3) \sqrt{t^2 \,-\, x_j^2} \end{equation} \noindent and finally one gets \begin{equation} \psi(t,x_j) \,=\, \frac{1}{\sqrt{N}} \frac{1}{[t^2 \,-\, x_j^2]^{1/4}} \exp\big(\imath (\phi(k(x_j)) + \pi/4)\big) \label{eq:solution} \end{equation} \noindent From the above calculations one has \begin{equation} \phi(k(x_j)) \,=\, \omega(k(x_j))t - p(x_j) x_j \end{equation} \noindent which is unlimited and therefore $ \psi(t,x_j) $ is a nonstandard function, as expected. \par A similar case was considered in ref. \cite{Delf}, for a finite space interval. The solution was expressed in terms of spherical Bessel functions, which for large index closely resemble the solution (\ref{eq:solution}). \par \section*{Appendix C\label{sec:appenixC}} \par In this Appendix we consider the w.f. for a single particle localized in a scalar potential well $ v(x) $. This w.f. includes the standard component, which is the solution $ \Psi_S(t,x) $ of the wave equation \begin{equation} \big(-\frac{\partial^2}{\partial t^2} \,+\, \Delta \,-\, (m + v(x))^2 \big) \Psi_S(t,x) \,=\, 0 \label{eq:swe} \end{equation} \noindent and the nonstandard component, which satisfies the extension in \textbf{R}$^*$ of the wave equation. In particular the Laplacian $ \Delta $ is defined in \textbf{R}$^*$ as in the text and in Appendix A. There is no matrix element between the standard and the nonstandard components, since such matrix elements are actually infinitesimal. The nonstandard component can be written \begin{equation} \Psi_{NS}(t,x_j) \,=\, G(t,x_j)\, ^*\Psi_N(x_j) \label{eq:GFN} \end{equation} \noindent where $ G $ is a nonstandard function and $ ^*\Psi_N $ the canonical extension of $ \Psi_N $. Substituting (\ref{eq:GFN}) in the wave equation, one gets \begin{equation} \begin{array}{ll} G(t,x_j) &\big[ -\frac{\partial^2}{\partial t^2} \,+\, \Delta \,-\, (m + v(x_j))^2 \big]\, ^*\Psi_N(x_j) \,+\, \\[2mm] &^*\Psi_N(x_j)\big[ - \frac{\partial^2}{\partial t^2} \,+\, \Delta \big] G(t,x_j) \,+\, \\[2mm] &\big[ - (\frac{\partial}{\partial t}\, ^*\Psi_N(x_j)) (\frac{\partial}{\partial t}G) \,+\, (D\, ^*\Psi_N(x_j)) (D G) \big] \,=\, 0 \end{array} \label{eq:nswe} \end{equation} \noindent where $ D $ is the nonstandard space derivative.
Notice that the nonstandard derivative and Laplacian reduce to the standard ones when applied to a smooth standard function (canonically extended). The first line vanishes because of the standard wave equation. The terms in the third line are infinitesimal with respect to each term of the second line, since they contain the unlimited frequencies and wave numbers at the first power, whereas the terms of the second line contain them at the second power. This can be checked by Fourier transforming the equation and solving for the eigenfrequency. Therefore the second line must vanish apart from infinitesimal corrections. The nonstandard function $ G $ must then be a linear superposition of free plane waves.\par It is clearly impossible to obtain a stationary solution including both standard and nonstandard components. As discussed in the text, we consider a stationary standard component and a time dependent nonstandard component in the form of a standing wave. The function $ G $ is of the form \begin{equation} G(t,x_j) \,=\, \sum_r \cos(\omega_r t) \exp(-\imath k_r x_j) \end{equation} \noindent where $ \omega_r = \omega(k_r) $ and the wave numbers $ k_r $ run over a set as described in the text. The scalar product of two states of the same system can be defined in the usual way, provided the integral is the nonstandard one. One gets for the nonstandard components of two w.f. \begin{equation} \begin{array}{ll} < \Psi_{NS} | \Psi_{NS}' > &\,=\, d \sum_{j, r, r'} \overline{\Psi}_{NS}(t,x_j)\, \Psi_{NS}'(t,x_j) \\[2mm] &\,=\, d \sum_{j, r, r'} \exp\big(\imath(k_{r'} - k_r) x_j\big) \cos(\omega_r t) \cos(\omega_{r'} t)\, ^*\overline{\Psi}_S(t,x_j)\, ^*\Psi_S'(t,x_j) \end{array} \end{equation} \noindent As discussed in the text, the difference of two different wave numbers is assumed to be unlimited. It follows that only the terms with $ k_r = k_{r'} $ survive, since the product of the two standard components (canonically extended) can contain only standard wave numbers. The summation over $ x_j $ then gives the scalar product of the two standard components \begin{equation} \begin{array}{ll} < \Psi_{NS} | \Psi_{NS}' > &\,=\, \sum_r \cos^2(\omega_r t) \big[\, d \sum_j {}^*\overline{\Psi}_S(t,x_j)\, ^*\Psi_S'(t,x_j)\, \big] \\[2mm] &\,=\, \sum_r \cos^2(\omega_r t) \, < \Psi_S | \Psi_S' > \end{array} \end{equation} \section*{Appendix D\label{sec:appenixD}} \par In this Appendix the nonstandard matrix elements of the interaction of Eq. (\ref{eq:vNS}) between two states $l, m$, defined and discussed in Secs. \ref{sec:stoc},\ref{sec:mel}, are estimated. We first consider the case of particle detection (a), and then the case involving photons (b), e.g. the Mach-Zehnder interferometer. \par\noindent a) Particle detection. \par To be specific, one can imagine that each one of the states $l, m$ is the result of the standard evolution of the initial state corresponding to a particle hitting a position detector, and that the state of the system particle+detector, before the reduction, is a linear combination of them. For an ideal detector they therefore differ only in their mean position in the detector. \par The general form of the nonstandard matrix element can be written \begin{equation} < \Psi_l | V^{NS} | \Psi_m > \,=\, v_0 d\sum_j \rho_{NS}^{(l)}(t,x_j)\, \rho_{NS}^{(m)}(t,x_j) \label{eq:gme} \end{equation} \noindent where the $\rho_{NS}$'s are the density matrices corresponding to the states $l,m$. Their expression can be obtained from the generalization of Eq.
(\ref{eq:bound}) to the NS wave function of a many-body system of $A$ particles \begin{equation} \begin{array}{ll} &\Psi_{NS}^{(l)}(t,x_{j_1},x_{j_2},\ldots,x_{j_A}) \,=\, \\[2mm] &\ \ \ \ \ \sum_{r_1,r_2 \ldots r_A} \prod_n \cos(\omega_{r_n}t) \exp(\imath p_{r_n} x_{j_n})\, \phi_S^{(l)}(t,x_{j_1},x_{j_2},\ldots,x_{j_A}) \end{array} \label{eq:many} \end{equation} \noindent where the $x_j$ run over the hyperfinite real axis, with $ j $ labeling the position on it, and the $ p_r $ run over the spectrum of the nonstandard wave vectors described in Sec. \ref{sec:sfs}. The explicit expression of $ p_r $ is \begin{equation} p_r \,=\, \pi r/2N \label{eq:pr} \end{equation} \noindent with $ r $ an unlimited integer, which in absolute value is not larger than $ N^2 $. The corresponding density matrix is obtained by multiplying $ \Psi_{NS}^{(l)} $ by its complex conjugate and summing over $A - 1$ coordinates. For simplicity we assume the w.f. to be symmetric or anti-symmetric under exchange of coordinates, so one can select $ x_{j_1} $ as the free coordinate. Taking into account the orthogonality between plane waves, one gets \begin{equation} \begin{array}{ll} \rho_{NS}^{(l)}(t,x_j) &\,=\, \sum_{r_1,r_1'} \cos(\omega_{r_1}t)\cos(\omega_{r_1'}t) \exp\big(\imath(p_{r_1} - p_{r_1'})(x_j - X_l)\big)\times \\[2mm] &\ \ \ \ \times\sum_{r_2,\ldots,r_A} \prod_{n=2}^A \cos^2(\omega_{r_n}t)\, ^*\rho_S^{(l)}(t,x_j) \end{array} \label{eq:rnsmany} \end{equation} \noindent where $ ^*\rho_S $ is the canonical extension of the standard part of the density matrix, and $ X_l $ is the average position of the state $ \Psi^{(l)} $. An essential property of this density matrix is its scaling with the number of particles. If each $ \cos^2 $ factor is substituted by its average value, the density matrix scales as $ \lambda^{A-1} $. This means that it increases exponentially with the number of degrees of freedom. This is at variance with the standard part of the density matrix, since in that case no nonstandard momenta and frequencies are present. \par Substituting these expressions for the density matrices in Eq. (\ref{eq:gme}) and taking into account that any standard wave vector is strictly negligible with respect to an unlimited wave vector, one finally gets \begin{equation} \begin{array}{ll} &< \Psi_l | V^{NS} | \Psi_m > \,=\, v_0 \sum_{\{r\}} \delta_{Kr}(r_1-r_1'+r_2'-r_2)\exp\big(\imath(p_{r_1}-p_{r_1'})(X_l-X_m)\big) \\[2mm] &\ \ \ \ \ \times I^{(lm)} \cos(\omega_{r_1}t) \cos(\omega_{r_1'}t) \cos(\omega_{r_2}t) \cos(\omega_{r_2'}t) \prod_{n=2}^A \cos^4(\omega_{r_n}t) \end{array} \label{eq:final} \end{equation} \noindent where $ \delta_{Kr} $ is the discrete Kronecker delta function and \begin{equation} I^{(lm)} \,=\, \int dx\, \overline{\rho}_S^{(l)}(t,x)\, \rho_S^{(m)}(t,x) \label{eq:intro} \end{equation} \noindent is the overlap integral of the two standard density matrices. In the wave number summation of Eq. (\ref{eq:final}) one has to exclude terms with equal momentum pairs, i.e. where both $ r_1 = r_2 $ and $ r_1' = r_2' $, since these terms include contributions of the standard form. Then the expression of Eq. (\ref{eq:final}) contains only nonstandard oscillating functions. The dominant term is the one that is obtained by taking for each $ \cos^4 $ in the last factor the constant $ \lambda/4 $, where $ \lambda $ is the unlimited number of frequencies for each degree of freedom. The latter is obtained by e.g.
expanding the cosine function into its exponential form. Then the matrix element is just a nonstandard oscillating function with the set of frequencies \begin{equation} \Omega_s \,=\, \pm \omega_{r_1} \pm \omega_{r_1'} \pm \omega_{r_2} \pm \omega_{r_2'} \label{eq:omegas} \end{equation} \noindent if only positive values of the frequencies are included. This completes the estimate of the matrix element, which is then proportional to $ \lambda^{A-1} $ times the oscillating function, in agreement with Eqs. (\ref{eq:nsme},\ref{eq:sigma}). Notice that it is essential that the frequency spectrum is non-linear in the wave vector. \par All this can be generalized to three-dimensional space. The nonstandard plane wave for the wave number $\mathbf{k} = (k_x,k_y,k_z)$ takes the form \begin{equation} \psi_k(t,\mathbf{j}) \,=\, \frac{1}{(2M+1)^{3/2}} \exp\Big(\imath\big(\frac{\mathbf{k}\cdot\mathbf{j}\, \pi}{M} - \omega(\mathbf{k})\, t\big) \Big) \label{eq:pw3} \end{equation} \noindent where the vector $\mathbf{j} = (j_x,j_y,j_z)$ labels the space position $\mathbf{x}_j$ in the three-dimensional hyperfinite real space, \begin{equation} \mathbf{x}_j \,=\, d\, \mathbf{j} \ \ \ \ \ ; \ \ \ \ \ j_x, j_y, j_z \,=\, 0, \pm 1, \pm 2, \ldots, \pm N^2 \end{equation} \noindent The frequency $ \omega(\mathbf{k}) $ is now given by \begin{equation} \omega(\mathbf{k}) \,=\, \frac{2}{d}\sqrt{\sin^2(k_x\pi/2M) + \sin^2(k_y\pi/2M) + \sin^2(k_z\pi/2M)} \label{eq:freq3} \end{equation} \noindent which is clearly anisotropic. However one can follow the same procedure as in the one-dimensional case for the three axes and get for the matrix element an expression similar to Eq. (\ref{eq:final}). The frequencies $\Omega_s$ are now linear combinations of the frequencies (\ref{eq:freq3}). \par\noindent b) Coupling to photons\par The specific case can be the null experiment with the Mach-Zehnder interferometer. At a certain stage of the evolution of the experiment, in one branch half of the photon wave packet can hit the 'object', while the other half moves freely in the other branch. The object will be excited by the photon absorption and the coupling between the excited state and the free photon produces the stochastic evolution which reduces the total wave function to one of the two states, i.e. the free photon wave packet or the object excited state (with equal probability). The coupling is given by the matrix element of the electromagnetic interaction between the photon field and the object excited state. For simplicity let us first take a single atom as 'object'. Since the two coupled states are supposed to be at a macroscopic distance, one cannot expand the photon wave packet in multipoles. Neglecting photon polarization, the matrix element $A_{ph}$ between the two nonstandard parts can be written, see Eqs. (\ref{eq:delf},\ref{eq:tail},\ref{eq:prop}) \begin{equation} \begin{array}{ll} A_{ph} \,=\,& e \frac{d}{N^{3/2}} \sum_{j,l,p,r,r'} \exp\big(\imath p(x_j - X_l + x_0) - \imath\omega_p t\big)\, \Phi_{ph}^S(X_l) \\[2mm] &\times\, \rho^S(t,x_j)\, \exp\big(\imath(p_r-p_{r'})x_j\big) \cos(\omega_r t) \cos(\omega_{r'} t) \end{array} \label{eq:photon} \end{equation} \noindent where $\Phi_{ph}^S$ is the standard part of the photon wave packet, $x_0$ the central position of the photon and $\rho^S$ the standard density matrix of the excited atom. The latter involves standard momenta only, which can be strictly neglected with respect to any nonstandard unlimited momentum.
Then one gets \begin{equation} \begin{array}{ll} A_{ph} &\,=\, N^{-1/2} \sum_{l,r,r'} \exp\big(-\imath(p_r-p_{r'}) (X_l-x_0)\big) \exp(-\imath \omega_{rr'} t)\, \Phi_{ph}(X_l) \\[2mm] &\ \ \ \times\cos(\omega_r t) \cos(\omega_{r'} t) \end{array} \label{eq:photon2} \end{equation} \noindent where the density matrix was normalized to 1, and $\omega_{rr'} = \omega(p_r - p_{r'})$. As expected the matrix element is infinitesimal for a single atom. For a macroscopic object it has to be multiplied by an unlimited number, which depends on the object size, and can give rise to the stochastic reduction.
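\par As a minimal numerical cross-check of the Appendix A formalism that underlies these estimates, one can apply the second-difference operator of Eq. (\ref{eq:bbm}) to the plane waves (\ref{eq:sol}) and compare with the eigenvalue $-\frac{4}{d^2}\sin^2(k\pi/2M)$. The sketch below (in Python) uses ordinary finite values of $M$ and $d$, whereas in the text they are hyperfinite and infinitesimal respectively:
\begin{verbatim}
import numpy as np

M = 400      # number of lattice sites is 2*M + 1 (finite stand-in)
d = 0.01     # lattice spacing (illustrative value)

j = np.arange(-M, M + 1)
for k in (1, 17, 123, M - 1):
    phi = np.exp(1j * k * j * np.pi / M) / np.sqrt(2 * M + 1)
    # second-difference operator applied at the interior points
    lap = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / d**2
    eig = -4.0 / d**2 * np.sin(k * np.pi / (2 * M))**2
    err = np.max(np.abs(lap - eig * phi[1:-1]))
    omega = 2.0 / d * np.sin(k * np.pi / (2 * M))  # eigenfrequency (c = 1)
    print(f"k={k:4d}  omega={omega:9.2f}  max deviation={err:.1e}")
\end{verbatim}
\noindent The deviation is at machine precision for every $k$, and $\omega(k)$ saturates, with vanishing group velocity, as $k$ approaches $M$ --- the discrete counterpart of the behavior attributed to the nonstandard components in the text.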
\section{Introduction} \label{sec:Intro} The statistical hadronization model (SHM) is the standard tool to predict and describe hadron abundances produced in relativistic nuclear collisions~\cite{Andronic:2017pug}. The main physics assumption underlying the SHM is that, near the phase boundary between the quark-gluon plasma (QGP) at high temperature and confined hadronic matter at lower temperature, the fireball formed in such collisions is close to thermal equilibrium. In the large volume limit applicable for Pb-Pb collisions at LHC energies or Au-Au collisions at RHIC energies the produced hadrons can then be precisely described by using a grand canonical partition function based on the hadron-resonance gas (HRG) and with residual interactions deduced using the S-matrix approach of \cite{Andronic:2018qqt}. We note that this HRG statistical operator provides an equation of state that is very close to that emerging from lattice QCD (lQCD) studies in the hadronic phase~\cite{Bazavov:2014pvz}. Furthermore, the pseudo-critical temperature $T_{pc}$ at \ensuremath{\mu_{\rm B}} = 0, which is now determined in lQCD calculations \cite{Bazavov:2018mes,Borsanyi:2020fev} with great precision: $T_{pc} = 156.5 \pm 1.5$ MeV~\cite{Bazavov:2018mes}, agrees within (small) uncertainties with the chemical freeze-out temperature obtained from the SHM analysis of light-flavour hadron production data~\cite{Andronic:2017pug,Andronic:2018qqt}. How to extend the SHM to the charm sector, i.e., to SHMc, was outlined more than 20 years ago~\cite{BraunMunzinger:2000px} and further developed in~\cite{Andronic:2003zv,Becattini:2005hb,Andronic:2006ky,Andronic:2007zu}. The main idea behind this development is as follows: The charm quark mass $m_c$ is much larger than $T_{pc}$ and hence thermal production of charm quarks or hadrons is strongly Boltzmann suppressed. However, with increasing center-of-mass energy the total charm production cross section which results from initial hard collisions increases strongly. If the so produced charm quarks thermalize in the hot fireball they participate in the thermal evolution as 'impurities', their total yield being determined by the charm cross section, not by the fireball temperature. Quantitatively, this is described by the charm balance equation~\cite{BraunMunzinger:2000px,Andronic:2006ky} leading to a fugacity $g_c$. Roughly from $\sqrt{s_{NN}} \approx 15$ GeV upwards this leads to an enhancement of hadrons with charm compared to a purely thermal description, see, e.g., Fig.~1 in~\cite{Andronic:2006ky} and the discussion below. Apart from canonical corrections~\cite{Andronic:2003zv,Andronic:2006ky} the enhancement scales $\propto (g_c)^{\alpha}$, where $\alpha$ is the number of charm quarks in a given hadron. Evidence for the thermalization of charm quarks in the fireball is discussed in~\cite{Andronic:2018qqt}. Charm quarks are deconfined inside the QGP, thermalize within the QGP and hadronize at the QCD phase boundary into open and hidden charm hadrons. This SHMc was used to predict~\cite{Andronic:2003zv,BraunMunzinger:2007zz} charmonium yields in Pb-Pb collisions at LHC energies long before the LHC turned on. It provides an excellent description of charmonium production~\cite{Andronic:2006ky,Andronic:2007bi,Andronic:2018vqh,Andronic:2019wva} without any new parameters and this success represents compelling evidence for this new production mechanism on the hadronizing QGP phase boundary.
In the present paper we explore the predictions of the SHMc for the production of open charm mesons and baryons. Early predictions for open charm hadrons were made already in~\cite{Andronic:2003zv}, and in~\cite{Becattini:2005hb} for baryons with $\alpha > 1$, but in the absence of experimental data in the relevant low transverse momentum region these early investigations were not pursued further. The situation changed recently when the STAR collaboration at RHIC~\cite{Adam:2019hpq} as well as the ALICE~\cite{Adam:2015sza,Acharya:2018ckj,Acharya:2020lrg,Acharya:2020uqi} and CMS~\cite{Sirunyan:2019fnc} collaborations at the LHC published first results with Au and Pb beams. It is therefore timely to provide a concise description of the SHMc in the charm sector, to compare results based on this approach to the newly available data and to extend the predictions to the multi-charm sector. We note that the only additional information needed for SHMc predictions are the total open charm cross section and as complete as possible information on the mass spectrum of states in the charm sector. Apart from those there are no free parameters in our approach. In Section~\ref{sec:SHM_hq} we discuss the SHMc formalism including the charm-balance equation and fugacities, the information on the total open charm cross section, and the hadron mass spectrum in the charm sector. In addition, we will lay out the framework for extending our results to lighter colliding systems of Xe-Xe, Kr-Kr, Ar-Ar and O-O, which could be studied in future runs of the LHC. For the study of the system size dependence of $D$ meson $R_\text{AA}$ in a dynamical heavy flavour framework see ref.~\cite{Katz:2019qwv}. For these systems, and in particular for the evaluation of production yields of multi-charm hadrons, a detailed description in terms of canonical thermodynamics is required and is outlined. This leads to thermal predictions for rapidity densities of all charmed hadrons in all colliding systems investigated here. In section~\ref{sec:SHMc_spec} we discuss the most up-to-date information on the hadron mass spectrum in the charm sector. In particular we review the theoretical and experimental motivation for additional, yet undiscovered charmed hadron states. In section~\ref{sec:SHMc_pt} we present the description of transverse momentum spectra for charmed hadrons using a blast-wave approach. This includes a comparison of results for different freeze-out surfaces. An integral part of this approach is the incorporation of resonance decays into the calculation of spectra. In this section we also outline the 'core-corona' picture which is important to describe the high transverse momentum and centrality dependence of charm hadron production. Results and comparisons to data are discussed in section~\ref{sec:results1}. In this section we first compare SHMc predictions to data for D mesons and make a prediction for $\Lambda_c$ baryons. With the same approach and no new inputs aside from masses and quantum numbers of charm hadrons we show how a whole hierarchy of predictions emerges depending on whether we deal with single, double, or triple charm hadrons. Because of the above discussed enhancement of production yields for states with multiple charm these predictions will be tested in the upcoming LHC Run3 and Run4 at least for a selected number of states with $\alpha \le 2$. With a new ALICE3 experiment~\cite{Adamova:2019vkf} a large part of the whole mass spectrum of charmed mesons and baryons should be within reach.
These experiments can therefore bring completely new information on the degree of deconfinement and mechanism of hadronization of charm quarks in the hot fireball. We conclude this paper with a brief summary and outlook. \section{Heavy quarks in the statistical hadronization model} \label{sec:SHM_hq} Here we recapitulate the physics ideas and formalism behind the SHMc with special focus on the multi-charm sector. For more detail on the original development see~\cite{Andronic:2003zv,Andronic:2006ky,Andronic:2017pug}. Our main emphasis will be on the description of yields and transverse momentum spectra for open charm hadrons with $\alpha \le 3$, produced in Pb-Pb collisions at LHC energy. We will also provide expressions to describe the change of yields when going to lighter collision systems including Ar-Ar and O-O and discuss briefly what can be expected. The production of charmonia or charmonium-like states has recently been investigated, see~\cite{Andronic:2017pug,Andronic:2019wva} and will not be discussed here. Our approach can also be used to make predictions for open charm hadron production at lower energies such as at the RHIC, SPS and FAIR facilities and for higher energies expected at a possible Future Circular Collider~\cite{Dainese:2016gch}. The model can be straightforwardly extended to the beauty sector without conceptual changes or new parameters except for the total open beauty cross section and the corresponding hadronic mass spectrum. However, SHM might need to be modified for beauty hadrons, if future data reveal only partial thermalization of beauty quarks in the QCD medium. \subsection{Multi-charm hadrons, charm balance equation and the charm fugacity factor} \label{sec:balance} Our starting point is the charm balance equation~\cite{BraunMunzinger:2000px} \begin{equation} \begin{aligned} N_{\ensuremath{\text{c}\overline{\text{c}}}\xspace} = \frac{1}{2} & g_c V \sum_{h_{oc,1}^i} n^{{\rm th}}_i \, + \, g_c^2 V \sum_{h_{hc}^j} n^{{\rm th}}_j \, + \, \frac{1}{2} g_c^2 V \sum_{h_{oc,2}^k} n^{{\rm th}}_k, \end{aligned} \label{eq:balance} \end{equation} where $N_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}\equiv \mathrm{d} N_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}/\mathrm{d} y$ denotes the rapidity density of charm quark pairs produced in early, hard collisions and the (grand-canonical) thermal densities for open and hidden charm hadrons are given by $n_{i,j,k}^{{\rm th}}$. The index $i$ runs over all open charm states $h_{oc,1}^i = D, D_s, \Lambda_c, \Xi_c, \cdots, \bar{\Omega}_c$ with one valence charm or anti-charm quark, the index $j$ over all hidden charm states $h_{hc}^j = J/\psi, \chi_c, \psi',\cdots$, and the index $k$ over open charm states $h_{oc,2}^k = \Xi_{cc} \cdots, \bar{\Omega}_{cc}$ with two charm or anti-charm quarks. We leave out here states with 3 charm or anti-charm quarks as their contribution to the sum is negligible for realistic masses and values of $g_c$ and they have yet to be discovered. These thermal densities are computed using the latest version of the SHMc~\cite{Andronic:2017pug,Andronic:2019wva} with the chemical freeze-out temperature $T_{cf}= 156.5$ MeV and the fireball volume per unit rapidity at mid-rapidity $V = 4997\pm 455\,\text{fm}^3$ as appropriate for the most central 10\% Pb-Pb collisions at LHC energy $\sqrt{s_{NN}}= 5.02$ TeV. In the appendix we also give results for the 30-50\% centrality interval and at mid-rapidity. 
Scaling with the measured charged particle pseudo-rapidity density the corresponding volume in this centrality bin is $V = 1238\pm 113\,\text{fm}^3$. For the results shown below, the uncertainties in volume were not propagated, because they are sub-leading compared to the uncertainty in $g_c$ discussed below. The total number of charm quark pairs $N_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}$ produced in a Pb-Pb collision is a quantity that should be determined by measurement of all hadrons with open or hidden charm. Following this prescription, the only (additional) input parameter of the SHMc, $N_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}$, is determined by experiment. In particular, we note that $N_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}$ already includes all nuclear effects in charm production as compared to pp collisions, takes into account potential additions to the charm yield from thermal production in the QGP as well as potential losses due to charm quark annihilation. In practice, using this prescription is, however, difficult since the measurement of all open and hidden charm hadrons needs to be performed without cuts in transverse momentum. Achieving a precision measurement of $N_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}$ is one of the priorities for the upgraded ALICE experiment in LHC Run3 and Run4. In the absence of a measured charm production cross section in Pb-Pb collisions we obtain $N_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}$ at mid-rapidity from the measured charm cross section $\text{d} \sigma_{c\bar{c}}/\text{d} y$ in pp collisions by multiplication with the appropriate nuclear thickness function for Pb-Pb collisions and taking into account nuclear modifications. The procedure is described in detail below. The pp data were measured at $\sqrt{s}= 5.02$ and 7 TeV at mid-rapidity~\cite{Adam:2016ich,Acharya:2017jgo,Acharya:2019mgn,Acharya:2019mno}. To apply to Pb-Pb collisions, the cross sections are multiplied with the nuclear thickness function and folded with a factor accounting for nuclear modification effects such as shadowing, energy loss or saturation effects. The estimate of this factor is based on the analysis of prompt $D^0$ and \ensuremath{\text{J}/\psi}\xspace production in p-Pb collisions at 5.02 and 8.16 TeV. We used the data from the LHCb collaboration~\cite{Aaij:2016jht,Aaij:2017cqq,Aaij:2017gcy} at forward rapidity, and of \ensuremath{\text{J}/\psi}\xspace production at mid-rapidity measured by the ALICE collaboration in pp and p-Pb collisions at 5.02 TeV~\cite{Acharya:2019mgn,Acharya:2019mno}. The $\sqrt{s}= 8.16$ and 7.0 TeV data are interpolated to 5.02 TeV using the measured data at other center-of-mass energies and the functional form obtained from perturbative QCD (FONLL) ~\cite{Cacciari:2015fta}. For mid-rapidity, we obtain a reduction factor of $0.65 \pm 0.12$, resulting in a value of $\text{d} \sigma_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}/ \text{d} y = 0.532 \pm 0.096$~mb. The corresponding factor for $y$ = 2.0-4.5 is $0.70 \pm 0.08$ leading to a differential charm production cross section of $\text{d} \sigma_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}/ \text{d} y = 0.334 \pm 0.053$~mb. To obtain the charm quark rapidity density for Pb-Pb collisions of a given centrality, the pp cross section is then multiplied with the mean nuclear thickness function $\left<T_\text{AA}\right>$ as described in~\cite{Abelev:2013qoq}. 
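The arithmetic of this conversion can be sketched in a few lines (Python). The cross section and nuclear modification factor are the values quoted above; the value of $\left<T_\text{AA}\right>$ is an illustrative stand-in, chosen here so that the charm-pair rapidity density quoted below is reproduced, and the published value should be used in practice:
\begin{verbatim}
sigma_pp  = 0.818   # interpolated pp cross section dsigma_ccbar/dy at
                    # 5.02 TeV, |y| < 0.5 (mb); implied by the text
shadowing = 0.65    # nuclear modification factor at mid-rapidity
T_AA      = 24.3    # mean nuclear thickness for 0-10% Pb-Pb (1/mb);
                    # illustrative stand-in, use the published value

dsigma_PbPb = sigma_pp * shadowing    # ~0.532 mb, as quoted above
dN_ccbar_dy = dsigma_PbPb * T_AA      # ~12.9 charm pairs per unit rapidity
print(f"{dsigma_PbPb:.3f} mb  ->  dN_ccbar/dy = {dN_ccbar_dy:.1f}")
\end{verbatim}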
We neglect, in this procedure based on results from pp and p-Pb collisions, potential contributions to the differential charm cross section in Pb-Pb collisions from thermal charm production as well as reductions from charm quark annihilation. For the LHC both contributions were estimated to be very small, and they are negligible at lower energies~\cite{BraunMunzinger:2000dv,Andronic:2006ky}. We note here that the charm balance equation should contain canonical corrections for more peripheral collisions or for lighter collision systems, i.e., whenever the number of charm pairs is not large compared to 1~\cite{Gorenstein:2000ck,BraunMunzinger:2003zd}. The charm balance Eq.~\ref{eq:balance} then needs to be modified accordingly. To that end we define \begin{equation} \begin{aligned} N_{oc,1} = \frac{1}{2} g_c V \sum_{h_{oc,1}^i} n^{{\rm th}}_i,\\ N_{oc,2} = \frac{1}{2} g_c^2 V \sum_{h_{oc,2}^k} n^{{\rm th}}_k,\\ N_{hc} = g_c^2 V \sum_{h_{hc}^j} n^{{\rm th}}_j, \label{eq:charm_numbers} \end{aligned} \end{equation} where $N_{oc,1}$ is the rapidity density of charm quarks bound in hadrons $h_{oc,1}^i$ with one valence charm quark, $N_{oc,2}$ is the rapidity density of charm quarks bound in hadrons $h_{oc,2}^k$ with two valence charm quarks, and $N_{hc}$ is the rapidity density of charm-(anti-charm) quark pairs bound in hidden charm hadrons $h_{hc}^j$. This defines the total rapidity density of charm quarks, neglecting triply charmed states, as $N_c^\text{tot} = N_{oc,1} + N_{oc,2} + N_{hc}$. Note that the value of $N_c^\text{tot}$ itself depends on the charm fugacity $g_c$. Then the modified charm balance equation using the canonical corrections reads: \begin{equation} N_{c\bar{c}}= \sum_{\alpha = 1,2} N_{oc,\alpha} \frac{I_\alpha(N_c^\text{tot})} {I_0(N_c^\text{tot})} \, + \, N_{hc}. \label{eq:canonical} \end{equation} Here, the $I_\alpha$ are modified Bessel functions. For hadrons with 2 or 3 charm quarks there are generally additional terms which are, however, very small because of the small charm densities, and are neglected here (see, e.g. sect. 3.2 in~\cite{BraunMunzinger:2003zd}). Solving Eq.~\ref{eq:canonical} for $g_c$ then determines the charm fugacity factor at 5.02 TeV. For central (0-10\%) Pb-Pb collisions and the above discussed differential charm cross section at mid-rapidity (implying $\mathrm{d} N_{c\bar{c}}/\mathrm{d} y$=12.95$\pm$2.27) this leads to $g_c = 29.6 \pm 5.2$, with the uncertainty determined by the uncertainty in the open charm cross section for Pb-Pb collisions. The rapidity density of open charm hadrons of type $ h_{oc,\alpha}^i $ with $\alpha=1,2$ charm quarks can then be obtained from the computed thermal densities $n_{i}^{\rm th}$ as: \begin{equation} \frac{\mathrm{d} N(h_{oc,\alpha}^i)}{\mathrm{d} y} =g_c^\alpha \, V \, n^{{\rm th}}_i \frac{I_{\alpha}(N_c^\text{tot})}{I_0(N_c^\text{tot})}. \label{eq:yieldsoc} \end{equation} The large value of $g_c = 29.6 \pm 5.2$ for central Pb-Pb collisions for charm production at mid-rapidity (see Fig.~\ref{fig:gc-scaling} in the following section) implies very large enhancements for charmed hadrons compared to what is obtained in the purely thermal case. In the absence of canonical corrections the enhancement factor is (nearly) 900 for doubly charmed, and $ 2.6 \cdot 10^4$ for triply charmed hadrons.
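The magnitude of these enhancements, and of the canonical correction factors quoted in the next paragraph, can be checked directly. In the following sketch (Python) the argument of the Bessel functions is taken as twice the charm-pair rapidity density, i.e. counting charm quarks plus antiquarks --- an assumption of this illustration that reproduces the factors quoted in the text:
\begin{verbatim}
from scipy.special import iv         # modified Bessel function I_alpha(x)

g_c, dN_ccbar = 29.6, 12.95          # 0-10% Pb-Pb at 5.02 TeV, mid-rapidity
x = 2.0 * dN_ccbar                   # charm quarks plus antiquarks (assumed)

for alpha in (1, 2, 3):
    enh   = g_c**alpha               # grand-canonical enhancement factor
    f_can = iv(alpha, x) / iv(0, x)  # canonical correction factor
    print(f"alpha={alpha}: g_c^alpha = {enh:8.3g}, I_a/I_0 = {f_can:.2f}")
\end{verbatim}
\noindent This yields $g_c^2 \approx 880$ and $g_c^3 \approx 2.6\cdot 10^4$, together with canonical factors of 0.98, 0.92 and 0.84 for $\alpha = 1, 2, 3$.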
For central Pb-Pb collisions at 5.02 TeV the canonical correction factors are in fact close to 1: 0.98, 0.92, and 0.84 for $\alpha = 1, 2, 3$ charm quarks respectively, for the central value of the differential charm cross section at mid-rapidity, see Fig.~\ref{fig:canonical} below. If these enhancement factors are realized in nature then even very massive triply charmed hadrons may come into reach experimentally. For hidden charm states Eq.~\ref{eq:yieldsoc} reduces to \begin{equation} \frac{\mathrm{d} N(h_{hc}^j)}{\mathrm{d} y} = g_c^2 \, V \, n^{{\rm th}}_j. \label{eq:yieldshc} \end{equation} The enhancement factors expressed in Eqs.~\ref{eq:yieldsoc} and \ref{eq:yieldshc} come about because of the assumption that all charm quarks reach thermal equilibrium at least for temperatures close to $T_{cf}$. In that case the heavy quarks are completely uncorrelated and the resulting statistical weight is just $g_c^\alpha$. We note that this implies deconfinement of the heavy quarks over the volume $V$, as discussed below. We also stress that all hadron rapidity densities discussed above are computed as rapidity densities for a volume and hence rapidity window of width $\Delta y =1$. The rationale behind this is that one cannot combine charm quarks into hadrons over large rapidity distances as they are causally disconnected: hadrons have a finite formation time $\tau_f \approx 1$ fm and large rapidity correlations can only be established at very early times $\tau \ll 1$ fm~\cite{Acharya:2019izy,Dumitru:2008wn}. The value of $\Delta y$ is somewhat arbitrary; a range of $\Delta y = 1 - 3$ was explored in the past and for colliders a weak dependence was found \cite{Andronic:2003zv}. We finally note the asymptotic form of the modified Bessel functions $I_\alpha(x)$. For small argument $x$ and order $\alpha$ this reads: \begin{equation} I_{\alpha}(x) \approx \frac{1}{\Gamma(\alpha + 1)} (x/2)^{\alpha} \label{eq:bessel} \end{equation} where $\Gamma$ is the Euler Gamma function. For large $x$ the modified Bessel functions approach \begin{equation} I_\alpha(x) \approx \frac{e^x}{\sqrt{2\pi x}}. \label{eq:bessel1} \end{equation} This implies that the canonical suppression disappears for large arguments $x$, i.e., the system has reached the grand-canonical limit. For small $x$, $I_0 \approx 1$ and the canonical suppression factor approaches $\frac{1}{\Gamma(\alpha + 1)} (x/2)^{\alpha}$. \subsection{Dependence on mass number of the colliding nuclei} \label{sec:A-dependence} In the following we provide information on how to also compute the yields for (multi\nobreakdash-)charm hadrons produced in lighter collision systems such as Xe-Xe, Kr-Kr, Ar-Ar and O-O. Of course, these calculations are valid as long as the charm quarks produced in initial hard collisions reach or closely approach kinetic equilibrium in the hot fireball formed in the collision. This has to be carefully checked when one plans to study the production of charm hadrons in such small systems. In addition, we have not included in these exploratory calculations any contributions due to corona effects. Their importance will increase as the colliding systems become smaller. For the system O-O, where the nuclear densities never reach a central plateau, we expect very substantial corrections which need to be studied carefully if one wants to look for QGP effects in such very light systems. For more discussion on the corona effect see section~\ref{sec:SHMc_pt} below.
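The system-size dependence of $g_c$ discussed in the following has to be obtained numerically from Eq.~\ref{eq:canonical}. A minimal sketch of such a determination is given below (Python); it keeps only the dominant single-charm term, takes the Bessel argument as the rapidity density of charm quarks plus antiquarks, $x = g_c V s_1$, and backs out the effective thermal density sum $s_1$ from the Pb-Pb values quoted above --- all assumptions of this illustration rather than the full SHMc machinery:
\begin{verbatim}
from scipy.special import iv
from scipy.optimize import brentq

def fcan(a, x):                           # canonical factor I_alpha/I_0
    return iv(a, x) / iv(0, x)

N_Pb, V_Pb, g_Pb = 12.95, 4997.0, 29.6    # central Pb-Pb inputs from the text

# back out the effective single-charm thermal density sum from Pb-Pb
x_Pb = brentq(lambda x: 0.5 * x * fcan(1, x) - N_Pb, 1e-6, 300.0)
s1 = x_Pb / (g_Pb * V_Pb)                 # effective density sum (fm^-3)

for A in (208, 129, 84, 40, 16):          # Pb, Xe, Kr, Ar, O
    N_A = N_Pb * (A / 208.0)**(4.0 / 3.0)     # hard production ~ A^(4/3)
    V_A = V_Pb * (A / 208.0)                  # fireball volume ~ A
    x_A = brentq(lambda x: 0.5 * x * fcan(1, x) - N_A, 1e-6, 300.0)
    g_A = x_A / (V_A * s1)
    g_gc = g_Pb * (A / 208.0)**(1.0 / 3.0)    # grand-canonical expectation
    # yield of a doubly charmed hadron relative to Pb-Pb, from the yield formula
    r2 = (g_A / g_Pb)**2 * (V_A / V_Pb) * fcan(2, x_A) / fcan(2, x_Pb)
    print(f"A={A:3d}: g_c={g_A:5.1f} (grand-canonical {g_gc:4.1f}), "
          f"2c-yield ratio {r2:.3f}")
\end{verbatim}
\noindent With these assumptions $g_c$ exceeds the ${\rm A^{1/3}}$ expectation by a few percent for Xe-Xe and Kr-Kr, by roughly 20\% for Ar-Ar, and very substantially for O-O, consistent with the behavior shown in Fig.~\ref{fig:gc-scaling} and discussed below.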
To understand the charm hadron yield dependence on mass number A of the colliding nuclei we first determine the A dependence of $g_c$. From the charm balance Eqs.~\ref{eq:balance} and~\ref{eq:canonical} we note that $N_{c\bar{c}} \propto {\rm A^{4/3}}$ since charm is produced in hard collisions and we are interested in central nuclear collisions~\cite{dEnterria:2003xac}. Noting further that the volume $V \propto$ A we immediately obtain that $g_c \propto {\rm A^{1/3}}$ in the grand-canonical limit. In the canonical limit, i.e., for small charm densities, one obtains $g_c \propto {\rm A^{-1/3}}$ using the properties of the modified Bessel functions near the origin (see Eqs.~\ref{eq:bessel} and \ref{eq:bessel1}). However, at LHC energies charm densities are not so small and the grand-canonical approximation is a good approximation for the heavier systems Xe-Xe and Kr-Kr and leads to a 20\% correction for Ar-Ar. The correction becomes large for the O-O system. In Fig.~\ref{fig:gc-scaling} we show the result of the A dependence of $g_c$ as obtained by numerical solution of Eq.~\ref{eq:canonical}. The rather strong deviation from the ${\rm A^{1/3}}$ dependence observed for the O-O system is caused by the changes in the canonical correction factor due to the transition from grand-canonical to canonical thermodynamics where the A dependence of $g_c$ is expected to approach the ${\rm A^{-1/3}}$ scaling as discussed above. For the rapidity range 2.5-4 the non-monotonic feature of the curves is more pronounced, as the system is deeper into the canonical regime, see Fig.~\ref{fig:canonical}. \begin{figure} \centering \includegraphics[scale=0.35]{./figs/gc_3cc_y0.pdf} \includegraphics[scale=0.35]{./figs/gc_3cc_y3.pdf} \vskip -0.4 cm \caption{The system-size (expressed as $\mathrm{A}^{1/3}$) dependence of the charm fugacity factor $g_c$ for the five different collision systems Pb-Pb, Xe-Xe, Kr-Kr, Ar-Ar, and O-O for rapidity $|y| < 0.5$ (left plot) and rapidity 2.5-4 (right plot). The band reflects the uncertainties of $\mathrm{d}\sigma_{c \bar c}/\mathrm{d} y$ indicated in the plots. For details see text.} \label{fig:gc-scaling} \end{figure} In Fig.~\ref{fig:canonical} we present the dependence on mass number A of the canonical correction factors $f_{can}$ for the production of charm hadron $h^i$ in A-A collisions. They are defined as: \begin{equation} f_{can}(\alpha,{\rm A}) = \frac{I_{\alpha}(N_c^\text{tot}({\rm A}))}{I_0(N_c^\text{tot}({\rm A}))}. \label{eq:f_can} \end{equation} The curves on the left and right side are again obtained at rapidity $|y| < 0.5$ and rapidity 2.5-4, respectively. They are evaluated for charm hadrons with the expression given in equation~\ref{eq:canonical}. The A dependence of $g_c$ needs to be obtained numerically and is displayed in Fig.~\ref{fig:gc-scaling} above. \begin{figure} \centering \includegraphics[scale=0.35]{./figs/In2I0_charm_3cc_y0.pdf} \includegraphics[scale=0.35]{./figs/In2I0_charm_3cc_y3.pdf} \vskip -0.4 cm \caption{Canonical correction factors for the five different collision systems Pb-Pb, Xe-Xe, Kr-Kr, Ar-Ar, and O-O at mid-rapidity $|y| < 0.5$ (left panel) and forward rapidity 2.5-4 (right panel) for open flavor hadrons with charm quantum number C. The bands reflect the uncertainties of $\text{d} \sigma_{c \bar c}/\text{d} y$ as indicated in the figure. 
For details see text.} \label{fig:canonical} \end{figure} With the A-dependence of $g_c$ and of the canonical correction factors at hand we can now compute the yield of any charmed hadron in the SHMc as a function of mass number A. In section~\ref{sec:results1} below we will present our results on yields and transverse momentum distributions. To get a more intuitive understanding of these results we assume, in the following, that the A dependence of $g_c$ can be described by the above grand-canonical relation $g_c \propto {\rm A^{1/3}}$. As can be seen from Fig.~\ref{fig:gc-scaling}, this is well fulfilled, at the better than 10\% (1\%) level, for A $\ge$ 40 (80). Keeping these small deviations in mind, we can provide a good estimate of the A dependence of charm hadron yields provided we stay with A $\ge$ 40, i.e., Ar-Ar collisions, by making use of Eq.~\ref{eq:yieldsoc} and the above defined canonical suppression factors $f_{can}$. This leads to the scaling relation \begin{equation} \frac{\text{d} N^{\rm AA}}{\text{d} y}(h^i)=\frac{\text{d} N^{\rm PbPb}}{\text{d} y}(h^i) \left(\frac{{\rm A}}{208}\right)^{(\alpha+3)/3} \frac{f_{can}(\alpha,{\rm A})}{f_{can}(\alpha,{\rm Pb})} \label{eq:scaling} \end{equation} for the production of hadron $h^i$ with $\alpha$ charm quarks in A-A collision systems. Using this relation and the yields for charm hadrons produced in Pb-Pb collisions as displayed in Table~\ref{tab:yields_tot}, see section~\ref{sec:results1} below, the yields of charm hadrons can be computed for lighter systems from Ar-Ar to Xe-Xe. For very light systems such as O-O the full approach as discussed above should always be used. In Fig.~\ref{fig:yields_a} the system size dependence of selected hadron yields is displayed for mid-rapidity (left panel) and forward rapidity (right panel). The band for each hadron species corresponds to the different charm production cross sections as indicated in the figure. Note the change in A dependence for open and hidden charm states as a consequence of the absence of the canonical suppression for the latter (compare Eqs.~\ref{eq:yieldshc} and \ref{eq:yieldsoc} above). \subsection{The canonical volume} \label{sec:can_vol} The volume $V$ appearing in Eq.~\ref{eq:balance} is usually set equal to the fireball volume at chemical freeze-out, determined by the requirement that the measured rapidity density of charged particles divided by $V$ equals the thermal density of charged particles after strong decays at chemical freeze-out~\cite{Andronic:2017pug}. Employing a connection between momentum rapidity and space-time rapidity, this volume, corresponding to one unit of rapidity, is a fraction of the entire fireball. To consider such a sub-volume is meaningful since, at high collision energies, equilibration is achieved only locally and not globally. This leads to the picture at freeze-out of a string of fireballs lined up in rapidity and filling the entire gap between the rapidities of the two beams (or between beam and target in fixed target mode). The thermal parameters of these fireballs could differ, albeit at LHC we expect a slow variation with rapidity. Only at low collision energies (AGS energy and below) should one think of one global thermalized system. We note in this context that in~\cite{Becattini:2005hb} it was assumed that the fireball volume comprises all rapidities up to but excluding beam and target rapidities, hence is significantly larger than what is discussed here.
\begin{figure} \centering \includegraphics[scale=0.35]{./figs/Yields_charm_A_3cc_y0.pdf} \includegraphics[scale=0.35]{./figs/Yields_charm_A_3cc_y3.pdf} \vskip -0.4 cm \caption{System size dependence of selected hadron species for mid-rapidity $|y| < 0.5$ (left panel) and forward rapidity 2.5-4 (right panel).} \label{fig:yields_a} \end{figure} When computing the canonical suppression factor $f_{can}$ defined in Eq.~\ref{eq:f_can}, a new scale enters the problem. To obtain the argument of the Bessel functions, the differential cross section or multiplicity needs to be multiplied with the width of a rapidity interval $\Delta y$ which then can be associated with a canonical volume $V_{can}$ over which the relevant quantum number is conserved. For the conservation of baryon number we have recently learned, in the context of net-proton fluctuations, that this volume $V_{can}$ may be significantly larger, not smaller than $V$~\cite{Braun-Munzinger:2019yxj,Acharya:2019izy}. Very recent results concerning canonical strangeness suppression~\cite{Cleymans:2020fsc} at the LHC point also in that direction. Since charm quarks are all produced in the very early phase of the collision we could expect that the canonical volume for charm $V_{can}$ is similarly large, implying a reduced role of canonical suppression and yields larger than computed with $V = V_{can}$. This would affect in particular predicted yields for multi\nobreakdash-charm hadrons from lighter collision systems such as Ar-Ar or O-O. In the numbers given below for (multiple) charm production yields canonical suppression is included. To stay on the conservative side and in the absence of measurements of $V_{can}$ for charm we have, in the following employed only one volume setting $V_{can} = V$, implying that the canonical corrections for the smallest collision systems could be less severe when more information on $V_{can}$ becomes available. \subsection{Charm hadron production and deconfinement of charm quarks} \label{sec:deconfinement} Early on it was realized~\cite{Andronic:2003zv,Andronic:2007bi,BraunMunzinger:2009ih} that a successful description of the measured yields of charmonia in the SHMc would imply deconfinement for charm quarks. The measurements at RHIC and, in particular, LHC energy lend support to this interpretation~\cite{Andronic:2017pug}. Here we briefly discuss what could be learned on deconfinement from analysis of multi-charm meson and, in particular, baryon production data. In the SHMc the production of hadrons with $\alpha$ charm quarks is enhanced by a factor $(g_c)^{\alpha}$ compared to what is expected in a purely thermal approach, see Eq.~\ref{eq:yieldsoc}. Since $g_c \approx 30$ for central Pb-Pb collisions, the expected enhancements for multi-charm hadron production are very substantial and produce a distinctive hierarchy in their yield pattern, as shown below. That pattern results only if the charm quarks making up the final hadron are uncorrelated prior to hadronization as is expected for fully deconfined ('no strings attached') charm quarks. We note that even the residual correlation imposed by overall baryon number and charm conservation will be very small if the measurement window is of order one unit in rapidity~\cite{Acharya:2019izy}. Production of multi-charm hadrons in the (confined) hadronic phase would also be very small as it would necessarily have to involve exotic multi-particle collisions. 
To illustrate this point, the following estimates are based on energy conservation and on masses of 4.8 GeV for $\Omega_{ccc}$~\cite{Zhao:2020jqu} and 3.62 GeV for $\Xi_{cc}$~\cite{Zyla:2020zbs}. For the most exotic case of $\Omega_{ccc}$ production a possible production path is via collisions such as $3D + m\pi \rightarrow \bar{p} + \Omega_{ccc}$ with $m$ = 3; energetically, $3m_D + 3m_\pi \approx 6.0$ GeV indeed exceeds $m_{\bar{p}} + m_{\Omega_{ccc}} \approx 5.7$ GeV. For the $\Xi_{cc}$ baryon the analogous rate equation reads $2D + m\pi \rightarrow \bar{p} + \Xi_{cc}$ with $m$ = 7. But many other processes such as $\Lambda_c + D\rightarrow \Xi_{cc} + \pi$ or $\Lambda_c+ 2D\rightarrow \Omega_{ccc} + \pi$ are imaginable. While the rates for all these processes will be enhanced compared to purely thermal estimates by a fugacity factor $(g_c)^{\alpha}$, they will, nevertheless, be very small because of the low $D$ meson and $\Lambda_c$ densities at chemical freeze-out, $1.2 \cdot 10^{-3}\,\text{fm}^{-3}$ (for $D^0$, the highest among the $D$ mesons) and $2.6 \cdot 10^{-4}\,\text{fm}^{-3}$, respectively, for $g_c = 29.6$, which enter at the same power $\alpha$. These rates will fall very rapidly with temperature during the hadronic expansion~\cite{BraunMunzinger:2003zz}. Also, the phase after chemical freeze-out is by construction not in equilibrium. How to constrain the rate for such multi-particle collisions is totally unclear due to the unknown amplitudes for these different possible many-body collision processes. Similar arguments apply for charmonia, where the dominant channel would be $D + \bar{D} \rightarrow J/\psi + \pi$. Here, even the extension to $\psi'$ involves at least one more unknown parameter. This is to be contrasted with the SHMc approach, where there are no free parameters. The experimental observation of a significant number of hadrons with multiple charm in relativistic nuclear collisions hence provides a unique opportunity to test the 'deconfinement' prediction and get quantitative information on the degree of deconfinement achieved in the hot fireball. The full predictions of the model, including the contribution from the low density corona, are presented for a selection of species in Table~\ref{tab:yields_tot} for Pb-Pb collisions at 5.02 TeV, for the 0-10\% and 30-50\% centralities (mid-rapidity values). For these hadrons, the production cross sections in pp collisions have recently been measured by ALICE at mid-rapidity \cite{Acharya:2021cqv,Acharya:2019mgn,Acharya:2020lrg,Acharya:2019lkw} and those are employed for the calculation of the corona component (we have employed the ratio $\psi(2S)/(J/\psi)$=0.15 \cite{Andronic:2017pug}). The model predictions for the core part for all systems for the two rapidity ranges are available in numerical form as an auxiliary file with the arXiv version of the publication.
\section{Charm hadron spectrum and SHMc}
\label{sec:SHMc_spec}
The spectrum of open charm hadrons incorporated in the SHMc includes all mesons and baryons established experimentally as given by the PDG \cite{Zyla:2020zbs}. This includes 27 D mesons and their anti-particles with angular momenta from 0 to 3 and masses up to 3 GeV. There are 36 established singly-charmed baryons and as many anti-baryons in the mass range up to 3.12 GeV. The known angular momenta are low, mostly 1/2 and 3/2 with one established 5/2 state. The thermal population of the charmed hadrons is strong enough that the density of the ground state $D^0$ is quadrupled by feeding from strong decays, while the $\Lambda_c$ density is increased by a factor of 5.
There has been discussion recently that the number of charmed baryons, in particular, could be significantly larger. Fourth order susceptibilities were constructed and evaluated in lQCD calculations \cite{Bazavov:2014yba} and compared to results from HRG calculations of the same quantities in the temperature range up to the pseudo-critical temperature. The ratios were chosen such that they are particularly sensitive to contributions from the charmed baryon sector in the HRG. It was found that the lQCD results are significantly (at least 40\%) above the HRG calculation based on the states established by the PDG in 2012, while adding to the HRG the charmed baryon states obtained from a lQCD calculation \cite{Padmanath:2013bla} resulted in good agreement up to the pseudo-critical temperature. The authors of \cite{Bazavov:2014yba} view this as evidence for so far unobserved charmed hadrons contributing to the thermodynamics in the crossover region. Indeed, while the spectrum of \cite{Padmanath:2013bla} is consistent with the number of known states in the mass range above the respective ground state, about 200 additional baryons with total angular momenta up to 7/2 are predicted. Most of these states are significantly higher in mass. For the positive parity states there is a mass gap of about 500-600 MeV, while the gap is only of the order of 400 MeV for the negative parity states (which are generally about 300 MeV higher in mass). The situation is different only for the negative parity $\Xi_c$ states, where the new states start right at the mass of the highest experimentally established state at 3123 MeV. Accordingly, at a freeze-out temperature $T_{cf}= 156.5$ MeV the thermal weights are significantly lower; a mass gap of 500 MeV alone corresponds to a Boltzmann suppression of $e^{-500/156.5} \approx 0.04$. Still, due to their large number and in part also higher degeneracy factors, the feeding of ground state charmed baryons could be significantly affected. In this context it is interesting to note that a wealth of new XYZ states was found at the LHC, while only 1 additional $\Lambda_c$, 2 $\Xi_c$ and 5 $\Omega_c$ states were newly discovered (compare e.g. the PDG2012 and PDG2020 compilations). Triggered by the surprisingly large fragmentation of charm into $\Lambda_c$ measured in pp collisions at 7 and 5.02 TeV by the ALICE collaboration \cite{Acharya:2017kfy,Acharya:2020uqi,Acharya:2020lrg}, He and Rapp \cite{He:2019tik} incorporated into a SHM calculation a hadron spectrum resulting from a relativistic quark model (RQM) calculation \cite{Ebert:2011kk} exhibiting a very large number of additional charmed baryons with angular momenta up to 11/2 and both parities. The additional charmed baryons from the RQM calculation have by and large smaller masses than those resulting from lQCD \cite{Padmanath:2013bla}, falling in part even into the mass range of the known states. Using this charmed baryon spectrum and a temperature of 170 MeV, the authors of \cite{He:2019tik} find a doubling of the $\Lambda_c$ ground state population as compared to the PDG spectrum and predict a yield in line with the ALICE experimental data. It should be noted that this poses a conceptual problem, because it implies that charmed baryons exist at a temperature significantly above the pseudo-critical temperature for the chiral phase transition, while this is explicitly not supported by lQCD calculations.
In \cite{Bazavov:2014yba} it is argued that cumulants of net charm fluctuations indicate that above $T_{pc}$ the charm degrees of freedom are no longer described by an uncorrelated gas of charmed hadrons, but that rather the emergence of deconfined charm states sets in just near the chiral crossover transition. On the other hand, Petreczky \cite{Petreczky:2020olb} notes that the ratio of fourth order baryon-charm susceptibilities around and above the pseudo-critical temperature of the chiral transition lies much above the HRG values but still below the free quark gas value, a fact that could be understood if charm-hadron-like excitations still existed above $T_{pc}$, possibly up to 200 MeV. This is not the baseline of the predictions in this publication, where deconfinement of all flavors at $T_{pc}$ is assumed. The predictions presented below will provide a stringent test of charm deconfinement and settle this discussion once a large enough dynamic range in mass and charm quantum number is covered by experimental data. Finally, we quote recent lQCD results \cite{Lorenz:2020uik} where comparisons of Euclidean correlators to perturbative spectral functions were found to be indicative of charmonium melting very close to $T_{pc}$. While the questions raised here are debated in the community, we want to give an indication in this publication of how the SHMc predictions given below would be affected by a large number of yet undiscovered charmed baryons behaving like simple resonances. To this end we have also performed calculations where the statistical weight of all excited charmed baryons was tripled; the corresponding change in the predictions of the SHMc is given in section \ref{sec:results1}, where hadron yields are presented. Finally, it should be noted that, even if the above plethora of charmed baryons exists, a treatment as simple resonances in the SHMc could be too naive and a situation similar to that in the light quark sector could arise. In a recent study~\cite{Andronic:2020iyg}, the SHM was augmented by 180 nonstrange and 300 strange baryons predicted by lQCD. When they were treated as simple additional resonances, their presence had a significant impact particularly on the proton yield, strongly deteriorating the agreement with experimental data. Proper treatment of the pion-nucleon interaction by the S-matrix approach, using all measured phase shifts \cite{Andronic:2018qqt}, completely cancelled out the effect of these additional states. This strong effect of the S-matrix approach could be traced \cite{Lo:2017lym} to non-resonant and repulsive components in the pion-nucleon interaction for some partial waves. Whether such a situation could arise in the charm baryon sector depends, among other things, on the widths of the additional states, and is currently completely unexplored. We have assumed that all additional resonances are narrow Breit-Wigner-type resonances.
\section{Transverse momentum spectra of charm hadrons}
\label{sec:SHMc_pt}
In the SHM, which is fitted to integrated particle yields, no assumption is made about the form of the momentum spectra of produced particles. Therefore the transverse momentum dependence must be supplied by additional modelling of the particle freeze-out.
In hydrodynamical modelling of heavy ion collisions the soft momentum part of the particle spectra is obtained by the Cooper-Frye~\cite{Cooper:1974mv} integral over the freeze-out surface, subsequently passing the result to a hadronic afterburner to perform resonance decays and possible hadronic rescattering. The blast-wave model~\cite{Schnedermann:1993ws,Florkowski:2010zz} is motivated by the same physics picture, but realized in a simpler, approximate way to generate the $\ensuremath{p_{\text{T}}}\xspace$ spectra. The thermal particle spectra are obtained from a simple freeze-out surface with a given freeze-out temperature and a parametrized radial velocity profile. This thermal blast-wave model has been used extensively in the past to fit and characterize the experimentally measured identified particle spectra~\cite{Abelev:2013vea,Acharya:2019yoi,Acharya:2020zji,Acharya:2018orn}. For boost-invariant and azimuthally symmetric freeze-out surfaces $d\sigma_\mu$, the Cooper-Frye integral can be reduced to a one-dimensional integral along the freeze-out contour in the $\tau$-$r$ plane~\cite{Schnedermann:1993ws,Florkowski:2010zz}:
\begin{align}\label{eq:Cooper-Frye}
& \frac{\mathrm{d}^2 N}{2\pi \ensuremath{p_{\text{T}}}\xspace d\ensuremath{p_{\text{T}}}\xspace dy} =\frac{2J+1}{(2\pi)^3}\int \mathrm{d} \sigma_\mu p^\mu f(p)\nonumber\\
&= \frac{2J+1}{(2\pi)^3} \int_0^{r_\text{max}}\!\! \mathrm{d} r \; \tau(r) r \left[ K^\text{eq}_1(\ensuremath{p_{\text{T}}}\xspace ,u^r) - \frac{\partial \tau}{\partial r} K^\text{eq}_2(\ensuremath{p_{\text{T}}}\xspace ,u^r) \right],
\end{align}
where $2J+1$ accounts for the spin degeneracy. Here we consider a freeze-out surface defined by a single-valued function $\tau(r)$ in the range $0<r<r_\text{max}$. The freeze-out kernels $K^\text{eq}_{1,2}(\ensuremath{p_{\text{T}}}\xspace ,u^r)$ can be calculated analytically for the Boltzmann distribution $f(p) = \exp(-\sqrt{m^2+p^2}/T)$ of initial particles on the freeze-out surface and take the well-known form in terms of modified Bessel functions~\cite{Schnedermann:1993ws,Florkowski:2010zz}
\begin{align}\label{eq:thkernel}
\begin{split}
K^\text{eq}_1(\ensuremath{p_{\text{T}}}\xspace , u^r)& = 4\pi m_\text{T} I_0\left(\frac{\ensuremath{p_{\text{T}}}\xspace u^r}{T}\right)K_1\left(\frac{m_\text{T} u^\tau}{T}\right)\\
K^\text{eq}_2(\ensuremath{p_{\text{T}}}\xspace , u^r)& = 4\pi \ensuremath{p_{\text{T}}}\xspace I_1\left(\frac{\ensuremath{p_{\text{T}}}\xspace u^r}{T}\right)K_0\left(\frac{m_\text{T} u^\tau}{T}\right),
\end{split}
\end{align}
where $m_\text{T}=\sqrt{m^2+\ensuremath{p_{\text{T}}}\xspace ^2}$ and $T$ is the (constant) freeze-out temperature. The 4-velocity component $u^r = \beta/\sqrt{1-\beta^2}$ is given in terms of the radial velocity $\beta(r)$, which is commonly parametrized by a power function with two parameters $\beta_\text{max}$ and $n$
\begin{equation}
\beta(r) = \beta_\text{max}\frac{r^n}{r_\text{max}^n}.\label{eq:beta}
\end{equation}
In this paper the spectra of charmed hadrons formed in the core, i.e. by hadronization of the hot QGP fireball, are evaluated by using the velocity profile from the (3+1)D viscous hydrodynamics code MUSIC with IP-Glasma initial conditions tuned to the light flavor hadron observables~\cite{Schenke:2010nt,Schenke:2012wb}. The velocity profile and the best fit with $\beta_\text{max}=0.62$ and $n=0.85$ for the 0-10\% centrality bin are shown in Fig.~\ref{fig:plotv} (we use $\beta_\text{max}=0.60$ and $n=0.85$ for the 30-50\% centrality bin).
The fit uncertainties of the parameters $\beta_{\text{max}}$ and $n$ are 0.005 and 0.05, respectively.
\begin{figure} \centering
\includegraphics[scale=0.35]{./figs/Beta_R_midy_v2.pdf}
\caption{Radial velocity profile on the freeze-out surface extracted from hydrodynamic simulations of central Pb-Pb collisions.}
\label{fig:plotv} \end{figure}
Different types of freeze-out surfaces have been used in the past, for example the constant Bjorken time freeze-out surface introduced in ref.~\cite{Schnedermann:1993ws}
\begin{align}
\tau(r) = \tau_\text{fo}
\end{align}
or the constant proper time surface of~\cite{Broniowski:2001uk}
\begin{align}
\tau(r)&=\sqrt{\tau_\text{fo}^2+r^2}.
\end{align}
In ref.~\cite{Broniowski:2001uk} the flow velocity was restricted to the Hubble-like form $u^\mu=x^\mu/\tau_\text{fo}$, parallel to the normal of the surface. For the parametrized velocity in Eq.~\ref{eq:beta}, $u^\mu$ is no longer proportional to $d\sigma^\mu$. However, one can consider a third type of surface for which this condition still holds, $\tau(r) = \tau_\text{fo}+\int_0^r \mathrm{d} r'\,\beta(r')$; using Eq.~\ref{eq:beta} we get
\begin{equation}
\tau(r)=\tau_\text{fo} + \frac{r\beta(r)}{n+1}.
\end{equation}
The three freeze-out surfaces are depicted in Fig.~\ref{fig:contour} (left). Without loss of generality, the freeze-out time is taken to be equal to $\tau_\text{fo}=r_\text{max}$, and $r_\text{max}$ itself can be determined by requiring the freeze-out volume per unit rapidity
\begin{align}
V &= 2\pi\int_0^{r_\text{max}}\! \mathrm{d} r \; r \tau(r)u^\tau\left[1 - \beta(r)\frac{\partial \tau}{\partial r}\right]
\end{align}
to be equal to a given value, e.g. $V=4997\,\text{fm}^3$ in central Pb-Pb collisions. Note, however, that the integration variable $r$ can be rescaled to $x = r/r_\text{max}$, with the result that $r_\text{max}^3$ appears as a normalization in front of the integral. Since we replace the overall normalization by that obtained from the SHMc, knowledge of $r_\text{max}$ is not required, and the only parameters left are the dimensionless parameters $\beta_{\text{max}}$ and $n$, as discussed above. As we did in a previous publication for the J/$\psi$ spectrum~\cite{Andronic:2019wva}, the spectra for various charmed hadrons are computed using this velocity profile as input for a blast-wave parameterization in terms of temperature, flow velocity profile and mass of the hadron. The temperature we use is the chemical freeze-out temperature $T_{cf} = 156.5\,\text{MeV}$ obtained from fitting the yields of light flavor hadrons and nuclei as measured by ALICE for Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 2.76$ TeV~\cite{Andronic:2017pug,Andronic:2018qqt}. We studied the effects of the uncertainties of the blast wave parameters $\beta_{\text{max}}$ and $n$ on the hadron spectra. The resulting variations in the spectra are less than 10\% and in the ratios to $D^0$ less than 3\%. In Fig.~\ref{fig:contour} (right) we show the $D^{0}$ spectra for the three freeze-out surfaces. We see that the difference in the absolute spectra is small and lies within the uncertainty band, which is mostly due to the uncertainty in $g_c$ at these low momenta. In addition, given the still large experimental uncertainties, we do not expect the precise form of the freeze-out surface to be the most important factor, and we will use a constant freeze-out time surface as the default choice. We emphasize here that for particle ratios, e.g. $\ensuremath{\Lambda_{\text{c}}}\xspace/D^0$, even this small difference mostly cancels.
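For orientation, the blast-wave integral defined by Eqs.~\ref{eq:Cooper-Frye}-\ref{eq:beta} is straightforward to evaluate numerically. The following minimal Python sketch does so for the default constant-$\tau$ surface (where the $K^\text{eq}_2$ term drops out), using the $D^0$ mass and the 0-10\% flow parameters quoted above; the overall normalization is left arbitrary, since it is fixed by the SHMc yield:
\begin{verbatim}
import numpy as np
from scipy.special import i0, k1
from scipy.integrate import quad

T = 0.1565                # chemical freeze-out temperature [GeV]
beta_max, n = 0.62, 0.85  # flow parameters for 0-10% centrality
m = 1.865                 # example species: D0 mass [GeV]

def spectrum(pT):
    """dN/(2 pi pT dpT dy) up to a constant, with x = r/r_max."""
    mT = np.sqrt(m*m + pT*pT)
    def integrand(x):
        b  = beta_max * x**n           # beta(r), Eq. (beta)
        ur = b / np.sqrt(1.0 - b*b)    # u^r
        ut = np.sqrt(1.0 + ur*ur)      # u^tau
        # K1^eq kernel of Eq. (thkernel); K2^eq drops for dtau/dr = 0
        return x * mT * i0(pT*ur/T) * k1(mT*ut/T)
    return quad(integrand, 0.0, 1.0)[0]

for pT in (0.5, 1.0, 2.0, 4.0):
    print(pT, spectrum(pT))   # falls roughly exponentially in m_T
\end{verbatim}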
\begin{figure} \centering
\includegraphics[width=0.49\linewidth]{figs/Cent010/SurfaceComparison_Model.pdf}
\includegraphics[width=0.49\linewidth]{figs/Cent010/SurfaceComparison_Spectra.pdf}
\caption{Left: freeze-out surface comparison, where $\tau_\text{fo}=r_\text{max}$. Right: $D^0$ spectra for different freeze-out surfaces. The shaded band is due to the normalization uncertainty in $g_c$. Experimentally measured points and their uncertainties~\cite{Acharya:2018hre} are shown for reference.}
\label{fig:contour} \end{figure}
One of the limitations of the standard blast-wave model is that it does not include the momentum modification of particle spectra due to the feed-down caused by resonance decays. Recently, a very efficient way of computing such modifications was derived~\cite{Mazeliauskas:2018irt} and applied in blast-wave fits with resonance decay feed-down~\cite{Mazeliauskas:2019ifr} and hydrodynamic simulations~\cite{Devetak:2019lsk}. Here we compute the momentum resolved decay feed-down to long-lived charmed mesons and baryons using the \texttt{FastReso} computer code~\cite{FastReso}. In total we perform calculations for 76 $2$-body and 10 $3$-body decays of charmed mesons and baryons. In practice, this procedure replaces the thermal Boltzmann freeze-out kernels in Eq.~\ref{eq:Cooper-Frye} with numerically computed total final particle kernels. We use the same temperature and radial velocity profiles as in the standard blast-wave model. In Fig.~\ref{fig:FdCorrection} (left) we show the ratio of the full decay spectra of charmed hadrons to their initial thermal spectra. In addition, in Fig.~\ref{fig:FdCorrection}~(right) we show selected decay-channel contributions to the $\ensuremath{\Lambda_{\text{c}}}\xspace$ spectrum. The feed-down contributions preferentially accumulate at low momentum and, for $\ensuremath{\Lambda_{\text{c}}}\xspace$, can be as large as 5 times the thermal spectrum. The dashed lines in Fig.~\ref{fig:FdCorrection} (left) show the ratio of full over thermal $\ensuremath{p_{\text{T}}}\xspace $-integrated yields in the SHMc. These feed-down factors were used previously to scale the thermal spectra without accounting for the $\ensuremath{p_{\text{T}}}\xspace $ dependence of the feed-down. One can see rather good agreement between the naive and exact scaling of the spectra for $\ensuremath{p_{\text{T}}}\xspace \lesssim 3\,\text{GeV}$, where most of the particles are. As low momentum is the only region where core charmed hadron production is dominant, we find in practice very small differences between the full decay spectra and the scaled thermal spectra in this momentum range. Nevertheless, in the plots below we will use the spectra obtained with decay kernels from \texttt{FastReso}.
\begin{figure} \centering
\includegraphics[width=.49\linewidth]{./figs/RatiosToThermal.pdf}
\includegraphics[width=.49\linewidth]{./figs/LcPartial.pdf}
\caption{Left: ratios of different particle spectra with feed-down contribution to thermal spectra (note that the corona contribution is not included here). Dashed lines correspond to the ratio of integrated yields (these ratios were previously used to scale thermal spectra in the SHMc). Right: feed-down contribution to $\Lambda_c^+$ from different decay channels. For details see text.}
\label{fig:FdCorrection} \end{figure}
Finally, the high momentum power-law tail actually observed in experimental particle spectra is not described by hydrodynamics. Instead it can be modelled using a core-corona picture~\cite{Andronic:2019wva}.
Even in nucleus-nucleus collisions at small impact parameter, a number of nucleon-nucleon collisions take place in the so-called corona region, where the overlap density is a small fraction of the maximum density achieved in the collision. In this overlap volume, where nucleons undergo on average one collision or fewer, we assume that no QGP is formed and, hence, treat the collisions as $\ensuremath{\rm pp}$-like. In the core part, by contrast, we assume full thermalization of the produced charm quarks. We define the corona as the region where the nucleon density is below 10\% of the central density $\rho_0$. In a heavy nucleus at rest the central nucleon number density is $\rho_0 = 0.16\,{\rm fm}^{-3}$. The \ensuremath{p_{\text{T}}}\xspace shape of the cross section measured in $\ensuremath{\rm pp}$ collisions is parametrized by
\begin{equation}
\frac{\mathrm{d}^2\sigma^{\ensuremath{\rm pp}}}{\mathrm{d} y \mathrm{d}\ensuremath{p_{\text{T}}}\xspace } = C \times \frac{\ensuremath{p_{\text{T}}}\xspace}{(1+(\ensuremath{p_{\text{T}}}\xspace/p_0)^2)^n},
\label{eq:ppFit}
\end{equation}
where the coefficients $C$, $p_0$ and $n$ are obtained from a fit to experimental distributions for each particle species~\cite{Acharya:2021cqv, Acharya:2019mgn, Acharya:2020lrg}, and the total integral of the function is set to the experimentally measured integrated cross section $\mathrm{d}\sigma/\mathrm{d} y$ (for $n>1$ the integral evaluates to $C\,p_0^2/[2(n-1)]$, which fixes the normalization $C$). The fit is found to describe the measured cross sections well within the uncertainties in the whole \ensuremath{p_{\text{T}}}\xspace range considered. We then scale the $\ensuremath{\rm pp}$ differential cross section by the overlap function $T_\text{AA}^\text{corona}$ to account for the number of binary nucleon-nucleon collisions in the corona. In summary, for each of the charmed hadrons under consideration the \ensuremath{p_{\text{T}}}\xspace{} spectra are obtained by summing the soft momentum spectrum from the blast-wave model with resonance decays and the high momentum tail from the corona part. The uncertainty bands are obtained by varying $g_c$. In addition, the uncertainty on the corona part also includes the uncertainty of the fit to the pp data~\cite{Acharya:2021cqv,Acharya:2019mgn,Acharya:2020lrg}. This uncertainty is assumed to be uncorrelated for different particle species and is the dominant source of uncertainties for particle spectra and their ratios at high \ensuremath{p_{\text{T}}}\xspace, although it cancels for $R_\text{AA}$.
\section{Results for Pb-Pb and lighter collision systems}
\label{sec:results1}
\begin{figure*} \centering
\includegraphics[width=.49\linewidth]{figs/Cent010/Spectra_D0AndLambda.pdf}
\includegraphics[width=.49\linewidth]{figs/Cent010/Raa_D0AndLambda.pdf}
\caption{Spectra (left) and $R_{\rm AA}$ (right) of $\ensuremath{\text{D}^{\text{0}}}\xspace$ mesons (top) and $\Lambda_{\rm c}$ baryons (bottom) in Pb-Pb collisions at \cme{5.02} and 0-10\% centrality. Pb-Pb data for D-meson distributions taken from~\cite{Acharya:2018hre}. The pp data needed to compute the corona part are taken from~\cite{Acharya:2021cqv,Acharya:2020lrg}. The model band widths at low and high \ensuremath{p_{\text{T}}}\xspace are driven by the uncertainties of $g_{c}$ and of the pp spectra fits, respectively, as described in the text.}
\label{fig:spectra_1} \end{figure*}
In the following we will describe predictions from the SHMc as well as the comparison of SHMc results with the currently available data.
For simplicity, we will only consider Pb-Pb collisions at \cme{5.02} and 0-10\% centrality; predictions for 30-50\% centrality are given in Appendix~\ref{sec:SemiCentralPredictions}. The model predictions for all particle species and the two centrality bins are available in numerical form as an auxiliary file with the arXiv version of the publication. By far the most extensive series of measurements exists for $D$ mesons produced in Pb-Pb collisions, see~\cite{Acharya:2018hre}.
\subsection{Transverse momentum distributions}
In Fig.~\ref{fig:spectra_1} we show the comparison between the SHMc predictions and data for spectra and the nuclear modification factor $R_{AA}$ as a function of transverse momentum $p_{\rm T}$. The transverse momentum dependence is obtained as explained in detail in section~\ref{sec:SHMc_pt} above. Note that there are no new parameters used here apart from the hydrodynamics input discussed in section~\ref{sec:SHMc_pt}. The transverse momentum spectrum for $D^0$ mesons is very well described, in particular in the purely thermal (``core") region for $p_{\rm T} \le 4$ GeV. In the transition region between core and corona, as well as for the high momentum tail, we notice that the data are under-predicted for both the $p_{\rm T}$ spectrum and the $R_{AA}$. This suggests that the corona description is somewhat schematic and could be further optimized. The corresponding distributions for the $\Lambda_c$ baryon are displayed in the lower panels of Fig.~\ref{fig:spectra_1}. We note that these spectra and distributions are obtained with the unmodified charm resonance spectrum discussed below.
\begin{figure*} \centering
\includegraphics[width=0.9\textwidth]{./figs/Cent010/RatiosToD0_MultiPanels.pdf}
\caption{Ratio of charmed hadron spectra, normalized to the $D^0$ spectrum from SHMc + FastReso + corona in Pb-Pb collisions at \cme{5.02} and 0-10\% centrality, in comparison to ALICE data~\cite{Acharya:2018hre}. The pp data needed to compute the corona part are taken from~\cite{Acharya:2021cqv,Acharya:2019mgn,Acharya:2020lrg}. The model band widths at low and high \ensuremath{p_{\text{T}}}\xspace are driven by the uncertainties of $g_{c}$ and of the pp spectra fits, respectively, as described in the text.}
\label{fig:spectra_2} \end{figure*}
In Fig.~\ref{fig:spectra_2} we show the corresponding distributions for $D^+$, $D^{*+}$, $D^{+}_{s}$ and $\Lambda_c$, plotted as ratios to the $D^0$ spectrum. In this normalized plot the charm cross section, which determines the charm fugacity parameter $g_c$, is eliminated. For the three D-mesons we observe very good agreement with the experimental data. For the $\Lambda_c$ baryon the structure of the distribution changes quite strongly: a clear maximum appears near $p_{\rm T} = 4.5$ GeV. Within the framework of the SHMc this maximum appears as a consequence of a superposition of collective flow (hydrodynamic expansion) and a change of hadronization regime from bulk (statistical hadronization) to jets, much as is also observed for the $\Lambda/K$ ratio in the (u,d,s) sector~\cite{Abelev:2013xaa}.
\subsection{Integrated yields}
In this section we discuss results for momentum integrated particle yields, which, for the constant temperature freeze-out assumed in the SHMc, do not depend on the details of the freeze-out surface and velocity parametrizations discussed in section~\ref{sec:SHMc_pt}.
\begin{figure} \centering
\includegraphics[width=0.49\textwidth]{./figs/Yields_charm_Pb-Pb_y0.pdf}
\includegraphics[width=0.49\textwidth]{./figs/Yields2df_charm_Pb-Pb_y0.pdf}
\vskip -0.4 cm
\caption{Mass dependence of yields \ensuremath{\der N / \der y}~ for various hadron species for Pb-Pb collisions at mid-rapidity. The left panel shows absolute yields and the right panel yields per degree of freedom ($2J+1$). In this plot also the primordial (prior to decays) values are shown as lines, corresponding to hadrons with charm-quark or anti-quark content of 0, 1, 2, and 3 (respective powers of $g_c$).}
\label{fig:yields_m} \end{figure}
\begin{figure} \centering
\includegraphics[scale=0.4]{./figs/YieldsToT_charm_Pb-Pb_y0.pdf}
\vskip -0.4 cm
\caption{Total (core+corona) yields \ensuremath{\der N / \der y}~ for various hadron species for central (0-10\%) Pb-Pb collisions at mid-rapidity. Red points correspond to the standard mass spectrum and total open charm cross section as discussed in the text. The open points were obtained with an enhanced total open charm cross section, implemented via tripled statistical weights for excited charmed baryons. For more details see text.}
\label{fig:yields_tot} \end{figure}
\begin{table*}
\begin{tabular}{l | l l l }
Particle & $\mathrm{d} N/\mathrm{d} y$ core (SHMc) & \, $\mathrm{d} N/\mathrm{d} y$ corona & $\mathrm{d} N/\mathrm{d} y$ total \\ \hline \hline
& \multicolumn{3}{c}{0-10\%} \\ \hline
$D^{0}$ & 6.02 $\pm$ 1.07 & \, 0.396 $\pm$ 0.032 & 6.42 $\pm$ 1.07 \\
$D^{+}$ & 2.67 $\pm$ 0.47 & \, 0.175 $\pm$ 0.026 & 2.84 $\pm$ 0.47 \\
$D^{*+}$ & 2.36 $\pm$ 0.42 & \, 0.160 +0.048$-$0.022 & 2.52 $\pm$ 0.42 \\
$D_{s}^{+}$ & 2.15 $\pm$ 0.38 & \, 0.074 +0.024$-$0.015 & 2.22 $\pm$ 0.38 \\
$\Lambda_{c}^{+}$ & 1.30 $\pm$ 0.23 & \, 0.250 $\pm$ 0.028 & 1.55 $\pm$ 0.23 \\
$\Xi_{c}^{0}$ & 0.263 $\pm$ 0.047 & \, 0.090 $\pm$ 0.035 & 0.353 $\pm$ 0.058 \\
J/$\psi$ & 0.108 +0.041$-$0.035 & \, (5.08$\pm$0.37)$\cdot$10$^{-3}$ & 0.113 +0.041$-$0.035 \\
$\psi(2S)$ & (3.04 +1.2$-$1.0)$\cdot$10$^{-3}$ & \, (7.61$\pm$0.55)$\cdot$10$^{-4}$ & (3.80 +1.2$-$1.0)$\cdot$10$^{-3}$ \\ \hline
& \multicolumn{3}{c}{30-50\%} \\ \hline
$D^{0}$ & 0.857 $\pm$ 0.153 & \, 0.207 $\pm$ 0.017 & 1.06 $\pm$ 0.154 \\
$D^{+}$ & 0.379 $\pm$ 0.068 & \, 0.092 $\pm$ 0.014 & 0.471 $\pm$ 0.069 \\
$D^{*+}$ & 0.335 $\pm$ 0.060 & \, 0.084 +0.025$-$0.011 & 0.419 +0.065$-$0.061 \\
$D_{s}^{+}$ & 0.306 $\pm$ 0.055 & \, 0.039 +0.013$-$0.008 & 0.344 $\pm$ 0.056 \\
$\Lambda_{c}^{+}$ & 0.185 $\pm$ 0.033 & \, 0.131 $\pm$ 0.015 & 0.316 $\pm$ 0.036 \\
$\Xi_{c}^{0}$ & 0.038 $\pm$ 0.007 & \, 0.047 $\pm$ 0.018 & 0.084 $\pm$ 0.020 \\
J/$\psi$ & (1.12 +0.37$-$0.32)$\cdot$10$^{-2}$ & \, (2.65$\pm$0.19)$\cdot$10$^{-3}$ & (1.39 +0.37$-$0.32)$\cdot$10$^{-2}$ \\
$\psi(2S)$ & (3.16 +1.04$-$0.89)$\cdot$10$^{-4}$ & \, (3.98$\pm$0.29)$\cdot$10$^{-4}$ & (7.14 +1.08$-$0.94)$\cdot$10$^{-4}$ \\
\end{tabular}
\caption{Summary of the calculations of yields at mid-rapidity for open charm and charmonia in Pb-Pb at 5.02 TeV, 0-10\% (upper part) and 30-50\% (lower part) centralities.
For the corona, we used as inputs the production cross sections $\mathrm{d} \sigma/\mathrm{d} y$ as measured by ALICE in pp collisions \cite{Acharya:2019mgn,Acharya:2021cqv,Acharya:2020lrg,Acharya:2019lkw} (and assumed for $\Xi_c^0$ $\mathrm{d} \sigma/\mathrm{d} y$=0.10$\pm$0.04 mb and $\psi(2S)/\mathrm{J}/\psi = 0.15$) and $T_\text{AA}^\text{corona}$=0.90 mb$^{-1}$ and 0.47 mb$^{-1}$, respectively (for a corona corresponding to $\rho<0.1\rho_0$). For details see text.}
\label{tab:yields_tot}
\end{table*}
In Fig.~\ref{fig:yields_m} we show the mass dependence of the rapidity distributions \ensuremath{\der N / \der y}~ for selected charm hadrons at mid-rapidity. The selection ranges from $D^0$ mesons at the lower masses to many multi-charm states, including the hypothetical $\Omega_{ccc}$ baryon at the high mass end of the plot. All are stable against decays via strong interactions. Already the left plot exhibits clear structures, whose origin becomes apparent in the plot on the right hand side, where the yields are divided by the angular momentum degeneracy. Since we are in the 'Boltzmann' regime, where all masses $M$ are much larger than the temperature $T_{cf} = 156.5$ MeV, the degeneracy-normalized particle yields scale in the SHMc as $\propto M^{3/2} \exp({-M/T_{cf}})$. In a log plot over 7 decades this function looks essentially like a straight line for fixed charm quark number. The color code separates particles with $\alpha = 1, 2, 3$ charm quarks. The line at the far left corresponds to $\alpha =0$ and coincides with that determined for (u,d,s) hadrons in~\cite{Andronic:2017pug}. The deviation clearly visible for $\alpha = 1$ is due to feeding from hadronically unstable resonances. The grouping into three distinct regions is what is called in the introduction 'the charm hadron hierarchy'; its steepness is quantified below. In Fig.~\ref{fig:yields_tot} we show the total yields, the sum of core and corona components, for selected hadron species for which the data in pp collisions, used for the calculations of the corona component, are available. We include in the plot a scenario of charm baryon enhancement, implemented via tripled statistical weights for excited charmed baryons, which leads to an increase of the total thermal charm densities by 18\%. Note that the additional charmed baryon resonances are all assumed to be narrow Breit-Wigner-type resonances, as discussed in section~\ref{sec:SHMc_spec}. We demonstrate that the equivalent increase in the input charm cross section (from 0.53 to 0.63 mb) leads to a significant increase in the predicted yield for the charmed baryons, while the yields of all other species remain unchanged\footnote{After the completion of this work, the ALICE collaboration released~\cite{Acharya:2021set} a charm cross section at mid-rapidity for pp collisions at 5.02 TeV based on the measurement of charmed mesons and baryons. Due to a significantly larger fragmentation into charmed baryons as compared to measurements in $\rm{e}^+\rm{e}^-$ and ep collisions, the resulting charm cross section is 40\% larger than the value on which the current calculations are based.}. The numerical values for the case of the PDG hadron spectrum are shown in Table~\ref{tab:yields_tot}. One notices that some of the uncertainties are asymmetric; this originates either from the SHMc, as the $g_c$ values carry (slightly) asymmetric uncertainties, or from the corona component via the experimental production cross sections for pp collisions.
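The steepness of this charm hadron hierarchy can be made explicit. Combining the Boltzmann scaling discussed above with the $(g_c)^{\alpha}$ enhancement, the SHMc ratio of the primordial yields of two states with masses $M_{1,2}$, spins $J_{1,2}$ and charm quark numbers $\alpha_{1,2}$ reads
\begin{equation}
\frac{\text{d} N_2/\text{d} y \,/\, (2J_2+1)}{\text{d} N_1/\text{d} y \,/\, (2J_1+1)} \simeq g_c^{\,\alpha_2-\alpha_1} \left(\frac{M_2}{M_1}\right)^{3/2} \text{e}^{-(M_2-M_1)/T_{cf}}.
\end{equation}
A mass difference of 1 GeV alone thus suppresses the degeneracy-normalized yield by $\text{e}^{-1000/156.5} \approx 1.7\cdot 10^{-3}$, while each additional charm quark contributes a compensating factor $g_c \approx 30$, producing the grouping visible in Fig.~\ref{fig:yields_m}.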
In Table~\ref{tab:yields_canonical12} we have compiled the expected luminosity, the rapidity density for $\Omega_{ccc}$ production, the inelastic cross section corresponding to the 10\% most central collisions, and the expected yields for $\Omega_{ccc}$ production in 5 different collision systems at top LHC energy and for a run time of $10^6$ s. The yields follow from $N = \mathcal{L}\,t\,\sigma_{\text{inel}}(10\%)\,(\mathrm{d} N_{\Omega_{ccc}}/\mathrm{d} y)\,\Delta y$; for Pb-Pb, e.g., $3.8\cdot10^{27}\,\text{cm}^{-2}\text{s}^{-1}\times 10^{6}\,\text{s}\times 800\,\text{mb} \approx 3.0\cdot 10^{9}$ central events, which, multiplied by $\mathrm{d} N/\mathrm{d} y = 1.25\cdot 10^{-4}$, reproduces the quoted yield of $3.8\cdot 10^{5}$. The beam parameters are from~\cite{Citron:2018lsq}; the rapidity densities and yields for $\Omega_{ccc}$ production are our predictions. The predictions are per unit rapidity for the 10\% most central collisions but contain no efficiency and acceptance corrections. Nevertheless, substantial yields can be expected. Even though the expected luminosity increases by 4 orders of magnitude when moving from Pb-Pb to O-O, the yield in O-O is comparable to that for Pb-Pb, and that at a price of about 10 collisions per bunch crossing for O-O~\cite{Citron:2018lsq}. Furthermore, corona effects will be much increased when going to such a small system. Which of the systems is optimal for QGP-related research will have to be carefully evaluated.
\setlength{\tabcolsep}{4pt}
\begin{table*}
\begin{tabular}{l|ccccc}
& O-O & Ar-Ar & Kr-Kr & Xe-Xe & Pb-Pb\\ \hline\hline
$\sigma_{\text{inel}}(10\%)$ (mb) & 140 & 260 & 420 & 580 & 800 \\
$T_{\text{AA}}(0-10\%)$ (mb$^{-1}$) & 0.63 & 2.36 & 6.80 & 13.0 & 24.3 \\
$ \mathcal{L} ({\text{cm}^{-2}\text{s}^{-1}}) $ & $4.5 \cdot 10^{31}$ & $2.4 \cdot 10^{30} $ & $1.7\cdot 10^{29}$& $3.0 \cdot 10^{28} $& $3.8 \cdot 10^{27}$ \\ \hline
&&&$\mathrm{d} \sigma_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}/\mathrm{d} y = 0.53\,\text{mb}$ & &\\ \hline
$\mathrm{d} N_{\Omega_{ccc}}/\mathrm{d} y$ & $8.38 \cdot 10^{-8} $ & $1.29 \cdot 10^{-6} $ & $1.23 \cdot 10^{-5} $& $4.17 \cdot 10^{-5}$ & $1.25 \cdot 10^{-4}$ \\
$\Omega_{ccc}$ Yield & $5.3 \cdot 10^{5}$& $8.05 \cdot 10^5 $& $8.78 \cdot 10^5$ & $7.26 \cdot 10^5$ & $3.80 \cdot 10^5 $ \\ \hline
&&&$\mathrm{d} \sigma_{\ensuremath{\text{c}\overline{\text{c}}}\xspace}/\mathrm{d} y = 0.63\,\text{mb}$ & &\\ \hline
$\mathrm{d} N_{\Omega_{ccc}}/\mathrm{d} y$ & $1.44 \cdot 10^{-7} $ & $2.33 \cdot 10^{-6} $ & $2.14 \cdot 10^{-5} $& $7.03 \cdot 10^{-5}$ & $2.07 \cdot 10^{-4}$ \\
$\Omega_{ccc}$ Yield & $9.2 \cdot 10^{5}$& $1.45 \cdot 10^6 $& $1.53 \cdot 10^6$ & $1.22 \cdot 10^6$ & $6.29 \cdot 10^5 $
\end{tabular}
\caption{Expected yields of $\Omega_{ccc}$ baryons for a run of $10^6$ s for various collision systems at the LHC energy $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV, with full canonical suppression. All calculations are for mid-rapidity with $\Delta y = 1$.}
\label{tab:yields_canonical12}
\end{table*}
\section{Conclusions and Outlook}
In the present paper we have explored a range of predictions made within the framework of the SHMc, with focus on hadrons with open charm. Most important is the comparison to recent ALICE measurements on $D$ mesons~\cite{Acharya:2018hre} and predictions for $\Lambda_c$ baryons. As baseline for SHMc predictions we kept the chemical freeze-out temperature $T_{cf} = 156.5$ MeV determined from the analysis of (u,d,s) hadrons. As the only additional input we used the open charm cross section based on pp measurements from the ALICE and LHCb collaborations, extrapolated to the Pb-Pb system using hard collision scaling and a correction for nuclear modifications obtained from an analysis of recently measured p-Pb open and hidden charm data. The transverse momentum distributions were obtained in a novel, hydro-inspired approach including resonance decays.
Without any further assumptions and parameters, all $D$ meson yields and low transverse momentum distributions in Pb-Pb collisions are well described. The situation is less well settled in the $\Lambda_c$ baryon sector. Recent ALICE measurements in pp and p-Pb collisions~\cite{Acharya:2020uqi} indicate enhanced production of $\Lambda_c$ baryons compared to what was expected based on $e^+e^-$ and $ep$ data on fragmentation into charmed baryons. For an account of ALICE preliminary data, including those from Pb-Pb collisions, see Fig.~4 in~\cite{Loizides:2020tey}. These preliminary data have led to new charm baryon production models including ``missing" charm baryons~\cite{He:2019vgs}. We have therefore provided predictions for $\Lambda_c$ production in Pb-Pb collisions using the current experimental information on the charm baryon resonance spectrum~\cite{Zyla:2020zbs} as well as with an increased number of charm baryons. New data on this puzzling situation are expected soon from both the CERN ALICE and LHCb collaborations. The success of the description of yields and low transverse momentum spectra of open charm hadrons by the SHMc also demonstrates that the hadronization of open and hidden charm takes place at or close to the QCD phase boundary. It further demonstrates that open and hidden charm data can be reproduced with one common hadronization mechanism. Our predictions for Pb-Pb collisions imply very large enhancements for hadrons with 2 or 3 charm quarks compared to pure thermal production with charm fugacity $g_c =1$. The enhancement will be predominantly visible at low transverse momentum \ensuremath{p_{\text{T}}}\xspace{}, see, e.g., Fig.~\ref{fig:spectra_1}. For multi-charmed baryons these enhancements lead to an impressive and quite spectacular hierarchy, see Fig.~\ref{fig:yields_m}. To test these predictions is a challenge for future charm production experiments in LHC Run3 and Run4 and ultimately one of the important goals for the ALICE3 'all Silicon' experiment~\cite{Adamova:2019vkf}. Fundamental new information on the hadronization and deconfinement of charm quarks should be the reward for the efforts to build such a detector.
\section{Acknowledgments}
\label{sec:Acknowledgments}
This work is part of and supported by the DFG (German Research Foundation) -- Project-ID 273811115 -- SFB 1225 ISOQUANT. K.R. acknowledges the support by the Polish National Science Center (NCN) under the Opus grant no. 2018/31/B/ST2/01663, and the Polish Ministry of Science and Higher Education. V.V. is supported by a research grant (Grant No. 00025462) from VILLUM FONDEN, the Danish National Research Foundation (Danmarks Grundforskningsfond), and the Carlsberg Foundation (Carlsbergfondet).
\bibliographystyle{utphys}
\section{\label{sec1}Introduction}
\paragraph*{Introduction.-}Symmetries and symmetry breaking underlie many interesting phases and phenomena in condensed matter physics. A crystal with a periodic array of atoms/molecules is a simple example where continuous symmetry in space is spontaneously broken. Based on Lorentz invariance, which puts spatial and temporal coordinates on equal footing, Wilczek in 2012 proposed the idea of a time crystal\,\cite{Wilczek}, where time translation symmetry can also be spontaneously broken in the ground state of a quantum many body system -- local observables oscillate in time with fixed periodicity, analogous to the spatial modulation in crystalline solids. However, despite Lorentz invariance, space and time are not completely interchangeable, as evidenced by their different signs in the metric tensor. Moreover, by its very definition, the ground state or any equilibrium state of a closed quantum system does not vary with time, and Wilczek's original idea was shown to be unfeasible\,\cite{NoTimeCrystal1,NoTimeCrystal2,NoTimeCrystal3,OldTimeCrystal3,NoTimeCrystal5}. Nevertheless, the idea of time crystals has generated much interest over the past decade. More recent studies have established that time crystals can emerge under proper conditions. It is now widely accepted that time crystals can be realized in out-of-equilibrium systems\,\cite{OldTimeCrystal1,OldTimeCrystal2,OldTimeCrystal3,NewTimeCrystal1,NewTimeCrystal2,NewTimeCrystal3}, particularly in the presence of a periodic driving field. Consensus has also grown on a set of criteria that need to be satisfied by a state to be classified as a time crystal\,\cite{RefNote}, broadening the scope of this novel state of matter from its original definition. Discrete time crystalline behavior in a periodically driven system is characterized by local properties that oscillate in time with a period which is a multiple of that of the driving field\,\cite{DiscreteTimeCrystal1,DiscreteTimeCrystal2,DiscreteTimeCrystal3,PrethermalExperiment1,PrethermalExperiment2,PrethermalExperiment3,PrethermalExperiment4,PrethermalExperiment5,FractionalTimeCrystal1,FractionalTimeCrystal2,QuasiCrystal,QuasiCrystal1,QuasiCrystal2,QuasiCrystal4,QuasiCrystal5,QuasiCrystal6,DrivenDissipative1,DrivenDissipative2,DrivenDissipative3,DrivenDissipative4,DrivenDissipative5,Ultracold_Atom_Time_Crystal_Theory,TimeCrystal1,Magnon_Time_Crystal,TimeCrystal3,ArchimedeanScrew,TimeCrystal2}. In many cases, the driving field injects energy into the system that eventually leads to thermalization. The periodic behaviours before thermalization are known as pre-thermal time crystals\,\cite{PrethermalExperiment1,PrethermalExperiment2,PrethermalExperiment3,PrethermalExperiment4,PrethermalExperiment5}. Conversely, if the driving frequency is much larger than the local energy scales, or if the heat generated during thermalization can be dissipated, a driven dissipative time crystal can form because thermalization takes a long time\,\cite{DrivenDissipative1,DrivenDissipative2,DrivenDissipative3,DrivenDissipative4,DrivenDissipative5}. For Floquet many body localized (MBL) systems\,\cite{DiscreteTimeCrystal1,DiscreteTimeCrystal2,DiscreteTimeCrystal3}, where the absence of coupling between different energy eigenstates prevents thermalization of the states, a more robust long-lived time crystal can be realized.
\begin{figure}[tb]
\includegraphics[width=0.3\textwidth]{Schematic.png}
\caption{Schematic of the discrete time crystal formed by edge spins. The spins oscillate with a period twice that of the external EM field. Neighbouring sites oscillate in anti-phase, since the magnons are amplified at the $k=\pi$ point.}
\label{fig::Schematic}
\end{figure}
Time crystals have been theoretically studied and experimentally observed in a range of systems, including magnons\,\cite{TimeCrystal3,Magnon_Time_Crystal}, ultracold atoms\,\cite{Ultracold_Atom_Time_Crystal_Theory,TimeCrystal1}, superfluid quantum gases\,\cite{TimeCrystal1,TimeCrystal2} and qubits\,\cite{QubitTimeCrystal1,QubitTimeCrystal2,QubitTimeCrystal3}. Recently, the bulk magnon states of the magnetic insulator YIG were utilized to realize a space-time crystal by spontaneously breaking continuous time translational symmetry\,\cite{TimeCrystal3}. While the experiment using permalloy confirms the spontaneous breaking of the spatial translational symmetry of the coherent magnon state, the breaking of the discrete time translational symmetry has not been confirmed\,\cite{Magnon_Time_Crystal}. In this work, we show that a discrete time crystal can emerge in the $\pi$-Berry phase protected magnon edge state of a quantum magnet driven by a periodic field, in the absence of any time reversal symmetry breaking interactions. The topological protection of the edge state strongly reduces the decay into the bulk magnons. Although the system eventually thermalizes, this topological protection facilitates the stabilization of coherent magnons in the pre-thermal regime. In contrast to Floquet MBL, our proposal avoids the need for strong disorder, which stands in the way of experimental realization in larger systems\,\cite{NewTimeCrystal3}. The proposed emergent magnon time crystal can be understood as a pre-thermal time crystal of a driven-dissipative system that is further stabilised by the topological structure of the magnon band.
\begin{figure}[tb]
\includegraphics[width=0.5\textwidth]{TimeCrystalV5.png}
\caption{(a) Schematic of a kagome ferromagnetic system with the number of sites along the width (orange sites) being $N=13$; the system is periodic along the green arrow after a certain number of sites. (b) The magnon band structure is shown in black. The yellow dots denote the eigenstates with eigenvalues with positive imaginary part. The inset shows a magnified picture of the band near $k=\pi$, with the color code describing the value of the positive imaginary part. (c) The number of magnons at $k=\pi$ as a function of time for the upper (blue) and lower (red) edge states. (d) The oscillation of the spin component $S^x$ at a site on the upper edge (blue) and the lower edge (red). The inset shows a magnified view within a particular time window. The parameters used for all the plots are $J=1.0$, $\gamma=5\times 10^{-4}$, $\eta=9\times 10^{-4}$, $\Omega=5.1716$, $p_0=1.0$, $E_0^x=0.0$, $E_0^y=0.002$.}
\label{fig::TimeCrystal}
\end{figure}
\paragraph*{Discrete time crystal.-} We consider the ferromagnetic Heisenberg model $\pazocal{H}_0=-J\sum_{\left\langle ij\right\rangle} \hat{S}_i\cdot\hat{S}_j$ on a kagome lattice. The low energy magnon excitations above the ferromagnetic ground state are described by the linear spin-wave theory: $\hat{S}_i^{+}=\sqrt{2S}\hat{a}_i$, $\hat{S}_i^{-}=\sqrt{2S}\hat{a}^{\dagger}_i$, $\hat{S}_i^z=S-\hat{a}_i^\dagger \hat{a}_i$, where $S$ denotes the magnitude of the spin, and $\hat{a}_i^\dagger (\hat{a}_i)$ creates (annihilates) a magnon at site $i$.
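To make the spin-wave step concrete, the bulk part of the resulting quadratic Hamiltonian can be diagonalized in a few lines. The following minimal Python sketch (with illustrative values $J=1$, $S=1/2$ and bond vectors for a kagome net with lattice constant 1, i.e. nearest-neighbour distance 1/2) constructs the $3\times 3$ Bloch matrix; the ribbon spectrum of Fig.~\ref{fig::TimeCrystal}(b) follows from the same matrix elements with open boundary conditions along one direction:
\begin{verbatim}
import numpy as np

J, S = 1.0, 0.5                      # illustrative values
a1 = np.array([1.0, 0.0])            # lattice vectors
a2 = np.array([0.5, np.sqrt(3)/2])
# vectors connecting the three sublattice sites within one unit cell
d = {(0, 1): a1/2, (0, 2): a2/2, (1, 2): (a2 - a1)/2}

def bands(k):
    H = 4*J*S*np.eye(3)              # each kagome site has 4 neighbours
    for (i, j), dij in d.items():
        H[i, j] = H[j, i] = -2*J*S*np.cos(k @ dij)
    return np.linalg.eigvalsh(H)

print(bands(np.zeros(2)))            # [0, 3J, 3J] for S = 1/2
\end{verbatim}
At $k=0$ this reproduces the Goldstone mode at zero energy and the flat band at $6JS$ expected for the kagome ferromagnet.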
Application of the above transformation to $\pazocal{H}_0$ yields a tight-binding magnon Hamiltonian in which magnon-magnon interactions are neglected. The resulting band structure for a ribbon geometry (Fig.~\ref{fig::TimeCrystal}(a)) is shown in Fig.\,\ref{fig::TimeCrystal}(b). The bulk bands carry a non-trivial quantized $\mathbb{Z}_2$ topological invariant (Zak phase or $\pi$-Berry phase), and the ribbon spectrum contains nearly flat topological edge states between the projected Dirac points\,\cite{BerryPhase1,BerryPhase2}. Any time reversal symmetry breaking terms in the Hamiltonian, such as the Dzyaloshinskii-Moriya interaction (DMI), would open a gap in the magnon spectrum at the Dirac points\,\cite{EffectiveTimeReversalSymmetry} and impart dispersion to the edge states at $k=\pi$, destroying the discrete time crystalline behavior discussed later. As bosons not subject to the Pauli exclusion principle, magnons normally populate the bottom of the band, far from the edge states. However, recent studies have shown that edge state magnons can be controllably amplified at arbitrary energies by tailored EM waves\,\cite{Amplification}. The EM field couples to magnetic insulators via the polarization\,\cite{PolarizationOperator1,PolarizationOperator2,PolarizationOperator3} as
\begin{align}
H_c &=\cos(\Omega t)\boldsymbol{E}\cdot\sum_{\left\langle i,j\right\rangle} \boldsymbol{P}_{ij}
\label{eq::Coupling}
\end{align}
where $\boldsymbol{P}_{ij}$ is the polarization operator. The relevant terms in $\boldsymbol{P}_{ij}$ that contribute to magnon amplification consist of bilinear spin operators on the nearest neighbor bonds~\cite{Supplementary},
\begin{equation}
\mathbf{P}_{ij}\approx \boldsymbol{p}_{0,ij} \left(\mathbf{S}_i\cdot\mathbf{Q}_{ij}\right) \left(\mathbf{S}_j\cdot\mathbf{Q}_{ij}\right).
\label{eq::Polarization}
\end{equation}
Other polarization terms are not important for this study, as they drop out in the rotating wave approximation\,\cite{Amplification}. The equation of motion for the magnon field $\tilde{\alpha}_k$ is given by\,\cite{Supplementary},
{\small
\begin{equation}
\frac{d}{dt} \begin{pmatrix} \tilde{\alpha}_k^*\\ \tilde{\alpha}_{-k} \end{pmatrix} = \mathrm{i} \begin{pmatrix} \tilde{\epsilon}_k-\mathrm{i}\frac{\gamma\mathbb{I}+\eta\left|\alpha_k\right|^2}{2} & \frac{\left[\tilde{H}_c\right]_{12}}{2} \\ -\frac{\left[\tilde{H}_c\right]_{21}}{2} & -\tilde{\epsilon}_{-k}-\mathrm{i}\frac{\gamma\mathbb{I}+\eta\left|\alpha_k\right|^2}{2} \end{pmatrix} \begin{pmatrix} \tilde{\alpha}_k^*\\ \tilde{\alpha}_{-k} \end{pmatrix},
\label{eq::EOM}
\end{equation}
}
where $(\tilde{\alpha}^*_k \,\,\tilde{\alpha}_{-k})$ represent the magnon fields $\left\langle\hat{\tilde{a}}_{n,k}\right\rangle$; $\tilde{\epsilon}_{k}$ is a diagonal matrix with elements $\epsilon_{n,k}-\frac{\Omega}{2}$, where $\epsilon_{n,k}$ is the energy eigenvalue; $\gamma$ and $\eta$ are phenomenological linear and non-linear damping constants; $\mathbb{I}$ is the identity matrix; and $\left|\alpha_k\right|^2$ is the diagonal matrix with entries $\left|\left\langle\hat{\tilde{a}}_{n,k}\right\rangle\right|^2$. The square matrix on the right hand side of Eq.\,\ref{eq::EOM} is the dynamical matrix with complex eigenvalues (for $\eta=0$). The real and imaginary parts of the eigenvalues represent the energy and the decay (or growth) rate of the magnon, respectively. In the absence of EM coupling ({\footnotesize $\left[\tilde{H}_c\right]_{12}\approx O_{N\times N}$}), the imaginary parts of the eigenvalues are negative, indicating magnon decay.
However, as the amplitude of the EM field increases, the imaginary parts of some of the eigenvalues satisfying $\epsilon_{n,k}+\epsilon_{n,-k}\approx \Omega$ become positive. This indicates the onset of spontaneous amplification of magnons. The yellow dots in the band structure (Fig.\,\ref{fig::TimeCrystal}(b)) are the eigenvalues with positive imaginary part. The solution of Eq.\,\ref{eq::EOM} describes amplified coherent magnons above a threshold amplitude of the EM field\,\cite{CoherentState1,CoherentState2,TimeCrystal1,TimeCrystal2}. The presence of the non-linear damping suppresses the exponential increase of the magnon number and the system reaches a steady state (a minimal single-mode illustration of this saturation mechanism is sketched below). Fig.\,\ref{fig::TimeCrystal}(c) shows the amplified coherent magnon population of the edge states of the upper and lower edges at $k=\pi$ in the steady state. While the number of magnons ({\small $\left|\left\langle\hat{\tilde{a}}_{n,k}\right\rangle\right|^2$}) is identical in the rotating and the lab frames in the steady state (see Fig.\,\ref{fig::TimeCrystal}), the field $\left\langle\hat{\tilde{a}}_{n,k}\right\rangle$ oscillates in time. Specifically, when a pair of amplified magnons satisfies $\epsilon_{n,k}=\epsilon_{n,-k}=\Omega/2$, the steady-state expectation value of the field in the rotating frame is independent of time, i.e. $\left\langle\hat{\tilde{a}}_{n,k}(t)\right\rangle_{\text{rot}}^s=\left\langle\hat{\tilde{a}}_{n,k}\right\rangle_{\text{rot}}^s$. Thus the fields in the two frames are related as
\begin{equation}
\left\langle \hat{\tilde{a}}_{n,k}(t)\right\rangle_{\text{lab}}^{\text{s}} \approx \left\langle\hat{\tilde{a}}_{n,k}\right\rangle_{\text{rot}}^{\text{s}} \exp(\mathrm{i}\frac{\Omega}{2} t),
\label{eq::Oscillation2}
\end{equation}
where the superscript ``s" denotes the steady state expectation value. The equation of motion Eq.\,\ref{eq::EOM} has a $\mathbb{Z}_2$ symmetry $\hat{\tilde{a}}_{n,k}\rightarrow -\hat{\tilde{a}}_{n,k}$. Above a critical amplitude of the EM field, the amplified magnon field at the edges of the system spontaneously breaks the $\mathbb{Z}_2$ symmetry by acquiring a finite, non-zero {\small $\left\langle\hat{\tilde{a}}_{n,k}\right\rangle_{\text{lab}}^{\text{s}}$} that oscillates in time with a period twice that of the driving EM field. Thus a discrete time crystal of edge state magnons is formed via amplification\,\cite{TimeCrystal1,TimeCrystal2}, breaking the discrete time translational symmetry spontaneously. This time crystalline behavior can be experimentally observed by measuring the transverse magnetization at the edges, i.e., the spin components $\hat{S}^x_i$ and $\hat{S}^y_i$ -- the spin component $\hat{S}^z_i$ is constant, because it is related to the number of magnons {\small $\left|\left\langle\hat{\tilde{a}}_{i,k}\right\rangle\right|^2$}, which is invariant in time in the steady state.
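As a minimal single-mode caricature of Eq.\,\ref{eq::EOM} at resonance ($\tilde{\epsilon}_k = 0$), consider the following Python sketch, written with an assumed effective drive amplitude $h$ (standing in for the matrix element $[\tilde{H}_c]_{12}/2$) and with signs fixed such that $\gamma$ damps the mode. Above threshold, $h>\gamma$, the phase locks and the magnon number saturates at $(h-\gamma)/\eta$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

gamma, eta = 5e-4, 9e-4   # damping constants of Fig. 2
h = 2e-3                  # effective drive amplitude (assumed value)

def rhs(t, y):
    a = y[0] + 1j*y[1]
    # da/dt = -i(h/2) a* - (gamma + eta |a|^2) a / 2
    da = -0.5j*h*np.conj(a) - 0.5*(gamma + eta*abs(a)**2)*a
    return [da.real, da.imag]

# a small seed mimics the vacuum fluctuations that trigger amplification
sol = solve_ivp(rhs, (0.0, 4.0e4), [1e-4, 0.0], rtol=1e-8, atol=1e-10)
n_steady = sol.y[0, -1]**2 + sol.y[1, -1]**2
print(n_steady, (h - gamma)/eta)  # both ~1.67: saturated magnon number
\end{verbatim}
Runs with different seeds saturate at the same magnon number but lock to one of two phases differing by $\pi$, in line with the spontaneous $\mathbb{Z}_2$ symmetry breaking discussed above.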
The x-component of the spin, $\left\langle \hat{S}^x_i\right\rangle$, is given in terms of the fields $\left\langle \hat{\tilde{a}}_{n,k}\right\rangle_{\text{rot}}^{\text{s}}$ as
\begin{widetext}
\begin{equation}
\left\langle\hat{S}_i^x\right\rangle = \sqrt{\frac{S}{2N_x}} \left[ \sum_{k>0,n} \left[U_1^\dagger\right]_{in} \left\langle \hat{\tilde{a}}_{n,k}\right\rangle_{\text{rot}}^{\text{s}} e^{\frac{i\Omega t}{2}} e^{-ikx_i} + \sum_{k>0,n} \left[U_2^\dagger\right]_{in} \left\langle \hat{\tilde{a}}_{n,-k}\right\rangle_{\text{rot}}^{\text{s}} e^{\frac{i\Omega t}{2}} e^{ikx_i} + \sum_{n} \left[U_1^\dagger\right]_{in} \left\langle \hat{\tilde{a}}_{n,0}\right\rangle_{\text{rot}}^{\text{s}} e^{\frac{i\Omega t}{2}} + \text{\small{H.C.}} \right]
\label{eq::Sx}
\end{equation}
\end{widetext}
Fig.\,\ref{fig::TimeCrystal}(d) demonstrates the oscillation of $S^x_i$ at a site on the upper edge (blue) and the lower edge (red). Different $k$-points arrive at the steady state at different times\,\cite{Supplementary}, and so $S^x_i$ in Fig.\,\ref{fig::TimeCrystal}(d) modulates transiently and reaches a steady state when all the amplified $k$-points do. The oscillation amplitudes of $\left\langle \hat{S}_i^x\right\rangle$ at the two edges are nearly identical, but they are not exactly equal, since each is a superposition of several fields $\left\langle \hat{\tilde{a}}_{n,k}\right\rangle_{\text{rot}}^{\text{s}}$\,(see Eq.\,\ref{eq::Sx}); while the magnitudes of the fields $\left\langle \hat{\tilde{a}}_{n,k}\right\rangle_{\text{rot}}^{\text{s}}$ at the two edges are the same, the phases are not. Moreover, the amplitude of oscillation varies between different simulation runs due to the random initial conditions representing the vacuum fluctuations\,\cite{CoherentState1,CoherentState2}. Finally, the time crystalline behavior is accompanied by long-range order in the spatial directions, due to the coherence of the pumped magnons at $k=\pi$. Since the amplification of magnons extends over a finite momentum range around $k=\pi$, a spatial modulation in the amplitude of oscillation is expected.
\begin{widetext}
\begin{figure}[tb]
\includegraphics[width=\textwidth]{StabilityV4.png}
\caption{The two-magnon density of states (blue and yellow) and the band structure (black) at (a) zero magnetic field and (b) $B_z=JS$. (c) The two-magnon scattering rate as a function of the width of the energy levels for different system widths. For simplicity, we restricted the calculation to the scattering of upper-edge-state magnons and considered only the scattering between pairs of points with $k = \pm\pi$ and $k = \pm\pi \pm\delta k$.}
\label{fig::Stability}
\end{figure}
\end{widetext}
\paragraph*{Stability of the time crystal.-} Discrete time crystals of the amplified edge magnons are stable unless bulk eigenmodes with a significant overlap with the edge modes are amplified. The choice of the kagome ferromagnet is important as it allows edge magnons to be selectively amplified without the amplification of bulk magnons~\cite{Amplification}. However, magnon scattering can excite other magnon eigenmodes and may not conserve the number of magnons. The Hamiltonian $\pazocal{H}_0$ does not contain any magnon non-conserving terms, but such terms may arise in the presence of spin anisotropy in many real quantum magnets.
We have calculated the bulk band structure and momentum resolved two-magnon density of states, \begin{equation} D_\mathbf{k}(\omega)=\frac{1}{N} \sum_{n,n',\boldsymbol{\Delta}} \delta(\omega-\omega_{n,\boldsymbol{\Delta}}-\omega_{n',\mathbf{k}-\boldsymbol{\Delta}}) \end{equation} for a system with toroidal boundary condition\,(\,\ref{fig::Stability}(a), (b)). When the energy of an eigenstate matches that of the two magnon continuum, it can decay into two magnons with lower energies. The two magnon continuum energy scales as twice the negative of Zeeman term ($2B_zS$) in a longitudinal magnetic field $B_z$, while that of the magnon bands scale as $B_zS$. Hence for magnetic fields $B_z S>E_{\text{edge}}$ the edge states are energetically separated from the 2-magnon continuum and cannot decay via this channel. At even higher fields, $B_z>6J$ the two-magnon continuum gets separated in energy from the magnon band structure, implying that two edge magnons cannot combine and produce a higher energy magnon. As the magnetic field increases, the higher order magnon non-conserving processes will disappear faster than the two magnon decay processes. Thus an external magnetic field can suppress the magnon non-conserving scattering in the system. Magnon {\it conserving} scattering processes can not be eliminated by an external field, and are always present in any spin Hamiltonian. We calculated the scattering rate of magnons due to two-magnon scattering\,(quartic terms of magnon Hamiltonian) using Fermi golden rule, \begin{equation} s_2=\frac{2\pi}{\hbar} \sum_f\sum_i\left|\squeezeB{f}{\pazocal{H}_{0,\text{int}}^{(4)}}{i}\right|^2 \delta(E_f-E_i), \label{eq::FermiGoldenRule} \end{equation} where $\pazocal{H}_{0,\text{int}}^{(4)}$ is the quartic term of the magnon Hamiltonian\,\cite{Supplementary},and $|f\rangle$ ($|i\rangle$) denotes the final (initial) state. For simplicity, we have restricted our calculation to the upper-edge states and considered only the scattering from the points $k=\pm \pi$ and $k=\pm\pi\pm\delta k$ (where $\delta k=0.0628$). To numerically calculate the scattering rate $s_2$, we have considered a finite width of energy levels $\delta E$, which physically implies band broadening. The scattering rate as a function of band broadening is plotted in a logarithmic scale in Fig.\,\ref{fig::Stability}(c) for different system sizes. It is observed that the scattering rate decreases rapidly as the band broadening decreases. The band broadening results from magnon interactions and the only way to control the bandwidth is by controlling the density of the amplified magnons. This is achieved by working at low temperature and low EM field intensity. Even if the scattering rate is high, as long as the life-time of magnons at the scattered state is small the behavior of the system is governed by only the selectively amplified edge state magnons\,\cite{TimeCrystal3}. Thus a small density of amplified edge state magnons would result in a stable time crystal. These conditions also help minimise magnon decay due to magnon-phonon scattering. Finally, edge imperfections cause broadening of the edge states, and reduce the yield of coherent magnons.~\cite{Tanaka2020,Pawlak2020}. The reduced scattering is a consequence of the topological protection of the edge state. The presence of chiral symmetry induces a flat edge mode at zero energy due to finite $\pi$-Berry phase. 
Absence of chiral symmetry can result in a dispersive edge state at non-zero energy in the presence of a quantized non-zero $\pi$-Berry phase\,\cite{BerryPhase1,BerryPhase2}. Interestingly, the $\pi$-Berry phase protected topological edge states are robust against impurities with or without chiral symmetry\,\cite{BerryPhase2}, providing additional stability to the time crystalline behavior. However, the presence of the effective time reversal symmetry breaking DMI\,\cite{EffectiveTimeReversalSymmetry}, which is ubiquitous in many real quantum magnets, results in dispersive edge states. As a result, the condition, $\epsilon_{n,k} = \epsilon_{n,-k}$ for a pair of amplified magnons, as assumed above, is broken. Then, $\left\langle \hat{\tilde{a}}_{n,k}\right\rangle_{\text{rot}}^s$ becomes time dependent and according to Eq.\,\ref{eq::Oscillation2}, the period of oscillation of the fields at a particular $k$-point will no longer be exactly twice the period of the external EM field. For a finite system, there will be a finite number of amplified points around $k=\pi$; adding over the fields at a few amplified points according to Eq.\,\ref{eq::Sx} would result in an oscillation of $S^x$ that is incommensurate with the external field, resulting in a quasi-time crystal\,\cite{QuasiCrystal}. In the thermodynamic limit, when the number of $k$-points in the vicinity of $k=\pi$ diverges, the oscillation would become chaotic in nature, destroying the time crystal-like behaviour. \paragraph*{Experimental realisation.-} The edge-magnon time-crystals can be observed using direct spatial and temporal imaging of spin-wave dynamics via multiple recently developed techniques, such as, Kerr microscopy\,\cite{Kerr1,Kerr2}, Brillouin light scattering spectroscopy (BLS)\,\cite{BLS_imaging1,BLS_imaging2,BLS_imaging3} and time resolved scanning transmission x-ray microscopy (TR-STXM)\,\cite{STXMm1,STXM0,STXM10,STXM1,STXM2,STXM3,STXM4,STXM5,STXM6,STXM8,STXM9,Magnon_Time_Crystal}. BLS is useful to detect magnons at a fixed frequency and wave vector\,\cite{BLS1,BLS2,BLS3,BLS4,BLS5,BLS6} and has recently been used to detect the space-time crystal in the ferromagnetic insulator \ce{YIG}\,\cite{TimeCrystal3}. Additionally, theoretically proposed spin Hall noise spectroscopy is a promising technique to detect the presence of edge magnons at a given frequency\,\cite{SpinHallNoise1,SpinHallNoise2}. The TR-STXM, in particular, is promising for directly imaging spin dynamics at the edge due to its high accuracy in detecting magnon dynamics with a spatial and temporal resolution of 20 nm and 50 ps respectively\,\cite{STXMm1,STXM0,STXM10,Magnon_Time_Crystal}. Recently, this method has been used to observe the dynamics of space-time crystal of bulk magnons in permalloy strips\,\cite{Magnon_Time_Crystal}. We propose the spin-$\frac{1}{2}$ kagome ferromagnets haydeeite\,\cite{Material1} and \ce{Cu(1,3-bdc)}\,\cite{Material2_V2} as possible hosts of the discregte time crystals of edge magnons as discussed in this work. While the Haydeeite has experimental evidence for the absence of DMI\,\cite{Material1}, the Cu(1,3-bdc) contains out-of-plane DMI that does not break any effective time reversal symmetry\,\cite{EffectiveTimeReversalSymmetry} for the ferromagnetic ground state with in-plane magnetization\,\cite{Material2}. 
The period of oscillations for the materials haydeeite and \ce{Cu(1,3-bdc)} are calculated to be $0.05$ps and $0.25$ps respectively, which are estimated using experimentally determined Heisenberg exchange interactions\,\cite{Material1,Material2_V2}. The period of oscillation can be tuned via external magnetic field which controls the energy of edge magnons. Thus these quantum magnets are perfect candidates for realizing discrete time crystal of edge magnons. {\it Acknowledgements} B.Y. would like to acknowledge the support from the Singapore National Research Foundation (NRF) under NRF fellowship award NRF-NRFF12-2020-0005, and a Nanyang Technological University start-up grant (NTU-SUG). P.S. acknowledges financial support from the Ministry of Education, Singapore through MOE2019-T2-2-119. \begin{widetext} \section{Supplementary Material} \subsection{ Polarization Operator} In this section, the mathematical details of the polarization term is discussed. The polarization operator is given as, \begin{equation} \mathbf{P}_{ij}\approx \boldsymbol{p}_{0,ij} \left(\mathbf{S}_i\cdot\mathbf{Q}_{ij}\right) \left(\mathbf{S}_j\cdot\mathbf{Q}_{ij}\right), \label{eq::Polarization} \end{equation} which is derived from the following Hubbard model of electrons on kagome lattice, \begin{equation} \pazocal{H}_{\text{Hubbard}} = -\sum_{ij} \left[ \begin{pmatrix} \hat{c}_{i\uparrow}^\dagger & \hat{c}_{i\downarrow}^\dagger \end{pmatrix} \left(t\mathbb{I}\cos(\theta)+it\boldsymbol{n}\cdot\boldsymbol{\sigma}\sin(\theta)\right) \begin{pmatrix} \hat{c}_{j\uparrow} \\ \hat{c}_{j\downarrow} \end{pmatrix} + \text{H.c.} \right] + U\sum_i \hat{n}_{i\uparrow}\hat{n}_{j\downarrow} \end{equation} where $t\cos(\theta)$ and $it\boldsymbol{n}\sin(\theta)$ ($\boldsymbol{n}$ is an unit vector) are the real and complex hopping amplitude of electrons on nearest neighbour bonds respectively. $U$ is the onsite Coulomb repulsion. The $\bold{p}_{0,ij}$ and $\boldsymbol{Q}_{ij}$ are given by, \begin{align*} \boldsymbol{p}_{0,ij} &=-16\theta^2 e a \frac{t^3}{U^3} (\boldsymbol{e}_{jk}-\boldsymbol{e}_{ki}) =p_0 (\boldsymbol{e}_{jk}-\boldsymbol{e}_{ki}) , \\ \boldsymbol{Q}_{ij}&=\boldsymbol{n}-n^z\hat{z} \end{align*} where $e$ and $a$ are the electron charge and lattice constant respectively. $\boldsymbol{e}_{jk}$ is a vector on nearest neighbour bonds from site-$j$ to site-$k$. The sites $i$, $j$ and $k$ are the sites on the same triangle of the kagome lattice. The polarization terms other than the terms in Eq.\,\ref{eq::Polarization} are not important for this study because in the diagonal basis in the rotating frame those terms are time dependent and so those terms are dropped in rotating wave approximation in Eq.\,\ref{eq::EOM}. \subsection{Derivation of equation of motion} The Hamiltonian describing a kagome ferromagnet on a cylindrical geometry, coupled to an external EM field is given by \begin{align} \pazocal{H}=&\frac{1}{2}\sum_{k} \begin{pmatrix} \Psi_{k}^\dagger & \Psi_{-k} \end{pmatrix} \begin{pmatrix} H_0(k) & O_{N\times N} \\ O_{N\times N} & H_0(-k)^T \end{pmatrix} \begin{pmatrix} \Psi_{k} \\ \Psi_{-k}^\dagger \end{pmatrix} \nonumber\\ &+ \frac{1}{2}\cos(\Omega t)\sum_{k} \begin{pmatrix} \Psi_{k}^\dagger & \Psi_{-k} \end{pmatrix} \begin{pmatrix} [H_c]_{11} & [H_c]_{12} \\ [H_c]_{21} & [H_c]_{22} \end{pmatrix} \begin{pmatrix} \Psi_{k} \\ \Psi_{-k}^\dagger \end{pmatrix}. 
\end{align} $N$ is the number of sites along the width of the ribbon\,(see Fig.\,2(a) in main text), and $\Psi_{k}=\left(\hat{a}_{1,k},\,\hat{a}_{2,k},\,...,\,\hat{a}_{N,k}\right)^T$ $O_{N\times N}$ is a null matrix. The first and second matrices are derived from the unperturbed Hamiltonian Eq.\,1 and coupling Hamiltonian Eq.\,2 in the main text respectively. The Hamiltonian $\pazocal{H}$ is first represented in the diagonal basis $\tilde{\Psi}_k=U_1(k)\Psi_k$, $\tilde{\Psi}_{-k}^\dagger=U_2(k)\Psi_{-k}^\dagger$ of matrices $H_0(k)$, $H_0(-k)^T$. This is followed by transforming the system from the lab-frame to rotating-frame by using the unitary operator $U(t)=\exp{\frac{i\omega t}{2}\sum_k \tilde{\Psi}_k^\dagger \tilde{\Psi}_k}$, By neglecting the time-dependent terms, we get the following effective Hamiltonian: \begin{equation} \pazocal{H}_{\text{eff}}=\frac{1}{2} \sum_{k} \begin{pmatrix} \tilde{\Psi}_k^\dagger & \tilde{\Psi}_{-k} \end{pmatrix} \begin{pmatrix} \epsilon_k-\frac{\Omega}{2} & \frac{[\tilde{H}_{c}]_{12}}{2} \\ \frac{[\tilde{H}_{c}]_{21}}{2} & \epsilon_{-k}-\frac{\Omega}{2} \end{pmatrix} \begin{pmatrix} \tilde{\Psi}_k \\ \tilde{\Psi}_{-k}^\dagger \end{pmatrix}, \label{eq::EffectiveHamiltonian} \end{equation} where $\epsilon_{k}$ and $\epsilon_{-k}$ are the diagonal matrices of eigenvalues of matrices $H_0(k)$ and $H_0(-k)^T$ respectively. The matrix $\tilde{H}_{12}=U_1 H_{12} U_2^\dagger$ is the coupling matrix in the diagonal basis. Only the off-diagonal terms $[\tilde{H}]_{12}$ and $[\tilde{H}]_{21}$ of the coupling Hamiltonian appear in $\pazocal{H}_{\text{eff}}$ due to the rotating wave approximation. The equation of motion of field $\left\langle \tilde{\Psi}_{k}\right\rangle=\tilde{\alpha}_k=\left(\left\langle\hat{\tilde{a}}_{1,k}\right\rangle,\,\left\langle\hat{\tilde{a}}_{2,k}\right\rangle,\,...,\,\left\langle\hat{\tilde{a}}_{N,k}\right\rangle\right)^T$ is given by, {\small \begin{equation} \frac{d}{dt} \begin{pmatrix} \tilde{\alpha}_k^*\\ \tilde{\alpha}_{-k} \end{pmatrix} = \mathrm{i} \begin{pmatrix} \tilde{\epsilon}_k-\mathrm{i}\frac{\gamma\mathbb{I}+\eta\left|\alpha_k\right|^2}{2} & \frac{\left[\tilde{H}_c\right]_{12}}{2} \\ -\frac{\left[\tilde{H}_c\right]_{21}}{2} & -\tilde{\epsilon}_{-k}-\mathrm{i}\frac{\gamma\mathbb{I}+\eta\left|\alpha_k\right|^2}{2} \end{pmatrix} \begin{pmatrix} \tilde{\alpha}_k^*\\ \tilde{\alpha}_{-k} \end{pmatrix}, \label{eq::EOM} \end{equation} }where $\hat{\tilde{a}}_{n,k}$ is the magnon annihilation operator of $n$-th band at $k$-point; $\tilde{\epsilon}_{k}$ is a diagonal matrix with elements $\epsilon_{n,k}-\frac{\Omega}{2}$, where $\epsilon_{n,k}$ is the energy eigenvalue; $\gamma$ and $\eta$ are phenomenological linear and non-linear damping constants; $\mathbb{I}$ is identity matrix and $\left|\alpha_k\right|^2$ is diagonal matrix with entries $\left|\left\langle\hat{\tilde{a}}_{n,k}\right\rangle\right|^2$. \section{Quartic Interaction Term} The four body interaction term that is used for the calculation of two magnon scattering rate is derived from the spin Hamiltonian using higher order terms of Taylor-series expansion of square root in Holstein Primakoff transformation. 
\section{\label{sec1}Introduction} \paragraph*{Introduction.-}Symmetries and symmetry breaking underlie many interesting phases and phenomena in condensed matter physics. A crystal with a periodic array of atoms/molecules is a simple example where continuous symmetry in space is spontaneously broken. Based on Lorentz invariance, which puts spatial and temporal coordinates on equal footing, Wilczek in 2012 proposed the idea of a time crystal\,\cite{Wilczek}, where time translation symmetry can also be spontaneously broken in the ground state of a quantum many-body system -- local observables oscillate in time with fixed periodicity, analogous to the spatial modulation in crystalline solids. However, despite Lorentz invariance, space and time are not completely interchangeable, as evidenced by their different signs in the metric tensor. Moreover, by its very definition, the ground state or any equilibrium state of a closed quantum system does not vary with time, and Wilczek's original idea was shown to be unfeasible\,\cite{NoTimeCrystal1,NoTimeCrystal2,NoTimeCrystal3,OldTimeCrystal3,NoTimeCrystal5}. Nevertheless, the idea of time crystals has generated much interest over the past decade. More recent studies have established that time crystals can emerge under proper conditions. It is now widely accepted that time crystals can be realized in out-of-equilibrium systems\,\cite{OldTimeCrystal1,OldTimeCrystal2,OldTimeCrystal3,NewTimeCrystal1,NewTimeCrystal2,NewTimeCrystal3}, particularly in the presence of a periodic driving field. Consensus has also grown on a set of criteria that need to be satisfied by a state to be classified as a time crystal\,\cite{RefNote}, broadening the scope of this novel state of matter from its original definition. Discrete time crystalline behavior in a periodically driven system is characterized by local properties that oscillate in time with a period that is a multiple of that of the driving field\,\cite{DiscreteTimeCrystal1,DiscreteTimeCrystal2,DiscreteTimeCrystal3,PrethermalExperiment1,PrethermalExperiment2,PrethermalExperiment3,PrethermalExperiment4,PrethermalExperiment5,FractionalTimeCrystal1,FractionalTimeCrystal2,QuasiCrystal,QuasiCrystal1,QuasiCrystal2,QuasiCrystal4,QuasiCrystal5,QuasiCrystal6,DrivenDissipative1,DrivenDissipative2,DrivenDissipative3,DrivenDissipative4,DrivenDissipative5,Ultracold_Atom_Time_Crystal_Theory,TimeCrystal1,Magnon_Time_Crystal,TimeCrystal3,ArchimedeanScrew,TimeCrystal2}. In many cases, the driving field injects energy into the system that eventually leads to thermalization. The periodic behaviours before thermalization are known as pre-thermal time crystals\,\cite{PrethermalExperiment1,PrethermalExperiment2,PrethermalExperiment3,PrethermalExperiment4,PrethermalExperiment5}.
Conversely, if the driving frequency is much larger than the local energy scales, or if the heat generated during thermalization can be dissipated, a driven-dissipative time crystal can form because thermalization takes a long time\,\cite{DrivenDissipative1,DrivenDissipative2,DrivenDissipative3,DrivenDissipative4,DrivenDissipative5}. For Floquet many-body localized (MBL) systems\,\cite{DiscreteTimeCrystal1,DiscreteTimeCrystal2,DiscreteTimeCrystal3}, where the absence of coupling between different energy eigenstates prevents thermalization of the states, a more robust, long-lived time crystal can be realized. \begin{figure}[tb] \includegraphics[width=0.3\textwidth]{Schematic.png} \caption{Schematic of the discrete time crystal made of edge spins. The spins oscillate with a period twice that of the external EM field. Neighbouring sites oscillate with opposite phase because the magnons are amplified at the $k=\pi$ point.} \label{fig::Schematic} \end{figure} Time crystals have been theoretically studied and experimentally observed in a range of systems, including magnons\,\cite{TimeCrystal3,Magnon_Time_Crystal}, ultracold atoms\,\cite{Ultracold_Atom_Time_Crystal_Theory,TimeCrystal1}, superfluid quantum gases\,\cite{TimeCrystal1,TimeCrystal2} and qubits\,\cite{QubitTimeCrystal1,QubitTimeCrystal2,QubitTimeCrystal3}. Recently, the bulk magnon states of the magnetic insulator YIG were utilized to realize a space-time crystal by spontaneously breaking continuous time translational symmetry\,\cite{TimeCrystal3}. While an experiment using permalloy confirms the spontaneous breaking of the spatial translational symmetry of the coherent magnon state, the breaking of the discrete time translational symmetry has not been confirmed\,\cite{Magnon_Time_Crystal}. In this work, we show that a discrete time crystal can emerge in the $\pi$-Berry phase protected magnon edge state of a quantum magnet driven by a periodic field, in the absence of any time reversal symmetry breaking interactions. The topological protection of the edge state strongly reduces the decay into bulk magnons. Although the system eventually thermalizes, this topological protection facilitates the stabilization of coherent magnons in the pre-thermal regime. In contrast to Floquet MBL, our proposal avoids the need for strong disorder, which stands in the way of experimental realization in larger systems\,\cite{NewTimeCrystal3}. The proposed emergent magnon time crystal can be understood as a pre-thermal time crystal in a driven-dissipative system that is further stabilised by the topological structure of the magnon band. \begin{figure}[tb] \includegraphics[width=0.5\textwidth]{TimeCrystalV5.png} \caption{(a) Schematic of a kagome ferromagnetic ribbon with $N=13$ sites along its width (orange sites); the system is periodic along the green arrow. (b) The magnon band structure (black). The yellow dots denote eigenstates whose eigenvalues have a positive imaginary part. The inset magnifies the band near $k=\pi$; the color encodes the magnitude of the positive imaginary part. (c) The number of magnons at $k=\pi$ as a function of time for the upper (blue) and lower (red) edge states. (d) The oscillation of the spin component $S^x$ at a site on the upper edge (blue) and the lower edge (red). The inset magnifies a short time window.
The parameters used for all the plots are $J=1.0$, $\gamma=5\times 10^{-4}$, $\eta=9\times 10^{-4}$, $\Omega=5.1716$, $p_0=1.0$, $E_0^x=0.0$, $E_0^y=0.002$.} \label{fig::TimeCrystal} \end{figure} \paragraph*{Discrete time crystal.-} We consider the ferromagnetic Heisenberg model $\pazocal{H}_0=-J\sum_{\left\langle ij\right\rangle} \hat{S}_i\cdot\hat{S}_j$ on a kagome lattice. The low-energy magnon excitations above the ferromagnetic ground state are described by linear spin-wave theory: $\hat{S}_i^{+}=\sqrt{2S}\hat{a}_i$, $\hat{S}_i^{-}=\sqrt{2S}\hat{a}^{\dagger}_i$, $\hat{S}_i^z=S-\hat{a}_i^\dagger \hat{a}_i$, where $S$ denotes the magnitude of the spin, and $\hat{a}_i^\dagger$ ($\hat{a}_i$) creates (annihilates) a magnon at site $i$. Applying this transformation to $\pazocal{H}_0$ yields a tight-binding magnon Hamiltonian in which interactions are neglected. The resulting band structure for a ribbon geometry (Fig.\,\ref{fig::TimeCrystal}(a)) is shown in Fig.\,\ref{fig::TimeCrystal}(b). The bulk bands carry a non-trivial quantized $\mathbb{Z}_2$ topological invariant (the Zak phase, or $\pi$-Berry phase), and host nearly flat topological edge states between the projected Dirac points\,\cite{BerryPhase1,BerryPhase2}. Any time reversal symmetry breaking term in the Hamiltonian, such as the Dzyaloshinskii-Moriya interaction (DMI), would open up a gap in the magnon spectrum at the Dirac points\,\cite{EffectiveTimeReversalSymmetry} and impart dispersion to the edge states at $k=\pi$, destroying the discrete time crystalline behavior, as discussed later. As bosons not subject to the Pauli exclusion principle, magnons normally populate the bottom of the band, far from the edge states. However, recent studies have shown that edge state magnons can be controllably amplified at arbitrary energies by tailored EM waves\,\cite{Amplification}. The EM field couples to magnetic insulators via the polarization\,\cite{PolarizationOperator1,PolarizationOperator2,PolarizationOperator3} as \begin{align} H_c &=\cos(\Omega t)\boldsymbol{E}\cdot\sum_{\left\langle i,j\right\rangle} \boldsymbol{P}_{ij} \label{eq::Coupling} \end{align} where $\boldsymbol{P}_{ij}$ is the polarization operator. The relevant terms in $\boldsymbol{P}_{ij}$ that contribute to magnon amplification consist of bilinear spin operators on nearest-neighbor bonds~\cite{Supplementary}, \begin{equation} \mathbf{P}_{ij}\approx \boldsymbol{p}_{0,ij} \left(\mathbf{S}_i\cdot\mathbf{Q}_{ij}\right) \left(\mathbf{S}_j\cdot\mathbf{Q}_{ij}\right). \label{eq::Polarization} \end{equation} Other polarization terms are not important for this study, as they are neglected in the rotating wave approximation\,\cite{Amplification}.
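The quantization of the Zak ($\pi$-Berry) phase invoked above can be checked numerically with the discretized Berry-phase (Wilson-loop) formula $\gamma_{\text{Zak}}=-\operatorname{Im}\ln\prod_{k}\left\langle u_{k}|u_{k+\delta k}\right\rangle$. As a minimal illustration we use the SSH chain -- the simplest model with a $\pi$-quantized Zak phase -- as a stand-in; this sketch is not the kagome ribbon Hamiltonian itself, and the hopping values are placeholders.
\begin{verbatim}
import numpy as np

def ssh_bloch(k, v=0.5, w=1.0):
    """Two-band SSH Bloch Hamiltonian; intra-/inter-cell hoppings v, w."""
    h = v + w * np.exp(-1j * k)
    return np.array([[0.0, h], [np.conj(h), 0.0]])

def zak_phase(bloch, band=0, nk=400):
    """Discretized Berry phase of one band around the Brillouin zone."""
    ks = 2 * np.pi * np.arange(nk) / nk
    states = [np.linalg.eigh(bloch(k))[1][:, band] for k in ks]
    prod = 1.0 + 0.0j
    for i in range(nk):                  # Wilson loop of overlaps
        prod *= np.vdot(states[i], states[(i + 1) % nk])
    return -np.angle(prod)

print(zak_phase(ssh_bloch))                             # ~ +/- pi (topological)
print(zak_phase(lambda k: ssh_bloch(k, v=1.5, w=1.0)))  # ~ 0 (trivial)
\end{verbatim}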
The equation of motion for the magnon field $\tilde{\alpha}_k$ is given by\,\cite{Supplementary}, {\small \begin{equation} \frac{d}{dt} \begin{pmatrix} \tilde{\alpha}_k^*\\ \tilde{\alpha}_{-k} \end{pmatrix} = \mathrm{i} \begin{pmatrix} \tilde{\epsilon}_k-\mathrm{i}\frac{\gamma\mathbb{I}+\eta\left|\alpha_k\right|^2}{2} & \frac{\left[\tilde{H}_c\right]_{12}}{2} \\ -\frac{\left[\tilde{H}_c\right]_{21}}{2} & -\tilde{\epsilon}_{-k}-\mathrm{i}\frac{\gamma\mathbb{I}+\eta\left|\alpha_k\right|^2}{2} \end{pmatrix} \begin{pmatrix} \tilde{\alpha}_k^*\\ \tilde{\alpha}_{-k} \end{pmatrix}, \label{eq::EOM} \end{equation} } where $(\tilde{\alpha}^*_k \,\,\tilde{\alpha}_{-k})$ represents the magnon fields $\left\langle\hat{\tilde{a}}_{n,k}\right\rangle$; $\tilde{\epsilon}_{k}$ is a diagonal matrix with elements $\epsilon_{n,k}-\frac{\Omega}{2}$, where $\epsilon_{n,k}$ is the energy eigenvalue; $\gamma$ and $\eta$ are phenomenological linear and non-linear damping constants; $\mathbb{I}$ is the identity matrix and $\left|\alpha_k\right|^2$ is the diagonal matrix with entries $\left|\left\langle\hat{\tilde{a}}_{n,k}\right\rangle\right|^2$. The square matrix on the right-hand side of Eq.\,\ref{eq::EOM} is the dynamical matrix with complex eigenvalues (for $\eta=0$). The real and imaginary parts of the eigenvalues represent the energy and the damping or growth rate of the magnon, respectively. In the absence of EM coupling ({\footnotesize $\left[\tilde{H}_c\right]_{12}\approx O_{N\times N}$}), the imaginary part of every eigenvalue is negative, indicating magnon decay. However, as the amplitude of the EM field increases, the imaginary part of some of the eigenvalues satisfying $\epsilon_{n,k}+\epsilon_{n,-k}\approx \Omega$ becomes positive. This indicates the onset of spontaneous amplification of magnons. The yellow dots in the band structure (Fig.\,\ref{fig::TimeCrystal}(b)) mark the eigenvalues with a positive imaginary part. The solution of Eq.\,\ref{eq::EOM} describes amplified coherent magnons above a cutoff amplitude of the EM field\,\cite{CoherentState1,CoherentState2,TimeCrystal1,TimeCrystal2}. The presence of the non-linear damping suppresses the exponential increase of the magnon number, and the system reaches a steady state. Fig.\,\ref{fig::TimeCrystal}(c) shows the population of the amplified coherent magnons for the upper- and lower-edge states at $k=\pi$ in the steady state. While the number of magnons ({\small $\left|\left\langle\hat{\tilde{a}}_{n,k}\right\rangle\right|^2$}) is identical in the rotating and the lab frames in the steady state (see Fig.\,\ref{fig::TimeCrystal}), the field $\left\langle\hat{\tilde{a}}_{n,k}\right\rangle$ oscillates in time. Specifically, when a pair of amplified magnons satisfies $\epsilon_{n,k}=\epsilon_{n,-k}=\Omega/2$, the steady-state expectation value of the field in the rotating frame is independent of time, i.e., $\left\langle\hat{\tilde{a}}_{n,k}(t)\right\rangle_{\text{rot}}^s=\left\langle\hat{\tilde{a}}_{n,k}\right\rangle_{\text{rot}}^s$. Thus the fields in the two frames are related as \begin{equation} \left\langle \hat{\tilde{a}}_{n,k}(t)\right\rangle_{\text{lab}}^{\text{s}} \approx \left\langle\hat{\tilde{a}}_{n,k}\right\rangle_{\text{rot}}^{\text{s}} \exp(\mathrm{i}\frac{\Omega}{2} t), \label{eq::Oscillation2} \end{equation} where the superscript ``s" denotes a steady-state expectation value. The equation of motion Eq.\,\ref{eq::EOM} has a $\mathbb{Z}_2$ symmetry $\hat{\tilde{a}}_{n,k}\rightarrow -\hat{\tilde{a}}_{n,k}$.
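The onset of amplification and its saturation by the non-linear damping can be made concrete for a single resonant $(n,k)$, $(n,-k)$ pair. The sketch below (Python/NumPy) first diagonalizes the $2\times 2$ dynamical matrix of Eq.\,\ref{eq::EOM} with $\eta=0$, where, in the convention above, the largest imaginary part turns positive once the pump strength $g$ -- standing in for the relevant matrix element of $[\tilde{H}_c]_{12}$ -- exceeds $\gamma$ on resonance; it then integrates a minimal non-linear version of the same pair, written (as a simplifying assumption) in a gauge where the gain is real, showing saturation at $|\alpha|^2=(g-\gamma)/\eta$ and the period-doubled lab-frame oscillation of Eq.\,\ref{eq::Oscillation2}. All parameter values are illustrative.
\begin{verbatim}
import numpy as np

gamma, eta, Omega = 5e-4, 9e-4, 5.1716     # damping and drive as in Fig. 2
rng = np.random.default_rng(0)

# Linear stage: eigenvalues of the 2x2 dynamical matrix (eta = 0).
# Convention of the text: modes with Im(lambda) > 0 are amplified.
delta = 0.0                                # detuning eps_{n,k} - Omega/2
for g in [0.0, 2e-4, 4e-4, 6e-4, 1e-3]:
    M = np.array([[ delta - 1j*gamma/2,  g/2],
                  [-g/2, -delta - 1j*gamma/2]])
    lam = np.linalg.eigvals(M)
    print(f"g = {g:.1e}  max Im(lambda) = {lam.imag.max():+.2e}")
# threshold at g = gamma: below it magnons decay, above it they grow

# Non-linear stage: growth saturated by eta, then lab-frame oscillation.
g = 2e-3                                   # above threshold

def rhs(y):
    u, v = y                               # u = <a_k>, v = <a_{-k}>^* (rot. frame)
    du = -(gamma + eta*abs(u)**2)/2 * u + g/2 * v
    dv = g/2 * u - (gamma + eta*abs(v)**2)/2 * v
    return np.array([du, dv])

y = 1e-3 * (rng.standard_normal(2) + 1j*rng.standard_normal(2))  # noise seed
dt = 1.0
for _ in range(30000):                     # fixed-step RK4 to the steady state
    k1 = rhs(y); k2 = rhs(y + dt/2*k1)
    k3 = rhs(y + dt/2*k2); k4 = rhs(y + dt*k3)
    y += dt/6 * (k1 + 2*k2 + 2*k3 + k4)
print("steady |alpha_k|^2 =", abs(y[0])**2, " expected:", (g - gamma)/eta)

T = 2*np.pi/Omega                          # drive period
t = np.arange(5) * T
print(np.round((y[0]*np.exp(1j*Omega*t/2)).real, 3))  # sign flips every T
\end{verbatim}
The random seed plays the role of the vacuum fluctuations: the common phase of the saturated pair, and hence the sign pattern of the lab-frame oscillation, varies from run to run, mirroring the spontaneous symmetry breaking discussed next.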
Above a critical amplitude of the EM field, the amplified magnon field at the edges spontaneously breaks the $\mathbb{Z}_2$ symmetry by acquiring a finite, non-zero {\small $\left\langle\hat{\tilde{a}}_{n,k}\right\rangle_{\text{lab}}^{\text{s}}$} that oscillates in time with a period twice that of the driving EM field. Thus a discrete time crystal of edge state magnons is formed via amplification\,\cite{TimeCrystal1,TimeCrystal2}, spontaneously breaking the discrete time translational symmetry. This time crystalline behavior can be experimentally observed by measuring the transverse magnetization at the edges, i.e., the spin components $\hat{S}^x_i$ and $\hat{S}^y_i$ -- the spin component $\hat{S}^z_i$ is constant, because it is related to the number of magnons {\small $\left|\left\langle\hat{\tilde{a}}_{i,k}\right\rangle\right|^2$}, which is invariant in time in the steady state. The x-component of the spin, $\left\langle \hat{S}^x_i\right\rangle$, is given in terms of the fields $\left\langle \hat{\tilde{a}}_{n,k}\right\rangle_{\text{rot}}^{\text{s}}$ as \begin{widetext} \begin{equation} \left\langle\hat{S}_i^x\right\rangle = \sqrt{\frac{S}{2N_x}} \left[ \sum_{k>0,n} \left[U_1^\dagger\right]_{in} \left\langle \hat{\tilde{a}}_{n,k}\right\rangle_{\text{rot}}^{\text{s}} e^{\frac{i\Omega t}{2}} e^{-ikx_i} + \sum_{k>0,n} \left[U_2^\dagger\right]_{in} \left\langle \hat{\tilde{a}}_{n,-k}\right\rangle_{\text{rot}}^{\text{s}} e^{\frac{i\Omega t}{2}} e^{ikx_i} + \sum_{n} \left[U_1^\dagger\right]_{in} \left\langle \hat{\tilde{a}}_{n,0}\right\rangle_{\text{rot}}^{\text{s}} e^{\frac{i\Omega t}{2}} + \text{\small{H.c.}} \right] \label{eq::Sx} \end{equation} \end{widetext} Fig.\,\ref{fig::TimeCrystal}(d) demonstrates the oscillation of $S^x_i$ at a site on the upper edge (blue) and the lower edge (red). The different $k$-points arrive at the steady state at different times\,\cite{Supplementary}, so $S^x_i$ in Fig.\,\ref{fig::TimeCrystal}(d) modulates transiently and reaches a steady state only when all the amplified $k$-points do. The oscillation amplitudes of $\left\langle \hat{S}_i^x\right\rangle$ at the two edges are nearly, but not exactly, identical, since $\left\langle \hat{S}_i^x\right\rangle$ is a superposition of several fields $\left\langle \hat{\tilde{a}}_{n,k}\right\rangle_{\text{rot}}^{\text{s}}$\,(see Eq.\,\ref{eq::Sx}); while the magnitudes of the fields $\left\langle \hat{\tilde{a}}_{n,k}\right\rangle_{\text{rot}}^{\text{s}}$ at the two edges are the same, the phases are not. Moreover, the amplitude of oscillation varies between simulations due to the random starting conditions representing the vacuum fluctuations\,\cite{CoherentState1,CoherentState2}. Finally, time crystalline behavior also holds for the long-range order in the spatial directions due to the coherence of the pumped magnons at $k=\pi$. Since the amplification of magnons extends over a finite momentum range around $k=\pi$, a spatial modulation in the amplitude of oscillation is expected. \begin{widetext} \begin{figure}[tb] \includegraphics[width=\textwidth]{StabilityV4.png} \caption{ The two-magnon density of states\,(blue and yellow color plot) and band structure\,(black) at (a) zero magnetic field and (b) at $B_z=JS$. (c) The two-magnon scattering rate as a function of the width of the energy levels for different system widths.
For simplicity, we restricted the calculation to the scattering of upper-edge-state magnons, considering only the scattering between the pairs of points $k = \pm\pi$ and $k = \pm\pi \pm\delta k$.} \label{fig::Stability} \end{figure} \end{widetext} \paragraph*{Stability of the time crystal.-} Discrete time crystals of the amplified edge magnons are stable unless bulk eigenmodes with a significant overlap with the edge modes are amplified. The choice of the kagome ferromagnet is important, as it allows edge magnons to be selectively amplified without amplifying bulk magnons\,\cite{Amplification}. However, magnon scattering can excite other magnon eigenmodes and may not conserve the number of magnons. The Hamiltonian $\pazocal{H}_0$ does not contain any magnon non-conserving terms, but such terms may arise in the presence of spin anisotropy in many real quantum magnets. We have calculated the bulk band structure and the momentum-resolved two-magnon density of states, \begin{equation} D_\mathbf{k}(\omega)=\frac{1}{N} \sum_{n,n',\boldsymbol{\Delta}} \delta(\omega-\omega_{n,\boldsymbol{\Delta}}-\omega_{n',\mathbf{k}-\boldsymbol{\Delta}}) \end{equation} for a system with toroidal boundary conditions\,(Fig.\,\ref{fig::Stability}(a),(b)). When the energy of an eigenstate matches that of the two-magnon continuum, it can decay into two magnons with lower energies. In a longitudinal magnetic field $B_z$, the two-magnon continuum shifts by $2B_zS$, while the magnon bands shift by $B_zS$. Hence, for magnetic fields $B_z S>E_{\text{edge}}$, the edge states are energetically separated from the two-magnon continuum and cannot decay via this channel. At even higher fields, $B_z>6J$, the two-magnon continuum separates in energy from the magnon band structure, implying that two edge magnons cannot combine to produce a higher-energy magnon. As the magnetic field increases, the higher-order magnon non-conserving processes disappear faster than the two-magnon decay processes. Thus an external magnetic field can suppress the magnon non-conserving scattering in the system. Magnon {\it conserving} scattering processes cannot be eliminated by an external field and are always present in any spin Hamiltonian. We calculated the scattering rate of magnons due to two-magnon scattering\,(the quartic terms of the magnon Hamiltonian) using Fermi's golden rule, \begin{equation} s_2=\frac{2\pi}{\hbar} \sum_f\sum_i\left|\squeezeB{f}{\pazocal{H}_{0,\text{int}}^{(4)}}{i}\right|^2 \delta(E_f-E_i), \label{eq::FermiGoldenRule} \end{equation} where $\pazocal{H}_{0,\text{int}}^{(4)}$ is the quartic term of the magnon Hamiltonian\,\cite{Supplementary}, and $|f\rangle$ ($|i\rangle$) denotes the final (initial) state. For simplicity, we have restricted our calculation to the upper-edge states and considered only the scattering from the points $k=\pm \pi$ and $k=\pm\pi\pm\delta k$ (where $\delta k=0.0628$). To numerically calculate the scattering rate $s_2$, we have considered a finite width $\delta E$ of the energy levels, which physically corresponds to band broadening (see the sketch below for how such broadened $\delta$-functions enter these estimates). The scattering rate as a function of band broadening is plotted on a logarithmic scale in Fig.\,\ref{fig::Stability}(c) for different system sizes. We observe that the scattering rate decreases rapidly as the band broadening decreases. The band broadening results from magnon interactions, and the only way to control the bandwidth is by controlling the density of the amplified magnons.
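As a concrete illustration, the sketch below (Python/NumPy) evaluates the momentum-resolved two-magnon density of states $D_k(\omega)$ for a pair of one-dimensional cosine bands -- toy stand-ins, not the kagome dispersion -- with a Gaussian of width $\sigma$ replacing the $\delta$-function; the Zeeman term shifts each band by $B_zS$, and hence the two-magnon continuum by $2B_zS$.
\begin{verbatim}
import numpy as np

N, J, S, Bz, sigma = 200, 1.0, 0.5, 0.0, 0.05
ks = 2 * np.pi * np.arange(N) / N
bands = np.array([2*J*S*(1.0 - np.cos(ks)),       # toy acoustic band
                  2*J*S*(2.0 - 0.5*np.cos(ks))])  # toy optical band
bands = bands + Bz * S                            # Zeeman: +Bz*S per magnon

def delta(x):                                     # Gaussian-broadened delta
    return np.exp(-x**2 / (2*sigma**2)) / (sigma * np.sqrt(2*np.pi))

def D(k_idx, omega):
    """D_k(w) = (1/N) sum_{n,n',q} delta(w - w_{n,q} - w_{n',k-q})."""
    q = np.arange(N)
    total = 0.0
    for n in range(2):
        for m in range(2):
            total += delta(omega - bands[n, q] - bands[m, (k_idx - q) % N]).sum()
    return total / N

omegas = np.linspace(0.0, 6.0, 13)
print(np.round([D(N // 2, w) for w in omegas], 2))  # continuum at k = pi
\end{verbatim}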
A low density of amplified magnons is achieved by working at low temperature and low EM field intensity. Even if the scattering rate is high, as long as the lifetime of magnons in the scattered states is small, the behavior of the system is governed only by the selectively amplified edge state magnons\,\cite{TimeCrystal3}. Thus a small density of amplified edge state magnons results in a stable time crystal. These conditions also help minimise magnon decay due to magnon-phonon scattering. Finally, edge imperfections cause broadening of the edge states and reduce the yield of coherent magnons\,\cite{Tanaka2020,Pawlak2020}. The reduced scattering is a consequence of the topological protection of the edge state. The presence of chiral symmetry induces a flat edge mode at zero energy due to the finite $\pi$-Berry phase. In the absence of chiral symmetry, a quantized non-zero $\pi$-Berry phase can still produce a dispersive edge state at non-zero energy\,\cite{BerryPhase1,BerryPhase2}. Interestingly, the $\pi$-Berry phase protected topological edge states are robust against impurities with or without chiral symmetry\,\cite{BerryPhase2}, providing additional stability to the time crystalline behavior. However, the presence of the effective time reversal symmetry breaking DMI\,\cite{EffectiveTimeReversalSymmetry}, which is ubiquitous in many real quantum magnets, results in dispersive edge states. As a result, the condition $\epsilon_{n,k} = \epsilon_{n,-k}$ for a pair of amplified magnons, as assumed above, is broken. Then $\left\langle \hat{\tilde{a}}_{n,k}\right\rangle_{\text{rot}}^s$ becomes time dependent and, according to Eq.\,\ref{eq::Oscillation2}, the period of oscillation of the fields at a particular $k$-point is no longer exactly twice the period of the external EM field. For a finite system, there is a finite number of amplified points around $k=\pi$; summing over the fields at a few amplified points according to Eq.\,\ref{eq::Sx} results in an oscillation of $S^x$ that is incommensurate with the external field, i.e., a quasi-time crystal\,\cite{QuasiCrystal}. In the thermodynamic limit, when the number of $k$-points in the vicinity of $k=\pi$ diverges, the oscillation becomes chaotic, destroying the time crystal-like behaviour. \paragraph*{Experimental realisation.-} The edge-magnon time crystals can be observed using direct spatial and temporal imaging of spin-wave dynamics via multiple recently developed techniques, such as Kerr microscopy\,\cite{Kerr1,Kerr2}, Brillouin light scattering spectroscopy (BLS)\,\cite{BLS_imaging1,BLS_imaging2,BLS_imaging3}, and time-resolved scanning transmission x-ray microscopy (TR-STXM)\,\cite{STXMm1,STXM0,STXM10,STXM1,STXM2,STXM3,STXM4,STXM5,STXM6,STXM8,STXM9,Magnon_Time_Crystal}. BLS can detect magnons at a fixed frequency and wave vector\,\cite{BLS1,BLS2,BLS3,BLS4,BLS5,BLS6} and has recently been used to detect the space-time crystal in the ferromagnetic insulator \ce{YIG}\,\cite{TimeCrystal3}. Additionally, the theoretically proposed spin Hall noise spectroscopy is a promising technique to detect the presence of edge magnons at a given frequency\,\cite{SpinHallNoise1,SpinHallNoise2}. TR-STXM, in particular, is promising for directly imaging spin dynamics at the edge due to its high accuracy in detecting magnon dynamics, with spatial and temporal resolutions of 20\,nm and 50\,ps, respectively\,\cite{STXMm1,STXM0,STXM10,Magnon_Time_Crystal}.
Recently, this method has been used to observe the dynamics of a space-time crystal of bulk magnons in permalloy strips\,\cite{Magnon_Time_Crystal}. We propose the spin-$\frac{1}{2}$ kagome ferromagnets haydeeite\,\cite{Material1} and \ce{Cu(1,3-bdc)}\,\cite{Material2_V2} as possible hosts of the discrete time crystals of edge magnons discussed in this work. While haydeeite shows experimental evidence for the absence of DMI\,\cite{Material1}, \ce{Cu(1,3-bdc)} contains an out-of-plane DMI that does not break the effective time reversal symmetry\,\cite{EffectiveTimeReversalSymmetry} for the ferromagnetic ground state with in-plane magnetization\,\cite{Material2}. The periods of oscillation for haydeeite and \ce{Cu(1,3-bdc)} are calculated to be $0.05$\,ps and $0.25$\,ps, respectively, estimated using the experimentally determined Heisenberg exchange interactions\,\cite{Material1,Material2_V2}. The period of oscillation can be tuned via an external magnetic field, which controls the energy of the edge magnons. Thus these quantum magnets are excellent candidates for realizing a discrete time crystal of edge magnons. {\it Acknowledgements} B.Y. would like to acknowledge the support from the Singapore National Research Foundation (NRF) under NRF fellowship award NRF-NRFF12-2020-0005, and a Nanyang Technological University start-up grant (NTU-SUG). P.S. acknowledges financial support from the Ministry of Education, Singapore through MOE2019-T2-2-119. \begin{widetext} \section{Supplementary Material} \subsection{Polarization Operator} In this section, the mathematical details of the polarization term are discussed. The polarization operator is given as \begin{equation} \mathbf{P}_{ij}\approx \boldsymbol{p}_{0,ij} \left(\mathbf{S}_i\cdot\mathbf{Q}_{ij}\right) \left(\mathbf{S}_j\cdot\mathbf{Q}_{ij}\right), \label{eq::Polarization} \end{equation} which is derived from the following Hubbard model of electrons on the kagome lattice, \begin{equation} \pazocal{H}_{\text{Hubbard}} = -\sum_{ij} \left[ \begin{pmatrix} \hat{c}_{i\uparrow}^\dagger & \hat{c}_{i\downarrow}^\dagger \end{pmatrix} \left(t\mathbb{I}\cos(\theta)+it\boldsymbol{n}\cdot\boldsymbol{\sigma}\sin(\theta)\right) \begin{pmatrix} \hat{c}_{j\uparrow} \\ \hat{c}_{j\downarrow} \end{pmatrix} + \text{H.c.} \right] + U\sum_i \hat{n}_{i\uparrow}\hat{n}_{i\downarrow} \end{equation} where $t\cos(\theta)$ and $it\boldsymbol{n}\sin(\theta)$ ($\boldsymbol{n}$ is a unit vector) are the spin-independent and spin-dependent hopping amplitudes of electrons on nearest-neighbour bonds, respectively, and $U$ is the onsite Coulomb repulsion. The $\boldsymbol{p}_{0,ij}$ and $\boldsymbol{Q}_{ij}$ are given by \begin{align*} \boldsymbol{p}_{0,ij} &=-16\theta^2 e a \frac{t^3}{U^3} (\boldsymbol{e}_{jk}-\boldsymbol{e}_{ki}) =p_0 (\boldsymbol{e}_{jk}-\boldsymbol{e}_{ki}) , \\ \boldsymbol{Q}_{ij}&=\boldsymbol{n}-n^z\hat{z} \end{align*} where $e$ and $a$ are the electron charge and the lattice constant, respectively, and $\boldsymbol{e}_{jk}$ is the vector along the nearest-neighbour bond from site $j$ to site $k$. The sites $i$, $j$ and $k$ belong to the same triangle of the kagome lattice. The polarization terms other than those in Eq.\,\ref{eq::Polarization} are not important for this study because, in the diagonal basis in the rotating frame, they are time dependent and are therefore dropped in the rotating wave approximation leading to Eq.\,\ref{eq::EOM}.
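To make the geometry explicit, the short sketch below (Python/NumPy; the triangle orientation and all numerical values are placeholders) evaluates $\boldsymbol{p}_{0,ij}$ and $\boldsymbol{Q}_{ij}$ for one triangle, showing that $\boldsymbol{p}_{0,ij}$ lies in the plane, perpendicular to the bond $i\rightarrow j$, while $\boldsymbol{Q}_{ij}$ is the in-plane projection of $\boldsymbol{n}$.
\begin{verbatim}
import numpy as np

theta, t, U = 0.05, 1.0, 8.0   # weak spin-dependent hopping, Mott regime
ea = 1.0                       # electron charge times lattice constant
n = np.array([0.0, 1/np.sqrt(2), 1/np.sqrt(2)])  # unit vector of the hopping

# one triangle: bond vectors i -> j -> k -> i (they sum to zero)
e_ij = np.array([1.0, 0.0, 0.0])
e_jk = np.array([-0.5,  np.sqrt(3)/2, 0.0])
e_ki = np.array([-0.5, -np.sqrt(3)/2, 0.0])

p0_ij = -16 * theta**2 * ea * t**3 / U**3 * (e_jk - e_ki)
Q_ij  = n - n[2] * np.array([0.0, 0.0, 1.0])

print("p0_ij =", p0_ij)        # in-plane, perpendicular to e_ij
print("Q_ij  =", Q_ij)         # in-plane projection of n
\end{verbatim}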
\subsection{Derivation of the equation of motion} The Hamiltonian describing a kagome ferromagnet in a cylindrical geometry, coupled to an external EM field, is given by \begin{align} \pazocal{H}=&\frac{1}{2}\sum_{k} \begin{pmatrix} \Psi_{k}^\dagger & \Psi_{-k} \end{pmatrix} \begin{pmatrix} H_0(k) & O_{N\times N} \\ O_{N\times N} & H_0(-k)^T \end{pmatrix} \begin{pmatrix} \Psi_{k} \\ \Psi_{-k}^\dagger \end{pmatrix} \nonumber\\ &+ \frac{1}{2}\cos(\Omega t)\sum_{k} \begin{pmatrix} \Psi_{k}^\dagger & \Psi_{-k} \end{pmatrix} \begin{pmatrix} [H_c]_{11} & [H_c]_{12} \\ [H_c]_{21} & [H_c]_{22} \end{pmatrix} \begin{pmatrix} \Psi_{k} \\ \Psi_{-k}^\dagger \end{pmatrix}. \end{align} Here $N$ is the number of sites along the width of the ribbon\,(see Fig.\,2(a) in the main text), $\Psi_{k}=\left(\hat{a}_{1,k},\,\hat{a}_{2,k},\,...,\,\hat{a}_{N,k}\right)^T$, and $O_{N\times N}$ is the null matrix. The first and second matrices are derived from the unperturbed Hamiltonian Eq.\,1 and the coupling Hamiltonian Eq.\,2 of the main text, respectively. The Hamiltonian $\pazocal{H}$ is first represented in the diagonal basis $\tilde{\Psi}_k=U_1(k)\Psi_k$, $\tilde{\Psi}_{-k}^\dagger=U_2(k)\Psi_{-k}^\dagger$ of the matrices $H_0(k)$ and $H_0(-k)^T$. The system is then transformed from the lab frame to the rotating frame using the unitary operator $U(t)=\exp\left(\frac{i\Omega t}{2}\sum_k \tilde{\Psi}_k^\dagger \tilde{\Psi}_k\right)$. By neglecting the time-dependent terms, we obtain the following effective Hamiltonian: \begin{equation} \pazocal{H}_{\text{eff}}=\frac{1}{2} \sum_{k} \begin{pmatrix} \tilde{\Psi}_k^\dagger & \tilde{\Psi}_{-k} \end{pmatrix} \begin{pmatrix} \epsilon_k-\frac{\Omega}{2} & \frac{[\tilde{H}_{c}]_{12}}{2} \\ \frac{[\tilde{H}_{c}]_{21}}{2} & \epsilon_{-k}-\frac{\Omega}{2} \end{pmatrix} \begin{pmatrix} \tilde{\Psi}_k \\ \tilde{\Psi}_{-k}^\dagger \end{pmatrix}, \label{eq::EffectiveHamiltonian} \end{equation} where $\epsilon_{k}$ and $\epsilon_{-k}$ are the diagonal matrices of eigenvalues of $H_0(k)$ and $H_0(-k)^T$, respectively. The matrix $[\tilde{H}_{c}]_{12}=U_1 [H_{c}]_{12} U_2^\dagger$ is the coupling matrix in the diagonal basis. Only the off-diagonal blocks $[\tilde{H}_{c}]_{12}$ and $[\tilde{H}_{c}]_{21}$ of the coupling Hamiltonian appear in $\pazocal{H}_{\text{eff}}$ due to the rotating wave approximation.
The equation of motion for the field $\left\langle \tilde{\Psi}_{k}\right\rangle=\tilde{\alpha}_k=\left(\left\langle\hat{\tilde{a}}_{1,k}\right\rangle,\,\left\langle\hat{\tilde{a}}_{2,k}\right\rangle,\,...,\,\left\langle\hat{\tilde{a}}_{N,k}\right\rangle\right)^T$ is given by {\small \begin{equation} \frac{d}{dt} \begin{pmatrix} \tilde{\alpha}_k^*\\ \tilde{\alpha}_{-k} \end{pmatrix} = \mathrm{i} \begin{pmatrix} \tilde{\epsilon}_k-\mathrm{i}\frac{\gamma\mathbb{I}+\eta\left|\alpha_k\right|^2}{2} & \frac{\left[\tilde{H}_c\right]_{12}}{2} \\ -\frac{\left[\tilde{H}_c\right]_{21}}{2} & -\tilde{\epsilon}_{-k}-\mathrm{i}\frac{\gamma\mathbb{I}+\eta\left|\alpha_k\right|^2}{2} \end{pmatrix} \begin{pmatrix} \tilde{\alpha}_k^*\\ \tilde{\alpha}_{-k} \end{pmatrix}, \label{eq::EOM} \end{equation} }where $\hat{\tilde{a}}_{n,k}$ is the magnon annihilation operator of the $n$-th band at the $k$-point; $\tilde{\epsilon}_{k}$ is a diagonal matrix with elements $\epsilon_{n,k}-\frac{\Omega}{2}$, where $\epsilon_{n,k}$ is the energy eigenvalue; $\gamma$ and $\eta$ are phenomenological linear and non-linear damping constants; $\mathbb{I}$ is the identity matrix and $\left|\alpha_k\right|^2$ is the diagonal matrix with entries $\left|\left\langle\hat{\tilde{a}}_{n,k}\right\rangle\right|^2$. \subsection{Quartic Interaction Term} The four-body interaction term used in the calculation of the two-magnon scattering rate is derived from the spin Hamiltonian using the higher-order terms of the Taylor-series expansion of the square root in the Holstein-Primakoff transformation. The interaction term is given by \begin{equation} \pazocal{H}_{0,\text{int}}^{(4)}=-\frac{J\hbar^2}{4} \sum_{\left\langle ij\right\rangle} \left[ 4\hat{a}_i^\dagger\hat{a}_j^\dagger\hat{a}_i\hat{a}_j +\hat{a}_j^\dagger \hat{a}_j^\dagger \hat{a}_i\hat{a}_j +\hat{a}_i^\dagger \hat{a}_j^\dagger \hat{a}_i\hat{a}_i +\hat{a}_i^\dagger\hat{a}_j^\dagger \hat{a}_j\hat{a}_j +\hat{a}_i^\dagger\hat{a}_i^\dagger\hat{a}_i\hat{a}_j \right] \end{equation} \end{widetext}
\section{Introduction} \label{Introduction} Data-centric AI is an emerging topic that focuses on engineering data to develop AI applications with off-the-shelf machine learning (ML) models \cite{landingai}. Previous efforts have mainly followed model-centric AI, which assumes a static environment in which 1) data collection and engineering are complete, and 2) continuously developing ML models to achieve high performance on test sets is the main target \cite{eyuboglu2022dcbench}. However, real-world AI applications face more complicated scenarios that cannot be adequately addressed by model-centric AI. For instance, researchers have to spend a lot of time on data preparation, including data labeling \cite{chew2019smart}, error detection \cite{krishnan2017boostclean}, etc. Meanwhile, they also need to monitor data to detect distribution drift so as to update models in time \cite{huang2021modelci}. Treating these issues only from a model-centric view leads to sub-optimal solutions. Therefore, to further improve and democratize AI applications, many efforts are now turning to data-centric AI, or to combining model-centric and data-centric approaches \cite{landingai}. Though the concept of data-centric AI has been proposed only recently, many pioneering studies whose core contributions lie in data engineering have already appeared \cite{sener2018active, xu2021dataclue}. Among them, one vital direction is active learning (AL) \cite{ren2020survey}. The motivation of AL is to reduce manual labeling effort while maintaining, and even improving, ML model performance \cite{wang2014new, sener2018active, gal2017deep, ducoffe2018adversarial, caramalau2021sequential, ash2020deep, agarwal2020contextual, vzliobaite2013active, loy2012stream}. Specifically, it is well known that ML models are very data-hungry. Therefore, to reach a performance level (e.g., accuracy) that meets application requirements, practitioners usually need to label a large amount of data during data collection. This process is extremely time-consuming and labor-intensive and thus often becomes the bottleneck of ML application development. To cope with this issue, AL selects the most representative yet diverse training samples from a large training data pool by utilizing AL strategies. It then sends only the selected samples to an oracle (e.g., human annotators) for labeling, and ML models are trained only on these sub-datasets. By doing so, we can still obtain an ML model with competitive performance while greatly reducing labeling and training costs. However, utilizing AL methods is a non-trivial task. Applying AL to AI application development is not simply a matter of searching for, selecting, or implementing AL algorithms. Instead, users have to build a backend that runs the AL pipeline, tailored to their applications in their own environment (e.g., a private cluster or AWS). In other words, they need to undertake much repetitive engineering work with boilerplate code. Moreover, users have to consider efficiency and cost, as AL often runs on a vast dataset, and some AL algorithms (e.g., committee-based ones \cite{dagan1995committee, melville2004diverse}) require running more than one ML model for data selection. Neglecting these issues results in long processing times and additional cost. Though several open-source AL tools \cite{modal, deepal, libact, alipy} lower the barrier to applying AL, they do not meet the efficiency requirements. To address these issues, we propose to build an efficient backend for AL.
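To fix ideas, the sketch below runs one such pool-based AL loop with least-confidence sampling on a synthetic dataset; the dataset, model, seed size, and budget are toy stand-ins, and in a real deployment the lookup of \texttt{y} for the newly selected IDs would be replaced by a human oracle.
\begin{verbatim}
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)
rng = np.random.default_rng(0)

labeled = list(rng.choice(len(X), size=100, replace=False))  # seed set
pool = sorted(set(range(len(X))) - set(labeled))
budget_per_round, rounds = 100, 5

model = LogisticRegression(max_iter=1000)
for r in range(rounds):
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)        # least-confidence score
    picked = np.argsort(uncertainty)[-budget_per_round:]  # most uncertain
    new_ids = [pool[i] for i in picked]
    labeled += new_ids                           # oracle labels y[new_ids]
    new_set = set(new_ids)
    pool = [i for i in pool if i not in new_set]
    print(f"round {r}: labeled={len(labeled)}  acc={model.score(X, y):.3f}")
\end{verbatim}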
Our AL system, named Active-Learning-as-a-Service (ALaaS) (see Figure \ref{fig:system-arch}), is able to run AL strategies on large datasets efficiently by utilizing a single device or multiple distributed devices. Specifically, it adopts a server-client architecture to perform AL tasks. As a result, the system can be easily installed on both laptops and the public cloud. After installation, users can start the system with a simple configuration file by following our templates. The system then runs AL tasks in an efficient, pipelined manner. Meanwhile, further acceleration techniques such as data caching and batching \cite{crankshaw2017clipper, zhang2020mlmodelci, zhang2020hysia} are utilized to speed up the AL process. In addition, our system is designed for accessibility and modularity, so that non-experts can easily use the AL strategies stored in our AL zoo, and experts can contribute more advanced AL strategies for new scenarios. Experiments show that our ALaaS outperforms all other baselines in terms of latency and throughput. Further ablation studies show the effectiveness of our design and reveal more insights. \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{fig/workflow_v2.pdf} \caption{ALaaS architecture. Our system adopts a server-client architecture, which can be deployed easily. It also supports various AL strategies, different model zoos, and serving engines.} \label{fig:system-arch} \end{figure*} \section{Related Work} This section presents the related work in three categories: active learning (AL) algorithms and tools, data-centric AI, and MLOps. \subsection{AL Algorithms and Tools} We categorize AL strategies into three classes, namely diversity-based, uncertainty-based, and hybrid sampling. Diversity-based methods \cite{yang2015multi, sener2018active} are designed to select samples that best represent the whole dataset. Uncertainty-based methods \cite{wang2014new, roth2006margin, gal2017deep} aim to select the samples that cannot be confidently predicted by the current ML model and then use these samples to further improve it. Hybrid methods \cite{huang2010active, beluch2018power} combine the two approaches above. Our system supports all of these methods and runs them more efficiently. Many open-source AL tools have been developed to benefit both academia and industry, including ModAL \cite{modal}, DeepAL \cite{deepal}, libact \cite{libact}, and ALiPy \cite{alipy}. Our ALaaS is inspired by these tools and further improves AL efficiency and accessibility by adopting the MLOps concept. A detailed comparison is summarized in Table~\ref{tab:open-source-tool-compare}. \begin{table}[t] \centering \caption{Comparison of Active Learning (AL) open-source tools.
Our ALaaS provides a Machine-Learning-as-a-Service experience and substantially improves AL efficiency.} \label{tab:open-source-tool-compare} \vspace{7pt} \centering \adjustbox{max width=\textwidth}{ \begin{tabular}{lcccccc} \toprule \begin{tabular}[c]{@{}c@{}}AL \\Open-source Tool\end{tabular} & \begin{tabular}[c]{@{}c@{}}Pipelined \\Data Processing\end{tabular} & \begin{tabular}[c]{@{}c@{}}Elastic \\AL Serving\end{tabular} & \begin{tabular}[c]{@{}c@{}}Server-Client \\Architecture\end{tabular} & PyPI Install & Data Cache & AL Zoo \\ \midrule DeepAL \cite{deepal} & & & & & & \checkmark \\ ModAL \cite{modal} & & & & \checkmark & & \checkmark \\ ALiPy \cite{alipy} & & & & \checkmark & & \checkmark \\ libact \cite{libact} & & & & \checkmark & & \checkmark \\ \textbf{ALaaS (Ours)}& \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\ \bottomrule \end{tabular} } \end{table} \subsection{Data-centric AI} Data-centric AI is proposed to improve AI application performance by engineering datasets rather than focusing only on models. The recent data-centric AI competition and workshop \cite{landingai} from Landing.ai showcase many exciting studies from both academia and industry. Inspired by this pioneering work, many data-centric methods have been proposed for different areas, including NLP \cite{xu2021dataclue, seo2021automatic}, CV \cite{huang2021ymir, chakrabortyfirst}, robotics \cite{lin2022roboflow}, etc. Also, a new benchmark \cite{eyuboglu2022dcbench} has been built to push data-centric AI research forward. To the best of our knowledge, ALaaS is the first MLOps system for efficient AL from the data-centric view. \subsection{MLOps} MLOps (Machine Learning Operations) aims to streamline ML model development and reduce the maintenance cost of AI applications. Many MLOps systems have been proposed for both data-centric and model-centric AI. From a data-centric view, labeling tools (e.g., labelme \cite{russell2008labelme}), data cleaning tools (e.g., ActiveClean \cite{krishnan2016activeclean}), data drift monitors, and so on, can all be regarded as MLOps systems. From a model-centric view, we have model store systems \cite{vartak2016modeldb}, model continuous integration tools \cite{zhang2020mlmodelci, renggli2019continuous}, training platforms \cite{jiang2020unified}, deployment platforms \cite{chen2018tvm}, etc. Different from these systems, ALaaS is designed specifically to run AL tasks more efficiently. In addition, tech giants have started to build end-to-end cloud platforms for MLOps (e.g., TFX \cite{baylor2017tfx}, SageMaker \cite{das2020amazon}, Ludwig \cite{molino2019ludwig}). Our ALaaS can serve as a plugin complementary to these systems. \section{System Design and Architecture} This section first highlights our Active-Learning-as-a-Service (ALaaS) with three key features, then details the design of the core modules of the system, as shown in Figure \ref{fig:system-arch}. \subsection{ALaaS Highlights} We highlight three key features, namely efficiency, accessibility, and modularity, provided by our system. These features are also our design principles, guiding the implementation to consider both experts (e.g., data scientists and machine learning (ML) engineers) and non-experts (e.g., customers with little domain knowledge) throughout.
\textbf{Efficiency.} Active learning (AL) typically faces large-scale datasets to be labeled \cite{ren2020survey}, and some AL strategies even employ multiple computationally intensive deep learning (DL) models (e.g., Query-By-Committee \cite{dagan1995committee, melville2004diverse}). Thus, it is critical to process these datasets and models efficiently to accelerate ML application development and reduce users' cost. Our ALaaS offers an extremely efficient AL service by employing many optimization techniques, including pipelined processing \cite{narayanan2019pipedream}, the adoption of ML serving backends \cite{trtserving}, etc. \textbf{Accessibility.} To further lower the barrier to adoption, an AL system should ensure that AL non-experts can use it with minimal effort and without writing much code. Our ALaaS follows this principle and enables a smooth user experience by implementing a containerized AL service with rich configuration templates that help users get started quickly. \textbf{Modularity.} AL is evolving fast, especially driven by advances in deep learning, which requires a large amount of data to train. Making AL accessible should not hinder its advanced use by AL or ML experts. Therefore, our system is designed in a highly modular manner, enabling experts to prototype, extend, and deploy state-of-the-art (SOTA) AL methods with ease. \subsection{ALaaS Architecture} The system adopts a server-client architecture to abstract complex AL algorithms into web-based services, enabling an out-of-the-box user experience. Besides, our system provides a modular data manager and an AL strategy zoo, decoupling two key processes in AL utilization: large-scale data operations (e.g., data indexing and storage) and AL strategy development and selection. \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{fig/server_client_arch_v3.pdf} \caption{A deployed ALaaS system. The AL client sends data URIs to the AL server, where the data are downloaded. The AL server then sends data samples to different workers for AL processing.} \label{fig:server-client-design} \end{figure*} \textbf{Server-Client}. The server-client architecture makes AL accessible to users of different levels, ranging from domain experts to ML beginners with little background knowledge. It can be deployed on a personal laptop as well as on a public cloud. We take an ALaaS deployment on AWS \cite{aws} (see Figure \ref{fig:server-client-design}) as an example to detail the whole workflow. First, users only need to prepare a configuration file with basic settings, such as the dataset path and the AL method, by following the provided templates, as shown in Figure \ref{fig:config-example}. Then, with very few lines of code (LoCs), users can start both the AL client and the AL server. Next, users push their unlabeled datasets, which can be stored either on a local disk or in AWS S3 \cite{awss3}, to the AL server. \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{fig/server_client_api_example_v3.pdf} \caption{An AL service can be easily configured and started with YAML files.} \label{fig:config-example} \end{figure*} After receiving the dataset Uniform Resource Identifier (URI) from the AL client, the AL server downloads the dataset and processes it with the specified AL strategy in a pipelined manner, as shown in Figure \ref{pipeline-design}. With this frustratingly simple optimization, the processing speed becomes up to 10x faster than that of the other open-source platforms (see Section \ref{compare-open-source}).
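The intuition behind the pipelined dataflow can be illustrated in a few lines. The sketch below is a generic stage-parallel pipeline with bounded queues -- an illustration of the idea rather than ALaaS internals -- in which download, preprocessing, and inference stubs overlap on different samples, so the total time approaches that of a single stage instead of the sum of all three.
\begin{verbatim}
import queue, threading, time

def make_stage(work, q_in, q_out):
    def run():
        while True:
            item = q_in.get()
            if item is None:           # poison pill: shut down, forward it
                if q_out is not None:
                    q_out.put(None)
                return
            result = work(item)
            if q_out is not None:
                q_out.put(result)
    return run

def download(i):   time.sleep(0.01); return i   # stand-ins for real work
def preprocess(i): time.sleep(0.01); return i
def infer(i):      time.sleep(0.01); return i

q1, q2, q3 = queue.Queue(8), queue.Queue(8), queue.Queue(8)
threads = [threading.Thread(target=make_stage(f, qi, qo)) for f, qi, qo in
           [(download, q1, q2), (preprocess, q2, q3), (infer, q3, None)]]

start = time.time()
for t in threads:
    t.start()
for i in range(100):
    q1.put(i)
q1.put(None)
for t in threads:
    t.join()
print(f"pipelined: {time.time()-start:.2f}s (sequential ~{100*3*0.01:.1f}s)")
\end{verbatim}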
Meanwhile, the AL server will index every sample in the dataset by assigning it a unique ID with the help of the data manager. These IDs will be utilized by the AL strategies. Finally, the server distributes the downloaded samples to optimized inference workers backed by ML serving engines to perform inference. According to the pre-defined AL strategy, the AL server then makes decisions and generates a report including the URIs of the selected samples to be labeled. As a result, the AL server only needs to return these URIs to the AL client, avoiding transferring the selected samples themselves.

\begin{figure*}[b]
    \centering
    \includegraphics[width=1.0\textwidth]{fig/alaas_pipeline_v2.pdf}
    \caption{Dataflow comparison among conventional pool-based learning methods (a), (b) and the proposed ALaaS (c). These workflows show how data flows through machines in multiple rounds of AL with different methods. A red box represents a data sample at the download stage, a blue box a data sample at the processing stage, a green box a data sample at the AL inference stage, and a box with diagonal fill indicates no processing. The numbers inside the boxes indicate different rounds of AL.}
    \label{pipeline-design}
\end{figure*}

\textbf{Data Manager.} The data manager manages the lifecycle of datasets in our system. First, it accepts users' datasets and persists their metadata (e.g., name, owner, etc.) for data housekeeping. Second, while the system is running, it indexes data samples to avoid redundant data movement and batches data for efficient GPU processing. Meanwhile, it provides rich data transformation functions for different tasks such as NLP, CV, and audio. Moreover, for different kinds of AL methods, the data manager provides the corresponding processing methods to improve usability.

\textbf{AL Strategy Zoo.} The AL strategy zoo abstracts and stores many AL strategies, including uncertainty-based, Bayesian, density-based, batch-mode ones, etc. It also provides a base class for advanced users to inherit from and extend AL to new scenarios.

\textbf{Other Utilities.} To further lower the barrier to using AL and improve efficiency, the system offers many useful utility functions. First, as shown in Figure \ref{fig:system-arch}, the \textbf{model repository} is designed to connect to public model hubs like HuggingFace \cite{Wolf_Transformers_State-of-the-Art_Natural_2020} and TorchHub \cite{pytorchhub} and obtain pre-trained models from them. Second, as shown in Figure \ref{fig:server-client-design}, the data cache is employed to improve AL computation efficiency, and workers with the \textbf{serving engine} call different ML serving backends to speed up ML model inference.

\section{System Evaluation}
This section presents the quantitative evaluation of our system. We first compare our system with other open-source platforms. Then we benchmark our system from different perspectives to demonstrate its efficiency and accessibility.

\subsection{Evaluation Setup}
\textbf{Hardware \& Software.} We evaluate the system on AWS EC2 and a MacBook laptop. The backend inference software is Triton Inference Server \cite{trtserving}.

\textbf{Dataset.} We use the CIFAR-10 dataset \cite{cifar} to conduct experiments. It includes 50,000 training images and 10,000 test images.

\textbf{Model.} We use the widely deployed ResNet-18 \cite{he2016deep} model to evaluate system performance as well as to benchmark different AL strategies and AL settings.
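For concreteness, the following minimal sketch shows how an AL strategy such as the least confidence sampling used in the experiments below could be expressed against a strategy-zoo-style base class of the kind described in the architecture section above. All class and method names are illustrative assumptions rather than ALaaS's actual API; scoring operates on the softmax outputs returned by the serving backend.
\begin{verbatim}
import numpy as np

class BaseStrategy:
    """Hypothetical strategy-zoo base class: subclasses only define
    how unlabeled samples are scored; querying is shared."""
    def score(self, probs):                 # probs: (n_samples, n_classes)
        raise NotImplementedError
    def query(self, probs, budget):
        # Indices of the `budget` samples with the highest scores.
        return np.argsort(-self.score(probs))[:budget]

class LeastConfidence(BaseStrategy):
    """Least confidence: prefer samples whose top predicted class
    probability is smallest, i.e., where the model is least sure."""
    def score(self, probs):
        return 1.0 - probs.max(axis=1)

# Example: probs would come from the serving backend's softmax outputs.
probs = np.random.dirichlet(np.ones(10), size=1000)  # stand-in predictions
to_label = LeastConfidence().query(probs, budget=100)
\end{verbatim}
Because only the scoring rule changes from strategy to strategy, new strategies can be added to the zoo without touching the data manager or the serving pipeline.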
\subsection{Comparison with Other AL Open-source Tools}
\label{compare-open-source}
The first experiment compares the efficiency of ALaaS with that of other baselines.

\textbf{Settings.} In this experiment, we simulate a one-round AL process, which applies an AL method to scan the whole dataset and generate a sub-pool. This sub-pool includes the samples that will be used to further improve an existing ML model. Specifically, we first train an ML model with 10,000 randomly selected images from the CIFAR-10 training set as the initial model. Next, we use different AL tools to serve the trained model on an AWS 3x.large CPU/GPU EC2 instance. For all tools, we use the same AL strategy, namely least confidence sampling \cite{sequential1994david}. Finally, we utilize these tools to select 10,000 samples from the remaining 40,000 images in the training set and compare their latency and throughput.

\begin{wrapfigure}{r}{0.5\textwidth}
    \centering
    \includegraphics[width=0.4\textwidth]{exp/exp_acc_budget.pdf}
    \caption{Top-1 and top-5 accuracy (ACC) with different AL budgets.}
    \label{fig:ablation-al-budget}
\end{wrapfigure}

\textbf{Results \& Insights.} The results are shown in Table \ref{tab:open-source-tool-perf-eval}. Compared to the other tools, our ALaaS achieves the lowest latency and highest throughput while still maintaining the same accuracy. This efficiency improvement can be attributed to two factors. First, our ALaaS implements stage-level parallelism, which greatly reduces device idle time. Second, ALaaS adopts existing ML inference servers to accelerate model inference. Furthermore, we evaluate the intermediate results of our ALaaS with different budgets. As shown in Figure \ref{fig:ablation-al-budget}, as the budget increases, more samples are selected and the accuracy improves accordingly. This further demonstrates the effectiveness of our system.

\begin{table}[h]
\centering
\caption{Performance evaluation among different AL open-source tools. Compared to all baselines, ALaaS has the lowest latency and highest throughput.}
\label{tab:open-source-tool-perf-eval}
\vspace{7pt}
\adjustbox{max width=\textwidth}{
\begin{tabular}{lcccc}
\toprule
AL Open-source Tool & Top-1 Accuracy (\%) & Top-5 Accuracy (\%) & One-round AL Latency (sec) & End-to-end Throughput (Image/sec) \\
\midrule
DeepAL \cite{deepal} & 86.90 & 89.67 & 2287.00 $\pm$ 179.37 & 17.49 \\
ModAL \cite{modal} & 86.90 & 85.72 & 2006.95 $\pm$ 37.98 & 19.93 \\
ALiPy \cite{alipy} & 86.90 & 83.46 & 2410.85 $\pm$ 77.81 & 16.59 \\
libact \cite{libact} & 85.14 & 81.23 & 1771.33 $\pm$ 109.77 & 22.58 \\
\textbf{ALaaS (Ours)} & 86.90 & 88.12 & 552.45 $\pm$ 30.385 & 72.40 \\
\bottomrule
\end{tabular}}
\end{table}

\subsection{ALaaS Characterization}
We further benchmark our ALaaS with different system settings. The first experiment evaluates different AL strategies re-implemented in our system. The second experiment explores the impact of batch size on system efficiency.

\subsubsection{AL strategy impact}
Our ALaaS already provides many out-of-the-box AL strategies in the AL strategy zoo for users. This experiment evaluates these strategies, as re-implemented in ALaaS, from the accuracy and efficiency perspectives to provide more insights. All settings are the same as in the previous experiment.

\textbf{Results \& Insights.} The accuracy of the different methods is shown in Figure \ref{fig:ablation-al-strategy-acc}. Core-set \cite{sener2018active} achieves the highest accuracy, which is no surprise as it is designed for CNNs in computer vision (CV) tasks.
Meanwhile, K-Center Greedy (KCG) \cite{alcluster2004nguyen} and Least Confidence (LC) \cite{sequential1994david} achieve the second- and third-highest accuracy, although they were proposed much earlier. This tells us that even in the deep learning (DL) era, traditional methods still play a vital role and can cooperate with DL very well. The throughput is shown in Figure \ref{fig:ablation-al-strategy-throughput}. LC has the highest throughput, while Core-set achieves the lowest. Combining the accuracy (Figure \ref{fig:ablation-al-strategy-acc}) and throughput (Figure \ref{fig:ablation-al-strategy-throughput}) results, we conclude that the accuracy improvement of Core-set comes at the cost of its heavyweight design, while LC balances the accuracy-efficiency trade-off well. In summary, ALaaS provides many methods with clear accuracy and efficiency reports, so that users can choose among them according to their own scenarios.

\begin{figure}[h]
    \centering
    \begin{subfigure}{0.48\textwidth}
        \centering
        \includegraphics[width=1.0\linewidth]{exp/exp_acc_strategy.pdf}
        \caption{Top-1 and top-5 accuracy (ACC)}
        \label{fig:ablation-al-strategy-acc}
    \end{subfigure}\hfill%
    \begin{subfigure}{0.48\textwidth}
        \centering
        \includegraphics[width=1.0\linewidth]{exp/exp_qps_strategy.pdf}
        \caption{AL query throughput}
        \label{fig:ablation-al-strategy-throughput}
    \end{subfigure}\hfill%
    \caption{Performance of one-round AL for ResNet-18 \cite{he2016deep} on the CIFAR-10 dataset \cite{cifar} using different AL strategies (i.e., Least Confidence (LC) \cite{sequential1994david}, Margin Confidence (MC) \cite{margconf2001tobias}, Ratio Confidence (RC) \cite{settles2009active}, Entropy Sampling (ES) \cite{settles2009active}, K-Center Greedy (KCG) \cite{alcluster2004nguyen}, K-Means Sampling (KMeans) \cite{alcluster2004nguyen}, Core-set \cite{sener2018active}, and Diverse Mini-Batch (DBAL) \cite{diverse2019fedor}). The lower-bound baseline uses the random sampling (Random) strategy, while the upper-bound baseline uses the entire dataset for training.}
    \label{fig:ablation-al-strategy}
\end{figure}

\subsubsection{Batch size impact}
\textbf{Settings.} We evaluate the impact of the batch size (BS) in two deployment scenarios: a private server and the AWS cloud. Specifically, we first store the CIFAR-10 dataset on a private FTP server and on AWS S3, respectively. We then start ALaaS on a laptop to simulate the end-to-end AL process, including downloading data from other devices, pre-processing the data, and selecting samples with an AL strategy. The other settings are the same as in the first experiment.

\textbf{Results \& Insights.} Our ALaaS manages the whole process in both environments with different batch sizes steadily and efficiently, as shown in Figure \ref{fig:ablation-al-infer-bs}. The figure also reveals several interesting phenomena. First, BS = 1 and BS = 2 have very close throughput. Second, the increase from BS = 2 to BS = 8 is the most dramatic. Third, beyond BS = 16, the increase stops. We attribute this to the fact that transmission time accounts for a large proportion of the total processing time when the batch size is small, so the throughput improvement is marginal at the beginning. Then the batch computation time becomes the largest part of the total processing time, so the improvement is dramatic. Finally, when the batch size reaches the computation capacity, the increase stops.
\begin{figure}[t]
    \centering
    \begin{subfigure}{0.48\textwidth}
        \centering
        \includegraphics[width=1.0\linewidth]{exp/exp_ftp_param_acc.pdf}
        \caption{Images stored on a private FTP server}
        \label{fig:ablation-al-infer-bs-ftp}
    \end{subfigure}\hfill%
    \begin{subfigure}{0.48\textwidth}
        \centering
        \includegraphics[width=1.0\linewidth]{exp/exp_s3_param_acc.pdf}
        \caption{Images stored on AWS S3}
        \label{fig:ablation-al-infer-bs-s3}
    \end{subfigure}\hfill%
    \caption{End-to-end throughput of one-round pool-based AL for ResNet-18 \cite{he2016deep} on CIFAR-10 \cite{cifar} with different AL inference batch sizes. Storing images on a private FTP server (Figure \ref{fig:ablation-al-infer-bs-ftp}) and on S3 (Figure \ref{fig:ablation-al-infer-bs-s3}) both show a monotonic increase of end-to-end throughput over the inference batch size.}
    \label{fig:ablation-al-infer-bs}
\end{figure}

\section{Conclusion}
This paper presents a new MLOps system, named ALaaS, for data-centric AI. ALaaS adopts the philosophy of Machine-Learning-as-a-Service and implements a server-client architecture, so users can use AL as a web service. Meanwhile, it abstracts the AL process into multiple components and develops several modules, including a data manager, an AL strategy zoo, and utility functions, to support them. More importantly, ALaaS employs stage-level parallelism (i.e., a pipelined manner), caching, and batching to improve AL running efficiency. Experiments show that our system has lower latency and higher throughput than all other baselines. We release our code to facilitate AL research.

\newpage
\section{Introduction}
Policy evaluation plays a crucial role in many real-world applications, including healthcare, marketing, and the social sciences, among many others. Before deploying any new policy, it is crucial to know the impact of this policy. However, in the aforementioned applications, it is often impractical to evaluate a new policy by directly running it. As a result, the new policy needs to be evaluated offline based on an observational dataset generated by a possibly different behavior policy. This formulates the off-policy evaluation (OPE) problem.

Most works in the literature focus on evaluating the \textit{average} value of a target policy aggregated over different initial states. In many applications, such as healthcare and the technology industry, in addition to the average effect, it is crucial to learn the value under a given initial condition (e.g., the individual effect) as well. For instance, in precision medicine, it allows us to estimate the outcome of each individual patient following a given treatment regime. In online recommendation, it allows us to evaluate the effect of a new strategy for each individual visitor. Moreover, in addition to a point estimator of the value, it is crucial to evaluate the uncertainty of the estimator in safety-critical applications. Uncertainty quantification allows us to determine whether the target policy is significantly better than an existing one or not.

\subsection{Related Work}
\textbf{Off-policy evaluation}. There is a huge literature on OPE. Existing methods can be divided into three categories, corresponding to the direct method \citep[see e.g.,][]{le2019batch}, the importance sampling (IS) method \citep[see e.g.,][]{precup2000eligibility,liu2018breaking,nachum2019dualdice}, and the doubly robust method \citep[see e.g.,][]{farajtabar2018more,kallus2018policy,tang2019doubly,uehara2020minimax,kallus2020double}. In addition, several papers have studied interval estimation of the policy's value for uncertainty quantification \citep{thomas2015high,jiang2016doubly,hanna2017bootstrapping,dai2020coindice,feng2020accountable,jiang2020minimax,chandak2021universal,shi2021deeply}. These confidence intervals are typically derived based on concentration inequalities, normal approximations, the bootstrap \citep{efron1994introduction}, or the empirical likelihood method \citep{owen2001empirical}. However, all the aforementioned methods focus on the average effect of the target policy. To our knowledge, interval estimation of the individual effect has been much less explored in the literature.

\textbf{Conformal prediction}. Our proposal is closely related to a line of research on conformal prediction (CP), which was originally introduced by \cite{vovk2005conformal} to construct valid model-free \textit{prediction} intervals (PIs) for the response. Both PIs and confidence intervals (CIs) express uncertainty in statistical estimates. Nonetheless, a CI gives a range for the conditional mean function of the response, whereas a PI aims to cover the response itself. A key strength of CP lies in its generality. Specifically, it can accommodate \textit{any} prediction model under minimal assumptions on the data. Recently, \cite{tibshirani2019conformal} developed a weighted CP method to handle settings under covariate shift.
The weighted CP method was further extended and applied to a number of applications, including individual treatment effect estimation \citep{lei2021conformal}, survival analysis \citep{candes2021conformalized}, and classification under label shift \citep{podkopaev2021distribution}, to name a few. These methods considered either a standard supervised learning setting, or a contextual bandit setting with ``state-agnostic'' target policies. These settings differ from ours, which involves sequential decision making and a general target policy.

\textbf{Distributional reinforcement learning}. Recently, there has been an emerging line of research on distributional reinforcement learning that estimates the entire distribution of the return under the optimal policy \citep[see e.g.,][]{bellemare2017distributional,dabney2018distributional,mavrin2019distributional,zhou2020non}. Our proposal shares a similar spirit with these works in that it not only considers the expected return, but also takes the variability of the return around its expectation into account.

\subsection{Contribution}
Methodologically, we develop a novel procedure to construct off-policy PIs for a target policy's return starting from any initial state in sequential decision making. It is fundamentally different from many existing OPE methods that consider the average effect aggregated over different initial states, construct CIs for the expected return, and ignore the variance of the return around its expectation. A key ingredient of our proposal lies in constructing a pseudo variable whose distribution depends on both the target and the behavior policy. We next sample a subset of observations based on this pseudo variable and apply the weighted conformal prediction method to the selected subsamples. Finally, we develop an importance-sampling-based and a multi-sampling-based strategy to improve efficiency.

Theoretically, we prove that the proposed PI achieves valid coverage asymptotically. In addition, when the behavior policy is known to us (e.g., as in randomized studies), it achieves exact coverage in finite samples. Such a property is particularly appealing as the sample size is usually limited in offline domains. Finally, our PI is asymptotically efficient when the regression estimator is consistent.
\section{Preliminaries: Conformal Prediction}\label{sec:2}
We begin with a brief overview of the CP algorithm in supervised learning. At a high level, CP allows us to calibrate prediction intervals computed by general black-box machine learning algorithms with rigorous statistical guarantees. Being algorithm-agnostic is particularly appealing in machine learning, where predictive algorithms are emerging and evolving rapidly. Specifically, CP first splits the data into training and calibration subsets. On the training dataset, it learns the conditional mean of a response $Y$ given $X$ using any machine learning algorithm. Let $\widehat{\mu}$ denote the estimated regression function. On the calibration dataset $\{Z_i=(X_i,Y_i)\}_{i=1}^{n}$, it calculates a nonconformity score (e.g., $|y-\widehat{\mu}(x)|$) that measures how each observation ``conforms'' to the training dataset. The resulting PI is constructed based on the empirical quantiles of these nonconformity scores and attains valid coverage as long as the data observations are exchangeable. There are many choices of the score function available, and we refer readers to \cite{gupta2021nested} for details. One widely used score function is given by $\max\{\widehat{q}_{\alpha_{lo}}(x)-y, y-\widehat{q}_{\alpha_{hi}}(x)\}$, where $\widehat{q}_{\alpha}(x)$ denotes an estimator of the $\alpha$th conditional quantile of $Y$ given $X=x$, and $\alpha_{lo}$ and $\alpha_{hi}$ denote the lower and upper quantile levels, respectively \citep{romano2019conformalized}. The resulting algorithm is referred to as conformal quantile regression (CQR).

We next review the weighted CP algorithm developed by \cite{tibshirani2019conformal}. As commented earlier, the aforementioned CP algorithm relies on exchangeability --- a key assumption that requires the likelihood of the joint distribution to be invariant to the order of the samples. This assumption is clearly violated under distribution shift. To address this concern, \cite{tibshirani2019conformal} introduced the notion of ``weighted exchangeability'', which relaxes the classical i.i.d. assumption and is automatically satisfied for independent samples.

\begin{definition}(Weighted Exchangeability)
Random variables $V_1,\ldots,V_n$ are said to be weighted exchangeable if the density $f$ of their joint distribution can be factorized as
\begin{equation*}
f(v_1,\ldots,v_n)=\prod_{i=1}^{n}w_i(v_i)g(v_1,\ldots,v_n),
\end{equation*}
for certain weight functions $w_1,\ldots,w_n$, and a permutation-invariant function $g$ such that $g(v_{\sigma(1)},\ldots,v_{\sigma(n)})=g(v_1,\ldots,v_n)$ for any permutation $\sigma$ of $\{1,\ldots,n\}$.
\end{definition}

According to this definition, independent data are always weighted exchangeable, with weight functions corresponding to likelihood ratios.

\begin{lemma}\label{lemma1}
Let $V_i\sim P_i$, $i=1,\ldots,n$, be independent draws, where each $P_{i}$ is absolutely continuous with respect to $P_1$ for $i\ge 2$. Then $V_1,\ldots,V_n$ are weighted exchangeable with weight functions $w_1=1$ and $w_i=dP_i/dP_1$, $i\ge 2$.
\end{lemma}

Let $V_i=\mathcal{S}(Z_i,\mathcal{Z}_{tr})$ denote the nonconformity score for the $i$th observation in the calibration data, based on a certain machine learning algorithm trained on the training dataset $\mathcal{Z}_{tr}$, and let $V_{n+1}^{(x,y)}=\mathcal{S}((x,y),\mathcal{Z}_{tr})$ denote the one for an arbitrary predictor-response pair $(x,y)$.
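To fix ideas, the following Python sketch implements the vanilla split CP procedure with the CQR score described above; the quantile-regression learner \texttt{fit\_quantiles} is a placeholder for any black-box algorithm and, like all names here, is an illustrative assumption.
\begin{verbatim}
import numpy as np

def split_cqr(X_tr, Y_tr, X_cal, Y_cal, x_new, alpha, fit_quantiles):
    # Step 1: learn lower/upper conditional quantiles on the training
    # fold; fit_quantiles is any black-box learner returning callables.
    q_lo, q_hi = fit_quantiles(X_tr, Y_tr, alpha / 2, 1 - alpha / 2)
    # Step 2: CQR nonconformity scores on the calibration fold,
    # V_i = max{q_lo(X_i) - Y_i, Y_i - q_hi(X_i)}.
    V = np.maximum(q_lo(X_cal) - Y_cal, Y_cal - q_hi(X_cal))
    # Step 3: (1 - alpha) empirical quantile of the scores, with the
    # usual finite-sample correction (n + 1 in place of n).
    n = len(V)
    k = int(np.ceil((1 - alpha) * (n + 1)))
    Q = np.sort(V)[k - 1] if k <= n else np.inf
    # Prediction interval for the response at x_new.
    return q_lo(x_new) - Q, q_hi(x_new) + Q
\end{verbatim}
With exchangeable data, the returned interval covers a new response with probability at least $1-\alpha$, regardless of how well the quantile learner fits.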
Instead of relying on the empirical quantiles of these nonconformity scores, the weighted CP algorithm considers a weighted version and constructs the PI for $Y$ given $X=x$ using
\begin{eqnarray*}
\widehat{C}_{n}(x)=\{y\in\mathbb{R}: V_{n+1}^{(x,y)}\le \mbox{Quantile}(1-\alpha; \sum_{i=1}^{n}p_{i}^{w}\delta_{V_{i}}+p_{n+1}^{w}\delta_{\infty})\},
\end{eqnarray*}
where $\alpha$ denotes the given significance level, $\delta_{a}$ denotes a distribution that places all mass at the value $a$, and $\{p_{i}^{w}\}_{i=1}^{n+1}$ are functions of the weights $\{w_{i}\}_{i=1}^{n+1}$ whose explicit expression is given in \citet{tibshirani2019conformal}.

Finally, we remark that the (weighted) CP method possesses several appealing statistical properties. First, it does not depend on any specific model assumption on the conditional distribution of the outcome given the covariates; as such, it is applicable to complex nonlinear and high-dimensional settings. Second, it achieves exact coverage in the sense that $P\{Y \in \widehat{C}_n(X)\}\ge 1-\alpha$ for any $n$. On the contrary, most interval estimation procedures are only \textit{asymptotically} valid. Nonetheless, it is not straightforward to extend these methods to the OPE problem. See Section \ref{sec:challenge} for details.

\section{Conformal Off-Policy Prediction in Contextual Bandits}
\label{sec:3}
\subsection{Problem Formulation}
To better illustrate the idea, in this section we focus on a contextual bandit setting (i.e., single-stage decision making) where the observed data consist of $n$ i.i.d. samples $\{(X_i,T_i,Y_i)\}_{i=1}^{n}$, where $X_i$ collects the contextual information of the $i$th instance, $T_i\in \{0,1,\cdots,m-1\}$ denotes the treatment (e.g., action) that the $i$th instance receives, with $m$ the number of treatment options, and $Y_i$ is the corresponding response (e.g., reward). We adopt a counterfactual/potential outcome framework \citep{rubin2005causal} to formulate the OPE problem. Specifically, for any $0\le t\le m-1$, let $Y_i^t$ denote the reward that would be observed were the $i$th instance to receive action $t$. A policy $\pi$ is a (stochastic) decision rule that maps the contextual space to a distribution function over the action space. We use $\pi(t|x)$ to denote the probability that the agent selects treatment $t$ given $X=x$. For a given target policy $\pi_D$, we are interested in inferring the conditional distribution, given $X$, of the potential outcome $Y^D$ that would be observed were the instance to follow $\pi_{D}$. Specifically, for any $X=x$, we aim to produce a PI for $Y^D$ with valid coverage guarantees. Notice that our objective differs from the standard OPE problem, in which one aims to derive a CI for $\mathbb{E} Y^D$. Finally, we impose standard assumptions in the causal inference literature \citep[see e.g.,][]{zhang2012robust,zhu2017greedy,chen2022policy}, including (1) $Y_i^{T_i}=Y_i$ almost surely for any $i$ (i.e., consistency); (2) $(Y_i^0, \cdots, Y_i^{m-1}) \perp \!\!\! \perp T_i|X_i$ for any $i$ (i.e., no unmeasured confounders); (3) $\pi(t|x)$ is uniformly bounded away from zero for any $t,x$ (i.e., positivity).

\subsection{Conformal Prediction for Off-Policy Evaluation}\label{sec:challenge}
To motivate our proposed approach, we first introduce two extensions of CP to the OPE problem in this section and discuss their limitations. We next illustrate the main idea of our proposal.
OPE is essentially a policy evaluation problem under distributional shift, where the target policy $\pi_D$ differs from the behavior policy (denoted by $\pi_T$) that generates the offline data. By Lemma \ref{lemma1}, the calibration dataset $\{ (X_i,Y_i) : 1\le i\le n \}$ and the predictor-potential outcome pair $(X_{n+1},Y_{n+1}^D)$ in the target population satisfy weighted exchangeability with weights $w_i=1$ for $1\le i\le n$ and \begin{eqnarray}\label{eqn:weight} w_{n+1}(x,y)=\frac{dP_{Y^{D}|X}(y|x)}{dP_{Y|X}(y|x)}. \end{eqnarray} As a result, a direct application of the weighted CP method is valid for OPE given the weights $\{w_i\}_{i=1}^{n+1}$. We refer to the resulting algorithm as the direct method. To apply weighted CP, it remains to specify the weight $w_{n+1}$. Notice that both $Y$ and $Y^D$ correspond to a mixture of $\{Y^t: 0\le t\le m-1\}$ with different weight vectors. Estimating $w_{n+1}$ essentially requires learning the conditional densities of $Y^t$ given $X$ --- an extremely challenging task in complicated high-dimensional nonlinear systems. As we will show later, this approach fails to cover $Y^D$ when the conditional density model is misspecified. {\textbf{Subsampling-based method}}. Another approach to handle distributional shift is to take a data subset whose distribution is similar to the target distribution and analyse this sub-dataset. In particular, for each observation, we sample a pseudo action $D$ following the target policy, select the subsamples whose pseudo action matches the observed action, and apply CP to these subsamples; see the sketch below. We refer to the resulting algorithm as the subsampling-based method.
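The subsampling step itself is simple; a minimal sketch follows, in which \texttt{pi\_D} is an assumed callable returning the target policy's action-probability vector at a point, and all names are illustrative.
\begin{verbatim}
import numpy as np

def subsample_by_target_policy(X, T, Y, pi_D, rng):
    """Draw a pseudo action D_i ~ pi_D(.|X_i) and keep samples with D_i = T_i."""
    probs = np.stack([pi_D(x) for x in X])                  # (n, m) probabilities
    D = np.array([rng.choice(len(p), p=p) for p in probs])  # pseudo actions
    keep = (D == T)
    return X[keep], Y[keep]
\end{verbatim}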
However, this approach is not valid. This is because the distribution of the selected subsamples $\{(X_i,Y_i): T_i = D_i,1\le i\le n \}$ generally differs from that of $(X_{n+1}, Y_{n+1}^D)$. The two distributions coincide only when $\pi_D$ is deterministic or $\pi_T$ is uniformly random, as shown below. \begin{proposition}\label{prop:1} Let $D$ denote a pseudo action generated according to the target policy $\pi_D$. Then the conditional distribution of $Y$ given $D=T$ and $X$ follows the mixture distribution \begin{equation*} P_{Y|D=T,X}=\sum_{t=0}^{m-1} \frac{\pi_{D}(t|X)\pi_{T}(t|X)}{\sum_{t'}\pi_{D}(t'|X)\pi_{T}(t'|X) }P_{Y^t|X}. \end{equation*} The above mixture distribution equals $P_{Y^{D}|X}=\sum_t \pi_{D}(t|X)P_{Y^t|X}$ if and only if $\pi_D$ is a deterministic policy or $\pi_{T}(0|X)=\pi_T(1|X)=\cdots=\pi_T(m-1|X)$. \end{proposition} \textbf{Our proposal}. The subsampling-based method fails because the distribution of the selected response differs from that of the potential outcome. To address this issue, instead of sampling according to the target policy $\pi_D$, we carefully design a pseudo policy $\pi_A$ whose distribution depends on both $\pi_D$ and $\pi_T$, such that the resulting subsamples' distribution matches that of the potential outcome. More specifically, for any $1\le t\le m-1$ and any $x$, $\pi_A$ shall satisfy \begin{eqnarray}\label{eqn:ratio} \frac{\pi_A(t|x)}{\pi_A(0|x)}=\frac{\pi_D(t|x)}{\pi_D(0|x)}\left[ \frac{\pi_T(t|x)}{\pi_T(0|x)} \right]^{-1}. \end{eqnarray} In other words, $\pi_A(t|x)$ shall be proportional to the ratio $\pi_D(t|x)/\pi_T(t|x)$ for any $t$ and $x$. Similar to Proposition \ref{prop:1}, we can show that the subsamples with $A=T$ follow the distribution \begin{eqnarray*} P_{Y|A=T,X}=\sum_{t=0}^{m-1} \frac{\pi_{A}(t|X)\pi_{T}(t|X)}{\sum_{t'}\pi_{A}(t'|X)\pi_{T}(t'|X) }P_{Y^t|X}=\sum_{t=0}^{m-1} \pi_D(t|X) P_{Y^t|X}=P_{Y^D|X}. \end{eqnarray*} This implies that subsampling according to the pseudo policy $\pi_A$ yields the same conditional distribution as $P_{Y^D|X}$ in the target population. Nonetheless, the selected subsamples and the target possess different covariate distributions. Such a ``covariate shift'' problem can be naturally handled by the weighted CP algorithm. Using Lemma \ref{lemma1} again, the subsamples and the target population are weighted exchangeable with weights $w_i=1$ for any $i$ such that $A_i=T_i$ and \begin{eqnarray*} w_{n+1}(x,y)=\frac{P_{X,Y^D}(x,y)}{P_{X,Y|A=T}(x,y)}=\frac{P_X(x)}{P_{X|A=T}(x)}=\frac{P(A=T)}{P(A=T|X=x)}\propto\frac{1}{P(A=T|X=x)}. \end{eqnarray*} Compared to the direct method (see Equation \eqref{eqn:weight}), the weight in the above expression depends only on the behavior policy, which is known in randomized studies. Consequently, our proposal is robust to the model misspecification of the conditional distribution $P_{Y^t|X}$, as shown later. When the behavior policy is unknown, it can be estimated based on existing supervised learning algorithms. We summarize our proposal in Algorithm \ref{alg1}, and call our method COPP, short for conformal off-policy prediction. Finally, we remark that by \eqref{eqn:ratio}, $\pi_A=\pi_D$ only when $\pi_D$ is deterministic or $\pi_T$ is uniformly random. Consequently, the subsampling-based method is valid in these two special cases.
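To make the construction concrete, below is a minimal sketch of the pseudo-policy, weight, and weighted-quantile computations underlying Steps 7--9 of Algorithm \ref{alg1}. Here \texttt{pi\_D\_x} and \texttt{pi\_T\_x} denote the target and behavior action-probability vectors at a point $x$; all names are illustrative assumptions.
\begin{verbatim}
import numpy as np

def pseudo_policy(pi_D_x, pi_T_x):
    """Pseudo policy: pi_A(t|x) proportional to pi_D(t|x) / pi_T(t|x)."""
    ratio = pi_D_x / pi_T_x
    return ratio / ratio.sum()

def copp_weight(pi_D_x, pi_T_x):
    """w(x) = 1 / P(A = T | X = x); equals sum_t pi_D(t|x) / pi_T(t|x)."""
    pi_A_x = pseudo_policy(pi_D_x, pi_T_x)
    return 1.0 / np.dot(pi_A_x, pi_T_x)

def weighted_quantile(scores, weights, w_test, alpha):
    """(1-alpha) quantile of the weighted score distribution, with the
    remaining normalized mass placed at +infinity."""
    order = np.argsort(scores)
    scores, weights = scores[order], weights[order]
    p = weights / (weights.sum() + w_test)  # normalized weights p_i
    cdf = np.cumsum(p)
    idx = np.searchsorted(cdf, 1 - alpha)
    return np.inf if idx == len(scores) else scores[idx]
\end{verbatim}
Given the calibration scores and these weights, the PI then follows as in Step 10 by widening the fitted quantile band by the returned quantile.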
\textbf{A numerical example}. We conduct a simulation study to further demonstrate the sub-optimality of the direct and subsampling-based methods. We generate 500 data points from Example 1 of Section \ref{sec:5} for calibration and 10000 test data points. We consider a stochastic target policy and a deterministic target policy. We further consider two conditional distribution models for $Y^t|X$, corresponding to a correctly specified model (denoted by ``true'') and a misspecified model (denoted by ``false'') generated by injecting uniform random noise on $(0,1)$ into the oracle distribution function. It can be seen from Figure \ref{SS} that the direct method fails to cover the response when the conditional distribution model is misspecified, whereas the subsampling-based method fails when the target policy is stochastic. By contrast, our proposal achieves valid coverage in all settings. \vspace{-2em} \begin{figure}[h] \centering \subfloat{\includegraphics[width=0.25\linewidth]{ran_M1.pdf}} \subfloat{\includegraphics[width=0.25\linewidth]{det_M1.pdf}} \subfloat{\includegraphics[width=0.25\linewidth]{ran_M0.pdf}} \subfloat{\includegraphics[width=0.25\linewidth]{det_M0.pdf}} \caption{Empirical coverage probabilities of PIs based on the Direct method (DM), Subsampling-based method (SM) and our proposal (COPP) in single-stage studies.
The stochastic target policy is given by $\pi_{D}(1|X)=1-\pi_D(0|X)=\text{sigmoid}(-0.5+X^{(1)}+X^{(2)}-X^{(3)}-X^{(4)})$ and the deterministic target policy is given by $\mathbb{I}(X^{(3)}+X^{(4)}>X^{(1)}+X^{(2)})$. The nominal level is $90\%$.} \label{SS} \end{figure} \begin{algorithm}[ht] \caption{COPP: Conformal off-policy prediction for single-stage decision making}\label{alg1} \begin{algorithmic}[t] \State \textbf{Input:} Data $\{(X_i,T_i,Y_i)\}_{i=1}^n$; a test point $X_{n+1}$; a target policy $D$ with propensity score $\pi_{D}$; a propensity score training algorithm $\mathcal{P}$; a quantile prediction algorithm $\mathcal{Q}$; a base nonconformity score $\mathcal{S}$; and a coverage level $1-\alpha$ with $\alpha_{hi}-\alpha_{lo}=1-\alpha$. \begin{itemize} \item[1:] Split the data into two disjoint subsets $\mathcal{Z}_{tr}$ and $\mathcal{Z}_{ca}$. \item[2:] Estimate $\pi_{T}(t|x)$ via $\mathcal{P}$ using all samples from $\mathcal{Z}_{tr}$, i.e., $\widehat{\pi}_{T}(t|x)\leftarrow \mathcal{P}(\{(X_{i},T_{i})\}_{i\in\mathcal{Z}_{tr}})$. \item[3:] Draw $A_i$ for $i=1,\ldots,n$ by plugging $\widehat{\pi}_{T}(t|x)$ into \eqref{eqn:ratio}. \item[4:] Select subsamples satisfying $A_i=T_i$ in both subsets. Denote them by $\mathcal{Z}_{tr}^{s}$ and $\mathcal{Z}_{ca}^{s}$. \item[5:] Train quantile regressions using $\mathcal{Q}$ on selected subsamples from $\mathcal{Z}_{tr}^{s}$, i.e., $$\widehat{q}_{\alpha_{lo}}(x;\mathcal{Z}_{tr}^{s})\leftarrow \mathcal{Q}(\alpha_{lo},\{(X_{i},Y_{i})\}_{i\in\mathcal{Z}_{tr}^{s}}), \widehat{q}_{\alpha_{hi}}(x;\mathcal{Z}_{tr}^{s})\leftarrow \mathcal{Q}(\alpha_{hi},\{(X_{i},Y_{i})\}_{i\in\mathcal{Z}_{tr}^{s}}).$$ \item[6:] Compute the nonconformity scores for all selected subsamples $i\in \mathcal{Z}_{ca}^{s}$: $$V_i=\max\{\widehat{q}_{\alpha_{lo}}(X_i;\mathcal{Z}_{tr}^{s})-Y_i, Y_i-\widehat{q}_{\alpha_{hi}}(X_i;\mathcal{Z}_{tr}^{s})\}.$$ \item[7:] Compute the weights for all selected subsamples $i\in \mathcal{Z}_{ca}^{s}$ and the test point $X_{n+1}$ $$\widehat{w}(X_i)=\textstyle\sum_{t=0}^{m-1}\pi_{D}(t|X_i)/\widehat{\pi}_{T}(t|X_i),~ \widehat{w}(X_{n+1})=\textstyle\sum_{t=0}^{m-1}\pi_{D}(t|X_{n+1})/\widehat{\pi}_{T}(t|X_{n+1}).$$ \item[8:] Compute the normalized weights for $i\in \mathcal{Z}_{ca}^{s}$ and the test point $X_{n+1}$ $$\widehat{p}_{i}(X_{n+1})=\frac{\widehat{w}(X_i)}{\textstyle\sum_{i\in\mathcal{Z}_{ca}^{s}}\widehat{w}(X_i)+\widehat{w}(X_{n+1})},~ \widehat{p}_{\infty}(X_{n+1})=\frac{\widehat{w}(X_{n+1})}{\textstyle\sum_{i\in\mathcal{Z}_{ca}^{s}}\widehat{w}(X_i)+\widehat{w}(X_{n+1})}.$$ \item[9:] Compute $Q_{1-\alpha}(X_{n+1})$ as the $(1-\alpha)$th quantile of $\sum_{i\in\mathcal{Z}_{ca}^s}\widehat{p}_i(X_{n+1})\delta_{V_{i}}+\widehat{p}_{\infty}(X_{n+1})\delta_{\infty}$. \item[10:] Construct a prediction set for $X_{n+1}$: $$\widehat{C}(X_{n+1})=[\widehat{q}_{\alpha_{lo}}(X_{n+1};\mathcal{Z}_{tr}^s)-Q_{1-\alpha}(X_{n+1}),\widehat{q}_{\alpha_{hi}}(X_{n+1};\mathcal{Z}_{tr}^s)+Q_{1-\alpha}(X_{n+1})].$$ \end{itemize} \State \textbf{Output:} A prediction set $\widehat{C}(X_{n+1})$ for the outcome $Y^{D}$ driven by the target policy $D$. \end{algorithmic} \end{algorithm} \textbf{Statistical properties}. Let $\widehat{\pi}_T(t|x)$ denote the estimated behavior policy, let $\widehat{w}(x)$ denote the estimated weight function in Step 7 of Algorithm \ref{alg1}, and let $w(x)=1/P(A=T|X=x)$ denote the oracle value of $\widehat{w}(x)$. We first show that COPP achieves valid coverage \textit{asymptotically} when the behavior policy is consistently estimated.
Notice that we do not require consistency of the estimated conditional outcome distribution. \begin{theorem}[Asymptotic coverage]\label{thm:AOP} Let $n_{1}=|\mathcal{Z}_{tr}|$ and $n^\prime_1=|\mathcal{Z}_{tr}^{s}|$. Suppose further that $E[\widehat{w}(X)|\mathcal{Z}_{tr}]<\infty$, that $E[w(X)]<\infty$, and that the behavior policy estimates are consistent (see the detailed requirement in Appendix A). Then the output $\widehat{C}(x)$ from Algorithm \ref{alg1} satisfies $$ \lim_{n_1,n^\prime_1\rightarrow\infty}P_{(X,Y^{D})\sim P_{X}\times P_{Y^{D}|X}}(Y^{D}\in\widehat{C}(X))\ge 1-\alpha.$$ \end{theorem} Next, we show that if the propensity scores are known in advance, the proposed PI achieves exact coverage in finite samples. \begin{theorem}[Exact coverage] Suppose that $E[w(X)]<\infty$. Then the output $\widehat{C}(x)$ from Algorithm \ref{alg1} with correctly specified propensity scores satisfies, for any sample size $n$, \begin{equation*} P_{(X,Y^{D})\sim P_{X}\times P_{Y^{D}|X}}(Y^{D}\in\widehat{C}(X))\ge 1-\alpha. \end{equation*} \end{theorem} Finally, we show that the proposed PI is asymptotically efficient when the quantile regression estimator in Step 5 of Algorithm \ref{alg1} is consistent \citep{sesia2020comparison}. \begin{theorem}[Asymptotic efficiency] Suppose that the behavior policy is known and that the quantile regression estimates are consistent (see the detailed requirement in Appendix A). Then the output $\widehat{C}(X)$ from Algorithm \ref{alg1} satisfies \begin{equation*} L(\widehat{C}(X)\triangle C_{\alpha}^{\text{oracle}}(X))=o_{p}(1), \end{equation*} as $|\mathcal{Z}_{tr}|,|\mathcal{Z}_{ca}|\rightarrow\infty$. Here $L(A)$ denotes the Lebesgue measure of the set $A$, $\triangle$ is the symmetric difference operator, i.e., $A\triangle B=(A \backslash B)\cup(B\backslash A)$, and $C_{\alpha}^{\text{oracle}}(X)$ is the oracle interval defined as $[q_{\alpha_{lo}}(X),q_{\alpha_{hi}}(X)]$. \end{theorem} \subsection{Extensions}\label{sec:ext} We discuss two extensions of COPP, based on importance sampling and multi-sampling, respectively. \textbf{Extension 1}. One limitation of COPP is that the PIs are constructed based only on the observations in the selected subsamples. Nonetheless, when the target policy is stochastic, each observation has a certain chance of being selected. To make full use of the data, we adopt the importance sampling trick \citep[see e.g.,][]{tsiatis2006semiparametric} to compute the normalized weights and the quantile in Steps 8 and 9 of Algorithm \ref{alg1}, respectively. Specifically, in Step 7, we set the weight $\widehat{w}(X_i)$ for each sample in $\mathcal{Z}_{ca}$ to $\widehat{\pi}_A(T_i|X_i)\widehat{w}(X_i)$. Recall that $\widehat{w}(x)$ is an estimate of $1/P(A=T|X=x)$. These weights are then passed to Step 8 to compute $\widehat{p}_i$, and subsequently to Step 9 to calculate $Q_{1-\alpha}(X_{n+1})$, with $\mathcal{Z}_{ca}^{s}$ replaced by the whole calibration set $\mathcal{Z}_{ca}$. As we will show in Section \ref{sec:5}, this procedure is much more efficient than COPP when the selected subsamples contain only a few observations. We next prove that this extension achieves valid coverage as well.
\begin{theorem}\label{thm:AEOP} Under the conditions of Theorem \ref{thm:AOP}, we have \begin{equation*} \lim_{n_1,n^\prime_1\rightarrow\infty}P_{(X,Y^{D})\sim P_{X}\times P_{Y^{D}|X}}(Y^{D}\in\widehat{C}(X))\ge 1-\alpha. \end{equation*} \end{theorem} \textbf{Extension 2}. The second extension integrates COPP with the multi-sampling method. Notice that Algorithm \ref{alg1} implements the subsampling only once, so the result can be sensitive to the particular subsamples that are selected. To mitigate the randomness introduced by this single-sampling procedure, we propose to repeat COPP multiple times and then aggregate all the resulting PIs to gain efficiency. To combine multiple PIs, we adopt the idea proposed by \cite{solari2022multi} for multi-split conformal prediction. A key observation is that the PI in Algorithm \ref{alg1} is equivalent to $\widehat{C}(x)=\{y: p(x,y)\ge\alpha\}$ where $$p(x,y)=\sum_{i\in \mathcal{Z}_{ca}^s}\widehat{p}_i(x)\mathbb{I}[\max\{\widehat{q}_{\alpha_{lo}}(x;\mathcal{Z}_{tr}^{s})-y,y-\widehat{q}_{\alpha_{hi}}(x;\mathcal{Z}_{tr}^{s})\}\le V_i]+\widehat{p}_{\infty}(x),$$ serving as a $p$-value for testing the hypothesis $H_{0}:Y_{n+1}^{D}=y$ against $H_{1}:Y_{n+1}^{D}\ne y$ given $X_{n+1}=x$. This allows us to follow the idea of \citet{meinshausen2009p} for $p$-value aggregation. Let $p^{b}(x,y)$ for $1\le b\le B$ denote the $p$-values of the $B$ constructed PIs with significance level $\alpha\gamma$, for a given tuning parameter $0<\gamma<1$. We aggregate these $p$-values by setting $\bar{p}(x,y)$ to their empirical $\gamma$-quantile. The final PI is given by $\widehat{C}_{B,\gamma}(x)=\{y: \bar{p}(x,y)\ge \alpha\}$. \begin{theorem}\label{thm:boot} Under the conditions of Theorem \ref{thm:AOP}, we have for any $B>0$ and $0<\gamma<1$, \begin{equation*} \lim_{n_1,n_1^\prime\rightarrow\infty}P_{(X,Y^{D})\sim P_{X}\times P_{Y^{D}|X}}(Y^{D}\in\widehat{C}_{B,\gamma}(X))\ge 1-\alpha. \end{equation*} \end{theorem} Finally, we remark that we only derive the asymptotic coverage of the two extensions in Theorems \ref{thm:AEOP} and \ref{thm:boot}. Nonetheless, when the behavior policy is known, these methods also achieve exact coverage. \section{Conformal Off-Policy Prediction in Sequential Decision Making}\label{sec:4} \textbf{Problem formulation}. In this section, we consider sequential decision making, where the observed data consist of $n$ i.i.d.\ samples $\{(X_{1i},T_{1i},X_{2i},T_{2i},\ldots,X_{Ki},T_{Ki},Y_{i})\}_{i=1}^{n}$ in which, for the $i$th instance, $X_{ki}$ collects the state information at the $k$th stage, $T_{ki}\in\{0,\ldots,m-1\}$ denotes the action at the $k$th stage, and $Y_{i}$ is the corresponding reward at the final stage.
Such a sparse reward setting is frequently considered in precision medicine applications \citep{murphy2003optimal}. Meanwhile, our method is equally applicable to settings with immediate rewards at each decision point (see Appendix B). Let $H_{k}=\{X_1,T_1,\ldots,X_k\}$ denote the history up to the $k$th stage. We define a (history-dependent) policy $\Pi=(\pi_{1}(t_1|h_1),\pi_{2}(t_2|h_2),\ldots,\pi_{K}(t_K|h_K))$ as a sequence of (stochastic) decision rules, where each $\pi_{k}(t_k|h_k)$ determines the probability that an agent selects action $t_k$ at the $k$th stage given that $H_{k}=h_k$. For a given target policy $\Pi_{D}$, we are interested in constructing PIs for the potential outcome $Y^{D}$ that would be observed were the instance to follow $\Pi_{D}$, for any initial state $x_1$. To save space, we state the consistency, sequential ignorability, and positivity assumptions in Appendix B. \textbf{COPP}. We generalize our proposal in Section \ref{sec:challenge} to sequential decision making. We design a pseudo policy $\Pi_{A}$, which relies on both the behavior policy $\Pi_{T}$ and the target policy $\Pi_{D}$, to generate subsamples whose outcome distribution conditional on the state-action history matches that of the potential outcome. Specifically, for any $1\le k\le K$, the pseudo policy shall satisfy $\pi_{A}(t_k|h_k)\propto\pi_{D}(t_k|h_k)/\pi_{T}(t_k|h_k)$ for any $t_k$ and $h_k$. Similar to Proposition \ref{prop:1}, we can show that the conditional density of $Y|A_K=T_K,H_{K}$ equals that of $Y^{D}|H_{K}$. More importantly, by iteratively integrating over the space of $\{T_k,X_{k+1},\cdots,X_K\}$, we can show that the conditional density of $Y|A_k=T_k, \cdots, A_K=T_K,H_k$ also equals that of $Y^{D}|H_k$ for each $k$. Using Lemma \ref{lemma1} again, the subsamples $\{(H_{1i},Y_{i}): A_{ki}=T_{ki}, 1\le k\le K, 1\le i\le n \}$ and the target population $(H_{1,n+1},Y_{n+1})$ are weighted exchangeable with weights $w_i=1$ for any $i$ and \begin{equation*} w_{n+1}(h_1,y)\propto P^{-1}(A_1=T_1,\cdots,A_K=T_K|H_1=h_1). \end{equation*} Based on these weights, the PIs can be derived similarly as in Algorithm \ref{alg1}. We defer the pseudocode and the statistical properties of the constructed PIs to Appendix B. \textbf{Extensions}. Our proposal suffers from the ``curse of horizon'' \citep{liu2018breaking} in that the number of selected subsamples decreases exponentially fast with respect to the number of decision stages. While this phenomenon is unavoidable without further model assumptions \citep{jiang2016doubly}, the importance-sampling-based and multi-sampling-based approaches alleviate this issue to some extent, as shown in our simulations. Since these extensions are very similar to those presented in Section \ref{sec:ext}, we omit them for brevity.
\section{Synthetic Data Analysis}\label{sec:5} In this section, we conduct simulation studies to investigate the empirical performance of our proposed methods. In particular, we focus on the following two examples considered in \citet{wang2018quantile}: \begin{itemize}[leftmargin=*] \item \textbf{Example 1 (Single-Stage Decision Making):} The baseline covariates $X^{(1)},X^{(2)},X^{(3)},X^{(4)}$ are independently generated from the uniform distribution on $(0,1)$. The action is binary and satisfies $P(T=1|X)= \mbox{sigmoid}(-0.5-0.5\sum_{j=1}^4 X^{(j)})$ where $\mbox{sigmoid}(t)=\exp(t)/[1+\exp(t)]$. The return is given by $Y=1+X^{(1)}-X^{(2)}+(X^{(3)})^3+\exp(X^{(4)})+T(3-5X^{(1)}+2X^{(2)}-3X^{(3)}+X^{(4)})+(1+T)(1+\sum_{j=1}^4 X^{(j)})\epsilon$ where $\epsilon$ is a standard normal variable independent of $X$ and $T$. The target policy $\pi_D$ satisfies $\pi_D(1|X)=\mbox{sigmoid}(-0.5+X^{(1)}+X^{(2)}-X^{(3)}-X^{(4)})$. \item \textbf{Example 2 (Two-Stage Decision Making):} States and actions are generated as follows: \begin{eqnarray*} X_1\sim \textrm{Uniform}(0,1),\qquad T_1|X_1\sim \textrm{Bernoulli}(\textrm{sigmoid}(-0.5+X_1)),\\ X_2|X_1,T_1\sim \textrm{Uniform}(X_1,X_1+1),\qquad T_2|X_1,T_1,X_2\sim\mbox{Bernoulli}(\mbox{sigmoid}(-0.5-X_2)). \end{eqnarray*} The final return is given by $Y=1+X_1+T_1[1-3(X_1-0.2)^2]+X_2+T_2[1-5(X_2-0.4)^{2}]+(1+0.5T_1-T_1X_1+0.5T_2-T_2X_2)\epsilon$ for a standard normal variable $\epsilon$ independent of states and actions. The target policy is defined as follows \begin{eqnarray*} D_1|X_1\sim\mbox{Bernoulli}(\mbox{sigmoid}(0.5X_1-0.5)),\,\,D_2|X_1,D_1,X_2\sim\mbox{Bernoulli}(\mbox{sigmoid}(0.5X_2-1)). \end{eqnarray*} \end{itemize} For each example, we further consider two settings. In the high-dimensional setting, we manually include $100-p_0$ null variables, uniformly distributed on $(0,1)$, in the state, where the number of true covariates $p_0$ equals $4$ and $1$ in Examples 1 and 2, respectively. In the low-dimensional setting, these null variables are not included. This yields a total of four different scenarios. The sample size is fixed at $2000$. \textbf{Implementation details}. We estimate the behavior policy using logistic regression. In the high-dimensional setting, we apply penalized regression to improve the estimation efficiency. The conditional quantile functions are estimated based on quantile regression forests \citep{meinshausen2006quantile}. Following \citet{sesia2020comparison}, we use 75\% of the data for training and the rest for calibration. We fix $\alpha_{lo}=\alpha/2$ and $\alpha_{hi}=1-\alpha/2$ in all settings. In addition, to implement the multi-sampling-based method, we fix $\gamma=1/2$ and set the significance level to $\alpha$ instead of $\alpha\gamma=\alpha/2$ to improve the precision (interval length); we find that the resulting PI still achieves nominal coverage in practice. The number of intervals $B$ is set to 100 in the low-dimensional setting, and to $50$ in the high-dimensional setting to reduce the computation time. Finally, in each simulation, we generate $10000$ test data points in the target population to evaluate the coverage probability.
\textbf{Benchmark specification}. We compare our proposed methods against the subsampling-based method (SM) detailed in Section \ref{sec:challenge}. In low-dimensional settings, we also compare with the standard importance sampling (IS) and doubly robust (DR) methods \citep[see e.g.,][]{dudik2011doubly,zhang2012robust,jiang2016doubly} designed for off-policy confidence interval estimation. These methods focus on the average effect; we couple them with kernel density estimation to infer the individual effect conditional on the initial state. Please refer to Appendix C for the detailed implementation. \textbf{Results}. Figure \ref{Low-dim} reports the coverage probability and average length of the various interval estimators, aggregated over 100 simulation runs. The extensions of our proposal based on importance sampling, multi-sampling, and a combination of the two are denoted by COPP-IS, COPP-MS and COPP-IS-MS, respectively. We summarize our findings below. First, intervals based on SM, IS and DR significantly undercover the potential outcome. As we have commented, these methods are not valid in general. SM requires either a uniformly random behavior policy or a deterministic target policy. IS and DR focus on the expected return and ignore the variability of the return around its expectation. Second, all the proposed methods achieve nominal coverage in most cases. Among them, the multi-sampling-based methods (COPP-MS and COPP-IS-MS) achieve the best performance, with substantially reduced variability compared to the single-sampling-based methods. In addition, COPP-IS performs much better than COPP in two-stage settings where the number of subsamples is limited, as expected. \begin{figure}[ht] \centering \subfloat{\includegraphics[width=0.25\linewidth]{LS_CP.pdf}} \subfloat{\includegraphics[width=0.25\linewidth]{LS_AL.pdf}} \subfloat{\includegraphics[width=0.25\linewidth]{LT_CP.pdf}} \subfloat{\includegraphics[width=0.25\linewidth]{LT_AL.pdf}}\\ \subfloat{\includegraphics[width=0.25\linewidth]{HS_CP.pdf}} \subfloat{\includegraphics[width=0.25\linewidth]{HS_AL.pdf}} \subfloat{\includegraphics[width=0.25\linewidth]{HT_CP.pdf}} \subfloat{\includegraphics[width=0.25\linewidth]{HT_AL.pdf}} \caption{Empirical coverage probabilities and average lengths of intervals based on SM, IS, DR, and our proposed COPP, COPP-IS, COPP-MS, COPP-IS-MS in four settings.
The nominal level is $90\%$.} \label{Low-dim} \end{figure} \section{Real Data Analysis} We illustrate the usefulness of our method on a dataset collected from a world-leading technology company. This company runs one of the largest mobile platforms for the production, aggregation, and distribution of short-form videos, with extensive search functionalities. It implements a strategy to encourage its users to explore the search functionality: when a user launches the app for the first time in a day, they see a pop-up window recommending the search feature. However, pop-ups are annoying for some users. As such, the company is interested in `pop-up' policies that apply this strategy only to a subgroup of users, so as to increase their search frequency. The dataset is collected from an online experiment involving two million users. The features available to us consist of each user's historical information, including how frequently they used the app and the search functionality prior to the experiment. The reward is the user's search frequency after treatment, whose distribution is highly heavy-tailed. As such, instead of focusing on a target policy's expected return, we are interested in its entire distribution. As commented earlier, most existing OPE methods are not directly applicable here. In addition, since the behavior policy is known to us, the proposed method is robust to the model misspecification of the outcome distribution and achieves exact coverage. We apply our proposed method to 80\% of the data for constructing PIs and use the remaining data to evaluate these PIs. We consider two target policies: a pre-trained optimal stochastic policy and a uniformly random policy. Table \ref{tab:real_data} reports the average length and the average lower and upper bounds of these PIs, as well as the associated standard errors, when applied to the users in the testing dataset. It can be seen that our methods are able to distinguish the two policies, since the average lower bound of the optimal policy is significantly larger than that of the random policy.
\begin{table}[h] \centering \begin{tabular}{c|c|c|c|c|c} Metric & Policy & COPP & COPP-MS & COPP-IS & COPP-IS-MS \\ \hline \multirow{2}{*}{Length} & Optimal & 8.185(0.037) & 8.272(0.038) &8.665(0.034) &8.270(0.038) \\ \cline{3-6} & Uniform & 8.211(0.037) & 8.211(0.037) &8.298(0.036)& 8.220(0.037)\\ \hline \multirow{2}{*}{Lower Bound} & Optimal & 0.117(0.003) &0.108(0.003)& 0.117(0.003)& 0.105(0.003) \\ \cline{3-6} & Uniform & 0.043(0.002) & 0.052(0.002) & 0.058(0.003) & 0.042(0.002)\\ \hline \multirow{2}{*}{Upper Bound} & Optimal & 8.302(0.039)& 8.380(0.039) & 8.782(0.036)& 8.375(0.039) \\ \cline{3-6} & Uniform & 8.253(0.038) & 8.263(0.038) & 8.357(0.037) & 8.263(0.038) \\ \hline \end{tabular} \caption{Average length, lower bound and upper bound of the proposed prediction intervals when applied to users in the testing dataset (with standard errors in parentheses).} \label{tab:real_data} \end{table}
\section{Introduction} \label{sec:introduction} The ESA Gaia\xspace mission was designed to create the most precise three-dimensional map of the Milky Way, along with its kinematics, through the repeated observation of about two billion stars. Gaia\xspace\ observes all objects in the sky down to an apparent $G$ magnitude of about $21$ mag, which includes millions of galaxies and quasars \citep{DR1-DPACP-18}. The data collected between 25 July 2014 and 28 May 2017 (34 months) have been processed by the Gaia\xspace Data Processing and Analysis Consortium (DPAC) to provide the third data release of the Gaia\xspace catalogue, \gdr{3}. For sources with $\ensuremath{G}\xspace \leq 17$ mag, typical positional uncertainties are on the order of $80$ $\mu$as; parallax uncertainties on the order of $100$ $\mu$as; proper motion uncertainties on the order of $100$ $\mu$as yr$^{-1}$; and $G$ magnitude uncertainties on the order of $1$ mmag. In addition to this exquisite astrometric and photometric performance, Gaia\xspace provides high-resolution spectroscopy ($R = \lambda/\Delta \lambda \approx 11700$) centred around the calcium triplet ($845$--$872$ nm), hence the name of the instrument, the radial velocity spectrometer (RVS), as well as low-resolution spectrophotometry from two instruments: the blue photometer (BP) covering the wavelength range $330$--$680$ nm with $30 \leq R \leq 100$, and the red photometer (RP) covering the wavelength range $640$--$1050$ nm with $70 \leq R \leq 100$ \citep{2021A&A...652A..86C}. Eight coordination units (CUs) were set up within the DPAC, each focusing on a particular aspect of the Gaia\xspace processing: CU1 for managing the computer architecture; CU2 for the data simulations; CU3 for the core astrometric processing; CU4 for the analysis of non-single stars, Solar System objects, and extended objects; CU5 for the photometric BP/RP\xspace processing; CU6 for the spectroscopic RVS processing; CU7 for the variability analysis; and CU8 for the determination of the astrophysical parameters (APs) of the observed sources. Finally, a ninth CU is responsible for the catalogue validation, access, and publication. This paper is the third in a series of three papers describing the processing done within CU8. The first of these, \cite{DR3-DPACP-157}, summarises the work done in CU8 and the various APs it produces. The second, \cite{DR3-DPACP-160}, describes the stellar APs. The present paper discusses the object classification and the non-stellar APs produced by CU8, namely the redshifts of extragalactic sources and the total Galactic extinction map. We describe the results and methods of the relevant modules, as they have evolved since their pre-launch description \citep{Apsis2013}, while focusing on technical details. A thorough scientific analysis of these results, seen from a cross-CU perspective, can be found in performance verification papers such as \cite{DR3-DPACP-101}, where the classification and characterisation of the extragalactic sources are discussed in more detail. We provide an overview of the data products from the classification and non-stellar modules in Section \ref{sec:overview}. The Discrete Source Classifier (\modulename{DSC}), which classifies sources probabilistically into five classes that are known a priori from its training set (quasar, galaxy, star, white dwarf, and physical binary star), is described in Section \ref{sec:dsc}.
The Outlier Analysis (\modulename{OA}), which complements the \modulename{DSC} classification through a clustering algorithm applied to the BP/RP\xspace spectra of sources with low \modulename{DSC} probability, is described in Section \ref{sec:oa}. The Quasar Classifier (\modulename{QSOC}) and the Unresolved Galaxy Classifier (\modulename{UGC}), both based on BP/RP\xspace spectra, make use of the \modulename{DSC} probabilities in order to identify quasars and galaxies and subsequently determine their redshifts; these are described in Sections \ref{sec:qsoc} and \ref{sec:ugc}, respectively. The global stellar parameters of giant stars, as inferred from BP/RP\xspace spectra, allow the Total Galactic Extinction (\modulename{TGE}) module to derive the Galactic extinction seen along a given line of sight, as described in Section \ref{sec:tge}. Finally, we summarise the improvements that are currently foreseen for \gdr{4} in \secref{sec:beyond_gdr3}. Additional information on the design and performance of the modules can be found in the Gaia\xspace online documentation\footnote{\href{https://gea.esac.esa.int/archive/documentation/GDR3/index.html}{https://gea.esac.esa.int/archive/documentation/GDR3/index.html}}. \section{Overview of the non-stellar astrophysical parameters from CU8 in Gaia DR3} \label{sec:overview} The five non-stellar modules together contribute $110$ unique fields to \gdr{3}. Table \ref{tab:aps_overview} provides an overview of the tables and fields that each of the modules contributes to, including the resulting number of entries in each table. These fields are spread over eight different tables and concern about $1.6$ billion unique sources. Figure \ref{fig:overview} sketches the inter-dependency between these modules, the selection they apply to the \modulename{DSC} probabilities, their input, their output, and the number of sources for which they produce results in \gdr{3}. The different selection policies of the modules are clearly seen in this plot; each leads to a different associated completeness and purity. The filtering that each module subsequently applies to its own results is not shown here, so the number of sources satisfying a module's \modulename{DSC} selection criteria should generally not be expected to equal the number of sources for which that module has results in \gdr{3}. \begin{figure*}[t!] \centering \includegraphics[width=0.9\textwidth]{figures/overview.png} \caption{Dependency of the \modulename{OA}, \modulename{UGC}, \modulename{QSOC}, and \modulename{TGE} modules on the \modulename{DSC} combined probabilities for the selection of the sources to be processed (\texttt{classprob\_dsc\_combmod}, see \secref{sec:dsc} for a definition). For each module, we provide a synthetic view of their input and output, and the number of sources for which the module produces results in \gdr{3}. In the case of \modulename{TGE}, we provide the number of extinction estimates that were computed in level 9 HEALPixes (see \secref{sec:tge}). Unlike the other modules described here, \modulename{TGE} additionally relies on the General Stellar Parametrizer from Photometry (\modulename{GSP-Phot}) for its source selection and processing, which is described in \cite{DR3-DPACP-156}.} \label{fig:overview} \end{figure*} \begin{table*} \centering \caption{Individual contributions of the non-stellar CU8 modules to \gdr{3}. See the sections dedicated to each module for a complete description of the fields and tables listed herein. Fields from module-specific tables (i.e.
\modulename{OA} and \modulename{TGE}) are not listed here. \label{tab:aps_overview} } \footnotesize \begin{tabular}{p{2cm}|p{8.5cm}p{4cm}} \hline Module & Table and field names & Number of non-empty rows\\ \hline \hline \multirow{11}{2cm}{\modulename{DSC} (source classification)} & -~\linktoAPTable{astrophysical_parameters} \\ & \hspace{1cm}{\tt classprob\_dsc\_allosmod}$^a$ & 1\,370\,759\,105 \\ & \hspace{1cm}{\tt classprob\_dsc\_specmod}$^b$, {\tt classprob\_dsc\_combmod}$^c$ & 1\,590\,760\,469 \\ & -~\linktoMainTable{gaia_source} \\ & \hspace{1cm} {\tt classprob\_dsc\_combmod}$^c$ & 1\,590\,760\,469 \\ & -~\linktoEGTable{galaxy_candidates} \\ & \hspace{1cm}{\tt classprob\_dsc\_combmod}$^c$, \linktoEGParam{galaxy_candidates}{classlabel_dsc}, & 4\,841\,799 \\ & \hspace{1cm}\linktoEGParam{galaxy_candidates}{classlabel_dsc_joint} \\ & -~\linktoEGTable{qso_candidates} \\ & \hspace{1cm}{\tt classprob\_dsc\_combmod}$^c$, \linktoEGParam{qso_candidates}{classlabel_dsc}, & 6\,647\,511 \\ & \hspace{1cm}\linktoEGParam{qso_candidates}{classlabel_dsc_joint} \\ \hline \multirow{9}{2cm}{\modulename{OA} (source classification based on self-organising map)} & -~\linktoAPTable{oa_neuron_information} (78 fields) & 900 (1 per neuron) \\ & -~\linktoAPTable{oa_neuron_xp_spectra} (7 fields) & 78\,300 (900 neurons $\times$ 87 samples per spectrum) \\ & -~\linktoAPTable{astrophysical_parameters} \\ & \hspace{1cm}\linktoAPParam{astrophysical_parameters}{neuron_oa_id}, \linktoAPParam{astrophysical_parameters}{neuron_oa_dist} & 56\,416\,360 \\ & \hspace{1cm}\linktoAPParam{astrophysical_parameters}{neuron_oa_dist_percentile_rank}, \linktoAPParam{astrophysical_parameters}{flags_oa} \\ & -~\linktoEGTable{galaxy_candidates} \\ & \hspace{1cm}\linktoEGParam{galaxy_candidates}{classlabel_oa} & 1\,901\,026 \\ & -~\linktoEGTable{qso_candidates} \\ & \hspace{1cm} \linktoEGParam{qso_candidates}{classlabel_oa} & 2\,803\,225 \\ \hline \multirow{4}{2cm}{\modulename{QSOC} (quasar redshift determination)} & -~\linktoEGTable{qso_candidates} \\ & \hspace{1cm}\linktoEGParam{qso_candidates}{redshift_qsoc}, \linktoEGParam{qso_candidates}{redshift_qsoc_lower} & 6\,375\,063 \\ & \hspace{1cm}\linktoEGParam{qso_candidates}{redshift_qsoc_upper}, \linktoEGParam{qso_candidates}{ccfratio_qsoc}, \\ & \hspace{1cm} \linktoEGParam{qso_candidates}{zscore_qsoc}, \linktoEGParam{qso_candidates}{flags_qsoc}\\ \hline \multirow{3}{2cm}{\modulename{UGC} (galaxy redshift determination)} & -~\linktoEGTable{galaxy_candidates} \\ & \hspace{1cm}\linktoEGParam{galaxy_candidates}{redshift_ugc}, \linktoEGParam{galaxy_candidates}{redshift_ugc_lower}, & 1\,367\,153 \\ & \hspace{1cm} \linktoEGParam{galaxy_candidates}{redshift_ugc_upper} \\ \hline \multirow{2}{2cm}{\modulename{TGE} (Galactic extinction)} & -~\linktoAPTable{total_galactic_extinction_map} (10 fields) & 4\,177\,920 (49\,152 in HEALPix level 6, 196\,608 in level 7, 786\,432 in level 8, 3\,145\,728 in level 9) \\ & -~\linktoAPTable{total_galactic_extinction_map_opt} (7 fields) & 3\,145\,728 (HEALPix level 9) \\ \hline \multicolumn{3}{p{15cm}}{$^a$ Corresponding to \linktoAPParam{astrophysical_parameters}{classprob_dsc_allosmod_quasar}, \linktoAPParam{astrophysical_parameters}{classprob_dsc_allosmod_galaxy} and \linktoAPParam{astrophysical_parameters}{classprob_dsc_allosmod_star}} \\ \multicolumn{3}{p{15cm}}{$^b$ Corresponding to \linktoAPParam{astrophysical_parameters}{classprob_dsc_specmod_quasar}, \linktoAPParam{astrophysical_parameters}{classprob_dsc_specmod_galaxy}, 
\linktoAPParam{astrophysical_parameters}{classprob_dsc_specmod_star}, \linktoAPParam{astrophysical_parameters}{classprob_dsc_specmod_whitedwarf} and \linktoAPParam{astrophysical_parameters}{classprob_dsc_specmod_binarystar}}\\ \multicolumn{3}{p{15cm}}{$^c$ Corresponding to \linktoAPParam{astrophysical_parameters}{classprob_dsc_combmod_quasar}, \linktoAPParam{astrophysical_parameters}{classprob_dsc_combmod_galaxy}, \linktoAPParam{astrophysical_parameters}{classprob_dsc_combmod_star}, \linktoAPParam{astrophysical_parameters}{classprob_dsc_combmod_whitedwarf} and \linktoAPParam{astrophysical_parameters}{classprob_dsc_combmod_binarystar}} \end{tabular} \end{table*} \section{Source classification (DSC)} \label{sec:dsc} \subsection{Objectives} \label{subsec:dsc_objective} \modulename{DSC} classifies Gaia\xspace sources probabilistically into five classes: quasar, galaxy, star, white dwarf, and physical binary star. These classes are defined by the training data, which are Gaia\xspace\ data, with labels provided by external catalogues. \modulename{DSC} comprises three classifiers: Specmod uses BP/RP\xspace spectra to classify into all five classes; Allosmod uses various other features to classify into just the first three classes; Combmod takes the output class probabilities of the other two classifiers and combines them to give combined probabilities in all five classes. \subsection{Method} \label{subsec:dsc_method} \subsubsection{Algorithms and I/O} Specmod uses an ExtraTrees classifier, which is an ensemble of classification trees. Each tree maps the 100-dimensional input space of the BP/RP\xspace spectrum ---60 samples each, minus 5 samples rejected at each edge of each spectrum--- into regions that are then identified with each of the five classes. By using an ensemble of hundreds of trees, these individual discrete classifications are turned into class probabilities. Allosmod uses a Gaussian Mixture Model (GMM). For each of its three classes (quasar, galaxy, star), the distribution of the training data in an eight-dimensional feature space is modelled independently by a mixture of 25 Gaussians. Once appropriately normalised and a suitable prior applied, each GMM gives the probability that a feature vector (i.e.\ a new source) is of that class. The eight features are as follows; they are fields in the Gaia source table or are computed from these fields: \begin{itemize}[leftmargin=0.5cm] \item sine of the Galactic latitude, $\sin \linktoMainParam{gaia_source}{b}$, \item parallax, \linktoMainParam{gaia_source}{parallax}, \item total proper motion, \linktoMainParam{gaia_source}{pm}, \item unit weight error (uwe), \\ $=\sqrt{\frac{\linktoMainParam{gaia_source}{astrometric_chi2_al}} {\linktoMainParam{gaia_source}{astrometric_n_good_obs_al} - 5} }$, \item \ensuremath{G}\xspace\ band magnitude, \linktoMainParam{gaia_source}{phot_g_mean_mag}, \item colour \ensuremath{G_{\rm BP}-G}\xspace, \linktoMainParam{gaia_source}{bp_g}, \item colour \ensuremath{G-G_{\rm RP}}\xspace, \linktoMainParam{gaia_source}{g_rp}, \item relative variability in the \ensuremath{G}\xspace\ band (relvarg), \\ $=\sqrt{ \linktoMainParam{gaia_source}{phot_g_n_obs} / \linktoMainParam{gaia_source}{phot_g_mean_flux_over_error} }$. \end{itemize} All eight features must exist for a given source for Allosmod to provide a probability. As explained below, we exploit some of the `failures' of these features to help identify objects.
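Before turning to these failure modes, we note that the two derived features above are straightforward to assemble from \linktoMainTable{gaia_source} columns. The following is a minimal sketch; the column names are those of the Gaia archive, while the function name and the use of a data frame are illustrative assumptions.
\begin{verbatim}
import numpy as np
import pandas as pd

def allosmod_features(src: pd.DataFrame) -> pd.DataFrame:
    """Assemble the eight Allosmod input features from gaia_source columns."""
    feat = pd.DataFrame(index=src.index)
    feat["sin_b"] = np.sin(np.radians(src["b"]))  # sine of Galactic latitude
    feat["parallax"] = src["parallax"]
    feat["pm"] = src["pm"]                        # total proper motion
    # Unit weight error: along-scan astrometric chi2 over its degrees of freedom.
    feat["uwe"] = np.sqrt(src["astrometric_chi2_al"]
                          / (src["astrometric_n_good_obs_al"] - 5))
    feat["g_mag"] = src["phot_g_mean_mag"]
    feat["bp_g"] = src["bp_g"]
    feat["g_rp"] = src["g_rp"]
    # Relative variability in the G band.
    feat["relvarg"] = np.sqrt(src["phot_g_n_obs"]
                              / src["phot_g_mean_flux_over_error"])
    return feat
\end{verbatim}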
For example, galaxies should have true proper motions (and parallaxes) very close to zero. Yet they sometimes have larger measured proper motions in \gdr{3} on account of their physical extent combined with the variability in the calculation of the centroid during each scan made by Gaia (obtained at different position angles). This can give rise to spuriously large proper motions (although the uncertainties are also larger). In many cases, these sources are rejected by the astrometric processing (to give the so-called 2p solutions; see \citealt{2021A&A...649A...2L} for the definitions), meaning that many galaxies lack parallaxes and proper motions and are therefore not processed by Allosmod. Allosmod models the distribution of the data, and so it provides likelihoods. When combined with the class prior, this gives posterior class probabilities, which are the output from Allosmod. Specmod, in contrast, is a tree-based model that does not strictly provide posterior probabilities. Moreover, its output is influenced by the class distribution in the training data (see below). However, by using the simple method described in the \linktosec{cu8par}{apsis}{dsc} we can adjust the outputs from Specmod so that they are analogous to posterior probabilities that incorporate our desired class prior. Allosmod is described in more detail in~\cite{2019MNRAS.490.5615B}, where it is applied to \gdr{2} data. The third DSC classifier, Combmod, takes the probabilities from Specmod and Allosmod for a source and combines them into a new posterior probability over all five classes. This is not entirely trivial, because it has to ensure that the global prior is not counted twice, and it has to allow for the fact that Specmod has more classes than Allosmod. The combination algorithm is described in Appendix~\ref{app:combmod_definition}. \subsubsection{Class prior}
As the relative fraction of extragalactic to Galactic objects that Gaia observes varies with quantities such as magnitude and Galactic latitude, we could make the prior a function of these (and potentially other) quantities; but we have not introduced this in \gdr{3}. Using the correct prior is important. A classifier with equal priors would perform worse on the rare objects than a classifier with appropriate priors, because the former would tend to misclassify many stars as being extragalactic. However, we would not notice this if we erroneously validated the classifier on a balanced set (equal numbers in each class), because such a validation set has an artificially low fraction of stars, and hence far too few potential contaminants. The classifier would perform worse but would appear to be performing better. This is demonstrated in Table 1 of \cite{2019MNRAS.490.5615B}. We address this issue in the context of our validation data in section~\ref{subsec:dsc_performances}. \subsubsection{Training data}\label{sec:dsc_training_data} \modulename{DSC} is trained empirically, meaning it is trained on a labelled subset of the actual Gaia data it will be applied to (except for binary stars). The classes were defined by selecting sources of each class from an external database and cross-matching them to \gdr{3}. The sources used to construct the training sets ---and which therefore define the classes--- are as follows ( see the \linktosec{cu8par}{apsis}{dsc} and~\cite{LL:CBJ-094} for more details): \begin{itemize}[leftmargin=0.5cm] \item Quasars: 300\,000 spectroscopically confirmed quasars from the fourteenth release of the Sloan Digital Sky Survey (SDSS) catalogue, SDSS-DR14 \citep{2018A&A...613A..51P}. \item Galaxies: 50\,000 spectroscopically confirmed galaxies from SDSS-DR15 \citep{2019ApJS..240...23A}. \item Stars: 720\,000 objects drawn at random from \gdr{3} that are not in the quasar or galaxy training sets. Strictly speaking, this is therefore an `anonymous' class. But as the vast majority of sources in Gaia\xspace are stars, and the majority of those will appear in (spectro)photometry and astrometry as single stars, we call this class `stars'. \item White dwarfs: 40\,000 white dwarfs from the Montreal White Dwarf Database\footnote{\href{http://www.montrealwhitedwarfdatabase.org}{http://www.montrealwhitedwarfdatabase.org}} that have coordinates and that are not known to be binaries using the flag provided in that table. This class is not in Allosmod. \item Physical binary stars: 280\,000 BP/RP\xspace spectra formed by summing the two separate components in spatially-resolved binaries in \gdr{3} (see the \linktosec{cu8par}{apsis}{dsc}). This is only done for the BP/RP\xspace spectra, not for astrometry or photometry, so physical binaries are not a class in Allosmod. \end{itemize} The quasar, galaxy, and star class definitions are more or less the same as in~\cite{2019MNRAS.490.5615B}. The selected sources were filtered in order to remove obvious contaminants or problematic measurements (as described in the \linktosec{cu8par}{apsis}{dsc}). The numbers above refer to what remains after this filtering. The remaining set was then split into roughly equally sized training and validation sets (per class). Generally speaking, the relative number of objects of each class ---the {\em class fraction}--- in the training data affects the output probabilities of a classifier, because it acts as an implicit prior in the classifier. 
However, for both Specmod and Allosmod, we remove this influence to ensure that their priors correspond to our class prior. We are therefore free to choose as many training examples in each class as we need, or can obtain, in order to learn the data distributions. We note that for the common classes between Specmod and Allosmod, that is,\ quasars, galaxies, and stars, a common sample with complete input data was used to train both modules. In particular, this means that even though Specmod does not require parallaxes and proper motions as inputs, its training sample is restricted to those sources that do have parallaxes and proper motions. This is important because Specmod is also applied to sources that lack parallaxes and proper motions, meaning that some of its results are on types of objects that are not represented in its training set. This is particularly important for galaxies.

\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth,angle=0]{figures/all_train_data_20210108_featurehist_threeclasses.pdf}
\rule{0.85\textwidth}{0.25pt}
\includegraphics[width=0.8\textwidth,angle=0]{figures/combmod_featurehist_pthreshold0p5.pdf}
\rule{0.85\textwidth}{0.25pt}
\includegraphics[width=0.8\textwidth,angle=0]{figures/combmod_reqboth_featurehist_pthreshold0p5.pdf}
\caption{Distribution (linear scale) of Gaia\xspace\ features for various samples used in DSC. Top: Training data for quasars (blue), galaxies (orange), and stars (black). When training Allosmod, the $\sin b$ distributions for quasars and galaxies are replaced with uniform ones. Middle: Gaia\xspace sources assigned {\tt classlabel\_dsc='quasar'} (blue) and {\tt classlabel\_dsc='galaxy'} (orange). Bottom: Gaia\xspace sources assigned {\tt classlabel\_dsc\_joint='quasar'} (blue) and {\tt classlabel\_dsc\_joint='galaxy'} (orange).
\label{fig:dsc_featurehist}
}
\end{figure*}

\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.40\textwidth,angle=0]{figures/all_train_data_20210108_ccd_samepanel.jpg}
\includegraphics[width=0.40\textwidth,angle=0]{figures/combmod_ccd_pthreshold0p5_samepanel.jpg}
\includegraphics[width=0.40\textwidth,angle=0]{figures/combmod_reqboth_ccd_pthreshold0p5_samepanel.jpg}
\caption{Colour--colour diagrams for various samples used in DSC. Top: Training data for quasars (blue) and galaxies (orange). Middle: Gaia\xspace sources assigned {\tt classlabel\_dsc='quasar'} (blue) and {\tt classlabel\_dsc='galaxy'} (orange). Bottom: Gaia\xspace sources assigned {\tt classlabel\_dsc\_joint='quasar'} (blue) and {\tt classlabel\_dsc\_joint='galaxy'} (orange). The differences in the distributions are due to the various levels of completeness and purity in the two types of class label.
\label{fig:dsc_ccd}
}
\end{center}
\end{figure}

Figure~\ref{fig:dsc_featurehist} (top) shows the distribution of the eight Allosmod features in the training data for the quasar and galaxy classes. As we do not want the model to learn the $\sin b$ distribution of extragalactic objects, which is just the SDSS footprint (shown in the plot), we replace this with a random value drawn from a uniform distribution in $\sin b$ (i.e.\ uniform sky density) when training Allosmod, as sketched below. This plot also shows, for comparison, the distribution of the features for the star class in the training data. Figure~\ref{fig:dsc_ccd} (top) shows the distribution of the two colours of the quasars and galaxies in a colour--colour diagram.
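A minimal sketch of this replacement ({\tt n\_sources} is a placeholder for the number of training sources): a uniform sky density corresponds to latitudes drawn uniformly in $\sin b$, not in $b$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=42)
n_sources = 100_000                      # placeholder sample size

# Uniform surface density on the sphere is uniform in sin(b), not in b
sin_b = rng.uniform(-1.0, 1.0, size=n_sources)
b_deg = np.degrees(np.arcsin(sin_b))     # Galactic latitude in degrees
\end{verbatim}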
\subsubsection{Class labels}\label{sec:dsc_class_labels}

The main output from \modulename{DSC} is the class probabilities from all three classifiers. For convenience, we also compute two class labels from the probabilities, which appear only for sources in the \linktoEGTable{qso_candidates} and \linktoEGTable{galaxy_candidates} tables in the data release. The first label, \fieldName{classlabel_dsc}, is set to the class that gets the highest posterior probability in Combmod that is greater than 0.5. If none of the output probabilities are above 0.5, this class label is {\tt unclassified}. This gives a sample that is fairly complete for quasars and galaxies, but not very pure. The second class label, \fieldName{classlabel_dsc_joint}, identifies a purer set of quasars and galaxies. It is set to the class that achieves a probability above 0.5 in both Specmod and Allosmod. This produces purer samples because the Specmod and Allosmod probabilities are not perfectly correlated. This lack of correlation may be unexpected, but is what we want, because it means the classifiers are providing non-redundant information. Because DSC is not the only contributor to the \linktoEGTable{qso_candidates} and \linktoEGTable{galaxy_candidates} tables, sources in the {\tt qso\_candidates} table can have either class label set to {\tt galaxy}, and vice versa.

\subsection{Performance: Purity and completeness}\label{subsec:dsc_performances}

Assigning each source to the class with the largest probability classifies it uniquely. An alternative is to additionally adopt a minimum probability threshold, in which case we can get multiple classifications if the threshold is low enough, or no classification if it is high enough. Doing this on sources with known classes (assumed to be correct), we can then compute the confusion matrix, which tells us how many sources of each true class are assigned to each DSC class. From this, we then compute, for each class, the completeness ---the fraction of true positives among all trues--- and the purity ---the fraction of true positives among all positives. Here we use the largest-probability criterion to compute the completenesses and purities on the validation sets.\footnote{The validation data for the binaries are not the ones mentioned in section~\ref{sec:dsc_training_data}, namely synthetically combined single stars, but instead a set of unresolved binaries directly from Gaia\xspace. See the \linktosec{cu8par}{apsis}{dsc} for more details.} As the class fractions in this validation set are not representative of what they are in Gaia\xspace, the raw purities are meaningless. Specifically, stars are far less common in the validation data than they are in a random sample of Gaia\xspace data, and so there are too few potential contaminants of the other classes in the validation data, resulting in significantly overestimated purities. This fact is sometimes overlooked in the validation of classification results in the literature. Fortunately, we can easily correct for this. As explained in section 3.4 (especially equation 4) of \cite{2019MNRAS.490.5615B}, we can modify the confusion matrix to correspond to a validation set that has class fractions equal to the class prior. The purity computed from this modified confusion matrix is then appropriate for any randomly selected sample of Gaia\xspace sources. (This modification does not affect the completeness.)
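One way to implement this kind of adjustment is sketched below (the definitive formulation is equation 4 of \citealt{2019MNRAS.490.5615B}): each row of the confusion matrix, that is, each true class, is reweighted so that the class fractions match the prior, after which the purities are representative of a random Gaia\xspace sample while the completenesses are unchanged.
\begin{verbatim}
import numpy as np

def completeness_purity(cm, prior):
    """Completeness and prior-adjusted purity from a confusion matrix.

    cm[i, j] : number of validation sources of true class i assigned
               to class j.
    prior[i] : expected fraction of class i in a random Gaia sample.
    """
    frac = cm.sum(axis=1) / cm.sum()        # class fractions in validation set
    cm_adj = cm * (prior / frac)[:, None]   # reweight each true class (row)
    completeness = np.diag(cm) / cm.sum(axis=1)    # row-wise, unaffected
    purity = np.diag(cm_adj) / cm_adj.sum(axis=0)  # column-wise, representative
    return completeness, purity
\end{verbatim}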
We note that this modification is independent of the fact that \modulename{DSC}\ probabilities are already posterior probabilities that take into account this class prior (i.e.\ both modifications must be done). This should also serve as a warning when assessing any classifier: if the validation data set does not have a representative fraction of contamination, or if this is not adjusted, the predicted purities will be erroneous. \begin{table*}[t] \begin{center} \caption[DSC performance]{DSC performance evaluated on the validation data set. Classification is done by assigning the class with the largest posterior probability. Performance is given in terms of completeness (compl.) and purity, for each classifier and for each class. Purities have been adjusted to reflect the class prior (given in Table~\ref{tab:cu8par_apsis_dsc_classprior}). Results on the `binary' class are largely meaningless due to the incongruity of the class definitions in the training and validation data sets. These results reflect performance for sources drawn at random from the entire Gaia\xspace\ data set, in particular for all magnitudes and latitudes. The final two columns labelled `Spec\&Allos' refer to samples obtained by requiring a probability larger than 0.5 from both Specmod and Allosmod for a given class: this is identical to \fieldName{classlabel_dsc_joint} in the \linktoEGTable{qso_candidates} and \linktoEGTable{galaxy_candidates} tables. The bottom two rows refer to extragalactic sources at higher Galactic latitudes ($|b|>11.54\ensuremath{^\circ}$), where the prior is more favourable for detecting quasars and galaxies. These are conservative estimates, accounting only for reduced numbers of stars, not the better visibility of extragalactic objects on account of less interstellar extinction and source confusion. \label{tab:cu8par_apsis_dsc_resvst_defset_cp} } \begin{tabular}{rrrrrrrrr} \hline & \multicolumn{2}{c}{Specmod} & \multicolumn{2}{c}{Allosmod} & \multicolumn{2}{c}{Combmod} & \multicolumn{2}{c}{Spec\&Allos} \\ & compl.\ & purity & compl.\ & purity & compl.\ & purity & compl.\ & purity \\ \hline quasar & 0.409 & 0.248 & 0.838 & 0.408 & 0.916 & 0.240 & 0.384 & 0.621 \\ galaxy & 0.831 & 0.402 & 0.924 & 0.298 & 0.936 & 0.219 & 0.826 & 0.638 \\ star & 0.998 & 0.989 & 0.998 & 1.000 & 0.996 & 0.990 & -- & -- \\ white dwarf & 0.491 & 0.158 & -- & -- & 0.432 & 0.250 & -- & -- \\ physical binary star & 0.002 & 0.096 & -- & -- & 0.002 & 0.075 & -- & -- \\ \hline quasar, $|\sin\,b\,|>0.2$ & 0.409 & 0.442 & 0.881 & 0.603 & 0.935 & 0.412 & 0.393 & 0.786 \\ galaxy, $|\sin\,b\,|>0.2$ & 0.830 & 0.648 & 0.928 & 0.461 & 0.938 & 0.409 & 0.827 & 0.817 \\ \hline \end{tabular} \end{center} \end{table*} \tabref{tab:cu8par_apsis_dsc_resvst_defset_cp} shows the completenesses and purities for the DSC classes and classifiers. This is the performance we expect for a sample selected at random from the entire Gaia dataset that has complete input data for both Specmod and Allosmod. It accommodates the rareness of all these classes, as specified by the global class prior (Table~\ref{tab:cu8par_apsis_dsc_classprior}), both in the probabilities and the application data set. It is important to bear in mind that these purity and completeness measures only refer to the types of objects in the validation set. For extragalactic objects, this means objects classified as such by SDSS using the SDSS spectra. 
The overall population of extragalactic objects classified by \modulename{DSC} is of course broader than this, and so the completeness and purity evaluated on other subsets of extragalactic objects could differ. Due to the dominance of single stars in Gaia\xspace, we are not really interested in the performance on this class. Indeed, it is trivial to get an excellent single-star classifier: simply call everything a single star and your classifier has 99.9\% completeness and 99.9\% purity. The performance is modest overall, for reasons that are further discussed in section~\ref{subsec:dsc_use}. Results on binaries are very poor, partly because the validation set we used to compute the confusion matrix is not representative of the training set. This is because the validation set comprises only real Gaia\xspace objects, and hence known unresolved binaries, whereas the training set was made by combining single-star spectra. However, the internal performance on binaries was also poor. This suggests an intrinsic difficulty in separating binaries (as we define them) from single stars.

The performance in Table~\ref{tab:cu8par_apsis_dsc_resvst_defset_cp} refers to objects covering the full Gaia\xspace\ parameter space, in particular all magnitudes and Galactic latitudes. The purities tend to increase for brighter magnitudes, as can be seen from the plots in the \linktosec{cu8par}{apsis}{dsc} and in~\cite{LL:CBJ-094}. There we see, for example, that for $\ensuremath{G}\xspace \leq 18$\,mag, the purities for quasars and galaxies when using Allosmod alone are 80\% or higher. However, when looking at the performance in a specific part of the parameter space, one should adopt a new prior that is appropriate for that part of the parameter space; for example, fewer extragalactic objects are visible at low latitudes. We then recompute the posterior probabilities (Appendix~\ref{sec:cu8par_apsis_dsc_adjusting_probabilities}) and the completenesses and purities (remembering that the adjustment of the confusion matrix must use the class fractions in this subset of the validation set). This we have done for sources outside of the Galactic plane, with results reported in the bottom two lines of Table~\ref{tab:cu8par_apsis_dsc_resvst_defset_cp}. For $|b|>11.54\ensuremath{^\circ}$, we adopt a prior probability for quasars of $2.64 \times 10^{-3}$ ($9.9 \times 10^{-4}$ globally), and a prior probability for galaxies of $5.3 \times 10^{-4}$ ($2 \times 10^{-4}$ globally). The purities of the quasar and galaxy samples are significantly higher, as expected because there are fewer contaminating stars per square degree. Using a probability threshold increases the purities even further, albeit at the expense of completeness (see \linktosec{cu8par}{apsis}{dsc} for more plots). Clearly, if we were willing and able to push the prior for extragalactic objects higher, we would obtain higher purities.

\subsection{Results}
\label{subsec:dsc_results}

\modulename{DSC} was applied to all Gaia\xspace sources that have the required input data. Its results were not filtered in any way. In particular, we did not remove sources with lower quality input data, or that have input data lying outside the range of the training data. By including all results, we allow the user to apply their own filters according to their own goals and needs.
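Several of the sky maps that follow bin sources at HEALPix level 7 ($\mathrm{nside}=128$, cells of 0.210\ensuremath{\,\rm deg}$^2$). As an illustration of how such per-cell fraction maps can be constructed, here is a minimal sketch assuming the {\tt healpy} package and arrays of Galactic coordinates; the function name is ours, and the Hammer--Aitoff rendering is omitted.
\begin{verbatim}
import numpy as np
import healpy as hp

NSIDE = 2**7   # HEALPix level 7: 196 608 cells of ~0.210 deg^2 each

def fraction_map(l_deg, b_deg, selected):
    """Fraction of selected sources per HEALPix cell.

    l_deg, b_deg : Galactic coordinates (degrees) of all DSC sources.
    selected     : boolean mask, e.g. classlabel_dsc == 'quasar'.
    """
    npix = hp.nside2npix(NSIDE)
    pix = hp.ang2pix(NSIDE, l_deg, b_deg, lonlat=True)
    n_all = np.bincount(pix, minlength=npix)
    n_sel = np.bincount(pix[selected], minlength=npix)
    return np.where(n_all > 0, n_sel / np.maximum(n_all, 1), np.nan)
\end{verbatim}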
\begin{figure*}
\begin{center}
\includegraphics[width=0.49\textwidth,angle=0]{figures/extgal060div047_classlabel_quasar_fraction_skyplot.jpg}
\includegraphics[width=0.49\textwidth,angle=0]{figures/extgal061div050_classlabel_galaxy_fraction_skyplot.jpg}
\caption{Galactic sky distribution of the fraction of sources that have 5p/6p astrometric solutions (i.e.\ have parallaxes and proper motions) for sources that also have {\tt classlabel\_dsc='quasar'} (left) and {\tt classlabel\_dsc='galaxy'} (right). The plot is shown at HEALPix level 7 (0.210 \ensuremath{\,\rm deg}$^2$) in a Hammer--Aitoff equal area projection with the Galactic centre in the middle, north up, and longitude increasing to the left. White indicates no sources.
\label{fig:dsc_parallaxfrac_skyplot}}
\end{center}
\end{figure*}

\modulename{DSC} produces outputs for 1\,590\,760\,469 sources. All of these have probabilities from Combmod and Specmod, whereas 1\,370\,759\,105 (86.2\%) have probabilities from Allosmod.\footnote{It so happens that all sources which have Allosmod results also have Specmod results, but not vice versa.} This lower number from Allosmod is due to missing input data, usually missing parallaxes and proper motions (or missing colours in a few cases). That is, sources must have 5p or 6p astrometric solutions from the Gaia\xspace Astrometric Global Iterative Solution (AGIS) in order to have Allosmod results. This can be seen in Figure~\ref{fig:dsc_parallaxfrac_skyplot}, which shows the fraction of sources (per HEALPix) that have 5p/6p solutions, for those with {\tt classlabel\_dsc='quasar'} (left) and {\tt classlabel\_dsc='galaxy'} (right). While most objects classified as quasars have measured parallaxes (i.e.\ 5p or 6p solutions), most sources outside of the Galactic plane classified as galaxies do not. Those objects that lack parallaxes and proper motions (the 2p solutions) also lack Allosmod results, and so their Combmod results (and hence {\tt classlabel\_dsc}) are determined only by Specmod. We explore the differences between the 5p/6p and 2p solutions at the end of this section. The vast majority of sources have high probabilities of being stars, and because the purities of the white dwarf and physical binary classes are low (see the online documentation), we focus here on the results for the quasar and galaxy classes.

\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth,angle=0]{figures/CU8paper3_Fig5.pdf}
\caption{Galactic sky distribution of the number of DSC sources classified as quasars (left) and galaxies (right) according to {\tt classlabel\_dsc} (top) and {\tt classlabel\_dsc\_joint} (bottom) (see Section~\ref{sec:dsc_class_labels} for the label definition). The plot is shown at HEALPix level 7 (0.210 \ensuremath{\,\rm deg}$^2$). The logarithmic colour scale covers the full range for each panel, and is therefore different for each panel.
\label{fig:dsc_number_skyplots}}
\end{center}
\end{figure*}

The label {\tt classlabel\_dsc} (defined in section~\ref{sec:dsc_class_labels}) classifies 5\,243\,012 sources as quasars and 3\,566\,085 as galaxies. Their sky distributions are shown in the top two panels of Figure~\ref{fig:dsc_number_skyplots}. The analysis in section~\ref{subsec:dsc_performances} suggests that these samples are not very pure (see Table~\ref{tab:cu8par_apsis_dsc_resvst_defset_cp}). In these sky plots, we see large overdensities of supposed quasars in several regions, in particular the LMC and SMC.
However, such overdensities are expected when we have a constant misclassification rate over the whole sky, because any high-density region will have a high density of both correctly and incorrectly classified objects. Yet it turns out that the {\em fraction} of sources classified as quasars is also higher than average in these regions (see below). The LMC and SMC are so dense that 38\% of all the quasar identifications using {\tt classlabel\_dsc} are in the LMC, and 6.4\% are in the SMC.\footnote{For this purpose, the LMC is defined as a circle of 9\ensuremath{^\circ}\ radius centred on RA=81.3\ensuremath{^\circ}, Dec.=-68.7\ensuremath{^\circ}, and the SMC as a circle of 6\ensuremath{^\circ}\ radius centred on RA=16.0\ensuremath{^\circ}, Dec.=-72.8\ensuremath{^\circ}.} These percentages are much smaller for galaxies: just 3\% for the LMC and 1\% for the SMC. The bottom row of Figure~\ref{fig:dsc_number_skyplots} shows the distribution of the 547\,201 sources classified as quasars and the 251\,063 sources classified as galaxies by the purer class label {\tt classlabel\_dsc\_joint}. The overdensities of quasars in the LMC and SMC regions are now greatly reduced, to 4\% and 1\% of all sources respectively.

\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth,angle=0]{figures/CU8paper3_Fig6.pdf}
\caption{Galactic sky distribution of the fraction of DSC sources classified as quasars (left) and galaxies (right) according to Specmod (top), Allosmod (second), Combmod (third), and Specmod and Allosmod (bottom) probabilities being greater than 0.5 for that class. The bottom two rows are identical to {\tt classlabel\_dsc} and {\tt classlabel\_dsc\_joint} (respectively) being set to the appropriate class (see section~\ref{sec:dsc_class_labels}). The plot is shown at HEALPix level 7 (0.210 \ensuremath{\,\rm deg}$^2$) with each cell showing the ratio of the sources classified to the total number of sources with DSC results (1.59 billion over the whole sky). The logarithmic colour scale covers the full range for each panel, and is therefore different for each panel.
\label{fig:dsc_fraction_skyplots}}
\end{center}
\end{figure*}

Figure~\ref{fig:dsc_fraction_skyplots} shows the same sky distribution as before, but now expressing the numbers as a fraction of the total number of sources in that HEALPix\footnote{For details on the HEALPix scheme used by Gaia, see \citet{LL:BAS-020}.} (classified by DSC as anything). As most of the sources are stars, these plots essentially show the ratio of extragalactic to Galactic objects per HEALPix, albeit with varying degrees of contamination. The four rows of the plot correspond to four possible ways of classifying extragalactic sources: the top three rows are for probabilities above 0.5 for Specmod, Allosmod, and Combmod, respectively, whereby the latter is identical to {\tt classlabel\_dsc}. The bottom row is {\tt classlabel\_dsc\_joint}. Looking at the third row ---for {\tt classlabel\_dsc}--- we see that a higher fraction of extragalactic sources (plus contamination) has been discovered outside of the Galactic plane than at lower latitudes. This we expect, as high extinction from Galactic dust obscures extragalactic objects, and also there are far more stars in the Galactic plane. However, we also see a higher fraction of supposed quasars (left) in the LMC and SMC ---clear misclassifications--- indicating a higher contamination in these regions.
Looking at the top two left panels in Figure~\ref{fig:dsc_fraction_skyplots} for Specmod and Allosmod, respectively, we see that this contamination comes from Specmod, that is,\ misclassification of the BP/RP\xspace spectra, but not from Allosmod, which uses photometry and astrometry. It is probably not due to crowding in the LMC/SMC corrupting the BP/RP\xspace spectra, because we do not see such high contamination in the crowded Galactic plane; it is more likely due to faint blue sources in the LMC/SMC being confused with quasars, something which does not occur as much in the Galactic plane due to the higher reddening there. The top three rows of the right column of Figure~\ref{fig:dsc_fraction_skyplots} show the corresponding plots for galaxies. The stripes are artefacts of the Gaia\xspace scanning law. They are much more prominent in Allosmod than in Specmod, and we see in Table~\ref{tab:cu8par_apsis_dsc_resvst_defset_cp} that Allosmod is expected to have a lower purity for galaxies than Specmod (the opposite is true for quasars). When we use {\tt classlabel\_dsc\_joint} for classification, we get smaller but purer samples (see~\cite{DR3-DPACP-101}). The sky distributions for these samples (bottom row of Fig.~\ref{fig:dsc_fraction_skyplots}) show that low-latitude regions are excluded. In other words, only sources at higher latitudes were classified with probabilities above 0.5 by both Specmod and Allosmod. We also note that the overdensities in the LMC and SMC are greatly reduced with {\tt classlabel\_dsc\_joint}. The middle panels of Figure~\ref{fig:dsc_featurehist} show the distributions of various Gaia\xspace features for the sources classified as quasar (in blue) and galaxy (in orange) by {\tt classlabel\_dsc}. The middle panel of Figure~\ref{fig:dsc_ccd} shows the two colours as a colour--colour diagram. These may be compared to the distributions of the training data in the upper panels in both cases. There are some noticeable differences. The most obvious is the spike in the latitude distribution for (apparent) quasars at the LMC. Recall that, when training Allosmod, we used a flat $\sin{b}$ distribution (see section~\ref{subsec:dsc_method}). We also see that the objects classified ---galaxies in particular--- extend to fainter magnitudes than the training data. This is not surprising given that the training sample had to have SDSS spectroscopic classifications, whereas we apply DSC to all Gaia\xspace sources, which extend to fainter magnitudes, where misclassifications are more frequent. The observed galaxies also show larger (anomalous) proper motions, plus more (anomalous) photometric variability according to the relative variability, {\tt relvarg}, parameter. Finally, we also see differences in the colour distributions compared to the training data for both classes (Figure~\ref{fig:dsc_ccd}). Some of this is due to the different populations being sampled (the training objects are brighter), as well as contamination. The bottom panels of Figures~\ref{fig:dsc_featurehist} and~\ref{fig:dsc_ccd} show the features and colour--colour diagrams for objects classified using the purer {\tt classlabel\_dsc\_joint} label. These show tighter distributions that are more similar to the training data. We note in particular the reduction of faint galaxies. 
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.40\textwidth,angle=0]{figures/dr3int5_qsotable_withastrometry_classlabel_dsc_ccd_grp_bpg.jpg}
\includegraphics[width=0.40\textwidth,angle=0]{figures/dr3int5_qsotable_withoutastrometry_classlabel_dsc_ccd_grp_bpg.jpg}
\caption{Colour--colour diagram for sources in the \linktoEGTable{qso_candidates} table with \linktoEGParam{qso_candidates}{classlabel_dsc}{\tt ='quasar'}, excluding regions around the LMC and SMC. The left column shows sources with 5p/6p solutions (2.64 million sources), and the right column shows sources with 2p solutions (0.14 million sources). These numbers refer to plotted sources, i.e.\ those that have all Gaia\xspace bands. The colour coding in the upper panel shows the mean DSC-Combmod probability for the quasar class (the field \linktoAPParam{astrophysical_parameters}{classprob_dsc_combmod_quasar}). The colour coding in the lower panel shows the density of sources on a log scale relative to the peak density in that panel.
\label{fig:dr3int5_qsotable_astrometry_classlabel_dsc_ccd_grp_bpg}
}
\end{center}
\end{figure*}

\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.40\textwidth,angle=0]{figures/dr3int5_galaxytable_withastrometry_classlabel_dsc_ccd_grp_bpg.jpg}
\includegraphics[width=0.40\textwidth,angle=0]{figures/dr3int5_galaxytable_withoutastrometry_classlabel_dsc_ccd_grp_bpg.jpg}
\caption{ As in Figure~\ref{fig:dr3int5_qsotable_astrometry_classlabel_dsc_ccd_grp_bpg} but for sources in the \linktoEGTable{galaxy_candidates} table with \linktoEGParam{galaxy_candidates}{classlabel_dsc}{\tt ='galaxy'}, excluding regions around the LMC and SMC. The left column shows sources with 5p/6p solutions (0.91 million sources), and the right column shows sources with 2p solutions (2.32 million sources). These numbers refer to plotted sources, i.e.\ those that have all Gaia\xspace bands.
\label{fig:dr3int5_galaxytable_astrometry_classlabel_dsc_ccd_grp_bpg}
}
\end{center}
\end{figure*}

We now return to the issue of the 5p/6p and 2p solutions. Figure~\ref{fig:dr3int5_qsotable_astrometry_classlabel_dsc_ccd_grp_bpg} shows the colour--colour diagram for all sources with {\tt classlabel\_dsc='quasar'}, excluding those in the regions around the LMC and SMC, for sources with (5p/6p) and without (2p) parallaxes and proper motions. The DSC-Combmod probabilities for 5p/6p solutions come from both Specmod and Allosmod, whereas for the 2p solutions they only come from Specmod. Of the objects classified here as quasars, 95\% have 5p/6p solutions. We see that the 5p/6p solutions are confined to a smaller range of colours than are the 2p solutions. That is, demanding the existence of parallaxes and proper motions yields a slightly different population of objects in colour space. We reiterate that there is significant stellar contamination in the {\tt classlabel\_dsc='quasar'} sample as a whole. The (purer) subset defined by {\tt classlabel\_dsc\_joint='quasar'} has a distribution (not shown) similar to that of the 5p/6p solutions in the bottom left panel of Figure~\ref{fig:dr3int5_qsotable_astrometry_classlabel_dsc_ccd_grp_bpg}. Figure~\ref{fig:dr3int5_galaxytable_astrometry_classlabel_dsc_ccd_grp_bpg} shows the colour--colour diagram for the galaxies. Again we see a difference in the colour distribution of the two types of astrometric solution, but now it is the 2p solutions that cover a narrower range of colours.
Galaxies are partially resolved by Gaia\xspace, and their structure can induce a spurious parallax and proper motion in AGIS (which DSC-Allosmod tries to exploit). Many of these astrometric solutions are rejected by AGIS, turning them into 2p solutions, and these sources can only be classified by Specmod. Of the objects classified here as galaxies, 72\% have 2p solutions, compared to 5\% for the quasars. Thus, the Specmod and Allosmod results reported in \gdr{3} are not for identical populations of objects, because of the different input data requirements of these classifiers. \begin{figure*}[t] \begin{center} \includegraphics[width=0.40\textwidth,angle=0]{figures/dr3int5_qsotable_grp_bpg_specmod_and_allosmod_probsonly.jpg} \includegraphics[width=0.40\textwidth,angle=0]{figures/dr3int5_galaxytable_grp_bpg_specmod_and_allosmod_probsonly.jpg} \caption{Colour--colour diagram for sources in the \linktoEGTable{qso_candidates} table with \linktoEGParam{qso_candidates}{classlabel_dsc}{\tt ='quasar'} (left) and in the \linktoEGTable{galaxy_candidates} table with \linktoEGParam{galaxy_candidates}{classlabel_dsc}{\tt ='galaxy'} (right), in both cases excluding regions around the LMC/SMC, that have both Specmod and Allosmod results. The upper and lower panels show the mean DSC-Specmod probability and the mean DSC-Allosmod probability, respectively, for a common sample. \label{fig:dr3int5_bothtables_specmod_and_allosmod_classlabel_dsc_ccd_grp_bpg} } \end{center} \end{figure*} As Specmod and Allosmod use different data, it is interesting to see how their classification probabilities differ for a common set of sources. We investigate this by selecting sources that have results from both Specmod and Allosmod, and have {\tt classlabel\_dsc} set. This is shown for the quasar candidates in the left column of Fig.~\ref{fig:dr3int5_bothtables_specmod_and_allosmod_classlabel_dsc_ccd_grp_bpg}. These plots do not convey the number of sources in each part of the diagram, and should therefore be interpreted with that in mind. Nonetheless, although we see regions where Specmod and Allosmod have similar probabilities, there are also regions where their probabilities are quite different. Because {\tt classlabel\_dsc\_joint} is only set to `quasar' when both Specmod and Allosmod probabilities are above 0.5, these figures explain why that set is comparatively small. The right column of Figure~\ref{fig:dr3int5_bothtables_specmod_and_allosmod_classlabel_dsc_ccd_grp_bpg} shows the same for the galaxy candidates, and again we see a significant lack of correlation between Specmod and Allosmod. This shows that the different data used by these two classifiers convey rather different information. \subsection{Use of DSC results}\label{subsec:dsc_use} The \modulename{DSC} class probabilities exist primarily to help users identify quasars and galaxies. The performance on white dwarfs and binaries is rather poor. These probabilities will be of limited use to the general user and we do not recommend their use to build samples. One could add these probabilities to the star probability for each source, and thereby end up with a three-class classifier. Classification can be done by selecting sources with class probabilities above a given threshold. A threshold of 0.5 gives a selection (and performance) very similar to what would be obtained when taking the maximum probability. A threshold of 0.5 applied to the Combmod outputs is identical to the \fieldName{classlabel_dsc} label (section~\ref{sec:dsc_class_labels}). 
With this choice of threshold, the purities for galaxies and quasars are rather modest, as we can see from Table~\ref{tab:cu8par_apsis_dsc_resvst_defset_cp}. This is unsurprising, because with a threshold of 0.5 we expect up to half of the objects to be incorrectly classified even with a perfect classifier. Increasing the threshold does increase the purity at the cost of decreased completeness, but because the DSC probabilities tend to be rather extreme (see plots in~\citealt{LL:CBJ-094}), this does not help as much as one might hope. The fact that the purities are often lower than the limit expected from the threshold may be due not only to an imperfect classifier, but also to an imperfect calibration of the probabilities in Specmod and Combmod (although not Allosmod).\footnote{The issue of expected sample purity is discussed in section 5.2 of \cite{2008MNRAS.391.1838B}. Even with an imperfect classifier, it is possible to infer the expected number of true sources from the inferred numbers by inverting the confusion matrix, as shown by \cite{2019MNRAS.490.5615B}.}

The DSC completenesses, especially with Combmod, are quite good, but the purities are rather modest, as discussed earlier. This is a consequence of primarily two factors. The first factor is the intrinsic rareness of the quasars and galaxies. If only one in every thousand sources were extragalactic, then even if our classifier had 99.9\% accuracy, the resulting sample would only be around 50\% pure. This is the situation we have: the intrinsic ability of \modulename{DSC} to separate the classes is actually very good, with purities of the order of 99\% on balanced test sets. However, when it is then applied to a randomly selected set of Gaia data there are so many stars that even though a small {\em fraction} of these are misclassified, this is still a large {\em number}. We cannot overcome this problem by adopting a different prior. If we used uniform priors, for example, this would classify many more sources ---both true and false--- as extragalactic. This would increase the completeness of this class. It is not immediately obvious what happens to the purity, but \cite{2019MNRAS.490.5615B} found that for Allosmod in \gdr{2}, the purities for quasars and galaxies were actually significantly reduced.

The extreme rareness of the extragalactic objects places high demands on the classifiers, and the performance may be limited by the second factor, namely the ability of the data to distinguish between the classes. We experimented with using different or additional Gaia features (e.g.\ colour excess factor) as inputs to Allosmod, but this did not help. Performance might improve if we defined synthetic filters from the BP/RP\xspace spectra instead of using the entire spectrum, or generated other features from the Gaia data, but this has not been explored\footnote{One obvious example is to compute the absolute magnitude, because this together with colour ---i.e.\ the HRD--- clearly separates out white dwarfs when the parallax uncertainties are not too large.}. The inclusion of non-Gaia data, such as infrared photometry, should help but was beyond the scope of the activities for \gdr{3}. A third potential limiting factor is the set of training examples we use. Although the SDSS spectroscopic classifications are believed to be very good, they may have errors, and they may also not provide the clearest distinction between galaxies and quasars.
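The rareness argument above (the first factor) is easy to verify: with a prevalence of $10^{-3}$ and a symmetric per-source accuracy of 99.9\%, the numbers of true and false positives are equal. A minimal check:
\begin{verbatim}
prevalence = 1e-3   # one in a thousand sources is extragalactic
accuracy = 0.999    # fraction of sources (of either class) correctly classified

true_pos = prevalence * accuracy               # extragalactic sources kept
false_pos = (1 - prevalence) * (1 - accuracy)  # stars leaking into the sample
print(true_pos / (true_pos + false_pos))       # ~0.500
\end{verbatim}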
The fact remains that the classification performance depends unavoidably on the intrinsic rareness, that is,\ on the prior. Users may want to adopt a different prior from ours (Table~\ref{tab:cu8par_apsis_dsc_classprior}), which would be particularly appropriate if they focus on a subset of parameter space. To recompute the DSC probabilities with a new prior we do not need to re-train or re-apply DSC. The fact that DSC provides posterior probabilities as outputs makes it simple to strip off our prior and apply a new one, as shown in Appendix~\ref{sec:cu8par_apsis_dsc_adjusting_probabilities}. It is important to realise that the performances in Table~\ref{tab:cu8par_apsis_dsc_resvst_defset_cp} are (a) only for the classes as defined by the training data and (b) an average over the entire Gaia sample, and are therefore dominated by faint sources with lower quality data. Our galaxy class in particular is a peculiar subset of all galaxies, because Gaia\xspace tends not to observe extended objects, and even then may not measure them correctly (see section~\ref{subsec:dsc_method}). \modulename{DSC} misclassifies some very bright sources that are obviously not extragalactic, for example. As these are easily removed by the user, we chose not to filter the DSC results in any way. One may likewise wonder why there are some objects classified as quasars with statistically significant proper motions. We do use proper motion as a classification feature, but in a continuous fashion, not as a hard cut. A more conservative approach to classification is to apply a series of necessary conditions, that is,\ a simple decision tree. This could increase the purity ---and could be tuned to guarantee that certain known objects come out correctly--- but at the expense of completeness. We do nevertheless provide the class label \fieldName{classlabel_dsc_joint} as a means to select a purer subsample of extragalactic sources (section~\ref{sec:dsc_class_labels}), as can be seen from the last two columns of Table \ref{tab:cu8par_apsis_dsc_resvst_defset_cp}.

\section{Outlier analysis (OA)}
\label{sec:oa}

\subsection{Objectives}
\label{subsec:oa_objectives}

The Outlier Analysis (\modulename{OA}) module aims to complement the overall classification performed by the \modulename{DSC} module, by processing those objects with lower classification probability from \modulename{DSC} (see \secref{sec:dsc}). \modulename{OA} is intended to analyse abnormal or infrequent objects, or artefacts, and was applied to all sources that received \modulename{DSC} Combmod probabilities below 0.999 in all of its five classes. This threshold was chosen so as to limit the number of sources processed to 134 million, corresponding to about $10\%$ of the total number of sources for which \modulename{DSC} produced probabilities. Subsequently, a selection of the sources to be processed is carried out based on several quality criteria, the most restrictive being that the mean spectra correspond to at least five transits (see details in the \linktosec{cu8par}{apsis}{oa}). The resulting filtering leads us to process a total of 56\,416\,360 sources. Such sources tend to be fainter and/or have noisier data. For these objects, \modulename{OA} provides an unsupervised classification ---where the true object types are not known--- that complements the one produced by \modulename{DSC}, which follows a supervised approach based on a set of fixed classes.
\subsection{Method}
\label{subsec:oa_method}

The method used by \modulename{OA} to analyse the physical nature of classification outliers is based on a self-organising map \citep[SOM,][]{Kohonen1982}, which groups objects with similar BP/RP\xspace spectra (see \secref{subsubsec:oa_method_preprocessing}) according to a Euclidean distance measure. The SOM performs a projection of the multidimensional input space of BP/RP\xspace into a two-dimensional grid of size $30\times 30$, which facilitates the visual interpretation of clustering results. Such a projection is characterised by its preservation of the topological order, in the sense that, for a given distance metric, similar data in the input space will belong to the same or to neighbouring neurons in the output space. Each one of these neurons has a prototype, which is adjusted during the training phase and best represents the input spectra that are closest to this neuron. In \gdr{3}, each prototype is the average spectrum of the pre-processed\footnote{The \modulename{OA} pre-processing of BP/RP\xspace spectra is described in \secref{subsubsec:oa_method_preprocessing}.} BP/RP\xspace spectra of the sources assigned to that particular neuron, which correspond to those closest to the neuron according to the Euclidean distance between the neuron prototype and the pre-processed BP/RP\xspace spectrum of the source. Neuron prototypes are reported in the \linktoAPTable{oa_neuron_xp_spectra} table. A centroid is also identified for each neuron, which is the source whose pre-processed BP/RP\xspace spectrum is the closest to the prototype of the neuron, according to the Euclidean distance. Centroids can be found in the \linktoAPParam{oa_neuron_information}{centroid_id} field of the \linktoAPTable{oa_neuron_information} table, along with statistics of the main Gaia observables for the sources belonging to this neuron: \ensuremath{G}\xspace, \ensuremath{G_{\rm BP}}\xspace, and \ensuremath{G_{\rm RP}}\xspace magnitudes, proper motions, Galactic latitude, parallax, number of BP/RP\xspace transits, renormalised unit weight error (\linktoMainParam{gaia_source}{ruwe}), BP/RP\xspace flux excess factor, and \ensuremath{G_\mathrm{BP}-G_\mathrm{RP}}\xspace colour.

\subsubsection{BP/RP\xspace spectra preprocessing}\label{subsubsec:oa_method_preprocessing}

The sampled mean BP/RP\xspace spectra produced by \modulename{SMSgen} are transformed in order to remove artefacts and to improve the clustering produced by the SOMs: (a) pixels with negative or zero flux values are linearly interpolated, provided that they do not affect more than 10\% of the effective wavelength range consecutively, or more than 25\% of it in total. This filtering was imposed because spectra that did not meet these criteria were usually of low quality and had a low number of transits. These filtered spectra are not analysed; (b) BP and RP spectra are downsampled to 60 pixels each; (c) both spectra are trimmed to avoid the low transmission regions of the CCD, so that \modulename{OA} uses the effective wavelength ranges $375$--$644$\,nm for BP and $644$--$1050$\,nm for RP; (d) spectra are concatenated to obtain a single spectrum; and (e) the joint spectrum is normalised so that the sum of its flux is equal to one.

\begin{figure*}[t]
\centering
\includegraphics[width=0.75\textwidth]{figures/oa_method_labelling_combined_map.png}
\caption{SOM grid from the \modulename{OA} module visualised through the GUASOM tool~\citep{Alvarez2021}.
Each cell corresponds to a neuron from the SOM, most of which were assigned a class label. Those neurons that did not meet the quality criteria defined to establish a class label remain `undefined', as explained in Section~\ref{subsubsec:oa_method_labelling}.}
\label{fig:oa_method_labelling_combined_map}
\end{figure*}

\subsubsection{Quality assessment}\label{subsubsec:oa_method_quality}

The performance of \modulename{OA} cannot be measured through metrics such as completeness and purity because of the unsupervised nature of the technique. Therefore, a descriptive approach based on the intra-neuron and inter-neuron distances~\citep{Alvarez2021} was followed in order to analyse the quality of the clustering. We use the squared Euclidean distance because the SOM algorithm uses it to measure the mean quantisation error of its processing elements. The intra-neuron distance of each source is then computed as the squared Euclidean distance between the source and the prototype of the neuron it belongs to, whereas the inter-neuron distance is computed as the squared Euclidean distance between two different neuron prototypes.

In order to assess the quality of the clustering, we selected the three parameters that we thought best describe the distribution of the intra-neuron distances: (a) the width of the distribution according to the value of the full width at half maximum ($FWHM$); (b) the skewness ($S$), which measures its asymmetry; and (c) the excess kurtosis ($K$), which measures the level of concentration of the distances. A high-quality clustering will result from neurons with low values of the $FWHM$ parameter, and large positive values of both skewness and kurtosis. Finally, in order to facilitate the interpretation of such quality measurements, a categorical index named $QC$ was derived based on the values obtained for $S$, $K$, and a normalised version of $FWHM$ (which is reversed in order for the higher quality neurons to take larger values). To this purpose, seven quality categories were established, according to the values taken by these parameters with respect to six arbitrarily chosen percentiles ($95^{\rm th}$, $90^{\rm th}$, $75^{\rm th}$, $50^{\rm th}$, $32^{\rm nd}$, and $10^{\rm th}$), which are computed independently for each one of the parameters listed above over the entire map. For each neuron, we determine the highest of these percentiles above which all three parameters lie. Thus, if all three values are above the $95^{\rm th}$ percentile, then $QC$ takes the value of zero; if they only exceed the $90^{\rm th}$ percentile, then $QC$ corresponds to category one; and so on up to category six, which corresponds to those neurons whose poorest quality indicator falls below the lowest percentile considered, the $10^{\rm th}$. Accordingly, the best-quality neurons will have $QC=0$ and the worst ones $QC=6$. It should be emphasised here that $QC$ only assesses the quality of the clustering (i.e. how closely the pre-processed BP/RP\xspace spectra in a neuron match their prototype) compared to the overall intra-neuron distances, such that no assumption should be made on the quality of the spectra they contain, nor on the labelling of the individual neurons described below.
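For concreteness, the $QC$ assignment just described can be sketched as follows; this is a sketch of the described logic (not the production code), assuming per-neuron arrays of the reversed-normalised $FWHM$, the skewness $S$, and the excess kurtosis $K$.
\begin{verbatim}
import numpy as np

PERCENTILES = [95, 90, 75, 50, 32, 10]   # ladder for categories QC = 0 ... 5

def quality_category(fwhm_rev, skew, kurt):
    """Categorical quality index QC (0 = best, 6 = worst) per neuron.

    All three inputs are oriented so that larger values mean higher
    quality (the normalised FWHM is assumed already reversed).
    """
    stats = np.vstack([fwhm_rev, skew, kurt])       # shape (3, n_neurons)
    qc = np.full(stats.shape[1], 6, dtype=int)      # default: worst category
    for c, p in enumerate(PERCENTILES):             # from the 95th downwards
        thr = np.percentile(stats, p, axis=1, keepdims=True)
        above = (stats > thr).all(axis=0)           # all three exceed this cut
        qc = np.where((qc == 6) & above, c, qc)     # keep the best (lowest) rung
    return qc
\end{verbatim}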
\subsubsection{Neuron labelling}\label{subsubsec:oa_method_labelling}

Unsupervised methods do not directly provide labels for the samples being analysed. For this reason, a set of reference BP/RP\xspace template spectra for prototypical astronomical objects was built by taking into account validation sources from the various \modulename{Apsis} modules (see the \linktosec{cu8par}{apsis}{oa}). These reference templates are used to label the neurons in \gdr{3} by identifying the closest template to the neuron prototype according to the Euclidean distance. In addition, to guarantee the suitability of the assigned templates (and class labels), two conditions were imposed: (a) the squared Euclidean distance between a template and the neuron prototype must not exceed a threshold of $3.58 \times 10^{-2}$; and (b) the neuron must have $QC < 6$. \figref{fig:oa_method_labelling_combined_map} shows the SOM built by \modulename{OA} for \gdr{3}, where around 80\% of the neurons were assigned a template, and hence a class label. The limit of $3.58 \times 10^{-2}$ on the squared distance was set during the template-building process and is detailed in the \linktosec{cu8par}{apsis}{oa}.

\subsubsection{GUASOM visualisation tool}\label{subsubsec:oa_guasom}

To help the user analyse and visualise the clustering results, we designed an application called Gaia Utility for the Analysis of Self-Organising Maps (GUASOM)~\citep{Alvarez2021}. It can be run over the internet, and contains several visualisation utilities that allow an interactive analysis of the information present on the map. The tool provides both classical and specific domain representations such as U-matrix, hits, parameter distributions, template labels, colour distribution, and category distribution.

\subsection{Performance and results}\label{subsec:oa_results}

\modulename{OA} processed $56\,416\,360$ objects in \gdr{3}. \figref{fig:oa_results_sources_g_mag_distribution} displays their \ensuremath{G}\xspace magnitude distribution, demonstrating that \modulename{OA} covers a wide range of \ensuremath{G}\xspace magnitudes with a significant fraction of faint objects.

\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{figures/oa_results_sources_g_mag_distribution.png}
\caption{\ensuremath{G}\xspace mag distribution of the $56\,416\,360$ sources processed by the \modulename{OA} module in \gdr{3} (bin width of $0.1$).}
\label{fig:oa_results_sources_g_mag_distribution}
\end{figure}

\figref{fig:oa_method_labelling_quality_histogram} shows the histogram of neuron quality categories, $QC$, where the total number of sources belonging to such neurons is superimposed. Approximately 35\% of the neurons have $0 \leq QC \leq 3$ and are hence referred to as `high-quality neurons': these comprise around 55\% of the sources processed. The remaining neurons are considered low-quality. \figref{fig:oa_method_results_quality_map} shows how the quality categories are distributed over the SOM. It is worth mentioning that the SOM does not directly label neurons, nor does it provide quality measurements on the clustering they produce, which means that we have to apply the procedures described in Sections~\ref{subsubsec:oa_method_quality} and~\ref{subsubsec:oa_method_labelling} after we build the map. As a result, \figref{fig:oa_method_results_quality_map} shows the quality category associated with each neuron in our grid of $30 \times 30$ neurons. These quality categories assess how well the sources fit the prototype of the neuron they belong to: neurons with the lowest quality category (i.e. the highest quality) are composed of sources whose spectra are the most homogeneous.
Similarly, in \figref{fig:oa_method_labelling_combined_map}, the label assigned to each neuron provides a hint as to the astronomical type of the sources it contains. Comparing Figures \ref{fig:oa_method_labelling_combined_map} and \ref{fig:oa_method_results_quality_map}, we can see that high-quality neurons mostly correspond to stars and galaxies, while quasars are usually associated with low-quality neurons. The reason for this lies mostly in the wide range of cosmological redshifts observed amongst these objects, as well as in their different continuum shapes and emission-line equivalent widths.

\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{figures/oa_method_labelling_quality_histogram.png}
\caption{Histogram of neuron quality categories for the sources processed by the \modulename{OA} in \gdr{3}. The number of sources per category is superimposed on the bars. Those neurons with $0 \leq QC \leq 3$ are considered high-quality neurons.}
\label{fig:oa_method_labelling_quality_histogram}
\end{figure}

\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/oa_method_results_quality_map.png}
\caption{SOM grid visualised through the GUASOM tool~\citep{Alvarez2021} to represent the quality category~($QC$) assigned to each neuron.}
\label{fig:oa_method_results_quality_map}
\end{figure}

\tabref{tab:oa_results_confusion_matrix} represents the contingency table between \modulename{DSC} Combmod and \modulename{OA} class labels. \modulename{DSC} labels are determined according to the class with the highest \modulename{DSC} Combmod probability, except for those that take a probability below $0.5$, which are labelled as `unknown'. Sources with the \modulename{DSC} `binary star' class are counted as `star', because the former class is not present in \modulename{OA}. Similarly, \modulename{OA} class labels are aggregated into more generic ones in order to enable comparison with the \modulename{DSC} class labels. Recalling that \modulename{OA} only processes sources with all \modulename{DSC} Combmod probabilities below $0.999$, the \modulename{OA} results can be summarised as follows.
\begin{itemize}[leftmargin=0.5cm]
\item {Galaxies: There is close agreement for galaxies, as around 80\% of the galaxies identified by \modulename{DSC} are also confirmed by \modulename{OA}.}
\item {Quasars: The agreement with \modulename{DSC} decreases to 35\%. A large fraction of those quasars identified by \modulename{DSC} are considered as stars or white dwarfs by \modulename{OA}.}
\item {Stars: Around 40\% of those identified by \modulename{DSC} were also confirmed by \modulename{OA}. However, a large fraction of them were considered as extragalactic objects by \modulename{OA}.}
\item {White dwarfs: In this case, the agreement between both modules is around 50\%. Most of the remaining objects are considered as stars by \modulename{OA}.}
\end{itemize}
Around 11\% of the sources are assigned to a neuron that was not labelled by \modulename{OA} because of its poor quality (category six). In particular, approximately 2\,510 sources could not be classified by \modulename{OA} and have {\tt classlabel\_dsc = 'unclassified'}, meaning that studying their nature may require a deeper analysis.
\begin{table*}[t]
\centering
\begin{tabular}{|ll|rrrrr|r|}
\cline{3-7}
\multicolumn{2}{c|}{} & \multicolumn{5}{c|}{\textbf{\modulename{OA} class label}}\\
\multicolumn{2}{c|}{} & \textbf{STAR} & \textbf{WD} & \textbf{QSO} & \textbf{GAL} & \textbf{UNDEFINED} & \multicolumn{1}{c}{\textbf{Total}}\\
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\textbf{\modulename{DSC}}}} & \textbf{STAR} & $40$\% & $3$\% & $22$\% & $24$\% & $11$\% & $53\,295\,527$\\
& \textbf{WD} & $42$\% & $51$\% & $3$\% & $0$\% & $4$\% & $92\,186$\\
& \textbf{QSO} & $29$\% & $21$\% & $35$\% & $2$\% & $13$\% & $2\,158\,916$\\
& \textbf{GAL} & $4$\% & $0$\% & $9$\% & $83$\% & $4$\% & $851\,127$\\
& \textbf{UNKNOWN} & $22$\% & $7$\% & $35$\% & $22$\% & $13$\% & $18\,604$\\
\hline
\multicolumn{2}{r|}{\textbf{Total}} & $21\,763\,876$ & $2\,240\,195$ & $12\,680\,763$ & $13\,470\,776$ & $6\,260\,750$\\
\cline{3-7}
\end{tabular}
\caption{Contingency table between \modulename{DSC} classifications (taken from the predominant \modulename{DSC} Combmod probabilities) and \modulename{OA} classifications, grouped into generic types. Unknown means that the DSC predominant probability was below $0.5$, whereas for OA it means that no template was assigned due to quality constraints. Fractions are computed with respect to the total number of sources in each \modulename{DSC} class.}
\label{tab:oa_results_confusion_matrix}
\end{table*}

\subsection{Use of \modulename{OA} clustering}\label{subsec:oa_use}

The analysis performed by the \modulename{OA} module can be useful for different purposes. For instance, high-quality neurons can help to assess the physical nature of some sources with \modulename{DSC} Combmod probabilities below the chosen threshold ($0.999$) in all classes, or to identify objects that were potentially misclassified. As \modulename{OA} provides an unsupervised classification based on a normalised SED comparison, for a given neuron there are sources with different degrees of similarity to the prototype. For that reason, we encourage the user to isolate clean samples for each neuron through the quality measurements provided in the \linktosec{cu8par}{apsis}{oa}. In particular, we suggest combining both the categorical quality index~($QC$) and the classification distance in order to retrieve the best-classified sources from \modulename{OA}. \tabref{tab:oa_use_reliable} shows the number of sources per class that are assigned to a high-quality neuron (from category zero to three), and whose classification distance (the squared Euclidean distance between the pre-processed BP/RP\xspace spectrum of the source and the neuron prototype) is below $0.001$ (i.e. what we consider here as reliable predicted classes). As can be seen, around 13 million stars, 9 million galaxies, 2 million quasars, and $1.5$ million white dwarfs meet these criteria.

\begin{table}[t]
\centering
\begin{tabular}{lr}
\hline
\textbf{Class label} & \textbf{Number of sources}\\
\hline
STAR\_LATE & $8\,966\,955$ \\
GAL\_Z01\_02 & $3\,917\,749$ \\
STAR\_INT & $3\,158\,041$ \\
GAL\_Z02\_GT & $2\,952\,297$ \\
GAL\_Z01\_LT & $2\,355\,895$ \\
WD & $1\,561\,204$ \\
QSO\_Z15\_LT & $1\,138\,832$ \\
QSO\_Z15\_25 & $1\,020\,337$ \\
STAR\_EARLY & $914\,470$ \\
ELS & $489\,551$ \\
QSO\_Z25\_GT & $92\,460$ \\
\hline
\end{tabular}
\caption{Number of sources in each \modulename{OA} class that belong to a high-quality neuron while having a classification squared Euclidean distance below $0.001$ (i.e. what we consider here as reliable).
We note that there may be considerable contamination in these class assignments.}
\label{tab:oa_use_reliable}
\end{table}

\section{Quasar classifier (QSOC)}
\label{sec:qsoc}

\subsection{Objectives}
\label{subsec:qsoc_objective}

The quasar classifier (\modulename{QSOC}) module is designed to determine the redshift, $z$, of the sources that are classified as quasars by the \modulename{DSC} module (see Section \ref{sec:dsc} for more details). In order to produce redshift estimates for the most complete set of sources, we considered a very low threshold on the \modulename{DSC} quasar probability of \linktoEGParam{qso_candidates}{classprob_dsc_combmod_quasar} $\geq 0.01$, meaning that we expect a significant fraction of the processed sources to be stars or galaxies. Users interested in purer sub-samples may then require that \linktoEGParam{qso_candidates}{classlabel_dsc_joint} {\tt = 'quasar'}, as explained in Section \ref{sec:dsc_class_labels}, or may use more sophisticated filtering, as explained in \cite[Section 8]{DR3-DPACP-101}.

\subsection{Method}

\subsubsection{Overview}
\label{subsubsec:qsoc_overview}

\modulename{QSOC} is based on a $\chi^2$ approach that compares the observed BP/RP\xspace spectra sampled by \modulename{SMSgen} \citep[see][and the \linktosec{cu8par}{apsis}{smsgen}]{DR3-DPACP-157} to quasar rest-frame templates in order to infer their redshift. The predicted redshifts take values in the range $0.0826 < z < 6.1295$. As the redshift that is finally adopted is not necessarily the one associated with the minimal $\chi^2$ (see Section \ref{subsubsec:qsoc_quasar_algorithm}), each candidate redshift is complemented by an indicator of the presence of quasar emission lines ($Z_{\rm score}$ from Equation \ref{eq:qsoc_zscore}), and these are combined into a redshift score, $S$, from Equation \ref{eq:qsoc_redshift_score}. For a given source, the redshift with the highest score is then the one that is selected by the algorithm. Quasar templates are described in Section \ref{subsubsec:qsoc_quasar_templates} while the redshift determination algorithm is described in Section \ref{subsubsec:qsoc_quasar_algorithm}.

\subsubsection{Quasar templates}
\label{subsubsec:qsoc_quasar_templates}

\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/qsoc_templates.png}
\caption{Rest-frame quasar templates used by \modulename{QSOC}. These correspond to the dominant templates amongst the 32 templates computed with the method described in \cite{2015MNRAS.446.3545D}, applied to 297\,264 quasars from the DR12Q catalogue whose spectra were converted into BP/RP\xspace spectra through the use of the BP/RP\xspace spectrum simulator provided by CU5.}
\label{fig:qsoc_templates}
\end{figure*}

The quasar templates used by \modulename{QSOC} were built based on the method described in \cite{2015MNRAS.446.3545D} and applied to 297\,264 quasars\footnote{We note that for 37 of the 297\,301 quasars originally contained in the DR12Q catalogue, the $\ell$-1 norm fit of the continuum to the observed spectrum (described later) did not converge, and these were accordingly not included in the final sample we used.} from the twelfth release of the Sloan Digital Sky Survey Quasar catalogue \citep[DR12Q;][]{2017A&A...597A..79P}. These spectra are first extrapolated to the wavelength range of the Gaia\xspace BP/RP\xspace spectro-photometer (i.e. $300$--$1100$\,nm) with a linear wavelength sampling of $0.1$\,nm using a procedure similar to the one used by \cite{2018MNRAS.473.1785D}.
They are subsequently converted into BP/RP\xspace spectra through the use of the BP/RP\xspace spectrum simulator provided by CU5 and described in \cite{EDR3-DPACP-120}. An artificial spectrum with a uniform SED (i.e. of constant flux density per wavelength) was also converted through the BP/RP\xspace spectrum simulator in order to produce the so-called `flat BP/RP\xspace spectrum'. We then divided each simulated BP/RP\xspace spectrum by its flat counterpart before subtracting a quadratic polynomial that is fitted to the observations in a least absolute deviation sense (i.e. $\ell$-1 norm minimisation), leaving pure emission line spectra. We note that, in order to avoid fitting emission lines, a second-order derivative of the flux density was estimated around each sampled point, $d^2 f_i / d \lambda_i^2$, and later used to scale the associated uncertainties by a factor of $\operatorname{max}(\left| d^2 f_i / d \lambda_i^2 \right| / M, 0.01),$ where $M$ is a normalisation factor equal to the maximal absolute value of the second-order derivatives evaluated over all the sampled points. As the continuum regions often have very low curvatures compared to the emission lines, they are usually up-weighted by a factor of up to 100 in the $\ell$-1 norm minimisation. A logarithmic wavelength sampling of $\log L = 0.001$ was then used for both the BP and RP templates, ensuring that the resolution of the BP/RP\xspace spectra, as sampled by \modulename{SMSgen}, is preserved. We extracted 32 BP/RP\xspace templates based on these 297\,264 simulated spectra using the weighted principal component analysis method described in \cite{2015MNRAS.446.3545D}; nevertheless, only the dominant BP/RP\xspace templates ---corresponding to the mean of the weighted principal component analysis method--- were used because cross-validation tests performed on the simulated spectra show that a larger number of templates significantly increases the degeneracy between redshift predictions. The resulting templates, illustrated in Figure \ref{fig:qsoc_templates}, closely match the typical composite spectra of quasar emission lines \citep[see e.g.][Section 7]{DR3-DPACP-101}, although they are convolved by the Gaia\xspace line spread function which is averaged over the entire set of rest-frame wavelengths. The templates cover the rest-frame wavelength range from $45.7$ nm to $623.3$ nm in BP and from $84.6$ nm to $992.3$ nm in RP. These limits, along with the observed wavelength coverage imposed by \modulename{SMSgen} of $325$--$680$ nm in BP and $610$--$1050$ nm in RP, allow \modulename{QSOC} to predict redshifts in the range $0.0826 < z < 6.12295$\footnote{As the cross-correlation function computed by \modulename{QSOC} is extrapolated by $\pm \log L$ at its border, the range of the \modulename{QSOC} redshift predictions is slightly wider than one would expect from a straight comparison of the observed and rest-frame wavelengths.}. \subsubsection{Algorithm} \label{subsubsec:qsoc_quasar_algorithm} The determination of the redshift of quasars by \modulename{QSOC} is based on the fact that the redshift, $z$, turns into a simple offset once considered on a logarithmic wavelength scale: \begin{equation} Z = \log (z + 1) = \log \lambda_{\rm obs} - \log \lambda_{\rm rest}, \label{eq:qsoc_log_redshift} \end{equation} where we assume that a given spectral feature located at rest-frame wavelength $\lambda_{\rm rest}$ is observed at wavelength $\lambda_{\rm obs}$.
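As an illustrative example (with numbers chosen for convenience rather than taken from the processing), a source at $z = 2$ has its Ly$\alpha$ line, $\lambda_{\rm rest} \approx 121.9$ nm, observed at $\lambda_{\rm obs} \approx 365.7$ nm, and Equation \ref{eq:qsoc_log_redshift} gives $Z = \log 3 \approx 1.099$; every other spectral feature of that source is displaced by this same constant offset on a logarithmic wavelength scale, which is what allows the redshift to be treated as a single discrete shift in what follows.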
Consider now a logarithmic sampling $\lambda_i = \lambda_0\, L^i$, where $\lambda_0$ is a reference wavelength and $L$ is the logarithmic wavelength sampling we use, here $\log L = 0.001$ (or $L \approx 1.001$). Then for a given set of $n$ rest-frame templates, $\mat{T}$, and an observation vector, $\vec{s}$, which are both logarithmically sampled with $L$, the derivation of the optimal shift, $k$, between $\mat{T}$ and $\vec{s}$ can be formulated as a $\chi^2$ minimisation problem through \begin{equation} \chi^2(k) = \sum_i \frac{1}{\sigma_i^2} \left( s_i - \sum_{j=1}^{n} a_{j,k} T_{i+k,j} \right)^2 \label{eq:qsoc_chi2} ,\end{equation} where $\sigma_i$ is the uncertainty on $s_i$ and $a_{j,k}$ are the coefficients that enable the fit of $\mat{T}$ to $\vec{s}$ in a weighted least squares sense while considering a shift $k$ that is applied to the templates. The redshift that is associated with the shift $k$ is then given by $z = L^k - 1$. A continuous estimation of the redshift is then obtained by fitting a quadratic polynomial to $\chi^2(k)$ in the vicinity of the most probable shift. Despite its appealing simplicity, Equation \ref{eq:qsoc_chi2} is known to have a cubic time complexity in $N$, as shown in \cite{2016MNRAS.460.2811D}, where $N$ is the number of samples contained in each template. In the same work, it is shown that the computation of the \emph{cross-correlation function} (CCF), defined as \begin{equation} \operatorname{ccf}(k) = \left(\sum_i \frac{s_i^2}{\sigma_i^2}\right) - \chi^2(k) = C - \chi^2(k), \label{eq:qsoc_ccf} \end{equation} requires only $\mathcal{O}\left(N \log N\right)$ floating point operations. Furthermore, given that $C$ is independent of the explored shift, $k$, maximising $\operatorname{ccf}(k)$ is equivalent to minimising $\chi^2(k)$. However, some features of the BP/RP\xspace spectra complicate the computation of the CCF. First, the BP and RP spectra are distinct such that the effective CCF is actually composed of the sum of two CCFs associated with the BP and RP spectra and templates, $\operatorname{ccf}_{\rm bp}(k)$ and $\operatorname{ccf}_{\rm rp}(k)$, respectively: \begin{equation} \operatorname{ccf}(k) = \operatorname{ccf}_{\rm bp}(k) + \operatorname{ccf}_{\rm rp}(k). \label{eq:qsoc_ccf_xp} \end{equation} Secondly, the BP/RP\xspace spectra have bell shapes (i.e.\ their flux smoothly goes to zero at the borders of the spectra), and have spectral flux densities that are integrated over wavelength bins of different sizes, as explained in \cite{DR3-DPACP-157}. Equation \ref{eq:qsoc_ccf} is therefore not directly applicable to these spectra. In order to overcome these difficulties, we divided each BP/RP\xspace spectrum by the previously mentioned flat BP/RP\xspace spectrum (i.e. BP/RP\xspace spectrum coming from a constant flux density and converted through the BP/RP\xspace spectrum simulator) and updated their uncertainties accordingly. This solution enables us to solve both the bell shape issue and the varying wavelength extent of the pixels, passing from units of flux to units of flux density. Finally, most of the quasar flux resides in its continuum, which we model here as a second-order polynomial, concatenated to the set of templates, $\mat{T}$, and subsequently fitted to the observations in Equation \ref{eq:qsoc_ccf}. \begin{table*} \caption{The \modulename{QSOC} parameters used to compute the redshift score of quasars from Equation \ref{eq:qsoc_redshift_score} and the $Z_{\rm score}$ from Equation \ref{eq:qsoc_zscore}.
The rest-frame wavelengths, $\lambda$, of each emission line were retrieved from the quasar templates described in Section \ref{subsubsec:qsoc_quasar_templates}. Theoretical emission line intensities, $I_\lambda$, and score parameters, $w_0$, $w_1$, and $p$, were computed based on a global optimisation procedure that is designed to maximise the score of the redshift predictions with $|\Delta z| < 0.1$ amongst 88\,196 randomly selected sources with a redshift estimate from DR12Q. We note that another set of 89\,839 observations was then kept as a test set, though the two sets provide a similar distribution of scores.} \begin{center} \footnotesize \begin{tabular}{rccccccccc} \hline \multicolumn{10}{c}{\bf Parameters of the redshift score} \\ \hline & \multicolumn{3}{c}{$w_0 = 0.71413$} & \multicolumn{3}{c}{$w_1 = 0.28587$} & \multicolumn{3}{c}{$p = 0.24365$} \\ \\ \hline \multicolumn{10}{c}{\bf Parameters of the $\boldsymbol{Z_{\rm score}}$ for BP spectra} \\ \hline & \ion{O}{iv} & Ly$\alpha$ & \ion{Si}{iv} & \ion{C}{iv} & \ion{C}{iii}] & \ion{Mg}{ii} & H$\gamma$ & H$\beta$ & \\ $\boldsymbol{\lambda}$ {\bf[nm]} & 103.202 & 121.896 & 139.349 & 154.658 & 189.957 & 279.259 & 437.904 & 491.899 & \\ $\boldsymbol{I_\lambda}$ & 0.017 & 1.0039 & 0.01 & 0.13202 & 0.31359 & 0.94396 & 0.23848 & 0.93124 & \\ \\ \hline \multicolumn{10}{c}{\bf Parameters of the $\boldsymbol{Z_{\rm score}}$ for RP spectra} \\ \hline & \ion{O}{iv} & Ly$\alpha$ & \ion{Si}{iv} & \ion{C}{iv} & \ion{C}{iii}] & \ion{Mg}{ii} & H$\gamma$ & H$\beta$ & H$\alpha$ \\ $\boldsymbol{\lambda}$ {\bf[nm]} & 103.353 & 122.388 & 139.563 & 154.588 & 190.398 & 280.470 & 435.600 & 488.952 & 657.736 \\ $\boldsymbol{I_\lambda}$ & 0.062484 & 0.10984 & 0.18982 & 0.07023 & 0.1409 & 0.22011 & 0.4101 & 0.25137 & 0.59948 \\ \\ \hline \end{tabular} \end{center} \label{tbl:qsoc_shift_score_parameters} \end{table*} \begin{table*} \caption{Binary warning flags used in the \modulename{QSOC} redshift selection procedure and reported in the \linktoEGParam{qso_candidates}{flags_qsoc} field. Sources with \linktoEGParam{qso_candidates}{flags_qsoc} $ = 0$ encountered no issue during processing and are based on reliable spectra; their predictions are therefore more likely to be reliable.} \begin{center} \begin{tabular}{p{2.5cm}|p{0.75cm}|p{0.75cm}|p{10cm}} \hline Warning flag & Bit & Value & Condition(s) for raising the flag \\ \hline \verb+Z_AMBIGUOUS+ & 1 & 1 & The CCF has more than one maximum with $\chi_r^2(k) > 0.85$, meaning that at least two redshifts lead to a similar $\chi^2$ and the solution is ambiguous. \\ \verb+Z_LOWCHI2R+ & 2 & 2 & $\chi_r^2(k) < 0.9$ \\ \verb+Z_LOWZSCORE+ & 3 & 4 & $Z_{\rm score}(k) < 0.9$ \\ \verb+Z_NOTOPTIMAL+ & 4 & 8 & The selected solution did not correspond to the global maximum (i.e. $\chi_r^2(k) < 1$) \\ \verb+Z_BADSPEC+ & 5 & 16 & The BP/RP\xspace spectra upon which this prediction is based are considered as unreliable. An unreliable spectrum has a number of spectral transits in BP, $N_{\rm bp}$, or in RP, $N_{\rm rp}$, that is lower than or equal to ten, or $G \geq 20.5$ mag, or $G \geq 19 + 0.03 \times (N_{\rm bp} - 10)$ mag, or $G \geq 19 + 0.03 \times (N_{\rm rp} - 10)$ mag (see the \linktosec{cu8par}{apsis}{qsoc} for more information on the derivation of these limits).
\\ \hline \end{tabular} \end{center} \label{tbl:qsoc_zwarning} \end{table*} As highlighted in \cite{2018MNRAS.473.1785D}, the global maximum of the CCF may not always lead to a physical solution as, for example, some characteristic emission lines of quasars (e.g. Ly$\alpha$, \ion{Mg}{ii}, or H$\alpha$) may be omitted from the fit while some emission lines can be falsely fitted to absorption features. This global maximum may also result from the fit of noise in the case of very low signal-to-noise ratio (S/N) spectra. In order to identify these sources of error, we define a score, $0 \leq S(k) \leq 1$, that is associated with each shift; the shift associated with the highest score is the one that is selected by the algorithm. This score is computed as a weighted $p$-norm of the chi-square ratio, defined as the value of the CCF evaluated at $k$ divided by the maximum of the CCF, \begin{equation} \chi_r^2(k) = \frac{\operatorname{ccf}(k)}{\operatorname{max}_k(\operatorname{ccf})} \hspace{0.5cm} \mbox{where} \hspace{0.5cm} 0 \leq \chi_r^2(k) \leq 1, \label{eq:qsoc_chi2r} \end{equation} and of an indicator of the presence of quasar emission lines, \begin{equation} Z_{\rm score}(k) = \prod_\lambda \left[ \frac{1}{2} \left(1 + \operatorname{erf} \frac{e_\lambda}{\sigma(e_\lambda) \sqrt{2}}\right) \right]^{I_\lambda}, \label{eq:qsoc_zscore} \end{equation} where $e_\lambda$ is the value of the BP/RP\xspace flux of the continuum-subtracted emission line at rest-frame wavelength $\lambda$ if we consider the observed spectrum to be at redshift $z = L^k - 1$; $\sigma(e_\lambda)$ is the associated uncertainty and $I_\lambda$ is the theoretical intensity\footnote{Theoretical emission line intensities should be regarded as weights. They do not refer to a particular theoretical model of the emission lines of quasars but to the values inferred in Table \ref{tbl:qsoc_shift_score_parameters}.} of the emission line located at $\lambda$, which is normalised so that the total intensity of all emission lines in the observed wavelength range is equal to one. Equation \ref{eq:qsoc_zscore} can then be viewed as a weighted geometric mean of a set of normal cumulative distribution functions of mean zero and standard deviations $\sigma(e_\lambda)$ evaluated at $e_\lambda$. A $Z_{\rm score}$ close to one indicates that all the emission lines that we expect at redshift $z$ are found in the spectra, while missing a single emission line often leads to a very low $Z_{\rm score}$. The final formulation of the score is then given by \begin{equation} S(k) = \sqrt[p]{w_0 \, \left[\,\chi_r^2(k)\,\right]^p + w_1 \, \left[\,Z_{\rm score}(k) \,\right]^{p}}, \label{eq:qsoc_redshift_score} \end{equation} where $w_0$, $w_1$, and $p$ are the parameters of the weighted $p$-norm listed in Table \ref{tbl:qsoc_shift_score_parameters}. In order to facilitate the filtering of these potentially erroneous redshifts by the final user, we define binary processing flags, \linktoEGParam{qso_candidates}{flags_qsoc}, which are listed in Table \ref{tbl:qsoc_zwarning}. As later highlighted in \secref{subsec:qsoc_filtering}, the most secure predictions often have bits 1--4 unset (i.e. \linktoEGParam{qso_candidates}{flags_qsoc} = 0 or \linktoEGParam{qso_candidates}{flags_qsoc} = 16).
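To make the scoring explicit, the following minimal sketch (in Python) evaluates Equations \ref{eq:qsoc_zscore} and \ref{eq:qsoc_redshift_score} for a single shift; it is an illustration under stated assumptions rather than the DPAC implementation, and the extraction of the continuum-subtracted line fluxes $e_\lambda$ and of their uncertainties $\sigma(e_\lambda)$ from the BP/RP\xspace spectra is omitted.
\begin{verbatim}
import numpy as np
from scipy.special import erf

# Parameters of the redshift score (see the shift-score parameter table).
W0, W1, P = 0.71413, 0.28587, 0.24365

def zscore(e, sigma_e, intensity):
    """Z_score(k): weighted geometric mean of normal cumulative
    distribution functions evaluated at the continuum-subtracted line
    fluxes e, with uncertainties sigma_e; `intensity` holds the
    theoretical line intensities, normalised so that they sum to one
    over the lines falling in the observed wavelength range."""
    cdf = 0.5 * (1.0 + erf(e / (sigma_e * np.sqrt(2.0))))
    return np.prod(cdf ** intensity)

def redshift_score(chi2_ratio, z_score):
    """S(k): weighted p-norm of the chi-square ratio and the Z_score."""
    return (W0 * chi2_ratio**P + W1 * z_score**P) ** (1.0 / P)
\end{verbatim}
The shift with the highest value of \texttt{redshift\_score} is then the one retained, in line with the selection rule described above.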
Finally, the uncertainty on the selected redshift, $\sigma_z$, is derived from the uncertainty on the associated shift, $\sigma_k$, using the asymptotic normality property of the $\chi^2$ estimator, which states that $k$ is asymptotically normally distributed with a variance that is inversely proportional to the curvature of the CCF around the optimum. In particular, the variance on $k$ is asymptotically given by $\sigma_k^2 = -2\, \left[ d^2 \operatorname{ccf}(k) / dk^2 \right]^{-1},$ and as $Z = k \log\left(L\right)$, the logarithmic redshift, $Z = \log(z+1)$, is also normally distributed with a variance of \begin{equation} \sigma_Z^2 = 2 \, \left|\frac{d^2 \operatorname{ccf}(k)}{dk^2}\right|^{-1} \log^2 \left(L\right). \label{eq:qsoc_log_redshift_variance} \end{equation} Furthermore, as $z = \exp Z - 1$, the redshift that is reported by \modulename{QSOC} follows a log-normal distribution whose underlying normal distribution has mean $Z$ and variance $\sigma_Z^2$, although this distribution is shifted by $-1$. Accordingly, the squared uncertainty on the computed redshift is given by \begin{equation} \sigma_z^2 = (z + 1)^2 \left(\exp \sigma_Z^2 - 1\right) \exp \sigma_Z^2, \label{eq:qsoc_redshift_variance} \end{equation} while the lower and upper limits of its confidence interval, taken as its $0.15866$ and $0.84134$ quantiles, respectively, are given by \begin{equation} z_{\rm low} = \exp(Z - \sigma_Z) - 1 \hspace{0.5cm} \mbox{and} \hspace{0.5cm} z_{\rm up} = \exp(Z + \sigma_Z) - 1. \label{eq:qsoc_redshift_confidence_interval} \end{equation} \subsection{Performance and results} \label{subsec:qsoc_performances} The \modulename{QSOC} contributions to \gdr{3} can be found in the \linktoEGTable{qso_candidates} table and consist of: \linktoEGParam{qso_candidates}{redshift_qsoc}, the quasar redshift, $z$; \linktoEGParam{qso_candidates}{redshift_qsoc_lower}/\linktoEGParam{qso_candidates}{redshift_qsoc_upper}, the lower and upper confidence limits, $z_{\rm low}$ and $z_{\rm up}$, corresponding to the 16\% and 84\% quantiles of $z$, respectively, as given by Equation \ref{eq:qsoc_redshift_confidence_interval}; \linktoEGParam{qso_candidates}{ccfratio_qsoc}, the chi-square ratio, $\chi_r^2$, from Equation \ref{eq:qsoc_chi2r}; \linktoEGParam{qso_candidates}{zscore_qsoc}, the $Z_{\rm score}$ from Equation \ref{eq:qsoc_zscore}; and \linktoEGParam{qso_candidates}{flags_qsoc}, the \modulename{QSOC} processing flags, $z_{\rm warn}$, from Table \ref{tbl:qsoc_zwarning}. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{figures/qsoc_Zerr.png} \caption{Histogram of the logarithmic redshift error, $\Delta Z = \log(z + 1) - \log(z_{\rm true} + 1)$ between \modulename{QSOC} redshift, $z,$ and literature redshift, $z_{\rm true}$, for 439\,127 sources contained in the Milliquas 7.2 catalogue. A bin width of $0.01$ was used for both curves.} \label{fig:qsoc_Zerr} \end{figure} We quantitatively assess the quality of the \modulename{QSOC} outputs by comparing the predicted redshifts against values from the literature. For this purpose, we cross-matched 6\,375\,063 sources with redshift estimates from \modulename{QSOC} with 790\,776 quasars that have spectroscopically confirmed redshifts in the Milliquas 7.2 catalogue of \cite{2021arXiv210512985F} (i.e. {\tt type = 'Q'} in Milliquas). Using a 1$\ensuremath{''}$ search radius, we found 439\,127 sources in common between the two catalogues.
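The summary statistics quoted in the remainder of this section can be reproduced from such a cross-match in a few lines; the sketch below assumes two arrays holding the \modulename{QSOC} and literature redshifts of the matched sources and is only meant to fix the definitions (the logarithmic redshift error is formally defined in Equation \ref{eq:qsoc_log_redshift_error} below).
\begin{verbatim}
import numpy as np

def comparison_metrics(z_qsoc, z_true, threshold=0.1):
    """Logarithmic redshift error and fraction of sources with an
    absolute redshift error |Delta z| below the given threshold."""
    z_qsoc, z_true = np.asarray(z_qsoc), np.asarray(z_true)
    delta_Z = np.log(z_qsoc + 1.0) - np.log(z_true + 1.0)
    frac_ok = np.mean(np.abs(z_qsoc - z_true) < threshold)
    return delta_Z, frac_ok
\end{verbatim}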
It should be emphasised here that the distributions of the redshifts and $G$ magnitudes of the cross-matched sources are not representative of the intrinsic quasar population as they inherit the selection and observational biases that are present in both the Milliquas catalogue and Gaia\xspace. The numbers reported here should therefore be interpreted with that in mind. A straight comparison between the \modulename{QSOC} predictions and the Milliquas spectroscopic redshifts, illustrated in Figure \ref{fig:qsoc_Zerr} on a logarithmic scale, shows that $63.7\%$ of the sources have an absolute error on the predicted redshift, $|\Delta z|$, that is lower than 0.1. This ratio increases to $97.6\%$ if only {\tt flags\_qsoc = 0} sources are considered. As most of the DR12Q quasars we use for building our templates are also contained in the Milliquas catalogue (161\,278 \modulename{QSOC} predictions are contained in both the DR12Q and Milliquas catalogues), one may wonder whether these induce a positive bias on the fraction of sources with $| \Delta z | < 0.1$. In order to answer this question, we note that the \modulename{QSOC} templates were built based on a statistically significant number of 297\,264 sources, and so we expect the computed templates to be representative of the whole quasar population under study while not being too specific to the particular set of spectra we used (i.e.\ any other set of spectra of the same size would have provided us with very similar templates). Nevertheless, $71\%$ of the sources in the DR12Q catalogue have $|\Delta z| < 0.1$. By comparison, $59.5\%$ of the sources that are not in the DR12Q catalogue have $|\Delta z| < 0.1$. If we consider only sources with \linktoEGParam{qso_candidates}{flags_qsoc}$ = 0$, then these numbers are $97\%$ and $98.8\%$, respectively. The observed differences can be explained primarily by the fact that, due to the selection made in the SDSS-III/BOSS survey, $31.7\%$ of the DR12Q sources that are found among the \modulename{QSOC} predictions have $2 < z < 2.6,$ where the presence of the Ly$\alpha$+\ion{Si}{iv}+\ion{C}{iv}+\ion{C}{iii}] emission lines allows secure determination of the redshift ($81.4\%$ of the sources in this range have $| \Delta z | < 0.1$). In contrast, the redshift distribution of the sources that are found only in Milliquas peaks in the range $1.2 < z < 1.4$ where only $50.5\%$ of the sources have $| \Delta z | < 0.1$, owing to the sole presence of the \ion{Mg}{ii} emission line in this redshift range (see Section \ref{subsec:qsoc_filtering} for more information on these specific redshift ranges). However, both subsets have a comparable fraction of predictions with $| \Delta z | < 0.1$ once these are computed over narrower redshift ranges, as expected. We further investigate the distribution of the logarithmic redshift error, defined as \begin{equation} \Delta Z = \log(z + 1) - \log(z_{\rm true} + 1), \label{eq:qsoc_log_redshift_error} \end{equation} between \modulename{QSOC} redshift, $z$, and the literature redshift, $z_{\rm true}$, in Figure \ref{fig:qsoc_Zerr}.
If we assume that a spectral feature at rest-frame wavelength $\lambda_{\rm true}$ is falsely identified by \modulename{QSOC} as another spectral feature at $\lambda_{\rm false}$, then the resulting logarithmic redshift error will be equal to $\Delta Z = \log \lambda_{\rm true} - \log \lambda_{\rm false}$, such that $\Delta Z$, besides its ability to identify good predictions, can also be used to highlight common mismatches between emission lines. In Figure \ref{fig:qsoc_Zerr}, we can see that most of the predicted (logarithmic) redshifts are in good agreement with their literature values while emission line mismatches mainly occur with respect to two specific emission lines: \ion{C}{iii]} and \ion{Mg}{ii}. In the most frequent case, the \ion{C}{iv} emission line is misidentified as Ly$\alpha$, because the separation between these two lines is comparable to the separation between \ion{C}{iv} and \ion{C}{iii]} when considered on a logarithmic wavelength scale (using the rest-frame wavelengths of Table \ref{tbl:qsoc_shift_score_parameters}, $\log(154.7/121.9) \approx 0.238$ against $\log(190.0/154.7) \approx 0.206$). The Ly$\alpha$ and \ion{C}{iii]} lines are subsequently fitted to noise or wiggles in the very blue part of BP and in RP, respectively. By requiring that {\tt flags\_qsoc = 0}, we can mitigate the effect of these emission-line mismatches without affecting the central peak of correct predictions too much. Finally, we note that the distribution of $\Delta Z / \sigma_Z$, where $\sigma_Z = [ \log (z_{\rm up}+1) - \log (z_{\rm low}+1) ] / 2$ is defined in Equation \ref{eq:qsoc_log_redshift_variance}, follows an approximately Gaussian distribution with a median of 0.007 and a standard deviation (estimated from the interquartile range) of $1.053$ if observations with $|\Delta z| < 0.1$ are considered. If only observations for which {\tt flags\_qsoc = 0} are considered, $\Delta Z / \sigma_Z$ has a median of $0.002$ and a standard deviation of $1.14$. \subsection{Use of \modulename{QSOC} results} \label{subsec:qsoc_filtering} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{figures/qsoc_z.png} \caption{Fraction of successful and reliable \modulename{QSOC} predictions computed over 439\,127 sources contained in the Milliquas 7.2 catalogue with respect to $G$ magnitude (top), Milliquas redshift (middle), and QSOC redshift (bottom). Black line: Fraction of observations with an absolute error of the predicted redshift, $|\Delta z|$, lower than 0.1. Orange line: Fraction of {\tt flags\_qsoc = 0} sources with $|\Delta z| < 0.1$. Blue line: Fraction of observations with {\tt flags\_qsoc = 0}. Orange and blue dotted lines correspond to their solid counterpart while considering {\tt (flags\_qsoc = 0 or flags\_qsoc = 16)} observations instead of {\tt flags\_qsoc = 0} observations. Fractions are computed with respect to the number of sources in magnitude and redshift bins of $0.1$.} \label{fig:qsoc_z} \end{figure} In \gdr{3}, \modulename{QSOC} systematically publishes redshift predictions for which \linktoEGParam{qso_candidates}{classprob_dsc_combmod_quasar} $\geq 0.01$ and \linktoEGParam{qso_candidates}{flags_qsoc} $\leq 16$, leading to 1\,834\,118 sources that are published according to these criteria (see \linktoEGParam{qso_candidates}{source_selection_flags} for more information on the selection procedure).
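In terms of implementation, these selections reduce to simple bit masks on \linktoEGParam{qso_candidates}{flags_qsoc}; the sketch below assumes the relevant column has been retrieved into a numpy array and merely illustrates the flag logic of Table \ref{tbl:qsoc_zwarning}.
\begin{verbatim}
import numpy as np

def qsoc_selection(flags_qsoc, strict=True):
    """Boolean mask of the recommended selections: flags_qsoc = 0
    (strict), or flags_qsoc in {0, 16}, i.e. processing bits 1--4
    unset while the spectrum-quality bit 5 (value 16) may be set."""
    flags = np.asarray(flags_qsoc)
    if strict:
        return flags == 0
    return (flags & 15) == 0
\end{verbatim}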
Nevertheless, for the sake of completeness, we also publish redshift estimates for all sources with \linktoEGParam{qso_candidates}{classprob_dsc_combmod_quasar} $\geq 0.01$ that are contained in the \linktoEGTable{qso_candidates} table, yielding $4\,540\,945$ additional sources for which \linktoEGParam{qso_candidates}{flags_qsoc} $> 16$. However, these last predictions are of lower quality as, for example, a comparison with the Milliquas spectroscopic redshift shows that $39.6\%$ of the \linktoEGParam{qso_candidates}{flags_qsoc} $> 16$ sources have $| \Delta z | < 0.1$, compared to $87\%$ for sources with \linktoEGParam{qso_candidates}{flags_qsoc} $\leq 16$. Of the source parameters published in \gdr{3}, the $G$-band magnitude, \linktoMainParam{gaia_source}{phot_g_mean_mag}, has a particularly strong impact on the quality of the \modulename{QSOC} predictions; it shows a clear correlation with the S/N of the BP/RP\xspace spectra, as does the number of BP/RP\xspace spectral transits to a lesser extent. From the top panel of Figure \ref{fig:qsoc_z}, we see that more than $89\%$ of the sources with $G \leq 19$ mag have $|\Delta z| < 0.1$ (black line) while the same fraction is obtained for spectra with $19.9 < G < 20$ mag only for sources with \linktoEGParam{qso_candidates}{flags_qsoc}$ = 0$ (orange solid line). However, these correspond to a very small fraction ($5.5\%$) of the sources in this magnitude range (blue solid line). A less stringent cut, \linktoEGParam{qso_candidates}{flags_qsoc} = 0 or \linktoEGParam{qso_candidates}{flags_qsoc} = 16, where we encounter no processing issue (i.e.\ flag bits $1$--$4$ are not set) even when the BP/RP\xspace spectra are unreliable (i.e. flag bit $5$ can be set), still leads to $92\%$ of the sources with $|\Delta z| < 0.1$ (orange dotted line) while retaining $36.5\%$ of the sources in this magnitude range (blue dotted line). The same cut concurrently retains $22\%$ of the $20.4 < G < 20.5$ mag observations, where $81.5\%$ of the predictions have $|\Delta z| < 0.1$, and is accordingly recommended for users dealing with sources at $G > 19$ mag. Besides the aforementioned recommendations on the \linktoEGParam{qso_candidates}{flags_qsoc} and $G$ magnitude, we should point out an important limitation of the Gaia\xspace BP/RP\xspace spectrophotometers regarding the identification and characterisation of quasars, namely the fact that the \ion{Mg}{ii} emission line is often the sole detectable emission line in the BP/RP\xspace spectra of $0.9 < z < 1.3$ quasars in the moderate-S/N regime of $G \gtrsim 19$ mag spectra. Indeed, despite the broad $325$--$1050$ nm coverage of the BP/RP\xspace spectrophotometers, quasar emission lines are often significantly damped in the observed wavelength regions $\lambda < 430$ nm and $\lambda > 950$ nm, owing to the low instrumental response in these ranges \citep[see for example][Figure 10]{DR3-DPACP-101}. As a result, of the two emission lines surrounding the \ion{Mg}{ii} line\footnote{The H$\gamma$ emission line being intrinsically weak, it is often not seen in the BP/RP\xspace spectra of quasars and is accordingly not considered here.}, H$\beta$ leaves the BP/RP\xspace spectra at $z = 0.95$ while \ion{C}{iii]} only enters them at $z = 1.25$. Nevertheless, we consider a range of $0.9 < z < 1.3$ in order to take into account low-S/N spectra where these lines, although present, are often lost in the noise.
The sole presence of the \ion{Mg}{ii} emission line has the deleterious effect of increasing the rate of mismatches between this line and, mainly, the Ly$\alpha$ and H$\beta$ emission lines, as seen in Figure \ref{fig:qsoc_Zerr}. Another issue also arises for $z \approx 1.3$ quasars, where the \ion{C}{iii]} emission line enters the BP spectrum while the \ion{Mg}{ii} line now lies on the peak of the BP spectrum, which complicates its detection by the algorithm, leading to mismatches between \ion{C}{iii]} and the Ly$\alpha$ or \ion{Mg}{ii} emission lines. These effects are clearly visible in the middle panel of Figure \ref{fig:qsoc_z} at $0.9 < z < 1.3$, along with the previously discussed misidentification of the \ion{C}{iv} line as Ly$\alpha$ at $z \approx 2$. Appropriate cuts on \linktoEGParam{qso_candidates}{flags_qsoc} allow both of these shortcomings to be alleviated, as seen in \figref{fig:qsoc_z}. In the bottom panel of Figure \ref{fig:qsoc_z}, we see that the fraction of sources with $| \Delta z | < 0.1$ amongst very low- and high-redshift sources, as predicted by \modulename{QSOC}, is low ($7.25\%$ for $z < 0.2$ sources and $2.66\%$ for $z > 4$ sources). The explanation is that these very low- and high-$z$ quasars are rare in our sample, such that any erroneous prediction towards these loosely populated regions is largely reflected in the final fraction of predictions (i.e. the `purity' in these regions becomes very low). Again, cuts on the \linktoEGParam{qso_candidates}{flags_qsoc} allow us to recover about 90\% of sources with $| \Delta z | < 0.1$ in the range $0.1 < z < 4.4$. Concentrating on the drop at $z < 0.1$, we note that only 69 sources have a Milliquas redshift in this range, while only 31 have $0.0826 < z < 0.1$ (i.e.\ in the predictable \modulename{QSOC} redshift range). Amongst these 69 sources, 38 have $| \Delta z | < 0.1$, while the 4 sources with \linktoEGParam{qso_candidates}{flags_qsoc}$ = 0$ unfortunately all have erroneous predictions. These low numbers, along with the fact that \modulename{QSOC} predicts 2\,154 sources in this redshift range (i.e. 0.5\% of the total predictions), explain the drop at $z < 0.1$ in the middle and bottom panels of Figure \ref{fig:qsoc_z}, even when \linktoEGParam{qso_candidates}{flags_qsoc}$ = 0$. Regarding the $z > 4.4$ quasars, only 76 of them have redshifts in both Gaia\xspace and Milliquas, while only 10 have {\tt flags\_qsoc = 0} and 9 of these also have $| \Delta z | < 0.1$. There are 18\,959 sources with \modulename{QSOC} redshift predictions in this range, although only 101 (i.e. $0.5\%$) of them have \linktoEGParam{qso_candidates}{flags_qsoc}$ = 0$. This leads to a rather poor fraction of $9/101$ of the sources with $| \Delta z | < 0.1$ and {\tt flags\_qsoc = 0} in this redshift range. In conclusion, we first emphasise that \modulename{QSOC} is designed to process Type-I/core-dominated quasars with broad emission lines in the optical and accordingly yields only poor predictions on galaxies, type-II AGN, and BL Lacertae/blazar objects. Secondly, \modulename{SMSgen} does not provide covariance matrices on the integrated flux \citep{DR3-DPACP-157}, meaning that the computed $\chi^2$ from Equation \ref{eq:qsoc_chi2} is systematically underestimated and is consequently not published in \gdr{3}.
The computed redshift and associated confidence intervals, $z_{\rm low}$ and $z_{\rm up}$ from Equation \ref{eq:qsoc_redshift_confidence_interval}, though appropriately re-scaled, might also sporadically suffer from this limitation. \section{Unresolved galaxy classifier (UGC)} \label{sec:ugc} \subsection{Objectives} \label{subsec:ugc_objective} The Unresolved Galaxy Classifier (\modulename{UGC}) module estimates the redshift, $z$, of the sources with $G < 21$ mag that are classified as galaxies by \modulename{DSC}-Combmod with a probability of 0.25 or more (see \secref{sec:dsc} for details). \modulename{UGC} infers redshifts in the range $0 \leq z \leq 0.6$ by using a combination of three support vector machines \citep[SVMs,][]{CortesVapnik95}, all taking as input the BP/RP\xspace spectra of the sources as sampled by \modulename{SMSgen} \citep[Section 2.3.2]{DR3-DPACP-157}. The SVMs are trained on a set of BP/RP\xspace spectra of galaxies that are spectroscopically confirmed in the SDSS DR16 archive \citep{2020ApJS..249....3A}. \modulename{UGC} further applies filtering criteria for selecting redshifts to be published in \gdr{3}, as described in \secref{subsec:ugc_method}. \subsection{Method} \label{subsec:ugc_method} \modulename{UGC} is based on the LIBSVM library of \cite{CC01a}, from which three SVM models are built: (i) \emph{t-SVM}, the \emph{total-redshift range} SVM model, which computes the published redshift, \linktoEGParam{galaxy_candidates}{redshift_ugc}, and associated SVM prediction intervals, \linktoEGParam{galaxy_candidates}{redshift_ugc_lower} and \linktoEGParam{galaxy_candidates}{redshift_ugc_upper}, (ii) \emph{r-SVM}, and (iii) \emph{c-SVM}, which are respectively regression and classification SVM models applied to discretised versions of the redshift and used exclusively for the internal validation of the redshift produced by the t-SVM model. All SVM models use common training and test sets, which we describe below. \subsubsection{Training and test sets} \label{subsubsec:ugc_svm_training_test_sets} The sources in the training and test sets were selected from the SDSS DR16 archive \citep{2020ApJS..249....3A}, which provides the position, redshift, magnitudes in the $u$-, $g$-, $r$-, $i$-, $z$-bands, photometric size (we used here the Petrosian radius), and interstellar extinction for each spectroscopically confirmed galaxy. There are 2\,787\,883 objects in SDSS DR16 that are spectroscopically classified as galaxies, but we rejected sources with poor or missing photometry, size, or redshift, thus reducing the number of galaxies to 2\,714\,637. Despite the known lack of uniformity of the SDSS DR16 redshift distribution due to the BOSS target selection\footnote{\href{https://www.sdss.org/dr16/algorithms/boss_target_selection/}{https://www.sdss.org/dr16/algorithms/boss\_target\_selection/}}, this survey still provides the largest existing database of accurate spectroscopic redshifts of galaxies that can be used as target values in the SVM training and test sets. The selected galaxies were cross-matched to the \gdr{3} sources, prior to their filtering by CU9, using a search radius of 0.54\ensuremath{''}, which resulted in 1\,189\,812 cross-matched sources. Amongst these, 711\,600 have BP/RP\xspace spectra, though not all of them are published in \gdr{3}.
Because the inclusion of high-redshift galaxies would lead to a very unbalanced training set (i.e.\ very few high-redshift galaxies), we further imposed an upper limit on the SDSS DR16 redshift of $z \leq 0.6$, leaving 709\,449 sources that constitute our \emph{base set}. For the preparation of the training set, a number of conditions were further imposed on the sources in the base set: (i) $\ensuremath{G}\xspace \leq 21.0$ mag; (ii) BP/RP\xspace spectra must be composed of a minimum of six epochs of observations; (iii) the mean flux in the blue and red parts of the BP/RP\xspace spectra, as computed by \modulename{UGC}, must lie in the ranges $0.3\leq bpSpecFlux\leq 100$ e$^-$s$^{-1}$ and $0.5\leq{}rpSpecFlux\leq 200$ e$^-$s$^{-1}$, respectively, in order to exclude potentially poor-quality spectra; (iv) the image size, as characterised by the Petrosian radius, must be in the range $0.5\ensuremath{''}\leq{}petroRad50\_r\leq5\ensuremath{''}$ in order to exclude suspiciously compact or significantly extended galaxies; (v) the interstellar extinction in the $r$-band must satisfy $extinction\_r\leq0.5$ mag in order to avoid highly reddened sources; and (vi) the redshift must be larger than $0.01$ in order to exclude nearby extended galaxies. After applying all these cuts, 377\,875 sources remained, which we refer to as the \emph{clean set}. Of these, 6\,000 sources were randomly selected in order to construct the \emph{training set}, the redshift distribution of which is given in \tabref{tab:ugc_traintest}. The imbalance of this training set is clearly visible in this table, and is caused by the small number of high-redshift galaxies present in the clean set. \begin{table*} \small \centering \caption{Distribution of the sources in the \modulename{UGC} data sets according to their SDSS redshifts.} \label{tab:ugc_traintest} \begin{tabular}{lrrrrrrr} \hline & \multicolumn{6}{c}{Redshift ranges} & \\ Data set name & $0.0$--$0.1$ & $0.1$--$0.2$ & $0.2$--$0.3$ & $0.3$--$0.4$ & $0.4$--$0.5$ & $0.5$--$0.6$ & Total \\ \hline \hline Base set & 224\,264 & 292\,968 & 118\,248 & 65\,912 & 7\,055 & 1\,002 & 709\,449 \\ \hspace{0.5cm} Clean set & 152\,564 & 192\,625 & 29\,145 & 2\,490 & 724 & 327 & 377\,875 \\ \hspace{1cm} Clean test set$^a$ & 150\,964 & 191\,025 & 28\,045 & 1\,590 & 224 & 27 & 371\,875 \\ \hspace{1cm} Training set & 1\,600 & 1\,600 & 1\,100 & 900 & 500 & 300 & 6\,000 \\ \hspace{0.5cm} Base test set$^a$ & 222\,664 & 291\,368 & 117\,148 & 65\,012 & 6\,555 & 702 & 703\,449 \\ \hline \multicolumn{8}{p{12.5cm}}{$^a$ The base test set and clean test set are respectively composed of sources in the base set and clean set that are not contained in the training set.} \end{tabular} \end{table*} \begin{table*} \small \caption{Galactic coordinates and colour--colour regions from which \modulename{UGC} results are filtered out.
These correspond to regions where extragalactic objects are not expected: the Magellanic Clouds (LMC, SMC) and an area (CNT) close to the Galactic centre.} \centering \label{tab:ugc_galactic_areas} \begin{tabular}{lcccc} \hline \multirow{2}{*}{Area} & \multicolumn{2}{c}{Galactic coordinates range} & Colour--colour box A & Colour--colour box B \\ & longitude [\ensuremath{^\circ}]& latitude [\ensuremath{^\circ}] &[mag] & [mag] \\ \hline \multirow{2}{*}{CNT} & \multirow{2}{*}{$0.0\pm15.0$} & \multirow{2}{*}{$-5.0\pm5.0$} & $-0.5<\ensuremath{G}\xspace-\ensuremath{G_{\rm BP}}\xspace<0.5$ & $-0.5<\ensuremath{G}\xspace-\ensuremath{G_{\rm BP}}\xspace<3.0$\\ & & & $0.4<\ensuremath{G_{\rm BP}}\xspace-\ensuremath{G_{\rm RP}}\xspace<1.3$ & $-0.2<\ensuremath{G_{\rm BP}}\xspace-\ensuremath{G_{\rm RP}}\xspace<1.4$ \\ \hline \multirow{2}{*}{LMC} & \multirow{2}{*}{$279.5\pm4.0$} & \multirow{2}{*}{$-33.25\pm3.25$} & $-3.0<\ensuremath{G}\xspace-\ensuremath{G_{\rm BP}}\xspace<-1.5$& $-0.7<\ensuremath{G}\xspace-\ensuremath{G_{\rm BP}}\xspace<2.0$\\ & & & $-0.4<\ensuremath{G_{\rm BP}}\xspace-\ensuremath{G_{\rm RP}}\xspace<1.0$ & $-0.8<\ensuremath{G_{\rm BP}}\xspace-\ensuremath{G_{\rm RP}}\xspace<1.4$ \\ \hline \multirow{2}{*}{SMC} & \multirow{2}{*}{$303.0\pm1.0$} & \multirow{2}{*}{$-44.0\pm1.0$} & $-3.0<\ensuremath{G}\xspace-\ensuremath{G_{\rm BP}}\xspace<-1.5$& $-0.7<\ensuremath{G}\xspace-\ensuremath{G_{\rm BP}}\xspace<2.0$\\ & & & $-0.4<\ensuremath{G_{\rm BP}}\xspace-\ensuremath{G_{\rm RP}}\xspace<1.0$ & $-0.8<\ensuremath{G_{\rm BP}}\xspace-\ensuremath{G_{\rm RP}}\xspace<1.4$ \\ \hline \end{tabular} \end{table*} The conditions described in the previous paragraph were not imposed for the test set. Instead, all 703\,449 spectra in the base set that were not used for training were included in the \emph{base test set}, whose redshift distribution is shown in \tabref{tab:ugc_traintest}. Additionally, a purer test sample, the \emph{clean test set}, was derived from the clean set by removing the training data it contains. \subsubsection{Support vector machine models} \label{subsubsec:ugc_method_svm} The inputs of all SVM models are BP/RP\xspace spectra. The spectra are first truncated by removing the first 34 and the last 6 samples in BP, and the first 4 and the last 10 samples in RP, in order to avoid regions of low S/N. These cuts result in the definition of the usable wavelength ranges for the BP and the RP parts of the spectrum, namely 366--627 nm and 620--996 nm, respectively. Each pair of truncated spectra is then concatenated to form the SVM input vector of 186 fluxes. A common setup was implemented for the SVM model preparation (see LIBSVM\footnote{\href{https://www.csie.ntu.edu.tw/~cjlin/libsvm/}{https://www.csie.ntu.edu.tw/~cjlin/libsvm}} for details): The Standardization Unbiased method was selected to scale the target data and the vector elements to the range $[-1.0,1.0]$; the radial basis function (RBF) $K(\mathbf{x_{i}},\mathbf{x_{j}})=\exp(-\gamma|\mathbf{x_{i}}-\mathbf{x_{j}}|^{2})$ was chosen as the kernel function, and the tolerance of the termination criterion was set to $e=0.001$; shrinking heuristics were used to speed up the training process; a four-fold tuning (cross-validation) was applied to determine the optimal $\gamma$ kernel parameter and the penalty parameter $C$ of the error term in the optimisation problem. The \modulename{UGC} redshifts are estimated by t-SVM, which implements an $\epsilon$-SVR regression model trained for redshifts in the range $0.0\leq{}z\leq{}0.6$.
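A conceptually similar setup can be reproduced with scikit-learn, which wraps LIBSVM; the sketch below is not the \modulename{UGC} implementation, the parameter grids and the default $\epsilon$ value are illustrative assumptions, and the scaling of the target values is omitted for brevity.
\begin{verbatim}
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR

def train_t_svm(X, y):
    """Sketch of an epsilon-SVR with an RBF kernel: inputs scaled to
    [-1, 1], termination tolerance 0.001, shrinking heuristics, and a
    four-fold cross-validated grid search for the penalty C and the
    kernel parameter gamma. X is the (n_sources, 186) array of
    concatenated BP/RP fluxes and y holds the SDSS redshifts."""
    model = make_pipeline(
        MinMaxScaler(feature_range=(-1.0, 1.0)),
        SVR(kernel='rbf', tol=1e-3, shrinking=True),
    )
    grid = {
        'svr__C': 10.0 ** np.arange(-1, 4),      # illustrative grid
        'svr__gamma': 10.0 ** np.arange(-4, 1),  # illustrative grid
    }
    search = GridSearchCV(model, grid, cv=4)
    search.fit(X, y)
    return search.best_estimator_
\end{verbatim}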
The two other SVM models, c-SVM and r-SVM, use the BP/RP\xspace spectra as input but are trained to predict a discretised version of the redshifts and are used solely for the purpose of redshift validation (\secref{subsubsec:ugc_method_source_filtering}). The c-SVM model is a C-SVC classification model trained on six different classes corresponding to the redshift ranges $0\leq{}z<0.1$, $0.1\leq{}z<0.2$, $0.2\leq{}z<0.3$, $0.3\leq{}z<0.4$, $0.4\leq{}z<0.5,$ and $0.5\leq{}z<0.6$. The output of the c-SVM model is a class-probability vector. The element of the vector with the highest value above 0.5 is taken as the selected class. If there is no element with probability larger than 0.5, then the source is marked as unclassified. The r-SVM model implements the $\epsilon$-SVR regression model of LIBSVM ---similarly to the t-SVM model--- but it is trained on six discrete target values $(0.05, 0.15, \dots, 0.55)$. As only the first decimal is retained for the predictions, the output of the r-SVM model is directly comparable to the classes used by the c-SVM model. \subsubsection{Source filtering} \label{subsubsec:ugc_method_source_filtering} Two sets of criteria are used to select the \modulename{UGC} outputs to be published in \gdr{3}. The first set applies to specific properties of the processed sources, while the second concerns the redshift validity. An output is included in \gdr{3} only if all the criteria of the two sets are satisfied. Although \modulename{UGC} processes all $G < 21$ mag sources for which the \modulename{DSC} Combmod galaxy probability is higher than or equal to $0.25$, additional criteria were imposed for selecting the purest sample of results. First, we require that the number of spectral transits in both BP and RP is higher than or equal to ten. Second, we require that the mean flux in the blue and red parts of the BP/RP\xspace spectra lies in the ranges set in \secref{subsubsec:ugc_svm_training_test_sets}. Third, we decided to only publish redshifts for sources with $G > 17$ mag, so as to exclude bright and possibly extended sources, for which it is likely that only part of the galaxy has been recorded. Fourth, we require $G-\ensuremath{G_{\rm BP}}\xspace > 0.25$ mag in order to reduce the number of sources with true $z > 0.6$ (which lie outside the range of the training data) by as much as possible. The fifth and final condition is related to the location of blended sources that are erroneously classified as galaxies in high-density regions in the sky (see also \secref{subsec:dsc_results}). Indeed, the positional distribution of the sources processed by \modulename{UGC} shows a high concentration of galaxies in three small areas where extragalactic objects are not expected in large numbers: a region below the Galactic centre, and two areas centred on the Magellanic Clouds (see \tabref{tab:ugc_galactic_areas}). Almost 9\% of the total number of processed sources originate in these three areas. Sources in these areas also occupy a specific region of the $G-\ensuremath{G_{\rm BP}}\xspace, \ensuremath{G_{\rm BP}}\xspace-\ensuremath{G_{\rm RP}}\xspace$ colour--colour diagram that is distinct from the locus of the remaining sources. This distinction has been used to define colour cuts (shown in \tabref{tab:ugc_galactic_areas}) which, in combination with the coordinates of the three areas, allowed us to clean the suspicious clumps of galaxies and to remove a large number of potentially misclassified sources in these three areas. 
Nonetheless, the conditions listed in \tabref{tab:ugc_galactic_areas} are not applied if the \modulename{DSC} Combmod probability for the source to be a galaxy is equal to one. The comparison of the redshifts produced by the t-SVM model to those of the r-SVM and c-SVM models allows us to internally validate the \modulename{UGC} redshifts. The implementation of the filtering involves first the rejection of sources for which at least one of the SVM models has not produced an output (either because there is no prediction or because the source is marked as unclassified). Second, the three computed redshifts are required to span at most two adjacent bins of redshift, similar to those defined for the c-SVM and r-SVM models. The largest absolute difference between the t-SVM redshift and the central value of the c-SVM and r-SVM redshift bins is $0.08$. The redshifts of sources not satisfying one of these criteria are not published in \gdr{3}. \subsection{Performance} \label{subsec:ugc_performance} \begin{figure*} \centering \includegraphics[width=0.48\textwidth]{figures/ugc_t-svm3_perf_test.png} \includegraphics[width=0.48\textwidth]{figures/ugc_t-svm3_perf_testclean.png} \caption[\modulename{UGC} t-SVM model performance]{Comparison of the \modulename{UGC} redshifts, as estimated from the t-SVM model, with SDSS DR16 redshifts for the base test set (left) and for the clean test set (right), as identified in \secref{subsubsec:ugc_svm_training_test_sets}.} \label{fig:ugc_t-svm3_test_perf} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.33\textwidth]{figures/ugc_t-svm3_stat_02bin.png} \includegraphics[width=0.32\textwidth]{figures/ugc_t-svm3_perf_sensitivity.png} \includegraphics[width=0.32\textwidth]{figures/ugc_t-svm3_perf_precision.png} \caption{ Left panel: Mean ($\mu_i$) and standard deviation ($\sigma_i$) of the difference between the \modulename{UGC} redshifts, from the t-SVM model, and associated SDSS redshifts for sources contained in the \modulename{UGC} base test set and averaged over redshift bins of size $0.02$. Completeness (middle panel) and purity (right panel) as a function of redshift, evaluated on the \modulename{UGC} test set (black) and clean set (cyan). The bin size is equal to 0.1.} \label{fig:ugc_t-svm3_performance} \end{figure*} The overall performance of the t-SVM model is given by the mean ($\mu$) and the standard deviation ($\sigma$) of the difference between the estimated and the real (target) redshifts. The internal test, applied to the training set itself, yields $\sigma=0.047$ and $\mu=-0.003$. The external test, which is performed on all 703\,449 spectra in the base test set, yields $\sigma=0.053$ and $\mu=0.020$ (\figref{fig:ugc_t-svm3_test_perf}, left panel). These values indicate that the performance is worse for the base test set, as expected. If the clean test set of 371\,875 spectra is used, the performance improves significantly, with $\sigma=0.037$ and $\mu=0.008$ (\figref{fig:ugc_t-svm3_test_perf}, right panel). The performance varies with redshift. To quantify this, the base test set was divided into SDSS redshift bins of size $0.02$. The mean, $\mu_i$, and the standard deviation, $\sigma_i$, of the differences between the redshift predicted by t-SVM and the real (SDSS) redshifts were determined for each one of these bins, as shown in \figref{fig:ugc_t-svm3_performance} (left panel). Generally, there are three regions with different performance.
For $z<0.02,$ the error and the bias are relatively large, indicating that the t-SVM is ineffective for redshifts close to zero. The performance is good in the range $0.02<z<0.26$; however, for larger redshifts, the bias changes significantly from almost zero to positive and then to negative values, while the error progressively increases. For $z>0.5,$ both $\mu_i$ and $\sigma_i$ show large scatter, probably due to the fact that large redshifts are under-represented in the t-SVM training set. In addition, the performance of the t-SVM model as a function of redshift was investigated by constructing a confusion matrix, as in classification problems. To this effect, a different class has been assigned to each redshift bin, $z_{\rm bin}$, both for the real (SDSS) and the predicted (t-SVM) redshifts. In this case, the bin size was $0.1$. The confusion matrix presents the total number of cases for each real and each predicted class (see the \linktosec{cu8par}{apsis}{ugc} for details). For a given redshift bin, $z_{\rm bin}$, the numbers of true-positive ($TP$), false-negative ($FN$), and false-positive ($FP$) predictions are used to evaluate the sensitivity, or completeness, $TP/(TP+FN),$ and the precision, or purity, $TP/(TP+FP)$. \figref{fig:ugc_t-svm3_performance} (middle and right panels) shows the t-SVM completeness and purity for the base and clean test sets in bins of redshift. Both completeness and purity for the base and clean test sets are very good up to a redshift of $z=0.2$. The purity is moderate ($\sim$ 0.5) for the two test sets for the redshift bin 0.2--0.3 and fails at larger redshifts. The completeness is moderate in the 0.3--0.5 bin and fails for the last bin. Generally, good performance can be expected for redshifts $z\leq 0.2$. \subsection{Results} \label{subsubsec:ugc_results} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/ugc_galSky_redshUgc_log.png} \caption[\modulename{UGC} sources Galactic sky]{Galactic sky distribution of the number of sources with redshifts estimated by \modulename{UGC}. The plot is shown at HEALPix level 7 (0.210 \ensuremath{\,\rm deg}$^2$).} \label{fig:ugc_galsky} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.32\textwidth]{figures/ugc_hist_redshUgc_02bin_lin.png} \includegraphics[width=0.32\textwidth]{figures/ugc_distr_redshUgc_Gmag_lin.png} \includegraphics[width=0.32\textwidth]{figures/ugc_distr_RP_BP_zAll.png} \caption[\modulename{UGC}\ estimated redshifts distribution]{Distribution of the \modulename{UGC} redshifts. (Left) Histogram of the estimated redshift in bins of size 0.02. (Middle) \modulename{UGC} redshifts as a function of \ensuremath{G}\xspace magnitude. (Right) Distribution of the sources with \modulename{UGC} redshifts on a BP/RP\xspace magnitude diagram where different colours correspond to different redshift ranges.} \label{fig:ugc_redshifts_distr} \end{figure*} The \modulename{UGC} output is included in the \linktoEGTable{galaxy_candidates} table. There are 1\,367\,153 sources for which \modulename{UGC} provides a redshift value as estimated by t-SVM (\secref{subsubsec:ugc_method_svm}), \linktoEGParam{galaxy_candidates}{redshift_ugc}, along with the corresponding lower and upper limits of the SVM prediction interval, \linktoEGParam{galaxy_candidates}{redshift_ugc_lower} and \linktoEGParam{galaxy_candidates}{redshift_ugc_upper}, respectively.
The parameter \texttt{redshift\_ugc\_lower} is defined as \texttt{redshift\_ugc}$-\mu_{i}-\sigma_{i}$, where $i$ corresponds to the $i$th redshift range identified in the previous section, and $\mu_i$ and $\sigma_i$ are the associated bias and standard deviation computed on the base test set. Similarly, the parameter \texttt{redshift\_ugc\_upper} is defined as \texttt{redshift\_ugc}$-\mu_{i}+\sigma_{i}$. The value of $($\texttt{redshift\_ugc\_upper}$-$\texttt{redshift\_ugc\_lower}$)/2$ can therefore be used as an estimate of the 1-$\sigma$ uncertainty on \texttt{redshift\_ugc}. Apart from the Galactic plane, the sources with \modulename{UGC} redshifts are almost uniformly distributed on the sky, as seen in \figref{fig:ugc_galsky}, although there are two strips (lower-left and upper-right) of relatively lower density displaying residual patterns. These are regions that have been observed fewer times by Gaia and thus many of the sources in them do not appear in the \modulename{UGC} output because of the filters applied on the number of transits (see Figure~\ref{fig:dsc_number_skyplots}). The distribution of the estimated \texttt{redshift\_ugc} values shown in the left panel of \figref{fig:ugc_redshifts_distr} has a maximum at $z\simeq 0.1$, while almost 91\% of the redshifts are within $0.05\leq{}z<0.25$. About 7\% of the sources have redshifts larger than 0.25. The lowest and the highest redshifts reported are $z_{\rm min}=-0.036$ and $z_{\rm max}=0.598$, respectively. There are 33 sources with negative redshifts, although most of these values are very close to zero (with a median value of $-0.0054$). The dependence of the \texttt{redshift\_ugc} values on \ensuremath{G}\xspace magnitude is shown in the middle panel of \figref{fig:ugc_redshifts_distr}. As expected, sources with higher redshift are fainter (e.g. $z>0.4$ sources are mostly found at $\ensuremath{G}\xspace>19$ mag, while $z>0.5$ sources are found at $\ensuremath{G}\xspace>20$ mag). The dependence of the estimated redshift on the source magnitude is also evident in the BP/RP\xspace magnitude--magnitude diagram shown in the right panel of \figref{fig:ugc_redshifts_distr}, where different redshift ranges are represented with different colours. \begin{figure*} \centering \includegraphics[width=0.32\textwidth]{figures/ugc_hist_zUgc_zSdss_02bin_lin.png} \includegraphics[width=0.32\textwidth]{figures/ugc_distr_redshUgc_zSdss_lin.png} \includegraphics[width=0.32\textwidth]{figures/ugc_distr_redsh_zDiff_Gmag.png} \caption[Redshift SDSS and \modulename{UGC}\ redshifts comparison]{Comparison of the \modulename{UGC} estimated and the actual (SDSS DR16) redshifts for the 248\,356 sources in common (not shown are 67 sources with actual redshift greater than 0.6). Left panel: Distributions of the \modulename{UGC} redshifts and SDSS DR16 redshifts, which indicate that \modulename{UGC} tends to overestimate the small redshifts. Middle panel: Comparison of the \modulename{UGC} redshifts and SDSS DR16 redshifts. The unit line is shown in red. A small horizontal branch at \texttt{redshift\_ugc}=0.07 is discussed in the text. Right panel: Differences between the \modulename{UGC} and SDSS DR16 redshifts as a function of $\ensuremath{G}\xspace$ magnitude.
The red horizontal line designates perfect agreement.} \label{fig:ugc_redshift_sdss_compare} \end{figure*} There are 248\,356 sources with published \texttt{redshift\_ugc} in common with those spectroscopically classified as \texttt{'GALAXY'} or \texttt{'QSO'} in the SDSS DR16 (using a radius of $0.54$\ensuremath{''}, as before). The differences between the \texttt{redshift\_ugc} and the SDSS redshifts have a mean and standard deviation of $\mu=0.006$ and $\sigma=0.054$, respectively. If the 67 sources with SDSS redshifts greater than 0.6 are excluded, the standard deviation is reduced to $0.029$. \figref{fig:ugc_redshift_sdss_compare} (left panel) compares the distributions of the two redshift estimates. There is a clear excess in the number of sources with \modulename{UGC} redshifts around 0.1 compared to the SDSS redshifts. At the same time, there is a deficit in the lower redshift bins for \modulename{UGC}. The observed differences are probably due to an overestimation by \modulename{UGC} of the lower SDSS redshifts. These effects are better demonstrated in \figref{fig:ugc_redshift_sdss_compare} (middle panel). Most of the sources follow the unit line, albeit with significant scatter. However, there is a small bias which tends to be positive for $z \approx 0.1$. We also see in \figref{fig:ugc_redshift_sdss_compare} (middle panel) a short dense horizontal feature of sources with \texttt{redshift\_ugc} around 0.07, while the corresponding SDSS redshifts span a range of values from $\simeq$ 0 to 0.07. The majority of these problematic values occur at $0.07 < $\texttt{redshift\_ugc}$ < 0.071$, with 5178 sources with redshift values in the range 0.070822--0.070823. Detailed analysis (see the \linktosec{cu8par}{apsis}{ugc}) indicates that this peak contains a relatively large fraction of very bright sources (with $\ensuremath{G}\xspace < 17.5$, $\ensuremath{G_{\rm BP}}\xspace < 16$, and $\ensuremath{G_{\rm RP}}\xspace < 15$ mag), suggesting that the SVM models, which are not trained at all for bright, nearby galaxies, tend to make constant redshift predictions for such objects. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figures/ugc_distr_redshUgc_zSdss_high_redshift.png} \caption[\modulename{UGC} output includes high-redshift sources]{ \modulename{UGC} sources with high redshift from the SDSS DR16. Blue and red points are sources that are spectroscopically classified as \texttt{`QSO'} and \texttt{`GALAXY'} in the SDSS DR16, respectively.} \label{fig:ugc_matched_contamin_highz} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figures/ugc_distr_redshUgc_zSdss_Qso.png} \caption[\modulename{UGC} output includes QSOs]{ Comparison of the \modulename{UGC} and SDSS DR16 redshifts for sources classified as \texttt{`QSO'} in the SDSS DR16 with actual redshift lower than 0.6.} \label{fig:ugc_matched_contamin_qso} \end{figure} \figref{fig:ugc_redshift_sdss_compare} (right panel) shows the difference between \texttt{redshift\_ugc} and the actual SDSS redshift, as a function of $\ensuremath{G}\xspace$ magnitude. As expected, the performance of the \modulename{UGC} redshift estimator is poorer for fainter sources as indicated by the larger dispersion seen at faint $\ensuremath{G}\xspace$ magnitudes. The positive bias of the very bright and nearby galaxies is also clearly seen.
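For reference, the quantities discussed in this section can be combined as in the following minimal sketch; the column names match those of the \linktoEGTable{galaxy_candidates} table, the arrays are assumed to have been retrieved beforehand, and the identity used for the de-biased value simply follows from the definitions of the interval bounds given above.
\begin{verbatim}
import numpy as np

def ugc_sigma(lower, upper):
    """1-sigma uncertainty on redshift_ugc, taken as half the width
    of the published SVM prediction interval
    (redshift_ugc_lower, redshift_ugc_upper)."""
    return 0.5 * (np.asarray(upper) - np.asarray(lower))

def ugc_debiased(lower, upper):
    """Midpoint of the interval; since lower = z - mu_i - sigma_i and
    upper = z - mu_i + sigma_i, this equals redshift_ugc - mu_i,
    i.e. a bias-corrected redshift estimate."""
    return 0.5 * (np.asarray(upper) + np.asarray(lower))
\end{verbatim}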
\subsection{Use of \modulename{UGC} results} \label{subsec:ugc_filtering} \modulename{UGC} selects sources that have a \modulename{DSC} probability of being a galaxy of \linktoAPParam{astrophysical_parameters}{classprob_dsc_combmod_galaxy} $\geq 0.25$. This is a relatively low threshold, and so the final \modulename{UGC} galaxy catalogue is expected to include some misclassified quasars. Indeed, 5170 sources, or $\simeq 2\%$ of the sources in common with the SDSS DR16, have an SDSS spectroscopic class \texttt{`QSO'}, while 58 of them also have SDSS redshifts $z>0.6$, i.e. higher than the \modulename{UGC} limit. There are also 9 high-redshift sources spectroscopically classified as \texttt{`GALAXY'} by the SDSS. \figref{fig:ugc_matched_contamin_highz} shows a comparison between \texttt{redshift\_ugc} and SDSS redshifts for high-redshift sources. As expected, the \modulename{UGC} predictions are unreliable for these sources. However, as seen in \figref{fig:ugc_matched_contamin_qso}, the agreement between \texttt{redshift\_ugc} and SDSS redshifts of QSOs with redshifts below 0.6 is good, despite the fact that the SVM was not trained for quasars. The \modulename{UGC} performance varies with redshift. As a consequence, redshifts larger than $0.4$ and lower than $0.02$ are less reliable. A suspiciously large peak of sources also appears in the redshift bin $0.070 < ${\tt redshift\_ugc}$ < 0.071$, where about $17\,000$ sources are found. It is estimated that most of the sources in this peak are some of the brightest in the \modulename{UGC} output and have SDSS redshifts below 0.04. About 40\% of these can be discarded by applying the previously mentioned cuts to sources with $0.070 < ${\tt redshift\_ugc}$ < 0.071$: $\ensuremath{G}\xspace > 17.5$, $\ensuremath{G_{\rm BP}}\xspace > 16.2,$ and $\ensuremath{G_{\rm RP}}\xspace > 15.0$ mag (see the \linktosec{cu8par}{apsis}{ugc} for details). \section{Total Galactic extinction (TGE) map} \label{sec:tge} \subsection{Objectives} \label{subsec:tge_objective} To support extragalactic studies, it was decided to use the extinction determinations obtained for single stars based on their astrometry and spectrophotometry \citep{DR3-DPACP-156} to estimate the total extinction from the Milky Way as a function of sky position, that is, the full cumulative foreground extinction by the Milky Way on distant extragalactic sources. Taking advantage of the HEALPix number encoded in the \linktoMainParam{gaia_source}{source_id}, a series of HEALPix maps of the total Galactic extinction are provided using a selected subset of sources in each HEALPix, which are referred to as extinction tracers. All-sky HEALPix maps of the total Galactic extinction are delivered in two tables at various resolutions (i.e.\ HEALPix levels). These are the tables \linktoAPTable{total_galactic_extinction_map} and \linktoAPTable{total_galactic_extinction_map_opt}, described below. The first of these tables contains HEALPix maps at levels 6 through 9 (corresponding to pixel sizes of 0.839 to 0.013 \ensuremath{\,\rm deg}$^2$), with extinction estimates for all HEALPixes that have at least three extinction tracers. The second table is a reduced version of the first, in which a subset of the pixels is used to construct a map at variable resolution, using the smallest HEALPix available with at least ten tracers for HEALPix levels 7 through 9. This extinction map is the first of its kind, as reported values are based on sources beyond the interstellar medium (ISM) in the disc of the Milky Way.
This differs from previous 2D extinction maps, where it is not clear to what distance the extinction is integrated, while for extant 3D maps, not every line of sight contains tracers beyond the ISM layer of the Galactic disc. As such, it is well suited for extra-galactic studies and comparisons with line-of-sight-integrated observations such as dust emission or diffuse gamma-ray emission. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures/TGE_opt.png} \caption{HEALPix map of the total Galactic extinction, built from HEALPixes between levels 6 and 9 (0.839 to 0.013 \ensuremath{\,\rm deg}$^2$), which are identified as being at the optimum resolution over their field of view.} \label{fig:tge_opt} \end{figure*} \subsection{Method} \label{subsec:tge_method} To estimate the extinction in each HEALPix, sources that are classified as stars by DSC (i.e. sources with \linktoMainParam{gaia_source}{classprob_dsc_combmod_star} $>0.5$; see Section \ref{sec:dsc}) and with stellar parameters consistent with being giants (as provided by the set of \modulename{GSP-Phot} APs from the `best' library from \cite{DR3-DPACP-156} and provided in the main \linktoMainTable{gaia_source} table) are used as extinction tracers. Giant stars are used as they are intrinsically bright and numerous outside the ISM layer of the Galactic disc. The selection of these tracers is based on \modulename{GSP-Phot} effective temperatures (\linktoMainParam{gaia_source}{teff_gspphot}) $3000 < \ensuremath{T_{\rm eff}}\xspace < 5700$\,K, and absolute magnitudes (\linktoAPParam{astrophysical_parameters}{mg_gspphot}) $4 > M_G > -10$. Given these criteria, the extinction parameters from the \modulename{GSP-Phot} best library come from those based on either the MARCS or Phoenix spectral libraries. From an analysis of extinction estimates from these two libraries, no significant systematic trends are found when comparing their extinctions on a per-HEALPix basis \citep{DR3-DPACP-160}. In addition, extinction tracers are required to be at least 300 pc above or below the Galactic plane ($b = 0$), or to have a Galactocentric radius of $R > 16$ kpc. To apply these criteria, the distance to the source provided by \modulename{GSP-Phot} (\linktoMainParam{gaia_source}{distance_gspphot}) is used. Once the extinction tracers for a given HEALPix are selected, if three or more tracers are available, the median \ensuremath{A_{\rm 0}}\xspace of the tracers\footnote{\ensuremath{A_{\rm 0}}\xspace is the extinction parameter from the adopted Fitzpatrick extinction law \citep{1999PASP..111...63F}, defined as the monochromatic extinction at 541.4\,nm. See the \linktosec{cu8par}{data}{xp} for details.} ---as given by the \modulename{GSP-Phot} parameter \linktoMainParam{gaia_source}{azero_gspphot}--- is taken as the estimate of the total Galactic extinction (\linktoAPParam{total_galactic_extinction_map}{a0}) for the HEALPix, while the uncertainty of the total Galactic extinction (\linktoAPParam{total_galactic_extinction_map}{a0_uncertainty}) is taken as the standard error of the sample mean of \ensuremath{A_{\rm 0}}\xspace of the tracers. The latter is a choice of convenience, as the small number of tracers in most of the HEALPixes prevents a meaningful estimate of quantiles. 
Both the median and the uncertainty are estimated after a 3-$\sigma$ cut about the median of the unclipped sample; this was done principally to remove outliers that were otherwise strongly impacting our estimate of the uncertainty. HEALPixes with fewer than three tracers have no extinction value assigned to them. A diagnostic flag \linktoAPParam{total_galactic_extinction_map}{status} is provided which is set to zero if the number of tracers is three or greater, while a non-zero value gives an indication as to why an insufficient number of tracers was found. The uncertainty of the TGE extinction is generally much smaller than the dispersion of the individual extinction measures of the tracers in the HEALPix, which can be dominated by intrinsic variation of extinction in the field defined by the HEALPix, especially at lower Galactic latitudes with significant extinction. To recover the standard deviation of the distribution of \ensuremath{A_{\rm 0}}\xspace measures of the tracers in a HEALPix, one should multiply the given uncertainty by the square root of the number of tracers used (\linktoAPParam{total_galactic_extinction_map}{num_tracers_used}). The full range of \ensuremath{A_{\rm 0}}\xspace extinction measures of the tracers (\linktoAPParam{total_galactic_extinction_map}{a0_min}, \linktoAPParam{total_galactic_extinction_map}{a0_max}) is also provided. The first table, \linktoAPTable{total_galactic_extinction_map}, contains HEALPix maps at four different HEALPix levels, from level 6 (49\,152 HEALPixes with an area of 0.84 \ensuremath{\,\rm deg}$^2$) to level 9 (3\,145\,728 HEALPixes with an area of 0.013 \ensuremath{\,\rm deg}$^2$), with the HEALPix level indicated with the parameter \linktoAPParam{total_galactic_extinction_map}{healpix_level}. This range of HEALPix levels ensures that a minimum number of tracers per HEALPix will be found at high Galactic latitudes, where the sky density of tracers is low, while allowing a higher resolution in areas of the sky where the density of tracers is high. (At level 9 only 1\% of the sky has more than 40 tracers per HEALPix.) For any given direction we determine the optimum HEALPix level, that is, the set of the smallest HEALPixes with at least ten tracers, to ensure a reliable estimate of the extinction and its uncertainty. However, as the base resolution is HEALPix level 6, all HEALPixes with fewer than ten tracers at this level are tagged as `optimum'. As in the level 6 map, the optimum map has full sky coverage at $|b| > 5\ensuremath{^\circ}$ (i.e. all HEALPixes at $|b| > 5\ensuremath{^\circ}$ have at least three tracers, so an \ensuremath{A_{\rm 0}}\xspace value is reported for each of them). In the HEALPix scheme, each HEALPix at level $n$ contains four sub-HEALPixes at level $n+1$, meaning that each of the four sub-HEALPixes must have at least ten tracers to allow all four to be tagged as optimum. This algorithm is repeated iteratively over each level, starting at the base level 6, until the lack of tracers in a sub-HEALPix prevents further subdivision, or until level 9 is reached. In the table \linktoAPTable{total_galactic_extinction_map}, the optimum HEALPixes are flagged as such with the boolean flag \linktoAPParam{total_galactic_extinction_map}{optimum_hpx_flag}. This algorithm ensures that the optimum HEALPixes do not overlap with one another, yet cover the entire sky. 
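The per-HEALPix estimation itself is compact enough to sketch in code; the illustrative Python fragment below follows the procedure described above, with the caveat that whether \linktoAPParam{total_galactic_extinction_map}{a0_min} and \linktoAPParam{total_galactic_extinction_map}{a0_max} refer to the clipped or the unclipped sample is an assumption here.

\begin{verbatim}
# Illustrative per-HEALPix estimator: one 3-sigma cut about the
# median of the unclipped A0 sample, then the median as a0 and the
# standard error of the mean as a0_uncertainty; returning None
# stands in for a non-zero status flag.
import numpy as np

def tge_pixel(a0_tracers, min_tracers=3):
    a0 = np.asarray(a0_tracers, dtype=float)
    if a0.size < min_tracers:
        return None
    a0 = a0[np.abs(a0 - np.median(a0)) <= 3.0 * np.std(a0)]
    return {'a0': np.median(a0),
            'a0_uncertainty': np.std(a0) / np.sqrt(a0.size),
            'a0_min': a0.min(), 'a0_max': a0.max(),
            'num_tracers_used': int(a0.size)}
\end{verbatim}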
The second table, \linktoAPTable{total_galactic_extinction_map_opt}, is a single optimum HEALPix map at level 9 provided for convenience, where each HEALPix adopts the extinction value of the optimum HEALPix in \linktoAPTable{total_galactic_extinction_map} coincident with or containing it. That is, if a HEALPix at level 6 is tagged as optimum in \linktoAPTable{total_galactic_extinction_map}, then all 64 of its level-9 sub-HEALPixes in the \linktoAPTable{total_galactic_extinction_map_opt} map will be assigned the \linktoAPParam{total_galactic_extinction_map}{a0} value of the level 6 HEALPix. The parameter \linktoAPParam{total_galactic_extinction_map_opt}{optimum_hpx_level} in this table indicates, for each HEALPix, the HEALPix level of the optimum HEALPix from which its \linktoAPParam{total_galactic_extinction_map_opt}{a0} value is taken. \subsection{Performance} \label{subsec:tge_performances} At the base level 6, only 2.8\% of the sky (1379 out of 49\,152 HEALPixes), close to the Galactic plane (with $|b| < 5\ensuremath{^\circ}$), has no \linktoAPParam{total_galactic_extinction_map}{a0} values because of an insufficient number of tracers. The fraction of HEALPixes with an insufficient number of tracers increases at the higher HEALPix levels as the HEALPixes become smaller: 5.2\% at level 7, 30.4\% at level 8, and 66.3\% at level 9. The average number of tracers for the HEALPixes with \ensuremath{A_{\rm 0}}\xspace estimates is 268.3 at level 6, but only 10.7 at level 9, while the average number of tracers for the optimum HEALPix map is 30.3. The optimum HEALPix map, \linktoAPTable{total_galactic_extinction_map_opt}, shown in Figure \ref{fig:tge_opt}, has the same sky coverage as the level 6 map, but is of higher resolution when a sufficient number of tracers are available. To better demonstrate this, we show a zoom into the Rho Ophiuchi region in Figure \ref{fig:tge_rho-oph}. Over the whole sky, only about 1\% of the HEALPixes at level 9 have more than 40 tracers, and thus the potential to be mapped at higher resolution. Figures showing the individual all-sky maps at levels 6 through 9 can be found in the \linktosec{cu8par}{apsis}{tge}, along with maps of the \linktoAPParam{total_galactic_extinction_map}{a0_uncertainty}. We note that the \linktoAPParam{total_galactic_extinction_map}{a0_uncertainty} is smallest at HEALPix level 6, with a mean value of 0.03 mag; this is due to the larger number of tracers contained in the HEALPixes at this level, whereas the mean \linktoAPParam{total_galactic_extinction_map}{a0_uncertainty} of the HEALPixes in \linktoAPTable{total_galactic_extinction_map} tagged as optimum (\linktoAPParam{total_galactic_extinction_map}{optimum_hpx_flag} $=1$) is 0.06 mag, as they cover various HEALPix levels. \begin{figure} \centering \includegraphics[width=.48\textwidth]{figures/TGE_rho-oph.png} \caption{ \ensuremath{A_{\rm 0}}\xspace towards Rho Ophiuchi from the TGE optimum HEALPix map (Fig.\,\ref{fig:tge_opt}) centred at $(l,b)=(-5\ensuremath{^\circ},18\ensuremath{^\circ})$. The solid white line in the upper right corner provides the angular scale of the image. 
The variable resolution of the optimum HEALPix map is particularly obvious towards the middle of the figure.} \label{fig:tge_rho-oph} \end{figure} In Fig.\,\ref{fig:tge_vs_planck}, the TGE $\ensuremath{A_{\rm 0}}\xspace$ estimate at the optimum HEALPix level 9 is plotted against the dust optical depth expressed as $A_V$ from \citet{2016A&A...596A.109P}\footnote{The Planck collaboration reports $E(B-V)$, which we convert to $A_V$ via $A_V=R_V E(B-V)$ and $R_V=3.1$. See the Planck Legacy Archive (\href{http://pla.esac.esa.int}{http://pla.esac.esa.int}) for details.}, once re-binned at the same HEALPix level. We see good agreement, as a linear fit using the median points with $0.2\le A_V \le 3$ results in a slope of $1.04 \pm 0.05$, albeit with an offset of $0.09 \pm 0.05$. It should be noted that the ratio $A_V / A_0$ for giants (stars with effective temperature $3000 < \ensuremath{T_{\rm eff}}\xspace < 5700$\,K) is $\sim 0.98$ (see the \linktosec{cu8par}{data}{xp}), meaning that the slope of TGE (converted to $A_V$) over Planck ($A_V$) is $1.04 \times 0.98 = 1.02$. Also worth bearing in mind is that there are a number of Planck maps of the dust distribution available on the Planck Legacy Archive; for example, using the map described in \citet{2016A&A...586A.132P} we find a slope of $0.90\pm0.04$ and an offset of $0.05\pm0.04$. Performing a linear fit in the same extinction range between TGE \ensuremath{A_{\rm 0}}\xspace and \citet{1998ApJ...500..525S} $A_V$ results in a slope of $0.98 \pm 0.04$ (offset: $0.10\pm0.04$), in agreement with the $1.04 \pm 0.05$ obtained using Planck. However, the same linear fit performed between TGE and the Bayestar map \citep{2019ApJ...887...93G} results in a slope of $1.20 \pm 0.04$ (offset: $0.01\pm0.04$), suggesting that the Bayestar map is systematically underestimating the extinction with respect to other extinction maps; see the discussion in \citet{DR3-DPACP-156}. Towards the limit where the extinction measured by Planck tends to zero, the TGE \ensuremath{A_{\rm 0}}\xspace tends to a non-zero value. This offset is found empirically by fitting a third-order polynomial to the median points for \ensuremath{A_{\rm 0}}\xspace$ < 0.4$ and obtaining the TGE \ensuremath{A_{\rm 0}}\xspace value at Planck $A_V=0$. The resulting offset is $0.10 \pm 0.03$ mag and starts to become evident at $A_V < 0.1$\,mag. The existence of this offset is likely due to the fact that the \modulename{GSP-Phot} extinction prior forces its extinction estimate to be non-negative, which creates a statistical bias at very low extinction values. Indeed, this \ensuremath{A_{\rm 0}}\xspace offset is of the order expected if the true uncertainty of the \ensuremath{A_{\rm 0}}\xspace estimates per source were 0.1 magnitude. See \citet{DR3-DPACP-156} for further discussion. \begin{figure} \centering \includegraphics[width=.49\textwidth]{figures/TGE_Planckscatter_level-opt_zoom.png} \caption{Extinction comparison between the TGE \ensuremath{A_{\rm 0}}\xspace optimum HEALPix map and the Planck $A_V$ HEALPix level 9 map at small extinction values. The colour scale shows the density of HEALPixes, the red dashed line represents unity, and the points with error bars are the median \ensuremath{A_{\rm 0}}\xspace and average absolute deviation computed in $A_V$ bins of width 0.025 mag. The red line is the result of a linear fit to the points. 
} \label{fig:tge_vs_planck} \end{figure} \begin{figure} \centering \includegraphics[width=.55\textwidth]{figures/tge_vs_Planck_per_hpx_level_pts_greys.png} \caption{Comparison of the extinction between the TGE \ensuremath{A_{\rm 0}}\xspace optimum HEALPix map and the Planck $A_V$ HEALPix level 9 map for extinctions up to 10 mag. The background grey scale is a density plot of the entire optimum HEALPix TGE map (comprising the optimum HEALPixes at several HEALPix levels). The dashed red line represents unity and the solid red line is a linear fit of the medians of all HEALPixes in the optimum HEALPix map with $0.5\le A_V \le 3$. Coloured symbols refer to the median \ensuremath{A_{\rm 0}}\xspace computed in $A_V$ bins of width 0.2 mag for various HEALPix levels that are used to assign the \ensuremath{A_{\rm 0}}\xspace value.} \label{fig:tge_vs_planck_per_hpx_level} \end{figure} Comparing TGE \ensuremath{A_{\rm 0}}\xspace to Planck $A_V$ over a larger interval highlights a possible bias at extinctions $A_V \ge 4$ mag. In Fig.\,\ref{fig:tge_vs_planck_per_hpx_level}, TGE is plotted versus Planck over an interval of ten magnitudes. A large dispersion in \ensuremath{A_{\rm 0}}\xspace is observed for the optimum map for $A_V > 4$ mag, and it can be seen that the different HEALPix levels do not behave in the same way. The coarser resolutions (levels 6 and 7) initially predict less extinction than Planck (for $4\le A_0 \le 5$ mag) whereas the finer resolutions either agree or predict higher extinction. Above an $A_V$ of 5 mag, only level 6 predicts less extinction than Planck, while the others predict more. Even for $A_V<4$ mag, where TGE and Planck are in very good agreement, a difference can be seen where the lower resolutions predict lower extinction. This is likely due to a selection effect: in a given HEALPix with variable extinction, more stars will be observed where the extinction is smaller. This will bias the extinction estimate for the HEALPix to lower values, and will be more obvious for larger HEALPixes. Finally, in Fig.\,\ref{fig:tge_vs_planck_map}, the residual map of TGE $A_0$ minus Planck $A_V$ is shown. TGE underestimates extinction with respect to Planck toward molecular clouds, where dust emission remains optically thin but where TGE estimates may be biased toward smaller values as unresolved areas with below-average extinction are oversampled, as mentioned above; see further discussion regarding high-extinction regions in the following section. Meanwhile, within about 30\ensuremath{^\circ} towards the Galactic centre, TGE shows more extinction than Planck, apart from the foreground molecular complexes we just mentioned. \begin{figure} \centering \includegraphics[width=.48\textwidth]{figures/TGE_minus_Planck.png} \caption{Residual sky map of TGE $A_0$ minus Planck $A_V$, using the optimum HEALPix level 9 map. Red values show regions where TGE predicts more extinction than Planck, whereas blue values show the opposite. } \label{fig:tge_vs_planck_map} \end{figure} \subsection{Use of \modulename{TGE} results} \label{subsec:tge_filtering} The \modulename{TGE} extinction maps estimate the total Galactic extinction \ensuremath{A_{\rm 0}}\xspace from the Milky Way ISM toward extragalactic sources, where \ensuremath{A_{\rm 0}}\xspace is the monochromatic extinction at 541.4\,nm. As mentioned above, $A_V / A_0$ is approximately equal to 0.98 for cool stars at $A_0 < 3$\,mag. 
However, in general, the effective extinction in a passband depends on the SED of the source; see the \linktosec{cu8par}{data}{xp} for a discussion on how to derive the extinction from \ensuremath{A_{\rm 0}}\xspace for any passband. As the selected extinction tracers were required to be beyond a certain minimum distance to ensure that they were outside the ISM layer of the Milky Way's disc, sources in nearby galaxies may also be selected as tracers. This means that the extinction towards the LMC and SMC will be a combination of Galactic extinction, inter-galactic extinction, and extinction in the Magellanic clouds (although the latter will be the dominant contribution). Another factor that will influence the amount of reported extinction in these directions stems from the distance prior used in \modulename{GSP-Phot}, which assumes that the sources are Galactic. As such, the extinction will be overestimated. An evaluation of this overestimation can be obtained via a comparison with an external data set. Indeed, in Fig.\,\ref{fig:tge_vs_planck}, there is a cloud of points with a locus stretching from around $A_V=0.2$, $\ensuremath{A_{\rm 0}}\xspace=0.8$ to $A_V=0.4$, $\ensuremath{A_{\rm 0}}\xspace=1.2$ that consists entirely of lines of sight towards the Magellanic clouds. Comparing the median TGE $A_0$ (1.0 mag) to the median Planck $A_V$ (0.4 mag) towards the LMC reveals a difference of 0.6 mag. These values are both higher than the extinction found using near-infrared observations \citep[$A_V$ = 0.3 mag; ][]{2007ApJ...662..969I} and in the visible \citep[$A_V$ = 0.24 mag; ][]{2013MNRAS.431.1565W}. This difference is likely not only due to the \modulename{GSP-Phot} distance prior, but also to variations in dust properties in the LMC/SMC. Although the absolute level of extinction in these Galactic satellites needs to be interpreted with caution, the relative variations evidencing structured patterns are most certainly real (see Fig.\,\ref{fig:tge_LMC}). Because extinction tracers are required to be outside the dust layer of the Milky Way, they must be at greater distances at lower Galactic latitudes. This, together with the effect of increasing extinction and Gaia\xspace's magnitude limit, means that at very low latitudes it is not possible to find a sufficient number of tracers outside the ISM layer of the Milky Way with which to make a reliable estimate of the total Galactic extinction. This explains the band of HEALPixes at $b \approx 0$ with no extinction values. Indeed, we recommend that the map should not be used for latitudes $|b| < 5\ensuremath{^\circ}$. Also, \modulename{GSP-Phot} sets an upper limit of ten magnitudes on its estimate of \ensuremath{A_{\rm 0}}\xspace per source, and so any HEALPixes with an extinction near this value should be interpreted as lower bounds. However, as suggested by Fig.\,\ref{fig:tge_vs_planck_per_hpx_level}, our maps may instead be overestimating extinction toward these lines of sight with respect to Planck, though we point out that HEALPixes with $\ensuremath{A_{\rm 0}}\xspace > 4$\,mag are at low Galactic latitude and make up only 2\% of the sky. Furthermore, Planck estimates towards the Galactic plane may be underestimated as a consequence of assuming a single mean dust temperature for the whole line of sight. Further details of the TGE data products are documented in the \linktosec{cu8par}{apsis}{tge}. 
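As a concrete illustration of the recommended usage, a query of the level-9 optimum map might look as follows; the sketch assumes the NESTED ordering used for the \linktoMainParam{gaia_source}{source_id}-based HEALPix numbering, and that the \texttt{a0} column has been downloaded into an array indexed by the level-9 pixel number (a local-layout assumption, not the archive interface).

\begin{verbatim}
# Sketch of querying the level-9 optimum map at Galactic (l, b).
import healpy as hp

NSIDE_LEVEL9 = 512  # HEALPix level 9

def tge_a0(l_deg, b_deg, a0_map):
    if abs(b_deg) < 5.0:
        raise ValueError('TGE is not recommended for |b| < 5 deg')
    ipix = hp.ang2pix(NSIDE_LEVEL9, l_deg, b_deg,
                      nest=True, lonlat=True)
    return a0_map[ipix]
\end{verbatim}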
\begin{figure} \centering \includegraphics[width=.48\textwidth]{figures/TGE_LMC.png} \caption{\ensuremath{A_{\rm 0}}\xspace towards the LMC from the TGE optimum HEALPix map (Fig.\,\ref{fig:tge_opt}), centred at $(l,b)=(280.0\ensuremath{^\circ},-33.0\ensuremath{^\circ})$. The estimated offset of \ensuremath{A_{\rm 0}}\xspace=0.6 mag has been subtracted. The solid white line in the bottom left corner provides the angular scale of the image.} \label{fig:tge_LMC} \end{figure} \section{Beyond Gaia DR3} \label{sec:beyond_gdr3} We have presented the non-stellar and classification modules from CU8 in their present status, as they stand for \gdr{3}. However, they are in constant evolution and changes are already planned for \gdr{4} and later, which we summarise for each module in this section. Although the intrinsic performance of \modulename{DSC} is very good, once we take into account the class priors ---as we do for all results shown in this paper--- the purities of the classified samples are modest. In preparation for \gdr{4}, we will aim to improve this, for example by optimising the feature set in Allosmod and how this is used. We will also reconsider the class definitions and the training data, in particular for white dwarfs and physical binaries. As Specmod uses the entire BP/RP\xspace spectrum, we expected better performance (compared to Allosmod), and so we will investigate improving the classifier. We may also introduce filters to remove the classifications of the lowest quality data (which are the main determinant of the low purities). OA will be upgraded by implementing its own outlier detector, which will be mostly based on unsupervised clustering algorithms. Additionally, we will improve the statistical description and the templates that were used for \gdr{3}. The functionality offered by the GUASOM visualisation tool will be extended in order to allow the user to perform and explore their own clustering analysis. QSOC will use epoch BP/RP\xspace spectra re-sampled into logarithmic wavelength bins in order to overcome the issues we encountered while using the Hermite spline polynomials associated with the internal representation of the BP/RP\xspace spectra. This internal representation effectively tends to produce wiggles whose strength can be comparable to that of quasar emission lines in faint $G \geq 19$ mag spectra \citep{DR3-DPACP-157}. This solution will concurrently allow us to use sampled BP/RP\xspace spectra with uncorrelated noise on their flux, as the algorithm described in \cite{2016MNRAS.460.2811D} is not optimised to deal with full covariance matrices. The performance of the \modulename{UGC} redshift estimator strongly depends on the training set used. As more epochs are incorporated in the BP/RP\xspace spectra, we expect to have more (and generally fainter) sources with redshifts above 0.4 available for inclusion in the training set, thus improving the performance especially for higher redshifts. We will also investigate optimisation of the SVM model parameters in order to reduce the large variability in the performance with redshift and to minimise the positive bias for bright, low-redshift objects. In future data releases, we can expect the \modulename{TGE} maps to improve along with \modulename{GSP-Phot} itself \citep{DR3-DPACP-156}. In particular, we expect that the number of sources with stellar parameters will increase, which will improve the reliability of the \modulename{TGE} maps, and possibly allow for maps at a resolution higher than HEALPix level 9. 
\section*{Acknowledgements\label{sec:acknowl}} \addcontentsline{toc}{chapter}{Acknowledgements} This work presents results from the European Space Agency (ESA) space mission Gaia\xspace. Gaia\xspace\ data are being processed by the Gaia\xspace\ Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular the institutions participating in the Gaia\xspace\ MultiLateral Agreement (MLA). The Gaia\xspace\ mission website is \url{https://www.cosmos.esa.int/gaia}. The Gaia\xspace\ archive website is \url{https://archives.esac.esa.int/gaia}. Acknowledgements are given in Appendix~\ref{ssec:appendixA}. \bibliographystyle{aa}
\section{Introduction} \label{intro} Entanglement-assisted quantum error correcting codes (EAQECCs) play an important role in quantum information processing and quantum computation\cite{shor,cal1}. Arbitrary classical linear codes can be transformed into EAQECCs by using pre-shared entanglement between the sender and the receiver under the entanglement-assisted (EA) formalism\cite{bru06}. Let $q$ be a prime power. A $q$-ary $[[n,k,d;c]]$ EAQECC that encodes $k$ information qubits into $n$ channel qubits with the help of $c$ pairs of maximally-entangled Bell states (ebits) can correct up to $\lfloor\frac{d-1}{2}\rfloor$ errors, where $d$ is the minimum distance of the code. A $q$-ary $[[n,k,d;c]]$ EAQECC is denoted by $[[n,k,d;c]]_{q}$. Currently, many works have focused on the construction of EAQECCs based on classical linear codes, see \cite{Wil1,Hsi,lai3,lai,lai2,Fujiwara,Hsieh,Wilde,Lu1,guo}. As in classical coding theory, one of the central tasks in quantum coding theory is to construct quantum codes and EA-quantum codes with the best possible minimum distance. {\bf Theorem 1.1 \cite{bru06,lai}.} (EA-Quantum Singleton Bound) An $[[n,k,d;c]]_{q}$ EAQECC satisfies $$2(d-1)\leq n-k+c,$$ where $0\leq c \leq n-1$. An EAQECC achieving this bound is an EA-quantum maximum-distance-separable (EAQMDS) code. According to \cite{Ketkar}, there are no nontrivial MDS stabilizer codes of lengths exceeding $q^{2}+1$ except when $q$ is even and $d=4$ or $d=q^{2}$, in which case $n\leq q^{2}+2$. Furthermore, it is a very difficult task to construct a quantum MDS code of length $n\leq q^2+1$ with minimum distance larger than $q+1$\cite{Jin,Chen,Kai2,ZhangT1}. Therefore, in order to achieve a larger minimum distance, one needs to construct an EA-quantum MDS code. The following proposition is one of the most frequently used construction methods. {\bf Proposition 1.2} \cite{bru06,Wil1}.\ \ If $\mathcal {C}$$=[n,k,d]_{q^{2}}$ is a classical code over $F_{q^{2}}$ and $H$ is its parity check matrix, then $\mathcal{C}$$^{\perp _{h}}$ EA stabilizes an $[[n,2k-n+c,d;c]]_{q}$ EAQECC, where $c=$rank$(HH^{\dagger})$ is the number of maximally entangled states required and $H^{\dagger}$ is the conjugate transpose of $H$ over $F_{q^{2}}$. In recent years, scholars have constructed several entanglement-assisted quantum codes with good parameters in \cite{bru06,Wil1,Hsi,lai3,lai,lai2,Fujiwara,Hsieh,Wilde,Qian1,Qian2,Chen2,Liu,Luo1,Galindo,Qian11,tian,pang,Zhu22,Guenda1}. Many classes of EAQMDS codes have been constructed by different methods, in particular, by the Hermitian constructions from cyclic codes, constacyclic codes or negacyclic codes\cite{Chen2,Chen3,Liu,Lu22,Lu3}. In \cite{Lu1,Li1}, we proposed the concept of a decomposition of the defining set of cyclic codes, and constructed some good entanglement-assisted quantum codes with the help of this concept\cite{Li1}. In this paper, we construct two families of EA-quantum MDS codes of length $n$ from cyclic codes. More precisely, our main contribution on new $q$-ary quantum MDS codes is as follows: (1) $$[[\frac{q^{2}+1}{a},\frac{q^{2}+1}{a}-2d+6,d;4]]$$ where $q=am+l$, $a=(l^2+1)$, $l$ is an odd number, and $(l+1)m+3\leq d \leq (3l-4)m+3$ is odd; (2) $$[[\frac{q^{2}+1}{a},\frac{q^{2}+1}{a}-2d+6,d;4]]$$ where $q=am+l$, $a=\frac{(l^2+1)}{5}$, $l=10t+3$ or $l=10t+7$ is an odd number, and $(\frac{l-1}{2}+\lceil\frac{l}{10}\rceil)m+5\leq d \leq (l+1)m+5$ is odd. 
In construction (1), consuming four pairs of maximally entangled states, we obtain a family of EA-quantum MDS codes whose minimum distance is up to twice as large as that of the standard quantum MDS codes in Refs.\cite{Chen,ZhangT1}. Comparing the parameters with all known EA-quantum MDS codes, we find that these quantum MDS codes are new in the sense that their parameters are not covered by the codes available in the literature. The paper is organized as follows. In Section 2, basic notations and results about EA-quantum codes and cyclic codes are provided. In Section 3, we give new classes of EA-quantum MDS codes. The conclusion is given in Section 4. \section{Preliminaries} \label{sec:1} In this section, we review some basic results on cyclic codes, BCH codes, and EAQECCs for the purpose of this paper. Details on BCH codes and cyclic codes can be found in standard textbooks on coding theory \cite{Macwilliams,Huffman}; for EAQECCs please see Refs.\cite{bru06,Wil1,Hsi,lai3,lai,Fujiwara,Lu1}. Let $q$ be a prime power. $F_{q^{2}}$ denotes the finite field with $q^{2}$ elements. For any $\alpha \in F_{q^{2}}$, the conjugate of $\alpha$ is denoted by $\overline{\alpha}=\alpha^{q}$. Given two vectors $\mathbf{x}=(x_{1},x_{2},\cdots,x_{n})$ and $\mathbf{y}=(y_{1},y_{2},\cdots,y_{n})\in F_{q^{2}}^{n}$, their Hermitian inner product is defined as $(\mathbf{x},\mathbf{y})_{h}=\sum \overline{x_{i}}y_{i}=\overline{x_{1}}y_{1}+\overline{x_{2}}y_{2}+\cdots+\overline{x_{n}}y_{n}.$ For a linear code $\mathcal{C}$ over $F_{q^{2}}$ of length $n$, the Hermitian dual code $\mathcal{C}^{\bot _{h}}$ is defined as $\mathcal{C}^{\bot _{h}}=\{x\in F_{q^{2}}^{n} | (x,y)_{h}=0, \forall y $$\in \mathcal{C}\}$. If $\mathcal{C}^{\bot _{h}}\subseteq\mathcal{C} $, then $\mathcal{C}$ is called a Hermitian dual-containing code, and $\mathcal{C}^{\bot _{h}}$ is called a Hermitian self-orthogonal code. We now recall some results about cyclic codes. For a cyclic code $\mathcal{C}$, each codeword $c = (c_{0}, c_{1}, \cdots, c_{n-1})$ is customarily represented in its polynomial form: $c(x) = c_{0} + c_{1}x + \cdots + c_{n-1}x^{n-1},$ and the code $\mathcal{C}$ is in turn identified with the set of all polynomial representations of its codewords. The proper context for studying cyclic codes is the residue class ring $\mathcal{R}_{n}=\mathbf{F}_{q^{2}}[x]/(x^{n}-1)$; $xc(x)$ corresponds to a cyclic shift of $c(x)$ in the ring $\mathcal{R}_{n}$. As is well known, a linear code $\mathcal{C}$ of length $n$ over $F_{q^{2}}$ is cyclic if and only if $\mathcal{C}$ is an ideal of the quotient ring $\mathcal{R}_{n}=\mathbf{F}_{q^{2}}[x]/(x^{n}-1)$. It follows that $\mathcal{C}$ is generated by a monic factor of $(x^{n}-1)$, i.e., $\mathcal{C}=\langle f(x) \rangle$ with $f(x)|(x^{n}-1)$; $f(x)$ is called the generator polynomial of $\mathcal{C}$. Let $\gamma$ be a primitive $n$-th root of unity in some splitting field of $x^{n}-1$ and $T=C_{b}\cup C_{b+1}\cup \cdots \cup C_{b+\delta-2}$. A cyclic code $\mathcal{C}$ of length $n$ with generator polynomial $g(x)=\Pi_{i\in T}(x-\gamma^{i})$ is called a BCH code with designed distance $\delta$, and $T$ is called the defining set of $\mathcal{C}$. Let $s$ be an integer with $0\leq s < n$; the $q^{2}$-cyclotomic coset modulo $n$ that contains $s$ is defined by the set $C_{s}=\{s, sq^{2}, sq^{2\cdot 2}, \cdots, sq^{2(k-1)} \}$ (mod $n$), where $k$ is the smallest positive integer such that $sq^{2k} \equiv s$ (mod $n$). 
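These cyclotomic cosets are straightforward to compute in practice; the following short Python function (ours, for illustration only) generates $C_{s}$ by repeated multiplication by $q^{2}$ modulo $n$.

\begin{verbatim}
# Compute the q^2-cyclotomic coset C_s modulo n, as defined above.
def cyclotomic_coset(s, n, q):
    coset, x = [], s % n
    while x not in coset:
        coset.append(x)
        x = (x * q * q) % n
    return sorted(coset)

# Example: q = 13, a = 10 (l = 3) gives n = 17 and s = 9, and
# cyclotomic_coset(9, 17, 13) returns [8, 9], i.e. C_s = {s, s-1}.
\end{verbatim}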
We can see that the defining set $T$ is a union of some $q^{2}$-cyclotomic cosets modulo $n$ and $dim(\mathcal{C}) = n-|T|$. Let $\mathcal {C}$ be a cyclic code with a defining set $T = \bigcup \limits_{s \in S} C_{s}$. Denoting $T^{-q}=\{n-qs | s\in T \}$, we can deduce that the defining set of $\mathcal {C}$$^{\bot _{h}}$ is $T^{\perp _{h}} =$$ \mathbf{Z}_{n} $$\backslash T^{-q}$, see Ref. \cite{Lu1}. A cyclotomic coset $C_{s}$ is {\it skew symmetric} if $n-qs$ mod $n\in C_{s}$; otherwise it is skew asymmetric. {\it Skew asymmetric cosets} $C_{s}$ and $C_{n-qs}$ come in pairs; we use $(C_{s},C_{n-qs})$ to denote such a pair. The following results on $q^{2}$-cyclotomic cosets and dual-containing cyclic codes form the basis of our discussion. According to \cite{Lu1,Li2}, the condition $\mathcal{C}^{\perp _{h}}\subseteq$ $\mathcal{C}$ can be described by the relationship among the cyclotomic cosets $C_{s}$. In order to construct EA-quantum MDS codes with larger minimum distance for code lengths $n\leq q^2+1$, we introduce the fundamental definition of a decomposition of the defining set of cyclic codes. {\bf Definition 2.1\cite{Lu1}} {\it Let $ \mathcal {C}$ be a cyclic code of length $n$ with defining set $T$. Denote $T_{ss}=T \cap$$T^{-q}$ and $T_{sas}=T \setminus $$T_{ss}$, where $T^{-q}=\{n-qx | x\in T \}$. Then $T=T_{ss} \cup T_{sas}$ is called a decomposition of the defining set of $\mathcal{C}$.} To determine $T_{ss}$ and $T_{sas}$, we give the following lemma to characterize them. {\bf Lemma 2.2 \cite{Li2}.} Let $\gcd(q, n) = 1$, $\mathrm{ord}_{n}(q^{2})=m$, $0 \leq x, y, z \leq n-1$. (1) $C_{x}$ is skew symmetric if and only if there is a $t\leq \lfloor\frac{m}{2}\rfloor$ such that $x \equiv xq^{2t+1}$ (mod $n$). (2) If $C_{y}\neq C_{z}$, then $(C_{y}, C_{z})$ forms a skew asymmetric pair if and only if there is a $t\leq \lfloor\frac{m}{2}\rfloor$ such that $y \equiv zq^{2t+1}$ (mod $n$) or $z \equiv yq^{2t+1}$ (mod $n$). Using the decomposition of a defining set $T$, one can calculate the number of needed ebits with an algebraic method. {\bf Lemma 2.3. \cite{Lu1}} Let $T$ be a defining set of a cyclic code $ \mathcal {C}$, and let $T=T_{ss}\cup T_{sas}$ be a decomposition of $T$. Using $\mathcal{C}$$^{\perp_{h}}$ as the EA stabilizer, the optimal number of needed ebits is $c=|T_{ss}|$. {\bf Lemma 2.4.} (The BCH bound) Let $\mathcal{C}$ be a cyclic code of length $n$ with defining set $T$. Assume $T$ contains $d-1$ consecutive elements for some integer $d$. Then the minimum distance of $\mathcal{C}$ is at least $d$. {\bf Theorem 2.5.} Let $\mathcal{C}$ be an $[n,k,d]_{q^{2}}$ MDS code with defining set $T$, and let the decomposition of $T$ be $T=T_{ss}\cup T_{sas}$. Then $\mathcal{C}$$^{\perp_{h}}$ EA stabilizes a $q$-ary $[[n,n-2|T|+|T_{ss}|,d \geq \delta ; |T_{ss}|]]$ EA-quantum MDS code. \section{New EA-quantum MDS Codes of Length $n=\frac{q^{2}+1}{a}$} \label{sec:1} In this section, we consider cyclic codes over $F_{q^{2}}$ of length $n=\frac{q^{2}+1}{a}$ to construct EA-quantum codes, where $q=am+l$, $a=(l^2+1)$ and $l$ is an odd number. To do this, we give a decomposition of the defining set of cyclic codes over $F_{q^{2}}$ of length $n$. Let $n=\frac{q^{2}+1}{a}$ and $s=\frac{n+1}{2}$, where $q=am+l$, $a=(l^2+1)$ and $l$ is an odd number. 
Obviously, the $q^{2}$-cyclotomic cosets modulo $n$ are $$C_{s}=\{s,s-1\},C_{s+1}=\{s+1,s-2\},\cdots,C_{n-2}=\{n-2,2\},C_{n-1}=\{n-1,1\}.$$ \subsection{$q=am+l$, $a=(l^2+1)$ } \label{sec:2} In this subsection, we assume that $q$ is an odd prime power of the form $q=am+l$, $a=(l^2+1)$, $l$ is an odd number and $s=\frac{n+1}{2}$. We construct new $q$-ary EA-quantum MDS codes of length $n=\frac{q^{2}+1}{a}$ from cyclic codes. Let us first give a useful lemma for our constructions. {\bf Lemma 3.1:} Let $q=am+l$, $a=l^2+1$, with $l$ an odd number, $n=\frac{q^{2}+1}{a}$ and $s=\frac{n+1}{2}$. If $\mathcal{C}$ is a $q^{2}$-ary cyclic code of length $n$ with defining set $T=\bigcup_{i=0}^{k}C_{s+i}$, where $0\leq k\leq (\frac{3l-1}{2})m$, and the decomposition of the defining set is $T=T_{ss}\bigcup T_{sas}$, then $T_{ss}=\{C_{s+\frac{l+1}{2}m},C_{s+\frac{l-1}{2}m}\}$, and $|T_{ss}|=4$. {\bf Proof:} Since $-(s+\frac{l+1}{2}m)q\equiv -(s+\frac{l+1}{2}m)\cdot(am+l)$ $\equiv \frac{l^2+1}{2}m^2+(l+\frac{l-1}{2})m+1$ $\equiv s+\frac{l-1}{2}m$ (mod $n$), $\{C_{s+\frac{l+1}{2}m},C_{s+\frac{l-1}{2}m}\}$ forms a skew asymmetric pair. Let $T=\bigcup_{i=0}^{k}C_{s+i}$, where $0\leq k\leq (\frac{3l-1}{2})m$. From the decomposition of the defining set $T$, one obtains $T_{sas}=T\backslash T_{ss}$. In order to verify that $|T_{ss}| =4$ if $0\leq k\leq (\frac{3l-1}{2})m$, from Definition 2.1 and Lemma 2.2, we need to verify that there is no skew symmetric cyclotomic coset, and that no two cyclotomic cosets form a skew asymmetric pair, in $T_{sas}$. For $x\in \{0,1,2,\cdots,n-1\}$, the $q^{2}$-cyclotomic cosets modulo $n$ are $C_{x}=\{x,n-x\}$. Let $I=\{s+i|0\leq i\leq (\frac{3l-1}{2})m\}$. We only need to verify that for all $x\in I$, $-qx$ (mod $n$)$\not \in I$ and $T_{ss}=\{C_{s+\frac{l+1}{2}m},C_{s+\frac{l-1}{2}m}\}$. By Lemma 2.2, this means that for $x,y\in I$, $C_{x}$ is not a skew symmetric cyclotomic coset, and $C_{x},C_{y}$ do not form a skew asymmetric pair, if and only if $x+yq\not\equiv0$ mod $n$. Divide $I$ into $\frac{3l-1}{2}$ parts $I_{1}=[s,s+m]$, $I_{2}=[s+m+1,s+2m-1]$, $I_{3}=[s+2m+1,s+3m]$, $\cdots$, and $I_{\frac{3l-1}{2}}=[s+(\frac{3l-1}{2}-1)m+1,s+(\frac{3l-1}{2})m]$. Since $q=(l^2+1)m+l$, $n=\frac{q^2+1}{l^2+1}=(l^2+1)m^2+2lm+1$ and $s=\frac{n+1}{2}=\frac{l^2+1}{2}m^2+lm+1$, if $ x,y\in I_{1}$, then $(\frac{l^2+1}{2}m+\frac{l+1}{2})\cdot n<(\frac{l^2+1}{2}m+\frac{l+1}{2})n+\frac{l^2+1}{2}m+\frac{l+1}{2}=s(q+1)\leq x+yq \leq (s+m)(q+1)<(\frac{l^2+1}{2}m+\frac{l+1}{2}+1)\cdot n$; if $ x,y\in I_{2}$, then $(\frac{l^2+1}{2}m+\frac{l+1}{2}+1)\cdot n<(s+m+1)\cdot (q+1)\leq x+yq \leq (s+2m-1)(q+1)<(\frac{l^2+1}{2}m+\frac{l+1}{2}+2)\cdot n$; and proceeding in this way, one obtains that if $ x,y\in I_{i+1}$, then $(\frac{l^2+1}{2}m+\frac{l+1}{2}+i)\cdot n<(s+im+1)\cdot (q+1)\leq x+yq \leq (s+(i+1)m)(q+1)<(\frac{l^2+1}{2}m+\frac{l+1}{2}+i+1)\cdot n$, where $0\leq i\leq \frac{3l-1}{2}-1$. Hence, there are no skew symmetric cyclotomic cosets, and no two cyclotomic cosets form a skew asymmetric pair, in $T\setminus \{C_{s+\frac{l+1}{2}m},C_{s+\frac{l-1}{2}m}\}$. This implies that $T_{ss}=\{C_{s+\frac{l+1}{2}m},C_{s+\frac{l-1}{2}m}\}$ and $|T_{ss}|=4$ when the defining set is $T=\bigcup_{i=0}^{k}C_{s+i}$ with $0\leq k\leq (\frac{3l-1}{2})m$.
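Before turning to the code construction, we note that the decomposition in Lemma 3.1 can also be checked numerically; the sketch below (ours, for illustration, reusing \texttt{cyclotomic\_coset} from Section 2) computes $T_{ss}=T\cap T^{-q}$ directly.

\begin{verbatim}
# Numerical check of Lemma 3.1: build T as the union of the cosets
# C_{s+i}, i = 0..k, and intersect it with T^{-q}.
def t_ss(q, a, k):
    n = (q * q + 1) // a
    s = (n + 1) // 2
    T = set()
    for i in range(k + 1):
        T.update(cyclotomic_coset(s + i, n, q))
    return T & {(n - q * x) % n for x in T}

# Example: q = 13, a = 10 (l = 3, m = 1) gives len(t_ss(13, 10, k))
# == 4 for 2 <= k <= 4, i.e. for (l+1)m/2 <= k <= (3l-1)m/2.
\end{verbatim}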
{\bf Theorem 3.2:} Let $q$ be an odd prime power of the form $q=am+l$, $a=l^2+1$, with $l$ an odd number, and let $n=\frac{q^{2}+1}{a}$. Then there exist $q$-ary $[[\frac{q^{2}+1}{a},\frac{q^{2}+1}{a}-2d+6,d;4]]$ EA-quantum MDS codes, where $(l+1)m+3\leq d \leq (3l-4)m+3$ is odd. {\bf Proof:} Consider the cyclic codes over $F_{q^{2}}$ of length $n=\frac{q^2+1}{a}$ with defining set $T=\bigcup_{i=0}^{k}C_{s+i}$, where $0\leq k\leq (\frac{3l-1}{2})m$, and $q$ is an odd prime power of the form $q=am+l$, $a=(l^2+1)$. By Lemma 3.1, we have $c=|T_{ss}|=4$ if $\frac{l+1}{2}m\leq k\leq (\frac{3l-1}{2})m$. Since every $q^{2}$-cyclotomic coset satisfies $C_{s+x}=\{s+x,s-x-1\}$, $0\leq x\leq s-1$ and $s=\frac{n+1}{2}$, we obtain that $T$ consists of the $2(k+1)$ integers $\{s-k-1,\cdots,s-2,s-1,s,s+1,\cdots,s+k\}$. This implies that $\mathcal{C}$ has minimum distance at least $2k+3$. Hence, $\mathcal{C}$ is a $q^{2}$-ary cyclic code with parameters $[n,n-2(k+1),\geq 2k+3]$. Combining Theorem 2.5 with the EA-quantum Singleton bound, we obtain an EA-quantum MDS code with parameters $[[\frac{q^{2}+1}{a},\frac{q^{2}+1}{a}-2d+6,d;4]]_{q}$, where $(l+1)m+3\leq d \leq (3l-4)m+3$ is odd. \begin{center} Table 1 EAQMDS codes with $n=\frac{q^{2}+1}{a}$, $a=l^2+1$ \\ \begin{tabular}{lllllllllll} \hline &q &$[[n,k,d;4]]_{q}$ &d is odd \\ \hline $q=am+l$, $a=l^2+1$ & 13 &$[[17,23-2d,d;4]]_{13}$ &$7\leq d \leq 11$ \\ \quad\quad $l=3$ & 23 &$[[53,59-2d,d;4]]_{23}$ &$11\leq d \leq 19$ \\ & 43 &$[[185,191-2d,d;4]]_{43}$ &$19\leq d \leq 35$ \\ \quad\quad$l=5 $ & 31 &$[[37,43-2d,d;4]]_{31}$ &$9\leq d \leq 17$ \\ & 83 &$[[265,271-2d,d;4]]_{83}$ &$21\leq d \leq 45$\\ & 109 &$[[457,463-2d,d;4]]_{109}$ &$27\leq d \leq 59$ \\ \quad\quad $l=7$ & 107 &$[[229,235-2d,d;4]]_{107}$ &$19\leq d \leq 43$ \\ & 157 &$[[493,499-2d,d;4]]_{157}$ &$27\leq d \leq 63$\\ & 257 &$[[1321,1327-2d,d;4]]_{257}$ &$43\leq d \leq 103$ \\ \quad\quad $l=9$ & 173 &$[[365,371-2d,d;4]]_{173}$ &$23\leq d \leq 55$ \\ & 337 &$[[1385,1391-2d,d;4]]_{337}$ &$43\leq d \leq 107$\\ & 419 &$[[2141,2147-2d,d;4]]_{419}$ &$53\leq d \leq 133$ \\ \hline \end{tabular} \end{center} \subsection{$q=am+l$, $a=\frac{l^2+1}{5}$} \label{sec:3} In this subsection, we assume that $q$ is an odd prime power of the form $q=am+l$, $a=\frac{l^2+1}{5}$, where $l=10t+3$ or $l=10t+7$ is an odd number, and $s=\frac{n+1}{2}$. We construct new $q$-ary EA-quantum MDS codes of length $n=\frac{q^{2}+1}{a}$ from cyclic codes. Obviously, for these cyclic codes, the $q^{2}$-cyclotomic cosets $C_{i}$ modulo $n$ are $C_{s}=\{s,s-1\},C_{s+1}=\{s+1,s-2\},\cdots,C_{n-2}=\{n-2,2\},C_{n-1}=\{n-1,1\}$. First, we give a useful lemma for our constructions. {\bf Lemma 3.3:} Let $q=am+l$, $a=\frac{l^2+1}{5}$, where $l=10t+3$ or $l=10t+7$ is an odd number, $n=\frac{q^{2}+1}{a}$ and $s=\frac{n+1}{2}$, and let $\mathcal{C}$ be a $q^{2}$-ary cyclic code of length $n$ with defining set $T$. (a) If $l=10t+3$, $T=\bigcup_{i=0}^{k}C_{s+i}$, where $0\leq k\leq \frac{l+1}{2}m+1$, and the decomposition of the defining set is $T=T_{ss}\bigcup T_{sas}$, then $T_{ss}=\{C_{s+\frac{l+3}{4}m},C_{s+\lfloor \frac{l}{10} \rfloor m}\}$, and $|T_{ss}|=4$. (b) If $l=10t+7$, $T=\bigcup_{i=0}^{k}C_{s+i}$, where $0\leq k\leq \frac{l+1}{2}m+2$, and the decomposition of the defining set is $T=T_{ss}\bigcup T_{sas}$, then $T_{ss}=\{C_{s+\frac{(\frac{l-1}{2}+\lceil\frac{l}{10}\rceil)}{2}m},C_{s-\lceil\frac{l}{10}\rceil m-1}\}$, and $|T_{ss}|=4$. 
{\bf Proof:} For $l=10t+3$, since $-(s+\frac{l+3}{4}m)q\equiv -(\frac{l^2+1}{10}m^2+lm+2\frac{l^2+1}{5})(\frac{l^2+1}{5}m+l)$ $\equiv s+\lfloor \frac{l}{10} \rfloor m$ mod $n$, $\{C_{s+\frac{l+3}{4}m},C_{s+\lfloor \frac{l}{10} \rfloor m}\}$ forms a skew asymmetric pair. For $l=10t+7$, since $-(s+\frac{(\frac{l-1}{2}+\lceil\frac{l}{10}\rceil)}{2}m)q\equiv -(\frac{l^2+1}{10}m^2+lm+3+\frac{(\frac{l-1}{2}+\lceil\frac{l}{10}\rceil)}{2}m)(\frac{l^2+1}{5}m+l)$ $\equiv s-\lceil\frac{l}{10}\rceil m-1$ mod $n$, $\{C_{s+\frac{(\frac{l-1}{2}+\lceil\frac{l}{10}\rceil)}{2}m},C_{s-\lceil\frac{l}{10}\rceil m-1}\}$ forms a skew asymmetric pair. Let $T=\bigcup_{i=0}^{k}C_{s+i}$, where $0\leq k\leq \frac{l+1}{2}m+1$ if $l=10t+3$ and $0\leq k\leq \frac{l+1}{2}m+2$ if $l=10t+7$. From the decomposition of the defining set $T$, one obtains $T_{sas}=T\backslash T_{ss}$. For $x\in \{0,1,2,\cdots,n-1\}$, the $q^{2}$-cyclotomic cosets modulo $n$ are $C_{x}=\{x,n-x\}$. Let $I=\{s+i|0\leq i\leq \frac{l+1}{2}m+2\}$. We only need to verify that for all $x\in I$, $-qx$ (mod $n$)$\not \in I$, and that $T_{ss}=\{C_{s+\frac{l+3}{4}m},C_{s+\lfloor \frac{l}{10} \rfloor m}\}$ if $l=10t+3$, while $T_{ss}=\{C_{s+\frac{(\frac{l-1}{2}+\lceil\frac{l}{10}\rceil)}{2}m},C_{s-\lceil\frac{l}{10}\rceil m-1}\}$ if $l=10t+7$. By Lemma 2.2, this means that for $x,y\in I$, $C_{x}$ is not a skew symmetric cyclotomic coset, and $C_{x},C_{y}$ do not form a skew asymmetric pair, if and only if $x+yq\not\equiv0$ mod $n$. Divide $I$ into $\frac{l+1}{2}$ parts such as $I_{1}=[s,s+m]$, $I_{2}=[s+m+1,s+2m]$, $\cdots$, $I_{\frac{l+1}{2}}=[s+(\frac{l+1}{2}-1)m+3,s+\frac{l+1}{2}m+2]$ for $l=10t+3$; and $I_{1}=[s,s+m]$, $I_{2}=[s+m+1,s+2m]$, $\cdots$, $I_{\frac{l+1}{2}}=[s+(\frac{l+1}{2}-1)m+2,s+\frac{l+1}{2}m+1]$ for $l=10t+7$. Since $q=26m+5$, $n=\frac{q^2+1}{26}=10m^2+14m+5$ and $s=\frac{n+1}{2}=5m^2+7m+3$, if $ x,y\in I_{1}$, then $(\frac{l^2+1}{2}m+\frac{l+1}{2})\cdot n<(\frac{l^2+1}{2}m+\frac{l+1}{2})n+\frac{l^2+1}{2}m+\frac{l+1}{2}=s(q+1)\leq x+yq \leq (s+m)(q+1)<(\frac{l^2+1}{2}m+\frac{l+1}{2}+1)\cdot n$; $(13m+3)\cdot n<(13m+3)n+13m+3=s(q+1)\leq x+yq \leq (s+m)(q+1)=(13m+3)n+26m^2+9m+2<(13m+4)\cdot n$; and proceeding in this way, one obtains that if $ x,y\in I_{i+1}$, then $(\frac{l^2+1}{2}m+\frac{l+1}{2}+i)\cdot n<(s+im+1)\cdot (q+1)\leq x+yq \leq (s+(i+1)m)(q+1)<(\frac{l^2+1}{2}m+\frac{l+1}{2}+i+1)\cdot n$, where $0\leq i\leq \frac{l+1}{2}m+1$ if $l=10t+3$ and $0\leq i\leq \frac{l+1}{2}m+2$ if $l=10t+7$. Hence, there are no skew symmetric cyclotomic cosets, and no two cyclotomic cosets form a skew asymmetric pair, in $T\setminus T_{ss}$. This implies that $T_{ss}=\{C_{s+\frac{l+3}{4}m},C_{s+\lfloor \frac{l}{10} \rfloor m}\}$ with $|T_{ss}|=4$ if $l=10t+3$, and $T_{ss}=\{C_{s+\frac{(\frac{l-1}{2}+\lceil\frac{l}{10}\rceil)}{2}m},C_{s-\lceil\frac{l}{10}\rceil m-1}\}$ with $|T_{ss}|=4$ if $l=10t+7$, when the defining set is $T=\bigcup_{i=0}^{k}C_{s+i}$ with $0\leq k\leq \frac{l+1}{2}m+1$ if $l=10t+3$ and $0\leq k\leq \frac{l+1}{2}m+2$ if $l=10t+7$.\\ {\bf Theorem 3.4:} Let $q$ be an odd prime power of the form $q=am+l$, $a=\frac{(l^2+1)}{5}$, where $l=10t+3$ or $l=10t+7$ is an odd number, $m$ is a positive integer and $t$ is a non-negative integer. Then there exist $q$-ary $[[\frac{q^{2}+1}{a},\frac{q^{2}+1}{a}-2d+6,d;4]]$ EA-quantum MDS codes, where $(\frac{l-1}{2}+\lceil\frac{l}{10}\rceil)m+5\leq d \leq (l+1)m+5$ is odd. 
{\bf Proof:} Consider the cyclic codes over $F_{q^{2}}$ of length $n=\frac{q^2+1}{a}$ with defining set $T=\bigcup_{i=0}^{k}C_{s+i}$, where $0\leq k\leq \frac{l+1}{2}m+1$ if $l=10t+3$ and $0\leq k\leq \frac{l+1}{2}m+2$ if $l=10t+7$. Since $q$ is an odd prime power of the form $q=am+l$, $a=\frac{(l^2+1)}{5}$, with $l=10t+3$ or $l=10t+7$ an odd number, by Lemma 3.3 we have $c=|T_{ss}|=4$ for $\frac{l+1}{4}m\leq k\leq \frac{l+1}{2}m+1$ when $l=10t+3$, and for $\frac{l+1}{4}m\leq k\leq \frac{l+1}{2}m+2$ when $l=10t+7$. Since every $q^{2}$-cyclotomic coset satisfies $C_{s+x}=\{s+x,s-x-1\}$, $0\leq x\leq s-1$ and $s=\frac{n+1}{2}$, we obtain that $T$ consists of the $2(k+1)$ integers $\{s-k-1,\cdots,s-2,s-1,s,s+1,\cdots,s+k\}$. This implies that $\mathcal{C}$ has minimum distance at least $2k+3$. Hence, $\mathcal{C}$ is a $q^{2}$-ary cyclic code with parameters $[n,n-2(k+1),\geq 2k+3]$. Combining Theorem 2.5 with the EA-quantum Singleton bound, we obtain an EA-quantum MDS code with parameters $[[\frac{q^{2}+1}{a},\frac{q^{2}+1}{a}-2d+6,d;4]]_{q}$, where $(\frac{l-1}{2}+\lceil\frac{l}{10}\rceil)m+5\leq d \leq (l+1)m+5$ is odd. \begin{center} Table 2 EAQMDS codes with $n=\frac{q^{2}+1}{a}$, $a=\frac{l^2+1}{5}$ \\ \begin{tabular}{lllllllllll} \hline &q &$[[n,k,d;4]]_{q}$ &d is odd \\ \hline $q=am+l$, & 17 &$[[29,35-2d,d;4]]_{17}$ &$9\leq d \leq 13$ \\ $a=\frac{l^2+1}{5}$, $l=7$ & 27 &$[[73,79-2d,d;4]]_{27}$ &$13\leq d \leq 21$ \\ & 37 &$[[137,143-2d,d;4]]_{37}$ &$17\leq d \leq 29$ \\ \quad$l=13 $ & 47 &$[[65,71-2d,d;4]]_{47}$ &$13\leq d \leq 19$ \\ & 81 &$[[193,199-2d,d;4]]_{81}$ &$21\leq d \leq 33$ \\ & 149 &$[[653,659-2d,d;4]]_{149}$ &$37\leq d \leq 61$ \\ \quad $l=17$ & 191 &$[[629,635-2d,d;4]]_{191}$ &$35\leq d \leq 59$ \\ & 307 &$[[1625,1631-2d,d;4]]_{307}$ &$55\leq d \leq 95$ \\ \quad $l=27$ & 173 &$[[205,211-2d,d;4]]_{173}$ &$21\leq d \leq 33$ \\ \hline \end{tabular} \end{center} \section{SUMMARY} \label{sec:1} In this paper, by analysing the concept of a decomposition of the defining set of cyclic codes, we construct two families of $q$-ary entanglement-assisted quantum MDS codes $[[\frac{q^{2}+1}{a},\frac{q^{2}+1}{a}-2(d-1)+4,d;4]]$, where $q$ is a prime power of the form $q=am+l$, from classical cyclic MDS codes by exploiting four pre-shared maximally entangled states. The parameters $a$, $l$ and $m$ are as follows: (i) $a=l^2+1$, where $l$ is an odd number and $m$ is a positive integer; (ii) $a=\frac{l^2+1}{5}$, where $l=10t+3$ or $l=10t+7$ is an odd number and $m$ is a positive integer. In Table 3, we list the $q$-ary entanglement-assisted quantum MDS codes constructed in this paper. For $l=3$, consuming four pre-shared maximally entangled states, each EA-quantum MDS code of length $n=\frac{q^2+1}{10}$ has a minimum distance up to twice as large as that of the standard QMDS codes of the same length constructed in Ref.\cite{Chen}. Comparing the parameters with all known $q$-ary EA-quantum MDS codes, we find that all these constructed EA-quantum MDS codes are new in the sense that their parameters are not covered by the codes available in the literature. 
\begin{center} Table 3 New parameters of EAQMDS codes \\ \begin{tabular}{lllllllllllllll} \hline $a$ & q &$[[n,k,d;c]]_{q}$ &Distance (d is odd) \\ \hline $l^{2}+1$ &$q=am+l$ &$[[\frac{q^{2}+1}{a},\frac{q^{2}+1}{a}-2d+6,d;4]]$ &$(l+1)m+3\leq d \leq (3l-4)m+3$ \\ & & & \\ $\frac{l^{2}+1}{5}$ &$q=am+l$ &$[[\frac{q^{2}+1}{a},\frac{q^{2}+1}{a}-2d+6,d;4]]$ &$(\frac{l-1}{2}+\lceil\frac{l}{10}\rceil)m+5\leq d \leq (l+1)m+5$ \\ & $l=10t+3$ or & & \\ & $l=10t+7$ & &\\ \hline \end{tabular} \end{center} \section{Acknowledgment} This work is supported by the National Natural Science Foundation of China under Grant No.11801564 and No.61373171, the National Key R\&D Program of China under Grant No. 2017YFB0802400, 111 Project under grant No.B08038.
\section{Introduction} With the increasing ease of collecting data and low-cost storage, there is increasing interest in using data-enabled methods to develop models for prediction and control~\cite{abraham2017model,mamakoukas2021derivative,hewing2020learning, asadi2021gaussian,piche2000neural,kabzan2019learning,kocijan2004gaussian}. Such models can be optimized to best fit the data, and methods are also available to estimate the error bounds on the predictions, e.g., using predicted time derivatives of the observables~\cite{mamakoukas2021derivative}. However, the conditions under which such data-enabled models can achieve sufficient precision remain unclear. A major challenge is that the model (i.e., the relationship between the input and the measurable outputs) can be dependent on the system's internal states, which are hidden in the sense that they are not directly measured, nor can they be inferred using standard observer designs, since such designs require prior knowledge of the system dynamics. Several approaches are available to address the lack of direct access to the hidden states. One approach is to represent the dynamics through Markov models with a predefined number of hidden states, and then minimize the model prediction error~\cite{tarbouriech2020active,yoon2019hidden,pohle2017selecting}. A difficulty is that the optimal selection of the number of hidden states can be computationally expensive, and there is no guarantee that the resulting models will achieve the desired precision. A second class of approaches to handle the lack of direct access to the hidden states is to model the system dynamics (flow) in a lifted observable space (with generalized functions of the observables) using Koopman operator theory \cite{schmid2008dynamic,mezic2005spectral}. Recent techniques include sparse identification of nonlinear dynamical systems (SINDy)~\cite{brunton2016discovering} and Dynamic Mode Decomposition (DMD)~\cite{kutz2016dynamic}. Nevertheless, with a finite number of states, there is uncertainty about how to select a sufficient set of generalized observable functions to achieve a specified level of prediction precision. A third class of approaches is to use the time history of the input and output data to find forward models, e.g., with (i)~transfer function models in the frequency domain~\cite{devasia2017iterative,yan2021mimo}; (ii)~autoregressive models with extra input (ARX)~\cite{ljung1987theory} as well as nonlinear ARX (NARX)~\cite{kocijan2004gaussian,pham2010hybrid}; (iii)~time-delayed information in the Koopman operator framework~\cite{kamb2020time}; and (iv)~fitting a relation between the time-delayed output data and the inverse input \cite{butterworth2012analysis,blanken2020kernel,aarnoudse2021control}. Again, determining the type of data needed to capture the input-output relationship (with high precision) when models are not available a priori remains uncertain. When the precision of the inverse is not sufficient, it can be improved using iterative learning control (ILC) techniques, with the inverse of the plant considered as the learning operator~\cite{ghosh2001iterative,fine2009model,teng2015comparison,spiegel2021iterative}. Nevertheless, increasing the precision of the inverse model can improve ILC convergence. The goal of this article is to identify the type of output data needed to develop inverse (output-to-input) operators, with a desired level of precision. 
Rather than the two-step process of first learning a forward model and then using model-predictive control (MPC) to optimally select the control input, the proposed approach seeks to solve the inverse problem of directly finding the input for a given output, e.g., similar to~\cite{dev96ac,willems2005note}. In particular, the relative degree of the system is used to identify the number of time derivatives that need to be added to input-output data to facilitate precision data-enabled learning of the inverse operator. Previous works on inversion of system dynamics, using known models of the system, have shown that the impact of neglecting the boundary conditions of the internal states can be made arbitrarily small~\cite{zou1999preview,zou2007precision} by choosing a sufficiently large time history of the desired output and its derivatives. This motivates the proposed data-enabled algorithm to learn the inverse operator directly from input-output data (without the need to explicitly capture the hidden state dynamics) by using time-delayed observations of the output, along with the output's time derivatives. The main contribution of this paper is to propose a Koopman-type, time-delay and output-derivative-based, data-enabled inverse operator that minimizes the impact of the hidden state dependency and achieves precision (illustrated with a simulation example). Overall, the work provides insight into the need for including derivative features and time history to achieve precision in Koopman-type inverse operators. Even for forward Koopman-type operators (which only depend on past observable outputs), it is shown that the output-derivative at the current time instant needs to be included for precision prediction. \section{Problem formulation and solution} The inverse operator is developed for linear time-invariant (LTI) single-input-single-output (SISO) systems. Let the system be \begin{align} \Dot{x}(t)&=Ax(t)+Bu(t)\label{eq:X_dynamics}\\ y(t)&=Cx(t)\label{eq:output} \end{align} with states $x(t)\in \mathbb{R}^{n}$, input $u(t)\in \mathbb{R}$ and output $y(t)\in \mathbb{R}$, with matrices $A \in \mathbb{R}^{n\times n}$, $B\in \mathbb{R}^{n\times 1}$, $C \in \mathbb{R}^{1\times n}$. \begin{assumption}[System properties] The system described in (\ref{eq:X_dynamics}) and (\ref{eq:output}) is stable (i.e., $A$ is Hurwitz), hyperbolic (no zeros on the imaginary axis), and has relative degree $r \le n$ (i.e., the difference between the number of poles and the number of zeros). \label{assum:relative_degree} \end{assumption} \begin{assumption} The desired output $y_d$, specified in inverse operator problems, is sufficiently smooth, and has bounded time derivatives up to the relative degree $r$. 
\end{assumption} \subsection{Hidden state dependency} The system state $x$ can be split into state components $\xi$ that directly depend on the output and its time derivatives \begin{align} \xi(t) &= \begin{bmatrix}y(t),\Dot{y}(t),\dots,\frac{d^{r-1} y(t)}{d t^{r-1}}\end{bmatrix}'\in \mathbb{R}^{r\times 1} \label{eq:xi_def} \end{align} and internal states $\eta$, \begin{equation} \begin{bmatrix} \xi(t)\\ \eta(t) \end{bmatrix}=Sx(t) \label{eq:coord_trans} \end{equation} such that in the new coordinates, (\ref{eq:X_dynamics}) can be written as, e.g., see \cite{marino1995nonlinear}, Example 4.1.3, \begin{align} \Dot{\xi}(t)&=A_1\xi(t)+A_2\eta(t)+B_1u(t) \label{eq:xi_dynamic}\\ \Dot{\eta}(t)&=A_3y(t)+A_4\eta(t) \label{eq:eta_dynamic} \end{align} where \begin{equation*} B_1= \begin{bmatrix} 0\\ 0 \\\vdots\\b_{n-r} \end{bmatrix}\in\mathbb{R}^{r\times 1},\quad A_3=\begin{bmatrix} 0\\0\\\vdots \\ 1/b_{n-r} \end{bmatrix}, \end{equation*} \begin{equation*} A_4=\begin{bmatrix} 0&1&\dots& 0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&1\\ -b_0/b_{n-r}&-b_1/b_{n-r}&\dots&-b_{n-r-1} / b_{n-r} \end{bmatrix}, \end{equation*} and the eigenvalues of matrix $A_4$ are the zeros of the transfer function of system (\ref{eq:X_dynamics}) and (\ref{eq:output}), \begin{equation} G(s) = \frac{Y(s)}{U(s)}= \frac{b_0+b_1s+\dots+b_{n-r}s^{n-r}}{a_0+a_1s+\dots+a_{n-1}s^{n-1}+s^n}. \label{eq:transfer_func} \end{equation} Note that the internal state $\eta$ is only driven by the output $y = \xi_1$. Moreover, due to the relative degree $r$ assumption, the input $u$ is directly related to the $r^{th}$ derivative of the output, and therefore, the $r^{th}$ row of (\ref{eq:xi_dynamic}) can be written as \begin{equation} \begin{split} y^{(r)}(t)\triangleq \frac{d^{r}y(t)}{dt^{r}}&=CA^{r}x(t) +CA^{r-1}Bu(t)\\ &=CA^r S^{-1}\begin{bmatrix} \xi(t)\\\eta(t) \end{bmatrix}+b_{n-r}u(t) \\ &= A_{\xi} \xi(t) +A_{\eta} \eta(t) + b_{n-r}u(t), \end{split} \label{eq:rel_degree_connection} \end{equation} and the matrices $A_1$ and $A_2$ in (\ref{eq:xi_dynamic}) are given by \begin{equation*} A_1=\begin{bmatrix} \begin{matrix} 0&1&\dots& 0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&1 \end{matrix} \\ \hline \\[-0.1in] A_{\xi} \end{bmatrix}, ~~ A_2=\begin{bmatrix} \begin{matrix} 0&0&\dots& 0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&0 \end{matrix} \\ \hline \\[-0.1in] A_{\eta} \end{bmatrix}, \end{equation*} where $A_{\xi}$ and $A_{\eta}$ are the last rows of the matrices $A_1$ and $A_2$, respectively. \subsection{Research problem} \vspace{-0.1in} The desired output and its derivatives, $(y_d^{(r)}, \xi_d)$, can be used to predict the inverse input $u_d$ from (\ref{eq:rel_degree_connection}), as \begin{equation} u_d(t) = b_{n-r}^{-1}\left[ y_d^{(r)}(t)- A_{\xi} \xi_d(t) - A_{\eta} \eta_d(t) \right], \label{eq:inv_model} \end{equation} which depends on the internal states $\eta$ that are hidden, i.e., not directly measured. The goal is to minimize the hidden state effects on the inverse model by addressing the following research problems. \begin{enumerate}[label=(\roman*)] \item Finding the hidden state from the output: Develop an operator that maps the time history of the output $y$ with length $T$ to an estimate of the hidden state $\eta$ at time $t$ \begin{equation} \hat\eta(t) = \hat{\mathbb{H}}[y(t-T:t)]. 
\label{op_internal_eta} \end{equation} \item Koopman-type inverse operator: Using the operator in (\ref{op_internal_eta}), develop a data-enabled Koopman-type inverse operator $\hat{\mathbb{G}}^{-1}$ that uses the history of the desired output and its time derivatives to predict the inverse input as \begin{align} \hat{u}_d(t) &= \hat{\mathbb{G}}^{-1}[y_d(t-T:t),\xi_d(t),y^{(r)}_d(t)]\label{op_inverse_min_phase}. \end{align} \item Inverse operator precision: Quantify how the error $\|\hat{u}_d(t)-u_d(t)\|_2$ depends on each argument of $\hat{\mathbb{G}}^{-1}$. \end{enumerate} \subsection{Solution} \subsubsection{Finding the hidden state from the output} If the system is minimum phase, i.e., (\ref{eq:transfer_func}) has no zeros in the right half-plane (so that $A_4$ is Hurwitz), then $\eta(t)$ can be obtained from the history of the output by solving (\ref{eq:eta_dynamic}) \begin{equation} \begin{split} \eta(t) &= \int_{-\infty}^t e^{A_4(t-\tau)}A_3y(\tau)d\tau\\ &\triangleq \mathbb{H}[y(-\infty:t)]. \end{split} \label{eq:unknown_state_by_integral} \end{equation} In practice, such an operator is hard to capture in a data-enabled way since it requires an infinite window. Therefore, an estimate $\hat{\eta}$ is obtained with an approximate operator $\hat{\mathbb{H}}$, defined with a finite time-history length $T$ as \begin{equation} \begin{split} \hat{\eta}(t) &\triangleq\int_{t-T}^t e^{A_4(t-\tau)}A_3y(\tau)d\tau\\ &\triangleq \hat{\mathbb{H}}[y(t-T:t)]. \end{split} \label{eq_approx_unknown_state} \end{equation} The approximate operator $\hat{\mathbb{H}}$ approaches the exact operator ${\mathbb{H}}$ exponentially as the time history $T$ increases. \begin{lemma} \label{lemma_internal_state_estimate} If the output trajectory is bounded, \begin{equation} M=\sup_{\tau\in(-\infty, t-T]}\|y(\tau)\|_2 < \infty, \label{eq_output_bound} \end{equation} then the error in computing the hidden state $\eta(t)$ decays exponentially with the time history $T$, i.e., there exist positive scalars $\alpha_1>0, \beta_1>0$ such that \begin{equation} \begin{split} \|\Delta \eta(t)\|_2\triangleq\|\eta(t)-\hat{\eta}(t)\|_{2} \le \beta_1 e^{-\alpha_1 T}. \end{split} \label{eq_eta_err_bound_exp} \end{equation} \end{lemma} \begin{pf} Since the system is assumed to be minimum phase, the eigenvalues of the matrix $A_4$, which are the zeros of the transfer function of the system (\ref{eq:X_dynamics}) and (\ref{eq:output}), lie in the open left half of the complex plane, i.e., the matrix $A_4$ is Hurwitz. Then there exist positive scalars $\kappa_1 >0, \alpha_1 >0$ such that~\cite{desoer1975feedback} \begin{equation} \|e^{A_4t}\|_{2}\le \kappa_1 e^{-\alpha_1 t}. \label{eq:exponeital_decay} \end{equation} Then, from (\ref{eq:unknown_state_by_integral},\ref{eq_approx_unknown_state}), the approximation error can be bounded as \begin{equation} \begin{split} \|\eta(t)-\hat{\eta}(t)\|_{2}&=\left\| \int_{-\infty}^{t-T}e^{A_4(t-\tau)}A_3y(\tau)d\tau \right\|_{2}\\ &\le M\|A_3\|_{2} \int_{-\infty}^{t-T}\kappa_1 e^{-\alpha_1 (t-\tau)} d\tau \\ & \qquad {\mbox{using (\ref{eq_output_bound}, \ref{eq:exponeital_decay})} } \\ &= M\|A_3\|_{2} \int_{T}^{+\infty}\kappa_1 e^{-\alpha_1 \tau'} d\tau'\\ &=M\|A_3\|_{2} \frac{\kappa_1}{\alpha_1}e^{-\alpha_1 T}. \end{split} \end{equation} The result follows with \begin{equation} \beta_1 = M\|A_3\|_{2} \frac{\kappa_1}{\alpha_1} .
\label{eq_output_bound_2} \end{equation} \end{pf} \subsubsection{Koopman-type inverse operator} Given an estimate $\hat{\eta}$ of the internal state $\eta$, the inverse operator prediction in (\ref{op_inverse_min_phase}) can be estimated as \begin{align} \hat{u}_d(t) &=b_{n-r}^{-1}\left[ y_d^{(r)}(t)- A_{\xi} \xi_d(t) -A_{\eta} \hat{\eta}_d(t) \right] \nonumber \\ & = b_{n-r}^{-1}\left[ y_d^{(r)}(t)- A_{\xi} \xi_d(t) -A_{\eta} \hat{\mathbb{H}}[y_d(t-T:t)] \right] \nonumber \\ & \qquad {\mbox{using (\ref{eq_approx_unknown_state})} } \nonumber \\ &\triangleq \hat{\mathbb{G}}^{-1}[y^{(r)}_d(t), \xi_d(t), y_d(t-T:t)]. \label{eq:inv_op_derivation} \end{align} \vspace{0.1in} \begin{remark} \label{rem_inverse_preicison} In addition to a sufficient time history (large $T$) of the output to accurately find the internal state (to let $\Delta \eta \longrightarrow 0$), information about the derivatives of the output (up to the relative degree $r$ at time $t$, i.e., $y_d^{(r)}(t), \xi_d(t)$) is also needed for precisely computing the inverse input $u_d$ in (\ref{op_inverse_min_phase}), as illustrated in Fig.~\ref{fig_inverse_hidden_state_depend_demo}. \end{remark} \vspace{-0.1in} \begin{figure}[!ht] \centering \includegraphics[width=0.95\columnwidth]{Images/inverse_demo.png} \caption{The inverse operator's dependence on the hidden state is removed by use of past output history and current time derivatives of the output. } \label{fig_inverse_hidden_state_depend_demo} \end{figure} \vspace{-0.01in} \subsubsection{Koopman-type forward operators using output history} The output $y$ can be related to the input as \begin{equation} \begin{split} {y}(t+T_f) &=C\int_{-\infty}^{t+T_f}e^{A(t+T_f-\tau)}Bu(\tau)d\tau \end{split} \end{equation} and approximated by \begin{equation} \begin{split} \hat{y}(t+T_f) &=C\int_{t-T}^{t+T_f}e^{A(t+T_f-\tau)}Bu(\tau)d\tau . \end{split} \label{forward_map_approx_u_hist} \end{equation} Therefore, using arguments similar to the proof of Lemma 1, the error in computing the output using just the history of the input $u$ tends to zero as the time history of the input increases, i.e., as $T \rightarrow \infty$. Thus, it is possible to find a map that only depends on the input and its past history, \begin{equation} \begin{split} \hat{y}(t+T_f) & = \hat{\mathbb{G}}_u[u(t-T:t+T_f)], \end{split} \end{equation} which justifies the use of ARX models to capture forward linear system models using past input history (augmented by the output history). In contrast, with Koopman-type operators, where the past history of the observable output is used to predict future values, the forward model prediction can be written as \begin{equation} \begin{split} &\hat{y}(t+T_f)\\ &=Ce^{AT_f}\hat{x}(t)+C\int_{t}^{t+T_f}e^{A(t+T_f-\tau)}Bu(\tau)d\tau\\ &=Ce^{AT_f} S^{-1} \begin{bmatrix} \xi(t)\\ \hat{\eta}(t) \end{bmatrix} +C\int_{t}^{t+T_f} \!\!\! e^{A(t+T_f-\tau)}Bu(\tau)d\tau\\ & \qquad {\mbox{using (\ref{eq:coord_trans})} } \\ &=Ce^{AT_f} S^{-1} \begin{bmatrix} \xi(t)\\ \hat{\mathbb{H}}[y(t-T:t)] \end{bmatrix} \\ & \qquad \qquad \qquad \qquad +C\int_{t}^{t+T_f} \!\!\! e^{A(t+T_f-\tau)}Bu(\tau)d\tau\\ & \qquad {\mbox{using (\ref{eq_approx_unknown_state})} } \\ &\triangleq \hat{\mathbb{G}}[y(t-T:t), \xi(t), u(t:t+T_f)]. \end{split} \end{equation} Therefore, the past history of the output can also be used to develop Koopman-type forward operators, provided access is available to the current time derivatives of the output $\xi(t)$.
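To make the construction concrete, the following is a minimal Python/NumPy sketch (ours, for illustration only; not the paper's implementation) of the finite-window hidden-state estimate (\ref{eq_approx_unknown_state}) and the resulting inverse input (\ref{eq:inv_op_derivation}). The matrices \texttt{A4}, \texttt{A3}, \texttt{A\_xi}, \texttt{A\_eta} and the scalar \texttt{b} $= b_{n-r}$ are assumed known here, and the integral is discretized by a simple Riemann sum on the sampling grid.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def eta_hat(y_hist, A4, A3, dt):
    # Finite-window estimate (eq_approx_unknown_state):
    #   eta_hat(t) = int_{t-T}^{t} exp(A4 (t - tau)) A3 y(tau) d tau,
    # discretized by a Riemann sum; y_hist samples y on [t - T, t].
    K = len(y_hist)
    acc = np.zeros(A4.shape[0])
    for k, yk in enumerate(y_hist):
        acc += expm(A4 * (K - 1 - k) * dt) @ (A3.ravel() * yk) * dt
    return acc

def u_hat(y_r, xi, y_hist, A4, A3, A_xi, A_eta, b, dt):
    # Inverse input (eq:inv_op_derivation): needs the current output
    # derivatives (y_r, xi) plus the output history (for the hidden state).
    eta = eta_hat(y_hist, A4, A3, dt)
    return (y_r - A_xi @ xi - A_eta @ eta) / b
\end{verbatim}
Increasing the window length $T$ (i.e., \texttt{len(y\_hist)*dt} in the sketch) shrinks the hidden-state error at the exponential rate of Lemma~\ref{lemma_internal_state_estimate}.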
\subsubsection{Inverse operator precision} The inverse operator depends not only on the past history of the output (to remove the hidden state $\eta$ dependency) but also on the output and its time derivatives at the current time instant $t$. The impact of the time history $T$ and of the output and its time derivatives on the precision of the operator is quantified in the next lemma. \begin{lemma} \label{Lemma_prediction_error} The prediction error of the inverse operator is bounded, i.e., there exist positive scalars $L_1>0, L_2>0, L_3 >0$ such that the error between the predicted input $\hat{u}_d(t)$ and the true input $u_d(t)$ satisfies \begin{equation} \begin{split} &\|\hat{u}_d(t)-u_d(t)\|_{2}\\ &\le L_1\|\Delta y^{(r)}_d(t)\|_2 + L_2\|\Delta \xi_d(t)\|_2+L_3 e^{-\alpha_1 T}. \end{split} \label{eq:inv_u_err} \end{equation} \end{lemma} \begin{pf} From (\ref{eq:inv_model}) and (\ref{eq:inv_op_derivation}), \begin{equation} \begin{split} &\|\hat{u}_d(t)-u_{d}(t)\|_{2}\\ &\le | b_{n-r}^{-1}| \left[\|\Delta y^{(r)}_d(t)\|_2+\|A_{\xi}\|_2\|\Delta \xi_d(t)\|_2 \right. \\ & \qquad \qquad \qquad \qquad\left. + \|A_{\eta}\|_2\|\Delta \eta_d(t)\|_2\right], \end{split} \label{eq:quasi_err_step1} \end{equation} where $\Delta y^{(r)}_d(t) \triangleq \hat{y}^{(r)}_d(t)-y^{(r)}_d(t)$, $\Delta \xi_d(t)\triangleq \hat{\xi}_d(t)-\xi_d(t)$ and $\Delta \eta_d(t)\triangleq \hat{\eta}_d(t)-\eta_d(t)$. The result follows from (\ref{eq_eta_err_bound_exp}) with \begin{equation} L_1 = |b^{-1}_{n-r}|, \quad L_2 = L_1\|A_{\xi}\|_2, \quad L_3 = L_1\|A_{\eta}\|_2 \beta_1 . \end{equation} \end{pf} \vspace{0.1in} \begin{remark}[Data-enabled algorithm] \label{rem_Data_based_algorithm} Known values of the desired output and its derivatives, specified with a sampling period $\Delta t$ and time history $T$, can be used to estimate a discrete-time inverse operator from (\ref{eq:inv_op_derivation}) as \begin{align} \hat{u}_d[m] &= \mathbb{G}_d^{-1}[y_d[m-m_T: 1:m],\xi_d[m],y^{(r)}_d[m]], \label{eq_data_inverse} \end{align} where $[m]$ indicates the value at time $t_m=m \Delta t$, and $m_T = T/{\Delta t} $. Data-enabled algorithms can be used to learn the operator $\mathbb{G}_d^{-1}$, since (\ref{eq_data_inverse}) maps a finite number of variables (the desired output and its time derivatives) to the inverse input at time $t_m$. \end{remark} \section{Simulation results} In this section, an example system is introduced, followed by the data-enabled learning of the inverse operator. \subsection{Example system} Consider the following two-mass-spring-damper system, where the input $u$ is the force acting on mass $m_2$ and its displacement $x_2$ is the output $y$, as shown in Fig.~\ref{fig:example_sys}. \begin{figure}[!ht] \centering \includegraphics[width=0.75\columnwidth]{Images/mass_spring_damper_sys.png} \caption{Schematic of the example two-mass-spring-damper system.} \label{fig:example_sys} \end{figure} The corresponding state-space model can be written as \begin{align} \frac{d}{dt} X &= AX+Bu\\ y=x_2&=CX \end{align} where $X\triangleq \begin{bmatrix} x_1&\Dot{x}_1&x_2&\Dot{x}_2 \end{bmatrix}'$, $C=\begin{bmatrix} 0&0&1&0 \end{bmatrix}$, \begin{equation} A=\begin{bmatrix} 0&1&0&0\\ -\frac{k_1+k_2}{m_1}&-\frac{c_1+c_2}{m_1}&\frac{k_2}{m_1}&\frac{c_2}{m_1}\\ 0&0&0&1\\ \frac{k_2}{m_2}&\frac{c_2}{m_2}&-\frac{k_2}{m_2}&-\frac{c_2}{m_2} \end{bmatrix} , B= \begin{bmatrix} 0\\0\\0\\ a/m_2 \end{bmatrix}, \end{equation} $m_1=10$, $m_2=5$, $k_1=110$, $c_1=68$, $a=k_1/2$, $k_2 = 75$, and $c_2=60$ in SI units.
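As a quick numerical sanity check (a sketch of ours, not part of the paper), the relative degree can be read off the Markov parameters: $r$ is the smallest $k$ with $CA^{k-1}B \neq 0$. With the parameter values above, this yields $r=2$ and $CAB = a/m_2 = 11$, matching the input coefficient in the input-output relation below.
\begin{verbatim}
import numpy as np

m1, m2, k1, c1, k2, c2 = 10.0, 5.0, 110.0, 68.0, 75.0, 60.0
a = k1 / 2

A = np.array([[0, 1, 0, 0],
              [-(k1 + k2) / m1, -(c1 + c2) / m1, k2 / m1, c2 / m1],
              [0, 0, 0, 1],
              [k2 / m2, c2 / m2, -k2 / m2, -c2 / m2]])
B = np.array([0, 0, 0, a / m2])
C = np.array([0, 0, 1, 0])

# Relative degree: smallest k with C A^(k-1) B nonzero.
v = C.copy()
for k in range(1, A.shape[0] + 1):
    if abs(v @ B) > 1e-12:
        print("relative degree r =", k, ", C A^(r-1) B =", v @ B)  # r = 2, 11.0
        break
    v = v @ A
\end{verbatim}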
The relative degree of the system is $r=2$ and the input-output relation is given by \begin{equation} \begin{split} \ddot{y}(t)&= -25 y(t) -12 \dot{y}(t) +25 x_1(t) +12 \dot{x}_1(t) + {11 u(t)}. \end{split} \label{eq_yddot_example} \end{equation} \subsection{Preliminary selections}\label{subsec_pre} The selection of the data-enabled model types to evaluate, the sampling time (which needs to be sufficiently small to reduce discretization error), the evaluation metric, and sufficiently smooth output trajectories for model evaluation are described below. \vspace{-0.1in} \begin{enumerate}[label=(\roman*)] \item A two-layer feedforward neural net (created through the MATLAB function \texttt{feedforwardnet()} with the default activation function) is used to learn the inverse operator from data. \item For the two-layer neural net, each model pool consists of 5 candidates with different numbers $N\in\{5,10,20,40,80\}$ of neurons in the hidden layer. \item The sampling frequency is varied from $5$ Hz to $20$ Hz, which is substantially higher than the system bandwidth of 1.7 Hz. \begin{figure}[!ht] \centering \includegraphics[width=0.95\columnwidth]{Images/filter.png} \caption{Filter process to generate desired trajectories. } \label{fig:sig_filter} \end{figure} \item The inverse operator is assessed using 10 different desired trajectories $y_{d,k}(t)$, $1\le k \le 10$, $t\in [0,10]$, with a fixed prediction sampling time of $0.01$ s. Each desired trajectory $y_{d,k}$ used for assessment needs to be sufficiently smooth to investigate the impact of different orders of the output's time derivatives on the inverse operator, although from (\ref{eq:inv_u_err}) the expectation is that only output derivatives up to the $r^{th}$ order ($r=2$ for this example) are required. Therefore, nominal trajectories $y_{0,k}$ (specified in the appendix) are filtered as shown in Fig.~\ref{fig:sig_filter}, to obtain desired outputs $y_{d,k}$ and their derivatives as \begin{equation} \begin{bmatrix} y_{d,k}\\\Dot{y}_{d,k}\\\Ddot{y}_{d,k}\\y^{(3)}_{d,k}\\y^{(4)}_{d,k} \end{bmatrix}(t) = \begin{bmatrix} 1&0&0&0&0\\ -a&a&0&0&0\\ a^2&-2a^2&a^2&0&0\\ -a^3&3a^3&-3a^3&a^3&0\\ a^4&-4a^4&6a^4&-4a^4&a^4 \end{bmatrix} \begin{bmatrix} y_{d,k}\\y_{3,k}\\y_{2,k}\\y_{1,k}\\y_{0,k} \end{bmatrix}(t) \label{eq_filter_compu} \end{equation} where $a=2\pi$ (cut-off frequency of 1 Hz), which is less than the system's bandwidth of 1.7 Hz, and example trajectories are shown in Fig.~\ref{fig_test_traj_demo2}. \item For a given time history $T$ and sampling time $\Delta t$, as in Remark~\ref{rem_Data_based_algorithm}, the evaluation metrics for the data-enabled inverse operator with $N$ neurons in the hidden layer are selected as the mean $e_{u,N}$ and maximum $\overline{e}_{u,N}$ normalized prediction errors over the ten evaluation trajectories $y_{d,k}(\cdot)$, i.e., \begin{equation} e_{u,N} = \frac{1}{10}\sum_{k=1}^{10}\frac{\max_{m} |\hat{u}_k[m]-u_{d,k}[m]|}{\max_{m} |u_{d,k}[m]|}\times 100\% \label{metric_1} \end{equation} \begin{equation} \overline{e}_{u,N} = \max_{k=1,\dots,10}\frac{\max_{m} |\hat{u}_k[m]-u_{d,k}[m]|}{\max_{m} |u_{d,k}[m]|}\times 100\%, \label{metric_2} \end{equation} where the ideal inverse $u_{d,k}$ was found using (\ref{eq:inv_model}), with $\eta_d$ obtained through (\ref{eq:unknown_state_by_integral}).
Moreover, the smallest normalized prediction error over the different numbers of neurons in the hidden layer is defined as \begin{equation} e_{u} = e_{u, N^*}, \quad \bar{e}_{u} = \bar{e}_{u, N^*} \quad {\mbox{where} }\quad N^* = \argmin_N{e_{u, N}} \label{metric_3} \end{equation} to quantify the precision of the inverse operator. \end{enumerate} \begin{figure}[!ht] \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.44\columnwidth]{Images/demo2.png} & \includegraphics[width=0.45\columnwidth]{Images/demo6.png} \end{tabular} \vspace{-0.1in} \caption{Comparison of the example filtered desired outputs $y_{d,k}$ and nominal trajectories $y_{0,k}$ for $k=2$ (triangular) and $k=6$ (sinusoidal). } \label{fig_test_traj_demo2} \end{figure} \vspace{-0.1in} \subsection{Data Collection } \vspace{-0.1in} The inverse operators are trained using input-output data collected from simulations. Both noisy and noise-free output data are used to assess the impact of noise. The input signal $u$ applied to the system is constructed by concatenating $20$ cycles of $p_{(f_i,\alpha_i)}(\cdot)$ ($i=1,2,3,\dots,20$) with different parameters, which are tabulated in Table~\ref{tab:excitation_sig}. \begin{equation} p_{(f_i,\alpha_i)}(t)=\alpha_i [ 4\sin{(\pi c t^2)}+s(t)+r(t) ] \label{eq:excitation_sig} \end{equation} where $c = f_i/10$, \begin{equation*} \text{s}(t) =\begin{cases} 1 & 2\le t < 4\\ -0.9 & 4\le t < 6\\ 0.5 & 6\le t < 8\\ 0 & \text{otherwise}, \end{cases} \quad \text{r}(t)=\begin{cases} 0.4t & 0\le t < 1\\ 0.4 & 1\le t < 9\\ \text{r}(10-t) & 9\le t \le 10. \end{cases} \end{equation*} \begin{table}[!ht] \centering \caption{Parameters of $p_{(f_i,\alpha_i)}$ in Eq.~\eqref{eq:excitation_sig}. } \begin{tabular}{|c|c|c|c|c|c|} \hline Cycle $\#$, $i$ & $f_i$ & $\alpha_i$ & Cycle $\#$, $i$ & $f_i$ & $\alpha_i$\\ \hline 1 & 6 & 0.75 &11 & 1 & 0.25 \\ 2 & 3 & 0.5 &12 & 0.5 & 0.25\\ 3 & 2 & 0.5 &13 & 1 & -0.1\\ 4 & 0.5 & 0.5 &14 & 0.5 & -0.05\\ 5 & 0.5 & 0.3 &15 & 0.5 & 0.1\\ 6 & 0.3 & 0.3 &16 & 0.5 & -0.1\\ 7 & 0.1 & 0.3 &17 & 2 & 0.25\\ 8 & 0.5 & -0.3 &18 & 1 & 0.1\\ 9 & 0.3 & -0.3 &19 & 0.5 & 0.05\\ 10 & 0.1 & -0.3 &20 & 1 & 0.5\\ \hline \end{tabular} \label{tab:excitation_sig} \end{table} For the noisy case, additive white Gaussian noise with a signal-to-noise ratio of 20 is added separately to the output and to each of its time derivatives. Simulations were done in MATLAB with \texttt{ode45()} at a sampling rate of 100 Hz (to be consistent with the evaluation metrics (\ref{metric_1})--(\ref{metric_3})). The input, the output, and the output's time derivatives (up to the fourth order) were collected. The second-order derivative was obtained from~(\ref{eq_yddot_example}). Third- and fourth-order derivatives for training purposes were estimated from the data using finite differences as \begin{align} \begin{bmatrix} y^{(3)}[m]\\ y^{(4)}[m] \end{bmatrix} &=\frac{1}{12(\Delta t)}\begin{bmatrix} -1&8&0&-8&1 \\ -\frac{1}{\Delta t}&\frac{16}{\Delta t}&-\frac{30}{\Delta t}&\frac{16}{\Delta t}&-\frac{1}{\Delta t} \end{bmatrix}\begin{bmatrix} \ddot{y}[m+2]\\\ddot{y}[m+1]\\\ddot{y}[m]\\\ddot{y}[m-1]\\\ddot{y}[m-2] \end{bmatrix}. \nonumber \end{align} \begin{figure}[!t] \centering \includegraphics[width=0.75\columnwidth]{Images/relative_degree_identify.png} \vspace{-0.1in} \caption{Identifying the relative degree $r$ from input-output data, based on the discontinuity in the $r^{th}$ derivative of the output for a step input.
} \label{fig_relative_degree_id} \end{figure} \vspace{0.1in} \subsection{Reducing the impact of hidden states using output history } To investigate the reduction of the impact of the hidden states on the prediction precision of data-enabled inverse operators, the performance of the operators was assessed for different time histories $T$ of the output. In this part of the study, the number of time derivatives of the output used was the same as the relative degree of the example system. The relative degree $r=2$ can be established by applying a step input --- a corresponding discontinuity will appear in ${y}^{(r)}$, while the lower-order derivatives ($y,\dot{y}$ in this example) remain continuous, as seen in Fig.~\ref{fig_relative_degree_id}. Then, from (\ref{eq_data_inverse}), \begin{align} \hat{u}_d[m] &= \mathbb{G}^{-1}_d[y_d[m-m_T:1:m],\Dot{y}_d[m],\Ddot{y}_d[m]]. \label{eq:experiment_data_inverse} \end{align} The inverse operator's prediction error $e_u$ (\ref{metric_3}) was obtained for varying output time history $T$ ([0.1, 0.2, 0.4, 0.8, 1.6, 3.2, $ \dots$] s), for different sampling times $\Delta t \in \left\{ 0.05~\text{s}, 0.1~\text{s}, 0.2~\text{s} \right\} $, and for different numbers $N$ of neurons in the hidden layer, and is plotted in Fig.~\ref{fig_inv_exponential_decay} for the case without noise in the training data. The associated prediction errors are tabulated in Table~\ref{tab_excitation_sig_mean} for the fastest sampling time $\Delta t = 0.05$ s. The precision of the inverse operator improves with larger output time history $T$, as seen in Table~\ref{tab_excitation_sig_mean}, where the evaluation values of the two-layer neural net with different numbers $N$ of neurons in the hidden layer are listed. Note that typically $N^*\le 20$ yields good precision for this application, as seen from Table~\ref{tab_excitation_sig_mean}. Over all selections of neuron numbers $N$, the variation of the smallest prediction error $e_u=e_{u,N^*}$ (\ref{metric_3}) with sampling time $\Delta t = 0.05$ s ($20$ Hz) fits an exponential decay curve $e_u(T) \approx 1.88e^{-2.18T}$, shown in red in Fig.~\ref{fig_inv_exponential_decay}. This exponential improvement in precision is expected from Lemma~\ref{Lemma_prediction_error}, which predicts an exponential decay of the error in the estimation of the hidden states, dependent on $\|e^{A_4T}\|_2$ from (\ref{eq:exponeital_decay}), and shown in Fig.~\ref{fig_inv_exponential_decay}. Thus, the impact of hidden states on the prediction precision of the data-enabled inverse operator can be reduced by using a sufficient time history of the desired output. \vspace{0.1in} \begin{remark}[Reducing hidden state dependence] In the following simulations, the time history $T$ is chosen to be sufficiently large, $T^*=3.2$ s, which results in a normalized error $e_u \approx 0.01\%$. \label{remark_select_Tstart} \end{remark} \begin{figure}[!t] \centering \includegraphics[width=0.75\columnwidth]{Images/inv_exponential_decay.png} \caption{Inverse operator's precision in terms of the prediction error $e_u$ (\ref{metric_3}) improves exponentially with the window length $T$ of the output history, for different sampling times $\Delta t = 0.05$ s ($20$ Hz, blue), $0.1$ s ($10$ Hz, cyan), $0.2$ s ($5$ Hz, red). Similar results are seen over different numbers $N^*$ of neurons in the hidden layer: $5$ (triangle $\triangle$), $10$ (square $\square$), $20$ (diamond $\diamondsuit$), $40$ (pentagram $\medwhitestar$), and $80$ (circle $\fullmoon$).
The fitted exponential decay (red line) is obtained for the sampling time $\Delta t = 0.05$ s ($20$ Hz, blue markers).} \label{fig_inv_exponential_decay} \end{figure} \vspace{0.1in} \begin{table}[!t] \centering \caption{Inverse operator's precision improvement in terms of the prediction errors $e_{u,N}$ (\ref{metric_1}) and $\overline{e}_{u,N}$ (\ref{metric_2}) for varying output time history $T$ and number $N$ of neurons in the hidden layer, with sampling time $\Delta t = 0.05$ s.} \begin{tabular}{|c|c|c|c|c|c|} \hline \diagbox{T}{N} & 5 & 10 & 20 & 40 & 80 \\ \hline & \multicolumn{4}{c}{$e_{u,N} (\%)$ as in (\ref{metric_1})} & \\ \hline 0.1&1.78 &2.23 & 1.64 & 2.05 & 2.43\\ 0.2&0.79 &0.88 & 0.87 & 0.88 & 0.98\\ 0.4& 0.95& 0.85& 0.88 & 0.92 & 0.91\\ 0.8& 0.46&0.51 & 0.49 & 0.48 &0.52 \\ 1.6& 0.14&0.12 & 0.12 & 0.14 & 0.16\\ 3.2&0.05 & 0.01 & 0.01 & 0.01 & 0.05\\ \hline & \multicolumn{4}{c}{$\overline{e}_{u,N}(\%)$ as in (\ref{metric_2}) } & \\ \hline 0.1&3.17 & 3.72& 4.73 & 5.61 & 6.28\\ 0.2& 1.22& 1.59& 1.56 & 1.48 & 1.76 \\ 0.4& 1.17&1.10 & 1.33 & 1.75 & 1.69\\ 0.8&0.54 &0.65 &0.61 & 0.67 &1.01 \\ 1.6&0.20 & 0.16& 0.18 & 0.33& 0.44 \\ 3.2& 0.08&0.02 &0.02 & 0.02& 0.14 \\ \hline \end{tabular} \label{tab_excitation_sig_mean} \end{table} \vspace{-0.1in} \subsection{Need to include output time derivatives} \vspace{-0.1in} From (\ref{eq:inv_u_err}) in Lemma~\ref{Lemma_prediction_error}, even if the hidden state error is reduced by having a sufficiently large time history $T$ (as shown in the previous subsection), current time derivatives of the output, $\xi_d(t)$ and $y^{(r)}_d(t)$, are needed to achieve precision prediction with the inverse operator. Therefore, the impact of adding time-derivative information is investigated through the following two steps, for different sampling periods $\Delta t \in \left\{ 0.05~\text{s}, 0.1~\text{s}, 0.2~\text{s} \right\}$ and for different numbers $N$ of neurons in the hidden layer. \begin{enumerate}[label=(\roman*)] \item Incrementally including higher-order time derivatives of the output when learning the inverse operator $\mathbb{G}^{-1}_{d,l}$ that predicts the inverse input $\hat{u}_d$ similar to (\ref{eq:experiment_data_inverse}), where output time derivatives up to order $l$ ($0 \le l \le 4$) are included in the data-enabled operator learning, e.g., with $l=i \ge 0$, \begin{align} \hat{u}_d[m] & = \mathbb{G}^{-1}_{d,i}[y_d[m-m_T:1:m], \nonumber \\ & \qquad \quad y^{(i)}_d[m], y^{(i-1)}_d[m], \hdots, y^{(0)}_d[m]], \label{inv_G_d_4} \end{align} where $\mathbb{G}^{-1}_{d,2} = \mathbb{G}^{-1}_{d}$ in (\ref{eq:experiment_data_inverse}). \item Adding the output's time derivatives $\dot{y}_d(t),\ddot{y}_d(t)$ to NARX-type inverse operators, where the inverse operator is learned using both input and output time histories, i.e., to compare \begin{equation} \begin{split} \hat{u}_d[m] = & \text{NARX}[y_d[m-m_T:1:m], \\ & \quad u_d[m-m_T:1:m-1]] \end{split} \label{eq_narx} \end{equation} \begin{equation} \begin{split} \hat{u}_d[m] = & \text{NARX}^{*}[y_d[m-m_T:1:m],\dot{y}_d[m], \\ & \quad \ddot{y}_d[m], u_d[m-m_T:1:m-1]]. \end{split} \label{eq_narx_star} \end{equation} \end{enumerate} The corresponding prediction performance, in terms of the errors $e_u$ and $\bar{e}_u$ in (\ref{metric_3}), for $T^*=3.2$ s and $\Delta t = 0.05$~s is tabulated in Table~\ref{tab_derivative_impact}, and plotted in Fig.~\ref{fig_inv_map_noise_free} for $T^*=3.2$~s and different sampling times $\Delta t \in \{0.05~\text{s}, 0.1~\text{s}, 0.2~\text{s}\}$.
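For concreteness, the following Python sketch (ours, with illustrative names) shows how the feature vectors in (\ref{eq:experiment_data_inverse}) can be assembled and the inverse operator fit; scikit-learn's \texttt{MLPRegressor} is used here merely as a stand-in for the paper's MATLAB \texttt{feedforwardnet()}.
\begin{verbatim}
import numpy as np
from sklearn.neural_network import MLPRegressor

def features(y, ydot, yddot, m_T):
    # Rows [y[m - m_T], ..., y[m], ydot[m], yddot[m]] for m >= m_T,
    # i.e., the arguments of the inverse operator in
    # (eq:experiment_data_inverse).
    return np.array([np.r_[y[m - m_T:m + 1], ydot[m], yddot[m]]
                     for m in range(m_T, len(y))])

# Training on excitation data (y, ydot, yddot, u), then prediction on a
# desired trajectory (y_d, ydot_d, yddot_d):
# Phi = features(y, ydot, yddot, m_T)
# net = MLPRegressor(hidden_layer_sizes=(N,), max_iter=5000).fit(Phi, u[m_T:])
# u_hat = net.predict(features(y_d, ydot_d, yddot_d, m_T))
\end{verbatim}
The NARX variants (\ref{eq_narx}) and (\ref{eq_narx_star}) differ only in appending the past input samples $u_d[m-m_T:1:m-1]$ (and, for $\text{NARX}^*$, the two output derivatives) to each feature row.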
\begin{table}[!ht] \centering \caption{Prediction errors $e_u,\bar{e}_u$ (\ref{metric_3}) for the inverse operators from (\ref{inv_G_d_4}) to (\ref{eq_narx_star}) with $\Delta t = 0.05$ s.} \begin{tabular}{|c|c|c|c|c|c|} \hline & $e_u(\%)$ & $\bar{e}_u (\%)$ & & $e_u(\%)$ & $\bar{e}_u(\%)$ \\ \hline & \multicolumn{4}{c}{Noise-free training data} & \\ \hline $\mathbb{G}^{-1}_{d,0}$& 3.13& 9.82 & $\mathbb{G}^{-1}_{d,4}$ & 0.01 & 0.02\\ $\mathbb{G}^{-1}_{d,1}$& 0.74& 2.10 & NARX & 1.60 & 5.93\\ $\mathbb{G}^{-1}_{d,2} =\mathbb{G}^{-1}_{d}$& 0.01& 0.02 & $\text{NARX}^*$ & 0.01 & 0.02\\ $\mathbb{G}^{-1}_{d,3}$& 0.01& 0.02& & & \\ \hline & \multicolumn{4}{c}{Noisy training data} & \\ \hline $\mathbb{G}^{-1}_{d,0}$& 53.91 & 114.68 & $\mathbb{G}^{-1}_{d,4}$ & 0.41 & 0.78\\ $\mathbb{G}^{-1}_{d,1}$& 11.53& 37.82& NARX & 3.89 & 17.95\\ $\mathbb{G}^{-1}_{d,2} =\mathbb{G}^{-1}_{d}$ & 0.53 & 1.05 & $\text{NARX}^*$ & 0.21 &0.45 \\ $\mathbb{G}^{-1}_{d,3}$& 0.65& 1.32& & & \\ \hline \end{tabular} \label{tab_derivative_impact} \end{table} \begin{figure}[!ht] \centering \begin{tabular}{@{}c@{}} \includegraphics[width=0.9\columnwidth]{Images/inv_mapping_noise_free_with_narx_comparison.png}\\ \includegraphics[width=0.9\columnwidth]{Images/inv_mapping_noisy_with_narx_comparison.png} \end{tabular} \caption{ Inverse operator's precision in terms of the prediction errors $e_u, \overline{e}_u$ (\ref{metric_3}) improves for all cases with the addition of derivative information (top: noise-free training data; bottom: noisy training data). Similar results are seen for different numbers $N^*$ of neurons in the hidden layer, with symbols as in Fig.~\ref{fig_inv_exponential_decay}, where filled symbols correspond to $\overline{e}_u$ and unfilled ones to ${e}_u$. The performance of the NARX-type operator with input and output history but without derivative information is also improved by the addition of derivative information in $\text{NARX}^*$, as in (\ref{eq_narx},\ref{eq_narx_star}).} \label{fig_inv_map_noise_free} \end{figure} {\underline{Impact of including derivatives}:} The precision of the inverse operator depends on the inclusion of the output derivatives up to order $r$ (the relative degree). When the number of derivatives $l$ (included in the training and evaluation) is increased from $l=0$ to $l=4$, the precision of the inverse operator improves significantly once the required number ($l=r=2$) of time-derivative features is included in the training and evaluation data. In particular, the maximum error $\overline{e}_u$ in (\ref{metric_3}) reduces from $9.82\% $ to $0.02\%$ for the case with noise-free training data and from $114.68\% $ to $1.05\%$ for the case with noisy training data, as seen in Table~\ref{tab_derivative_impact}. Therefore, there is a substantial improvement in the inverse operator's precision (especially in the presence of noise) when time derivatives up to the required order $r=2$ are included. {\underline{Impact on NARX-type inverse operator}:} The inclusion of time derivatives is also important for NARX-type inverse operators, where both input and output time histories are used in the inverse operator. This can be seen by comparing NARX (\ref{eq_narx}) without time derivatives and $\text{NARX}^*$ (\ref{eq_narx_star}) with the derivatives, in Table~\ref{tab_derivative_impact} and Fig.~\ref{fig_inv_map_noise_free}. When time derivatives up to order $l=2$ are included in the training and evaluation, the precision of the inverse operator improves significantly.
In particular, the maximum error $\overline{e}_u$ in (\ref{metric_3}) reduces from $5.93\% $ to $0.02\%$ for the case with noise-free training data and from $17.95\% $ to $0.45\%$ for the case with noisy training data, as seen in Table~\ref{tab_derivative_impact}. Therefore, there is a substantial improvement in the precision of the NARX-type inverse operator when the output time derivatives up to the required order $r=2$ are included. {\underline{Derivative information in output time history}:} Conceptually, information about the derivatives up to order $r-1$ (one less than the relative degree $r$) is available in the time history of the output, and only the $r^{th}$ time derivative $y_d^{(r)}[m]$ is directly affected by the input $u[m]$. In particular, output derivatives can be related to the output time history using finite-difference techniques, especially in the noise-free case, and hence direct computation of the derivatives might not appear to be critical if the time history of the output is used during training. Nevertheless, including computed or measured values (even with some noise) of the time derivative $\dot{y}[m]$ (which is not directly affected by the input $u[m]$) can still improve the precision of the inverse operator, as seen in Fig.~\ref{fig_inv_map_noise_free} and Table~\ref{tab_derivative_impact}. In particular, the maximum error $\overline{e}_u$ in (\ref{metric_3}) reduces from $9.82\% $ to $2.10\%$ for the case with noise-free training data and from $114.68\% $ to $37.82\%$ for the case with noisy training data, as seen in Table~\ref{tab_derivative_impact}. Therefore, while in the noise-free case the precision could be improved by a smaller sampling time $\Delta t$ without the inclusion of $\dot{y}$, in the noisy case direct measurements of the output time derivatives can substantially improve the inverse operator training and lead to better precision in its predictions. Moreover, the precision of the inverse operator is further improved by including time derivatives up to the required order $r$ (the relative degree). \section{Conclusion} \vspace{-0.1in} This work showed that Koopman-type data-enabled inverse operators can have high precision if a sufficiently large time history of the output is included to reduce the impact of hidden internal states. Additionally, measurements of the instantaneous output time derivatives (up to the relative degree) are required during training to improve the data-enabled inverse operator precision. Our ongoing work is aimed at extending these results to Koopman-type data-enabled inverse operators for nonlinear nonminimum-phase systems. \vspace{-0.1in}
\section{Introduction} In this paper, we consider a nonparametric regression model, where bivariate observations \[\{(X_1,z_1),\ldots,(X_n,z_n)\}\] satisfy the following equations: \begin{equation} \label{1} X_i=f({z}_i)+\varepsilon_i, \qquad i=1,\ldots,n, \end{equation} where $f(t)\equiv f(\omega, t)$, $t\in [0,1]$, is an unknown random function (process) which is continuous almost surely, the design $\{z_i;\,i=1,\ldots,n\}$ consists of observable random variables with values in~$[0,1]$ and possibly unknown distributions, and the design points are not necessarily independent or identically distributed. We will consider the design as a triangular array, i.e., the random variables $\{z_i;\; i = 1, \ldots, n\}$ may depend on $n$. In particular, this scheme includes regression models with fixed design. The random regression function $f(t)$ is not supposed to be design independent. We will give below some fairly standard conditions of regression analysis on the random errors $\{\varepsilon_i;\, i = 1, \ldots, n \}$. In particular, they are supposed to be centered, and not necessarily independent or identically distributed. The paper is devoted to constructing uniformly consistent estimators for the regression function $f(t)$ under minimal assumptions on the correlation of the design points. The most popular kernel estimation procedures in the classical case of a nonrandom regression function are apparently related to the Nadaraya--Watson, Priestley--Chao, and Gasser--M\"{u}ller estimators, local polynomial estimators, as well as their modifications (e.g., see \cite{1996-Fa}, \cite{2003-FY}, \cite{2002-Gy}, \cite{1990-Ha}, \cite{1988-M}). We are primarily interested in the dependence conditions on the design elements $\{{z}_i\}$. In this regard, a huge number of publications in the field of nonparametric regression can be conditionally divided into two groups: papers with a random design in the first one, and papers with a fixed design in the second one. In the papers dealing with random design, either independent and identically distributed observations are considered or, as a rule, stationary sequences of observations that satisfy one or another known form of dependence. In particular, various types of mixing conditions, schemes of moving averages, associated random variables, Markov or martingale properties, and so on have been used. In this regard, we note, for example, the papers \cite{2003-CD}, \cite{1979-Dev}, \cite{1990-Ga}, \cite{2002-Gy}, \cite{2008-Ha}, \cite{1984-HL}, \cite{2016-HL}, \cite{2001-JM}, \cite{2011-K1}, \cite{1989-Lie}, \cite{2010-LJ}, \cite{2016-LYS}--\cite{2005-Ma}, \cite{1997-Mu}, \cite{1970-Nad}, \cite{1990-Ro}, \cite{2013-SX}. In the recent papers \cite{2012-CGL}, \cite{2007-KMT}, \cite{2016-LW}, and \cite{2014-WC}, nonstationary sequences of design elements with one or another special type of dependence are considered (Markov chains, autoregression, partial sums of moving averages, etc.). In the case of fixed design, in the overwhelming majority of works, certain regularity conditions on the design are assumed (e.g., see \cite{2020-BBL}--\cite{2001-BF}, \cite{2007-GRT}, \cite{2008-Ha}, \cite{1984-HL}, \cite{2018-TXW}, \cite{1994-Wu}, \cite{2020-ZZ}). For example, the nonrandom design points $z_i$ are most often given by the formula $z_i = g(i/n) + o(1/n)$ with some function $g$ of bounded variation, where the error $o(1/n)$ is uniform in $i = 1,\ldots, n$. If $g$ is linear then we get a so-called {\it equidistant} design.
Another version of the regularity condition is the relation $\max\nolimits_{i \leq n} (z_i-z_{i-1}) = O(1/n)$ (here it is assumed that the design elements are arranged in increasing order). The problem of uniform approximation of a regression function has been studied by many authors (e.g., see \cite{1979-Dev}, \cite{2005-Ei}, \cite{2007-GRT}, \cite{2008-Ha}, \cite{1984-HL}, \cite{1993-I}, \cite{2005-LJ}, \cite{1989-Lie}, \cite{2016-LYS}, \cite{1982-MS}, \cite{1970-Nad}, \cite{2013-SX}, \cite{2014-WC}, \cite{ZLL-2018}, and the references there). In connection with studying the random regression function $f(t)$, we note, for example, the papers \cite{2006-Ha}, \cite{KR-2017}, \cite{2010-Li}, \cite{LJ-2020}, \cite{Y-2007}--\cite{ZLL-2018}, where the mean and covariance functions of the random regression function~$f$ are estimated in the case when, for $N$ independent copies $f_1,\ldots,f_N$ of the function $f$, noisy values of each of these trajectories are observed at some collection of design elements (the design can be either common to all trajectories or different from series to series). Estimation of the mean and covariance functions is an actively developing area of nonparametric estimation, especially in the last couple of decades; it is both of independent interest and plays an important role in the subsequent analysis of the random process $f$ (e.g., see \cite{HE-2015}, \cite{KR-2017}, \cite{2010-Li}, \cite{Mu-2005}, \cite{WCM-2016}, \cite{ZW-2016}). We consider one of the variants of this problem as an application of the main result. The purpose of this article is to construct estimators that are uniformly consistent (in the sense of convergence in probability) not only in the cases of dependence reviewed above, but also for significantly different correlation structures of the observations, when the conditions of ergodicity or stationarity are not satisfied, and the classical mixing conditions and other well-known dependence restrictions fail. Note that the proposed estimators belong to the class of local linear kernel estimators, but with weights somewhat different from those in the classical version: instead of the original observations, we consider their concomitants associated with the variational series based on the design observations, and the corresponding spacings are taken as additional weights in the weighted least-squares method generating the above-mentioned new estimators. It is important to emphasize that these estimators are universal regarding the nature of dependence of the observations: the design can be either fixed, and not necessarily regular, or random, not necessarily satisfying the traditional correlation conditions. In particular, the only condition on the design points that guarantees the uniform consistency of the new estimators is that they densely fill the domain of the regression function. In our opinion, this condition is very natural and, in fact, necessary for recovering the function on the domain covered by the design elements. Previously, similar ideas were implemented in \cite{2020} for slightly different estimators (for details, see Section~4). Similar conditions on the design elements were also used in \cite{2022} and \cite{2021} in nonparametric regression, and in \cite{2019}--\cite{2018-eng} in nonlinear regression. The paper has the following structure. Section 2 contains the main results; Section 3 discusses the problem of estimating the mean function of a stochastic process.
Comparison of the universal local linear estimators with some known ones is given in Section~4. Section~5 contains some results of computer simulation. In Section~6, we compare the results of using the new universal local linear estimators with the most common data-analysis approaches, based on the epidemiological study ESSE-RF. In Section 7, we briefly summarize the results of the study. The proofs of the results from Sections 2--4 are deferred to Section~8. \section{Main results} We need a number of assumptions. $({\bf D})$ {\it The observations $X_1,\ldots, X_n$ are represented in the form {\rm(\ref{1})}, where the unknown random regression function $f:[0,1]\to {\mathbb R}$ is continuous almost surely. The design points $\{z_i;\,i=1,\ldots,n\}$ are a set of observable random variables with values in $[0,1]$, having, generally speaking, unknown distributions, and are not necessarily independent or identically distributed. Moreover, the random variables $\{z_i; \,i = 1, \ldots,n\} $ may depend on $n$, i.e., can be considered as an array of design observations. The random function $f(t)$ may be design dependent.} $({\bf E})$ {\it For all $n \geq 1$, the unobservable random errors $\{\varepsilon_i; \, i = 1, \ldots,n\}$ satisfy with probability $1$ the following conditions for all $i, j \le n$ and $i \neq j$$:$ \begin{equation}\label{w2} {\mathbb E}_{{\cal F}_n}\varepsilon_i=0,\qquad \sup_{i\leq n}{\mathbb E}_{{\cal F}_n}\varepsilon^2_i\le \sigma^2, \qquad {\mathbb E}_{{\cal F}_n}\varepsilon_i\varepsilon_j= 0, \end{equation} where the constant $\sigma^2>0$ may be unknown and does not depend on $n$, and the symbol ${\mathbb E}_{{\cal F}_n}$ stands for the conditional expectation given the $\sigma$-field generated both by the paths of the random process $f(\cdot)$ and by the random variables $\{z_{i}; \,i=1,\ldots,n\}$. } $({\bf K})$ {\it A kernel $K(t)$, $t\in \mathbb R$, is equal to zero outside the interval $[-1,1]$ and is the density of a symmetric distribution with support in $[-1,1]$, i.e., $K(t)\ge 0 $, $K(t)=K(-t)$ for all $t\in [-1,1]$, and $\int\nolimits_{-1}^1 K(t) dt =1$. We assume that the function $K(t)$ satisfies the Lipschitz condition with a constant $1\le L< \infty$ and $K(\pm 1)=0$. } In what follows, we denote by $\kappa_j$, $j=0,1,2,3$, the absolute $j$th moment of the distribution with density $K(t)$, i.e., $\kappa_j= \int\nolimits_{-1}^1|u|^jK(u)du$. Put $K_{h}(t)=h^{-1} K(h^{-1}t)$. It is clear that $K_{h}(s)$ is a probability density with support lying in $[-h,h]$. We also need the notation $$\|K\|^2=\int\limits_{-1}^1K^2(u)du, \qquad\kappa_j(\alpha)=\int\limits_{-1}^{\alpha}t^jK(t)dt,\quad \alpha\in[0,1], \quad j=0,1,2,3.$$ \begin{rem} We emphasize that assumption $(D)$ includes the fixed design situation. We consider the segment $[0,1]$ as the design domain solely for the sake of simplicity of exposition of the approach. In the general case, instead of the segment $[0,1]$, one can consider an arbitrary Jordan measurable subset of $\mathbb {R}$. \end{rem} Further, we denote by $z_{n:1}\leq \ldots\leq z_{n:n}$ the order statistics constructed from the sample $\{z_i;\,i=1,\ldots, n\}$. Put $$z_{n:0}:=0,\quad z_{n:n+1}:=1,\quad \Delta z_{ni}:=z_{n:i}-z_{n:i-1}, \quad i=1,\ldots,n+1.$$ The response variable and the random error from (\ref{1}) corresponding to the order statistic $z_{n:i}$ (i.e., $X_l$ and $\varepsilon_{l}$ with $z_l=z_{n:i}$) will be denoted by $X_{ni}$ and $\varepsilon_{ni}$, respectively.
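In computational terms, the order statistics, their concomitants, and the boundary-augmented spacings can be formed as in the following Python/NumPy sketch (ours, for illustration only):
\begin{verbatim}
import numpy as np

def order_and_spacings(z, X):
    # Order statistics z_{n:1} <= ... <= z_{n:n}, their concomitants
    # X_{n1}, ..., X_{nn}, and the spacings
    # Delta z_{ni} = z_{n:i} - z_{n:i-1} with z_{n:0} = 0, z_{n:n+1} = 1.
    idx = np.argsort(z)
    z_sorted, X_sorted = z[idx], X[idx]
    knots = np.concatenate(([0.0], z_sorted, [1.0]))
    dz = np.diff(knots)        # length n + 1
    delta_n = dz.max()         # the mesh delta_n from condition (D_0)
    return z_sorted, X_sorted, dz, delta_n
\end{verbatim}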
It is easy to see that the new errors $\{\varepsilon_{ni};\,i=1,\ldots, n\}$ satisfy condition $(E)$ as well. Next, by $O_p(\eta_n)$ we denote a random variable $\zeta_n$ such that, for all $M>0$, one has $$\limsup\limits_{n\to\infty}{\mathbb P}(|\zeta_n|/\eta_n>M)\le \beta(M), $$ where $\lim_{M\to\infty}\beta(M)=0$, $\{\eta_n\}$ are positive (possibly random) variables, and the function $\beta(M)$ may depend on the kernel $K$ and on $\sigma^2$. We agree that, throughout what follows, all limits, unless otherwise stated, are taken as $n\to\infty$. Let us introduce one more constraint, which is the crucial condition of the paper (in particular, the only condition on the design points that guarantees the existence of a uniformly consistent estimator; see also the comments at the end of the section). $({\bf D}_0)$ {\it The following limit relation holds$:$ $\delta_n:=\max\limits_{1\leq i\le n+1}\Delta z_{ni}\stackrel{p}{\to} 0$. } Finally, for any $ h \in (0,1) $, we introduce the following class of estimators for the regression function $f$: \begin{eqnarray}\label{est1} \widehat f_{n,h}(t):= I(\delta_n\le c_*h)\sum_{i=1}^n\frac{w_{n2}(t)-(t-z_{n:i})w_{n1}(t)}{w_{n0}(t)w_{n2}(t)-w_{n1}^2(t)}X_{ni}K_{h}(t-z_{n:i})\Delta z_{ni}, \end{eqnarray} where $I(\cdot)$ is the indicator function, \begin{equation}\label{2} c_*\equiv c_*(K):=\frac{\kappa_2-\kappa^2_1}{96 L(6L+\kappa_2+\kappa_1/2)}<\frac{1}{864L}; \end{equation} hereinafter, we use the notation $$w_{nj}(t):=\sum_{i=1}^n(t-z_{n:i})^jK_{h}(t-z_{n:i})\Delta z_{ni}, \quad j=0,1,2,3. $$ \begin{rem} It is easy to see that the difference $\kappa_2-\kappa^2_1$ is the variance of a non-degenerate distribution and is therefore strictly positive. \end{rem} \begin{rem} It is easy to verify that the kernel estimator (\ref{est1}), without the indicator factor, is the first coordinate of the two-dimensional weighted least-squares estimate, i.e., of the two-dimensional point $ (a^*, b^*) $ at which the following minimum is attained: \begin{eqnarray}\label{est2} \min\limits_{a,b} \sum_{i=1}^n\left(X_{ni}-\big(a+b(t-z_{n:i})\big)\right)^2K_{h}(t-z_{n:i})\Delta z_{ni}. \end{eqnarray} Thus, the proposed class of estimators is, in a certain sense (in fact, by construction), close to the classical local linear kernel estimators, but in the weighted least-squares method (\ref{est2}) we use slightly different weights. \end{rem} \begin{rem} In the case when there are multiple design points, some spacings $\Delta z_{ni}$ vanish, and we lose some of the sample information in the estimator (\ref{est1}). In this case, it is proposed, before using the estimator (\ref{est1}), to reduce the sample by replacing the observations $X_i$ sharing the same design point $z_i$ with their sample mean and keeping only one representative of each multiple design point in the new sample. The averaged observations then have less noise, so, despite the smaller size of the new sample, we do not lose the information contained in the original sample. \end{rem}
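For illustration, here is a minimal Python/NumPy sketch (ours) of the estimator (\ref{est1}) with the Epanechnikov kernel $K(u)=\frac34(1-u^2)$ on $[-1,1]$, which satisfies condition $(K)$; the constant \texttt{c\_star} should be computed from (\ref{2}) for the chosen kernel, and \texttt{order\_and\_spacings()} is the helper sketched above.
\begin{verbatim}
import numpy as np

def K(u):
    # Epanechnikov kernel: a symmetric density on [-1, 1] with K(+-1) = 0.
    return 0.75 * np.clip(1.0 - u * u, 0.0, None)

def f_hat(t, z_sorted, X_sorted, dz, h, c_star):
    # Universal local linear estimator (est1) at a point t in [0, 1];
    # dz is the full length-(n + 1) spacing vector, so dz.max() = delta_n.
    n = len(z_sorted)
    if dz.max() > c_star * h:
        return 0.0            # the indicator I(delta_n <= c_* h) vanishes
    d = t - z_sorted
    w = K(d / h) / h * dz[:n]  # K_h(t - z_{n:i}) * Delta z_{ni}
    w0, w1, w2 = w.sum(), (d * w).sum(), (d * d * w).sum()
    # delta_n <= c_* h guarantees design points within h of t, so the
    # denominator below is nonzero.
    return ((w2 - d * w1) * w * X_sorted).sum() / (w0 * w2 - w1 * w1)
\end{verbatim}
Evaluating \texttt{f\_hat} on a grid of $t$ values gives the uniform approximation studied in Theorem~\ref{theor-1} below; the bandwidth $h$ can be tuned, e.g., by cross-validation, as noted in Remark~\ref{zam-0} below.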
Let us further agree to denote by $C_j$, $j\geq 1$, absolute positive constants, and by $C_j^*$, positive constants depending only on the kernel $K$. The main result of this section is as follows. \begin{theor}\label{theor-1} Let conditions $(D)$, $(E)$, and $(K)$ be satisfied. Then, for any fixed $h\in(0,1/2)$, the following bound holds with probability $1$: \begin{eqnarray}\label{5} \sup_{t\in [0,1]}|\widehat f_{n,h}(t)-f(t)|\le C_1^*\omega_f(h)+\zeta_n(h), \end{eqnarray} where $\omega_f(h):=\sup\limits_{u,v\in [0,1]: |u-v|\le h}|f(u)-f(v)|$ and the random variable $\zeta_n(h)$ satisfies the relation \begin{eqnarray}\label{6} {\mathbb P}\left( \zeta_n(h)>y,\, \delta_n\le c_*h\right)\le C_2^*\sigma^2\frac{{\mathbb E}\delta_n}{h^2y^2}, \end{eqnarray} with the constant $c_*$ from $(\ref{2})$. \end{theor} \begin{rem} {\rm As follows from the proof of Theorem 1, the constants $C_1^*$ and $C_2^*$ have the following structure: $$ C_1^*=C_1\frac{L^2}{\kappa_2-\kappa_1^2},\qquad C_2^*=C_2\frac{L^4}{(\kappa_2-\kappa_1^2)^2}. $$ } \end{rem} \begin{rem}\label{zam-0} {\rm Since $\delta_n\le 1$, under condition $(D_0)$ the limit relation ${\mathbb E}\delta_n\to 0$ holds. Therefore, taking into account Theorem \ref{theor-1}, we can assert that $\zeta_n(h)=O_p(h^{-1}({\mathbb E}\delta_n)^{1/2})$. Thus, the bandwidth $h$ can be determined, for example, by the relation \begin{eqnarray}\label{7} h_n=\sup\left\{h>0:\,{\mathbb P}\left(\omega_f(h)\ge h^{-1}({\mathbb E}\delta_n)^{1/2}\right)\le h^{-1}({\mathbb E}\delta_n)^{1/2}\right\}. \end{eqnarray} It is easy to see that, when $(D_0)$ is satisfied, the limit relations $h_n \to 0$ and $h^{-1}_n({\mathbb E} \delta_n)^{1/2} \to 0 $ hold. In fact, the value of $h_n$ balances in $h$ the orders of smallness in probability of the two terms on the right-hand side of (\ref{5}). Note also that, for nonrandom $f$, one can choose $h \equiv {h}_n$ as a solution to the equation \begin{eqnarray}\label{8} h^{-1}({\mathbb E}\delta_n)^{1/2}=\omega_f(h). \end{eqnarray} It is clear that this solution tends to zero as $n$ grows. The relations (\ref{7}) and (\ref{8}) allow us to obtain the order of smallness of the optimal bandwidth $h$, but not its optimal value. In practice, $h$ can be chosen, for example, by cross-validation. \hfill$\square$ } \end{rem} From Theorem \ref{theor-1} and Remark \ref{zam-0} it is easy to obtain the following corollary. \begin{cor} Let the conditions $(D)$, $(D_0)$, $(K)$, and $(E)$ be satisfied, let the regression function $f(t)$ be nonrandom, and let $\cal C $ be an arbitrary subset of equicontinuous functions in $C[0,1]$ $($for example, a precompact set$)$. Then $$ \gamma_n({\cal C})=\sup_{f\in \cal C}\sup_{t\in [0,1]}| \widehat f_{n,\tilde h_n}(t)-f(t)|\stackrel{p}\to 0, $$ where $\tilde h_n$ is defined by equation $ (\ref{8}) $, in which the modulus of continuity $ \omega_{f}(h) $ is replaced with the universal modulus $\omega_{\cal C}(h)=\sup\nolimits_{f\in \cal C}\omega_f(h)$. Moreover, the asymptotic relation $\gamma_n({\cal C})=O_p(\omega_{\cal C}(\tilde h_n))$ holds. \end{cor} \begin{rem} {\rm It is easy to see that, for a nonrandom $f(t)$, the modulus of continuity in (\ref{8}) can be replaced by one or another upper bound for $\omega _{\cal C}(h)$, yielding the corresponding upper bound for $\gamma_n({\cal C})$. Consider the case ${\mathbb E}\delta_n=O(1/n)$. If $ \cal C $ consists of functions $f(t)$ satisfying the H\"{o}lder condition with exponent $\alpha\in (0,1]$ and a universal constant then $\tilde h_n = O \left(n^{-\frac{1}{2 (1+\alpha)}}\right)$ and $\omega_{\cal C}(\tilde h_n)= O\left (n^{-\frac{\alpha}{2(1+\alpha)}}\right )$. In particular, if the functions from ${\cal C}$ satisfy the Lipschitz condition ($\alpha = 1$) with a universal constant then $\gamma_n({\cal C})=O_p(n^{-1/4})$.
} \end{rem} From Theorem \ref{theor-1} and Remark \ref{zam-0} we obtain the following corollary. \begin{cor} Let the conditions $ (D) $, $ (D_0) $, $ (K) $, and $ (E) $ be satisfied and let the modulus of continuity $ \omega_{f}(h) $ of the random regression function $f(t)$ with probability $ 1 $ admit the upper bound $ \omega_{f}(h)\le\zeta d(h)$, where $ \zeta> 0 $ is a random variable and $ d(h) $ is a positive continuous nonrandom function such that $d(h)\to 0$ as $h\to 0$. Then \begin{eqnarray}\label{optim3} \sup_{t\in [0,1]}| \widehat f_{n,\hat h_n}(t)-f(t)|\stackrel{p}\to 0, \end{eqnarray} where the value $\hat h_n$ is defined by $(\ref{8})$ with $\omega_f(h)$ replaced by $d(h)$. \end{cor} Let us discuss condition $(D_0)$ in more detail. Obviously, condition $(D_0)$ is satisfied for any nonrandom regular design (this is the case of nonidentically distributed $\{z_i\}$ depending on $n$). If $ \{z_i\} $ are independent and identically distributed and the interval $ [0,1] $ is the support of the distribution of $ z_1 $ then condition $ (D_0) $ is also satisfied. In particular, if the distribution density of $ z_1 $ is bounded away from zero on $ [0,1] $ then $ \delta_n = O\left(\log n/n\right) $ holds (see details in \cite{2020}). If $ \{z_i; \, i \ge 1 \} $ is a stationary sequence with a marginal distribution with the support $[0,1]$, satisfying an $ \alpha $-mixing condition, then condition $ (D_0) $ is also satisfied (see Remark \ref{rem-4} below). Note that the dependence of the random variables $\{z_i\}$ satisfying condition $ (D_0) $ can be much stronger, which is illustrated by the following example. \begin{ex}\label{ex-1} {\rm Let the sequence of random variables $ \{z_i; \, i\ge 1 \} $ be defined by the relation \begin{equation}\label{11} { z}_{i}=\nu_{i}{u}_{i}^{l}+(1-\nu_{i}){u}_{i}^{r}, \end{equation} where $\{{u}_{i}^l\}$ and $\{{u}_{i}^r\}$ are independent and uniformly distributed on $ [0,1/2] $ and $ [1/2,1] $, respectively, and the sequence $ \{\nu_i \} $ does not depend on $ \{{u}_{i}^l \} $, $ \{{u}_{i}^r \} $ and consists of Bernoulli random variables with success probability $1/2 $, i.e., the distribution of the random variables $ {z}_i $ is an equilibrium mixture of two uniform distributions on the corresponding intervals. The dependence between the random variables $ \nu_i $ is defined, for every natural number $ i $, by the equalities $ \nu_{2i-1} = \nu_1 $ and $ \nu_{2i} = 1- \nu_1 $. In this case, the random variables $\{z_i;\,i\ge 1\}$ in (\ref{11}) form a stationary sequence of random variables uniformly distributed on the segment $ [0,1] $ and satisfying condition $ (D_0) $. On the other hand, for all natural numbers $m$ and $n$, \begin{eqnarray*} {\mathbb P}(z_{2m}\le 1/2,\,z_{2n-1}\le 1/2)=0. \end{eqnarray*} Thus, none of the known conditions of weak dependence of random variables (in particular, the mixing conditions) are satisfied here. Following the scheme of this example, it is possible to construct various sequences of dependent random variables uniformly distributed on $ [0,1] $ by choosing sequences of Bernoulli switches with the conditions $ \nu_{j_k} = 1 $ and $ \nu_{l_k} = 0 $ for infinitely many indices $ \{j_k\} $ and $ \{l_k\} $. In this case, condition $ (D_0) $ will also be satisfied, but the corresponding sequence $ \{z_i\} $ (not necessarily stationary) may not even satisfy the strong law of large numbers.
For example, this is the case when $ \nu_j = 1-\nu_1 $ for $ j = 2^{2k-1}, \ldots, 2^{2k}-1 $, and $ \nu_j = \nu_1 $ for $ j = 2^{2k} , \ldots, 2^{2k + 1}-1 $, where $ k = 1,2, \ldots $ (i.e., we randomly choose one of the two segments $ [0,1/2] $ and $ [1/2,1] $, into which we randomly throw the first point, and then alternate the selection of one of the two segments over the following numbers of elements of the sequence: $1$, $ 2$, $2^2$, $2^3$, etc.). Indeed, we can introduce the notation $n_k = 2^{2k}-1$, $\tilde n_k = 2^{2k + 1}-1 $, $S_m = \sum\nolimits_{i = 1}^m z_{i}$ and note that, on the event $\{\nu_1 = 1\}$, one has \begin{equation*} \frac{S_{n_k}}{n_k}=\frac{1}{n_k}\sum\limits_{i\in N_{1,k}}u_{i}^l+\frac{1}{n_k}\sum\limits_{i\in N_{2,k}}u_{i}^r, \end{equation*} where $N_{1,k}$ and $N_{2,k}$ are the sets of indices for which the observations $\{{z}_i, i\leq n_k\}$ lie in the intervals $[0,1/2]$ or $[1/2,1]$, respectively. It is easy to see that $\#(N_{1,k})=n_k/3$ and $\#(N_{2,k})=2\#(N_{1,k})$. Hence, ${S_{n_k}}/{n_k}\to{7}/{12}$ almost surely as $k\to\infty$ due to the strong law of large numbers for the sequences $\{u_{i}^l\}$ and $\{u_{i}^r\}$. On the other hand, as $k\to\infty$, on the event $\{\nu_1=1\}$ one has \begin{equation}\label{10} \frac{S_{\tilde n_k}}{\tilde n_k}=\frac{1}{\tilde n_k}\sum\limits_{i\in \tilde N_{1,k}}u_{i}^l+\frac{1}{\tilde n_k}\sum\limits_{i\in \tilde N_{2,k}}u_{i}^r\to \frac{5}{12}, \end{equation} where $\tilde N_{1,k}$ and $\tilde N_{2,k}$ are the sets of indices for which the observations $\{{z}_i, i\leq \tilde n_k\}$ lie in the intervals $[0,1/2]$ or $[1/2,1]$, respectively. Proving the convergence in (\ref{10}), we took into account that $\#(\tilde N_{1,k})=(2^{2k+2}-1)/3$ and $\#(\tilde N_{2,k})=2n_k/3$, i.e., $\#(\tilde N_{1,k})=2\#(\tilde N_{2,k})+1$. Similar arguments are valid for all elementary events from $\{\nu_1=0\}$. $\hfill\Box$ } \end{ex} \begin{rem} \label{rem-4} In the case where the design points $\{z_i;\,i\ge 1\}$ form a sequence (i.e., do not depend on $n$), condition $(D_0)$ will be fulfilled if, for all $\delta\in (0,1)$, \begin{equation}\label{izmel2} p_n(\delta)\equiv\sup_{|\Delta|=\delta}{\mathbb P}\Big(\bigcap\limits_{i\le n}\{z_i\notin \Delta\}\Big)\to 0, \end{equation} where the supremum is taken over all intervals $\Delta\subset [0,1]$ of length $\delta$. Indeed, for any natural number $N>1$, we divide the interval $[0,1]$ into $N$ subintervals $\Delta_k$, $k=1,\ldots,N$, of length $1/N$. Then one has $${\mathbb P}\Big(\max\limits_{1\leq i\le n+1}\Delta z_{ni}>\frac{2}{N}\Big)\leq \sum\limits_{k=1}^N{\mathbb P}\Big(\bigcap\limits_{i\le n}\{z_i\notin \Delta_k\}\Big)\leq N \max\limits_k{\mathbb P}\Big(\bigcap\limits_{i\le n}\{z_i\notin \Delta_k\}\Big)\leq Np_n(1/N),$$ since the event $\big\{\max\nolimits_{1\leq i\le n+1}\Delta z_{ni}>2/N\big\}$ implies the existence of an interval $\Delta_k$ of length~$1/N$ that does not contain any points from the collection $\{z_i\}$. Thereby, condition (\ref{izmel2}) implies the limit relation $ \max\nolimits_{i \le n + 1} \Delta z_{ni} \stackrel{p}\to 0 $, which is equivalent to convergence with probability $1$ due to the monotonicity of the sequence $\max\nolimits_{i \le n + 1} \Delta z_{ni}$. In particular, if $ \{z_i\} $ are independent then $ p_n(\delta) = e^{-c(\delta)n}$ with $c(\delta)> 0$, i.e., as $ n\to\infty $, the finite collection $\{z_i\}$ with probability $ 1 $ forms a refining partition of the segment $ [0,1] $.
It is easy to show that if $\{z_i;\,i \ge 1 \} $ is a stationary sequence satisfying an $\alpha$-mixing condition and having a marginal distribution with support $[0,1]$ then (\ref{izmel2}) will be valid. \hfill$\square$ \end{rem} \section{Estimating the mean function of a stochastic process} Consider the following statement of the problem of estimating the expectation of an almost surely continuous stochastic process $f(t)$. There are $ N $ independent copies of the regression equation (\ref{1}): \begin{equation} \label{50} X_{i,j}=f_j(z_{i,j})+\varepsilon_{i,j}, \qquad i=1,\ldots,n,\,\,\,\,j=1,\ldots,N, \end{equation} where $f(t), f_1(t),\ldots, f_N(t)$, $t\in [0,1]$, are independent identically distributed almost surely continuous unknown random processes, the set $ \{\varepsilon_{i, j}; \,i = 1, \ldots, n \} $ satisfies condition $ (E) $ for any $ j $, and the set $ \{z_{i, j}; \, i = 1, \ldots, n \} $ meets conditions $ (D) $ and $ (D_0) $ for any $ j $ (here and below, the index $ j $ for the random variables under consideration denotes the number of the copy of Model (\ref{1})). In particular, under the assumption that condition $ (K) $ is valid, by $ \widehat f_{n, h, j} (t) $, $ j = 1, \ldots, N, $ we denote the estimator given by relation (\ref{est1}) with the quantities from (\ref{1}) replaced by the corresponding ones from (\ref{50}). Finally, an estimator for the mean function is determined by the equality \begin{equation}\label{LLN} \overline{\widehat f_{N,n,h}}(t)=\frac{1}{N}\sum\limits_{j=1}^N\widehat f_{n,h,j}(t). \end{equation} As a consequence of Theorem \ref{theor-1}, we obtain the following assertion. \begin{theor}\label{theor-2} Let Model $(\ref{50})$ satisfy the above-mentioned conditions and, moreover, \begin{equation}\label{51} {\mathbb E}\sup_{t\in [0,1]}|f(t)|<\infty, \end{equation} while the sequences $h\equiv h_n\to 0$ and $N\equiv N_n\to \infty$ meet the restrictions \begin{equation}\label{52} h^{-2}{\mathbb E}\delta_n\to 0\,\,\,\mbox{and}\,\,\,N{\mathbb P}(\delta_n>c_*h)\to 0. \end{equation} Then \begin{equation}\label{53} \sup\limits_{t\in [0,1]}\left|\overline{\widehat f_{N,n,h}}(t)-{\mathbb E}f(t)\right|\stackrel{p}\to 0. \end{equation} \end{theor} \begin{rem}\label{za3} {\rm If condition (\ref{51}) is replaced with the slightly stronger constraint $${\mathbb E}\sup\nolimits_{t\in [0,1]}f^{2}(t)<\infty$$ then, under conditions similar to (\ref{52}), one can prove the uniform consistency of the estimator $$\widehat M_{N,n,h}(t_1,t_2)=\frac{1}{N}\sum\limits_{j=1}^N\widehat f_{n,h,j}(t_1)\widehat f_{n,h,j}(t_2),\,\,\,\,t_1,t_2\in [0,1], $$ for the unknown mixed second moment ${\mathbb E}f(t_1)f(t_2)$, where $h\equiv h_n$ and $N\equiv N_n$ satisfy (\ref{52}). The arguments in proving this fact are quite similar to those in the proof of Theorem \ref{theor-2}, and they are omitted. In other words, under the above-mentioned restrictions, the estimator $${\widehat{ \rm Cov}}_{N,n,h}(t_1,t_2)= \widehat M_{N,n,h}(t_1,t_2)-\overline{\widehat f_{N,n,h}}(t_1)\overline{\widehat f_{N,n,h}}(t_2)$$ is uniformly consistent for the covariance function of the random regression function $f(t)$. } \end{rem} \begin{rem} The problem of estimating the mean and covariance functions plays a fundamental role in so-called functional data analysis (see, for example, \cite{HE-2015}, \cite{KR-2017}, \cite{2010-Li}, \cite{Mu-2005}).
The property of uniform consistency of certain estimators of the mean function, which is important in the context of the problem under consideration, was studied, for example, in \cite{HE-2015}, \cite{2010-Li}, \cite{YMW-2005}, \cite{ZW-2016}, \cite{ZLL-2018}. For a random design, as a rule, it is assumed that all its elements are independent identically distributed random variables (see, for example, \cite{CY-2011}, \cite{2006-Ha}, \cite{2010-Li}, \cite{WZ-2006}--\cite{ZLL-2018}). In the case where the design is deterministic, certain regularity conditions discussed above in the Introduction are usually imposed. Moreover, in the problem of estimating the mean function, it is customary to subdivide designs into certain types depending on how densely the design points fill the domain of the regression function. The literature focuses on two types of data: either the design is in some sense ``sparse'' (for example, the number of design elements in each series is uniformly bounded \cite{CY-2011}, \cite{2006-Ha}, \cite{2010-Li}, \cite{WZ-2006}, \cite{ZLL-2018}), or the design is in some sense ``dense'' (the number of elements in each series grows with the number of series \cite{CWLY-2016}, \cite{2010-Li}, \cite{WZ-2006}, \cite{ZC-2007}, \cite{ZLL-2018}). Theorem \ref{theor-2} deals with the second of these types of design under condition $(D_0)$ in each of the independent series. Note that our formulation of the problem of estimating the mean function also includes the situation of a general deterministic design. The methodologies for estimating the mean function used for dense and sparse data are often different (see, for example, \cite{Mu-2005}, \cite{WCM-2016}). In the situation of a growing number of observations in each series, it is natural to first estimate the trajectory of the random regression function in each series and then to average over all series (e.g., see \cite{CY-2011}, \cite{2006-Ha}, \cite{ZC-2007}). This is exactly what we do in (\ref{LLN}), following this conventional approach. \hfill$\square$ \end{rem} \section{Comparison with some known approaches} In \cite{2020}, under the conditions of the present paper, the following estimators were studied: \begin{equation}\label{est4} f^*_{n,h}(t)=\frac{\sum_{i=1}^nX_{ni}K_{h}(t-z_{n:i})\Delta z_{ni}}{\sum_{i=1}^nK_{h}(t-z_{n:i})\Delta z_{ni}}\equiv \frac{\sum_{i=1}^nX_{ni}K_{h}(t-z_{n:i})\Delta z_{ni}}{w_{n0}(t)}. \end{equation} Notice that \begin{equation}\label{est5} f^*_{n,h}(t)\equiv {\rm arg}\min\limits_{a} \sum\limits^n_{i=1}(X_{ni}-a)^2K_{h}(t-z_{n:i})\Delta z_{ni}. \end{equation} It is interesting to compare the new estimators $\widehat f_{n,h}(t)$ with the estimators $f_{n,h}^*(t)$ from \cite{2020} as well as with other estimators (for example, the Nadaraya--Watson estimators $\widehat f_{NW}(t)$ and classical local linear estimators $\widehat f_{LL}(t)$). Throughout this section, we assume that conditions $ (D) $, $ (K) $, and $ (E) $ are satisfied and the regression function $ f(t) $ is nonrandom. Moreover, we need the following constraint. ${({\bf IID})}$ {\it The regression function $ f (t) $ in Model {\rm(\ref{1})} is twice continuously differentiable, the errors $ \{\varepsilon_i\} $ are independent, identically distributed, centered, and independent of the design $\{z_i\}$, whose elements are independent and identically distributed. In addition, the distribution function of the random variable $ z_1 $ has a strictly positive density $ p(t) $ continuously differentiable on $ (0,1) $.
} Such severe restrictions on the parameters of the regression model are explained both by difficulties in calculating the asymptotic representations for the variances of the estimators $ \widehat f_{n,h}(t) $ and $ f_{n,h}^*(t) $ and by properties of the Nadaraya--Watson estimators, which are very sensitive to the nature of the correlation of the design elements. For any statistical estimator $ \tilde f_n(t) $ of the regression function $ f(t) $, we will use the notation $ {\rm Bias} \tilde f_n(t) $ for its bias, i.e., $ {\rm Bias}\tilde f_n(t):={\mathbb E}\tilde f_n(t)-f(t). $ Put $\overline f =\sup\nolimits_{t\in [0,1]}|f(t)|$ and, for $j=0,1,2,3$, introduce the notation \begin{eqnarray}\label{int} w_{j}(t)=\int\limits_0^1(t-z)^jK_{h}(t-z)dz=\int\limits_{z\in [0,1]: |t-z|\leq h}(t-z)^jK_{h}(t-z)dz,\quad t\in [0,1]. \end{eqnarray} The following asymptotic representation for the bias and variance of the estimator $ f_{n,h}^*(t) $ was obtained in \cite{2020}. \begin{prop} \label{le2vve} Let condition ${ (IID)}$ be fulfilled and $\inf_{t\in[0,1]}p(t)>0$. If $n\to\infty$ and $h\to 0$ so that $(\log n)^{-1}h\sqrt n\to\infty$, $h^{-2}{\mathbb E}\delta_n\to 0$, and $h^{-3}{\mathbb E}\delta_n^2\to 0$ then, for any $t\in (0,1)$, the following asymptotic relations are valid$:$ $$ {\rm Bias}f_{n,h}^*(t)=\frac{h^2\kappa_2}{2}f''(t)+o(h^2),\qquad {\mathbb Var}f_{n,h}^*(t)\sim \frac{2\sigma^2}{h np(t)}\|K\|^2. $$ \end{prop} Note that the first statement concerning the asymptotic behavior of the bias in Proposition~\ref{le2vve} was actually proved for arbitrarily dependent design elements when condition $ (D_0) $ is met. The following two propositions and corollaries are also obtained without any assumptions on the correlation of the design elements; only the conditional centering and conditional orthogonality of the errors from condition $ (E) $ are used. \begin{prop} \label{predl-4} Let $ h<1/2$. Then, for any fixed $t\in [h,1-h]$, $$ {\rm Bias}\widehat f_{n,h}(t)= {\rm Bias} f_{n,h}^*(t)+\gamma_{n,h}(t),\quad {\mathbb Var}\widehat f_{n,h}(t)={\mathbb Var}f_{n,h}^*(t)+\rho_{n,h}(t), $$ where $$|\gamma_{n,h}(t)|\le C_3^*\overline f h^{-1}{{\mathbb E}\delta_n},\quad |\rho_{n,h}(t)|\le C_4^*\big(\sigma^2+\overline f^2 \big)h^{-1}{{\mathbb E}\delta_n}.$$ \end{prop} \begin{prop} \label{predl-5} Let the regression function $ f(t) $ be twice continuously differentiable. Then, for any fixed $ t\in (0,1) $, \begin{equation}\label{hatf} {\rm Bias}\widehat f_{n,h}(t)=\frac{f''(t)}{2}B_{0}(t)+O({\mathbb E}\delta_n/h)+o(h^2), \end{equation} where \begin{equation}\label{B_0} B_0(t)=\frac{w^2_{2}(t)-w_{3}(t)w_{1}(t)}{w_{0}(t)w_{2}(t)-w^2_{1}(t)}. \end{equation} Moreover, \begin{equation}\label{starf} {\rm Bias} f_{n,h}^*(t)=-f'(t)\frac{w_{1}(t)}{w_{0}(t)}+\frac{f''(t)}{2}\frac{w_{2}(t)}{w_{0}(t)} + O({\mathbb E}\delta_n)+o(h^2); \end{equation} in addition, the error terms $o(h^2)$ and $O(\cdot)$ in $(\ref{hatf})$ and $(\ref{starf})$ are uniform in $t$. \end{prop} \begin{cor}\label{equiv} {\it Let the regression function $ f(t) $ be twice continuously differentiable, $h\to 0$, and $h^{-3}{\mathbb E}\delta_n\to 0$. Then, for each fixed $ t\in (0,1) $ such that $f''(t)\neq 0$, the following asymptotic relations are valid$:$} $${\rm Bias}\widehat f_{n,h}(t)\sim {\rm Bias} f_{n,h}^*(t)\sim \frac{f''(t)}{2}\kappa_2h^2. $$ \end{cor} \begin{cor}\label{atzero} Suppose that, under the conditions of the previous corollary, $ f $ has nonzero first and second derivatives in a neighborhood of zero.
Then, for any fixed positive $ \alpha <1 $ such that $ \kappa_1 (\alpha) <0 $, the following asymptotic relations hold$:$ $${\rm Bias}\widehat f_{n,h}(\alpha h)\sim \frac{1}{2}h^2D(\alpha)f''(0+),\qquad {\rm Bias} f_{n,h}^*(\alpha h)\sim -h\frac{\kappa_1(\alpha)}{\kappa_0(\alpha)}f'(0+),$$ where $$D(\alpha)=\frac{\kappa^2_2(\alpha)-\kappa_3(\alpha)\kappa_1(\alpha)}{\kappa_0(\alpha)\kappa_2(\alpha)-\kappa^2_1(\alpha)}.$$ \end{cor} Note that, due to the Cauchy--Bunyakovsky inequality and the properties of the density $ K (\cdot) $, the strict inequality $ \kappa_0 (\alpha) \kappa_2 (\alpha) - \kappa^2_1(\alpha)> 0 $ holds for any $ \alpha \in [0,1] $. \begin{rem} Similar relations take place in a neighborhood of the right boundary of the segment $ [0,1] $, when $ t = 1- \alpha h $ for any $ \alpha \le 1 $. In this case, in the above asymptotics, one simply needs to replace the right-hand derivatives at zero with the analogous (nonzero) left-hand derivatives at the point 1, and to replace the quantities $ \kappa_j(\alpha) $ with $ \tilde \kappa_j (\alpha) = \int \nolimits_{-\alpha}^1v^jK(v)dv = (-1)^j \kappa_j(\alpha) $. The coefficient $ D (\alpha) $ then remains unchanged, while the corresponding coefficient on the right-hand side of the second asymptotic relation only changes its sign. \end{rem} Thus, the qualitative difference between the estimators $ f_{n,h}^*(t)$ and $ \widehat f_{n,h}(t) $ is observed only in neighborhoods of the boundary points $ 0 $ and $ 1 $: for the estimator $ f_{n,h}^*(t)$, in the $ h $-neighborhoods of the indicated points, the bias is of order $h$, whereas for $ \widehat f_{n,h}(t) $ it is of order $h^2$. Such a connection between the estimators (\ref{est1}) and (\ref{est4}) seems quite natural in view of the relations (\ref{est2}) and (\ref{est5}) and the known relationship at the boundary points between the Nadaraya--Watson estimators $ \hat f_{NW}(t) $ and the locally linear estimators $\widehat f_{LL}(t)$. \begin{rem} If condition $ {(IID)} $ is satisfied then, for the bias and variance of the estimators $ \widehat f_{NW}(t) $ and $ \widehat f_{LL}(t) $, the following asymptotic representations are well known (see, for example, \cite{1996-Fa}); they are valid for any $ t\in (0,1) $ under broad conditions on the parameters of the model under consideration: \begin{eqnarray*} {\rm Bias}\widehat f_{NW}(t)=\frac{h^2\kappa_2}{2p(t)}\left( f''(t)p(t) + 2f'(t)p'(t)\right)+o(h^2), \quad {\mathbb Var}\widehat f_{NW}(t)\sim \frac{\sigma^2}{h np(t)}\|K\|^2,\\ {\rm Bias} \widehat f_{LL}(t)=\frac{h^2\kappa_2}{2}f''(t)+o(h^2),\qquad {\mathbb Var}\widehat f_{LL}(t)\sim \frac{\sigma^2}{h np(t)}\|K\|^2. \end{eqnarray*} These asymptotic representations show that, if the assumptions $ {(IID)}$ are valid, then the variances of the Nadaraya--Watson estimator $ \hat f_{NW}(t) $ and of the locally linear estimator $ \hat f_{LL}(t) $ are, under broad conditions, asymptotically half the variances of the estimators $ f^*_{n,h}(t) $ and $ \hat f_{n,h}(t) $, respectively. But the mean-square error of any estimator is the sum of the variance and the squared bias, and the latter, for the compared estimators, is asymptotically determined by the quantities $ f''(t)p(t) + 2f'(t)p'(t) $ or $ f''(t)p(t) $, respectively.
In other words, if the standard deviation $\sigma$ of the errors is not very large and \begin{equation}\label{ineqcomp} \left|f''(t){p(t)}+2f'(t){p'(t)}\right|> \left|f''(t)p(t)\right|, \end{equation} then the estimator $ f^*_{n,h}(t) $ or $ \hat f_{n,h}(t) $ may be more accurate than $ \hat f_{NW}(t) $. The indicated effect for the estimator $ f^*_{n,h}(t) $ is confirmed by the results of computer simulations in \cite{2020}. Note also that, in order to choose an (in a certain sense) optimal bandwidth $ h $, one usually equates the orders of the bias and of the standard deviation of the estimator. In other words, if the assumptions $ {(IID)} $ are fulfilled then, for all four types of estimators considered here, we need to solve the equation $ h^2 \approx (nh)^{-1/2} $. Thus the optimal bandwidth has the standard order $h\approx n^{-1/5}$. \hfill$\square$ \end{rem} \begin{rem}\label{13} Estimators of the form $ \widehat f_{n,h}(t) $ and $ f^*_{n,h}(t) $ given in (\ref{est1}) and (\ref{est4}) can be defined slightly differently, depending on the choice of the partition with distinguished points $ \{z_{i};\, i=1,\ldots,n\} $ of the domain of the regression function underlying these estimators. For example, using the Voronoi partition of the segment $ [0,1] $, an estimator of the form (\ref{est4}) can be given by the equality \begin{equation}\label{est4+} \widetilde f^*_{n,h}(t)=\frac{\sum_{i=1}^nX_{ni}K_{h}(t-z_{n:i}) \widetilde\Delta z_{ni}}{\sum_{i=1}^nK_{h}(t-z_{n:i}) \widetilde\Delta z_{ni}}, \end{equation} where $\;\widetilde\Delta z_{n1}={\Delta }z_{n1}+{{\Delta }z_{n2}}/{2}$, $\;\widetilde\Delta z_{nn}={{\Delta }z_{nn}}/{2}+{\Delta }z_{nn+1}$, $\widetilde\Delta z_{ni}={({\Delta }z_{ni}+{\Delta }z_{ni+1})}/{2}$ for $i=2,\ldots,n-1$. Looking through the proofs in \cite{2020}, it is easy to see that in this case all properties of the estimator $ \widetilde f_{n,h}^*(t) $ are preserved, except for the asymptotic representation of the variance. Repeating with obvious changes the arguments in the proof of Proposition \ref{le2vve} in \cite{2020}, we have $${\mathbb Var}\widetilde f_{n,h}^*(t)\sim \frac{1.5\sigma^2}{h np(t)}\|K\|^2.$$ Thus, in the case of independent and identically distributed design points, the asymptotic variance of the estimator can be somewhat reduced by choosing one or another partition. Similarly, in the definition (\ref{est1}) of the estimators $ \widehat f_{n,h}(t) $, the quantities $ \{\Delta z_{ni} \} $ can be replaced by the Voronoi spacings $ \{\widetilde \Delta z_{ni} \} $. It is also worth noting that the indicator factor involved in the definition (\ref{est1}) of the estimator $ \widehat f_{n,h}(t) $ does not affect the asymptotic properties of the estimator given in Theorem \ref{theor-1}; we only needed it to calculate the exact asymptotic behavior of the estimator bias. \hfill$\square$ \end{rem} \section{Simulations} \label{sec-sim} In the computer simulations below, instead of estimator \eqref{est1}, we used the equivalent weighted least-squares estimator $\widehat f_{n,h}(t)$ defined by the relation \begin{eqnarray}\label{ull1} (\widehat f_{n,h}(t),\hat b(t))&=& {\rm arg}\min\limits_{a,b} \sum_{i=1}^n\left(X_{ni}-a-b(t-z_{n:i})\right)^2K_{h}(t-z_{n:i})\widetilde\Delta z_{ni}, \end{eqnarray} where the quantities $\widetilde\Delta z_{ni}$ are defined in Remark~\ref{13} above.
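For concreteness, the following minimal R sketch (our illustrative reimplementation in the notation above, not the exact code used in the simulations) evaluates estimator (\ref{ull1}) at a single point $t$ as a weighted least-squares fit; the function names are hypothetical:
\begin{verbatim}
# Universal local linear (ULL) estimator (ull1) at a point t
# (illustrative sketch; z sorted and lying in [0,1], y the responses).
tricube <- function(u) 70 / 81 * pmax(0, 1 - abs(u)^3)^3

ull <- function(t, z, y, h) {
  n   <- length(z)
  dz  <- diff(c(0, z, 1))                   # spacings Delta z_{ni}, i = 1,...,n+1
  vor <- (dz[1:n] + dz[2:(n + 1)]) / 2      # Voronoi spacings tilde Delta z_{ni}
  vor[1] <- dz[1] + dz[2] / 2
  vor[n] <- dz[n] / 2 + dz[n + 1]
  wt <- tricube((t - z) / h) / h * vor      # K_h(t - z_{n:i}) * tilde Delta z_{ni}
  if (sum(wt > 0) < 2) return(NA)           # too few design points in the window
  fit <- lm.wfit(cbind(1, t - z), y, wt)    # weighted LS over (a, b) in (ull1)
  unname(fit$coefficients[1])               # intercept a = hat f_{n,h}(t)
}
\end{verbatim}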
Estimator (\ref{ull1}) differs from estimator (\ref{est1}) in that the indicator factor is excluded and $\Delta z_{ni}$ is replaced with $\widetilde\Delta z_{ni}$, which is not essential (see Remark~\ref{13}). In addition, if there were several observations at one design point, then these observations were replaced by a single observation equal to their arithmetic mean (see Remark 4 above). Although the estimator $\widehat f_{n,h}(t)$ in (\ref{ull1}) is somewhat different from that in \eqref{est1}, we retained the same notation $\widehat f_{n,h}(t)$, which will not lead to ambiguity. In the simulations below, we will also consider the local constant estimator $\widetilde f^*_{n,h}(t)$ from (\ref{est4+}), which can be defined by the equality \begin{equation}\label{ull0} \widetilde f^*_{n,h}(t)\equiv {\rm arg}\min\limits_{a} \sum\limits^n_{i=1}(X_{ni}-a)^2K_{h}(t-z_{n:i})\widetilde\Delta z_{ni}. \end{equation} Here we also replace the observations corresponding to one design point by their arithmetic mean. Recall that the Nadaraya--Watson estimator differs from (\ref{ull0}) by the absence of the factors $\widetilde\Delta z_{ni}$ in the weighting coefficients: \begin{equation}\label{est4nv} \widehat f_{NW}(t)=\frac{\sum_{i=1}^nX_{ni}K_{h}(t-z_{n:i})}{\sum_{i=1}^nK_{h}(t-z_{n:i})}. \end{equation} The Nadaraya--Watson estimators are also weighted least-squares estimators: \begin{equation}\label{est5nv} \widehat f_{NW}(t)\equiv {\rm arg}\min\limits_{a} \sum\limits^n_{i=1}(X_{ni}-a)^2K_{h}(t-z_{n:i}). \end{equation} In the following examples, estimators (\ref{ull1}) and (\ref{ull0}), which will be called {\it universal local linear} (ULL) and {\it universal local constant} (ULC), respectively, will be compared with the linear regression (LR) estimator, the Nadaraya--Watson (NW) estimator, LOESS of order 1, as well as with estimators of generalized additive models (GAM) and of random forest (RF). For the LOESS estimators, the R {\it loess}() function was used. It is worth noting that, in the examples below, the best results were obtained by the new estimators (\ref{ull1}) and (\ref{ull0}), the LOESS estimator of order 1, and the Nadaraya--Watson estimator. With regard to the simulation examples, the main difference between estimators (\ref{ull1}) and (\ref{ull0}) and the Nadaraya--Watson and LOESS ones is that estimators (\ref{ull1}) and (\ref{ull0}) are ``more local''. This means that if a function $f(z)$ is estimated on a design interval $A$ with a ``small'' number of observations adjacent to a design interval $B$ with a ``large'' number of observations, then the Nadaraya--Watson and LOESS estimators will primarily seek to adjust to the ``large'' cluster of observations on the interval $B$. At the same time, estimators (\ref{ull1}) and (\ref{ull0}) will give equal consideration to observations on intervals of equal lengths, regardless of the distribution of the design points within the intervals. In the examples below, for all of the kernel estimators, i.e., the Nadaraya--Watson ones, LOESS, (\ref{ull1}), and (\ref{ull0}), we used the tricubic kernel $$K(t)=\frac{70}{81}\max\{0,(1 - |t|^3)^3\}.$$ We chose the tricubic kernel because it is employed in the R function {\sf loess()} which was used in the simulations. The accuracy of the models was assessed with respect to the maximum error and the mean squared error.
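In the same notation (and with the kernel and Voronoi spacings computed as in the sketch above), the two local constant estimators (\ref{ull0}) and (\ref{est4nv}) can be written side by side, which makes the role of the factors $\widetilde\Delta z_{ni}$ explicit; again, this is an illustrative sketch rather than the code actually used:
\begin{verbatim}
# Universal local constant estimator (ull0) vs. Nadaraya--Watson (est4nv):
# the only difference is the factor of the Voronoi spacings in the weights
# (the normalizing constant 1/h of K_h cancels in both ratios).
ulc <- function(t, z, y, h, vor) {          # vor: Voronoi spacings as above
  wt <- tricube((t - z) / h) * vor
  sum(wt * y) / sum(wt)
}
nw <- function(t, z, y, h) {
  wt <- tricube((t - z) / h)
  sum(wt * y) / sum(wt)
}
\end{verbatim}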
In all the examples below, except Example 3, the maximum error was estimated on a uniform grid of 1001 points on the segment $[0,10]$ by the formula $$\max_{j=1,\dots,1001} |\check f (t_j) - f(t_j)|, $$ where $t_j$ are the grid points of the segment $[0,10]$, $t_1=0$, $t_{1001}=10$, $\check f (t_j)$ are the values of the constructed estimator at the grid points, and $f (t_j)$ are the true values of the estimated function. In Example 3, a grid of 1001 points was taken on the interval from the minimum to the maximum point of the design. That was done in order to avoid assessing the quality of extrapolation, since, in that example, the minimum design point could fall far from 0. The mean squared error was calculated for one random splitting of the whole sample into training and validation samples in the proportion of $80\%$ to $20\%$, according to the formula $$\frac{1}{m} \sum_{j=1}^m \left(\check f(z_j) - X_j\right)^2,$$ where $m$ is the validation sample size, $z_j$ are the validation sample design points, $X_j$ are the noisy observations of the predicted function in the validation sample, and $\check f$ is the estimator computed from the training sample. The splittings into training and validation samples were identical for all models. For each of the kernel estimators, the parameter $h$ of the kernel $K_h$ was determined using cross-validation minimizing the mean squared error, where the set of observations was randomly partitioned into 10 folds. The same partitions were taken for all the kernel estimators. When calculating the mean squared error, the cross-validation for choosing $h$ was carried out on the training set. To calculate the maximum error, the cross-validation was performed on the whole sample. For the Nadaraya--Watson models as well as for estimators (\ref{ull1}) and (\ref{ull0}), the parameter $h$ was selected from 20 values located on the logarithmic grid from $\max\{0.0001, 1.1 \max_i\Delta z_{ni}\}$ to 0.9. For LOESS, the parameter {\it span} was chosen in the same way from 20 values located on the logarithmic grid from 0.0001 to 0.9. The simulations also included testing basic statistical learning algorithms: linear regression without regularization, a generalized additive model, and a random forest \cite{elstat-2009}. The training of the generalized additive model was carried out using the R library {\it mgcv}. Thin-plate splines were used, the optimal form of which was selected using generalized cross-validation. Random forest training was done using the R library {\it randomForest}. The number of trees was chosen to be 1000 based on the out-of-bag error plot for a random forest with five observations per leaf. The optimal number of observations per leaf was chosen using 10-fold cross-validation on a logarithmic grid of 20 values from 5 to 2000. In each example, 1000 realizations of different training and validation sets were performed, and the errors were calculated for each of them. In each realization, 5000 observations were generated. The results of the calculations are presented below in boxplots, where every box represents the median and the 1st and 3rd quartiles. The plots do not show the results of linear regression, since, in the examples, its results appeared to be significantly worse than those of the other models. The mean squared and maximum errors of estimator \eqref{ull1} were compared with the errors of the LOESS estimator by the paired Wilcoxon test.
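The bandwidth selection just described can be sketched as follows (a schematic R illustration consistent with the description above; {\tt cv\_bandwidth} is a hypothetical helper, and {\tt estimator} is any of the pointwise estimators defined earlier, e.g. {\tt ull}):
\begin{verbatim}
# 10-fold cross-validation for the bandwidth h on a logarithmic grid
# (schematic sketch; not the exact code used in the simulations).
cv_bandwidth <- function(z, y, estimator, n_grid = 20) {
  n  <- length(z)
  lo <- max(1e-4, 1.1 * max(diff(c(0, sort(z), 1))))   # 1.1 * max spacing
  hs <- exp(seq(log(lo), log(0.9), length.out = n_grid))
  folds <- sample(rep(1:10, length.out = n))
  cv_mse <- sapply(hs, function(h) {
    mean(sapply(1:10, function(k) {
      test <- folds == k
      pred <- sapply(z[test], estimator, z = z[!test], y = y[!test], h = h)
      mean((pred - y[test])^2, na.rm = TRUE)
    }))
  })
  hs[which.min(cv_mse)]
}
\end{verbatim}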
The summaries of the errors over the 1000 realizations of different training and validation sets are reported as median (1st quartile, 3rd quartile). The examples of this section were constructed so that the distribution of the design points is ``highly nonuniform''. Potentially, this could demonstrate the advantage of the new estimator \eqref{ull1} over known estimation approaches. \bigskip \begin{ex} {\rm Let us set the target function \begin{equation}\label{exam1} f(z) = (z-5)^2 + 10, \quad 0\le z\le 10 \end{equation} and let the noise be centered Gaussian with standard deviation $\sigma=2$ (Fig.~\ref{fig1}). In each realization, we draw 4500 independent design points uniformly distributed on the segment $z\in [0,5]$ and 500 independent design points uniformly distributed on the segment $z\in [5,10]$. \begin{figure}[!ht] \centering \includegraphics[width=4in]{example1.png} \caption{\footnotesize Example 2. Sample observations, target function, and two estimators.}\label{fig1} \end{figure} \begin{figure}[!ht] \centering \subfigure{\includegraphics[width=.45\textwidth]{plotunif_E1.png}} \quad \subfigure{\includegraphics[width=.45\textwidth]{plotmse_E1.png}} \caption{\footnotesize The maximum (left) and mean squared (right) errors in Example~2. For the mean squared error, the random forest model performed worse (10.97 (10.55, 11.39)) than the GAM model and the kernel estimators, so the results of the random forest model ``did not fit'' into the plot.} \label{fig2} \end{figure} The results are presented in Fig.~\ref{fig2}. For the maximum error, the advantage of the estimators of order 1 (LOESS and (\ref{ull1})) over the estimators of order 0 (the Nadaraya--Watson one and (\ref{ull0})) is noticeable, and the estimator (\ref{ull1}) turns out to be the best of all the considered estimators; in particular, estimator (\ref{ull1}) performs better than LOESS: 0.6357 (0.4993, 0.8224) vs. 0.6582 (0.5205, 0.8508), $p=0.019$. For the mean squared error, all models, except random forest and linear regression, show similar results. Moreover, the estimator (\ref{ull1}) turns out to be the best of the considered ones, although the difference between estimators (\ref{ull1}) and LOESS is not statistically significant: 4.017 (3.896, 4.139) vs. 4.030 (3.906, 4.154), $p=0.11$. } \end{ex} \bigskip \begin{ex} \begin{figure}[!ht] \centering \includegraphics[width=4in]{example2.png} \caption{\footnotesize Example 3. Sample observations, target function, and two estimators.}\label{fexam2} \end{figure} {\rm The piecewise linear target function is shown in Fig.~\ref{fexam2}. For the sake of simplicity of presentation, we do not give an explicit formula for this function. Here the centered Gaussian noise has the standard deviation $\sigma=2$. The design points are independent and identically distributed with density proportional to the function $(z-5)^2+2$, $0\le z \le 10$. \begin{figure}[!ht] \centering \subfigure{\includegraphics[width=.45\textwidth]{plotunif_E2.png}} \quad \subfigure{\includegraphics[width=.45\textwidth]{plotmse_E2.png}} \caption{\footnotesize The maximum (left) and mean squared (right) errors in Example~3. For the mean squared error, the random forest model performed worse (6.699 (6.412, 7.046)) than the GAM model and the kernel estimators, so the results of the random forest model ``did not fit'' into the plot.} \label{rexam2} \end{figure} The results are presented in Fig.~\ref{rexam2}.
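(Before turning to them, we note that a design with density proportional to $(z-5)^2+2$ can be drawn, for example, by rejection sampling; the sketch below is illustrative and not the generator used in the study.)
\begin{verbatim}
# Rejection sampling from the design density proportional to (z-5)^2 + 2
# on [0,10]; the density maximum 27 is attained at the endpoints.
rdesign <- function(n) {
  out <- numeric(0)
  while (length(out) < n) {
    cand <- runif(2 * n, 0, 10)
    keep <- runif(2 * n) < ((cand - 5)^2 + 2) / 27
    out  <- c(out, cand[keep])
  }
  sort(out[1:n])
}
\end{verbatim}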
The Nadaraya--Watson estimator appears to be the best model both for the maximum error and for the mean squared error. For both errors, estimator (\ref{ull1}) is better than LOESS ($p<0.0001$ for the maximum error, $p=0.0030$ for the mean squared error). } \end{ex} \bigskip \begin{ex} {\rm In this example, the design points are strongly dependent. We define them as follows: $z_i:=s(Ai)$, $i=1,..., n$, where $A$ is a positive number such that $A/\pi$ is irrational (we chose $A=0.0002$ in this example), $$s(t):=10 \Big|\sum\nolimits_{k=1}^{100} \eta_k \cos(tk)\Big| \quad\mbox{ with }\quad \eta_k:={k^{-1}\psi_k}\Big({\sum\nolimits_{j=1}^{100}j^{-1}\psi_j}\Big)^{-1},$$ and $\psi_j$ are independent random variables uniformly distributed on $[0,1]$ and independent of the noise. It was shown in \cite{2020} that the random sequence $s(Ai)$ is asymptotically everywhere dense on $[0,10]$ with probability~1. The target function is $$ f(z) = 0.2\ \big(((z-5)^2+25)\ \cos((z-5)^2/2)\ +\ 60\big), $$ shown in Fig.~\ref{fexam3}. \begin{figure}[!ht] \centering \includegraphics[width=4in]{example7.png} \caption{\footnotesize Example 4. Sample observations, target function, and two estimators.} \label{fexam3} \end{figure} \begin{figure}[!ht] \centering \subfigure{\includegraphics[width=.45\textwidth]{plotunif_Enoextra.png}} \quad \subfigure{\includegraphics[width=.45\textwidth]{plotmse_Enoextra.png}} \caption{\footnotesize The maximum (left) and mean squared (right) errors in Example~4. As before, for the mean squared error, the results of the random forest model (13.95 (11.69, 16.18)) are not shown in full on the graph. In addition, the outliers for the GAM, NW, ULC, and ULL estimators are ``cut off'' in this graph. } \label{rexam3} \end{figure} For the maximum error, estimator (\ref{ull1}) turns out to be the best of all the considered estimators. In particular, estimator (\ref{ull1}) is better than LOESS: 1.757 (1.491, 2.053) vs. 2.538 (2.216, 2.886), $p<0.0001$. The median mean squared error for estimator (\ref{ull1}) also turns out to be the smallest of those considered. In that sense, estimator (\ref{ull1}) is better than LOESS, but the difference is not significant: 4.166 (4.025, 4.751) vs. 4.219 (4.096, 4.338), $p=0.92$. } \end{ex} \begin{ex} {\rm In this example, the target function was the same as in Example 4. The difference from the previous example is that 50,000 design points were generated by the same technique, and then 5,000 of these 50,000 points were selected. This allowed us to fill the domain of $f$ with design elements ``more uniformly'' than in the previous example, while preserving the clusters of design points. \begin{figure}[!ht] \centering \subfigure{\includegraphics[width=.45\textwidth]{plotunif_Esample.png}} \quad \subfigure{\includegraphics[width=.45\textwidth]{plotmse_Esample.png}} \caption{\footnotesize The maximum (left) and mean squared (right) errors in Example~5. As before, for the mean squared error, the results of the random forest model are not shown in full on the graph. In addition, the outliers for the NW, ULC, and ULL estimators are ``cut off'' in this graph. } \label{rexam4} \end{figure} For the maximum error, estimator (\ref{ull1}) turns out to be the best of all the considered estimators. In particular, estimator (\ref{ull1}) is better than LOESS: 2.872 (2.369, 3.488) vs. 9.435 (5.719, 10.9), $p<0.0001$. For the mean squared error, the best estimator is LOESS. Estimator \eqref{ull1} is worse than LOESS: 5.108 (4.535, 6.597) vs.
4.378 (4.229, 4.541), $p<0.0001$, but it is better than the other estimators considered. } \end{ex} \section{Example of processing real medical data} \label{sec-real} In this section, we consider an application of the models considered in the previous section to the data collected in the multicenter study ``Epidemiology of cardiovascular diseases in the regions of the Russian Federation''. In that study, representative samples of unorganized male and female populations aged 25--64 years from 13 regions of the Russian Federation were examined. The study was approved by the Ethics Committees of the three federal centers: State Research Center for Preventive Medicine, Russian Cardiology Research and Production Complex, and Almazov Federal Medical Research Center. Each participant gave written informed consent for the study. The study was described in detail in \cite{2020-ShD}. One of the urgent problems of modern medicine is to study the relationship between heart rate (HR) and systolic arterial blood pressure (SBP), especially for low values of the observations. Therefore we chose SBP as the outcome and HR as the predictor. The association between these variables was previously estimated to be nonlinear \cite{2020-ShKK}. The general analysis included 6597 participants from 4 regions of the Russian Federation. The levels of SBP and HR differed statistically significantly between the selected regions. Thus, the hypothesis of the independence of the design points was violated. In this section, the maximum error cannot be calculated because the exact form of the relationship is unknown, so only the mean squared error is reported. The mean squared error was calculated for 1000 random partitions of the entire set of observations into training ($80\%$) and validation ($20\%$) samples. \begin{figure}[!ht] \centering \includegraphics[width=.45\textwidth]{plotmse_Realmed.png} \caption{\footnotesize Mean squared prediction error of the dependence of SBP on HR.} \label{rexam5} \end{figure} The results are presented in Fig.~\ref{rexam5}. Here the GAM estimator and the kernel estimators showed similar results, which were better than the results of both the linear regression and the random forest. The best estimator turned out to be \eqref{ull0}, although its difference from the Nadaraya--Watson estimator was not statistically significant: 220.2 (215.4, 225.9) vs. 220.4 (215.4, 225.8), $p=0.91$. The difference between estimator (\ref{ull1}) and LOESS was not significant either: 220.4 (215.4, 225.9) vs. 220.6 (215.6, 226.1), $p=0.52$. \section{Conclusion} In this paper, for a wide class of nonparametric regression models with a random design, universal uniformly consistent kernel estimators are proposed for an unknown random regression function of a scalar argument. These estimators belong to the class of local linear estimators. But in contrast to the vast majority of previously known results, traditional restrictions on the dependence of the design elements are not needed for the consistency of the new estimators. The design can be either fixed and not necessarily regular, or random and not necessarily consisting of independent or weakly dependent random variables. With regard to the design elements, the only condition that is required is the dense filling of the regression function domain with the design points. Explicit upper bounds are found for the rate of uniform convergence in probability of the new estimators to the unknown random regression function.
The only design characteristic explicitly included in these estimators is the maximum spacing statistic of the variational series of the design elements, and the only condition required is the convergence to zero in probability of the maximum spacing as the sample size tends to infinity. The advantage of this condition over the classical ones is that it is insensitive to the forms of dependence of the design observations. Note that this condition is, in fact, necessary, since the regression function can be reconstructed with some accuracy only when the design densely fills its domain. As a corollary of the main result, we obtain consistent estimators for the mean function of continuous random processes. In the simulation examples of Section \ref{sec-sim}, the new estimators were compared with known kernel estimators. In some of the examples, the new estimators proved to be the most accurate. In the application to real medical data considered in Section \ref{sec-real}, the accuracy of the new estimators was also comparable with that of the best of the known kernel estimators. \section{Proofs} In this section, we will prove the assertions stated in Sections 2--4. Denote \begin{equation}\label{beta0} \beta_{n,i}(t):=\frac{w_{n2}(t)-(t-z_{n:i})w_{n1}(t)}{w_{n0}(t)w_{n2}(t)-w_{n1}^2(t)}. \end{equation} Taking into account the relations $X_{ni}=f(z_{n:i})+\varepsilon_{ni}$,\, $i=1,\ldots,n, $ and the identity \begin{equation}\label{weight} \sum_{i=1}^n\beta_{n,i}(t)K_{h}(t-z_{n:i})\Delta z_{ni}\equiv 1, \end{equation} we obtain the representation \begin{equation}\label{30} \widehat f_{n,h}(t)=f(t)+f(t)I(\delta_n> c_*h)+\widehat r_{n,h}(f,t)+\widehat \nu_{n,h}(t), \end{equation} where \begin{eqnarray*} \widehat r_{n,h}(f,t)=I(\delta_n\le c_*h)\sum_{i=1}^n\beta_{n,i}(t)(f(z_{n:i})-f(t))K_{h}(t-z_{n:i})\Delta z_{ni}, \\ \widehat \nu_{n,h}(t)=I(\delta_n\le c_*h)\sum_{i=1}^n\beta_{n,i}(t)K_{h}(t-z_{n:i})\Delta z_{ni}\varepsilon_{ni}. \end{eqnarray*} We emphasize that, in view of the properties of the density $ K_h(\cdot) $, the domain of summation in the last two sums as well as in all sums defining the quantities $ w_{nj}(t) $ from (\ref{2}) coincides with the set $ A_{n,h}(t) = \{i: \, | t-z_{n: i}| \le h,\, 1 \le i \le n \} $, which is a crucial point for the further analysis. \begin{lem}\label{lem-2} For $h<1/2$, the following equalities are valid$:$ \begin{eqnarray} \label{17} \inf\limits_{t\in [0,1]}(w_{0}(t)w_{2}(t)-w_{1}^2(t))= \frac{1}{4}(\kappa_2-\kappa^2_1)h^{2},\quad \inf\limits_{t\in [0,1]}w_{0}(t)=1/2,\\ \label{16-} \sup\limits_{t\in [0,1]}|w_{j}(t)|=\left(\frac{1}{2}\right)^{j-2[j/2]}\kappa_jh^j, \quad j=0,1,2,3. \end{eqnarray} Moreover, on the set of elementary events such that $\delta_n\leq c_*h$, the following inequalities hold$:$ \begin{eqnarray} \label{16} \sup\limits_{t\in [0,1]}|w_{nj}(t)|\le 3Lh^j,\quad \sup\limits_{t\in [0,1]}|w_{nj}(t)-w_{j}(t)|\le 12L\delta_nh^{j-1},\quad j=0,1,2,3,\\ \label{18} \inf\limits_{t\in [0,1]}(w_{n0}(t)w_{n2}(t)-w_{n1}^2(t))\ge \frac{1}{8}(\kappa_2-\kappa^2_1)h^{2},\quad \inf\limits_{t\in [0,1]}w_{n0}(t)\geq1/4,\\ \label{18+1} \forall t_1, t_2\in [0,1]\quad |w_{nj}(t_2)-w_{nj}(t_1)|\le 18Lh^{j-1}|t_2-t_1|,\quad j=0,1,2. \end{eqnarray} \end{lem} {\it Proof}. Let us prove (\ref{17}) and (\ref{16-}). Note that, due to the Cauchy--Bunyakovsky--Schwarz inequality, $ w_{0}(t)w_{2}(t) -w_{1}^2(t) \ge 0 $ for all $ t \in [0,1] $ and this difference is continuous in $ t $. Consider first the simplest case where $ h \le t \le 1-h $.
For such $ t $, after changing the integration variable in the definition (\ref{int}) of the quantities $w_{j}(t) $, we have \begin{equation}\label{identW} w_{j}(t)=\int\limits_{t-h}^{t+h}(t-z)^jK_{h}(t-z)dz=h^{j}\int\limits_{-1}^{1}v^jK(v)dv, \end{equation} i.e., $w_{0}(t)\equiv 1$, $w_{1}(t)\equiv 0$, and $w_{2}(t)\equiv h^{2}\kappa_2$. In other words, on the segment $[h,1-h]$, the following identity is valid$:$ \begin{equation}\label{ident} w_{0}(t)w_{2}(t)-w_{1}^2(t)\equiv h^{2}\kappa_2. \end{equation} We now consider the case $t=\alpha h$ for $\alpha\in [0,1]$. Then \begin{equation}\label{zero} w_{j}(\alpha h)=\int\limits_{0}^{(1+\alpha)h}(\alpha h-z)^jK_{h}(\alpha h-z)dz=h^{j}\kappa_j(\alpha). \end{equation} Next, by (\ref{zero}), we obtain \begin{multline*} \frac{d}{d\alpha}h^{-2}(w_{0}(\alpha h)w_{2}(\alpha h)-w_{1}^2(\alpha h))=\frac{d}{d\alpha}(\kappa_0(\alpha)\kappa_2(\alpha)-\kappa^2_1(\alpha))\\ =K(\alpha)\left(\alpha^2\int\limits_{-1}^{\alpha}K(v)dv+ \int\limits_{-1}^{\alpha}v^2K(v)dv-2\alpha\int\limits_{-1}^{\alpha}vK(v)dv\right)\ge 0 \end{multline*} in view of the relation $\int\nolimits_{-1}^{\alpha}vK(v)dv\le 0$, which holds since $K(v)$ is an even function. The symmetric case $t=1-\alpha h$, $\alpha\in [0,1]$, is studied similarly. From here and (\ref{ident}) we obtain the first relation in (\ref{17}): \begin{equation*} \inf_{t\in [0,1]}\{w_{0}(t)w_{2}(t)-w_{1}^2(t)\}=w_{0}(0)w_{2}(0)-w^2_{1}(0)=\frac{1}{4}h^{2}(\kappa_2-\kappa^2_1). \end{equation*} The second relation in (\ref{17}) directly follows from (\ref{zero}). Moreover, the above-mentioned arguments and the representations (\ref{identW}) and (\ref{zero}) imply (\ref{16-}). Further, the first estimate in (\ref{16}) is obvious by the above remark about the domain of summation in the definition of the functions $ w_{nj}(t) $ and the relations \begin{equation}\label{error-} \sup\limits_{s\in[0,1]}K(s)\leq L,\quad \sum\limits_{i\in A_{n,h}(t)}\Delta z_{ni}\leq 2h+\delta_n\leq 3h. \end{equation} The second estimate in (\ref{16}) immediately follows from the well-known bound for the error of approximating integrals of smooth functions on a finite closed interval by Riemann integral sums: \begin{equation}\label{error} \Big|\sum_{i\in A_{n,h}(t)}g_{t,j}(z_{n:i})\Delta z_{ni}-\int\limits_{z\in [0,1]: |t-z|\leq h}g_{t,j}(z)dz\Big|\le(2h+\delta_n)\delta_n L_{g_{t,j}}, \end{equation} where the functions $g_{t,j}(z)=(t-z)^jK_{h}(t-z)$, $j=0,1,2,3$, are defined for all $z\in [0\vee t-h,1\wedge t+h]$, and $L_{g_{t,j}}$ is the Lipschitz constant of the function $g_{t,j}(z)$. It is easy to verify that $\sup_{t\in[0,1]}L_{g_{t,j}}\leq 4Lh^{j-2}$ for all $h\in (0,1/2)$ and $j=0,1,2,3$. So, on the set of elementary events such that $ \{\delta_n\le c_*h \} $ (recall that $ c_* <1 $), the right-hand side in (\ref{error}) can be replaced with $12L\delta_nh^{j-1}$. In addition, taking (\ref{16-}) and (\ref{16}) into account, we obtain \begin{multline*} |w_{n0}(t)w_{n2}(t)-w_{0}(t)w_{2}(t)|\\\le w_{n0}(t)|w_{n2}(t)-w_{2}(t)|+w_{2}(t)|w_{n0}(t)-w_{0}(t)| \le 9L\delta_n(3L+\kappa_2)h, \end{multline*} $$|w^2_{n1}(t)-w^2_{1}(t)|\le |w_{n1}(t)-w_{1}(t)|(|w_{n1}(t)|+|w_{1}(t)|)\le 9L\delta_n(3L+\kappa_1/2)h. $$ Hence follows the estimate \begin{equation}\label{error2} |w_{n0}(t)w_{n2}(t)-w_{n1}^2(t)-w_{0}(t)w_{2}(t)+w_{1}^2(t)|\le 9L\delta_n(6L+\kappa_2+\kappa_1/2)h. \end{equation} The inequalities in (\ref{18}) follow from (\ref{17}), (\ref{error2}), and the definition of the constant $ c_* $.
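Spelling this step out: on the set $\{\delta_n\le c_*h\}$, the relations (\ref{17}) and (\ref{error2}) give
\begin{equation*}
w_{n0}(t)w_{n2}(t)-w_{n1}^2(t)\ge \frac{1}{4}(\kappa_2-\kappa^2_1)h^{2}-9Lc_*(6L+\kappa_2+\kappa_1/2)h^{2},
\end{equation*}
and the right-hand side is at least $\frac{1}{8}(\kappa_2-\kappa^2_1)h^{2}$ whenever $c_*\le (\kappa_2-\kappa^2_1)/(72L(6L+\kappa_2+\kappa_1/2))$, which we assume is ensured by the definition of $c_*$ given earlier in the paper; the bound $\inf_t w_{n0}(t)\ge 1/4$ follows in the same way from the second relations in (\ref{17}) and (\ref{16}).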
To prove (\ref{18+1}), note that $$w_{nj}(t_2)-w_{nj}(t_1)=\sum\limits_{i=1}^n\left\{(t_2-z_{n:i})^jK_{h}(t_2-z_{n:i})-(t_1-z_{n:i})^jK_{h}(t_1-z_{n:i})\right\}\Delta z_{ni} $$ $$=\sum\limits_{i\in A_{n,h}(t_1)\cup A_{n,h}(t_2)}\left\{(t_2-z_{n:i})^jK_{h}(t_2-z_{n:i})-(t_1-z_{n:i})^jK_{h}(t_1-z_{n:i})\right\}\Delta z_{ni}, $$ where we can use the estimates $|(t_2-z_{n:i})^j-(t_1-z_{n:i})^j|\le 2h^{j-1}|t_2-t_1|$ for $j=0,1,2$, $|t_k-z_{n:i}|\le h$ for $k=1,2$, and also the inequalities $$|K_{h}(t_2-z_{n:i})-K_{h}(t_1-z_{n:i})|\le Lh^{-2}|t_2-t_1|,$$ \begin{equation}\label{27} \sum\limits_{i\in A_{n,h}(t_1)\cup A_{n,h}(t_2)}\Delta z_{ni}\le 4h+2\delta_n\le 6h. \end{equation} Thus, Lemma \ref{lem-2} is proved. \hfill$\square$ \begin{lem}\label{lem-1} For any positive $h<1/2$, the following estimate is valid$:$ \begin{eqnarray*} \sup_{ t\in [0,1]}|\widehat r_{n,h}(f,t)|\le C^*_1\omega_f(h),\quad\mbox{with}\quad C_1^*=C_1\frac{L^2}{\kappa_2-\kappa_1^2}. \end{eqnarray*} \end{lem} {\it Proof}. Without loss of generality, the required estimate can be derived on the set of elementary events determined by the condition $ \delta_n \leq c_*h $. Then the assertion of the lemma follows from the inequality \begin{multline}\label{rbound} |\widehat r_{n,h}(f,t)|\le \frac{\omega_f(h)w_{n2}(t)}{w_{n0}(t)w_{n2}(t)-w_{n1}^2(t)}\sum_{i\in A_{n,h}(t)}K_{h}(t-z_{n:i})\Delta z_{ni}\\ +\frac{\omega_f(h)|w_{n1}(t)|}{w_{n0}(t)w_{n2}(t)-w_{n1}^2(t)}\sum_{i\in A_{n,h}(t)}|t-z_{n:i}|K_{h}(t-z_{n:i})\Delta z_{ni}, \end{multline} the estimates from (\ref{error-}), and Lemma \ref{lem-2}. \hfill$\square$ \begin{lem}\label{lem-3} For any $y>0$ and $h<1/2$, on the set of elementary events such that $\delta_n\leq c_*h$, the following estimate is valid$:$ \begin{eqnarray*} {\mathbb P}_{{\cal F}_n}\left (\sup\limits_{t\in [0,1]}|\widehat \nu_{n,h}(t)|>y\right )\le C_2^*\sigma^2\frac{\delta_n}{h^2y^2},\quad\mbox{with}\quad C_2^*=C_2\frac{L^4}{(\kappa_2-\kappa_1^2)^2}, \end{eqnarray*} where the symbol ${\mathbb P}_{{\cal F}_n}$ denotes the conditional probability given the $\sigma$-field ${\cal F}_n$. \end{lem} {\it Proof}. Put \begin{eqnarray}\label{25} \mu_{n,h}(t) = \sum\limits_{i\in A_{n,h}(t)}h^{-2}\alpha_{n,i}(t)K_{h}\left(t-z_{n:i}\right)\Delta z_{ni}\varepsilon_{ni}, \end{eqnarray} where $\alpha_{n,i}(t)=w_{n2}(t)-(t-z_{n:i})w_{n1}(t)$, and note that Lemma~\ref{lem-2} and the conditions of Lemma \ref{lem-3} imply, first, that $h^{-2}|\alpha_{n,i}(t)|\le 6L$ whenever $i\in A_{n,h}(t)$ and, second, that \begin{eqnarray}\label{19} \begin{split} |\widehat \nu_{n,h}(t)| \le 8(\kappa_2-\kappa_1^2)^{-1}|\mu_{n,h}(t)|. \end{split} \end{eqnarray} The distribution tail of the random variable $ \sup_{t \in [0,1]} |\mu_{n,h}(t)| $ will be estimated by the so-called chaining technique proposed by A.N. Kolmogorov for estimating the distribution tail of the supremum norm of a stochastic process with almost surely continuous trajectories (see \cite{1956-Ce}).
First of all, note that the set $ [0,1] $ under the supremum sign can be replaced by the set of dyadic rational points $${\cal R}=\{j/2^k;\, j=1,\ldots, 2^k-1; k\ge 1\}.$$ Thus, \begin{eqnarray*}\sup_{t\in [0,1]}|\mu_{n,h}(t)|= \sup_{t\in {\cal R} }|\mu_{n,h}(t)| \le \max_{j = 1,...,2^m - 1}|\mu_{n,h}(j 2^{-m})|\\ + \sum_{k=m+1}^\infty \max_{j=1,...,2^k-2} \big|\mu_{n,h}((j+1) 2^{-k})-\mu_{n,h}(j 2^{-k})\big|, \end{eqnarray*} where the natural number $m$ is defined by the equality $m = \lceil|\log_2 h|\rceil$ (here $\lceil a \rceil$ is the minimal natural number greater than or equal to $a$). One has \begin{eqnarray} {\mathbb P}_{{\cal F}_n}\Big(\sup_{t\in [0,1]}|\mu_{n,h}(t)|>y\Big) \le{\mathbb P}_{{\cal F}_n}\Big(\max_{j = 1,...,2^m - 1}|\mu_{n,h}(j 2^{-m})| >a_m y\Big) \nonumber\\ + \sum_{k=m+1}^\infty {\mathbb P}_{{\cal F}_n}\Big(\max_{j=1,...,2^k-2} \big|\mu_{n,h}((j+1) 2^{-k})-\mu_{n,h}(j 2^{-k})\big| >a_k y\Big) \nonumber \\ \le\sum_{j =1}^{2^m - 1} {\mathbb P}_{{\cal F}_n}(|\mu_{n,h}(j 2^{-m})| >a_m y) \nonumber\\ + \sum_{k=m+1}^\infty \,\sum_{j=1}^{2^k-2}{\mathbb P}_{{\cal F}_n}\Big( \big|\mu_{n,h}((j+1) 2^{-k})-\mu_{n,h}(j 2^{-k})\big| >a_k y\Big), \label{20} \end{eqnarray} where $a_m,a_{m+1},...$ is a sequence of positive numbers such that $a_m+a_{m+1}+...=1$. Let us now estimate each of the terms on the right-hand side of (\ref{20}). Using Markov's inequality for the second moment and the estimates (\ref{error-}), we obtain \begin{eqnarray} {\mathbb P}_{{\cal F}_n}(|\mu_{n,h}(j 2^{-m})|>a_m y) \le \frac{(6L)^2}{(a_m y)^{2}} \sum\limits_{i\in A_{n,h}(j2^{-m})} K^2_{h}(j2^{-m} -z_{n:i})(\Delta z_{ni})^2\sigma^2\nonumber\\ \le (6L)^2\sigma^2 (a_m y)^{-2} \delta_n (2h+\delta_n)h^{-2} \le C_3L^2\sigma^2 (a_m y)^{-2} \delta_n h^{-1}. \label{21} \end{eqnarray} Further, \begin{eqnarray} {\mathbb P}_{{\cal F}_n}\Big( \big|\mu_{n,h}((j+1) 2^{-k})-\mu_{n,h}(j 2^{-k})\big| >a_k y\Big)\le (a_k y)^{-2}h^{-4}\nonumber\\ \times\sum\limits_{i=1}^n {\mathbb E}_{{\cal F}_n}\Big( \big(\alpha_{n,i}((j+1)2^{-k})K_{h}((j+1) 2^{-k}-z_{n:i})-\alpha_{n,i}(j2^{-k})K_{h}(j 2^{-k}-z_{n:i})\big) \Delta z_{ni} \varepsilon_{ni}\Big)^2\nonumber\\ \le\sigma^2(a_k y)^{-2}h^{-4}\nonumber\\ \times\sum\limits_{i=1}^n \Big( \alpha_{n,i}((j+1)2^{-k})K_{h}((j+1) 2^{-k}-z_{n:i})-\alpha_{n,i}(j2^{-k})K_{h}(j 2^{-k}-z_{n:i})\Big)^2 (\Delta z_{ni})^2 \nonumber\\ \le C_4\sigma^2 L^4(a_k y)^{-2} 2^{-2k} \delta_n (4h+2\delta_n)h^{-4}\leq C_5\sigma^2 L^4(a_k y)^{-2} 2^{-2k} \delta_nh^{-3}. \qquad \label{22} \end{eqnarray} Here we took into account that the summation range in (\ref{22}) coincides with the set $$\left\{i:\,i\in A_{n,h}((j+1) 2^{-k})\cup A_{n,h}(j2^{-k})\right\},$$ and hence, due to the relation $|(j+1)/2^{k}-j/2^k|=2^{-k}\le h$ for $k>m$, the estimate (\ref{27}) is valid for $t_1=j2^{-k}$ and $t_2=(j+1)2^{-k}$. Moreover, we used the estimates $$ \sup_tK_{h}(t)\le Lh^{-1},\quad | K_{h}(u)-K_{h}(v)|\le Lh^{-2}|u-v|, $$ and took into account the following inequalities in the above range of the parameters (see Lemma \ref{lem-2}): \begin{eqnarray*} |\alpha_{n,i}((j+1)2^{-k})-\alpha_{n,i}(j2^{-k})|\le C_{6}Lh2^{-k},\qquad |\alpha_{n,i}(j2^{-k})|\le C_7Lh^{2}, \\ |\alpha_{n,i}((j+1)2^{-k})K_{h}((j+1) 2^{-k}-z_{n:i})-\alpha_{n,i}(j2^{-k})K_{h}(j 2^{-k}-z_{n:i})|\le C_{8}L^22^{-k}.
\end{eqnarray*} We now obtain from (\ref{20})--(\ref{22}) that $${\mathbb P}_{{\cal F}_n}\left(\sup_{t\in [0,1]}|\mu_{n,h}(t)|>y\right) \le C_{9}y^{-2} \sigma^2 L^4 \delta_nh^{-1} \left( 2^m a_m^{-2} +h^{-2} \sum_{k=m+1}^\infty 2^{-k+1} a_k^{-2} \right). $$ The optimal sequence $ a_k $ minimizing the right-hand side of this inequality is $ a_m = c2^{m/3}$ and $ a_k = ch^{-2/3} 2^{(-k + 1)/3} $ for $ k = m + 1, m + 2, ... $, where $ c $ is defined by the relation $ a_m + a_{m + 1} + ... = 1 $. For the indicated sequence, we conclude that \begin{eqnarray*} {\mathbb P}_{{\cal F}_n}\left(\sup_{t\in [0,1]}|\mu_{n,h}(t)|>y\right) \le C_{10} y^{-2} \sigma^2L^4 \delta_n h^{-1} \left( 2^{m/3} +h^{-2/3} 2^{-m/3} \big(2+2^{1/3}+2^{2/3}\big) \right)^3\le C_{11}y^{-2} \sigma^2 L^4 \delta_nh^{-2}. \end{eqnarray*} The assertion of the lemma follows from (\ref{19}). $\hfill\Box$ {\it Proof of Theorem~$\ref{theor-1}$}. The assertion follows from Lemmas \ref{lem-1} and \ref{lem-3} if we set $$\zeta_n(h)=\sup\limits_{t\in[0,1]}|\widehat \nu_{n,h}(t)|+\sup\limits_{t\in[0,1]}|f(t)|I(\delta_n>c_*h)$$ and take into account the relation $${\mathbb P}\big( \zeta_n(h)>y,\, \delta_n\le c_*h\big)= {\mathbb E} I\big( \delta_n\le c_*h\big){\mathbb P}_{{\cal F}_n}\big( \zeta_n(h)>y\big),$$ as required. $\hfill\Box$ To prove Theorem \ref{theor-2}, we need the two auxiliary assertions below. \begin{lem}\label{lem-4} If condition {\rm(\ref{51})} is fulfilled then $\lim\nolimits_{\varepsilon\to 0}{\mathbb E}\omega_f(\varepsilon)=0$ and, for independent copies of the a.s. continuous random process $f(t)$, the following law of large numbers is valid$:$ as $N\to \infty$, \begin{equation}\label{55} \sup\limits_{t\in [0,1]}\big|\overline{f}_{N}(t)-{\mathbb E}f(t)\big|\stackrel{p}\to 0, \quad \mbox{where}\quad \overline f_{N}(t)=N^{-1}\sum\limits_{j=1}^Nf_j(t). \end{equation} \end{lem} {\it Proof}. The first assertion of the lemma follows from (\ref{51}) and Lebesgue's dominated convergence theorem. We put $$\omega_{\overline f_N}(\varepsilon)=\sup\limits_{t,s:|t-s|\leq\varepsilon}\big|\overline{f}_{N}(t)-\overline{f}_{N}(s)\big|, \quad \omega_{\mathbb{E} f}(\varepsilon)=\sup\limits_{t,s:|t-s|\leq\varepsilon}\big|\mathbb{E}{f}(t)-\mathbb{E}{f}(s)\big|.$$ For any fixed natural $k$, one has \begin{eqnarray} \sup\limits_{t\in [0,1]}\big|\overline{f}_{N}(t)-{\mathbb E}f(t)\big|\leq \max\limits_{0\leq i\leq k}\left|\overline{f}_{N}\big(i/k\big)-{\mathbb E}f\big(i/k\big)\right|+\nonumber\\ \max\limits_{1\leq i\leq k}\sup\limits_{(i-1)/k\leq t\leq i/k}\left|\overline{f}_{N}(t)-\overline{f}_{N}\big(i/k\big)\right|+ \max\limits_{1\leq i\leq k}\sup\limits_{(i-1)/k\leq t\leq i/k}\left|\mathbb{E}{f}(t)-\mathbb{E}{f}\big(i/k\big)\right|\leq\nonumber\\ \leq \max\limits_{0\leq i\leq k}\left|\overline{f}_{N}\big(i/k\big)-{\mathbb E}f\big(i/k\big)\right|+\omega_{\overline f_N}\left({1}/{k}\right)+\omega_{\mathbb{E} f}\left({1}/{k}\right). \label{56}
\end{eqnarray} Put $\omega_{ f_j}(\varepsilon)=\sup\limits_{t,s:|t-s|\leq\varepsilon}\big|{f}_j(t)-{f}_j(s)\big|$ and note that $\omega_{\mathbb{E}f}(\varepsilon)\leq {\mathbb E}\omega_f(\varepsilon)$ and, as $N\to\infty$, $$\overline{f}_{N}\big(i/k\big)\stackrel{p}{\to}{\mathbb E}f\big(i/k\big), \quad \omega_{\overline f_N}(\varepsilon)\leq\frac{1}{N}\sum\limits_{j=1}^N\omega_{f_j}(\varepsilon)\stackrel{p}{\to}{\mathbb E}\omega_f(\varepsilon).$$ Therefore, the right-hand side in (\ref{56}) does not exceed $ 2{\mathbb E} \omega_f\left(1/k\right) + o_p(1) $ and, by the arbitrariness of $ k $ and the first statement of the lemma, the relation (\ref{55}) is proved. \hfill$\square$ \begin{lem}\label{lem-5} Under the conditions of Theorem~{\rm \ref{theor-2}}, the following limit relation holds$:$ \begin{equation}\label{57} \frac{1}{N}\sum_{j=1}^N\Delta_{n,h,j}\stackrel{p}\to 0, \quad \mbox{where}\quad \Delta_{n,h,j}=\sup_{t\in [0,1]}| \widehat f_{n,h,j}(t)-f_j(t)|. \end{equation} \end{lem} {\it Proof}. Let the sequences $h=h_n\to 0$ and $N=N_n\to\infty$ be such that condition (\ref{52}) is fulfilled. Introduce the event $B_{n,h,j}=\{\delta_{n,j}\le c_*h\}$, where $j=1,\ldots,N$. For any positive $\nu$ one has \begin{equation}\label{58} {\mathbb P}\left \{\frac{1}{N}\sum_{j=1}^N\Delta_{n,h,j}>\nu\right\}\le {\mathbb P}\left \{\frac{1}{N} \sum_{j=1}^N\Delta_{n,h,j}I(B_{n,h,j})>\nu\right\}+N{\mathbb P}(\overline{B_{n,h,1}}). \end{equation} Next, from Theorem~\ref{theor-1} we obtain \begin{eqnarray*}{\mathbb E}\Delta_{n,h,j}I(B_{n,h,j})\le C_1^*{\mathbb E}\omega_f(h)+ \int\limits_0^{\infty}{\mathbb P}\left( \zeta_n(h)>y,\, \delta_n\le c_*h\right)dy\le\\ \le C_1^*{\mathbb E}\omega_f(h)+h^{-1}({\mathbb E}\delta_n)^{1/2}+ \int\limits_{h^{-1}({\mathbb E}\delta_n)^{1/2}}^{\infty}{\mathbb P}\left( \zeta_n(h)>y,\, \delta_n\le c_*h\right)dy \le \\ \le C_1^*{\mathbb E}\omega_f(h)+(1+C_2^*\sigma^2)h^{-1}({\mathbb E}\delta_n)^{1/2}. \end{eqnarray*} To complete the proof of the lemma, it remains to apply Markov's inequality to the first probability on the right-hand side of (\ref{58}) and to use the last estimate, the limit relations (\ref{52}), and the first statement of Lemma \ref{lem-4}. \hfill$\square$ {\it The proof} of Theorem \ref{theor-2} follows from Lemmas \ref{lem-4} and \ref{lem-5}. \hfill$\square$ {\it Proof of Proposition }\ref{predl-4}. For the estimator $f_{n,h}^*(t)$ defined in (\ref{est4}), we need the following representation: \begin{equation}\label{30+} f_{n,h}^*(t)=f(t)+ r_{n,h}^*(f,t)+\nu_{n,h}^*(t), \end{equation} where \begin{eqnarray*} r_{n,h}^*(f,t)=w_{n0}^{-1}(t)\sum_{i=1}^n(f(z_{n:i})-f(t))K_{h}(t-z_{n:i})\Delta z_{ni}, \\ \nu_{n,h}^*(t)=w_{n0}^{-1}(t)\sum_{i=1}^nK_{h}(t-z_{n:i})\Delta z_{ni}\varepsilon_{ni}. \end{eqnarray*} In view of the representations (\ref{30}) and (\ref{30+}), we obtain \begin{eqnarray} {\rm Bias} \widehat f_{n,h}(t)={\mathbb E}\widehat r_{n,h}(f,t)+f(t){\mathbb P}(\delta_n>c_*h)\nonumber\\ =\sum_{i=1}^n{\mathbb E}\{I(\delta_n\le c_*h)\beta_{n,i}(t)(f(z_{n:i})-f(t))K_{h}(t-z_{n:i})\Delta z_{ni}\}+f(t){\mathbb P}(\delta_n>c_*h), \label{newbias}\\ {\rm Bias} f_{n,h}^*(t)={\mathbb E} r_{n,h}^*(f,t) \nonumber\\ =\sum_{i=1}^n{\mathbb E}\{I(\delta_n\le c_*h)w_{n0}^{-1}(t)(f(z_{n:i})-f(t))K_{h}(t-z_{n:i})\Delta z_{ni}\}+\tau_n, \label{newbias+} \end{eqnarray} where $|\tau_n|\leq\omega_f(h){\mathbb P}(\delta_n>c_*h)$.
Further, it follows from Lemma \ref{lem-2} that, under the condition $\delta_n\le c_*h$, for any point $t\in [h, 1-h]$ one has \begin{equation}\label{beta} \sup\limits_{i\in A_{n,h}(t)}|\beta_{n,i}(t)-w_{n0}^{-1}(t)|\le C_5^*\delta_nh^{-1}. \end{equation} When deriving the relation (\ref{beta}), we also took into account that $w_{0}(t)=1$ and $w_{1}(t)= 0$ for all $t\in [h,1-h]$ (see the proof of Lemma \ref{lem-2}). Now, taking into account the relations (\ref{error-}), (\ref{newbias}), (\ref{newbias+}), (\ref{beta}), and Lemma \ref{lem-2}, it is easy to derive the first assertion of the proposition since \begin{multline}\label{difbias} |{\rm Bias} \widehat f_{n,h}(t)-{\rm Bias} f^*_{n,h}(t)| \le C_5^*h^{-1}\omega_f(h){\mathbb E}\left\{\delta_nI(\delta_n\le c_*h)\sum_{i=1}^nK_{h}(t-z_{n:i})\Delta z_{ni}\right\}\\ +(|f(t)|+\omega_f(h)){\mathbb P}(\delta_n>c_*h)\le C_6^*\omega_f(h)h^{-1}{{\mathbb E}\delta_n}+(|f(t)|+\omega_f(h)){\mathbb P}(\delta_n>c_*h). \end{multline} To prove the second assertion, first of all, note that \begin{multline*} {\mathbb Var}\widehat f_{n,h}(t)={\mathbb Var}\widehat \nu_{n,h}(t)+{\mathbb Var}\left(\widehat r_{n,h}(f,t)+f(t)I(\delta_n>c_*h)\right)\\ ={\mathbb Var}\widehat \nu_{n,h}(t)+{\mathbb Var}\widehat r_{n,h}(f,t)+f^2(t){\mathbb P}(\delta_n>c_*h){\mathbb P}(\delta_n\le c_*h), \end{multline*} $${\mathbb Var}f^*_{n,h}(t)={\mathbb Var}\nu^*_{n,h}(t)+{\mathbb Var}r^*_{n,h}(f,t). $$ Thus, we need to compare the two variances on the right-hand side of the first equality with the corresponding variances of the second one. Using (\ref{error-}) and (\ref{beta}), we get \begin{multline*} |{\mathbb Var}\widehat \nu_{n,h}(t)-{\mathbb Var}\nu^*_{n,h}(t)|\le\sigma^2\left|{\mathbb E}\sum_{i=1}^nI(\delta_n\le c_*h)(\beta^2_{n,i}(t)-w_{n0}^{-2}(t))K^2_{h}(t-z_{n:i})(\Delta z_{ni})^2\right|\\ +\sigma^2{\mathbb P}(\delta_n>c_*h) \le C_7^*\sigma^2h^{-1}{\mathbb E}\left\{\delta_nI(\delta_n\le c_*h)\sum_{i=1}^nhK^2_{h}(t-z_{n:i})\Delta z_{ni}\right\}\\ + \sigma^2{\mathbb P}(\delta_n>c_*h) \le C_8^*\sigma^2h^{-1}{\mathbb E}\delta_n; \end{multline*} when deriving this estimate, we took into account that $$\sum_{i=1}^nw_{n0}^{-2}(t)K^2_{h}(t-z_{n:i})(\Delta z_{ni})^2\le 1. $$ To estimate the difference $|{\mathbb Var}\widehat r_{n,h}(t)-{\mathbb Var}r^*_{n,h}(t)|$, note that the bound $C_9^*\overline f^2h^{-1}{\mathbb E}\delta_n$ for the modulus of the difference between the squared expectations of the random variables $ \widehat r_{n,h}(f,t) $ and $ r^*_{n,h}(f,t) $ is essentially contained in (\ref{rbound}) and (\ref{difbias}). Estimation of the difference of the second moments of the specified random variables is carried out similarly, using (\ref{error-}), (\ref{beta}), and (\ref{difbias}): \begin{eqnarray*}|{\mathbb E}\widehat r^2_{n,h}(f,t)- {\mathbb E}r^{*2}_{n,h}(f,t)|\le {\mathbb E}|\widehat r_{n,h}(f,t)-r^{*}_{n,h}(f,t)| |\widehat r_{n,h}(f,t)+r^{*}_{n,h}(f,t)|\le C_{10}^*\overline f^2h^{-1}{{\mathbb E}\delta_n}, \end{eqnarray*} which completes the proof. \hfill$\square$ {\it Proof of Proposition} \ref{predl-5}. From the definition of $\beta_{n,i}(t)$ in (\ref{beta0}) it follows that, for any $t\in [0,1]$, \begin{eqnarray*}\sum_{i=1}^n\beta_{n,i}(t)(z_{n:i}-t)K_{h}(t-z_{n:i})\Delta z_{ni}=0,\\ \sum_{i=1}^n\beta_{n,i}(t)(z_{n:i}-t)^2K_{h}(t-z_{n:i})\Delta z_{ni}=D^{-1}_n(t)(w^2_{n2}(t)-w_{n3}(t)w_{n1}(t))=:B_n(t), \end{eqnarray*} where $D_n(t):=w_{n0}(t)w_{n2}(t)-w^2_{n1}(t)$.
Expanding the function $ f(\cdot) $ by the Taylor formula in a neighborhood of the point $ t $ (up to the second derivative), from the above identities we obtain, using (\ref{beta0}), (\ref{newbias}), and Lemma \ref{lem-2}, that for any point $ t $ we have \begin{multline}\label{newbias1} {\rm Bias} \widehat f_{n,h}(t) ={\mathbb E}I(\delta_n\le c_*h)\sum_{i=1}^n\{\beta_{n,i}(t)(f(z_{n:i})-f(t))K_{h}(t-z_{n:i})\Delta z_{ni}\}+f(t){\mathbb P}(\delta_n>c_*h)\\ =\frac{f''(t)}{2}{\mathbb E}I(\delta_n\le c_*h)B_{n}(t)+f(t){\mathbb P}(\delta_n>c_*h)+o(h^2)\\ =\frac{f''(t)}{2}B_{0}(t)+O({\mathbb E}\delta_n/h)+o(h^2); \end{multline} moreover, the $ O $- and $ o $-symbols on the right-hand side of (\ref{newbias1}) are uniform in $ t $. Note that $ B_0(t) = O(h^2) $ holds for any $t$. Next, since for $j=1,2$ we have $|w_{j}(t)|w_{0}^{-1}(t)\le h^j$ and $|w_{nj}(t)|w_{n0}^{-1}(t)\le h^j$ for all natural $n$, the following asymptotic representation holds: \begin{multline} {\rm Bias} f^*_{n,h}(t) =\sum_{i=1}^n{\mathbb E}w_{n0}^{-1}(t)(f(z_{n:i})-f(t))K_{h}(t-z_{n:i})\Delta z_{ni}\\ =-f'(t){\mathbb E}\frac{w_{n1}(t)}{w_{n0}(t)}I(\delta_n\le c_* h)+\frac{f''(t)}{2}{\mathbb E}\frac{w_{n2}(t)}{w_{n0}(t)} I(\delta_n\le c_* h)+ O(h{\mathbb P}(\delta_n> c_* h))+o(h^2)\\ =-f'(t)\frac{w_{1}(t)}{w_{0}(t)}+\frac{f''(t)}{2}\frac{w_{2}(t)}{w_{0}(t)} + O({\mathbb E}\delta_n)+o(h^2). \end{multline} \hfill$\square$ {\it Proof of Corollary $\ref{equiv}$}. Without loss of generality, we can assume that $ t \in [h,1-h] $. Then, as noted in the proof of Lemma \ref{lem-2}, for the indicated $ t $, one has $w_0(t)=1$, $w_1(t)=0$, and $w_2(t)=\kappa_2h^2$, i.e., $B_0(t)=\kappa_2h^2$. \hfill$\square$ {\it Proof of Corollary $\ref{atzero}$}. This assertion follows from Proposition \ref{predl-5} and (\ref{zero}). \hfill$\square$ \section*{Acknowledgments} Yu. Linke, I. Borisov, and P. Ruzankin were supported within the framework of the state contract of the Sobolev Institute of Mathematics, project FWNF-2022-0009.
\section{Introduction} Squeezing of quantum fluctuations has long been paraded as a key feature of inflation (e.g.~\cite{grish, Grishchuk:1993zd, starob96}). In a recent paper~\cite{us}, however, we showed that once one focuses on the end-product of squeezing, just about any model complies with the observational constraints. All that is observationally needed is that at late-time horizon reentry the fluctuations form standing waves with the correct temporal phase \cite{Dodelson:2003ip}. As shown in~\cite{us} this does not require that in the primordial phase there was ``squeezing'' (seen as the suppression of the momentum mode with respect to the momentum-free mode). It is sufficient for the momentum mode not to be overwhelmingly large at the end of the primordial phase, so that its decay during the standard Big Bang epoch leaves a dominant momentum-free mode at late times, producing the required standing wave. Thus, inflation and other models where fluctuations are squeezed (such as bimetric VSL models \cite{Jprl, Jprd}) are actually ``overkill'', or surplus to requirement. This remark is particularly pertinent for some models based on modified dispersion relations (MDR) \cite{DSRJM, Mukohyama:2009gg,DSR-dimred, DSR-rainbow, Amelino-Camelia:2013gna}, specifically the ``critical'' MDR model. This model is characterized by the dispersion relation found in Horava-Lifshitz theory, and is known to produce an exactly scale-invariant spectrum of primordial fluctuations. The critical model does not squeeze the primordial fluctuations, injecting equal amounts of momentum and momentum-free modes into the standard radiation epoch \cite{us}. As shown in~\cite{us} this is phenomenologically acceptable. Yet, one may wonder about the physical origin of this result, particularly as it is an oddity with respect to almost every other scenario. In this paper we shed light on this matter, re-examining the model in the dual frame, where the speed of light $c$ is set to one as a result of a wavelength-dependent redefinition of time. It was previously suggested that in this dual frame, driven by rainbow gravity, ``gravity switches off'', or matter becomes ``conformally coupled to gravity'' \cite{essay, DSR-rainbow}. In Sections~\ref{dual}, \ref{squeezing}, \ref{critical} we show that the former description is more precise. In a radiation dominated Universe (i.e. with conformal coupling to gravity) there is squeezing; in Minkowski spacetime there is not. The fluctuations in the critical MDR model behave for the purposes of squeezing just as if they were in Minkowski spacetime; therefore for all practical purposes gravity is ``switched off''. In this way this paper clarifies both the result on squeezing and the nature of gravity for the critical MDR model. Beyond the critical model, we know~\cite{us} that for MDR models with a red spectrum of perturbations squeezing does occur. Thus, the observed slightly red scalar perturbations must behave similarly to inflation, even if produced by MDR. In Section~\ref{gw} we show that substantial novelties with regards to inflation might arise for tensor perturbations. The MDR for gravitons can in principle be different from that for scalar perturbations; in particular it can produce a blue spectrum. In that case not only would there be no squeezing, but the momentum mode could grow large enough (compared to the momentum-free mode) that the evolution in the standard radiation phase would not be able to suppress it before horizon reentry. 
We find that this possibility depends solely on the background equation of state during the MDR phase. If $w<1$ then the usual standing waves are formed. If $w>1$ tensor perturbations form standing waves with a cosine temporal phase (complementary to the sine phase of scalar perturbations, responsible for the observed position of the Doppler peaks). For $w=1$ tensor perturbations would form traveling waves upon horizon reentry. \section{MDR frame and its dual frame}\label{dual} The theory of cosmological fluctuations for MDR models was first developed~\cite{DSRJM,DSR-dimred} in a frame where the existence of MDRs is made explicit and Einstein gravity is valid in a well-defined sense. This frame is thus called the MDR or Einstein frame. In this frame the second order action for the fluctuations is: \begin{eqnarray} S_2&=&\int d^3 k d\eta \, z^2[\zeta'^2 -c ^2 k^2 \zeta^2] \end{eqnarray} with $z= a$ and $c$ given by: \begin{equation} c= \left(\frac{\lambda k}{a(\eta)}\right)^{\gamma}\,,\label{eq:speedgamma} \end{equation} where $\lambda^{-1}$ is the UV scale and $\gamma$ is a dimensionless parameter. The critical model with $\gamma=2$ reproduces the dispersion relation found in the Horava-Lifshitz critical model and leads to perturbations that are scale invariant already inside the horizon \cite{DSR-dimred}. As usual, one sets $\zeta=y/a$ to obtain the dynamical equation: \begin{equation} y''+\left(c^2k^2-\frac{a''}{a}\right)y=0. \end{equation} The conjugate momentum to $y$ is given by \begin{equation}\label{mom1} p=y'-\frac{a'}{a}y=a\zeta'. \end{equation} In general the positive and negative frequencies can be identified from: \begin{eqnarray} y({\mathbf k},\eta)&=&\frac{c({\mathbf k},\eta)+c^\dagger (-{\mathbf k},\eta)}{\sqrt{2\omega}} \label{eq:yvsc}\\ p({\mathbf k},\eta)&=&-i \sqrt{\frac{\omega}{2}}(c({\mathbf k},\eta)- c^\dagger (-{\mathbf k},\eta)).\label{eq:pvsc} \end{eqnarray} An analysis of the squeezing parameter $s$ in this frame was carried out in~\cite{us}, with the result that for $\gamma=2$ no squeezing occurs: $s=0$. One may also compute~\cite{us} the parameter $\sigma$ that measures the ratio of the momentum-free mode and the momentum mode (equivalent to $s$ asymptotically for many models, including inflation) with the result: \begin{equation}\label{sigma} \sigma=\frac{\omega^2 |y|^2}{4|p|^2}\sim1. \end{equation} Hence no suppression of the momentum mode over the momentum-free mode occurs in the MDR phase; but also neither is there an enhancement of this mode, as would be the case for a collapsing universe (see~\cite{us}). As explained in~\cite{DSR-rainbow}, a dual frame, with a constant $c$, may be obtained by defining a new time variable: \begin{equation}\label{tau} \tau=\int c d\eta \end{equation} (in what follows we shall use a tilde to denote quantities as measured in the dual frame). This is a ``rainbow frame'' because the new time is also $k$ dependent (i.e.: $\tau=\tau(\eta,k)$). In such a frame $\tilde c=1$, however the non-trivial effects of what in the MDR frame is a varying-$c$ are shifted elsewhere. In the dual frame Einstein's gravity is no longer valid, and is replaced by rainbow gravity~\cite{DSR-rainbow}. Setting $\tilde\zeta=\zeta$ it is straightforward to show that: \begin{eqnarray} S_2&=&\int d^3 k d\tau\, \tilde z^2[\dot{\tilde \zeta}^2 - k^2 \tilde \zeta^2]\\ \tilde z &=& a\sqrt{c} \end{eqnarray} where a dot denotes derivative with respect to $\tau$. 
The equation of motion associated with this action is: \begin{equation}\label{eom1} \ddot {\tilde y} + \left(k^2-\frac{\ddot{\tilde z}}{\tilde z}\right) \tilde y=0, \end{equation} with $\tilde \zeta=\tilde y/\tilde z$, as usual. It is remarkable that for $\gamma=2$ we obtain a time-independent parameter $\tilde z$ controlling the effects of gravity upon the fluctuations: \begin{equation} \tilde z=\lambda k. \end{equation} Although the $k$ dependence in $\tilde z$ signals a non-local relation between $\tilde \zeta$ and $\tilde y$, its lack of time-dependence has an important implication. It marks a decoupling of the fluctuations from gravity (or the ``switching off of gravity'', as speculated in~\cite{essay}). In fact, for $\gamma=2$ the fluctuations live as if they were in Minkowski spacetime, with dynamical equation: \begin{equation}\label{eom} \ddot {\tilde y} + k^2\tilde y=0, \end{equation} since the term in $\ddot{\tilde z}/\tilde z$ now vanishes. Moreover, for $\gamma=2$ the conjugate momentum to $\tilde y$ is simply: \begin{equation}\label{mom} \tilde p=\dot{\tilde y}. \end{equation} This is to be contrasted with the result for a radiation dominated universe (where a term in $a'/a$ subsists in the relation between $y$ and its conjugate momentum), with implications to be made apparent soon. \section{A squeezing dictionary between frames}\label{squeezing} We now provide a dictionary between frames for the quantities appearing in the squeezed state formalism, in order to facilitate a re-examination of the findings in~\cite{us} from the perspective of the dual frame. In providing this dictionary we shall keep $\gamma$ general. Taking $\tilde \zeta=\zeta$ as a starting point, and setting $\zeta=y/z$ and $\tilde \zeta= \tilde y/\tilde z$ as usual, we have at once that: \begin{equation} \tilde y=y\frac{\tilde z}{z}=y\sqrt{c}. \end{equation} In addition, using (\ref{mom1}), we find the relation between the momenta: \begin{equation} \tilde p=\tilde z \dot\zeta=p\frac{\tilde z}{z}\frac{1}{c}=\frac{p}{\sqrt{c}}. \end{equation} In view of these relations, and considering that $\omega=c k$, comparing (\ref{eq:yvsc}) and (\ref{eq:pvsc}) (valid in the Einstein/MDR frame) with their counterparts in the dual frame: \begin{eqnarray} \tilde y({\mathbf k},\tau)&=&\frac{\tilde c({\mathbf k},\tau)+\tilde c^\dagger (-{\mathbf k},\tau)}{\sqrt{2 k}} \label{eq:yvsc1}\\ \tilde p({\mathbf k},\tau)&=&-i \sqrt{\frac{ k}{2}}(\tilde c({\mathbf k},\tau)- \tilde c^\dagger (-{\mathbf k},\tau))\label{eq:pvsc1} \end{eqnarray} we can conclude that: \begin{eqnarray} c({\mathbf k},\eta)&=&\tilde c({\mathbf k},\tau)\\ c^\dagger (-{\mathbf k},\eta)&=&\tilde c^\dagger (-{\mathbf k},\tau). \end{eqnarray} Therefore the Bogolubov transformations in the MDR frame: \begin{eqnarray} c({\mathbf k},\eta)&=&u_{\mathbf k}(\eta)c_0({\mathbf k})+ v_{\mathbf k}(\eta)c^\dagger _0(-{\mathbf k}) \label{eq:cEvolution}\\ c^\dagger (-{\mathbf k},\eta)&=&v^\star_{\mathbf k}(\eta)c_0({\mathbf k}) + u^\star _{\mathbf k}(\eta)c^\dagger _0(-{\mathbf k})\label{eq:cdagEvolution} \end{eqnarray} are exactly mimicked in the dual frame with $\tilde u_{\mathbf k}=u_{\mathbf k}$ and $\tilde v_{\mathbf k}=v_{\mathbf k}$, with the consequence that when we parameterise \begin{eqnarray} u_{k}(\eta)&=&e^{-i\theta_{k}(\eta)} \cosh(r_{k}(\eta))\label{uexp}\\ v_{k}(\eta)&=&e^{i(\theta_{k}(\eta)+2 \phi_{k}(\eta))} \sinh(r_{k}(\eta))\label{vexp}, \end{eqnarray} and likewise for the dual frame, we find that the squeezing parameter and angles are the same in both frames.
Specifically: \begin{equation} \tilde s_k=s_k \end{equation} (where $s_{\mathbf k}(\eta)=|v_{\mathbf k}(\eta)|^2$, and likewise for the dual frame). It is also straightforward to see that the parameter $\sigma$ introduced in~\cite{us} and defined in (\ref{sigma}) is invariant under the frame transformation: \begin{equation} \tilde \sigma=\frac{|\tilde y|^2 k^2}{4 |\tilde p|^2}= \frac{|y|^2 \omega^2}{4 |p|^2}=\sigma. \end{equation} The relative strengths of the momentum mode and the momentum-free mode are therefore the same, once we include the different value of $c$ in the two frames. Therefore arguments laid out in the dual frame transpose directly to the Einstein/MDR frame. This will help us understand better the result found in~\cite{us}. \section{Explaining the solutions in the MDR frame for the critical model}\label{critical} It should be obvious from Eqs.~(\ref{eom}) and (\ref{mom}) that for $\gamma=2$ the fluctuations behave as if they were living in Minkowski spacetime. This is to be distinguished from fluctuations living in a radiation dominated universe with Einstein gravity. In fact in that case Eq.~(\ref{eom}) is valid as well (since the term in $a''/a$ in the equation of motion vanishes, due to conformal coupling). However, the effects of expansion are still present in the definition of the conjugate momentum, $p=y'-y/\eta$. This leads to squeezing, as explained in~\cite{us}. In the present case things are quite different in this respect, as we now see. The most general solution to (\ref{eom}) is: \begin{equation}\label{ysoldual} \tilde y= \frac{1}{\sqrt{2k}}(\tilde c_0({\mathbf k})e^{-ik\tau}+ \tilde c_0^\dagger (-{\mathbf k})e^{ik\tau}) \end{equation} with conjugate momentum: \begin{equation} \tilde p =\dot{\tilde y}=- i\sqrt{\frac{k}{2}}(\tilde c_0({\mathbf k})e^{-ik\tau}- \tilde c_0^\dagger (-{\mathbf k})e^{ik\tau}). \end{equation} Comparing with (\ref{eq:yvsc1}) and (\ref{eq:pvsc1}) we therefore find: \begin{eqnarray} \tilde c({\mathbf k},\tau)&=&\tilde c_0({\mathbf k})e^{-ik\tau}\\ \tilde c^\dagger (-{\mathbf k},\tau)&=& \tilde c_0^\dagger (-{\mathbf k})e^{ik\tau} \end{eqnarray} so that $v_{\mathbf k}=0$ and there is no squeezing at all, \begin{equation} \tilde s=0, \end{equation} as expected in Minkowski spacetime. It is also obvious that for vacuum fluctuations at late times we have: \begin{equation} \tilde \sigma\sim 1. \end{equation} This explains the results found in~\cite{us}. Since $\tilde s=s$ and $\tilde\sigma=\sigma$, both squeezing and momentum suppression can be equally understood in both frames. In the dual frame the fluctuations behave as if they were living in Minkowski spacetime. Thus, there is no squeezing or momentum suppression, just as was found in~\cite{us}. It also explains why the solutions found in~\cite{us} are so simple for $\gamma=2$. They are in fact harmonic oscillations transposed to the MDR frame (note that Hankel functions of order $\nu=1/2$ are just harmonic oscillations). To make this explicit, note that if: \begin{equation} a\propto \eta^m \end{equation} with \begin{equation} m= \frac{2}{1+3w} \label{mw} \end{equation} then the relation between the two times given by (\ref{tau}) is: \begin{equation}\label{taueta} \tau =\frac{-\lambda^2 k^2}{(2m-1)\eta ^{2m-1}}. \end{equation} The minus sign represents the fact that growing $\eta>0$ maps into growing $\tau<0$ if $-1/3<w\le1$ (or $m\ge 1/2$), needed for solving the horizon problem (see~\cite{us}).
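As a parenthetical numerical check, the closed form (\ref{taueta}) can be verified directly against a quadrature of (\ref{tau}), and the vacuum solution (\ref{ysoldual}) then gives $\sigma$ of order unity at all times. The following minimal sketch (in Python) assumes purely illustrative units, $\lambda=k=1$, and $a=\eta^m$:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

lam, k = 1.0, 1.0                          # illustrative units

def F(eta, m):
    # antiderivative quoted in Eq. (taueta) for gamma = 2; requires m != 1/2
    return -lam**2 * k**2 / ((2*m - 1) * eta**(2*m - 1))

for m in (0.75, 1.0, 2.0):                 # m = 2/(1 + 3w), cf. Eq. (mw)
    num, _ = quad(lambda e: (lam * k / e**m)**2, 1.0, 5.0)  # int c d eta
    print(m, num, F(5.0, m) - F(1.0, m))                    # the pairs agree

# The Minkowski vacuum mode of Eq. (ysoldual) has sigma of order unity:
tau = F(np.linspace(1.0, 5.0, 100), 1.0)
y_t = np.exp(-1j * k * tau) / np.sqrt(2 * k)
p_t = -1j * np.sqrt(k / 2) * np.exp(-1j * k * tau)
print(k**2 * np.abs(y_t)**2 / (4 * np.abs(p_t)**2))         # constant 0.25
\end{verbatim}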
Inserting \eqref{taueta} into (\ref{ysoldual}) we recognize the solutions found in~\cite{us}, with amplitudes $c({\mathbf k})=\tilde c({\mathbf k})$, with the sign of the exponentials flipped (as explained there, this is needed so that ${\mathbf k}$ points to the actual direction of propagation). \section{Non-critical models and gravity waves}\label{gw} It was shown in~\cite{us} that, should $\gamma\neq 2$, then $\sigma$ changes in time and therefore will not be of order 1 at the end of the primordial MDR phase, as is the case for $\gamma=2$. However, the observed red spectrum of scalar fluctuations suggests $\gamma<2$ (see \cite{DSR-dimred}), for which it was shown that $\sigma$ {\it increases} in time, much as in inflation. Therefore no new constraints upon the theory are obtained. The situation would potentially be different if the spectrum could be blue (and $\gamma >2$), because then $\sigma$ could {\it decrease} during the MDR phase to the point where its growth in the radiation epoch might not be enough to ensure the production of the correct standing waves at horizon reentry. This cannot happen for scalar perturbations since their spectrum has been observed to be red. For tensor perturbations, however, it remains a possibility, with implications explored in this Section. As was stated in \cite{DSR-dimred}, much of the discussion for scalar modes in MDR models can be replicated for tensor modes, assuming a dispersion relation of the same form, but with possibly different values for the parameters $\gamma$ and $\lambda$ (cf. Eq. \eqref{eq:speedgamma}). We should just add $S$ and $T$ labels to all variables, to distinguish the two types of fluctuations. The fact that $\lambda_S$ could be different from $\lambda_T$ leads to different amplitudes (and thus controls the observable tensor to scalar ratio $r$ \cite{DSR-dimred}). It could also be that the exponents are different, $\gamma_S\neq \gamma_T$, with the implication that $n_S\neq n_T$. Given that no primordial tensor modes have yet been observed, we can speculate on the implications for the squeezing of tensor modes, should they have a blue spectrum. Could it be that for a given range of blue spectra the gravity waves reenter the horizon as travelling waves, or even as standing waves with the complementary phase to the one observed for scalar modes? In order to answer this question we note that during the MDR phase we have \cite{us}: \begin{equation}\label{sigmaDSR} \sigma_T \propto a^{4-2\gamma_{T}}, \end{equation} so that for $\gamma_T>2$ it decreases in time starting from a value of order 1. We should therefore evaluate the evolution of $\sigma_T$ in the radiation epoch, should it be fed a very small $\sigma_T$ from the primordial phase. Until $\sigma_T\sim 1$ we have $y_T\approx B\eta/a$ and $p_T\approx B'/a$, so that \begin{equation}\label{sigmarad1} \sigma_T\propto \eta^2 \end{equation} (this is the same result as for a contracting radiation dominated universe, since the evolution has $\sigma_T\ll 1$ throughout). Once $\sigma_T\sim 1$ (if this ever happens), then $y_T\approx A$ and $p_T\approx B'/a$, and we have: \begin{equation}\label{sigmarad2} \sigma_T\propto a^4\propto \eta^4. \end{equation} This is consistent with the formula relating $\sigma_0$ and $\Sigma$ in \cite{us}, valid when $\sigma_0$ is not much smaller than 1. In our problem the two regimes (Eq.~(\ref{sigmarad1}) and Eq.~(\ref{sigmarad2})) may need to be considered.
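The two radiation-epoch regimes can be stitched together in a small helper that propagates an initial $\sigma_T\ll1$ to horizon reentry; this is a schematic sketch only, with the matching point placed at $\sigma_T=1$:

\begin{verbatim}
def sigma_radiation(sigma0, eta_ratio):
    """Schematic growth of sigma_T during the radiation epoch.

    sigma0    : value inherited from the MDR phase (assumed << 1)
    eta_ratio : eta at horizon reentry over eta at the start of the epoch
    Grows as eta^2 while sigma_T < 1 (Eq. sigmarad1), and as eta^4
    afterwards (Eq. sigmarad2).
    """
    eta_star = sigma0 ** -0.5        # eta ratio at which sigma_T reaches 1
    if eta_ratio <= eta_star:
        return sigma0 * eta_ratio**2
    return (eta_ratio / eta_star) ** 4

print(sigma_radiation(1e-8, 1e3))    # still in the eta^2 regime: 1e-2
print(sigma_radiation(1e-8, 1e6))    # crosses over, grows as eta^4: 1e8
\end{verbatim}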
We now need to work out the condition for the decay in $\sigma_T$ in the MDR phase to be severe enough that its growth in the radiation epoch is not sufficient to make it large upon horizon reentry. The specific time dependence of the scale factor is set by both the parameter $\gamma_T$ and the equation of state parameter $w$. The generalization of (\ref{taueta}), relating the conformal MDR frame time and the rainbow time $\tau$ for a generic $\gamma_T$, is: \begin{equation} \eta\propto (-\tau)^{\frac{-1}{\gamma_T\, m-1}} \end{equation} so that the evolution of the scale factor is: \begin{equation} a\propto (-\tau)^{\frac{-m}{\gamma_T\, m-1}}, \end{equation} for $m$ defined in eq. \eqref{mw} and $\gamma_T \,m>1$. This last condition is needed so that we have inflation in the dual frame and solve the horizon problem. In terms of the equation of state parameter it is equivalent to \begin{equation}\label{horsol} -\frac{1}{3}<w<\frac{2\gamma_T-1}{3}, \end{equation} something that has been known since \cite{DSR-rainbow}. Combining these results we see that during the MDR phase: \begin{equation}\label{sigmatau} \sigma_{T}\propto (-\tau)^{\frac{-m(4-2\gamma_T)}{\gamma_T \,m-1}}\,. \end{equation} The advantage of using $\tau$ instead of $\eta$ to discuss our problem is that the ratio of $|\tau|$ at first horizon crossing to $|\tau|$ at the end of the MDR phase equals the ratio of $|\eta|$ at horizon reentry to $|\eta|$ at the end of the MDR phase. Hence we can directly find the conditions under which $\sigma$ does not decrease so much during the MDR phase that it can no longer grow large again in the radiation epoch. The condition is simply that the exponent in (\ref{sigmatau}) be smaller than the one in (\ref{sigmarad1}). The limiting condition (that the exponent is 2) would translate into travelling waves on reentry, since the momentum-free mode and the momentum mode would have similar amplitudes. This happens for $m=1/2$, that is \begin{equation} w=1 \end{equation} and any value of $\gamma_T>2$ (see (\ref{horsol})). If $1<w<\frac{2\gamma_T-1}{3}$ the exponent in \eqref{sigmatau} is larger than 2, indicating that the momentum mode is dominant at horizon reentry. Thus near-standing waves with a cosine temporal phase would reenter the horizon. If $w<1$, on the other hand, the exponent in \eqref{sigmatau} is smaller than 2, so that the growth of the momentum mode during the MDR phase is more than compensated by its suppression during the following radiation epoch. In this case near-standing waves with the sine temporal phase would reenter the horizon, similarly to scalar perturbations. We have demonstrated that the temporal phase of gravity waves at late-time horizon reentry depends only on the equation of state parameter for any $\gamma_{T}>2$, that is for any value of the spectral index $n_{T} > 1$. Hence, even if gravity waves were blue, we would not get a direct constraint on $n_T$ from the phenomenology of primordial gravity waves, should these ever be observed. Instead we would constrain the equation of state during the MDR phase. The observation of travelling waves, in particular, would require $w=1$. From another perspective, if we were to observe that the position of the Doppler peaks of gravity waves is compatible with a cosine temporal phase at horizon reentry, rather than a sine phase, then MDR models would predict a blue spectrum and $w>1$. \section{Conclusions} In this paper we reexamined the status of squeezing of primordial fluctuations produced in models with MDR.
New insights were obtained from the perspective of the dual rainbow frame, where perturbations propagate with constant speed, but time is wavelength-dependent. We first focused on the model characterized by the dispersion relation that leads to an exactly scale-invariant power spectrum ($\omega^{2}\sim k^{6} $). We found that in the rainbow frame perturbations propagate following the same dynamics that they would have in Minkowski spacetime. This happens both at the level of the equations of motion and their solutions (harmonic oscillations) and at the level of the definition of the conjugate momentum to perturbations. Hence there is no squeezing, even though squeezing would still be expected if the fluctuations were merely conformally coupled to gravity, as is the case in a radiation dominated universe. The absence of primordial squeezing, however, does not lead to pathological implications. Equal amounts of momentum and momentum-free modes would be injected in the radiation dominated phase. The former then decays, so that perturbations reenter the horizon as standing waves with a sine temporal phase, similarly to what happens in inflation. In more general models we have $\omega^{2}\sim k^{2(\gamma+1)} $, and $\gamma$ could be different for scalars and tensors. If $\gamma<2$ then the spectrum is red; thus this would be a more realistic model for scalar perturbations. In that case squeezing does occur, so no further constraints arise. The case $\gamma>2$ is interesting in that during the primordial phase the momentum mode is enhanced over the momentum-free mode (just like in a contracting universe), possibly leading to new results. However, in this case the spectrum is blue. Therefore this could only be relevant for tensor perturbations, since for them we do not yet have constraints on the spectral index of their power spectrum. If the spectrum of tensor fluctuations were to be blue, we find that whether or not perturbations reenter the horizon as standing waves depends on the equation of state during the MDR phase. For $w>1$ one would have standing waves with a cosine phase, while a sine phase would be expected for $w<1$. If $w=1$ one would expect perturbations to reenter the horizon as travelling waves. This relation between the kind of waves formed by the perturbations and the equation of state is likely to be a unique feature of MDR models, with interesting potential phenomenological implications in the distant future. \section*{Acknowledgments} We thank Robert Brandenberger, Carlo Contaldi and Marco Peloso for discussions related to this paper. We acknowledge support from the John Templeton Foundation. JM was also supported by an STFC consolidated grant.
\section{Introduction} The standard paradigm of cosmology describes the large-scale distribution of matter and galaxies in an expanding Universe \citep[][and references therein]{2003moco.book.....D}. Strongly supported by observations, this model assumes a statistically homogeneous and isotropic Universe with cold dark matter (CDM) as the dominating form of matter. Matter in total has the mean density \mbox{$\Omega_{\rm m}\approx0.3$}, of which ordinary baryonic matter is just \mbox{$\Omega_{\rm b}\approx0.05$}; as usual, densities are in units of the critical density (or its energy equivalent). The largest fraction \mbox{$\Omega_\Lambda\approx0.7$} in the cosmological energy density is given by a cosmological constant $\Lambda$ or so-called dark energy, resulting in a flat or approximately flat background geometry with curvature parameter \mbox{$K=0$} \citep[][and references therein]{1917SPAW.......142E,2016A&A...594A..13P}. The exact physical nature of dark matter is unknown, but its presence is consistently inferred through visible tracers from galactic to cosmological scales at different epochs in the cosmic history \citep[][for a review]{2005PhR...405..279B}. In particular, the coherent shear of distant galaxy images (background sources) by the tidal gravitational field of intervening matter gives direct evidence for the (projected) density field of dark matter \citep{2004ApJ...604..596C}. The basic physics of galaxy formation inside dark-matter halos and of galaxy evolution seems to be identified and reasonably well matched by observations, although various processes, such as star formation and galaxy-gas feedback, are still not well understood or worked out in detail \citep{2010gfe..book.....M}. Ultimately, the ability of the $\Lambda\rm CDM$ model to quantitatively describe the observed richness of galaxy properties from initial conditions will be a crucial validation test. One important property of galaxies is their spatial distribution. Galaxies are known to be distributed differently than the matter in general; they are so-called biased tracers of the matter density field \citep{1984ApJ...284L...9K}. The details of the biasing mechanism are related to galaxy physics \citep{2017arXiv170703397J,2005Natur.435..629S,2004ApJ...601....1W,2001MNRAS.320..289S,2000MNRAS.311..793B,2000MNRAS.318.1144P}. An observed galaxy bias for different galaxy types and redshifts consequently provides input and tests for galaxy models. Additionally, its measurement is practical for studies that rely on fiducial values for the biasing of a particular galaxy sample or on the observational support for a high galaxy-matter correlation on particular spatial scales \citep[e.g.][]{2017arXiv170605004V,2013MNRAS.429.3230H,2013A&A...560A..33S,2011ApJ...734...94M,2010Natur.464..256R,2010PhRvD..81f3531B}. In this context, we investigate the prospects of weak gravitational lensing to measure the galaxy bias \citep[e.g.][for a review]{2015RPPh...78h6901K,2008PhR...462...67M,schneider2006gravitational}. There are clearly various ways to express the statistical relationship between the galaxy and matter distribution, both of which can be seen as realisations of statistically homogeneous and isotropic random fields \citep{2016arXiv161109787D}. With a focus on second-order statistics, we use the common parameterisation in \citet{1999ApJ...518L..69T}.
This defines galaxy bias in terms of auto- and cross-correlation power spectra of the random fields for a given wave number $k$: (i) a bias factor $b(k)$ for the relative strength between galaxy and matter clustering; and (ii) a factor $r(k)$ for the galaxy-matter correlation. The second-order biasing functions can be constrained by combining galaxy clustering with cosmic-shear information in lensing surveys \citep{2016MNRAS.463.3326F,2012MNRAS.426..566C,2012A&A...543A...2S,2003MNRAS.346..994P}. In applications of these techniques, galaxy biasing is then known to depend on galaxy type, physical scale, and redshift, thus reflecting interesting galaxy physics \citep{2016MNRAS.459.3203C,2016MNRAS.456.3886B,2016MNRAS.462...35P,2016arXiv160908167P,2013MNRAS.433.1146C,2013MNRAS.430.2476S,2012ApJ...750...37J,2007A&A...461..861S,2003MNRAS.346..994P,2002ApJ...577..604H}. Our interest here is the quality of lensing measurements of galaxy bias. For this purpose, we focus on the method by \cite{1998A&A...334....1V} and \citet{1998ApJ...498...43S}, first applied in \citet{2001ApJ...558L..11H} and \citet{2002ApJ...577..604H}, where one defines relative aperture measures of the galaxy number-density and the lensing mass to observe $b(k)$ and $r(k)$ as projections on the sky, averaged in bands of radial and transverse direction. The advantage of this method is its model independence apart from a cosmology-dependent normalisation. As an improvement, we define a new procedure to deproject the lensing measurements of the projected biasing functions, giving direct estimates of $b(k)$ and $r(k)$ for a selected galaxy population. In addition, we account for the intrinsic alignment of source galaxies that are utilised in the lensing analysis \citep{2015SSRv..193..139K}. To eventually assess the accuracy and precision of our deprojection technique, we compare the results to the true biasing functions for various galaxy samples in a simulated, about $1000\,\rm deg^2$ wide survey, constructed with a semi-analytic galaxy model by \citet{2015MNRAS.451.2663H}, H15 hereafter, and data from the Millennium Simulation \citep{2005Natur.435..629S}. To this end, a large part of this paper deals with the construction of flexible template models of $b(k)$ and $r(k)$ that we forward-fit to the relative aperture measures. These templates are based on a flexible halo-model prescription, which additionally allows us a physical interpretation of the biasing functions \citep{2002PhR...372....1C}. Some time is therefore also spent on a discussion of the scale-dependence of galaxy bias, which will be relevant in future applications of our technique. The structure of this paper is as follows. In Sect. \ref{sect:data}, we describe the construction of data for a mock lensing survey. With respect to number densities of lens and source galaxies on the sky, the mock data are similar to realistic galaxy samples in the Canada-France-Hawaii Telescope Lensing Survey, CFHTLenS hereafter \citep{2012MNRAS.427..146H}. We increase the simulated survey area, however, to $\sim1000\rm\deg^2$ in order to assess the quality of our methodology for state-of-the-art (ground-based) surveys in future applications. In Sect. \ref{sect:projectedbias}, we review the relation of the spatial biasing functions to their projected counterparts, which are observable through the aperture statistics.
This section also adds to the technique of \cite{2002ApJ...577..604H}, as a novelty, potentially relevant higher-order corrections in the lensing formalism. It also incorporates a treatment of the intrinsic alignment of sources into the aperture statistics. Section \ref{sect:spatialbias} derives our template models of the spatial biasing functions, applied for a deprojection; Section \ref{sect:implementation} summarises the template parameters and explores their impact on the scale dependence of galaxy bias. The methodological details for the statistical inference of $b(k)$ and $r(k)$ from noisy measurements are presented in Sect. \ref{sect:statinference}. We apply this inference technique to the mock data in the result Sect. \ref{sect:results} and assess its accuracy, precision, and robustness. As a first demonstration, we apply our technique to previous measurements in \citet{2007A&A...461..861S}. We finally discuss our results in Sect. \ref{sect:discussion}. \section{Data} \label{sect:data} This section details our mock data, that is lens and source catalogues, to which we apply our deprojection technique in the following sections. A reader more interested in the method details for the recovery of galaxy bias with lensing data could proceed to the next sections. \renewcommand{\arraystretch}{1.3} \begin{table} \caption{\label{tab:samples} Selection criteria applied to our mock galaxies to emulate stellar-mass samples consistent with SES13 and for the two additional colour-selected samples RED and BLUE. The samples are further subdivided, as in SES13, into the two redshift bins low-$z$ ($\bar{z}\approx0.36$) and high-$z$ ($\bar{z}\approx0.52$) by an emulated selection in photometric redshift $z_{\rm p}$. The redshift distributions of all samples are summarised by Fig. \ref{fig:pofz}. The sample SOURCES is used as the background sample for the mock lensing analysis.} \begin{center} \begin{tabular}{ll} \hline\hline Galaxy Sample & Selection\tablefootmark{a}\\ \hline\\ % SM1 & $0.5\le M_\ast<1$; $i^\prime<22.5$ \\ % SM2 & $1\le M_\ast<2$; $i^\prime<22.5$ \\ % SM3 & $2\le M_\ast<4$; $i^\prime<22.5$ \\ % SM4 & $4\le M_\ast<8$; $i^\prime<22.5$ \\ % SM5 & $8\le M_\ast<16$; $i^\prime<22.5$ \\ % SM6 & $16\le M_\ast<32$; $i^\prime<22.5$ \\ % RED & $u-r>1.93\,z+1.85$; $i^\prime<22.5$;\\ & $0.5\le M_\ast<32$\\ % BLUE & $u-r\le1.93\,z+1.85$; $i^\prime<22.5$; \\ & $0.5\le M_\ast<32$\\\\ % SOURCES & $i^\prime\le24.7$; $0.65\le z_{\rm p}<1.2$ \end{tabular} \tablefoot{\tablefoottext{a}{$M_\ast$ refers to the stellar mass in units of $10^{10}\,\rm M_\odot$; $i^\prime,u,r$ are apparent magnitudes as defined for CFHTLenS \citep{2013MNRAS.433.2545E}; $z$ is the (cosmological) galaxy redshift; $z_{\rm p}$ is a photometric redshift with errors similar to CFHTLenS}} \end{center} \end{table} \renewcommand{\arraystretch}{1.0} \begin{figure} \begin{center} \epsfig{file=fig1.ps,width=125mm,angle=-90} \end{center} \caption{\label{fig:pofz} Models of the probability densities $p_{\rm d}(z)$ of galaxy redshifts in our lens samples SM1 to SM6, RED and BLUE (two top panels), and the density $p_{\rm s}(z)$ of the source sample (bottom panel).} \end{figure} \subsection{Samples of lens galaxies} Our galaxy samples use a semi-analytic model (SAM) according to H15, which is implemented on the Millennium Simulation \citep{2006Natur.440.1137S}. These SAMs are the H15 mocks that are also used in \cite{2016arXiv160808629S}.
The Millennium Simulation (MS) is an $N$-body simulation of the CDM density field inside a comoving cubic volume of $500\,h^{-1}\,\rm Mpc$ side length; it samples the density field with $10^{10}$ mass particles and has a spatial resolution of $5\,h^{-1}\,\rm kpc$. The fiducial cosmology of the MS has the density parameters $\Omega_{\rm m}=0.25=1-\Omega_\Lambda$ and $\Omega_{\rm b}=0.045$, $\sigma_8=0.9$ for the normalisation of the linear matter power-spectrum, a Hubble parameter $H_0=100\,h\,\rm km\,s^{-1}\,Mpc^{-1}$ with $h=0.73$, and a spectral index for the primordial matter-power spectrum of $n_{\rm spec}=1.0$. All density parameters are in units of the critical density \mbox{$\bar{\rho}_{\rm crit}=3H_0^2/8\pi\,G_{\rm N}$} where $G_{\rm N}$ denotes Newton's constant of gravity. The galaxy mocks are constructed by populating dark matter halos in the simulation based on the merger history of halos and in accordance with the SAM details. We project the positions of the SAMs inside 64 independent light cones onto a $4\times4\,\rm deg^2$ piece of sky. The resulting total survey area is hence $1024\,\rm deg^2$. We then select galaxies from the mocks to emulate the selection in redshift and stellar mass in \citet{2013MNRAS.430.2476S}, SES13 henceforth. Details on the emulation process can be found in \citet{2016arXiv160808629S}. We give only a brief summary here. The mock-galaxy and source samples are constructed to be compatible with those in recent lensing studies, dealing with data from the Canada-France-Hawaii Telescope Survey, CFHTLenS hereafter \citep{2016arXiv160808629S,2014MNRAS.437.2111V,2013MNRAS.433.2545E,2012MNRAS.427..146H}. Our selection proceeds in two steps. First, we split the galaxy catalogues in stellar mass, including emulated measurement errors, and $i^\prime$-band brightness to produce the stellar-mass samples SM1 to SM6; the photometry uses the AB-magnitude system. Second, we randomly discard galaxies in each stellar-mass sample to obtain a redshift distribution that is comparable to a given target distribution. As targets, we employ the photometric redshift bins `low-$z$' and `high-$z$' in SES13, which are the redshift distributions in CFHTLenS after a cut in photometric redshift $z_{\rm p}$. The low-$z$ bin applies \mbox{$0.2\le z_{\rm p}<0.44$}, and the high-$z$ bin applies \mbox{$0.44\le z_{\rm p}<0.6$}. See Fig. 5 in SES13 for the different target distributions. Our selection criteria for SM1 to SM6 are listed in Table \ref{tab:samples}. We note here that randomly removing galaxies at redshift $z$ adds shot noise but does not change the matter-galaxy correlations and the (shot-noise corrected) galaxy clustering. In addition to SM1-6, we define two more samples RED and BLUE based on the characteristic bimodal distribution of $u-r$ colours (Table \ref{tab:samples}). Both samples initially consist of all galaxies in SM1 to SM6 but are then split depending on the $u-r$ colours of galaxies: the division is at $(u-r)(z)=1.93\,z+1.85$, which varies with $z$ to account for the reddening with redshift. We crudely found $(u-r)(z)$ by identifying by eye the mid-points $(u-r)_i$ between the red and blue mode in $u-r$ histograms of CFHTLenS\footnote{\url{http://cfhtlens.org}} SM1-6 galaxies in four photometric-redshift bins with means $\{z_i\}=\{0.25,0.35,0.45,0.55\}$ and width $\Delta z=0.1$ \citep{2012MNRAS.421.2355H}. Then we fit a straight line to the four empirical data points $\{(z_i,(u-r)_i)\}$ and obtain the above $(u-r)(z)$ as the best fit.
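The straight-line fit itself is elementary; as an illustration, a least-squares fit to four hypothetical mid-points (the numbers below are invented such that they reproduce the quoted best fit; they are not the CFHTLenS measurements):

\begin{verbatim}
import numpy as np

z_i = np.array([0.25, 0.35, 0.45, 0.55])    # bin centres as in the text
ur_i = np.array([2.33, 2.53, 2.72, 2.91])   # hypothetical mid-points (u-r)_i

slope, intercept = np.polyfit(z_i, ur_i, deg=1)
print(slope, intercept)   # about (1.93, 1.85), i.e. (u-r)(z) = 1.93 z + 1.85
\end{verbatim}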
For splitting the mocks, we identify the precise redshifts $z$ in H15 with the photometric redshifts $z_{\rm p}$ in CFHTLenS, which, for the scope of this work, is a sufficient approximation. Similar to the previous stellar-mass samples, we combine the redshift posteriors of all CFHTLenS-galaxies RED or BLUE to define the target distributions for our corresponding mock samples. For the following galaxy-bias analysis, we estimate the probability density function (PDF) $p_{\rm d}(z)$ of each galaxy sample from the mock catalogues in the foregoing step. Simply using histograms of the sample redshifts may seem like a good idea but is, in fact, problematic because the histograms depend on the adopted binning. This is especially relevant for the prediction of galaxy clustering which depends on $p_{\rm d}^2(z)$ (see Eq. \ref{eq:pn}). Instead, we fit for $p_{\rm d}(z)$ a smooth four-parameter Gram-Charlier series \begin{equation} \label{eq:gramcharlier} p_{\rm d}(z|\lambda,\vec{\Theta})= \lambda\, \e^{-\frac{x^2}{2}}\, \left( 1+\frac{s}{6}H_3(x)+\frac{k}{24}H_4(x) \right) ~;~ x=\frac{z-\bar{z}}{\sigma_{\rm z}} \end{equation} with the Hermite polynomials $H_3(x)=x^3-3x$ and $H_4(x)=x^4-6x^2+3$ to a mock sample \mbox{$\{z_i:i=1\ldots n\}$} of $n$ galaxy redshifts; $\lambda$ is a normalisation constant that depends on the parameter combination \mbox{$\vec{\Theta}=(\bar{z},\sigma_{\rm z},s,k)$} and is defined by \begin{equation} \int_0^\infty\d z\,p_{\rm d}(z|\lambda,\vec{\Theta})=1\;. \end{equation} For an estimate $\hat{\vec{\Theta}}$ of the parameters $\vec{\Theta}$, we maximise the log-likelihood \begin{equation} \ln{{\cal L}(\vec{\Theta})}=\sum_{i=1}^n\ln{p_{\rm d}(z_i|\lambda,\vec{\Theta})} \end{equation} with respect to $\vec{\Theta}$. This procedure selects the PDF $p_{\rm d}(z|\lambda,\hat{\vec{\Theta}})$ that is closest to the sample distribution of redshifts $z_i$ in a Kullback-Leibler sense \citep{knight1999mathematical}. The mean $\bar{z}$ and variance $\sigma_z^2$ in the fit match those of the redshift distribution in the mock lens sample. The resulting densities for all our lens samples are shown in the two top panels of Fig. \ref{fig:pofz}. \subsection{Shear catalogues} For mock source catalogues based on the MS data, we construct lensing data by means of multiple-lens-plane ray-tracing as described in \cite{2009A&A...499...31H}. The ray-tracing produces the lensing convergence $\kappa(\vec{\theta}|z_{\rm s})$ and shear distortion $\gamma(\vec{\theta}|z_{\rm s})$ for $4096^2$ line-of-sight directions $\vec{\theta}$ on 64 regular angular grids and a sequence of $n_{\rm s}=31$ source redshifts $z_{{\rm s},i}$ between $z_{\rm s}=0$ and $z_{\rm s}=2$; we denote by $\Delta z_i=z_{{\rm s},i+1}-z_{{\rm s},i}$ the difference between neighbouring source redshifts. Each grid covers a solid angle of $\Omega=4\times4\,\rm deg^2$. For each grid, we then compute the average convergence for sources with redshift PDF $p_{\rm s}(z)$ by \begin{equation} \kappa(\vec{\theta})= \frac{\sum_{i=1}^{n_{\rm s}}p_{\rm s}(z_{{\rm s},i})\,\Delta z_i \,\kappa(\vec{\theta}|z_{{\rm s},i})} {\sum_{i=1}^{n_{\rm s}}p_{\rm s}(z_{{\rm s},i})\Delta z_i}\;, \end{equation} and the average shear $\gamma(\vec{\theta})$ from the sequence $\gamma(\vec{\theta}|z_{\rm s})$ accordingly. For $p_{\rm s}(z)$, we employ the estimated PDF of CFHTLenS sources selected through \mbox{$i^\prime<24.7$} and \mbox{$0.65\le z_{\rm p}<1.2$}, weighted by their shear-measurement error (SES13); see the bottom panel in Fig. \ref{fig:pofz}.
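Returning briefly to the fit of Eq. (\ref{eq:gramcharlier}), the maximum-likelihood estimate can be implemented along the following lines; this is only a sketch, in which the normalisation $\lambda$ is obtained by numerical quadrature and a hypothetical redshift sample stands in for the mock catalogues:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def H3(x): return x**3 - 3*x
def H4(x): return x**4 - 6*x**2 + 3

def p_unnorm(z, zbar, sig, s, kurt):
    x = (z - zbar) / sig
    return np.exp(-0.5 * x**2) * (1 + s/6 * H3(x) + kurt/24 * H4(x))

def neg_loglike(theta, z):
    zbar, sig, s, kurt = theta
    if sig <= 0:
        return np.inf
    norm, _ = quad(p_unnorm, 0.0, np.inf, args=(zbar, sig, s, kurt))
    p = p_unnorm(z, zbar, sig, s, kurt) / norm
    if norm <= 0 or np.any(p <= 0):        # Gram-Charlier can turn negative
        return np.inf
    return -np.sum(np.log(p))

rng = np.random.default_rng(42)            # hypothetical redshift sample
z = rng.normal(0.36, 0.09, size=5000)
z = z[z > 0.0]

res = minimize(neg_loglike, x0=[0.4, 0.1, 0.0, 0.0], args=(z,),
               method="Nelder-Mead")
print(res.x)                               # best-fitting (zbar, sigma_z, s, k)
\end{verbatim}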
The mean redshift of sources is \mbox{$\bar{z}\approx0.93$}. To assign source positions on the sky, we randomly pick a sample $\{\vec{\theta}_i:i=1\ldots n\}$ of uniformly distributed positions for each grid; the number of positions is $n=\Omega\,\bar{n}_{\rm s}$ for a number density of $\bar{n}_{\rm s}=5\,\rm arcmin^{-2}$ sources, which roughly equals the effective number density of sources in SES13. Depending on the type of our lensing analysis, we assign a source at $\vec{\theta}_i$ one of the following three values for the simulated sheared ellipticity $\epsilon_i$: (i) \mbox{$\epsilon_i=\gamma(\vec{\theta}_i)$} for sources without shape noise; (ii) \mbox{$\epsilon_i=A(\gamma(\vec{\theta}_i),\epsilon_{\rm s})$} for noisy sources with shear; and (iii) \mbox{$\epsilon_i=A\left(g_i,\epsilon_{\rm s}\right)$} for noisy sources with reduced shear \mbox{$g_i=\gamma(\vec{\theta}_i)/[1-\kappa(\vec{\theta}_i)]$}. We define here by \mbox{$A(x,y):=(x+y)\,(1+xy^\ast)^{-1}$} the conformal mapping of two complex numbers $x$ and $y$, and by $\epsilon_{\rm s}$ a random shape-noise drawn from a bivariate, truncated Gaussian PDF with zero mean, 1D dispersion $\sigma_\epsilon=0.3$, and an exclusion of values beyond \mbox{$|\epsilon_{\rm s}|\ge1$}. \subsection{Power spectra} We obtain the true spatial galaxy-galaxy, galaxy-matter, and matter-matter power spectra for all galaxy samples at a given simulation snapshot with Fast Fourier Transform (FFT) methods. For a chosen pair of tracers (i.e. simulation matter particles or galaxies from different samples) in a snapshot, we compute a series of raw power spectra by \lq{}chaining the power\rq{} \citep{Smith03}. We cover the whole simulation volume as well as smaller subvolumes (by a factor $4^3$ to $256^3$, into which the whole box is folded) by regular meshes of $512^3$ points (providing a spatial resolution from $\sim 1 \,h^{-1}\,\text{Mpc}$ for the coarsest mesh to $\sim5\,h^{-1}\,\text{kpc}$ for the finest mesh). We project the tracers onto these meshes using clouds-in-cells (CIC) assignment \citep{HockneyEastwood_book}. We FFT-transform the meshes, record their raw power spectra, apply a shot-noise correction (except for cross-spectra), a deconvolution to correct for the smoothing by the CIC assignment, and an iterative alias correction \citep[similar to what is described in][]{2005ApJ...620..559J}. From these power spectra, we discard small scales beyond half their Nyquist frequency as well as large scales that are already covered by a coarser mesh, and combine them into a single power spectrum covering a range of scales from modes $\sim 0.01\,h\,\text{Mpc}^{-1}$ to modes $\sim 100\,h\,\text{Mpc}^{-1}$. The composite power spectra are then used as input to estimate alias corrections for the partial power spectra from the individual meshes with different resolutions, and the process is repeated until convergence. From the resulting power spectra, we then compute the true biasing functions, Eq.~\Ref{eq:brdef}, which we compare to our lensing-based reconstructions in Sect.~\ref{sect:results}. \section{Projected biasing functions as observed with lensing techniques} \label{sect:projectedbias} The combination of suitable statistics for galaxy clustering, galaxy-galaxy lensing, and cosmic-shear correlations on the sky allows us to infer, without a concrete physical model, the $z$-averaged spatial biasing-functions $b(k)$ and $r(k)$ as projections $b_{\rm 2D}(\theta_{\rm ap})$ and $r_{\rm 2D}(\theta_{\rm ap})$ for varying angular scales $\theta_{\rm ap}$.
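As an aside, the mesh-based measurement described in the previous section can be condensed into the following sketch, which computes a shot-noise-corrected, CIC-deconvolved auto power spectrum on a single mesh; the folding into subvolumes and the iterative alias correction of the full pipeline are omitted here, and the particle positions are purely illustrative:

\begin{verbatim}
import numpy as np

def cic_assign(pos, ngrid, box):
    """Clouds-in-cells assignment of (N, 3) positions in [0, box) onto a mesh."""
    mesh = np.zeros((ngrid,) * 3)
    s = pos * ngrid / box
    i0 = np.floor(s - 0.5).astype(int)
    f = s - 0.5 - i0                        # weight of the right-hand neighbour
    for off in np.ndindex(2, 2, 2):
        w = np.ones(len(pos))
        idx = np.empty_like(i0)
        for ax in range(3):
            w *= f[:, ax] if off[ax] else 1.0 - f[:, ax]
            idx[:, ax] = (i0[:, ax] + off[ax]) % ngrid
        np.add.at(mesh, (idx[:, 0], idx[:, 1], idx[:, 2]), w)
    return mesh

def auto_power(pos, ngrid, box, nbins=16):
    """Shot-noise-corrected, CIC-deconvolved auto power spectrum (single mesh)."""
    delta = cic_assign(pos, ngrid, box)
    delta = delta / delta.mean() - 1.0
    dk = np.fft.rfftn(delta)
    n = np.fft.fftfreq(ngrid, d=1.0 / ngrid)            # integer wave numbers
    nz = np.fft.rfftfreq(ngrid, d=1.0 / ngrid)
    NX, NY, NZ = np.meshgrid(n, n, nz, indexing="ij")
    W = (np.sinc(NX / ngrid) * np.sinc(NY / ngrid) * np.sinc(NZ / ngrid))**2
    dk = dk / W                                         # deconvolve the CIC window
    kmag = 2 * np.pi / box * np.sqrt(NX**2 + NY**2 + NZ**2)
    pk3d = np.abs(dk)**2 * box**3 / ngrid**6
    shot = box**3 / len(pos)                            # Poisson shot noise
    edges = np.linspace(2 * np.pi / box, np.pi * ngrid / box, nbins + 1)
    ibin = np.digitize(kmag.ravel(), edges)
    pk = np.array([pk3d.ravel()[ibin == b].mean() for b in range(1, nbins + 1)])
    return 0.5 * (edges[1:] + edges[:-1]), pk - shot

rng = np.random.default_rng(7)                          # unclustered toy catalogue
kmid, pk = auto_power(rng.random((20000, 3)) * 100.0, 64, 100.0)
print(pk)                                               # scatters around zero
\end{verbatim}

For an unclustered Poisson sample the shot-noise-corrected spectrum scatters around zero, which provides a quick sanity check of the normalisation.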
Later on, we forward-fit templates of spatial biasing functions to these projected functions to perform a stable deprojection. We summarise here the relation between $(b(k), r(k))$ and the observable ratio-statistics $(b_{\rm 2D}(\theta_{\rm ap}),r_{\rm 2D}(\theta_{\rm ap}))$. We include corrections to the first-order Born approximation for galaxy-galaxy lensing and galaxy clustering, and corrections for the intrinsic alignment of sources. \subsection{Spatial biasing functions} We define galaxy bias in terms of two biasing functions $b(k)$ and $r(k)$ for a given spatial scale $2\pi\,k^{-1}$ or wave number $k$ in the following way. Let $\delta(\vec{x})$ in $\rho(\vec{x})=\overline{\rho}\,[1+\delta(\vec{x})]$ be the density fluctuations at position $\vec{x}$ of a random density field $\rho(\vec{x})$, where $\overline{\rho}$ denotes the mean density. A density field is either the matter density $\rho_{\rm m}(\vec{x})$ or the galaxy number density $n_{\rm g}(\vec{x})$ with density contrasts $\delta_{\rm m}(\vec{x})$ and $\delta_{\rm g}(\vec{x})$, respectively. We determine the fluctuation amplitude for a density mode $\vec{k}$ by the Fourier transform of $\delta(\vec{x})$, \begin{equation} \tilde{\delta}(\vec{k})= \int\d^3\!x\;\delta(\vec{x})\,\e^{-\i\vec{x}\cdot\vec{k}}\;. \end{equation} All information on the two-point correlations of $\tilde{\delta}(\vec{k})$ is contained in the power spectrum $P(k)$ defined through the second-order correlation function of modes, \begin{equation} \ave{\tilde{\delta}(\vec{k})\tilde{\delta}(\vec{k}^\prime)} =(2\pi)^3\delta_{\rm D}(\vec{k}+\vec{k}^\prime)P(k)\;, \end{equation} where $k=|\vec{k}|$ is the scalar wave-number and $\delta_{\rm D}(\vec{s})$ is the Dirac Delta distribution. Specifically, we utilise three kinds of power spectra, \begin{eqnarray} \ave{\tilde{\delta}_{\rm m}(\vec{k})\tilde{\delta}_{\rm m}(\vec{k}^\prime)} &=&(2\pi)^3\delta_{\rm D}(\vec{k}+\vec{k}^\prime)P_{\rm m}(k)\;; \\ \ave{\tilde{\delta}_{\rm m}(\vec{k})\tilde{\delta}_{\rm g}(\vec{k}^\prime)} &=&(2\pi)^3\delta_{\rm D}(\vec{k}+\vec{k}^\prime)P_{\rm gm}(k)\;; \\ \ave{\tilde{\delta}_{\rm g}(\vec{k})\tilde{\delta}_{\rm g}(\vec{k}^\prime)} &=&(2\pi)^3\delta_{\rm D}(\vec{k}+\vec{k}^\prime) \left(P_{\rm g}(k)+\bar{n}_{\rm g}^{-1}\right)\;, \end{eqnarray} namely the matter power spectrum $P_{\rm m}(k)$, the galaxy-matter cross-power spectrum $P_{\rm gm}(k)$, and the galaxy power-spectrum $P_{\rm g}(k)$. The latter subtracts the shot-noise $\bar{n}_{\rm g}^{-1}$ from the galaxy power spectrum by definition. In contrast to the smooth matter density, the galaxy number-density is subject to shot noise because it consists of a finite number of discrete points that make up the number density field. Traditionally, the definition of $P_{\rm g}(k)$ assumes a Poisson process for the shot noise \citep{peebles80}. The biasing functions (of the second order) express galaxy bias in terms of ratios of the foregoing power spectra, \begin{equation} \label{eq:brdef} b(k):= \sqrt{\frac{P_{\rm g}(k)}{P_{\rm m}(k)}}~;~ r(k):= \frac{P_{\rm gm}(k)}{\sqrt{P_{\rm g}(k)\,P_{\rm m}(k)}}\;.
\end{equation} Galaxies that sample the matter density by a Poisson process have \mbox{$b(k)=r(k)=1$} for all scales $k$ and are dubbed `unbiased'; for \mbox{$b(k)>1$}, we find that galaxies cluster more strongly than matter at scale $k$, and vice versa for \mbox{$b(k)<1$}; a decorrelation of \mbox{$r(k)\ne1$} indicates either stochastic bias, non-linear bias, a sampling process that is non-Poisson, or combinations of these cases \citep{1999ApJ...520...24D,2001MNRAS.321..439G}. \subsection{Aperture statistics and galaxy-bias normalisation} \label{sect:biassmoothing} The projected biasing functions $b(k)$ and $r(k)$ are observable by taking ratios of the (co-)variances of the aperture mass and aperture number count of galaxies \citep{1998A&A...334....1V,1998ApJ...498...43S}. To see this, let $\kappa_{\rm g}(\vec{\theta})=N_{\rm g}(\vec{\theta})/\overline{N}_{\rm g}-1$ be the density contrast of the number density of galaxies $N_{\rm g}(\vec{\theta})$ on the sky in the direction $\vec{\theta}$, and $\overline{N}_{\rm g}=\ave{N_{\rm g}(\vec{\theta})}$ is their mean number density. We define the aperture number count of $N_{\rm g}(\vec{\theta})$ for an angular scale $\theta_{\rm ap}$ at position $\vec{\theta}$ by \begin{equation} \label{eq:apcount} {\cal N}(\theta_{\rm ap};\vec{\theta})= \int\d^2\theta^\prime\;U(|\vec{\theta}^\prime|;\theta_{\rm ap})\, \kappa_{\rm g}(\vec{\theta}^\prime+\vec{\theta})\;, \end{equation} where \begin{equation} \label{eq:apfilter} U(\theta;\theta_{\rm ap})= \frac{1}{\theta_{\rm ap}^2}\,u(\theta\,\theta_{\rm ap}^{-1})~;~ u(x)=\frac{9}{\pi}\,(1-x^2)\,\left(\frac{1}{3}-x^2\right)\,{\rm H}(1-x) \end{equation} is the aperture filter of the density field, and ${\rm H}(x)$ is the Heaviside step function in our polynomial filter profile $u(x)$. The aperture filter is compensated, that is $\int_0^\infty\d x\;x\,u(x)=0$. Similarly for the (average) lensing convergence $\kappa(\vec{\theta})$ of sources in direction $\vec{\theta}$, the aperture mass is given by \begin{equation} \label{eq:apmass} M_{\rm ap}(\theta_{\rm ap};\vec{\theta})= \int\d^2\theta^\prime\;U(|\vec{\theta}^\prime|;\theta_{\rm ap})\, \kappa(\vec{\theta}^\prime+\vec{\theta})\;. \end{equation} The aperture statistics consider the variances $\ave{{\cal N}^2}(\theta_{\rm ap})$ and $\ave{M_{\rm ap}^2}(\theta_{\rm ap})$ of ${\cal N}(\theta_{\rm ap};\vec{\theta})$ and $M_{\rm ap}(\theta_{\rm ap};\vec{\theta})$, respectively, across the sky as well as their co-variance $\ave{{\cal N}M_{\rm ap}}(\theta_{\rm ap})$ at zero lag.
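As a practical aside, once the three power spectra have been measured (Sect. \ref{sect:data}), the biasing functions of Eq. (\ref{eq:brdef}) follow in a few lines; the sketch below assumes the raw galaxy spectrum still contains its Poisson shot noise $\bar{n}_{\rm g}^{-1}$, and the toy spectra are purely illustrative:

\begin{verbatim}
import numpy as np

def biasing_functions(P_g_raw, P_gm, P_m, nbar):
    """b(k) and r(k) from tabulated power spectra, Eq. (brdef)."""
    P_g = np.asarray(P_g_raw) - 1.0 / nbar    # subtract the Poisson shot noise
    return np.sqrt(P_g / P_m), P_gm / np.sqrt(P_g * P_m)

k = np.logspace(-2, 1, 50)                    # toy spectra for illustration
b, r = biasing_functions(2.0/k**2 + 1e-3, 1.4/k**2, 1.0/k**2, nbar=1e3)
\end{verbatim}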
From these observable aperture statistics, we obtain the galaxy-bias factor $b_{\rm 2D}(\theta_{\rm ap})$ and correlation factor $r_{\rm 2D}(\theta_{\rm ap})$ through the ratios \begin{eqnarray} \label{eq:b2dobs} b_{\rm 2D}(\theta_{\rm ap})&=& \sqrt{ \frac{\ave{{\cal N}^2}(\theta_{\rm ap})} {\ave{M_{\rm ap}^2}(\theta_{\rm ap})}}\times f_{\rm b}(\theta_{\rm ap})\;,\\ \label{eq:r2dobs} r_{\rm 2D}(\theta_{\rm ap})&=& \frac{\ave{{\cal N}M_{\rm ap}}(\theta_{\rm ap})} {\sqrt{\ave{{\cal N}^2}(\theta_{\rm ap})\,\ave{M_{\rm ap}^2}(\theta_{\rm ap})}} \times f_{\rm r}(\theta_{\rm ap})\;, \end{eqnarray} where \begin{eqnarray} \label{eq:calfb} f_{\rm b}(\theta_{\rm ap})&:=& \sqrt{\frac{\ave{M^2_{\rm ap}}_{\rm th}(\theta_{\rm ap})}{\ave{{\cal N}^2}_{\rm th}(\theta_{\rm ap};1)}}\;, \\ \label{eq:calfr} f_{\rm r}(\theta_{\rm ap})&:=& \frac {\sqrt{\ave{M^2_{\rm ap}}_{\rm th}(\theta_{\rm ap})\,\ave{{\cal N}^2}_{\rm th}(\theta_{\rm ap};1)}} {\ave{{\cal N}M_{\rm ap}}_{\rm th}(\theta_{\rm ap};1)} \end{eqnarray} normalise the statistics according to a fiducial cosmology, that is, the aperture statistics with subscript `th' as in $\ave{M^2_{\rm ap}}_{\rm th}(\theta_{\rm ap})$ denote the expected (co-)variance for a fiducial model. The normalisation is chosen such that we have \mbox{$b_{\rm 2D}(\theta_{\rm ap})=r_{\rm 2D}(\theta_{\rm ap})=1$} for unbiased galaxies given the distributions of lenses and sources with distance $\chi$ as in the survey, hence the `$(\theta_{\rm ap};1)$' in the arguments of the normalisation. The normalisation functions $f_{\rm r}$ and $f_{\rm b}$ are typically weakly varying with angular scale $\theta_{\rm ap}$ \citep{2002ApJ...577..604H}. In addition, they depend weakly on the fiducial matter power spectrum \mbox{$P_{\rm m}(k;z)$}; they are even invariant with respect to an amplitude change \mbox{$P_{\rm m}(k;z)\mapsto\upsilon\,P_{\rm m}(k;z)$} with some number \mbox{$\upsilon>0$}. We explore the dependence on the fiducial cosmology quantitatively in Sect. \ref{sect:calbias}. For this study, we assume that the distance distribution of lenses is sufficiently narrow, which means that the bias evolution in the lens sample is negligible. We therefore skip the argument $\chi$ in $b(k;\chi)$ and $r(k;\chi)$, and we use a $b(k)$ and $r(k)$ independent of $\chi$ for average biasing functions instead. The relation between $(b(k),r(k))$ and $(b_{\rm 2D}(\theta_{\rm ap}),r_{\rm 2D}(\theta_{\rm ap}))$ is discussed in the following. Let $p_{\rm d}(\chi)\,\d\chi$ and $p_{\rm s}(\chi)\,\d\chi$ be the probability to find a lens or source galaxy, respectively, at comoving distance $[\chi,\chi+\d\chi)$. The matter power spectrum at distance $\chi$ shall be $P_{\rm m}(k;\chi)$, and \mbox{$k_\ell^\chi:=(\ell+0.5)/f_K(\chi)$} is a shorthand for the transverse spatial wave-number $k$ at distance $\chi$ that corresponds to the angular wave-number $\ell$. The function $f_K(\chi)$ denotes the comoving angular-diameter distance in the given fiducial cosmological model. The additive constant 0.5 in $k_\ell^\chi$ applies a correction to the standard Limber approximation on the flat sky which gives more accurate results for large angular scales \citep{2017arXiv170205301K,2008PhRvD..78l3506L}.
According to theory, the aperture statistics are then \begin{eqnarray} \label{eq:n2b} \ave{{\cal N}^2}_{\rm th}(\theta_{\rm ap};b)&=& 2\pi\int\limits_0^\infty\d\ell\,\ell\,P_{\rm n}(\ell;b)\,\left[I(\ell\theta_{\rm ap})\right]^2\;,\\ \label{eq:nmapbr} \ave{{\cal N}M_{\rm ap}}_{\rm th}(\theta_{\rm ap};b,r)&=& 2\pi\int\limits_0^\infty\d\ell\,\ell\,P_{{\rm n}\kappa}(\ell;b,r)\,\left[I(\ell\theta_{\rm ap})\right]^2\;,\\ \label{eq:mapsq} \ave{M_{\rm ap}^2}_{\rm th}(\theta_{\rm ap})&=& 2\pi\int\limits_0^\infty\d\ell\,\ell\,P_\kappa(\ell)\,\left[I(\ell\theta_{\rm ap})\right]^2\;, \end{eqnarray} with the angular band-pass filter \begin{equation} I(x):=\int\limits_0^\infty\d s\;s\,u(s)\,{\rm J}_0(s\,x) =\frac{12}{\pi}\frac{{\rm J}_4(x)}{x^2}\;, \end{equation} the angular power spectrum of the galaxy clustering \begin{equation} \label{eq:pn} P_{\rm n}(\ell;b)= \int\limits_0^{\chi_{\rm h}}\frac{\d\chi\;p_{\rm d}^2(\chi)}{f_K^2(\chi)}\, b^2(k_\ell^\chi)\,P_{\rm m}\left(k_\ell^\chi;\chi\right)\;, \end{equation} the galaxy-convergence cross-power \begin{multline} \label{eq:pnkappa} P_{{\rm n}\kappa}(\ell;b,r)=\\ \frac{3H_0^2\,\Omega_{\rm m}}{2c^2} \int\limits_0^{\chi_{\rm h}} \frac{\d\chi\;p_{\rm d}(\chi)\,g_{\rm s}(\chi)} {a(\chi)\,f_K(\chi)}\, b(k_\ell^\chi)\,r(k_\ell^\chi)\,P_{\rm m}\left(k_\ell^\chi;\chi\right)\;, \end{multline} and the convergence power-spectrum \begin{equation} \label{eq:pkappa} P_\kappa(\ell)= \frac{9H_0^4\,\Omega_{\rm m}^2}{4c^4} \int\limits_0^{\chi_{\rm h}} \frac{\d\chi\,\;g_{\rm s}^2(\chi)}{a^2(\chi)}\, P_{\rm m}\left(k_\ell^\chi;\chi\right)\;, \end{equation} all in the Born and Limber approximation. In the integrals, we use the lensing kernel \begin{equation} g_{\rm s}(\chi)= \int\limits_\chi^{\chi_{\rm h}}\d\chi^\prime\;p_{\rm s}(\chi^\prime)\,\frac{f_K(\chi^\prime-\chi)}{f_K(\chi^\prime)}\;, \end{equation} the scale factor $a(\chi)$ at distance $\chi$, the maximum distance $\chi_{\rm h}$ of a source, and the $n$th-order Bessel function ${\rm J}_n(x)$ of the first kind. By $c$ we denote the vacuum speed of light. The power spectra and aperture statistics depend on specific biasing functions as indicated by the $b$ and $r$ in the arguments. For given biasing functions $b(k)$ and $r(k)$, we obtain the normalised galaxy bias inside apertures therefore through \begin{eqnarray} \label{eq:b2d} b_{\rm 2D}(\theta_{\rm ap};b)&=& \sqrt{ \frac{\ave{{\cal N}^2}_{\rm th}(\theta_{\rm ap};b)} {\ave{{\cal N}^2}_{\rm th}(\theta_{\rm ap};1)}}\;,\\ \label{eq:r2d} r_{\rm 2D}(\theta_{\rm ap};b,r)&=& \frac{1}{b_{\rm 2D}(\theta_{\rm ap};b)} \, \frac{\ave{{\cal N}M_{\rm ap}}_{\rm th}(\theta_{\rm ap};b,r)} {\ave{{\cal N}M_{\rm ap}}_{\rm th}(\theta_{\rm ap};1)}\;, \end{eqnarray} which can be compared to measurements of the Eqs. \Ref{eq:b2dobs} and \Ref{eq:r2dobs}. \subsection{Intrinsic alignment of sources} \label{sect:IIandGI} Recent studies of cosmic shear find evidence for an alignment of intrinsic source-ellipticities that contribute to the shear-correlation functions \citep{2017MNRAS.465.1454H,2016PhRvD..94b2001A,2015PhR...558....1T,2013MNRAS.432.2433H,2011A&A...527A..26J,2006MNRAS.367..611M}. These contributions produce systematic errors in the reconstruction of $b(k)$ and $r(k)$ if not included in their normalisation $f_{\rm b}$ and $f_{\rm r}$. Relevant are `II'-correlations between intrinsic shapes of sources in $\ave{M^2_{\rm ap}}$ and `GI'-correlations between shear and intrinsic shapes in both $\ave{{\cal N}M_{\rm ap}}$ and $\ave{M^2_{\rm ap}}$. 
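Before examining these contributions in detail, we note that the aperture-statistics integrals above are straightforward to evaluate numerically; the following sketch computes Eq. (\ref{eq:mapsq}) on a log-spaced grid of $\ell$ for a purely illustrative power-law convergence spectrum, not a fitted model:

\begin{verbatim}
import numpy as np
from scipy.special import jv

def I_filter(x):
    """Band-pass filter I(x) = 12 J_4(x) / (pi x^2)."""
    return 12.0 / np.pi * jv(4, x) / x**2

def map2(theta_ap, P_kappa, nell=4096):
    """<M_ap^2>(theta_ap) via Eq. (mapsq), for theta_ap in radians."""
    ell = np.logspace(0.0, 5.0, nell)
    integrand = 2.0 * np.pi * ell * P_kappa(ell) * I_filter(ell * theta_ap)**2
    return np.trapz(integrand, ell)

theta = np.radians(10.0 / 60.0)                       # 10 arcmin
toy_Pk = lambda ell: 1.0e-8 * (ell / 1.0e3)**-1.2     # toy convergence spectrum
print(map2(theta, toy_Pk))
\end{verbatim}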
The GI term in $\ave{{\cal N}M_{\rm ap}}$ can be suppressed by minimising the redshift overlap between lenses and sources. Likewise, the II term is suppressed by a broad redshift distribution of sources which, however, increases the GI amplitude. The amplitudes of II and GI also vary with galaxy type and luminosity of the sources \citep{2011A&A...527A..26J}. An intrinsic alignment (IA) of sources has an impact on the ratio statistics $b_{\rm 2D}(\theta_{\rm ap})$ and $r_{\rm 2D}(\theta_{\rm ap})$, Eqs. \Ref{eq:b2dobs} and \Ref{eq:r2dobs}, mainly through $\ave{M^2_{\rm ap}}(\theta_{\rm ap})$ if we separate sources and lenses in redshift. The impact can be mitigated by using an appropriate model for $\ave{M^2_{\rm ap}}_{\rm th}(\theta_{\rm ap})$ and $\ave{{\cal N}M_{\rm ap}}_{\rm th}(\theta_{\rm ap})$ in the normalisation of the measurements. For this study, we do not include II or GI correlations in our synthetic mock data but, instead, predict the amplitude of potential systematic errors when ignoring the intrinsic alignment for future applications in Sect. \ref{sect:calbias}. For a reasonable prediction of the GI and II contributions to $\ave{M^2_{\rm ap}}_{\rm th}(\theta_{\rm ap})$, we use the recent non-evolution model utilised in \cite{2017MNRAS.465.1454H}. This model is implemented by using \begin{equation} \label{eq:giii} P_\kappa^\prime(\ell)= P_\kappa(\ell)+P_\kappa^{\rm II}(\ell)+P_\kappa^{\rm GI}(\ell) \end{equation} instead of \Ref{eq:pkappa} in Eq. \Ref{eq:mapsq}. The new II and GI terms are given by \begin{eqnarray} P_\kappa^{\rm II}(\ell)&=& \int_0^{\chi_{\rm h}}\frac{\d\chi\;p^2_{\rm s}(\chi)}{f^2_K(\chi)} \,F_{\rm ia}^2(\chi)\,P_{\rm m}(k^\chi_\ell;\chi)\;; \\ P_\kappa^{\rm GI}(\ell)&=& \frac{3H_0^2\,\Omega_{\rm m}}{c^2}\, \int_0^{\chi_{\rm h}}\frac{\d\chi\;p_{\rm s}(\chi)\,g_{\rm s}(\chi)} {a(\chi)\,f_K(\chi)} \,F_{\rm ia}(\chi)\,P_{\rm m}(k^\chi_\ell;\chi)\;, \end{eqnarray} where \begin{multline} F_{\rm ia}(\chi):= -A_{\rm ia}\,{\rm C}_1\,\bar{\rho}_{\rm crit}\,\frac{\Omega_{\rm m}}{D_+(\chi)} \\ \approx-2.4\times10^{-2} \left(\frac{A_{\rm ia}}{3.0}\right)\, \left(\frac{\Omega_{\rm m}}{0.3}\right)\, \left(\frac{D_+(\chi)}{0.5}\right)^{-1} \end{multline} controls the correlation amplitude in the so-called `non-linear linear' model; see \citet{2004PhRvD..70f3526H}, \citet{2007NJPh....9..444B}, or \cite{2011A&A...527A..26J} for details. The factor $A_{\rm ia}$ scales the amplitude; it broadly falls within $A_{\rm ia}\in[-3,3]$ for recent cosmic-shear surveys and is consistent with \mbox{$A_{\rm ia}\approx2$} for sources in the Kilo-Degree Survey \citep{2017arXiv170706627J,2017MNRAS.465.1454H,2013MNRAS.432.2433H}. For the normalisation of $F_{\rm ia}(\chi)$, we use ${\rm C}_1=5\times10^{-14}\,h^{-2}\,{\rm M}_\odot^{-1}\,{\rm Mpc}^3$, and the linear structure-growth factor $D_+(\chi)$, normalised to unity for \mbox{$\chi=0$} \citep{peebles80}. By comparing $P_\kappa^{\rm II}(\ell)$ and $P_{\rm n}(\ell)$ in Eq. \Ref{eq:pn} we see that II contributions are essentially the clustering of sources on the sky (times a small factor). Likewise, $P_\kappa^{\rm GI}(\ell)$ is essentially the correlation between source positions and their shear on the sky, cf. Eq. \Ref{eq:pnkappa}. In this IA model, we assume a scale-independent galaxy bias for sources in the IA modelling since $F_{\rm ia}(\chi)$ does not depend on $k$. 
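The quoted amplitude of $F_{\rm ia}(\chi)$ can be checked by direct arithmetic; in this sketch, the numerical value of $\bar{\rho}_{\rm crit}$ in units matching ${\rm C}_1$ is our assumption:

\begin{verbatim}
C1 = 5.0e-14            # h^-2 Msun^-1 Mpc^3
rho_crit = 2.775e11     # h^2 Msun Mpc^-3 (assumed value)
A_ia, Omega_m, D_plus = 3.0, 0.3, 0.5

F_ia = -A_ia * C1 * rho_crit * Omega_m / D_plus
print(F_ia)   # about -2.5e-2, close to the quoted -2.4e-2 (rounding)
\end{verbatim}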
\begin{figure} \begin{center} \epsfig{file=fig2.ps,width=65mm,angle=-90} \end{center} \caption{\label{fig:GIandII} Levels of GI and II contributions to $\ave{M^2_{\rm ap}}$ for different values of $A_{\rm ia}$ (red and black lines labelled `II $\pm A_{\rm ia}$' and `GI $\pm A_{\rm ia}$'). The line `GG' is the theoretical $\ave{M^2_{\rm ap}}$ without GI and II terms; the data points are measurements on the mocks for sources with shear and shape noise (MS $\gamma$+n), reduced shear and shape noise (MS $g$+n), and shear without shape noise (MS $\gamma$). The error bars indicate jackknife errors inflated by a factor of five for clarity (Appendix \ref{sect:estimators}).} \end{figure} \begin{figure} \begin{center} \epsfig{file=fig3.ps,width=65mm,angle=-90} \end{center} \caption{\label{fig:nmapGI} Relative change of $\ave{{\cal N}M_{\rm ap}}$ for present GI correlations with different amplitudes $A_{\rm ia}$ as indicated by `GI $\pm A_{\rm ia}$'. The figure uses SM4 as fiducial lens-sample; the results for other samples are similar. The thin lines within $\pm2\%$ are for the low-$z$ sample, and the thick lines are for the high-$z$ sample.} \end{figure} In Fig. \ref{fig:GIandII}, we plot the predicted levels of II and GI terms in the observed $\ave{M_{\rm ap}^2}$ for varying values of $A_{\rm ia}$ as black and red lines for our MS cosmology and the $p_{\rm s}(z)$ in our mock survey. The corresponding value of $A_{\rm ia}$ is shown as number in the figure key. We use negative values of $A_{\rm ia}$ for GI to produce positive correlations for the plot; the corresponding predictions for $-A_{\rm ia}$ have the same amplitude as those for $A_{\rm ia}$ but with opposite sign. II terms, on the other hand, are invariant with respect to a sign flip of $A_{\rm ia}$. All curves in the plot use a matter power spectrum $P_{\rm m}(k;\chi)$ computed with \texttt{Halofit} \citep{Smith03} and the update in \citet{2012ApJ...761..152T}. For comparison, we plot as blue line GG the theoretical $\ave{M^2_{\rm ap}}$ without GI and II terms. For \mbox{$|A_{\rm ia}|\approx3$}, GI terms can reach levels up to 10 to 20 per cent of the shear-shear correlation signal for $\theta_{\rm ap}\gtrsim1^\prime$, whereas II terms are typically below 10 per cent. GI and II terms partly cancel each other for \mbox{$A_{\rm ia}>0$} so that the contamination is worse for negative $A_{\rm ia}$. \begin{figure*} \begin{center} \epsfig{file=fig4.ps,width=90mm,angle=-90} \vspace{-0.5cm} \end{center} \caption{\label{fig:higherorder} Relative errors in the aperture statistics due to magnification bias of the lenses. \emph{Left}: Errors for $\ave{{\cal N}M_{\rm ap}}$ where different line styles distinguish the galaxy samples. Larger errors for the same sample correspond to the high-$z$ bin, smaller errors to low-$z$. \emph{Right}: Percentage errors for $\ave{{\cal N}^2}$ where larger errors for the same line style are the high-$z$ bias.} \end{figure*} Moreover, we quantify the GI term in $\ave{{\cal N}M_{\rm ap}}$ by using in \Ref{eq:nmapbr} the modified power spectrum \begin{equation} P_{{\rm n}\kappa}^\prime(\ell;b,r)= P_{{\rm n}\kappa}(\ell;b,r)+P^{\rm GI}_{{\rm n}\kappa}(\ell;b,r) \end{equation} with \begin{equation} P^{\rm GI}_{{\rm n}\kappa}(\ell;b,r)= \int_0^{\chi_{\rm h}}\frac{\d\chi\;p_{\rm s}(\chi)\,p_{\rm d}(\chi)}{f^2_K(\chi)}\, b(k_\ell^\chi)\,r(k_\ell^\chi)\,F_{\rm ia}(\chi)\,P_{\rm m}(k_\ell^\chi;\chi)\;. 
\end{equation} This is the model in \cite{2017arXiv170706627J}, see their Equation (11), with an additional term $r(k_\ell^\chi)$ that accounts for a decorrelation of the lens galaxies. This GI model is essentially the relative clustering between lenses and unbiased sources on the sky and therefore vanishes in the absence of an overlap between the lens and source distributions, that is for \mbox{$\int\d\chi\;p_{\rm s}(\chi)\,p_{\rm d}(\chi)=0$}. In Fig. \ref{fig:nmapGI}, we quantify the relative change in $\ave{{\cal N}M_{\rm ap}}$ owing to the GI term for different values of $A_{\rm ia}$. Since the change is very similar for all galaxy samples in the same redshift bin, we plot only the results for SM4. The overlap between sources and lenses is only around 4 per cent for low-$z$ samples and, therefore, the change stays within 2 per cent for all angular scales considered here (SES13). On the other hand, for high-$z$ samples where we have roughly 14 per cent overlap between the distributions, the change can amount to almost 10 per cent for \mbox{$A_{\rm ia}\approx\pm2$} and could have a significant impact on the normalisation. \subsection{Higher-order corrections} \label{sect:higherorder} Corrections to the (first-order) Born approximation or for the magnification of the lenses cannot always be neglected as done in Eq. \Ref{eq:pnkappa} \citep[e.g.,][]{2008PhRvD..78l3517Z,2009A&A...499...31H,2009PhDThesisHartlap}. This uncorrected equation over-predicts the power spectrum $P_{\rm n\kappa}(\ell)$ by up to 10\% depending on the galaxy selection and the mean redshift of the lens sample; the effect is smaller in a flux-limited survey but also harder to predict as it depends on the luminosity function of the lenses. \citet{2009A&A...499...31H} tested this for the tangential shear around lenses by comparing \Ref{eq:pnkappa} to the full-ray-tracing results in the MS data, which account for contributions from lens-lens couplings and the magnification of the angular number density of lenses. For a volume-limited lens sample, \citet{2009PhDThesisHartlap}, H09 hereafter, derives the second-order correction (in our notation) \begin{equation} P_{{\rm n}\kappa}^{(2)}(\ell)= -\frac{9H_0^4\,\Omega_{\rm m}^2}{2c^4} \int_0^{\chi_{\rm h}}\d\chi\; \frac{g_{\rm s}(\chi)\,g_{\rm d}(\chi)}{a^2(\chi)}\, P_{\rm m}(k^\chi_\ell;\chi)\;, \end{equation} where \begin{equation} g_{\rm d}(\chi)= \int_\chi^{\chi_{\rm h}}\d\chi^\prime\;p_{\rm d}(\chi^\prime)\, \frac{f_K(\chi^\prime-\chi)}{f_K(\chi^\prime)}\;, \end{equation} for a more accurate power spectrum $P^\prime_{{\rm n}\kappa}(\ell)=P_{{\rm n}\kappa}(\ell)+P_{{\rm n}\kappa}^{(2)}(\ell)$ that correctly describes the correlations in the MS. Physically, this correction accounts for the magnification of the projected number density of lens galaxies by matter in the foreground. We find that the thereby corrected $\ave{{\cal N}M_{\rm ap}}_{\rm th}(\theta_{\rm ap};b,r)$ can differ from the uncorrected aperture statistic by up to a few per cent, see left-hand panel in Fig. \ref{fig:higherorder}. This directly affects the normalisation of $r_{\rm 2D}(\theta_{\rm ap})$: the measured, normalised correlation $r_{\rm 2D}(\theta_{\rm ap})$ would be systematically low. We obtain Fig. \ref{fig:higherorder} by comparing the uncorrected to the corrected $\ave{{\cal N}M_{\rm ap}}_{\rm th}$ for each of our lens-galaxy samples.
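The H09 correction is again a single Limber-type integral and is cheap to evaluate. The sketch below assumes a spatially flat universe, so that $f_K(\chi)=\chi$, with all inputs tabulated on a comoving-distance grid; the names are illustrative.
\begin{verbatim}
import numpy as np

# Sketch of the second-order (lens magnification) correction P_nkappa^(2)
# of H09, assuming a flat universe (f_K(chi) = chi). P_m(k, chi) is an
# assumed callable; p_s, p_d are normalised distance distributions.

H0_OVER_C = 1.0 / 2997.9   # H_0/c in h/Mpc

def g_weight(chi, p):
    """g(chi) = int_chi^chi_h dchi' p(chi') (chi' - chi)/chi'."""
    g = np.empty_like(chi)
    for i, c in enumerate(chi):
        m = chi >= c
        g[i] = np.trapz(p[m] * (chi[m] - c) / chi[m], chi[m])
    return g

def P_nkappa_2nd(ell, chi, a, p_s, p_d, P_m, Omega_m):
    g_s, g_d = g_weight(chi, p_s), g_weight(chi, p_d)
    k = ell / chi
    integrand = g_s * g_d / a**2 * P_m(k, chi)
    return -4.5 * H0_OVER_C**4 * Omega_m**2 * np.trapz(integrand, chi)
\end{verbatim}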
In accordance with H09, we find that the systematic error is not negligible for some lens samples, and we therefore include this correction by employing $P^\prime_{{\rm n}\kappa}(\ell)$ instead of $P_{{\rm n}\kappa}(\ell)$ in the normalisation $f_{\rm r}(\theta_{\rm ap})$ and in the prediction $r_{\rm 2D}(\theta_{\rm ap};b,r)$. This improves the accuracy of the lensing reconstruction of $r(k)$ by up to a few per cent, most notably for the blue high-$z$ sample, especially around \mbox{$k\approx1\,h\,\rm Mpc^{-1}$} which corresponds to \mbox{$\theta_{\rm ap}\approx10^\prime$}. Additional second-order terms for $P_{{\rm n}\kappa}(\ell)$ arise due to a flux limit of the survey (Equations 3.129 and 3.130 in H09), but they require a detailed model of the luminosity function for the lenses. We ignore these contributions here because our mock lens samples, selected in redshift bins of $\Delta z\approx0.2$ and for stellar masses greater than $5\times10^9\,{\rm M}_\odot$, are approximately volume limited because of the lower limit of stellar masses and the redshift binning \citep[see Sect. 4.1 in][which use our lens samples]{2017arXiv171009902S}. Similarly, H09 gives the second-order correction \begin{multline} P^{(2)}_{\rm n}(\ell;b,r)= \frac{9H_0^4\,\Omega_{\rm m}^2}{c^4} \int_0^{\chi_{\rm h}}\d\chi\; \frac{g_{\rm d}^2(\chi)}{a^2(\chi)}\, P_{\rm m}(k^\chi_\ell;\chi) \\ -\frac{6H_0^2\,\Omega_{\rm m}}{c^2} \int_0^{\chi_{\rm h}}\d\chi\; \frac{p_{\rm d}(\chi)\,g_{\rm d}(\chi)}{a(\chi)\,f_K(\chi)}\, b(k^\chi_\ell)\,r(k^\chi_\ell)\, P_{\rm m}(k^\chi_\ell;\chi) \end{multline} for $P_{\rm n}(\ell;b)$, in addition to more corrections for flux-limited surveys (Equations 3.140-3.143). We include $P_{\rm n}^{(2)}(\ell;b,r)$ by using $P_{\rm n}^\prime(\ell;b,r)=P_{\rm n}(\ell;b)+P_{\rm n}^{(2)}(\ell;b,r)$ instead of Eq. \Ref{eq:pn} in the following for $f_{\rm b}(\theta_{\rm ap})$ and $b_{\rm 2D}(\theta_{\rm ap};b,r)$, although this correction is typically below half a per cent here; see the right-hand panel in Fig. \ref{fig:higherorder}. \section{Model templates of biasing functions} \label{sect:spatialbias} Apart from the galaxy-bias normalisation, the ratio statistics $b_{\rm 2D}$ and $r_{\rm 2D}$ are model-free observables of the spatial biasing functions, averaged for the radial distribution of lenses. The deprojection of the ratio statistics into (an average) $b(k)$ and $r(k)$ is not straightforward due to the radial and transverse smoothing in the projection. Therefore, for a deprojection we construct a parametric family of templates that we forward-fit to the ratio statistics. In principle, this family could be any generic function, but we find that physical templates that can be extrapolated to scales unconstrained by the observations result in a more stable deprojection. To this end, we pick a template prescription that is motivated by the halo-model approach but with more freedom than is commonly devised \citep[][for a review]{2002PhR...372....1C}. Notably, we derive explicit expressions for $b(k)$ and $r(k)$ in a halo-model framework. \subsection{Separation of small and large scales} Before we outline the details of our version of a halo model, used to construct model templates, we point out that any halo model splits the power spectra $P_{\rm m}(k)$, $P_{\rm gm}(k)$, and $P_{\rm g}(k)$ into one- and two-halo terms, \begin{equation} P(k)=P^{\rm 1h}(k)+P^{\rm 2h}(k)\;.
\end{equation} The one-halo term $P^{\rm 1h}(k)$ dominates at small scales, quantifying the correlations between density fluctuations within the same halo, whereas the two-halo term $P^{\rm 2h}(k)$ dominates the power spectrum at large scales where correlations between fluctuations in different halos and the clustering of halos take over. We exploit this split to distinguish between galaxy bias on small scales (one-halo terms) and galaxy bias on large scales (two-halo terms), namely \begin{equation} b^{\rm 1h}(k):= \sqrt{\frac{P_{\rm g}^{\rm 1h}(k)}{P_{\rm m}^{\rm 1h}(k)}} ~;~ b^{\rm 2h}(k):= \sqrt{\frac{P_{\rm g}^{\rm 2h}(k)}{P_{\rm m}^{\rm 2h}(k)}} \end{equation} and \begin{equation} \label{eq:r12h} r^{\rm 1h}(k):= \frac{P_{\rm gm}^{\rm 1h}(k)}{\sqrt{P_{\rm g}^{\rm 1h}(k)\,P_{\rm m}^{\rm 1h}(k)}} ~;~ r^{\rm 2h}(k):= \frac{P_{\rm gm}^{\rm 2h}(k)}{\sqrt{P_{\rm g}^{\rm 2h}(k)\,P_{\rm m}^{\rm 2h}(k)}}\;, \end{equation} and we derive approximations for both regimes separately. We will find that the two-halo biasing functions are essentially constants, and the one-halo biasing functions are only determined by the relation between matter and galaxy density inside halos. \begin{figure} \begin{center} \epsfig{file=fig5.ps,width=75mm,angle=-90} \end{center} \caption{\label{fig:wm} The weight $W_{\rm m}(k)$ of the two-halo term in the matter-power spectrum for varying redshifts $z$.} \end{figure} To patch together both approximations of the biasing functions in the one-halo and two-halo regime, we then do the following. Based on Eq. \Ref{eq:brdef}, the function $b^2(k)$ is a weighted mean of $b^{\rm 1h}(k)$ and $b^{\rm 2h}(k)$: \begin{eqnarray} \nonumber b^2(k)&=& \frac{P^{\rm 1h}_{\rm g}(k)+P^{\rm 2h}_{\rm g}(k)} {P_{\rm m}(k)} \\ \nonumber&=& \frac{P^{\rm 1h}_{\rm m}(k)\,[b^{\rm 1h}(k)]^2} {P_{\rm m}(k)} + \frac{P^{\rm 2h}_{\rm m}(k)\,[b^{\rm 2h}(k)]^2 } {P_{\rm m}(k)} \\ \label{eq:bkgeneral} &=& \Big(1-W_{\rm m}(k)\Big)\,[b^{\rm 1h}(k)]^2+W_{\rm m}(k)\,[b^{\rm 2h}(k)]^2\;, \end{eqnarray} where the weight \begin{equation} \label{eq:Wm} W_{\rm m}(k):= \frac{P^{\rm 2h}_{\rm m}(k)}{P_{\rm m}(k)} \end{equation} is the amplitude of the two-halo matter power spectrum relative to the total matter power spectrum. Deep in the one-halo regime we have \mbox{$W_{\rm m}(k)\approx0$} but \mbox{$W_{\rm m}(k)\approx1$} in the two-halo regime. Since the two-halo biasing is approximately constant, the scale-dependence of galaxy bias is mainly a result of the galaxy physics inside halos and the shape of $W_{\rm m}(k)$. Once the weight $W_{\rm m}(k)$ is determined for a fiducial cosmology, we can use it for any model of $b^{\rm 1h}(k)$ and $b^{\rm 2h}(k)$ because it does not rely on galaxy physics. In principle, the weight $W_{\rm m}(k)$ could be accurately measured from a cosmological simulation by correlating only the matter density from different halos for $P^{\rm 2h}_{\rm m}(k)$, which is then normalised by the full power spectrum $P_{\rm m}(k)$ in the simulation. We, however, determine $W_{\rm m}(k)$ by computing the one-halo and two-halo term of $P_{\rm m}(k)$ with the setup of \citet{2009MNRAS.398..807S}. Our results for $W_{\rm m}(k)$ at different redshifts are plotted in Fig. \ref{fig:wm}. There we find that the transition between the one-halo and two-halo regime, \mbox{$W_{\rm m}\sim0.5$}, is at \mbox{$k\sim0.3\,h\,\rm Mpc^{-1}$} for \mbox{$z=0$}, whereas the transition point moves to \mbox{$k\sim1\,h\,\rm Mpc^{-1}$} for \mbox{$z\sim1$}.
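In code, this patching is straightforward once the one- and two-halo pieces and $W_{\rm m}(k)$ are tabulated. The following minimal Python sketch implements Eq. \Ref{eq:bkgeneral} together with the corresponding expansion for $r(k)$ given below (Eq. \Ref{eq:rkgeneral}); all arguments are arrays on a common $k$-grid.
\begin{verbatim}
import numpy as np

# Assemble the full biasing functions from their one- and two-halo
# pieces, Eqs. (bkgeneral) and (rkgeneral), given the weight W_m(k).

def patch_biasing(W_m, b_1h, b_2h, r_1h, r_2h):
    b = np.sqrt((1.0 - W_m) * b_1h**2 + W_m * b_2h**2)
    W_g = (b_2h / b)**2 * W_m          # galaxy two-halo weight W_g(k)
    r = (np.sqrt((1.0 - W_m) * (1.0 - W_g)) * r_1h
         + np.sqrt(W_m * W_g) * r_2h)
    return b, r
\end{verbatim}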
Similar to $b(k)$, we can expand the correlation function $r(k)$ in terms of its one-halo and two-halo biasing functions. To this end, let \begin{equation} W_{\rm g}(k):= \frac{P^{\rm 2h}_{\rm g}(k)}{P_{\rm g}(k)} = \left(\frac{b^{\rm 2h}(k)}{b(k)}\right)^2\,W_{\rm m}(k) \end{equation} be a weight by analogy to $W_{\rm m}(k)$. For unbiased galaxies, that is \mbox{$b^{\rm 2h}(k)=b(k)=1$}, we simply have \mbox{$W_{\rm g}(k)=W_{\rm m}(k)$}. Using the definition of $r(k)$ in Eq. \Ref{eq:brdef} and Eq. \Ref{eq:r12h}, we generally find \begin{multline} \label{eq:rkgeneral} r(k)=\\ \sqrt{(1-W_{\rm m}(k))(1-W_{\rm g}(k))}\,r^{\rm 1h}(k)+ \sqrt{W_{\rm m}(k)W_{\rm g}(k)}\, r^{\rm 2h}(k)\;. \end{multline} \subsection{Halo-model definitions} For approximations of the biasing functions in the one- and two-halo regime, we apply the formalism in \citet{2000MNRAS.318..203S} and briefly summarise it here. All halo-related quantities depend on redshift. In the model fits later on, we evaluate these at the mean redshift of the lens galaxies. We shall denote by $n(m)\,\d m$ the (comoving) number density of halos within the halo-mass range \mbox{$[m,m+\d m)$}; \mbox{$\ave{N|m}$} is the mean number of galaxies inside a halo of mass $m$; \mbox{$\ave{N(N-1)|m}$} is the mean number of galaxy pairs inside a halo of mass $m$. Let $u(r,m)$ be the radial profile of either the matter density or the galaxy density inside a halo. Also let \begin{equation} \tilde{u}(k,m)= \frac{\int_0^\infty\d s\;sk^{-1}\,u(s,m) \sin{(ks)}} {\int_0^\infty\d s\;s^2\,u(s,m)} \end{equation} be its normalised Fourier transform. Owing to this normalisation, profiles obey \mbox{$\tilde{u}(k,m)=1$} at \mbox{$k=0$}. To ensure a well-defined normalisation of halos, we truncate them at their virial radius $r_{\rm vir}$, which we define by the over-density $\Omega_{\rm m}\,\bar{\rho}_{\rm crit}\,\Delta_{\rm vir}(z)$ within the distance $r_{\rm vir}$ from the halo centre, with $\Delta_{\rm vir}(z)$ as in \cite{2001MNRAS.321..559B}. Furthermore, the mean matter and galaxy number density (comoving) are \begin{equation} \label{eq:rhong} \bar{\rho}_{\rm m}=\int\d m\;n(m)\,m ~;~ \overline{n}_{\rm g}=\int\d m\;n(m)\,\ave{N|m}\;. \end{equation} The one-halo terms of the galaxy power spectrum $P_{\rm g}(k)$, the matter power-spectrum $P_{\rm m}(k)$, and the galaxy-matter cross-power spectrum $P_{\rm gm}(k)$ are \begin{eqnarray} \label{eq:pg} P^{\rm 1h}_{\rm g}(k)&=& \int_0^\infty\frac{\d m\;n(m)}{\overline{n}_{\rm g}^2}\, \tilde{u}^{2p}_{\rm g}(k,m)\, \ave{N(N-1)|m}\;;\\ \label{eq:pm} P^{\rm 1h}_{\rm m}(k)&=& \int_0^\infty\frac{\d m\;n(m)\,m^2}{\bar{\rho}_{\rm m}^2}\, \tilde{u}^2_{\rm m}(k,m)\;;\\ \label{eq:pgm} P^{\rm 1h}_{\rm gm}(k)&=& \int_0^\infty\frac{\d m\;n(m)\,m}{\bar{\rho}_{\rm m}\overline{n}_{\rm g}}\, \tilde{u}_{\rm m}(k,m)\,\tilde{u}^q_{\rm g}(k,m)\,\ave{N|m}\;. \end{eqnarray} In these equations, the exponents $p$ and $q$ are modifiers of the statistics for central galaxies, which are accounted for in the following simplistic way: Central galaxies are by definition at the halo centre $r=0$; one galaxy inside a halo is always a central galaxy; their impact on galaxy power spectra is assumed to be significant only for halos that contain few galaxies.
Depending on whether a halo contains few galaxies or not, the factors $(p,q)$ switch on or off statistics dominated by central galaxies through \begin{eqnarray} \label{eq:pq} p&=&\left\{ \begin{array}{ll} 1 & ,\,{\rm for~} \ave{N(N-1)|m}>1\;\\ 1/2& ,\,{\rm otherwise} \end{array} \right.\;; \\ \nonumber q&=&\left\{ \begin{array}{ll} 1 & ,\,{\rm for~} \ave{N|m}>1\;\\ 0 & ,\,{\rm otherwise} \end{array} \right.\;. \end{eqnarray} We note that $p$ and $q$ are functions of the halo mass $m$. Later in Sect. \ref{sect:centrals}, we consider also more general models where there can be a fraction of halos that contains only satellite galaxies. We achieve this by mixing \Ref{eq:pg}-\Ref{eq:pgm} with power spectra in a pure-satellite scenario, that is, a scenario where always $p\equiv q\equiv1$. We now turn to the two-halo terms in this halo model. We approximate the clustering power of centres of halos with mass $m$ by $b_{\rm h}^2(m)\,P_{\rm lin}(k)$, where $P_{\rm lin}(k)$ denotes the linear matter power spectrum, and $b_{\rm h}(m)$ is the halo bias-factor on linear scales; the clustering of halos is thus linear and deterministic in this description. Likewise, this model approximates the cross-correlation power-spectrum of halos with the masses $m_1$ and $m_2$ by \mbox{$b_{\rm h}(m_1)\,b_{\rm h}(m_2)\,P_{\rm lin}(k)$}. The resulting two-halo terms are then \begin{eqnarray} \label{eq:p2halog} P^{\rm 2h}_{\rm g}(k)&=& \frac{P_{\rm lin}(k)}{\overline{n}_{\rm g}^2} \left(\int_0^\infty\d m\;n(m)\ave{N|m}\,b_{\rm h}(m)\tilde{u}_{\rm g}(k,m)\right)^2\;;\\ \label{eq:p2halom} P^{\rm 2h}_{\rm m}(k)&=& \frac{P_{\rm lin}(k)}{\overline{\rho}_{\rm m}^2} \left( \int_0^\infty\d m\;n(m)m\,b_{\rm h}(m) \tilde{u}_{\rm m}(k,m) \right)^2\;;\\ \nonumber P^{\rm 2h}_{\rm gm}(k)&=& \frac{P_{\rm lin}(k)}{\overline{n}_{\rm g}\,\overline{\rho}_{\rm m}} \int_0^\infty\d m\;n(m)\ave{N|m}\,b_{\rm h}(m)\tilde{u}_{\rm g}(k,m)\\ &&\label{eq:p2halomg} \times\int_0^\infty\d m\;n(m)m\,b_{\rm h}(m)\tilde{u}_{\rm m}(k,m)\;. \end{eqnarray} The two-halo terms ignore power from central galaxies because it is negligible in the two-halo regime. \subsection{A toy model for the small-scale galaxy bias} \label{sect:toymodel} We first consider an insightful toy model of $b(k)$ and $r(k)$ at small scales. In this model, both the matter and the galaxy distribution shall be completely dominated by halos of mass $m_0$, such that we find an effective halo-mass function \mbox{$n(m)\propto\delta_{\rm D}(m-m_0)$}; its normalisation is irrelevant for the galaxy bias. In addition, the halos of the toy model shall not cluster so that the two-halo terms of the power spectra vanish entirely. The toy model has practical relevance in what follows because the one-halo biasing functions that we derive afterwards are weighted averages of toy models with different $m_0$. Most of their features, albeit not all, can therefore already be understood here, and the toy model already elucidates biasing functions on small scales. Let us define the variance $\sigma_N^2(m_0)=\ave{N^2|m_0}-\ave{N|m_0}^2$ of the halo-occupation distribution (HOD) in excess of a Poisson variance $\ave{N|m_0}$ by \begin{equation} \label{eq:dsigma} \Delta\sigma^2_N(m_0)=\sigma^2_N(m_0)-\ave{N|m_0}\;. \end{equation} If the model galaxies obey Poisson statistics, they have \mbox{$\Delta\sigma_N^2(m_0)=0$}. We can now write the mean number of galaxy pairs as \begin{multline} \ave{N(N-1)|m_0}=\ave{N^2|m_0}-\ave{N|m_0}\\ =\ave{N|m_0}^2 \left(1+\frac{\Delta\sigma_N^2(m_0)}{\ave{N|m_0}^2}\right)\;.
\end{multline} By using the Eqs. \Ref{eq:pg}--\Ref{eq:pgm} with $n(m)\propto\delta_{\rm D}(m-m_0)$, the correlation factor reads \begin{multline} \label{eq:rktm} R(k,m_0)=\\ \frac{\tilde{u}^{q-p}_{\rm g}(k,m_0)\,\ave{N|m_0}}{\sqrt{\ave{N(N-1)|m_0}}} = \tilde{u}^{q-p}_{\rm g}(k,m_0) \left(1+\frac{\Delta\sigma_N^2(m_0)}{\ave{N|m_0}^2}\right)^{-1/2}\;, \end{multline} and the bias factor is \begin{multline} \label{eq:bktm} B(k,m_0)=\\ \frac{\tilde{u}^p_{\rm g}(k,m_0)\,\sqrt{\ave{N(N-1)|m_0}}} {\tilde{u}_{\rm m}(k,m_0)\,\ave{N|m_0}} = \frac{\tilde{u}^q_{\rm g}(k,m_0)}{\tilde{u}_{\rm m}(k,m_0)} \frac{1}{R(k,m_0)}\;. \end{multline} To avoid ambiguities in the following, we use capital letters for the biasing functions in the toy model. We dub galaxies `faithful tracers' of the matter density if they have both (i) \mbox{$\tilde{u}_{\rm g}(k,m)=\tilde{u}_{\rm m}(k,m)$} and (ii) no central galaxies (\mbox{$p=q=1$}). Halos with relatively small numbers of galaxies, that is \mbox{$\ave{N|m_0},\ave{N(N-1)|m_0}\lesssim1$}, are called `low-occupancy halos' in the following. This toy model then illustrates the following points. \begin{itemize} \item Owing to galaxy discreteness, faithful tracers are biased if they do not obey Poisson statistics. Namely, for a sub-Poisson variance, \mbox{$\Delta\sigma_N^2(m_0)<0$}, they produce opposite trends \mbox{$R(k,m_0)>1$} and \mbox{$B(k,m_0)<1$} with $k$, and vice versa for a super-Poisson sampling, but generally we find the relation $R(k,m_0)\times B(k,m_0)=1$. \item Nevertheless, faithful tracers obey \mbox{$B(k,m_0),R(k,m_0)\approx1$} if the excess variance becomes negligible, that is if \mbox{$\Delta\sigma_N^2(m_0)\ll\ave{N|m_0}^2$}. The discreteness of galaxies therefore becomes relevant only in low-occupancy halos. \item A value of \mbox{$R(k,m_0)>1$} occurs once central galaxies are present (\mbox{$p,q<1$}). As a central galaxy is \emph{always} placed at the centre, central galaxies produce a non-Poisson sampling of the profile $u_{\rm m}(r,m_0)$. In contrast to faithful galaxies with a non-Poisson HOD, we then find agreeing trends with scale $k$ for $R(k,m_0)$ and $B(k,m_0)$ if \mbox{$\Delta\sigma_{\rm N}^2(m_0)=0$}. Again, this effect is strong only in low-occupancy halos. \item The biasing functions in the toy model are only scale-dependent if galaxies are not faithful tracers. The bias function $B(k,m_0)$ varies with $k$ if either \mbox{$\tilde{u}_{\rm m}(k,m_0)\ne\tilde{u}_{\rm g}(k,m_0)$} or for central galaxies (\mbox{$p\ne1$}). The correlation function $R(k,m_0)$ is scale-dependent only for central galaxies, that is \mbox{$p-q\ne0$}, and then obeys \mbox{$R(k,m_0)\propto\tilde{u}_{\rm g}^{-1/2}(k,m_0)$}. Variations with $k$ become small for both functions, however, if \mbox{$\tilde{u}_{\rm m}(k,m_0),\tilde{u}_{\rm g}(k,m_0)\approx1$}, which is the case on scales larger than the size $r_{\rm vir}$ of a halo. \end{itemize} We stress again that a counter-intuitive \mbox{$r(k)>1$} is a result of the definition of $P_{\rm g}(k)$ relative to Poisson shot-noise and the actual presence of non-Poisson galaxy noise. One may wonder here if \mbox{$r>1$} is also allowed for biasing parameters defined in terms of spatial correlations rather than the power spectra. That this is indeed the case is shown in Appendix \ref{ap:realspacecorr} for completeness.
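The toy-model factors \Ref{eq:rktm} and \Ref{eq:bktm} are simple enough to be verified numerically. The following is a minimal sketch with illustrative input values:
\begin{verbatim}
import numpy as np

# Toy-model biasing functions, Eqs. (rktm) and (bktm). u_g, u_m are the
# normalised Fourier profiles at wave number k, N is <N|m_0>, dsig2 the
# excess variance Delta sigma_N^2(m_0), and (p, q) follow Eq. (pq).

def toy_R(u_g, N, dsig2, p, q):
    return u_g**(q - p) / np.sqrt(1.0 + dsig2 / N**2)

def toy_B(u_g, u_m, N, dsig2, p, q):
    return u_g**q / u_m / toy_R(u_g, N, dsig2, p, q)

# Faithful tracers (p = q = 1, u_g = u_m) with a sub-Poisson HOD variance:
R = toy_R(0.7, 0.8, -0.4, 1, 1)          # R > 1
B = toy_B(0.7, 0.7, 0.8, -0.4, 1, 1)     # B < 1
assert np.isclose(R * B, 1.0)            # R x B = 1, as stated above
\end{verbatim}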
\subsection{Galaxy biasing at small scales} Compared to the foregoing toy model, no single halo mass scale dominates the galaxy bias at any wave number $k$ for realistic galaxies. Nevertheless, we can express the realistic biasing functions $b^{\rm 1h}(k)$ and $r^{\rm 1h}(k)$ in the one-halo regime as weighted averages of the toy model $B(k,m)$ and $R(k,m)$ with modifications. To this end, we introduce by \begin{equation} \label{eq:flm} b(m)= \frac{\ave{N|m}}{m} \frac{\overline{\rho}_{\rm m}}{\overline{n}_{\rm g}} \end{equation} the `mean biasing function', which is the mean number of halo galaxies $\ave{N|m}$ per halo mass $m$ in units of the cosmic average ${\overline{n}_{\rm g}}/{\overline{\rho}_{\rm m}}$ \citep{2012MNRAS.426..566C}. If galaxy numbers scale linearly with halo mass, that is \mbox{$\ave{N|m}\propto m$}, we find a mean biasing function of \mbox{$b(m)=1$}, while halo masses devoid of galaxies have \mbox{$b(m)=0$}. For convenience, we make use of \mbox{$\ave{N|m}\propto m\,b(m)$} instead of $\ave{N|m}$ in the following equations because we typically find \mbox{$\ave{N|m}\propto m^\beta$} with \mbox{$\beta\approx1$}: $b(m)$ is therefore usually not too different from unity. Using the Eqs. \Ref{eq:pg} and \Ref{eq:pm} we then find \begin{equation} \label{eq:bk} [b^{\rm 1h}(k)]^2= \frac{P^{\rm 1h}_{\rm g}(k)}{P^{\rm 1h}_{\rm m}(k)} = \int_0^\infty\d m\;w_{20}^{\rm 1h}(k,m)\, b^2(m)\,B^2(k,m) \end{equation} with $w_{20}^{\rm 1h}(k,m)$ being one case in a family of (one-halo) weights, \begin{equation} w_{ij}^{\rm 1h}(k,m):= \frac{n(m)\,m^2\,\tilde{u}_{\rm m}^i(k,m)\,[b(m)\,\tilde{u}_{\rm g}(k,m)]^j} {\int_0^\infty\!\d m\;n(m)\,m^2\,\tilde{u}_{\rm m}^i(k,m)\,[b(m)\,\tilde{u}_{\rm g}(k,m)]^j}\;. \end{equation} This family and the following weights $w(k,m)$ are normalised, which means that \mbox{$\int\d m\,w(k,m)=1$}. The introduction of these weight functions underlines that the biasing functions are essentially weighted averages across the halo-mass spectrum; for example, $[b^{\rm 1h}(k)]^2$ is the weighted average of \mbox{$b^2(m)\,B^2(k,m)$}. The effect of $w_{20}^{\rm 1h}(k,m)$ is to down-weight large halo masses in the bias function because $w_{20}^{\rm 1h}(k,m_1)/w_{20}^{\rm 1h}(k,m_2)\propto\tilde{u}_{\rm m}^2(k,m_1)/\tilde{u}_{\rm m}^2(k,m_2)$ decreases with $m_1$ for a fixed $m_2<m_1$ and $k$. Additionally, the relative weight of a halo with mass $m$ decreases towards larger $k$ because $\tilde{u}_{\rm m}(k,m)$ tends to decrease with $k$. As a result, at a given scale $k$ only halos below a typical mass essentially contribute to the biasing functions \citep{2000MNRAS.318..203S}. We move on to the correlation factor $r^{\rm 1h}(k)$ in the one-halo regime. Using the Eqs. \Ref{eq:pg}--\Ref{eq:pgm} and the relations \begin{eqnarray} \ave{N(N-1)|m}&=&R^{-2}(k,m)\,\tilde{u}_{\rm g}^{2q-2p}(k,m)\,\ave{N|m}\;;\\ \tilde{\rho}_{\rm m}(k,m)&=&m\,\tilde{u}_{\rm m}(k,m)\;;\\ \tilde{\rho}_{\rm g}(k,m)&=&\ave{N|m}\,\tilde{u}_{\rm g}(k,m) \end{eqnarray} we write \begin{equation} \label{eq:rk} r^{\rm 1h}(k)= \frac{P^{\rm 1h}_{\rm gm}(k)}{\sqrt{P^{\rm 1h}_{\rm g}(k)P^{\rm 1h}_{\rm m}(k)}} =:\zeta_{\rm sat}(k)\,\zeta_{\rm cen}(k)\,\zeta_{\Delta\sigma}(k) \end{equation} as a product of three separate factors \begin{eqnarray} \label{eq:rkpois} \zeta_{\rm sat}(k)\!\!\!\!&:=& \!\!\!\!
\frac{\int_0^\infty\d m\;n(m)\, \tilde{\rho}_{\rm m}(k,m)\,\tilde{\rho}_{\rm g}(k,m)} { \left(\int_0^\infty\d m\;n(m)\,\tilde{\rho}^2_{\rm g}(k,m)\, \int_0^\infty\d m\;n(m)\,\tilde{\rho}^2_{\rm m}(k,m) \right)^{1/2} }\;; \\ \zeta_{\rm cen}(k)\!\!\!\!&:=& \!\!\!\!\int_0^\infty\d m\;w_{11}^{\rm 1h}(k,m)\,\tilde{u}^{q-1}_{\rm g}(k,m)\;; \\ \label{eq:rkul} \zeta_{\Delta\sigma}(k)\!\!\!\!&:=&\!\!\!\! \left(\int_0^\infty\d m\;w_{02}^{\rm 1h}(k,m)\, \tilde{u}_{\rm g}^{2q-2}(k,m)\,R^{-2}(k,m)\right)^{-1/2} \end{eqnarray} with the following meaning. \begin{itemize} \item The first factor $\zeta_{\rm sat}(k)$ quantifies, at spatial scale $k$, the correlation between the radial profiles of the matter density $\tilde{\rho}_{\rm m}(k,m)$ and the (average) number density of satellite galaxies $\tilde{\rho}_{\rm g}(k,m)$ across the halo mass-spectrum $n(m)$. As upper bound we always have \mbox{$|\zeta_{\rm sat}(k)|\le1$} because of the Cauchy-Schwarz inequality when applied to the numerator of Eq. \Ref{eq:rkpois}. Thus $\zeta_{\rm sat}(k)$ probably reflects best what we intuitively understand by a correlation factor between galaxies and matter densities inside a halo. Since it only involves the average satellite profile, the satellite shot-noise owing to a HOD variance is irrelevant at this point. The next two factors can be seen as corrections to $\zeta_{\rm sat}(k)$ owing to central galaxies or a non-Poisson HOD variance. \item The second factor $\zeta_{\rm cen}(k)$ deviates from unity only through low-occupancy halos with central galaxies ($q\ne1$). It has the lower limit $\zeta_{\rm cen}(k)\ge1$ because of \mbox{$\tilde{u}_{\rm g}(k,m)\le1$} and hence \mbox{$\tilde{u}_{\rm g}^{q-1}(k,m)\ge1$}. This correction factor can therefore at most increase the correlation $r^{1\rm h}(k)$. \item The third factor $\zeta_{\Delta\sigma}(k)$ is the only one that is sensitive to an excess variance \mbox{$\Delta\sigma_{\rm N}^2(m)\ne0$} of the HOD, namely through $R(k,m)$. In the absence of central galaxies, that means for \mbox{$p\equiv q\equiv1$}, $\zeta^2_{\Delta\sigma}(k)$ is the (weighted) harmonic mean of $R^2(k,m)$, or the harmonic mean of the reduced $[\tilde{u}_{\rm g}(k,m)\,R(k,m)]^2\le R^2(k,m)$ otherwise. \end{itemize} As a sanity check, we note the recovery of the toy model by setting \mbox{$n(m)\propto\delta_{\rm D}(m-m_0)$} in \Ref{eq:bk} and \Ref{eq:rk}. In contrast to the toy model, the templates $b^{\rm 1h}(k)$ and $r^{\rm 1h}(k)$ can be scale-dependent even if $B(k,m)$ and $R(k,m)$ are constants. This scale dependence can be produced by a varying $w_{20}^{\rm 1h}(k,m)$ or $\zeta_{\rm sat}(k)$. \subsection{Galaxy biasing at large scales} \label{sect:largescalebias} From the two-halo terms \Ref{eq:p2halog}--\Ref{eq:p2halomg}, we can immediately derive the two-halo biasing functions. The bias factor is \begin{equation} b^{\rm 2h}(k) = \sqrt{\frac{P^{\rm 2h}_{\rm g}(k)}{P^{\rm 2h}_{\rm m}(k)}} = \frac{\int_0^{\infty}\d m\;w_{10}^{\rm 2h}(m)\,b_{\rm h}(m)\,\tilde{u}_{\rm g}(k,m)} {\int_0^{\infty}\d m\;w_{01}^{\rm 2h}(m)\,b_{\rm h}(m)\,\tilde{u}_{\rm m}(k,m)}\;, \end{equation} where we have introduced into the integrals the normalised (two-halo) weights \begin{equation} w_{ij}^{\rm 2h}(m):= \frac{n(m)\,[m\,b(m)]^i\,m^j}{\int_0^\infty\d m\;n(m)\,[m\,b(m)]^i\,m^j} \;. \end{equation} We additionally approximate \mbox{$\tilde{u}(k,m)\approx1$} for the two-halo regime.
This is a reasonable approximation because virialised structures are typically not larger than \mbox{$\sim10\,h^{-1}\,\rm Mpc$} and hence exhibit \mbox{$\tilde{u}(k,m)\approx1$} for \mbox{$k\ll0.5\,h\,\rm Mpc^{-1}$}. Therefore, we find an essentially constant bias function at large scales, \begin{equation} \label{eq:lsbias} b^{\rm 2h}(k)\approx \int_0^{\infty}\d m\;w_{10}^{\rm 2h}(m)\,b_{\rm h}(m)=:b_{\rm ls}\;. \end{equation} We have used here \mbox{$\int\d m\;w_{01}^{\rm 2h}(m)\,b_{\rm h}(m)=1$} which follows from the constraint \mbox{$P_{\rm m}(k)\to P_{\rm lin}(k)$} for \mbox{$k\to0$} and the Eq. \Ref{eq:p2halom}. To have more template flexibility, we leave $b_{\rm ls}$ as a free parameter and resort to Eq. \Ref{eq:lsbias} only if no large-scale information is available from observations. The two-halo correlation-function at large scales is exactly \begin{equation} \label{eq:lscorr} r^{\rm 2h}(k)= \frac{P^{\rm 2h}_{\rm gm}(k)}{\sqrt{P^{\rm 2h}_{\rm g}(k)P^{\rm 2h}_{\rm m}(k)}}=1=r_{\rm ls} \end{equation} due to \mbox{$P_{\rm h}(k;m_1,m_2)=b_{\rm h}(m_1)\,b_{\rm h}(m_2)\,P_{\rm lin}(k)$} for the assumed halo clustering. Evidently, the large-scale matter-galaxy correlation is fixed to \mbox{$r_{\rm ls}=1$}. The correlation is necessarily high because the model galaxies are always inside halos so that galaxies closely follow the matter distribution at large scales. We note that \mbox{$r_{\rm ls}\ne1$} is physically conceivable although it is usually excluded in halo models \citep{1998ApJ...500L..79T}. To test for an actually high correlation \mbox{$r_{\rm ls}=1$} in real data, we may use $r_{\rm ls}$ as free parameter in the templates. \subsection{Fraction of central galaxies} \label{sect:centrals} Up to this point, we have assumed either one central galaxy for every halo that hosts galaxies or pure samples of satellite galaxies, meaning \mbox{$p\equiv q\equiv1$}. In reality, where we select sub-populations of galaxies, not every sub-sample automatically provides a central galaxy in every halo; a central galaxy could belong to another galaxy population, for instance. For more template flexibility, we thus assume that only a fraction $f_{\rm cen}$ of \emph{halos} can have central galaxies from the selected galaxy population; the other fraction $1-f_{\rm cen}$ of halos has either only satellites or central galaxies from another population. Nevertheless, both halo fractions shall contain $\ave{N|m}$ halo galaxies on average. Importantly, $f_{\rm cen}$ shall be independent of halo mass. This is not a strong restriction because the impact of central galaxies becomes relevant only for low-occupancy halos whose mass scale $m$ is confined by \mbox{$\ave{N|m}\lesssim1$} anyway. The extra freedom of \mbox{$f_{\rm cen}\ne1$} in the templates modifies the foregoing power spectra. On the one hand, the two-halo power spectra are unaffected because they do not depend on either $p$ or $q$. On the other hand, for the one-halo regime we now find the linear combination \begin{eqnarray} \label{eq:pgfcen} P_{\rm g}^{\rm 1h}(k)&=&f_{\rm cen}\,P_{\rm g}^{\rm cen}(k)+(1-f_{\rm cen})\,P_{\rm g}^{\rm sat}(k)\;,\\ % \label{eq:pgmfcen} P_{\rm gm}^{\rm 1h}(k)&=&f_{\rm cen}\,P_{\rm gm}^{\rm cen}(k)+(1-f_{\rm cen})\,P_{\rm gm}^{\rm sat}(k) \end{eqnarray} because halos with (or without) central galaxies contribute with probability $f_{\rm cen}$ (or $1-f_{\rm cen}$) to the one-halo term.
In the equations, the $P^{\rm cen}(k)$ denote the one-halo power spectra of halos with central galaxies, and the $P^{\rm sat}(k)$ denote spectra of halos with only satellites. Both cases are covered in the foregoing formalism for appropriate values of $p,q$: Satellite-only halos with superscript `sat' are obtained by using \mbox{$p\equiv q\equiv1$}; halos with central galaxies, superscript `cen', use the usual mass-dependent expressions \Ref{eq:pq}. As a result, we can determine the bias factor for the mixture scenario with \Ref{eq:pgfcen} by \begin{equation} \label{eq:bkcen} [b^{\rm 1h}(k)]^2= f_{\rm cen}\,[b_{\rm cen}(k)]^2 + (1-f_{\rm cen})\,[b_{\rm sat}(k)]^2\;. \end{equation} Here $b_{\rm cen}(k)$ denotes Eq. \Ref{eq:bk} in the central-galaxy scenario, whereas $b_{\rm sat}(k)$ denotes the satellite-only scenario of this equation. Similarly, for the correlation $r^{\rm 1h}(k)$ we obtain with \Ref{eq:pgfcen} and \Ref{eq:pgmfcen} \begin{eqnarray} \label{eq:rkcen} \lefteqn{r^{\rm 1h}(k)=}\\ \nonumber && f_{\rm cen}\,\frac{P_{\rm gm}^{\rm cen}(k)} {\sqrt{P^{\rm 1h}_{\rm g}(k)\,P^{\rm 1h}_{\rm m}(k)}} + (1-f_{\rm cen})\,\frac{P_{\rm gm}^{\rm sat}(k)} {\sqrt{P^{\rm 1h}_{\rm g}(k)\,P^{\rm 1h}_{\rm m}(k)}} \\ \nonumber &&= f_{\rm cen}\,\sqrt{\frac{P^{\rm cen}_{\rm g}(k)} {P^{\rm 1h}_{\rm g}(k)}}r_{\rm cen}(k)+ (1-f_{\rm cen}) \sqrt{\frac{P^{\rm sat}_{\rm g}(k)} {P^{\rm 1h}_{\rm g}(k)}}r_{\rm sat}(k) \\ \nonumber &&= \frac{f_{\rm cen}\,r_{\rm cen}(k)} {\sqrt{f_{\rm cen}+(1-f_{\rm cen})\,\left(\frac{b_{\rm sat}(k)}{b_{\rm cen}(k)}\right)^2}}+ \frac{(1-f_{\rm cen})\,r_{\rm sat}(k)} {\sqrt{1-f_{\rm cen}+f_{\rm cen}\left(\frac{b_{\rm cen}(k)}{b_{\rm sat}(k)}\right)^2}}\;, \end{eqnarray} because \begin{equation} P_{\rm g}^{\rm 1h}(k)= f_{\rm cen}\,[b_{\rm cen}(k)]^2\,P^{\rm 1h}_{\rm m}(k)+ (1-f_{\rm cen})\,[b_{\rm sat}(k)]^2\,P^{\rm 1h}_{\rm m}(k)\;. \end{equation} The function $r_{\rm cen}(k)$ denotes Eq. \Ref{eq:rk} in the central-galaxy scenario, and $r_{\rm sat}(k)$ is the satellite-only scenario. \section{Parameters of model templates and physical discussion} \label{sect:implementation} In this section, we summarise the concrete implementation of our templates, and we discuss their parameter dependence to physically interpret the scale-dependent galaxy bias. \subsection{Normalised excess-variance} For a practical implementation of our templates, we find it useful to replace $\Delta\sigma^2_N(m)$ in Eq. \Ref{eq:dsigma} by the `normalised excess-variance' \begin{equation} V(m)= \frac{\Delta\sigma^2_N(m)}{\ave{N|m}} =\frac{\sigma^2_{\rm N}(m)}{\ave{N|m}}-1\;, \end{equation} which typically has a small dynamic range with values between minus and plus unity. To see this, we discuss its upper and lower limits in the following. First, the normalised excess-variance has a lower limit because the average number of galaxy pairs is always non-negative, \begin{equation} \ave{N(N-1)|m}=\ave{N|m}^2\,\left(1+\frac{V(m)}{\ave{N|m}}\right)\ge0\,, \end{equation} which imposes \mbox{$V(m)\ge-\ave{N|m}$}. As an additional constraint we have a non-negative variance \begin{equation} \sigma_{\rm N}^2(m)=\ave{N|m}\,\Big(V(m)+1\Big)\,\ge0 \end{equation} or $V(m)\ge-1$ so that we use \begin{equation} V(m)\ge\max{\Big\{-1,-\ave{N|m}\Big\}} \end{equation} for a valid set of template parameters. Second, for the upper limit of $V(m)$, we imagine that there is a maximum $N_{\rm max}(m)$ for the number of halo galaxies (of the selected population) inside a halo of mass $m$.
A maximum $N_{\rm max}(m)$ makes physical sense because we cannot squeeze an arbitrary number of galaxies into a halo. Nevertheless, their number \mbox{$0\le N(m) \le N_{\rm max}(m)$} shall be random with PDF $P(N|m)$. Of this PDF we already know that its mean is $\ave{N|m}$. For its maximum possible variance $\sigma^2_{\rm max}(m)$, we note that $\sigma_{\rm N}^2(m)$ cannot be larger than that for halos with a bimodal distribution of only two allowed galaxy numbers \mbox{$N(m)\in\{0,N_{\rm max}(m)\}$} that shall occur with probabilities $1-\lambda$ and $\lambda$, respectively. The mean of this bimodal PDF is $\ave{N|m}=\lambda\,N_{\rm max}(m)$, and its variance $\sigma_{\rm max}^2(m)= \ave{N^2|m}-\ave{N|m}^2$ consequently satisfies \begin{multline} \sigma_{\rm max}(m)= \\ \sqrt{N_{\rm max}^2(m)\,\lambda-N_{\rm max}^2(m)\,\lambda^2}= \ave{N|m}^{1/2}\,\sqrt{N_{\rm max}(m)-\ave{N|m}}\;, \end{multline} which is the upper limit for any $P(N|m)$. Together with the lower bound of $V(m)$, we thus arrive at \begin{equation} \max{\Big\{-1,-\ave{N|m}\Big\}}\le V(m)\le N_{\rm max}(m)-1-\ave{N|m}\;. \end{equation} This means: halos that are (on average) filled close to the limit, that is \mbox{$\ave{N|m}\approx N_{\rm max}(m)\ge1$}, have a HOD variance that is sub-Poisson, close to \mbox{$V(m)=-1$}. This should be especially the case for halos with \mbox{$\ave{N|m}\approx1$}. On the other hand, halos with \mbox{$N_{\rm max}(m)\approx1$} and low occupancy, \mbox{$\ave{N|m}\ll1$}, necessarily obey Poisson statistics or are close to that, which means that \mbox{$V(m)\approx 0$}. At the other extreme, spacious halos well below the fill limit, \mbox{$N_{\max}(m)\gg1$} and \mbox{$N_{\rm max}(m)\gg\ave{N|m}$}, have sufficient headroom to allow for a super-Poisson variance, which means that \mbox{$V(m)>0$}. In the following, we adopt the upper limit \mbox{$V(m)\le+1$}, meaning that we a priori do not allow the HOD variance to become larger than twice the Poisson variance. \subsection{Implementation} \label{sect:modimplement} \renewcommand{\arraystretch}{1.1} \begin{table} \caption{\label{tab:model} List of free template parameters} \begin{center} \begin{tabular}{llr} \hline\hline Param & Description & Dim\\ \hline\\ $b_{\rm ls}$ & large-scale bias factor & 1\\ $r_{\rm ls}$ & large-scale correlation factor & (1)\\ $b(m)$ & mean biasing function (interp.) & 22\\ $V(m)$ & normalised excess-variance (interp.) & 22\\ $m_{\rm piv}$ & $\ave{N|m_{\rm piv}}=1$; pivotal halo mass & 1\\ $f_{\rm cen}$ & halo fraction open for central galaxies & 1\\ \hline\\ & & $\Sigma=47\,(48)$ \end{tabular} \tablefoot{The parameters $b(m)$ and $V(m)$ cover the mass range $10^4-10^{16}\,h^{-1}\,\msol$. The numbers ``(1)'' in brackets indicate optional degrees of freedom of the template. See text for more details.} \end{center} \end{table} \renewcommand{\arraystretch}{1.0} Generally the functions $V(m)$ and $b(m)$ are continuous functions of the halo mass $m$. We apply, however, an interpolation with $20$ interpolation points on an equidistant logarithmic $m$-scale for these functions, spanning the range $10^8\,h^{-1}\,\msol$ to $10^{16}\,h^{-1}\,\msol$; between adjacent sampling points we interpolate linearly on the log-scale; we set \mbox{$b(m)=V(m)=0$} outside the interpolation range. Additionally, we find in numerical experiments with unbiased galaxies that the halo mass-scale has to be lowered to $10^4\,h^{-1}\,\msol$ to obtain correct descriptions of the bias.
We therefore include two more interpolation points at $10^4$ and $10^6\,h^{-1}\,\msol$ to extend the mass scale to very low halo masses. For the large-scale bias, we set $r_{\rm ls}\equiv1$ but leave $b_{\rm ls}$ as free parameter. \begin{figure*} \begin{center} \epsfig{file=fig6.ps,width=125mm,angle=-90} \end{center} \caption{\label{fig:models} Family of templates $b(k)$ (black lines) and $r(k)$ (red lines) for the range of wave numbers $k$ in the top axis; the left-hand $y$-axis applies to the panels in the first column, the right-hand axis to the second column. The aperture scale $\theta_{\rm ap}=4.25/(k\,f_K(z_{\rm d}))$ (bottom axis) crudely traces the projected $b_{\rm 2D}(\theta_{\rm ap})$ and $r_{\rm 2D}(\theta_{\rm ap})$ for lens galaxies at $z_{\rm d}=0.3$. Each panel varies only one template parameter. See text for more details.} \end{figure*} To predict the number density $\bar{n}_{\rm g}$ of galaxies, Eq. \Ref{eq:rhong}, and to determine $(p,q)$ for a given mass $m$, we have to obtain $\ave{N|m}$ from $b(m)$. For this purpose, we introduce another parameter $m_{\rm piv}$ which is the pivotal mass of low-occupancy halos, defined by \mbox{$\ave{N|m_{\rm piv}}=1$} such that \begin{equation} \label{eq:nofm} \ave{N|m}= \frac{m}{m_{\rm piv}}\, \frac{b(m)}{b(m_{\rm piv})}\;. \end{equation} The (comoving) number density of galaxies $\bar{n}_{\rm g}$ is then given by \begin{equation} \label{eq:nbar} \frac{\bar{n}_{\rm g}}{\overline{\rho}_{\rm m}}= \frac{\int_0^\infty\d m\;n(m)\,\ave{N|m}}{\overline{\rho}_{\rm m}}= \int_0^\infty\frac{\d m\;w_{01}^{\rm 2h}(m)}{m_{\rm piv}}\, \frac{b(m)}{b(m_{\rm piv})}\;, \end{equation} for which we use $\bar{\rho}_{\rm m}=\Omega_{\rm m}\,\bar{\rho}_{\rm crit}$. With this parameterisation, the normalisation of $b(m)$ is irrelevant in all equations of our bias templates. Nevertheless, $b(m)$ can be shown to obey \begin{equation} \int_0^\infty\!\!\d m\;n(m)\,m\,b(m)= \overline{\rho}_{\rm m}\Longleftrightarrow \int_0^\infty\!\!\d m\;w_{01}^{\rm 2h}(m)\,b(m)=1 \end{equation} which follows from the Eqs. \Ref{eq:flm} and \Ref{eq:rhong}. When plotting $b(m)$, we make sure that it is normalised correspondingly. Furthermore, for the templates we assume that satellite galaxies always trace the halo matter density so that \mbox{$\tilde{u}_{\rm g}(k,m)\equiv\tilde{u}_{\rm m}(k,m)$}. This assumption could be relaxed in a future model extension. For the matter density profile $\tilde{u}_{\rm m}(k,m)$, we assume an NFW profile \citep{1996ApJ...462..563N} with a mass concentration as in \citet{2000MNRAS.318..203S} and a halo mass spectrum $n(m)$ according to \citet{1999MNRAS.308..119S}. For the average biasing functions $b(k)$ and $r(k)$, we evaluate $n(m)$, $b_{\rm h}(m)$, and $\tilde{u}_{\rm m}(k,m)$ at the mean redshift of the lens galaxies. As model for $P_{\rm m}(k;\chi)$ in Sect. \ref{sect:biassmoothing} we employ the publicly available code \texttt{nicaea}\footnote{\url{http://www.cosmostat.org/software/nicaea/}} version 2.5 \citep{2009A&A...497..677K} that provides an implementation of \texttt{Halofit} with the recent update by \citet{2012ApJ...761..152T} and the matter transfer-function in \cite{1998ApJ...496..605E} for baryonic oscillations. We list all free parameters of the templates in Table \ref{tab:model}. Their total number is 47 by default. In a future application, we may also consider $r_{\rm ls}$ a free parameter to test, for instance, the validity of \mbox{$r_{\rm ls}=1$}.
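As a summary of this parameterisation, the following sketch (with illustrative names) evaluates $\ave{N|m}$ from $b(m)$ and $m_{\rm piv}$, Eq. \Ref{eq:nofm}, and the implied galaxy number density, Eq. \Ref{eq:nbar}; \texttt{n\_of\_m} stands for an assumed halo mass-function callable.
\begin{verbatim}
import numpy as np

# <N|m> from the mean biasing function and the pivotal mass, Eq. (nofm),
# and the comoving galaxy number density, Eq. (nbar). `b_of_m` and
# `n_of_m` are assumed callables for b(m) and the halo mass function.

def N_of_m(m, b_of_m, m_piv):
    """<N|m> = (m / m_piv) * b(m) / b(m_piv), so that <N|m_piv> = 1."""
    return m / m_piv * b_of_m(m) / b_of_m(m_piv)

def n_gal(m_grid, n_of_m, b_of_m, m_piv):
    """Integrate n(m) <N|m> dm on a logarithmic mass grid (dm = m dln m)."""
    integrand = n_of_m(m_grid) * N_of_m(m_grid, b_of_m, m_piv) * m_grid
    return np.trapz(integrand, np.log(m_grid))
\end{verbatim}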
If no large-scale information on the aperture statistics is available, we predict $b_{\rm ls}$ from Eq. \Ref{eq:lsbias}, reducing the degrees of freedom in the model by one. To obtain the biasing functions $b(k)$ and $r(k)$ from the set of parameters, we proceed as follows. We first compute the one-halo terms \Ref{eq:bk} and \Ref{eq:rk} for two separate scenarios: with and without central galaxies. Both scenarios are then mixed according to the Eqs. \Ref{eq:bkcen} and \Ref{eq:rkcen} for the given value of $f_{\rm cen}$. Finally, we patch together the one- and two-halo biasing functions according to Eqs. \Ref{eq:rkgeneral} and \Ref{eq:bkgeneral} with a weight $W_{\rm m}(k)$ for the fiducial cosmology. \subsection{Physical discussion} \label{sect:modeldiscussion} Fig. \ref{fig:models} is a showcase of conceivable biasing functions and their relation to the underlying galaxy physics, computed in the aforementioned way. The wave number $k$ is plotted on the top axis, whereas the bottom axis is defined by $\theta_{\rm ap}=4.25/(k\,f_K(z_{\rm d}))$ for a lens redshift of $z_{\rm d}=0.3$, which is essentially a simplistic prediction for $b_{\rm 2D}(\theta_{\rm ap})$ and $r_{\rm 2D}(\theta_{\rm ap})$ as observed by the lensing technique in Sect. \ref{sect:projectedbias}. For the discussion here, we concentrate on the spatial biasing functions. We plot both $b(k)$ and $r(k)$ inside each panel. The black lines show a family of $b(k)$ that we obtain by varying one template parameter at a time in a fiducial model; the red lines are families of $r(k)$. The varied parameter is indicated in the top right corner of each panel. We assume a large-scale bias $b_{\rm ls}$ according to Eq. \Ref{eq:lsbias} with the theoretical halo bias $b_{\rm h}(m)$ in \cite{2005ApJ...631...41T}. The fiducial model has: (i) no central galaxies, \mbox{$f_{\rm cen}=0$}; (ii) a constant $b(m)>0$ for $m\in[10^9,10^{15}]\,h^{-1}\,\msol$ but vanishing everywhere else; (iii) a Poisson HOD-variance, \mbox{$V(m)=0$}, for all halo masses; and (iv) a pivotal mass of \mbox{$m_{\rm piv}=10^{11}\,h^{-1}\,\msol$}. This setup results in a large-scale bias factor of \mbox{$b_{\rm ls}=1.48$}. The details of the panels are as follows. \begin{itemize} \item The bottom left panel varies $f_{\rm cen}$ between zero and 100\% in steps of 20\% (bottom to top lines). Affected by a change of $f_{\rm cen}$ are only the small scales $k\gtrsim10\,h\,\rm Mpc^{-1}$ (or $\theta_{\rm ap}\lesssim1\,\rm arcmin$) that are strongly influenced by low-mass, low-occupancy halos. \item The bottom right panel increases $m_{\rm piv}$ from $10^9\,h^{-1}\,\msol$ (bottom line) to $10^{13}\,h^{-1}\,\msol$ (top line) in steps of one dex. An impact on the bias functions is only visible if we have either a non-Poisson HOD variance or central galaxies. We hence set \mbox{$f_{\rm cen}=20\%$} compared to the fiducial model. A greater value of $m_{\rm piv}$ shifts the mass scale of low-occupancy halos to larger masses and thus their impact on the bias functions to larger scales. \item In the top left panel, we adopt a sub-Poisson model of \mbox{$V(m)=\max{\{-0.5,-\ave{N|m}\}}$} for halos with \mbox{$m\le m_{\rm v}$}. We step up the mass scale $m_{\rm v}$ from \mbox{$10^{10}\,h^{-1}\,\msol$} (bottom line for $r$; top line for $b$) to \mbox{$10^{14}\,h^{-1}\,\msol$} (top line for $r$; bottom line for $b$) in steps of one dex.
Similar to the toy model in Sect. \ref{sect:toymodel}, a sub-Poisson variance produces opposite trends for $b$ and $r$: if $b$ goes up, $r$ goes down, and vice versa. The effect is prominent at small scales where low-occupancy halos significantly contribute to the bias functions. Conversely to what is shown here, these trends in $b$ and $r$ change signs if we adopt a super-Poisson variance instead of a sub-Poisson variance for \mbox{$m\le m_{\rm v}$}, which means that \mbox{$V(m)>0$}. \item The top right panel varies the mean biasing function $b(m)$. To achieve this we consider a mass-cutoff scale $m_{\rm f}$ beyond which halos do not harbour any galaxies, that is \mbox{$b(m)=0$}. We reduce this cutoff from \mbox{$m_{\rm f}=10^{15}\,h^{-1}\,\msol$} down to \mbox{$10^{11}\,h^{-1}\,\msol$} by one dex in each step (top to bottom line). This gradually excludes galaxies from high-mass halos on the mass scale. Broadly speaking, we remove galaxies from massive clusters first, then groups, and retain only field galaxies in the end. As for a non-Poisson HOD or the presence of central galaxies, this gives rise to a strong scale-dependence of the bias functions, but now clearly visible on all scales. Despite its complex scale-dependence, the correlation factor always stays \mbox{$r(k)\le1$} because of the Poisson HOD variance and the absence of central galaxies in the default model. \end{itemize} This behaviour of the biasing functions is qualitatively similar to what is seen in the related analytic model by \cite{2012MNRAS.426..566C}, where deviations from either faithful galaxies, a Poissonian HOD, or a constant mean biasing function \mbox{$b(m)\equiv1$} are also necessary for biased galaxies. Moreover, the scale-dependence that is induced by central galaxies or a non-Poisson HOD variance is there, as for our templates, restricted to small scales in the one-halo (low-occupancy halo) regime, typically below a few $h^{-1}\,\rm Mpc$. However, their model has a different purpose than our templates and is therefore less flexible. To make useful predictions of biasing functions for luminosity-selected galaxies they assume (apart from different technicalities as to the treatment of centrals and satellites) that: the mean galaxy number $\ave{N|m}$ is strongly confined by realistic conditional luminosity-functions ($b(m)$ is not free); their `Poisson function' \mbox{$\beta(m):=V(m)/\ave{N|m}+1$} is a constant ($V(m)$ is not free); the large-scale biasing factor $b_{\rm ls}$ is determined by $b(m)$. In particular, the freedom of $b(m)$ provides our templates with the flexibility to vary over a large range of scales (top right panel in Fig. \ref{fig:models}), which may be required for galaxies with a complex selection function. \section{Practical inference of biasing functions} \label{sect:statinference} In this section, we construct a methodology to statistically infer the biasing functions $b(k)$ and $r(k)$ from noisy observations of the lensing aperture statistics $\ave{{\cal N}^2}$, $\ave{{\cal N}M_{\rm ap}}$, and $\ave{M^2_{\rm ap}}$. The general idea is to utilise the model templates in Sect. \ref{sect:implementation} and to constrain the space of their parameters by the likelihood of the observed ratio statistics $b_{\rm 2D}(\theta_{\rm ap})$ and $r_{\rm 2D}(\theta_{\rm ap})$. The posterior distribution of templates will constitute the posterior of the deprojected biasing functions.
To estimate the aperture statistics from lens and source catalogues we employ standard techniques that we summarise in Appendix \ref{sect:estimators} for a practical reference. We shall assume that we have measurements of the aperture statistics and their joint error covariance in the following, based on estimates of lens-lens, lens-shear, and shear-shear correlation functions between 1.4 arcsec and 280 arcmin and 64 jackknife samples. The aperture statistics are computed for nine radii $\theta_{\rm ap}$ between 1.8 arcmin and 140 arcmin. \subsection{Statistical analysis} \label{sect:statanalysis} In our statistical analysis, we fit for a set of $n_{\rm d}$ aperture radii $\theta_i$ a model of the aperture statistics $b_{\rm 2D}(\theta_i;b)$ and $r_{\rm 2D}(\theta_i;b,r)$, Eqs. \Ref{eq:b2d} and \Ref{eq:r2d}, to the measurement of the ratio statistics $b_{\rm 2D}(\theta_i)$ and $r_{\rm 2D}(\theta_i)$, Eqs. \Ref{eq:b2dobs} and \Ref{eq:r2dobs}. Ratios of the noisy aperture statistics result in a skewed error distribution for $b_{\rm 2D}$ and $r_{\rm 2D}$, which we account for with a non-Gaussian model likelihood that assumes Gaussian errors for the aperture moments $\ave{{\cal N}^2}$, $\ave{{\cal N}M_{\rm ap}}$, $\ave{M_{\rm ap}^2}$ themselves (and positive values for the variances). With regard to the validity of a (truncated) Gaussian model for the aperture moments, at least for current cosmic-shear studies this is known to be a sufficiently accurate approximation \citep[e.g.][]{2017MNRAS.465.1454H,2013MNRAS.430.2200K}. Nevertheless, our statistical tests in Appendix \ref{sect:nongauss} find evidence for non-Gaussian statistics in our mock data, especially for the variance $\ave{M^2_{\rm ap}}$ on scales of one degree or larger. This may bias the reconstruction of $b(k)$ and $r(k)$; such a bias will eventually be contained in our assessment of systematic errors later on. To motivate our model likelihood for $b_{\rm 2D}(\theta_i)$ and $r_{\rm 2D}(\theta_i)$, let us first consider a simpler case where $\hat{x}=x+\delta x$ and $\hat{y}=y+\delta y$ are measurements of two numbers $x$ and $y$, respectively, with a bivariate PDF $p_\delta(\delta x,\delta y)$ for the noise in the measurement. Our aim shall be to constrain the ratio \mbox{$R=\sqrt{y/x}$}. The posterior PDF $p(R|\hat{x},\hat{y})$ of $R$ given $\hat{x}$ and $\hat{y}$ can be written as the marginal PDF \begin{eqnarray} p(R|\hat{x},\hat{y})&=& \int\d x\;p(R,x|\hat{x},\hat{y})\\ &\propto& \int\d x\;{\cal L}(\hat{x},\hat{y}|R,x)\,p(x)\, p(R)\\ &=& p(R)\, \int\d x\;p_\delta\Big(\hat{x}-x,\hat{y}-R^2\,x\Big)\,p(x)\,, \end{eqnarray} where ${\cal L}(\hat{x},\hat{y}|R,x)$ shall be the likelihood of $(\hat{x},\hat{y})$ given a value pair $(x,R)$, and the product $p(x)\,p(R)$ is the joint prior of $(x,R)$ \citep[see][for an introduction to Bayesian statistics]{gelman2003bayesian}. We see that the integral in the last line, \begin{equation} \label{eq:illustration} {\cal L}(\hat{x},\hat{y}|R):= \int\d x\;p_\delta\Big(\hat{x}-x,\hat{y}-R^2\,x\Big)\,p(x)\;, \end{equation} has to be the likelihood of $(\hat{x},\hat{y})$ for a given ratio $R$. We are thus essentially fitting a two-parameter model $(R,x)$ to \mbox{$\hat{y}=R^2\,x$} and \mbox{$\hat{x}=x$}, followed by marginalisation over $x$.
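The one-dimensional illustration \Ref{eq:illustration} can be reproduced numerically in a few lines. The sketch below assumes independent Gaussian noise in $\hat{x}$ and $\hat{y}$ and a uniform prior for $x>0$; all names and values are illustrative.
\begin{verbatim}
import numpy as np

# Numerical sketch of Eq. (illustration): marginal likelihood of the
# ratio R = sqrt(y/x), assuming independent Gaussian noise with
# dispersions (sig_x, sig_y) and a uniform prior on x > 0.

def ratio_likelihood(R, x_hat, y_hat, sig_x, sig_y, n_grid=2048):
    x = np.linspace(1e-3 * x_hat, 5.0 * x_hat, n_grid)  # prior support
    p_noise = (np.exp(-0.5 * ((x_hat - x) / sig_x)**2)
               * np.exp(-0.5 * ((y_hat - R**2 * x) / sig_y)**2))
    return np.trapz(p_noise, x)

# Posterior of R (up to a constant) for a uniform prior p(R):
R_grid = np.linspace(0.1, 3.0, 300)
post = np.array([ratio_likelihood(R, 1.0, 1.5, 0.2, 0.3) for R in R_grid])
post /= np.trapz(post, R_grid)   # skewed, as expected for a ratio
\end{verbatim}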
Coming back to our statistical analysis of the aperture statistics, $y$ and $x$ would here be $f^2_{\rm b}\,\ave{{\cal N}^2}$ and $\ave{M^2_{\rm ap}}$, for example, and \mbox{$R=f_{\rm b}\,\ave{{\cal N}^2}^{1/2}\,\ave{M^2_{\rm ap}}^{-1/2}$} is the (projected) bias factor $b_{\rm 2D}$. For our full analysis, however, we have to jointly constrain $b_{\rm 2D}$ and the correlation factor $r_{\rm 2D}$ for a set of aperture radii $\theta_i$ in a more general approach. To implement a general approach involving $n_{\rm d}$ aperture radii and both the bias and correlation factors for all radii simultaneously, we combine the measurements of aperture moments inside the data vector with the (observed) elements \begin{equation} d_j=\left\{ \begin{array}{ll} \ave{{\cal N}^2(\theta_j)}\;, & 1\le j\le n_{\rm d}\\ \ave{{\cal N}M_{\rm ap}(\theta_{j-n_{\rm d}})} \;, &n_{\rm d}<j\le2n_{\rm d}\\ \ave{M^2_{\rm ap}(\theta_{j-2n_{\rm d}})} \;, &2n_{\rm d}<j\le3n_{\rm d} \end{array} \right.\;, \end{equation} and we fit to this vector the model $\vec{m}(\vec{\Theta},\vec{x})$ with template parameters $\vec{\Theta}$ (Table \ref{tab:model}) and (theoretical) vector elements \begin{equation} m_j(\vec{\Theta},\vec{x}) =\left\{ \begin{array}{ll} \frac{\ave{{\cal N}^2}_{\rm th}(\theta_j;b)\,x_j}{\ave{M^2_{\rm ap}}_{\rm th}(\theta_j)}\;, & 1\le j\le n_{\rm d}\\\\ \frac{\ave{{\cal N}M_{\rm ap}}_{\rm th}(\theta_{j-n_{\rm d}};b,r)\,x_{j-n_{\rm d}}}{\ave{M^2_{\rm ap}}_{\rm th}(\theta_{j-n_{\rm d}})} \;, & n_{\rm d}<j\le2n_{\rm d}\\\\ x_{j-2n_{\rm d}}\;, & 2n_{\rm d}<j\le3n_{\rm d} \end{array} \right. \end{equation} using a PDF $p_\delta(\delta\vec{d})$ that accounts for the correlated noise $\delta\vec{d}$ in the aperture statistics. The details of this PDF are given below. We note that the explicit normalisation $f_{\rm b}(\theta_i)$ and $f_{\rm r}(\theta_i)$ disappears here because both the theoretical and observed ratio-statistics $\{(b_{\rm 2D}(\theta_i),r_{\rm 2D}(\theta_i))\}$ are normalised exactly the same way. However, the normalisation is indirectly present through the ratio of theoretical aperture moments in $\vec{m}(\vec{\Theta},\vec{x})$, so that a wrong normalisation will introduce a bias in the reconstruction. Similar to the previous illustration, we integrate over the nuisance parameter $x_i=\ave{M^2_{\rm ap}(\theta_i)}$ to obtain the marginal likelihood \begin{equation} \label{eq:likelihood} {\cal L}(\vec{d}|\vec{\Theta})= \int\d^{n_{\rm d}}x\; p_\delta\Big(\vec{d}-\vec{m}(\vec{\Theta},\vec{x})\Big)\, p(\vec{x})\;. \end{equation} We adopt a uniform prior $p(\vec{x})$ for $\vec{x}$ with the additional condition that the variance of the aperture mass has to be positive or zero. The measurement noise $\delta\vec{d}$ in the aperture statistics approximately obeys Gaussian statistics which is characterised by a noise covariance \mbox{$\mat{N}=\ave{\delta\vec{d}\,\delta\vec{d}^{\rm T}}$}; the mean $\ave{\delta\vec{d}}$ vanishes by definition. The exact covariance $\mat{N}$, however, is unknown, so we estimate it from the data themselves by $\widehat{\mat{N}}$, obtained from $n_{\rm jk}$ jackknife realisations of the data (Appendix \ref{sect:estimators}). We include the uncertainty of $\hat{\mat{N}}$ in the posterior of the spatial biasing functions by analytically marginalising over its statistical error.
As shown in \citet{2016MNRAS.456L.132S}, this produces for Gaussian $\delta\vec{d}$ a multivariate $t$-distribution for the noise model $p_\delta(\delta\vec{d})$, \begin{equation} \label{eq:likecond} -2\ln{p_\delta(\delta\vec{d})} = {\rm const}+ n_{\rm jk}\, \ln{\left(1+\frac{\chi^2}{n_{\rm jk}-1}\right)}\;, \end{equation} where $\chi^2:=\delta\vec{d}^{\rm T}\,\widehat{\mat{N}}^{-1}\,\delta\vec{d}$. To approximately evaluate \Ref{eq:likelihood}, we perform a numerical Monte-Carlo integration \begin{eqnarray} {\cal L}(\vec{d}|\vec{\Theta})&=& \int\d^{n_{\rm d}}\!x\,q(\vec{x})\, \frac{p_\delta\Big(\vec{d}-\vec{m}(\vec{\Theta},\vec{x})\Big)\,p(\vec{x})} {q(\vec{x})} \\ \label{eq:likemargin} &\approx& \frac{1}{n_x} \sum_{i=1}^{n_x} \frac{p_\delta\Big(\vec{d}-\vec{m}(\vec{\Theta},\vec{x}_i)\Big)\,p(\vec{x}_i)} {q(\vec{x}_i)}\;, \end{eqnarray} for which \begin{equation} \label{eq:importance} -2\ln{q(\vec{x})}= {\rm const}+ \Big(\vec{x}-\vec{d}_{\rm map}\Big)^{\rm T}\mat{N}^{-1}_{\rm map} \Big(\vec{x}-\vec{d}_{\rm map}\Big) \end{equation} is a so-called importance function of the Monte-Carlo integral, and $d_{{\rm map},j}=\ave{M^2_{\rm ap}(\theta_j)}$ are the measured variances of the aperture mass at $\theta_j$; the vectors $\vec{x}_i\sim q(\vec{x})$ are $n_{\rm x}$ random realisations of the importance function; the matrix $\mat{N}_{\rm map}^{-1}$ denotes our estimate for the inverse covariance of noise in $\vec{d}_{\rm map}$, that is that of $\ave{M^2_{\rm ap}}$ alone, which we also obtain from jackknife samples and the estimator in \cite{2007A&A...464..399H}. The purpose of the importance function $q(\vec{x})$ is to improve the convergence of the Monte-Carlo sum \Ref{eq:likemargin} by producing a higher density of sampling points $\vec{x}_i$ where most of the probability mass of \mbox{$p_\delta(\vec{d}-\vec{m}(\vec{\Theta},\vec{x}))$} is located \citep[e.g.][]{2010MNRAS.405.2381K}. We note that for any $q(\vec{x})$ the sum always converges to the same ${\cal L}(\vec{d}|\vec{\Theta})$ as long as $q(\vec{x})$ is proper and \mbox{$q(\vec{x})>0$} for all $\vec{x}$. To save computation time, we initially prepare \mbox{$n_x=10^3$} realisations $\vec{x}_i$ and reuse these for every new estimation of the marginal likelihood in \Ref{eq:likemargin}. We explore the posterior distribution of parameters $\vec{\Theta}$ in the template, that is \begin{equation} \label{eq:posterior} p(\vec{\Theta}|\vec{d})= E^{-1}(\vec{d})\, {\cal L}(\vec{d}|\vec{\Theta})\,p(\vec{\Theta}) \propto{\cal L}(\vec{d}|\vec{\Theta})\,p(\vec{\Theta})\;, \end{equation} by sampling with the Multiple-Try Metropolis algorithm, where the constant evidence $E(\vec{d})$ is not of interest here \citep{2012arXiv1201.0646M}. We assume that the prior $p(\vec{\Theta})$ is uniform on a linear scale for all parameters within their defined boundaries, see Sect. \ref{sect:modimplement}, and \mbox{$0<b_{\rm ls}\le3$}. Different Monte-Carlo chains can be combined by joining the different sets of sampling points from independent Monte-Carlo runs. If the joint sample is too large to be practical, a resampling can be applied. This means we randomly draw a subset of points $\vec{\Theta}_i$ from the joint sample. Depending on the details of the adopted MCMC algorithm, the probability of drawing $\vec{\Theta}_i$ in the resampling has to be proportional to its weight in case points are not equally weighted.
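Put together, the noise model \Ref{eq:likecond} and the importance-sampled marginalisation \Ref{eq:likemargin} amount to only a few lines of code. The following is a minimal sketch; \texttt{model}, \texttt{ln\_q}, and \texttt{ln\_prior} are assumed user-supplied callables, and the log-sum-exp step is added for numerical stability.
\begin{verbatim}
import numpy as np

# Multivariate t noise model, Eq. (likecond), and the importance-sampled
# marginal likelihood, Eq. (likemargin). N_hat_inv is the inverse of the
# jackknife covariance estimate; n_jk the number of jackknife samples.

def ln_p_delta(dd, N_hat_inv, n_jk):
    chi2 = dd @ N_hat_inv @ dd
    return -0.5 * n_jk * np.log1p(chi2 / (n_jk - 1.0))

def marginal_likelihood(d, model, x_samples, ln_q, ln_prior,
                        N_hat_inv, n_jk):
    """Monte-Carlo average over importance samples x_i ~ q(x)."""
    ln_w = np.array([ln_p_delta(d - model(x), N_hat_inv, n_jk)
                     + ln_prior(x) - ln_q(x) for x in x_samples])
    ln_w_max = ln_w.max()                     # log-sum-exp trick
    return np.exp(ln_w_max) * np.mean(np.exp(ln_w - ln_w_max))
\end{verbatim}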
Finally, to conclude the reconstruction, we map the Monte-Carlo realisations of $\vec{\Theta}$ in the joint sample to a set of spatial biasing functions. The final set then samples the posterior distribution of $b(k)$ and $r(k)$. \subsection{Marginalisation of errors in the galaxy-bias normalisation} For our analysis, the fiducial cosmology and the intrinsic alignment of sources are exactly known from the cosmological model in the mock data. For future applications, however, it may be necessary to additionally marginalise over an a priori uncertainty $p(\vec{\pi})$ of the cosmological parameters $\vec{\pi}$ used for the normalisation of the galaxy bias, meaning that the $\vec{\Theta}$ posterior is \begin{equation} \label{eq:post} p(\vec{\Theta}|\vec{d})\propto \int\d\vec{\pi}\;p(\vec{\pi})\, {\cal L}(\vec{d}|\vec{\Theta},\vec{\pi})\, p(\vec{\Theta}) \approx \sum_{i=1}^{n_\pi}\, \frac{{\cal L}(\vec{d}|\vec{\Theta},\vec{\pi}_i)\, p(\vec{\Theta})}{n_\pi}\;, \end{equation} where ${\cal L}(\vec{d}|\vec{\Theta},\vec{\pi})$ is the likelihood of $\vec{d}$ for a given set $\vec{\Theta}$ and fiducial cosmology $\vec{\pi}$. Numerically, the marginalisation over $\vec{\pi}$ can be achieved, as indicated by the right-hand side of \Ref{eq:post}, by (i) randomly drawing a realisation $\vec{\pi}_i$ from the prior $p(\vec{\pi})$, (ii) performing the Monte-Carlo sampling of the posterior in Eq. \Ref{eq:posterior} for the fixed fiducial cosmology \mbox{$\vec{\pi}_i\sim p(\vec{\pi})$}, and (iii) combining the different chains with varying $\vec{\pi}$. Concretely, let us call the resulting Monte-Carlo sample from step (ii) ${\cal M}_i$. We repeat this step $n_\pi$ times for different cosmologies. For joining the chains in step (iii), we randomly draw one $\vec{\Theta}_i$ from each sample ${\cal M}_i$ to produce $n_\pi$ new vectors $\vec{\Theta}$ that go into the final sample. We repeat this random selection of $n_\pi$-tuples until the final sample has the desired size. We may apply the same technique to also marginalise over errors in the redshift distributions of lenses and sources, or over the uncertainties in the II and GI models.
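The combination of chains in the steps (i) to (iii) is straightforward to implement. The following Python sketch assumes two placeholder functions: \texttt{draw\_pi}, sampling one realisation from the prior $p(\vec{\pi})$, and \texttt{run\_mcmc}, returning a Monte-Carlo sample of $p(\vec{\Theta}|\vec{d},\vec{\pi}_i)$ as an array of equally weighted points.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def marginalise_cosmology(draw_pi, run_mcmc, n_pi, n_final):
    # steps (i) and (ii): one Theta chain per cosmology pi_i ~ p(pi)
    chains = [run_mcmc(draw_pi(rng)) for _ in range(n_pi)]
    # step (iii): repeatedly draw one point from each chain until
    # the joint sample has the desired size
    sample = []
    while len(sample) < n_final:
        sample.extend(chain[rng.integers(len(chain))]
                      for chain in chains)
    return np.asarray(sample[:n_final])
\end{verbatim}
If the chain points carry unequal weights, the random draws inside the loop have to be weighted accordingly, as noted above.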
\subsection{Galaxy number density as prior} The halo model provides a prediction of the mean galaxy density $\bar{n}_{\rm g}(\vec{\Theta})$, Eq. \Ref{eq:nbar}, that can be included in the template fit to improve the constraints on the otherwise poorly constrained pivotal mass $m_{\rm piv}$. We may achieve this by adding the log-normal likelihood \begin{equation} \ln{{\cal L}(\bar{n}_{\rm g}|\vec{\Theta})}= -\frac{(\log_{10}\bar{n}_{\rm g}^{\rm est}-\log_{10}\bar{n}_{\rm g}(\vec{\Theta}))^2}{2\sigma_{\rm logn}^2}\;, \end{equation} to the logarithm of the marginal likelihood in \Ref{eq:likemargin}. Here we denote by $\sigma_{\rm logn}$ the root-mean-square (RMS) error of the logarithmic number density $\log_{10}\bar{n}_{\rm g}^{\rm est}$ estimated from the data. A reasonable prior on $\bar{n}_{\rm g}$ can also be found if no estimate $\bar{n}_{\rm g}^{\rm est}$ is available, as is assumed here. The number density of galaxies at redshifts $z\lesssim2$ is typically of the order of $10^{-2}$ to $10^{-1}\,h^3\,\rm Mpc^{-3}$, or smaller for sub-samples \citep[e.g.][]{2016ApJ...830...83C}. Therefore, in the reconstruction of our biasing functions, we employ a weak Gaussian prior of \begin{equation} \log_{10}{(\bar{n}_{\rm g}^{\rm est}\,h^{-3}\rm Mpc^3)}\pm\sigma_{\rm logn}=-3\pm2 \end{equation} for the galaxy number density, and we impose an upper limit of $\bar{n}_{\rm g}(\vec{\Theta})\le1\,h^3\,\rm Mpc^{-3}$ to prevent an unphysically high number density of galaxies. We found that the upper limit improves the convergence of the MCMCs because chains can get stuck at low values of $m_{\rm piv}$ with unrealistically high values of $\bar{n}_{\rm g}$. \section{Results} \label{sect:results} In the following, we report our results for the reconstructed biasing functions of the galaxy samples SM1 to SM6, RED, and BLUE inside the two redshift bins low-$z$ (\mbox{$\bar{z}_{\rm d}\approx0.36$}) and high-$z$ (\mbox{$\bar{z}_{\rm d}\approx0.52$}). We concentrate on the reconstruction accuracy and precision, although the template parameters found in the reconstructions are also available in Appendix \ref{sect:physicaldetails}. If not stated otherwise, the results are for mock sources with a shape-noise dispersion \mbox{$\sigma_\epsilon=0.3$} and without reduced shear. As an additional test of the methodology, we use generic templates for a non-physical model of the spatial biasing functions and compare the results to those of our physical templates. Furthermore, we estimate the systematic error in the bias normalisation originating from various conceivable sources. The final sub-section is a demonstration of our technique with data from the Garching-Bonn Deep Survey (\mbox{GaBoDS}). \subsection{Reconstruction accuracy and precision} \label{sect:mockanalysis} \begin{figure*}[htb!] \begin{center} \epsfig{file=fig7a.ps,width=90mm,angle=-90} \vspace{0.5cm} \epsfig{file=fig7b.ps,width=90mm,angle=-90} \vspace{-0.5cm} \end{center} \caption{\label{fig:brofksm} Biasing functions $b(k)$ (left panels) and $r(k)$ (right panels) for all mock galaxy samples SM1 to SM6 and two redshift bins. The top figure is for the low-$z$ samples ($\bar{z}_{\rm d}\approx0.36$); the bottom figure for the high-$z$ samples ($\bar{z}_{\rm d}\approx0.52$). The shaded regions indicate the $68\%$ and $95\%$ PI of the reconstructed biasing functions. The red data points are the true biasing functions for comparison. For better visibility, we shifted the biasing functions by the constant value given in the figure key.} \end{figure*} \begin{figure*} \begin{center} \hspace{-0.5cm} \epsfig{file=fig8.ps,width=98mm,angle=-90} \end{center} \caption{\label{fig:brofkredblue} As in Fig. \ref{fig:brofksm} but now for the colour-selected samples RED and BLUE.} \end{figure*} \renewcommand{\arraystretch}{1.3} \begin{table*} \caption{\label{tab:accuracy} Overview of the reconstruction accuracy by listing the mean fractional errors $\sigma_{\rm b,r}$ and extreme outliers $\Delta_{\rm b,r}$ of the inferred biasing functions $b(k)$ and $r(k)$, respectively, in per cent.} \begin{center} \input{table.accuracy.tex} \end{center} \tablefoot{The columns `physical' refer to results with a physical model (Sect. \ref{sect:mockanalysis}), the columns `generic' list the results with generic fitting functions (Sect. \ref{sect:generic}). Quoted values are for the errors in the domain $k\in[0.05,10]\,h\,\rm Mpc^{-1}$ for $b(k)$ and $k\in[0.3,10]\,h\,\rm Mpc^{-1}$ for $r(k)$. The values $\Delta_{\rm b}$ and $\Delta_{\rm r}$ inside the brackets are the most significant deviations between the reconstructed and the true biasing functions.
Errors $\sigma_{\rm b,r}\ge5\%$ or outliers $\Delta_{\rm b,r}\ge3\sigma$ are quoted in boldface. Values in the last rows with $\ave{\sigma_{\rm b,r}}$ (or $\ave{\Delta_{\rm b,r}}$) are averages and dispersions for $\sigma_{\rm b}$ and $\sigma_{\rm r}$ (or $\Delta_{\rm b}$ and $\Delta_{\rm r}$) of all samples in the same redshift bin.} \end{table*} \renewcommand{\arraystretch}{1.0} The Figs. \ref{fig:brofksm} and \ref{fig:brofkredblue} are a direct comparison of our reconstructed biasing functions for all samples (shaded regions) to the true $b(k;\bar{z})$ and $r(k;\bar{z})$ in the three-dimensional simulation cube of the MS, shown as red data points; we use the snapshot redshifts $\bar{z}=0.362$ and $0.509$ for low-$z$ and high-$z$, respectively. The shaded regions indicate the 68\% and 95\% posterior intervals (PI) of our posterior constraints. In order to accommodate many reconstructions, we have shifted the biasing functions along the $y$-axis by a constant value that is indicated in the legend of each plot. We note that most functions are shifted downwards so that relative errors might appear larger than they really are. The left panels show $b(k)$, the right panels $r(k)$. Figure \ref{fig:brofksm} displays only the reconstructions for the stellar-mass samples, where the top row is for the low-$z$ samples and the bottom row for the high-$z$ samples. Similarly, Fig. \ref{fig:brofkredblue} shows the results for the RED and BLUE samples, now low-$z$ and high-$z$ combined in one figure. Overall we find a good agreement between a reconstruction and the true biasing functions, although significant disagreements are also visible. Most prominently, we find disagreements at large scales, that is at small wave numbers \mbox{$k\approx0.05\,h\,\rm Mpc^{-1}$}, for the low-$z$ $b(k)$ of RED, SM1, and SM5; or at small scales, \mbox{$k\approx10\,h\,\rm Mpc^{-1}$}, for the high-$z$ $r(k)$ of SM2 or BLUE; the low-$z$ $b(k)$ of SM2 and SM3 are offset by a few per cent on all scales, which may be an indication of a normalisation error. The disagreement at high \mbox{$k\gtrsim10\,h\,\rm Mpc^{-1}$} could be related to insufficient sampling by our MCMC because the results improve significantly for samples without shape noise, which reduces the statistical error at $\theta_{\rm ap}\approx2^\prime$ (not shown). It is also possible that the statistical model of the likelihood in Eq. \Ref{eq:likecond} is inaccurate and, as a consequence, underestimates the error distribution in the tail of the posterior at large $k$. To quantify the method accuracy we compare the reconstructed $b(k)$ or $r(k)$ to the true biasing function by the following metrics $\sigma_{\rm f}^2$ and $\Delta_{\rm f}$; the subscript `f' is either `b' for $b(k)$ or `r' for $r(k)$. The metrics compare the biasing functions at a discrete set $\{k_i:i=1\ldots n_k\}$ of $n_k=10$ wave numbers between \mbox{$0.05\le k\le10\,h\,\rm Mpc^{-1}$}, which we equally space on a log-scale. In the equations, we denote by $f(k)$ the posterior median of either $b(k)$ or $r(k)$ in the reconstruction, and $\sigma^2(k)$ is the variance of the posterior at a given $k$. In addition, we denote by $f_{\rm true}(k)$ the true biasing function and by $\sigma_{\rm true}^2(k)$ the variance of its estimate. The variance $\sigma^2_{\rm true}(k)$ is indicated by the error bars of the red data points in the Figs. \ref{fig:brofksm} and \ref{fig:brofkredblue}; it is usually negligible compared to $\sigma^2(k)$.
Our first metric \begin{equation} \label{eq:metricfirst} \sigma_{\rm f}^2= \left(\sum_{i=1}^{n_k}\sigma_i^{-2}\right)^{-1}\, \sum_{i=1}^{n_k}\sigma_i^{-2}\,\left(\frac{f(k_i)}{f_{\rm true}(k_i)}-1\right)^2 \end{equation} then quantifies the average fractional error over the range of $k$, weighted by the inverse statistical variance $\sigma_i^2=\sigma^2(k_i)+\sigma^2_{\rm true}(k_i)$. For $\sigma_{\rm r}^2$, we change the lower limit of $k$ to $0.3\,h\,\rm Mpc^{-1}$ to avoid a seemingly too optimistic metric: by construction, the reconstructed $r(k)$ is close to the true $r(k)=1$ of the MS data at small $k$, which makes $\sigma_i$ relatively small there and would assign too much weight to $k\lesssim0.3\,h\,\rm Mpc^{-1}$. The second metric \begin{equation} \Delta_{\rm f}= \max{\left\{ \sigma_i^{-1}\,\left|f(k_i)-f_{\rm true}(k_i)\right|:i=1\ldots n_k \right\}} \end{equation} yields the most significant deviation in units of $\sigma_i$; it is a measure of the strongest outlier within the $k$-range. Table \ref{tab:accuracy} lists $\sigma_{\rm f}$ (in per cent) and $\Delta_{\rm f}$ for all galaxy samples and redshift bins; the last rows are averages and dispersions for each table column. The table consists of two blocks, of which we summarise the left-hand columns `physical' here and the right-hand columns `generic' in Sect. \ref{sect:generic} hereafter. The values of $\sigma_{\rm b}$ are typically $5.4\pm2.9\%$ for the low-$z$ samples and slightly better, $3.6\pm1.7\%$, for the high-$z$ samples. The accuracy of $\sigma_{\rm r}$ is consistently $3.0\pm2.0\%$ for both redshift bins. For the outlier statistics, we find on average $\Delta_{\rm b}=2.2\pm1.2\sigma$ and $\Delta_{\rm r}=2.2\pm1.8\sigma$ for all redshifts, which, however, can attain high values of $6-7\sigma$ in a few cases; see high-$z$ BLUE and SM2 for instance. We find these high values to be associated with mismatches of $r(k)$ at \mbox{$k\approx10\,h\,\rm Mpc^{-1}$}. This corresponds to \mbox{$\theta_{\rm ap}\approx1^\prime$}, thus to the lower limit of the angular scales that we sample in the mock analysis (cf. bottom and top $x$-axis in Fig. \ref{fig:models}). Moreover, we quantify the statistical precision of our reconstruction at wave number $k_i$ by the ratio of $\sigma(k_i)$ and the posterior median $f(k_i)$. For an average over all galaxy samples and the reconstruction within the range \mbox{$0.05\le k\le10\,h\,\rm Mpc^{-1}$}, we find a precision of $6.5\pm2.1\%$ for $b(k)$ and $5.5\pm5.7\%$ for $r(k)$; we combine the low-$z$ and high-$z$ samples because the precision is very similar for both bins. The errors denote the RMS dispersion of the precision. In summary, we find a method accuracy of around $5\%$, with the most significant deviations at scales \mbox{$k\gtrsim10\,h\,\rm Mpc^{-1}$}, which, however, are not directly supported by the measurements and have to be extrapolated by the templates. The statistical precision of the reconstructions is typically between $5-10\%$ for our fiducial survey and lens samples.
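Both metrics are easily evaluated once the posterior median and the true biasing function are tabulated at the test wave numbers $k_i$; a minimal Python sketch, with placeholder array names, reads:
\begin{verbatim}
import numpy as np

def accuracy_metrics(f_med, f_true, var_post, var_true):
    # sigma_i^2 combines the posterior variance and the (small)
    # variance of the true biasing function at the k_i
    var = var_post + var_true
    w = 1.0 / var
    sigma_f = np.sqrt(np.sum(w * (f_med / f_true - 1.0) ** 2)
                      / np.sum(w))
    Delta_f = np.max(np.abs(f_med - f_true) / np.sqrt(var))
    return sigma_f, Delta_f
\end{verbatim}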
\subsection{Deprojection with generic templates} \label{sect:generic} We repeat the reconstruction of $b(k)$ and $r(k)$ for our mock data with the Pad\'e approximants \begin{equation} \label{eq:generic} b(k)=\frac{b_0+b_1\,k+b_2\,k^2}{1+b_3\,k+b_4\,k^2} ~;~ r(k)=\frac{1+r_1\,k+r_2\,k^2}{1+r_3\,k+r_4\,k^2+r_0\,k^3}\; \end{equation} as generic templates of the biasing functions in Sect. \ref{sect:statanalysis} (without a $\bar{n}_{\rm g}$ regularisation). The $b_i$ and $r_i$ denote ten coefficients, which we restrict to \mbox{$|b_i|,|r_i|\le100$} in the fit. These generic model templates are related to the fitting function for $b^2(k)$ in \citet{2005MNRAS.362..505C}. We found that the Pad\'e approximants are very good descriptions of the red data points in the Figs. \ref{fig:brofksm} and \ref{fig:brofkredblue}. By fitting generic templates we therefore investigate whether the foregoing inaccuracies in the reconstruction with the physical templates might be related to a model bias. If this is the case, we should obtain a better reconstruction here. We note that the particular approximant of $r(k)$ ensures \mbox{$r\to1$} for \mbox{$k\to0$}, and that in the generic templates, unlike the physical templates, $b(k)$ is independent of $r(k)$. Compared to the halo model, the generic templates produce a similar (low-$z$) or somewhat worse (high-$z$) reconstruction but are prone to more extreme deviations from the true biasing functions. The right-hand block of values `generic' in Table \ref{tab:accuracy} summarises the metrics of the reconstructions with the generic templates and compares them to the metrics with the physical templates `physical' on the left-hand side. We find an increased inaccuracy for the high-$z$ samples, especially for SM5, SM6, and RED; in particular, the reconstruction of low-$z$ RED has not improved here. The worse reconstruction for high-$z$ is because of the inability of the generic templates to extrapolate to small spatial scales, which matters more for high-$z$ where the same angular range corresponds to larger spatial scales. In a few cases, the generic templates produce very significant deviations, mostly on small scales and indicated by $\Delta_{\rm b,r}$, which are absent in the physical templates.
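For completeness, the generic templates are simple rational functions; a minimal Python sketch is given below, where the ten coefficients play the role of the template parameters $\vec{\Theta}$ in the fit.
\begin{verbatim}
import numpy as np

def b_pade(k, b0, b1, b2, b3, b4):
    # generic template for b(k); |b_i| <= 100 in the fit
    return (b0 + b1 * k + b2 * k ** 2) / (1.0 + b3 * k + b4 * k ** 2)

def r_pade(k, r0, r1, r2, r3, r4):
    # generic template for r(k); r -> 1 for k -> 0 by construction
    return ((1.0 + r1 * k + r2 * k ** 2)
            / (1.0 + r3 * k + r4 * k ** 2 + r0 * k ** 3))
\end{verbatim}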
\subsection{Errors in the galaxy-bias normalisation} \label{sect:calbias} \begin{figure*} \begin{center} \epsfig{file=fig9.ps,width=125mm,angle=-90} \vspace{-0.5cm} \end{center} \caption{\label{fig:calbias} Average error in the galaxy-bias normalisation $f_{\rm b}$ ($x$-axis) and $f_{\rm r}$ ($y$-axis). The points show, indistinguishably, the errors of all galaxy samples SM1-6, BLUE, and RED together; the point styles indicate the redshift bin and the quantity that is varied. Symbols as in the figure key indicate the low-$z$ samples, inverted symbols indicate the high-$z$ samples (e.g. solid and open circles). The `cosmo' and `sampling p(z)' data points reuse the same galaxy samples many times with random normalisation errors. The solid line marks the estimated error for the high-$z$ samples due to the baryon uncertainty. The fiducial cosmology is WMAP9. See text for more details.} \end{figure*} \renewcommand{\arraystretch}{0.8} \begin{table} \caption{\label{tab:errors}Summary of possible systematic errors and their expected impact on the reconstruction of $b(k)$ or $r(k)$ for a WMAP9 cosmology and our galaxy samples.} \begin{center} \begin{tabular}{lll} \hline\hline \\ Origin & Error $b(k)$ & Error $r(k)$ \\ \hline \\ intr. align. $|A_{\rm ia}|\approx2$ & $\lesssim5.0\%$ & $\lesssim5.0\%$ \\\\ fiducial cosmology \\ and model of $P_{\rm m}(k;\chi)$ & $2.8\%$ ($3.0\%$) & $0.4\%$ ($1.1\%$) \\\\ lens $p_{\rm d}(z)$; $\delta_\sigma=5\%$ & $2.5\%$ & $2.2\%$ ($2.7\%$) \\\\ lens $p_{\rm d}(z)$; $\delta_{z}=1\%$ & $1.9\%$ & $0.5\%$ ($1.4\%$) \\\\ source $p_{\rm s}(z)$; $\delta_{z}=1\%$ & $1.9\%$ & $0.5\%$ ($1.0\%$) \\\\ shear bias $m=1\%$ & $1.0\%$ & $0.0\%$ \\\\ shear bias $c\approx10^{-3}\%$ & $<1.0\%$ & $<1.0\%$ \\\\ source $p_{\rm s}(z)$; $\delta_\sigma=5\%$ & $0.8\%$ & $0.5\%$ ($0.3\%$) \\\\ reduced shear & $\lesssim0.5\%$ & $\lesssim0.5\%$ \\\\ sampling noise of $p(z)$ & $0.4\%$ ($0.6\%$) & $0.4\%$ ($0.5\%$) \\ \hline \end{tabular} \end{center} \tablefoot{Values in brackets are for the high-$z$ samples ($\bar{z}_{\rm d}\approx0.52$), which are only shown if they differ from the low-$z$ values ($\bar{z}_{\rm d}\approx0.36$). Sources have a mean redshift of $\bar{z}_{\rm s}=0.93$. By $\delta_{z}$ and $\delta_\sigma$ we denote the relative error in the mean redshift and in the redshift dispersion, respectively, which refer to either the lens redshift distribution, $p_{\rm d}(z)$, or that of the sources, $p_{\rm s}(z)$. We assume a constant residual shear bias $m$ here.} \end{table} \renewcommand{\arraystretch}{1.0} The ratio statistics are normalised with respect to unbiased galaxies in a fiducial model. Systematic errors in the normalisation affect the amplitude of the deprojected biasing functions. Therefore, we explore the robustness of the overall amplitude of $b(k)$ and $r(k)$ with respect to changes in the fiducial cosmology and the adopted redshift distributions in the normalisation; see Eqs. \Ref{eq:calfb} and \Ref{eq:calfr}, which are evaluated for the unbiased galaxies. We note that $f_{\rm b}$ and $f_{\rm r}$ normally show little dependence on $\theta_{\rm ap}$, so that changes in the fiducial model mainly scale the projected biasing functions up or down. Let $f_{\rm b}(\theta_{\rm ap})$ and $f_{\rm r}(\theta_{\rm ap})$ be the correct normalisation of the galaxy bias. For Fig. \ref{fig:calbias}, we then compute $f_{\rm b}^\prime(\theta_{\rm ap})$ (and $f_{\rm r}^\prime(\theta_{\rm ap})$) for variations in the normalisation parameters, and we compute the quadratic mean of the relative errors \mbox{$\delta_{\rm b}(\theta_{\rm ap})=f_{\rm b}^\prime(\theta_{\rm ap})/f_{\rm b}(\theta_{\rm ap})-1$} over the angular range \mbox{$1^\prime\le\theta_{\rm ap}\le140^\prime$}. The data points inside the figure indicate these means $\ave{\delta_{\rm b}^2(\theta_{\rm ap})}^{1/2}$ ($x$-axis) and $\ave{\delta_{\rm r}^2(\theta_{\rm ap})}^{1/2}$ ($y$-axis) for particular lens samples. To have a good representation of the scatter between possible lens-galaxy samples, we show the results for all galaxy samples SM1-SM6, RED, and BLUE in the same redshift bin together, using the same point style if they are subject to the same parameter variation. We give the normalisation errors a plus sign if the average of $\delta_{\rm b}(\theta_{\rm ap})$ is positive, and a negative sign otherwise. This flags $b(k)$ (or $r(k)$) that is overall too high (positive) or too low (negative). We apply variations relative to a default model which has: WMAP9+eCMB+BAO+$H_0$ cosmological parameters \citep{2013ApJS..208...19H}; redshift distributions as shown in Fig. \ref{fig:pofz}; a non-linear matter power-spectrum according to \citet{2012ApJ...761..152T}.
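The condensation of these angular-scale-dependent deviations into the single signed values plotted in Fig. \ref{fig:calbias} is straightforward; a minimal Python sketch, with placeholder arrays sampled at the aperture scales, reads:
\begin{verbatim}
import numpy as np

def signed_rms_error(f_prime, f):
    # quadratic mean of delta(theta) = f'/f - 1 over the aperture
    # scales, carrying the sign of the average delta (positive
    # flags an overall too high reconstruction)
    delta = f_prime / f - 1.0
    return np.copysign(np.sqrt(np.mean(delta ** 2)), np.mean(delta))
\end{verbatim}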
Inside the plot, data points have the styles shown in the figure key for the low-$z$ samples and an inverted point style for the high-$z$ samples, such as solid circles (low-$z$) and open circles (high-$z$). We vary the following parameters in the default model to quantify their impact on the normalisation. \begin{itemize} \item For the data points `cosmo all', we randomly draw combinations of cosmological parameters from an error distribution centred on the fiducial model \begin{multline} \label{eq:wmap9} \vec{\pi}=(\Omega_{\rm m},\Omega_\Lambda,\Omega_{\rm b},n_{\rm s},h,\sigma_8) \\ =(0.288,0.712,0.0472,0.971,0.6933,0.83)\;. \end{multline} In this distribution, errors are uncorrelated and Gaussian with a dispersion of \begin{equation} \sigma_\pi=(4\%,{\rm n/a},2\%,1\%,2\%,3\%) \end{equation} relative to the fiducial $\vec{\pi}$. The exception is $\Omega_\Lambda$, which we set to $\Omega_\Lambda=1-\Omega_{\rm m}$ in all realisations (a fixed \mbox{$K=0$} geometry). These errors are on the optimistic side but consistent with constraints from combined cosmological probes. In addition, for each set of parameters, we plot data points for three different transfer functions of $P_{\rm m}(k;\chi)$: \citet{1986ApJ...304...15B}, and \citet{1998ApJ...496..605E} with and without BAOs. These are combined with two different \texttt{Halofit} models of the non-linear power spectrum: \citet{Smith03} and the more accurate \citet{2012ApJ...761..152T}. By these variations we broadly account for model uncertainties in the non-linear power spectrum, which produces extra scatter in the plot. In particular, the 10-20\% difference between the two versions of \texttt{Halofit} in the regime $k\gtrsim1\,h\,\rm Mpc^{-1}$ accounts to some extent for the theoretical uncertainty of baryons on the small-scale power spectrum \citep[e.g.][]{2017arXiv170703397J,2016MNRAS.463.3326F,2015MNRAS.450.1212H,2011MNRAS.417.2020S}. We find that errors in the cosmological parameters or the non-linear power spectrum mainly affect the normalisation of $b(k)$, which can be off by about $3.0\%$ (68\% confidence level, CL hereafter). The error in $r(k)$ is within $1.1\%$ (68\% CL) for the high-$z$ samples and smaller, $0.4\%$ (68\% CL), for the low-$z$ samples (solid symbols). The straight line inside the figure indicates the locus of errors for the high-$z$ samples that are produced by the baryon uncertainty in the non-linear power spectrum. \item For `cosmo $\Omega_{\rm m}$', we only vary $\Omega_{\rm m}$ in the cosmological parameters, with the foregoing dispersion. This results in a distribution of data points that is very similar to `cosmo all'. For comparison, `cosmo $\sigma_8$' varies only $\sigma_8$. The scatter is now restricted to a small region. Therefore, the normalisation error owing to cosmological parameters is mainly explained by the variations in $\Omega_{\rm m}$. \item For the data points `sampling $p(z)$', we add random shot noise to the redshift distributions. The idea here is that redshift distributions are estimated from a sub-sample of galaxies, which gives rise to sampling noise in the estimated distributions used for the normalisation; see e.g. \citet{2017MNRAS.465.1454H}, who use a weighted sample of spectroscopic redshifts to model the redshift PDF of the full galaxy sample. To emulate the sampling shot-noise, we randomly draw $n$ redshifts \mbox{$z\sim p(z)$} from the true $p(z)$ to build a finely binned histogram of a noisy redshift distribution ($\Delta z=0.015$). We then employ this histogram for $f^\prime_{\rm b}$ and $f^\prime_{\rm r}$.
As fiducial values for our $1024\,\rm deg^2$ survey, we adopt $n=10^4$ for the lenses and $n=10^5$ for the sources. These fiducial values imply that we estimate $p(z)$ from spectroscopic redshifts of \mbox{$\sim0.5\%$} of the sources and roughly $1\%, 2\%, 20\%, 1\%$ of the lenses in the samples SM1, SM4, SM6, and RED/BLUE, respectively. The result is a similar scatter for the low-$z$ and the high-$z$ samples in Fig. \ref{fig:calbias}. The error is typically within $0.5\%$ for $b(k)$ and $r(k)$ (68\% CL). \item The data points `shift $p_{\rm d}(z)$' vary the mean of the lens redshift distribution. For this, we systematically shift $z\mapsto z\,(1+\delta_z)$ by $\delta_z=\pm2\%$, which is twice as large as the typical error on the mean redshift reported in \citet{2017MNRAS.465.1454H}. The impact differs for the low-$z$ (solid circles) and the high-$z$ samples (open circles). For systematically higher redshifts in low-$z$, that is \mbox{$\delta_z>0$}, $b(k)$ is too large and $r(k)$ is too low. For high-$z$ and \mbox{$\delta_z>0$}, we find that both $b(k)$ and $r(k)$ are too high in amplitude. For \mbox{$\delta_z<0$}, the effects are exactly reversed. The overall systematic normalisation error is nevertheless not greater than typically $2\%$ for $b(k)$ and $1-2\%$ for $r(k)$. \item The data points `width $p_{\rm d}(z)$' vary the width of the lens redshift distribution. We emulate this by mapping $p_{\rm d}(z)\mapsto p_{\rm d}(z)^{1/(1-\delta_\sigma)^2}$ to a new PDF that is then used for the normalisation. For a Gaussian density $p_{\rm d}(z)$, this maps the dispersion to $\sigma\mapsto\sigma\,(1-\delta_\sigma)$ while leaving the mean and the Gaussian shape of the new PDF unchanged. For skewed distributions, \mbox{$\delta_\sigma\ne0$} also moves the mean of the PDF. To account for this unwanted (small) side effect, we shift every PDF to ensure that it retains its original mean redshift. We consider \mbox{$\delta_\sigma=\pm5\%$} here. The effect of squeezing, that is \mbox{$\delta_\sigma>0$}, is similar for low-$z$ and high-$z$: $b(k)$ is too low and $r(k)$ is too high, with errors of around $2-3\%$ for $b(k)$ and $r(k)$. A stretching, $\delta_\sigma<0$, has the reverse effect on both redshift bins. \item The data points `shift $p_{\rm s}(z)$' and `width $p_{\rm s}(z)$' explore the effect of errors in the mean or width of the source $p_{\rm s}(z)$. Shifting by \mbox{$\delta_z=+2\%$} produces a too high $b(k)$ for low-$z$ and high-$z$ ($1.9\%$), a too high $r(k)$ for low-$z$ ($0.5\%$), and a too low $r(k)$ for high-$z$ ($1.0\%$). The reverse behaviour is present for systematically lower redshifts with \mbox{$\delta_z=-2\%$}. Changes in the width of the distribution with \mbox{$\delta_\sigma=\pm5\%$} have a $0.5\%$ effect on $b(k)$ and $r(k)$, with the low-$z$ samples being slightly less affected: a systematically wider distribution gives a too low $b(k)$ and a too high $r(k)$; the reverse effects apply for systematically narrower distributions, that is for \mbox{$\delta_\sigma>0$}. \item The intrinsic alignment of sources contributes to both $\ave{M^2_{\rm ap}}$ and $\ave{{\cal N}M_{\rm ap}}$ and can thereby have an impact on $b_{\rm 2D}$ and $r_{\rm 2D}$. We account for this in the normalisation by II and GI models; see Sect. \ref{sect:IIandGI}. If unaccounted for, as assumed here, we bias $b_{\rm 2D}$ by the error in $\ave{M_{\rm ap}^2}^{1/2}_{\rm th}$ that is used in the normalisation $f_{\rm b}$, Eq. \Ref{eq:calfb}. This error is plotted in Fig.
\ref{fig:fbrGIandII} for varying values of $A_{\rm ia}$ and angular scales $\theta_{\rm ap}$. The normalisation error in $r_{\rm 2D}$ is determined by the error in $\ave{{\cal N}M_{\rm ap}}_{\rm th}^{-1}\,\ave{M_{\rm ap}^2}^{1/2}_{\rm th}$ used for $f_{\rm r}$, Eq. \Ref{eq:calfr}, which is plotted in Fig. \ref{fig:frGIhighz} for SM4 high-$z$ as an example; the errors of the other high-$z$ samples are comparable. For the low-$z$ samples, the overlap of lens and source redshifts is small, so that the error in $\ave{{\cal N}M_{\rm ap}}_{\rm th}^{-1}$ is negligible compared to the error in $\ave{M_{\rm ap}^2}^{1/2}_{\rm th}$. Therefore, the normalisation error for $r_{\rm 2D}$ in the low-$z$ samples is approximately that of $b_{\rm 2D}$ in Fig. \ref{fig:fbrGIandII}. For \mbox{$|A_{\rm ia}|\lesssim2$}, the normalisation error of $b_{\rm 2D}$ and $r_{\rm 2D}$ is typically within $\pm5\%$ at scales \mbox{$\theta_{\rm ap}\gtrsim1^\prime$}. \end{itemize} A summary of the normalisation errors and their estimated magnitudes is listed in Table \ref{tab:errors}. We find that the response to errors in the redshift distributions is approximately linear for $\delta_{z}$ and $\delta_\sigma$ within several per cent, so that the quoted values can be scaled. \begin{figure} \begin{center} \epsfig{file=fig10.ps,width=65mm,angle=-90} \end{center} \caption{\label{fig:fbrGIandII} Systematic relative errors in $b_{\rm 2D}(\theta_{\rm ap})$ (low-$z$ and high-$z$) and $r_{\rm 2D}(\theta_{\rm ap})$ (only low-$z$) when II and GI terms are ignored in the normalisation of the galaxy bias. Different lines show predictions for different values of $A_{\rm ia}$ with sources as in our mock survey. The fiducial cosmology is WMAP9.} \end{figure} \begin{figure} \begin{center} \epsfig{file=fig11.ps,width=65mm,angle=-90} \end{center} \caption{\label{fig:frGIhighz} As in Fig. \ref{fig:fbrGIandII} but now for $r_{\rm 2D}(\theta_{\rm ap})$ high-$z$. Shown are results from SM4, but the values are similar for the other samples.} \end{figure} \subsection{Shear bias and reduced shear} \label{sect:othersys} As another source of systematic error, we consider a residual bias in the shear estimators that has not been properly corrected for in the lensing pipeline. Following \citet{2012MNRAS.423.3163K} (K+12 hereafter), we quantify a shear bias by $\ave{\gamma}=(1+m)\,\gamma+c$ for the average estimated shear $\ave{\gamma}$ in an ensemble of sources that are subject to the same $\gamma$: $m$ is the so-called multiplicative bias and $c$ is the additive bias. For a crude estimate of the impact of $m$ on the measurement of $b_{\rm 2D}(\theta_{\rm ap})$ and $r_{\rm 2D}(\theta_{\rm ap})$, we assume a constant and real-valued $m$. A value of \mbox{$m\ne0$} produces a bias of $1+m$ in the measured aperture statistics $\ave{M_{\rm ap}^2}^{1/2}$ and $\ave{{\cal N}M_{\rm ap}}$. Therefore, applying our methodology while ignoring $m$ will scale the amplitude of $b(k)$, Eq. \Ref{eq:b2dobs}, by $(1+m)^{-1}=1-m+{\cal O}(m^2)$, but it will not change $r(k)$ in Eq. \Ref{eq:r2dobs}. Contemporary lensing techniques reach a typical accuracy of \mbox{$|m|\approx1\%$}; therefore, we expect a similarly small systematic error for $b(k)$ (K+12). A residual additive bias $c$ does not affect the aperture statistics if it is constant. If, on the other hand, $c$ varies at a scale within the sensitive $\ell$-range of the aperture filter, we could have significant contributions to the measured $\ave{M_{\rm ap}^2}$, depending on the power of the $c$-fluctuations. Our polynomial filter in Eq.
\Ref{eq:apfilter} has its maximum sensitivity at the angular wave number \mbox{$\ell_{\rm c}\approx4.25/\theta_{\rm ap}\approx1.5\times10^4\,(\theta_{\rm ap}/1^\prime)^{-1}$}, or angular scale $\theta_{\rm c}=2\pi/\ell_{\rm c}\approx1.44\,\theta_{\rm ap}$ \citep{1998A&A...334....1V}. The typical residual amplitudes of $c$ \emph{after} a calibration correction of $\xi_\pm$ are of the order of $10^{-5}$ (K+12; Appendix D4 in \citealt{2017MNRAS.465.1454H}), so that systematic errors owing to $c$-fluctuations are probably below a per cent for \mbox{$\ave{M^2_{\rm ap}}^{1/2}\gtrsim10^{-3}$}, which is the case for \mbox{$\theta_{\rm ap}\lesssim2\,\rm deg$} and typical sources with \mbox{$z_{\rm s}\approx1$}; see the data points in Fig. \ref{fig:GIandII}. The statistic $\ave{{\cal N}M_{\rm ap}}$ is not affected by the additive shear bias in the likely absence of correlations between lens positions and fluctuations of $c$, or is presumably corrected for by subtracting the correlation between random lens positions and shear in the data; see the estimator in Eq. \Ref{eq:estggl}. With regard to reduced shear, our analysis assumes that the $\epsilon_i$ are estimates of the shear $\gamma(\vec{\theta}_i)$, whereas they are in reality estimates of the reduced shear \mbox{$g_i=\gamma_i/(1-\kappa_i)$}. While $\ave{\epsilon_i}=\gamma_i$ is a good approximation for weak gravitational lensing and substantially simplifies the formalism in Sect. \ref{sect:projectedbias}, we will have some systematic error. To quantify this error, we redo the reconstruction of the biasing functions for a new shear catalogue where the intrinsic source ellipticities are sheared by $g_i$ rather than $\gamma_i$; source positions and intrinsic shapes do not match between the old and the new catalogues. For the new catalogues, we obtain a set of values $\sigma_{\rm f}^{\rm red}$, Eq. \Ref{eq:metricfirst}, which we statistically compare to the previous values $\sigma_{\rm f}$ in Table \ref{tab:accuracy} by fitting an average parameter $\delta_{\rm red}$ for the relative difference, defined by \mbox{$\sigma_{\rm f}^{\rm red}=(1+\delta_{\rm red})\,\sigma_{\rm f}$}, to all samples and redshift bins. For all values of $\sigma_{\rm b}$ and $\sigma_{\rm r}$ combined, we find no significant differences between the new and the old shear catalogues, that is, $\delta_{\rm red}$ is consistent with zero; the upper limit is $\delta_{\rm red}\lesssim13\%$ ($68\%$ CL). For an average of $\ave{\sigma_{\rm f}}=3.8\%$, the additional inaccuracy due to reduced shear is therefore less than \mbox{$13\%\times\ave{\sigma_{\rm f}}\approx0.5\%$}. \subsection{Garching-Bonn Deep Survey} \label{ap:gabods} Finally, we apply our procedure in a first demonstration to data from the \mbox{GaBoDS}~ (\citealt{2007A&A...461..861S}, SHS07 hereafter; \citealt{2007A&A...468..859H}). Because of its comparatively small effective survey area of roughly 15 square degrees, the statistical power of \mbox{GaBoDS}~ is no longer competitive with measurements in contemporary surveys. Nevertheless, the results presented here shed some new light on the nature of the lens galaxies in SHS07 and round off the past \mbox{GaBoDS}~ analysis. We plan to apply our methodology to more recent lensing data in an upcoming paper. \begin{figure}[htb] \begin{center} \epsfig{file=fig13.ps,width=100mm,angle=0} \end{center} \caption{\label{fig:gabodscorr} Correlation matrix $C_{ij}$ of measurement errors for three kinds of aperture statistics of FORE-I lenses in the \mbox{GaBoDS}~ analysis.
The integers on the two axes inside the matrix refer to either the $i$ or $j$ index. Values $1\le k\le6$ for $k$ being either $i$ or $j$ refer to errors of $\ave{{\cal N}^2(\theta_k)}$, values of $7\le k\le12$ to $\ave{{\cal N}M_{\rm ap}(\theta_{k-6})}$, and values $13\le k\le18$ to $\ave{M_{\rm ap}^2(\theta_{k-12})}$. The aperture scales $\{\theta_k/{\rm arcmin}\}$ are $\{2,3.3,5.3,8.7,14.1,23\}$. The matrix is estimated from 52 jackknife samples.} \end{figure} As lens sample in \mbox{GaBoDS}~ we choose FORE-I galaxies, which comprise \mbox{$R\le21.0$} flux-limited galaxies with mean redshift $\bar{z}_{\rm d}=0.35$; the RMS dispersion of the lens redshifts is 0.16. The source galaxies are flux-selected between $21.5\le R\le24.0$ and have $\bar{z}_{\rm s}=0.68$; see Figure 3 in SHS07 for the redshift distributions of lenses and sources in these samples. For the estimators, we bin the two-point correlation functions \Ref{eq:estxipm}-\Ref{eq:estomega} between 7 arcsec and 46 arcmin using 4100 linear bins and merge the catalogues of the $n_{\rm patch}=52$ \mbox{GaBoDS}~ fields also used in SHS07. In contrast to SHS07, we only use six aperture scales between 2 and 23 arcmin, equidistant on a logarithmic scale, because of the strong correlation of errors between similar aperture scales. The correlation matrix of the statistical (jackknife) errors can be found in Fig. \ref{fig:gabodscorr}. Furthermore, we normalise the new measurements by a WMAP9 cosmology, Eq. \Ref{eq:wmap9}. In contrast to the foregoing analyses with our mock MS data, for which we measure the aperture statistics up to degree scales, we here have to use Eq. \Ref{eq:lsbias} to extrapolate the large-scale bias $b_{\rm ls}$, which is then no longer a free parameter. For the halo bias-factor $b_{\rm h}(m)$, needed in this extrapolation, we employ the fitting formula in \citet{2005ApJ...631...41T}. Owing to the lack of information on the IA of \mbox{GaBoDS}~ sources, we assume \mbox{$A_{\rm ia}=0$}. A value of \mbox{$|A_{\rm ia}|\lesssim2$} could therefore shift the amplitude of $b_{\rm 2D}$ by up to 10 to 15 per cent, mainly because of the GI term, and that of $r_{\rm 2D}$ by up to 2 per cent. \begin{figure*} \epsfig{file=fig12a.ps,width=75mm,angle=-90} \epsfig{file=fig12b.ps,width=75mm,angle=-90} \caption{\label{fig:gabods} \emph{Left}: Posterior model of $r_{\rm 2D}(\theta_{\rm ap})$ (top) and $b_{\rm 2D}(\theta_{\rm ap})$ (bottom) based on the \mbox{GaBoDS}~ measurements \mbox{FORE-I} (shaded regions with 68\% and 95\% PI). Shown as black open squares are the median values and a 68\% interval around the median for the measured $b_{\rm 2D}$ and $r_{\rm 2D}$; the open circles indicate the mean. The red-star data points H+02 show the measurements by \citet{2002ApJ...577..604H} for comparison. \emph{Right}: 68\% PI posterior of the excess HOD variance $V(m)$ with open box for the mass scale of the pivotal mass $m_{\rm piv}$ (top); 68\% PI posterior of the mean biasing function $b(m)$ and $f_{\rm cen}$ as open triangle (bottom). The fiducial model has WMAP9 parameters.} \end{figure*} Our updated measurements are shown in the left panel of Fig. \ref{fig:gabods} as $b_{\rm 2D}$ and $r_{\rm 2D}$ by the black data points designated SHS+07. To obtain these points from the observed aperture-moment statistics, we randomly draw realisations of the aperture statistics from a Gaussian likelihood based on our jackknife data covariance.
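A minimal Python sketch of this Monte-Carlo propagation is given below; it assumes that the measured aperture moments are stacked as in the data vector $\vec{d}$, and that $b_{\rm 2D}$ and $r_{\rm 2D}$ follow the ratio definitions $b_{\rm 2D}\propto\ave{{\cal N}^2}^{1/2}\ave{M^2_{\rm ap}}^{-1/2}$ and $r_{\rm 2D}\propto\ave{{\cal N}M_{\rm ap}}\,\ave{{\cal N}^2}^{-1/2}\ave{M^2_{\rm ap}}^{-1/2}$; all function and array names are placeholders.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)

def bias_realisations(d_hat, N_hat, f_b, f_r, n_d, n_real=10000):
    # Gaussian realisations of the stacked aperture statistics
    # (<N^2>, <N M_ap>, <M_ap^2>) with the jackknife covariance
    d_i = rng.multivariate_normal(d_hat, N_hat, size=n_real)
    nn, nm, mm = d_i[:, :n_d], d_i[:, n_d:2 * n_d], d_i[:, 2 * n_d:]
    keep = np.all((nn > 0) & (mm > 0), axis=1)  # drop unphysical draws
    nn, nm, mm = nn[keep], nm[keep], mm[keep]
    b2d = f_b * np.sqrt(nn / mm)                # normalised bias factor
    r2d = f_r * nm / np.sqrt(nn * mm)           # normalised correlation
    pct = lambda a: np.percentile(a, [16, 50, 84], axis=0)
    return pct(b2d), pct(r2d)                   # median and 68% interval
\end{verbatim}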
The open squares show the median and 68 percentiles of the normalised bias parameters from this Monte-Carlo process, computed with Eqs. \Ref{eq:b2dobs} and \Ref{eq:r2dobs} for each realisation; the open circles are the mean of the realisations, which is different from the median owing to the skewness of the error distribution. The shaded regions indicate the $68\%$ and $95\%$ PI of the posterior (projected) biasing functions. The red stars are measurements in \mbox{VIRMOS/DESCART}, broadly consistent with ours, for flux-limited galaxies with a similar selection function \citep{2002ApJ...577..604H}. The right panel of Fig. \ref{fig:gabods} depicts the posterior of the template parameters that provide a physical interpretation of the galaxy bias. We take from here that the scale-dependence of the galaxy bias mainly originates in a scale-dependence of $b(m)$: between halo masses of $10^{13}$ to $10^{14}\,h^{-1}\msol$ there is a relative scarcity of galaxies, which is qualitatively comparable to the BLUE low-$z$ sample (see Fig. \ref{fig:deltag}). The HOD variance is consistent with a Poisson model, that is \mbox{$V(m)=0$}, albeit only weakly constrained. The 68\% PI of the pivotal halo mass is \mbox{$m_{\rm piv}=10^{11.48^{+0.72}_{-0.81}}\,h^{-1}\msol$}, and the fraction \mbox{$f_{\rm cen}=0.50\pm0.31$} of halos open for central galaxies is essentially the uniform prior, which has the standard deviation $1/\sqrt{12}$ and the mean $0.5$. The posterior galaxy number density is \mbox{$\bar{n}_{\rm g}=0.19^{+0.33}_{-0.13}\,h^3\rm Mpc^{-3}$}. \begin{figure} \begin{center} \epsfig{file=fig14.ps,width=80mm,angle=-90} \end{center} \caption{\label{fig:gabodsbrofk} Reconstructed biasing functions of FORE-I galaxies in \mbox{GaBoDS}. Shown are the 68\% and 95\% PI of $b(k)$ in the bottom panel and that of $r(k)$ in the top panel. The biasing functions are an average over the redshift range $0.34\pm0.16$ for a WMAP9 cosmology. The red data points show the biasing functions of BLUE low-$z$, which have a similar $b(m)$.} \end{figure} Fig. \ref{fig:gabodsbrofk} displays the posterior distribution of the deprojected biasing functions and the 68\% PI for the FORE-I galaxies. The biasing functions are an average over the redshift range covered by the lens galaxies. The \mbox{GaBoDS}~ data probe primarily the one-halo regime \mbox{$\theta_{\rm ap}\lesssim20\,\rm arcmin$}; the large-scale bias of \mbox{$b_{\rm ls}=0.92^{+0.04}_{-0.03}$} visible at $k\ll1\,h\,\rm Mpc^{-1}$ is extrapolated. The red data points show the biasing functions of BLUE low-$z$ for a qualitative comparison. \section{Discussion} \label{sect:discussion} In this study, we have outlined and successfully tested a refined technique to measure in contemporary lensing surveys the scale-dependent galaxy bias down to non-linear scales of \mbox{$k\sim10\,h\,\rm Mpc^{-1}$} for lens galaxies at \mbox{$z\lesssim0.6$}. To test our reconstruction technique, we employ a fiducial survey with a sky coverage of $\sim1000\,\rm deg^2$, and photometry and survey depth as in CFHTLenS. To construct realistic samples of lenses and sources, we have prepared mock catalogues that are consistent with those used in SES13 and \cite{2016arXiv160808629S}. Despite some variations in survey depth and area, these survey parameters are similar to the ongoing Kilo-Degree Survey (KiDS), the Dark Energy Survey (DES), or the survey with the Hyper Suprime-Cam \citep{2015MNRAS.454.3500K,2016PhRvD..94b2002B,2017arXiv170405858A}.
If the galaxy-bias normalisation is perfect, our technique applied to these data can achieve a statistical precision within the range of $5-10\%$ ($68\%$ CL), if similar lens and source samples are targeted, and a slightly better accuracy of $3-7\%$ ($68\%$ CL); see Table \ref{tab:accuracy}. For the high-$z$ samples, the accuracy is somewhat better, $3-5\%$. On the other hand, it is clear from our overview in Table \ref{tab:errors} that the accuracy of the galaxy-bias normalisation is in fact limited, mainly by our knowledge of the intrinsic alignment of sources, the cosmological parameters, and the galaxy redshift-distributions. With a broad knowledge of \mbox{$|A_{\rm ia}|\lesssim2$} and the specifications for the normalisation errors in Table \ref{tab:errors}, we conclude that systematic errors would potentially degrade the overall accuracy to approximately $15\%$ for $b(k)$ and $10\%$ for $r(k)$. For a fully controlled intrinsic alignment of sources, these errors could be reduced by $5\%$. An additional reduction by $3\%$ may be possible by controlling the redshift distributions (their mean and variance) in the normalisation to $1\%$ accuracy. For the fiducial cosmology, the knowledge of $\Omega_{\rm m}$ is of most importance, while the normalisation of the ratio statistics is less affected by $\sigma_8$. For a future method improvement, various problems could be of interest: (i) approximations in the formalism or estimators of Sect. \ref{sect:projectedbias}; (ii) an inaccurate statistical model for the likelihood function; (iii) a model bias in the templates. We discuss a few of these problems in the following. With regard to our statistical model, we indeed find evidence for deviations from a Gaussian model of the joint aperture statistics, which is explicitly assumed in Eq. \Ref{eq:likelihood} (see Appendix \ref{sect:nongauss}). However, the magnitude of a bias owing to a Gaussian model is not clear and requires more research. For example, deviations from a Gaussian distribution in broadly related cosmological analyses with the aperture mass $M_{\rm ap}$ are reported in \cite{2015MNRAS.449.1505S} and \cite{2009A&A...504..689H}, where non-Gaussian corrections to the likelihood produce insignificant changes in one case but not in the other. Interestingly for our data, the most inaccurate reconstruction (for small $k$) is that of RED low-$z$, which shows a strong indication of a non-Gaussian error-distribution for $\ave{{\cal N}^2}$ at large angular scales; see Table \ref{tab:gausstest}. Moreover, our likelihood model employs an error covariance that we estimate by the jackknife technique. The jackknife technique is known to underestimate cosmic variance, in particular for angular scales comparable to the size of the sub-fields used for the jackknife sample \citep{2016MNRAS.456.2662F}. However, this problem is partly addressed in our analysis by using ratio statistics, which are less affected by cosmic variance \citep{2011MNRAS.416.3009B}. While this may not be sufficient for future surveys, it seems to be so for contemporary surveys because cosmic variance is included in our assessment of the reconstruction accuracy. Finally, a model bias in our templates for $b(k)$ and $r(k)$ is arguably unlikely, at least for our simulated galaxy samples, because the purely generic models in Eq. \Ref{eq:generic} do not produce a more accurate reconstruction of the biasing functions, although they are excellent fits to the true biasing functions (see Table \ref{tab:accuracy}).
Nevertheless, a relevant model bias could arise through our assumption of a non-evolving galaxy bias for galaxy samples with a distance distribution that is broad compared to the galaxy-bias evolution. Our physical templates for the biasing functions $b(k)$ and $r(k)$ are also insightful for a basic physical interpretation of the scale-dependence of galaxy bias. On the one hand, the physical parameters in the physical templates describe the HOD of the actual galaxy population. On the other hand, these HOD parameters have only a moderate accuracy because our relatively simple halo model lacks the implementation of recently identified effects such as halo exclusion, non-linear or stochastic halo clustering, assembly bias, galaxy conformity, or a scale-dependent halo-bias function \citep{2013PhRvD..88h3507B,2007MNRAS.377L...5G,2013MNRAS.430.1447K,2005ApJ...631...41T}. In addition, our model has a comparably simplistic treatment of central galaxies. According to \cite{2012MNRAS.426..566C}, by taking ratios of the aperture statistics we are, however, probably less sensitive to these shortfalls in the halo model. We therefore expect the HOD parameters in our templates to be no more accurate than $10-20\%$ compared to the true HOD in the lens sample, based on the biases reported in the cited literature. We stress that this does not necessarily pose a problem for the deprojection as long as the templates are good fits to the true biasing functions. With regard to a basic interpretation of galaxy bias, we nevertheless take from the discussion in Sect. \ref{sect:modeldiscussion} that central galaxies and a non-Poisson HOD variance produce a scale-dependent bias most prominently towards small scales, namely in the regime that is dominated by low-occupancy halos with $m\lesssim m_{\rm piv}$. A strong scale-dependence over a wider range of spatial or angular scales and a non-monotonic behaviour may be produced by a mean biasing function $b(m)$ that varies with halo mass $m$; in particular, only $b(m)$ affects the large-scale bias. Interestingly, the effect of central galaxies here is different from that of a non-Poisson variance: central galaxies increase both $b(k)$ and $r(k)$ for larger $k$, whereas a non-Poisson variance induces opposite trends for $b(k)$ and $r(k)$. Therefore, the measurement of biasing functions can in principle constrain both $b(m)$ and the excess variance $V(m)$ to test galaxy models, although predictably with limited accuracy in contemporary surveys (see Fig. \ref{fig:Vm}). A demonstration of our reconstruction technique with data from \mbox{GaBoDS}~ suggests that the \mbox{$R\le21$} flux-limited sample of lens galaxies FORE-I consists mainly of blue galaxies in the field. Fig. \ref{fig:gabodsbrofk} reports our reconstruction of the biasing functions for the FORE-I sample in \cite{2007A&A...461..861S}. The physical parameters in the right panel of Fig. \ref{fig:gabods} show that these galaxies tend to avoid halos in the broad mass-range $10^{13}-10^{14}\,h^{-1}\,\msol$ and thereby produce the relatively low (mean) values of \mbox{$b_{\rm 2D}\approx0.8$} and \mbox{$r_{\rm 2D}\approx0.6$} and their scale-dependence between a few and 20 arcmin (left panel); see also the measurements by H+02 for similar lens galaxies with comparable results. Consequently, the majority of them are presumably field and group galaxies. The reconstructed biasing functions also broadly match those of BLUE low-$z$, which supports this interpretation.
Clearly, the BLUE low-$z$ sample does not have the same selection function as FORE-I, so this comparison is certainly only qualitative. For a quantitative test of galaxy models with more recent galaxy surveys, simulated and observed galaxies have to be carefully selected to obtain consistent samples. If this succeeds, both our modest demonstration with the $15\,\rm deg^2$ \mbox{GaBoDS}~ data and the multiplicity of biasing functions visible in Figs. \ref{fig:brofksm} and \ref{fig:brofkredblue} promise useful constraints for galaxy models. \section*{Acknowledgements} We thank Hananeh Saghiha for preparing the RED and BLUE galaxy samples. We also thank Catherine Heymans and Indiarose Friswell for comments on the shear bias, and Peter Schneider for general comments on the paper. This work has been supported by Collaborative Research Center TR33 `The Dark Universe' and by the Deutsche Forschungsgemeinschaft through the project SI 1769/1-1. Patrick Simon also acknowledges support from the German Federal Ministry for Economic Affairs and Energy (BMWi) provided via DLR under project no. 50QE1103. Stefan Hilbert acknowledges support by the DFG cluster of excellence ‘Origin and Structure of the Universe’ (\url{www.universe-cluster.de}). \bibliographystyle{aa}
\section{Introduction} The strong maximum principle of Eberhard Hopf, often known as Hopf's lemma \cite{2}, is a classical and fundamental result in the theory of second order elliptic partial differential equations. Its main idea is that \emph{if a function satisfies a second order partial differential inequality of a certain kind in a domain of $R^n$ and attains a maximum in the domain, then the function is constant}. Hopf's lemma has been generalized to describe the behavior of the solution to an elliptic problem as it approaches a point on the boundary where its maximum is attained. Its history can be traced back to the maximum principle for harmonic functions. In the past decade this lemma has been generalized to a strong maximum principle for singular quasi-linear elliptic differential inequalities (\cite{7}). For a while it was thought that Hopf's maximum principle applies only to linear differential operators. In the later sections of his original paper, however, Hopf considered a more general situation which permits certain nonlinear operators and, in some cases, leads to uniqueness statements in the Dirichlet problem for the mean curvature operator and the Monge-Amp\`{e}re equation. In the first part of this paper, we prove a Hopf's lemma for a nonlinear nonlocal pseudo-differential operator -- the fractional p-Laplacian. Nonlocal fractional operators, in particular the fractional Laplacian, have gained a lot of popularity among researchers working in a variety of fields. For instance, the fractional Laplacian has been utilized to model the dynamics of Hamiltonian chaos in astrophysics (see \cite{10}), random motions such as Brownian motion and the Poisson process in physics (see \cite{11} and \cite{12}), jump processes in finance and probability (see \cite{13}), as well as acoustic waves in mechanics. In the study of diffusion processes, it has been used to derive heat kernel estimates for a large class of symmetric jump-type processes (see \cite{14}, \cite{15}). The fractional Laplacian has also been applied to the study of game theory, image processing, L\'{e}vy processes, optimization, and so on. Readers who are interested in applications of the fractional Laplacian may refer to \cite{16}, \cite{17}, \cite{18} and the references therein. Interest in fractional operators has continued to grow in this decade. More and more results, whose counterparts are powerful tools in elliptic PDE analysis, have been proved in the fractional setting. The generalization, however, is no small feat due to the essential difference in how fractional operators and traditional differential operators are defined. In light of this, let us take a closer look at the fractional p-Laplacian. Let $s\in (0,1)$ and $p>1$. The fractional p-Laplacian is defined as \begin{eqnarray}\nonumber (-\Delta)^s_p u(x)&=&C_{n,s,p}\lim_{ \varepsilon\rightarrow 0}\int_{R^n\backslash B_\varepsilon(x)} \frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{n+ps}}dy\\ \label{20171041} &=&C_{n,s,p}PV\int_{\R^n}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{n+ps}}dy, \end{eqnarray} where $PV$ stands for the Cauchy principal value. To ensure that the integral in (\ref{20171041}) is well defined, we assume that \[u\in C^{1,1}_{loc}(\Omega)\cap L_{sp}(R^n) \] with \[L_{sp}=\{u\in L^1_{loc}(R^n) \mid \int_{\R^n}\frac{|u(x)|^{p-1}}{1+|x|^{n+sp}}dx<\infty\}.\] When $s=1$ in Eq. (\ref{20171041}), it becomes the p-Laplacian.
When $p=2$, Eq. (\ref{20171041}) becomes the nonlocal fractional Laplacian $(-\lap)^s$. A quick observation of the integral domain $R^n$ points to a characteristic shared among such integro-differential operators: different from traditional differential operators, such as the Laplace operator, they are not locally defined. To give an example of what is new in nonlocal problems compared with local ones, we consider the Dirichlet and the Neumann problems on a bounded domain $\Omega\subset R^n$. To study the Laplacian problems, we require information about the solutions on the boundary $\partial\Omega$. But this is not enough for the fractional problems, which demand knowledge of the solutions on both $\partial\Omega$ and $R^n\setminus\bar{\Omega}$. This raises a natural question of how to impose appropriate boundary conditions in different cases so that the solutions can be extended in a way that preserves proper regularity in the whole space $R^n$. The challenge is especially pronounced in computer-based simulations, given that there is a limited amount of data one can gather over time. On top of this, when $p\neq2$, the complexity increases because nonlinearity appears in the numerator. In this paper, we are interested in fractional p-Laplacian problems with Dirichlet boundary conditions. Our first main result is a Hopf's lemma in a half-space. So far there are a few interesting results on Hopf's maximum principle in the fractional setting. In \cite{1}, Caffarelli et al. quoted a generalized Hopf's lemma for the smooth solution to a harmonic fractional equation on a smooth domain $\Omega \subseteq R^n$. Either by the Harnack inequality or by the Riesz potential, they claimed that if there is a point $X_0\in \partial\Omega$ for which $v(X_0)=0$, then there exists $\lambda> 0$ such that $v(x)\geq \lambda ((x-X_0)\cdot \nu (X_0))^\alpha$, where $\nu (X_0)$ is the inner normal to $\partial\Omega$ at $X_0$. In \cite{8}, Greco and Servadei considered \begin{equation*} (-\lap)^s u(x)\geq c(x)u(x), \quad x \in\Omega. \end{equation*} Assuming that $c(x)\leq 0$ in a bounded domain $\Omega$, they derived that \begin{equation}\label{201710251} \inf \frac{u(x)}{(dist(x, \partial\Omega))^s}>0, \mbox{ as } x \ra \partial\Omega. \end{equation} Quite recently, Chen and Li \cite{5} proved a Hopf's lemma in terms of the boundary derivative for anti-symmetric functions on a half space through an elementary yet rather delicate analysis. \begin{lem}[Chen-Li] Assume that $w \in C^3_{loc}(\bar{\Sigma})$, $\overline{\underset{x \ra \partial\Sigma}{\lim}}\, c(x)=o(\frac{1}{[dist(x, \partial\Sigma)]^2})$, and \begin{equation*} \left\{\begin{array}{ll} (-\lap)^s w(x)+c(x)w(x)=0, & in \;\Sigma,\\ w(x)>0, & in \;\Sigma,\\ w(x^\la)=-w(x), &in\; \Sigma. \end{array} \right. \end{equation*} Then \begin{equation*} \frac{\partial w}{\partial\nu}(x)<0, \quad x \in \partial\Sigma. \end{equation*} \end{lem} In \cite{4}, Del Pezzo and Quaas considered a fractional p-Laplacian problem on a bounded domain $\Omega$ satisfying the interior ball condition: \begin{equation}\label{201710181} (-\lap)^s_p u=c(x)|u|^{p-2}u, \quad x \in \Omega. \end{equation} Under certain assumptions on $c(x)$, they obtained a result similar to that in (\ref{201710251}) for weak super-solutions of (\ref{201710181}). Following the spirit of \cite{5}, we present a Hopf's lemma for $(-\lap)^s_p$ via the boundary derivative.
Let \[T_\lambda=\{x\in \R^n \mid x_1=\lambda\}, \quad \lambda \in \R,\] be the moving planes, \[\Sigma_\lambda=\{x\in \R^n \mid x_1>\lambda\}\] be the region to the right of the plane $T_\lambda $, \[ x^\lambda=(2\lambda-x_1,x_2,\cdots,x_n)\] be the reflection of $x$ about $T_\lambda$, and \[w_\lambda(x)=u_\lambda(x)-u(x),\] where $u_\lambda(x)=u(x^\lambda)$. In the following statement we abbreviate $\Sigma=\Sigma_\lambda$ and $w=w_\lambda$. \begin{mthm}\label{201710250} For $p\geq3$, assume that $u\in C^3_{loc}(\bar \Sigma)\cap L_{sp}$ and satisfies \begin{equation}\label{2.2j} \begin{aligned} \begin{cases} (-\Delta)_p^{s}u_\lambda(x)-(-\Delta)^{s}_p u(x)+c(x)w(x)=0, &\text{ in } \Sigma,\\ w(x)>0,&\text{ in } \Sigma,\\ w(x^\lambda)=-w(x),&\text{ in } \Sigma. \end{cases} \end{aligned} \end{equation} Let $\nu$ be the outward normal vector on $\partial\Sigma$. If \begin{equation}\label{2.1j} \overline{\lim_{x\rightarrow \partial \Sigma}}c(x)=o(\frac{1}{[dist(x,\partial\Sigma)]^2}), \end{equation} then \begin{equation}\label{20171042} \dfrac{\partial w}{\partial \nu}(x)<0, \quad x \in \partial \Sigma. \end{equation} \end{mthm} Following this, we present our second main result: a boundary regularity estimate for the fractional p-Laplacian. In \cite{20}, Bogdan derived a boundary Harnack inequality for nonnegative solutions of a fractional harmonic problem with Dirichlet condition. Other boundary regularity results for fractional equations were obtained by Caffarelli et al. in \cite{1} for a homogeneous fractional heat equation, and by Kim and Lee in \cite{22} for a free boundary problem for the fractional Laplacian. In both papers the authors proved that the limit of $u(x)/dist(x, \partial\Omega)$ exists point-wise on the boundary. In a recent paper by Ros-Oton and Serra \cite{3}, the authors considered \begin{equation*} \left\{\begin{array}{ll} (-\lap)^s u=g, &x \in \Omega,\\ u=0 , &x \in R^n\backslash\Omega. \end{array} \right. \end{equation*} Under the assumption that $g \in L^\infty (\Omega)$ for a bounded $\Omega$, they proved that the solution is $C^s(R^n)$ and $\frac{u(x)}{[dist(x, \partial\Omega)]^s}$ is $C^\alpha$ up to $\partial\Omega$ through a Krylov boundary Harnack inequality. Later, in \cite{19}, Chen et al. proved a similar result for classical solutions through a rather elementary argument. The closest result to ours was obtained by Iannizzotto et al. in \cite{23}. There the authors proved $C^\alpha$ regularity, $\alpha \in (0,s]$, up to the boundary for weak solutions of a fractional p-Laplacian problem. Their proof was carried out in the spirit of Krylov’s approach to boundary regularity and was quite complicated. Inspired by the work in \cite{19} and \cite{23}, we apply some of the ideas in \cite{19} to the following equation, \begin{equation}\label{20171091} \left\{\begin{array}{ll} (-\Delta)^s_p u(x)=f(x), &x \in \Omega,\\ u\equiv0, &x \in R^{n}\backslash\Omega. \end{array} \right. \end{equation} \begin{mthm}\label{20171092} Let $\Omega$ be a bounded domain in $R^n$ with exterior tangent spheres on the boundary, $s \in (0,1)$ and $p>2$. Assume that $\| f\|_{L^\infty(\Omega)}<\infty$ and $u \in L_{sp}$. If $u$ is a solution of (\ref{20171091}), then there exists some $\nu \in (0, s)$ such that, for $x$ close to the boundary, \be |u(x)|\leq c\, [dist(x, \partial\Omega)]^\nu, \quad x \in \Omega. \ee \end{mthm} \begin{mrem} For $p=2$, $\nu$ can be up to $s$ (see \cite{3}). \end{mrem} For convenience's sake, we let \[ |u(x)|^{p-2}u(x) =:[u(x)]^{p-1}.\] Throughout the paper, we denote positive constants by $c$ and $C_i$, whose values may vary from line to line.
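Before turning to the proofs, we remark that the principal value in (\ref{20171041}) can be approximated numerically by pairing $y=x+z$ with $y=x-z$, which cancels the leading singularity of the integrand. The following Python sketch is a minimal one-dimensional illustration for smooth, decaying $u$; the step size, the cut-off $R$, and the test function are arbitrary choices, and the constant $C_{n,s,p}$ is omitted.
\begin{verbatim}
import numpy as np

def G(t, p):
    # G(t) = |t|^{p-2} t
    return np.abs(t) ** (p - 2) * t

def frac_p_lap_1d(u, x, s, p, R=50.0, h=1e-3):
    # symmetrised midpoint rule on (0, R); pairing +z with -z
    # realises the principal value, and the neglected tail |z| > R
    # requires sufficient decay of u
    z = np.arange(h / 2.0, R, h)
    num = G(u(x) - u(x + z), p) + G(u(x) - u(x - z), p)
    return np.sum(num / z ** (1.0 + p * s)) * h

# example: u(t) = exp(-t^2), evaluated at x = 0.3 with s = 0.5, p = 3
value = frac_p_lap_1d(lambda t: np.exp(-t * t), 0.3, s=0.5, p=3.0)
\end{verbatim}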
\section{A Hopf's Lemma } In this section, we prove Theorem \ref{201710250}. For simplicity, in this section, we write $w_\lambda$ as $w$ and $\Sigma_\la$ as $\Sigma$. \textbf{Proof.} To prove (\ref{20171042}), we argue by contradiction. Suppose that there exists some $\tilde{x} \in \partial \Sigma$ at which (\ref{20171042}) fails; then \begin{equation}\label{20171043} \dfrac{\partial w}{\partial \nu}(\tilde{x})=0. \end{equation} Without loss of generality, let $\lambda=0$ and $\tilde{x}$ be the origin. Let the ray from $\tilde{x}$ in the direction of $-\nu$ be the $x_1$ axis. By (\ref{20171043}) and the anti-symmetry of $w$, we know that $$\frac{\partial ^2 w}{\partial x_1^2}(0)=0.$$ For some $\bar{x}=(\bar{x}_1, 0') \in R^n$ close to the origin, by the Taylor expansion, we obtain \begin{equation}\label{2.6} w(\bar{x})=w(0)+Dw(0)\cdot \bar{x}+\frac{1}{2}\,\bar{x}\cdot D^2w(0)\cdot \bar{x}^T+O(|\bar{x}|^3)=O(|\bar{x}|^3). \end{equation} For simplicity's sake, let \begin{equation} \delta= |\bar x_1|=dist(\bar{x},T_0). \end{equation} Then we have $w(\bar{x})=O(\delta^3)$, and \begin{equation} |Dw(\bar{x})|=O(\delta^2),\quad|D^2w(\bar{x})|=O(\delta). \end{equation} For $\bar{x}$ sufficiently close to the origin, i.e. $\delta$ sufficiently small, it is clear that \be\label{20171045} c(\bar x)w(\bar{x})=o(1)\delta. \ee Using the estimates on $w$ and its derivatives, we can prove that for $\delta$ small and some $c_1>0$, it holds that \be\label{20171044} (-\Delta)_p^{s}u_\lambda(\bar x)-(-\Delta)^{s}_p u(\bar x)\leq - \frac{c_1}{4}\delta. \ee We postpone the proof of (\ref{20171044}) for the moment. Combining (\ref{20171045}) and (\ref{20171044}), we arrive at $$(-\Delta)_p^{s}u_\lambda(\bar x)-(-\Delta)^{s}_p u(\bar x)+c(\bar x)w(\bar{x})<0.$$ This contradicts (\ref{2.2j}) and thus proves the theorem. Now we prove (\ref{20171044}). Recall that $y^\la=y^0=(-y_1,y')$. By (\ref{20171041}), we have \begin{eqnarray}\nonumber && (-\Delta)^s_p u_\lambda(\bar x) -(-\Delta)^s_p u(\bar x)\\\nonumber &= &C_{n,s,p}PV\int_{R^n}\dfrac{(u_\lambda(\bar x)-u_\lambda(y))^{p-1}-(u(\bar x)-u(y))^{p-1}}{|\bar x-y|^{n+ps}}dy\\\nonumber &= &C_{n,s,p}PV\int_{\Sigma}\dfrac{(u_\lambda(\bar x)-u_\lambda(y))^{p-1}-(u(\bar x)-u(y))^{p-1}}{|\bar x-y|^{n+ps}}dy\\\nonumber &&\qquad +C_{n,s,p}\int_{R^n\backslash\Sigma}\dfrac{(u_\lambda(\bar x)-u_\lambda(y))^{p-1}-(u(\bar x)-u(y))^{p-1}}{|\bar x-y|^{n+ps}}dy\\\nonumber &=&C_{n,s,p}PV\int_{\Sigma}\dfrac{(u_\lambda(\bar x)-u_\lambda(y))^{p-1}-(u(\bar x)-u(y))^{p-1}}{|\bar x-y|^{n+ps}}dy\\\nonumber &&\qquad +C_{n,s,p}\int_{\Sigma}\dfrac{(u_\lambda(\bar x)-u(y))^{p-1}-(u(\bar x)-u_\lambda(y))^{p-1}}{|\bar x-y^0|^{n+ps}}dy\\\nonumber &=&C_{n,s,p}PV\int_{\Sigma}\Bigl((u_\lambda(\bar x)-u_\lambda(y))^{p-1}-(u(\bar x)-u(y))^{p-1}\Bigl)\\\nonumber &&\cdot \Bigl (\dfrac{1}{|\bar x-y|^{n+ps}}-\dfrac{1}{|\bar x-y^0|^{n+ps}}\Bigl )dy\\\nonumber &&+C_{n,s,p}\int_{\Sigma}\bigg(\dfrac{(u_\lambda(\bar x)-u_\lambda(y))^{p-1}-(u(\bar x)-u(y))^{p-1}}{|\bar x-y^0|^{n+ps}}\\\nonumber &&\qquad+\dfrac{(u_\lambda(\bar x)-u(y))^{p-1}-(u(\bar x)-u_\lambda(y))^{p-1}}{|\bar x-y^0|^{n+ps}}\bigg)dy \\\label{2.1} &=:&C_{n,s,p}PV\int_{\Sigma}I\,dy+C_{n,s,p}\int_{\Sigma}II\,dy. \end{eqnarray} We first take care of $\int_{\Sigma}II\,dy$. Let $R_o>0$ be a given positive number. 
Then $$\Sigma= \big(\Sigma\cap B_{R_o}(\bar{x}) \big) \cup \big( \Sigma\cap B^c_{R_o}(\bar{x}) \big).$$ For $y\in \Sigma \cap B_{R_o}(\bar{x})$, by the mean value theorem we have \begin{eqnarray}\nonumber && (u_\lambda(\bar x)-u_\lambda(y))^{p-1}-(u(\bar x)-u(y))^{p-1}+(u_\lambda(\bar x)-u(y))^{p-1}-(u(\bar x)-u_\lambda(y))^{p-1}\\\nonumber &=&(u_\lambda(\bar x)-u_\lambda(y))^{p-1}-(u(\bar x)-u_\lambda(y))^{p-1}+(u_\lambda(\bar x)-u(y))^{p-1}-(u(\bar x)-u(y))^{p-1}\\\nonumber &=&(p-1)(|\xi_1|^{p-2} +|\xi_2|^{p-2} )w_\lambda(\bar x) \\\label{2.2} &\leq &c w_\lambda(\bar x) |\bar x-y^0| ^{p-2}, \end{eqnarray} with $\xi_1$ between $u_\lambda(\bar x)-u_\lambda(y)$ and $u(\bar x)-u_\lambda(y)$, $\xi_2$ between $u_\lambda(\bar x)-u(y)$ and $u(\bar x)-u(y)$. The last inequality holds because, under the assumption $w(y)>0$ for $y \in \Sigma$, we have \begin{eqnarray*} |\xi_1|&\leq & \max\{|u_\lambda(\bar x)-u_\lambda(y)|, |u(\bar x)-u_\lambda(y)|\}\\ &<&\max\{|u_\lambda(\bar x)-u_\lambda(y)|, |u(\bar x)-u(y)|\}\\ &\leq &c\max\{|\bar x-y^0|, |\bar x-y|\}=c|\bar x-y^0|. \end{eqnarray*} Similarly, one can show that $$|\xi_2|< c\max\{|\bar x-y^0|, |\bar x-y|\}=c|\bar x-y^0|.$$ From (\ref{2.2}), for $\delta$ sufficiently small we deduce that \begin{eqnarray}\nonumber \int_{\Sigma \cap B_{R_o}(\bar{x})}|II|dy&=& \int_{\Sigma \cap B_{R_o}(\bar{x})} IIdy\\\nonumber &\leq& c\int_{\Sigma \cap B_{R_o}(\bar{x})}\dfrac{w(\bar x)}{|\bar x-y^0|^{n+ps-p+2}}dy\\\nonumber &\leq& c w(\bar{x})\int_{ B_{2R_o}(\bar{x})\backslash B_\delta(\bar{x}) }\dfrac{1}{|\bar x-y^0|^{n+ps-p+2}}dy\\\label{2.10j} &\leq& c_1 \max\{\delta^{1+p-ps},\delta^2\}. \end{eqnarray} For $y\in \Sigma \cap B^c_{R_o}(\bar{x})$, using $u \in L_{sp}$ and the H\"{o}lder inequality we have \begin{eqnarray*} &&\int_{\Sigma \cap B^c_{R_o}(\bar{x})} II \, dy\\ & \leq& cw(\bar x)\int_{\Sigma \cap B^c_{R_o}(\bar{x}) } \dfrac{|u(\bar x)|^{p-2}+|u_\lambda(\bar x)|^{p-2}+|u(y)|^{p-2}+|u_\lambda(y)|^{p-2}}{|\bar x-y^0|^{n+ps}}dy\\ &\leq & c\,w(\bar x)\Big[C\int_{ |y|\geq R_o/2}\dfrac{1}{(1+|y|)^{n+ps}}dy+2\int_{ |y|\geq R_o/2}\dfrac{|u(y)|^{p-2}}{(1+|y|)^{n+ps}}dy\Big]\\ &\leq& c\,w(\bar x)\bigg(C+2\big(\int_{ |y|\geq R_o/2}\dfrac{|u(y)|^{p-1}}{(1+|y|)^{n+ps}}dy\big)^{\frac{p-2}{p-1}} \big(\int_{ |y|\geq R_o/2}\dfrac{1}{(1+|y|)^{n+ps}}dy\big)^{\frac{1}{p-1}}\bigg)\\ &\leq& c\,w(\bar x)\\ &\leq& c\,\delta^3. \end{eqnarray*} Together with (\ref{2.10j}), this shows that for $\delta$ small we have \begin{equation}\label{2.13j} \int_{\Sigma}IIdy\leq c \max\{\delta^{1+p-ps},\delta^2\}. \end{equation} Next we estimate $\int_{\Sigma}Idy$. For some $R\gg1$, let $B^+_R(0)= \{x \in B_R(0) \mid x_1>0 \}$. To take care of the possible singularities, we divide $B^+_R(0)$ into five subregions (see Fig.~\ref{p1}) defined as below. \begin{equation*} D_1=\{x\mid 1\leq x_1\leq 2,\,|x'|\leq 1\}, \end{equation*} \begin{equation*} D_2=\{x \in B_R(0)\mid x_1\geq \eta\}, \end{equation*} \begin{equation*} D_3=\{x \mid 0\leq x_1\leq 2\delta, \, |x'|<\delta\}, \end{equation*} \begin{equation*} D_4=\{x \mid 0\leq x_1\leq \eta, \: |x'|<\eta, \:x \not\in D_3\}, \end{equation*} \begin{equation*} D_5=\{x \in B_R(0)\mid 0 \leq x_1\leq \eta,\: |x'|>\eta\}. \end{equation*} \begin{figure} \centering \includegraphics{p1} \caption{Subregions} \label{p1} \end{figure} We estimate the integral in each region accordingly. Later, we will discuss the requirements that $R$ and $\eta$ must satisfy. Roughly speaking, we need to take $R$ sufficiently large and $\eta> \delta$ sufficiently small. 
We start with $D_1$. By the mean value theorem we have \begin{eqnarray}\nonumber &&\dfrac{1}{|\bar x-y|^{n+ps}}-\dfrac{1}{|\bar x-y^0|^{n+ps}}\\\nonumber &=&(-\frac{n+ps}{2})\frac{1}{|\xi_3|^{\frac{n+ps}{2}+1}}\bigl(|\bar x-y|^2-|\bar x-y^0|^2\bigl )\\\label{2.11} &=&\frac{n+ps}{2} \frac{1}{|\xi_3|^{\frac{n+ps}{2}+1}}4\bar x_1y_1 \end{eqnarray} with $$|\bar x-y|^2\leq \xi_3\leq |\bar x-y^0|^2,$$ and \be\label{2.12} (u_\lambda(\bar x)-u_\lambda(y))^{p-1}-(u(\bar x)-u(y))^{p-1} =(p-1)|\xi_4|^{p-2}[w(\bar x)-w(y)], \ee where $\xi_4$ is between $u_\lambda(\bar x)-u_\lambda(y)$ and $u(\bar x)-u(y)$. Since $w(x)>0$ in $\Sigma$ and $w(0)=0$, for $y\in D_1$ and $\bar x$ sufficiently close to the origin, it is clear that \be\label{2.24jjj} w(\bar x)-w(y)<-c<0. \ee Hence $$u_\lambda(\bar x)-u_\lambda(y)<u(\bar x)-u(y).$$ Together with (\ref{2.12}), it shows that $$\xi_4 \neq 0, \quad y \in D_1.$$ Therefore there exists some $c$ such that $$|\xi_4|\geq c>0.$$ Combining this with (\ref{2.11}) and (\ref{2.12}) gives \begin{eqnarray}\nonumber &&\int_{D_1}\Bigl([u_\lambda(\bar x)-u_\lambda(y)]^{p-1}-[u(\bar x)-u(y)]^{p-1}\Bigl)\Bigl (\dfrac{1}{|\bar x-y|^{n+ps}}-\dfrac{1}{|\bar x-y^0|^{n+ps}}\Bigl )dy\\\nonumber &\leq& c\int_{D_1}|\xi_4|^{p-2}[w(\bar x)-w(y)]\frac{\bar x_1 y_1}{|\bar x-y^0|^{n+ps+2}}dy\\\label{20171081} &\leq& -\int_{D_1} c\delta\, dy\leq -c_1\delta. \end{eqnarray} We estimate the integral on $D_2$. Later, in the proof for $D_4$ and $D_5$, we will discuss the ranges of $\eta$ and $R$ respectively. For now, we assume both $R$ and $\eta$ have already been selected and fixed. Then it is clear that $$w(\bar x)-w(y)\leq 0, \quad y\in D_2, \mbox{ as } \delta \ra 0.$$ Thus \begin{eqnarray}\nonumber && \int_{D_2 }\Bigl([u_\lambda(\bar x)-u_\lambda(y)]^{p-1}-[u(\bar x)-u(y)]^{p-1}\Bigl)\Bigl (\dfrac{1}{|\bar x-y|^{n+ps}}-\dfrac{1}{|\bar x-y^0|^{n+ps}}\Bigl )dy \\\nonumber & \leq &\int_{D_1}\Bigl([u_\lambda(\bar x)-u_\lambda(y)]^{p-1}-[u(\bar x)-u(y)]^{p-1}\Bigl)\Bigl (\dfrac{1}{|\bar x-y|^{n+ps}}-\dfrac{1}{|\bar x-y^0|^{n+ps}}\Bigl )dy \\\label{2.27} &\leq &-c_1\delta\,. \end{eqnarray} On $D_3$, we separate the integrand $I$ into two pieces. On one hand, by Taylor expansion, we have \begin{eqnarray}\nonumber &&\bigg|\int_{D_3}\frac{[u_\lambda(\bar x)-u_\lambda(y)]^{p-1}-[u(\bar x)-u(y)]^{p-1}}{|\bar x-y^0|^{n+ps}}dy\bigg|\\\nonumber &\leq& c\int_{D_3}\frac{|\xi_4|^{p-2}}{|\bar x-y^0|^{n+ps}}\big|w(\bar x)-w(y)\big|dy \\\nonumber &\leq&c\int_{D_3}\frac{|\bar x-y|^{p-2}}{|\bar x-y^0|^{n+ps}} \big(|Dw(\bar x)\cdot(\bar x-y)|+ |(\bar x-y)\cdot D^2 w(\bar x)\cdot(\bar x-y)^{T}|\\\nonumber &&\qquad+|O(|\bar x-y|^3)|\big)dy\\\nonumber &\leq&c\delta^2\int_{D_3}\frac{1}{|\bar x-y^0|^{n+ps-p+1}}dy\\\label{20171072} &\leq&c \max\{\delta^2, \delta^{1+p-ps}\}. \end{eqnarray} The last inequality is true as $\delta \ra 0$. 
On the other hand, for $ \xi_5$ between $u_\la(\bar x)-u_\lambda(y)$ and $Du_{\la}(\bar x)\cdot(\bar x-y)$, we have \begin{eqnarray*} && \int_{D_3}\frac{[u_\lambda(\bar x)-u_\lambda(y)]^{p-1}} {|\bar x-y|^{n+ps}}dy\\ &=&\int_{D_3}\bigg(\frac{[Du_{\la}(\bar x)\cdot(\bar x-y)]^{p-1}}{|\bar x-y|^{n+ps}}+\frac{(p-1)|\xi_5|^{p-2}\big[(\bar x-y)\cdot D^2 u_{\la}(\bar x)\cdot(\bar x-y)^{T}+O(|\bar x-y|^3)\big]}{|\bar x-y|^{n+ps}}\bigg)dy\\ &=& \int_{D_3}\frac{(p-1)|\xi_5|^{p-2}[(\bar x-y)\cdot D^2 u_{\la}(\bar x)\cdot(\bar x-y)^{T}+O(|\bar x-y|^3)]}{|\bar x-y|^{n+ps}}dy\\ &\leq & \int_{D_3}(p-1)\frac{|\xi_5|^{p-2}(\bar x-y)\cdot D^2 u_{\la}(\bar x)\cdot(\bar x-y)^{T}}{|\bar x-y|^{n+ps}}dy+ O(1)\delta^{1+p-ps}. \end{eqnarray*} We obtain the second equality from the fact that $$ \int_{D_3}\frac{[Du_{\la}(\bar x)\cdot(\bar x-y)]^{p-1}}{|\bar x-y|^{n+ps}}dy=0,$$ as a result of the symmetry of $D_3$ with respect to $\bar x$. Similarly, for $ \xi_6$ between $u(\bar x)-u(y)$ and $Du(\bar x)\cdot(\bar x-y)$, we have \begin{eqnarray*} && \int_{D_3}\frac{[u(\bar x)-u(y)]^{p-1}} {|\bar x-y|^{n+ps}}dy\\ &\leq&\int_{D_3}(p-1)\frac{|\xi_6|^{p-2}(\bar x-y)\cdot D^2 u(\bar x)\cdot(\bar x-y)^{T}}{|\bar x-y|^{n+ps}}dy+O(1)\delta^{1+p-ps}. \end{eqnarray*} Therefore, it follows that \begin{eqnarray}\nonumber &&\bigg|\int_{D_3} \dfrac{[u_\lambda(\bar x)-u_\lambda(y)]^{p-1}-[u(\bar x)-u(y)]^{p-1}} {|\bar x-y|^{n+ps}}dy\bigg|\\\nonumber &\leq & c(p-1)\bigg|\int_{D_3} \dfrac{(\bar x-y)\cdot[ |\xi_5|^{p-2} D^2 u_{\la}(\bar x)-|\xi_6|^{p-2} D^2 u(\bar x)] \cdot(\bar x-y)^{T}} {|\bar x-y|^{n+ps}}dy\bigg|\\\nonumber &&+c\delta^{1+p-ps}\\\nonumber &\leq& c\bigg|\int_{D_3} \bigg(\dfrac{(\bar x-y)\cdot |\xi_5|^{p-2} D^2 w(\bar x)\cdot(\bar x-y)^{T}} {|\bar x-y|^{n+ps}}+\dfrac{(|\xi_5|^{p-2}-|\xi_6|^{p-2})\,(\bar x-y)\cdot D^2 u(\bar x)\cdot(\bar x-y)^{T}}{|\bar x-y|^{n+ps}}\bigg)dy\bigg|+c\delta^{1+p-ps}\\\label{20171074} &\leq& c\delta^{1+p-ps}. \end{eqnarray} Combining (\ref{20171072}) with (\ref{20171074}) gives \be\label{20171071} \Big|\int_{D_3} I\,dy\Big| \leq c \max\{\delta^2, \delta^{1+p-ps}\}. \ee Below we deal with $D_4$. \begin{eqnarray}\nonumber &&\bigg|\int_{D_4}\Bigl([u_\lambda(\bar x)-u_\lambda(y)]^{p-1}-[u(\bar x)-u(y)]^{p-1}\Bigl)\Bigl (\dfrac{1}{|\bar x-y|^{n+ps}}-\dfrac{1}{|\bar x-y^0|^{n+ps}}\Bigl )dy\bigg|\\\nonumber &\leq& c\int_{D_4}|\xi_4|^{p-2}\big|w(\bar x)-w(y)\big|\frac{\bar x_1 y_1}{|\bar x-y^0|^{n+ps+2}}dy\\\nonumber &\leq& c\int_{D_4}|\bar x-y|^{p-2} \big(|Dw(\bar x)\cdot(\bar x-y)|+ |(\bar x-y)\cdot D^2 w(\bar x)\cdot(\bar x-y)^{T}|\\\nonumber &&\qquad+|O(|\bar x-y|^3)|\big)\frac{\delta}{|\bar x-y|^{n+ps+1}}dy\\\nonumber &\leq& c\delta\int_{B_{2\eta}(\bar x)\backslash B_{\delta}(\bar x )} \frac{1}{|\bar x-y|^{n+ps-p}} dy\\\nonumber &=& c\delta \frac{(2\eta)^{p-ps}-\delta^{p-ps}}{p-ps}\\\label{20171082} &\leq& \frac{ c_1}{8}\delta. \end{eqnarray} The last inequality is true when $\eta$ is sufficiently small. 
On $D_5$, we have \begin{eqnarray}\nonumber &&\bigg|\int_{D_5}\Bigl([u_\lambda(\bar x)-u_\lambda(y)]^{p-1}-[u(\bar x)-u(y)]^{p-1}\Bigl)\Bigl (\dfrac{1}{|\bar x-y|^{n+ps}}-\dfrac{1}{|\bar x-y^0|^{n+ps}}\Bigl )dy\bigg|\\\nonumber &\leq& c\int_{D_5}|\xi_4|^{p-2}\big|w(\bar x)-w(y)\big|\frac{\bar x_1 y_1}{|\bar x-y^0|^{n+ps+2}}dy\\\nonumber &\leq& c\int_{D_5}|\bar x-y|^{p-2} \big(|Dw(\bar x)\cdot(\bar x-y)|+ |(\bar x-y)\cdot D^2 w(\bar x)\cdot(\bar x-y)^{T}|\\\nonumber &&\qquad+|O(|\bar x-y|^3)|\big)\frac{\delta \eta}{|\bar x-y|^{n+ps+2}}dy\\\nonumber &\leq& c\delta \eta\int_{B_{2R}(\bar x)\backslash B_{\eta}(\bar x)} \frac{1}{|\bar x-y|^{n+ps-p+1}} dy\\\nonumber &\leq& c\delta \eta \int_{\eta}^{2R} r^{\,p-ps-2}\,dr\\\label{20171083} &\leq& \frac{ c_1}{8}\delta. \end{eqnarray} The validity of the last inequality results from $\eta$ being sufficiently small for $R$ fixed. Gathering the estimates on $D_i$, $i=1,2,3,4,5$, that is, (\ref{20171081}), (\ref{2.27}), (\ref{20171071}), (\ref{20171082}) and (\ref{20171083}), it shows that for $\eta$ sufficiently small, \be\label{20171087} \int_{B_R^+(0)}I \,dy\leq -c_1\delta. \ee What remains is the integral over $\Sigma\backslash B^+_R(0) $. \begin{eqnarray}\nonumber &&\bigl|\int_{\Sigma\backslash B^+_R(0)} \Bigl([u_\lambda(\bar x)-u_\lambda(y)]^{p-1}-[u(\bar x)-u(y)]^{p-1}\Bigl) \Bigl (\dfrac{1}{|\bar x-y|^{n+ps}}-\dfrac{1}{|\bar x-y^0|^{n+ps}}\Bigl )dy\bigl |\\\nonumber & \leq &c\delta\int_{\Sigma\backslash B^+_R(0)} \frac{|u(\bar x)|^{p-1}+|u_\lambda(\bar x)|^{p-1}+|u_\lambda (y)|^{p-1}+|u(y)|^{p-1}}{|\bar x-y|^{n+ps+1}}dy\\ \nonumber &\leq & c\delta\int_{\Sigma\backslash B^+_R(0)} \frac{|u(\bar x)|^{p-1}+|u_\lambda(\bar x)|^{p-1}}{|\bar x-y|^{n+ps+1}}dy + \frac{c\delta}{R}\int_{\R^n} \frac{ |u(y)|^{p-1}}{(1+|y|)^{n+ps}}dy\\\nonumber &\leq & \frac{c\delta}{R^{1+ps}}+\frac{c\delta}{R}\\\label{20171086} &\leq &\frac{c_1}{8}\delta. \end{eqnarray} The last inequality holds when $R$ is sufficiently large. Together with (\ref{20171087}), it gives \be\label{20171088} \int_{\Sigma}I \,dy\leq -\frac{c_1\delta}{2}. \ee Combining this with (\ref{2.13j}), for $\delta$ sufficiently small, we conclude that $$ (-\Delta)^s_p u_\lambda(\bar x) -(-\Delta)^s_p u(\bar x)\leq -\frac{c_1\delta}{4}. $$ This proves (\ref{20171044}) and completes the proof of the theorem. \section{Boundary Regularity} In this section we prove Theorem \ref{20171092}. Here the analysis of regularity up to the boundary is based on the existence of some super-solution, sometimes referred to as the barrier function in boundary regularity analysis, to the fractional p-Laplacian equation. To construct the barrier function, we begin with an equation on the half-line $R^+_1:=\{x \in R \mid x>0 \}$, whose solution is known explicitly. \begin{lem}\label{l3.1} For $0<\nu<s$, \begin{equation} (-\Delta)^s_p(x^\nu_+)=C_\nu x_+^{(p-1)\nu-ps},\quad x\in R^+_1, \end{equation} with \[C_\nu=\int^{+\infty}_{-\infty}\frac{(1-z^\nu_+)^{p-1}}{|1-z|^{1+ps}}dz>0.\] \end{lem} \textbf{Proof.} Since $x>0$, we have $x_+=x$, and \begin{equation} \begin{aligned} (-\Delta)^s_p(x^\nu_+)&=\int_{\R}\frac{(x^\nu_+-y^\nu_+)^{p-1}}{|x-y|^{1+ps}}dy\\ &=\int_{-\infty}^{+\infty}\frac{x^{(p-1)\nu}(1-z^\nu_+)^{p-1}}{x^{1+ps}|1-z|^{1+ps}}x\,dz \;( y=xz) \\ &=x^{(p-1)\nu-ps}\int_{-\infty}^{+\infty}\frac{(1-z^\nu_+)^{p-1}}{|1-z|^{1+ps}}dz\\ &=C_\nu x_+^{(p-1)\nu-ps}, \end{aligned} \end{equation} with $\displaystyle C_\nu=\int^{+\infty}_{-\infty}\dfrac{(1-z^\nu_+)^{p-1}}{|1-z|^{1+ps}}dz$. 
Then \begin{eqnarray}\nonumber C_\nu&=&\int^{+\infty}_{0}\dfrac{(1-z^\nu_+)^{p-1}}{|1-z|^{1+ps}}dz+\int^{0}_{-\infty}\dfrac{(1-z^\nu_+)^{p-1}}{|1-z|^{1+ps}}dz\\\label{3.4} &=& \int^{+\infty}_{0}\dfrac{(1-z^\nu_+)^{p-1}}{|1-z|^{1+ps}}dz+\frac{1}{ps}. \end{eqnarray} For $0<\nu\leq\frac{ps-1}{p-1}$, \begin{equation*} \begin{aligned} &\ \ \ \int^{+\infty}_{0}\dfrac{(1-z^\nu_+)^{p-1}}{|1-z|^{1+ps}}dz \\&=\int^{1}_{0}\dfrac{(1-z^\nu)^{p-1}}{|1-z|^{1+ps}}dz+\int^{+\infty}_{1}\dfrac{(1-z^\nu)^{p-1}}{|1-z|^{1+ps}}dz\\ &=\int^{1}_{0}\dfrac{(1-z^\nu)^{p-1}}{|1-z|^{1+ps}}dz+\int^{1}_{0}\dfrac{(1-w^{-\nu})^{p-1}}{|1-w^{-1}|^{1+ps}}\frac{1}{w^2}dw\\ &=\int^{1}_{0}\dfrac{(1-z^\nu)^{p-1}}{|1-z|^{1+ps}}dz+\int^{1}_{0}-\dfrac{(1-w^\nu)^{p-1}}{|1-w|^{1+ps}}w^{1+ps-\nu(p-1)-2}dw\\ &=\int^{1}_{0}\dfrac{(1-z^\nu)^{p-1}}{|1-z|^{1+ps}}(1-z^{ps-\nu(p-1)-1})dz\geq0. \end{aligned} \end{equation*} Together with (\ref{3.4}) it implies that \begin{equation}\label{3.6} C_\nu>0, \quad \mbox{for } 0<\nu\leq\frac{ps-1}{p-1}. \end{equation} To continue, we need Lemma 3.1 in \cite{23}, which states that \begin{equation} (-\Delta)^s_p(x_+^s)\big|_{x=1}=0. \end{equation} Then for $\frac{ps-1}{p-1}<\nu<s$, it follows that \begin{eqnarray*}\nonumber C_\nu &=& \int^{+\infty}_{-\infty}\dfrac{(1-z^\nu_+)^{p-1}-(1-z^s_+)^{p-1}}{|1-z|^{1+ps}}dz\\\nonumber &=&\int^{+\infty}_{0}\dfrac{(1-z^\nu_+)^{p-1}-(1-z^s_+)^{p-1}}{|1-z|^{1+ps}}dz\\ \nonumber &=&\int^{+\infty}_1\dfrac{(1-z^\nu_+)^{p-1}-(1-z^s_+)^{p-1}}{|1-z|^{1+ps}}dz +\int^1_0\dfrac{(1-z^\nu_+)^{p-1}-(1-z^s_+)^{p-1}}{|1-z|^{1+ps}}dz\\\nonumber &=&\int^{1}_0\dfrac{(1-z^{-\nu}_+)^{p-1}-(1-z^{-s}_+)^{p-1}}{|1-z^{-1}|^{1+ps}}\frac{1}{z^2}dz +\int^1_0\dfrac{(1-z^\nu_+)^{p-1}-(1-z^s_+)^{p-1}}{|1-z|^{1+ps}}dz\\\nonumber &=&\int^{1}_0\dfrac{-(1-z^{\nu})^{p-1}z^{ps-1-(p-1)\nu}+(1-z^{s})^{p-1}z^{s-1}}{|1-z|^{1+ps}}dz \\&& +\int^1_0\dfrac{(1-z^\nu)^{p-1}-(1-z^s)^{p-1}}{|1-z|^{1+ps}}dz\\\nonumber &=&\int^1_0\dfrac{(1-z^\nu)^{p-1}-(1-z^s)^{p-1}}{|1-z|^{1+ps}}(1-z^{ps-1-(p-1)\nu})dz\\ && +\int^1_0\dfrac{(1-z^s)^{p-1}}{|1-z|^{1+ps}}(z^{s-1}-z^{ps-1-(p-1)\nu})dz>0. \end{eqnarray*} Together with (\ref{3.6}), we conclude that $$ C_{\nu}>0 \text{ for } \nu\in (0, s). $$ Next we generalize Lemma \ref{l3.1} to $n$ dimensions. Let $R^n_+:=\{x \in R^n \mid x_n>0 \}$. \begin{cor}\label{c3.1} For $0<\nu<s$, \begin{equation} (-\Delta)^s_p(x_n)^\nu_+=C_{\nu,n} (x_n)_+^{(p-1)\nu-ps},\text{ for } x \in R^n_+, \end{equation} with \[C_{\nu,n}=\omega_{n-2}\,C_{\nu}\int_0^{\infty}\frac{t^{n-2}}{(1+t^2)^{\frac{n+ps}{2}}}dt >0,\] where $\omega_{n-2}$ denotes the area of the unit sphere $S^{n-2}\subset \R^{n-1}$. \end{cor} \textbf{Proof.} Let $x=(x',x_n)\in \R^n,r=|x'-y'| \text{ and }\tau=|x_n-y_n|$. From Lemma \ref{l3.1} we have \begin{eqnarray*} (-\Delta)^s_p({x_n})^\nu_+&=&\int_{\R^n}\frac{((x_n)^\nu_+-{(y_n)^\nu_+})^{p-1}}{|x-y|^{n+ps}}dy\\ &=&\int_{\R^n}\frac{({(x_n)^\nu_+}-{(y_n)^\nu_+})^{p-1}}{|(x_n-y_n)^2+|x'-y'|^2|^{\frac{n+ps}{2}}}dy\\ &=&\int_{-\infty}^{+\infty}({(x_n)^\nu_+}-{(y_n)^\nu_+})^{p-1} \int_0^{\infty}\frac{\omega_{n-2}r^{n-2}}{|\tau^2+r^2|^{\frac{n+ps}{2}}}dr\,dy_n\\ &=&\int_{-\infty}^{+\infty} \frac{({(x_n)^\nu_+}-{(y_n)^\nu_+})^{p-1}}{|x_n-y_n|^{1+ps}}dy_n \int_0^{\infty}\frac{\omega_{n-2}t^{n-2}}{|1+t^2|^{\frac{n+ps}{2}}}dt\: (r=\tau t)\\ &=&(x_n)^{\nu(p-1)-ps}_+ C_\nu \int_0^{\infty}\frac{\omega_{n-2}t^{n-2}}{|1+t^2|^{\frac{n+ps}{2}}}dt\\ &:=&(x_n)^{\nu(p-1)-ps}_+ C_{\nu,n}. \end{eqnarray*} Now we are ready to construct the barrier function. \begin{lem}\label{20171098} Let $\phi(x)=(|x|^2-1)^\nu_+$ in $R^n$ with $\nu\in (0,s)$. 
Then there exists some $\epsilon>0$ small and $C_0>0$ such that \begin{equation}\label{20171093} (-\Delta)^s_p\phi(x)\geq C_0(|x|-1)^{\nu(p-1)-ps}, \quad x \in B_{1+\epsilon}(0)\backslash B_1(0). \end{equation} \end{lem} \textbf{Proof.} To prove the lemma, we argue by contradiction. Suppose (\ref{20171093}) is not true; then there exists a sequence $\{x^k\} \subset B_{1+\epsilon}(0)\backslash B_1(0)$ such that $|x^k|\rightarrow 1$ and \be\label{20171094} (-\Delta)^s_p\phi(x^k)(|x^k|-1)^{ps-\nu(p-1)} \ra 0, \mbox{ as } \quad k \ra \infty. \ee Without loss of generality, let $x^k=(0',1+d_k)$. Then $$d_k=|x^k|-1 \ra 0, \quad \mbox{as } k \ra \infty.$$ Here we use an equivalent form of (\ref{20171041}) via the difference quotient $$(-\Delta)_p^s \phi(x^k)=\frac{C_{n,s,p}}{2}\int_{\R^n}\frac{[\phi(x^k)-\phi(x^k+y)]^{p-1}+[\phi(x^k)-\phi(x^k-y)]^{p-1}}{|y|^{n+ps}}dy.$$ Then by Corollary \ref{c3.1}, we have \begin{equation} \begin{aligned} &\ \ \ \ (|x^k|-1)^{ps-\nu(p-1)}(-\Delta)_p^s \phi(x^k)\\ &=\frac{d_k^{ps-\nu(p-1)}}{2}C_{n,s,p}\int_{\R^n} \frac{[\phi(x^k)-\phi(x^k+y)]^{p-1}+[\phi(x^k)-\phi(x^k-y)]^{p-1}}{|y|^{n+ps}}dy\\ &=\frac{C_{n,s,p}}{2}d_k^{ps-\nu(p-1)}\bigl(\int_{\R^n}\frac{\Bigl[(|x^k|^2-1)^\nu_+-(|x^k+y|^2-1)^\nu_+\Bigl]^{p-1}}{|y|^{n+ps}}dy\\ &\ \ +\int_{\R^n}\frac{\Bigl[(|x^k|^2-1)^\nu_+-(|x^k-y|^2-1)^\nu_+\Bigl]^{p-1}}{|y|^{n+ps}}dy\bigl)\\ &=\frac{C_{n,s,p}}{2}d_k^{ps-\nu(p-1)}\bigl(\int_{\R^n}\frac{\big[(d_k^2+2d_k)^\nu_+-(d_k^2+2d_k +2(1+d_k)y_n+|y|^2)^\nu_+\big]^{p-1}}{|y|^{n+ps}}dy\\ &\ \ +\int_{\R^n}\frac{\big[(d_k^2+2d_k)^\nu_+-(d_k^2+2d_k -2(1+d_k)y_n+|y|^2)_+^\nu\big]^{p-1}}{|y|^{n+ps}}dy\bigl)\\ &=\frac{C_{n,s,p}}{2}\int_{\R^n}\bigg(\frac{[(d_k+2)^\nu_+-(d_k+2 +2(1+d_k)z_n+d_k|z|^2)^\nu_+]^{p-1}}{|z|^{n+ps}}\\ &\ \ +\frac{\big[(d_k+2)^\nu_+-(d_k+2 -2(1+d_k)z_n+d_k|z|^2)_+^\nu\big]^{p-1}}{|z|^{n+ps}}\bigg)dz \quad (y=d_k z )\\ &\ \ \ \rightarrow\frac{C_{n,s,p}}{2} \int_{\R^n}\frac{(2^\nu-(2+2z_n)_+^\nu)^{p-1}+(2^\nu-(2-2z_n)_+^\nu)^{p-1}}{|z|^{n+ps}}dz\\ &=2^{(p-1)\nu-1}C_{n,s,p}\int_{\R^n}\frac{(1-(1+z_n)_+^\nu)^{p-1}+(1-(1-z_n)^\nu_+)^{p-1}}{|z|^{n+ps}}dz\\ &=2^{(p-1)\nu}(-\Delta)_p^s(x_n)^\nu_+|_{x_n=1}\\ &=2^{(p-1)\nu}C_{\nu,n}>0. \end{aligned} \end{equation} This contradicts (\ref{20171094}). In addition to the barrier function, we also need a comparison principle for the fractional p-Laplacian (see \cite[Lemma 9]{24}). \begin{lem}\label{20171096} Let $\Omega$ be bounded in $R^n$, $p>2$ and $s \in (0,1)$. Assume that $u,\,v \in L_{sp}$. If \begin{equation} \begin{cases} (-\Delta)^s_p u\leq (-\Delta)^s_p v ,&\ \ x\in \Omega,\\ u\leq v,&\ \ x\in \Omega^C, \end{cases} \end{equation} then $u\leq v$ in $\Omega$. \end{lem} We now prove Theorem \ref{20171092}. \textbf{Proof.} Briefly speaking, the proof consists of two parts. In part one, using the comparison principle we show that $$\|u\|_{L^\infty(\Omega)}<\infty.$$ In part two, we construct an auxiliary function that is Lipschitz continuous near the boundary so as to dominate $u(x)$ from above. Let $g(x)=\min\{(2-x_n)^s_+,5^s\}$. Then \[g(x)=(2-x_n)^s_+-((2-x_n)^s_+-5^s)_+.\] By \cite[Lemma 3.1]{23}, we know $$(-\Delta)^s_p (2-x_n)^s_+=0, \quad x\in B_1. $$ Hence for $ x\in B_1$, we have \begin{eqnarray*} &&(-\Delta)^s_p g(x)\\ &=&(-\Delta)^s_p g(x)-(-\Delta)^s_p (2-x_n)^s_+\\ &=&\int_{y_n\leq -3}\frac{[(2-x_n)^s_+-5^s]^{p-1}-[(2-x_n)^s_+-(2-y_n)^s_+]^{p-1}}{|x-y|^{n+ps}}dy \\ &=:&I(x). \end{eqnarray*} Since $I: B_1(0)\rightarrow \R$ is continuous and positive, there exists $c>0$ such that \begin{equation} (-\Delta)^s_p g(x)\geq c>0 \text{ in }B_1(0). 
\end{equation} Let $\tilde{g}(x)=g(\frac{x}{R})C$ with $R>0, C>0$ sufficiently large so that $\Omega\subset B_R(0)$ and $$(-\Delta)^s_p\tilde g(x)= \frac{C^{p-1}}{R^{ps}}[(-\Delta)^s_p g](\frac{x}{R}) \geq \frac{cC^{p-1}}{R^{ps}} \geq \|f\|_{L^\infty(\Omega)} .$$ Then it is obvious that \begin{equation*} \left\{\begin{array}{ll} (-\Delta)^s_p\tilde g(x)\geq (-\Delta)^s_p u(x), & x \in B_R,\\ \tilde{g}(x)\geq u(x), &x \in R^{n}\backslash B_R. \end{array} \right. \end{equation*} From Lemma \ref{20171096} it follows that $$u(x)\leq \tilde{g}(x) \leq c \text{ in } \Omega.$$ Similarly we can show that $$-u(x)\leq \tilde{g}(x) \leq c \text{ in } \Omega.$$ This proves that $$\|u\|_{L^\infty(\Omega)}\leq c.$$ \medskip Next we show that $u(x)$ is $C^\nu(\bar{\Omega})$ for $\nu \in (0,s)$. Here $C^\nu$ denotes the Lipschitz (H\"older) space of order $\nu$. Given $x^o \in \Omega$ and close to $\partial\Omega$, let $\bar{x^o} \in \partial\Omega$ be such that $dist(x^o, \partial\Omega)=|x^o-\bar{x^o}|$. We show that there exists a constant $c>0$ such that \be |u(x^o)-u(\bar{x^o})|\leq c |x^o-\bar{x^o}|^\nu. \ee Without loss of generality, we relocate the origin $O$ so that it is on the line through $x^o$ and $\bar{x^o}$ and is outside of $\Omega$ with $|O-\bar{x^o}|=1$. Let $\phi(x)=(|x|^2-1)^\nu_+$. Choose $\xi(x)$ to be a smooth cut-off function so that $\xi(x)=0$ in $B_1(0)$, $\xi(x)=1$ in $R^n\backslash B_{1+\epsilon}(0)$ with the same $\epsilon$ as in Lemma \ref{20171098}, and $\xi(x)\in [0,1]$ in $R^n$. Let $$A(x)=C\phi(x)+\xi(x).$$ Then it is easy to see that $A(x)$ is $C^\nu(\overline{B_{1+\epsilon}(0)})$. Let $$D=\left(B_{1+\epsilon}(0)\backslash B_1(0)\right) \cap \Omega.$$ Given that $x^o$ is close to $\partial\Omega$, we may assume that $x^o \in D$. Our goal is to show that \begin{equation}\label{20171095} \left\{\begin{array}{ll} (-\Delta)^s_p A(x)\geq (-\Delta)^s_p u(x), &x \in D,\\ A(x)\geq u(x), &x \in R^{n}\backslash D. \end{array} \right. \end{equation} We postpone the proof of (\ref{20171095}) for the moment. Together with Lemma \ref{20171096}, it yields $$A(x)\geq u(x), \quad x \in D.$$ Since $$u(x)|_{\partial\Omega}=\xi(x)|_{\partial B_1(0)}=0,$$ and $\xi$ is smooth everywhere, applying the above argument to both $u$ and $-u$ we have \begin{eqnarray*} |u(x^o)- u(\bar{x^o}) |=|u(x^o)|&\leq& |A(x^o)|\\ &=& |A(x^o)-\xi(\bar{x^o})|\\ &=& |C (|x^o|^2-1)^\nu_+ + \xi(x^o)-\xi(\bar{x^o})|\\ &=& |C (|x^o|^2-|\bar{x^o}|^2)^\nu_+ + \xi(x^o)-\xi(\bar{x^o})|\\ &\leq & C |x^o-\bar{x^o}|^\nu. \end{eqnarray*} This implies $u \in C^\nu(\bar{\Omega})$. What remains is to show (\ref{20171095}). On one hand, it is easy to see that the boundary condition is satisfied because $A(x)$ dominates $u(x)$ on $R^n \backslash D$ for $C$ sufficiently large. On the other hand, the fractional inequality on $D$ is valid for $\epsilon$ small because $\nu<s$ and \be (-\Delta)^s_p A(x)\geq C_0(|x|-1)^{\nu(p-1)-ps}, \quad x \in B_{1+\epsilon}(0)\backslash B_1(0), \ee whose right-hand side tends to $+\infty$ as $|x| \ra 1$. To verify this, we use an argument similar to that in the proof of Lemma \ref{20171098}. Suppose otherwise; then there exists a sequence $\{x^k\} \subset D$ so that $|x^k|\rightarrow 1$ and \be\label{20171099} (-\Delta)^s_p A(x^k)(|x^k|-1)^{ps-\nu(p-1)} \ra 0, \mbox{ as } \quad k \ra \infty. \ee Without loss of generality, let $x^k=(0',1+d_k)$. 
Then $$d_k=|x^k|-1 \ra 0, \quad \mbox{as } k \ra \infty.$$ By Corollary \ref{c3.1}, we have \begin{eqnarray*} &&(|x^k|-1)^{ps-\nu(p-1)}(-\Delta)_p^s A(x^k)\\ &=&\frac{d_k^{ps-\nu(p-1)}}{2}C_{n,s,p}\int_{\R^n} \frac{[A(x^k)-A(x^k+y)]^{p-1}+[A(x^k)-A(x^k-y)]^{p-1}}{|y|^{n+ps}}dy\\ &=&\frac{C_{n,s,p}}{2}d_k^{ps-\nu(p-1)}\bigl(\int_{\R^n}\frac{\Bigl(C(|x^k|^2-1)^\nu_+-C(|x^k+y|^2-1)^\nu_+ +\xi(x^k)-\xi(x^k+y)\Bigl)^{p-1}}{|y|^{n+ps}}dy\\ &&\ \ +\int_{\R^n}\frac{\Bigl(C(|x^k|^2-1)^\nu_+-C(|x^k-y|^2-1)^\nu_+ +\xi(x^k)-\xi(x^k-y)\Bigl)^{p-1}}{|y|^{n+ps}}dy\bigl)\\ &=&\frac{C_{n,s,p}}{2}\int_{\R^n}\bigg(\frac{[C(d_k+2)^\nu_+-C(d_k+2 +2(1+d_k)z_n+d_k|z|^2)^\nu_+ +\frac{\xi(x^k)-\xi(x^k+d_k z)}{d_k^\nu} ]^{p-1}}{|z|^{n+ps}}\\ &&\ \ +\frac{\big[C(d_k+2)^\nu_+-C(d_k+2 -2(1+d_k)z_n+d_k|z|^2)_+^\nu + \frac{\xi(x^k)-\xi(x^k-d_k z)}{d_k^\nu} \big]^{p-1}}{|z|^{n+ps}}\bigg)dz \end{eqnarray*} \begin{eqnarray*} &=&\frac{C_{n,s,p}}{2}\int_{\R^n}\bigg(\frac{[C(d_k+2)^\nu_+-C(d_k+2 +2(1+d_k)z_n+d_k|z|^2)^\nu_+ +\nabla \xi (\tilde{z})\cdot z\, d_k^{1-\nu } ]^{p-1}}{|z|^{n+ps}}\\ &&\ \ +\frac{\big[C(d_k+2)^\nu_+-C(d_k+2 -2(1+d_k)z_n+d_k|z|^2)_+^\nu + \nabla \xi (\hat{z})\cdot z\, d_k^{1-\nu } \big]^{p-1}}{|z|^{n+ps}}\bigg)dz \\ &&\ \ \ \rightarrow\frac{C^{p-1}C_{n,s,p}}{2} \int_{\R^n}\frac{(2^\nu-(2+2z_n)_+^\nu)^{p-1}+(2^\nu-(2-2z_n)_+^\nu)^{p-1}}{|z|^{n+ps}}dz\\ &=&(2^\nu C)^{p-1}\frac{C_{n,s,p}}{2}\int_{\R^n}\frac{(1-(1+z_n)_+^\nu)^{p-1}+(1-(1-z_n)^\nu_+)^{p-1}}{|z|^{n+ps}}dz\\ &=&(2^\nu C)^{p-1}(-\Delta)_p^s(x_n)^\nu_+|_{x_n=1}\\ &=&(2^\nu C)^{p-1}C_{\nu,n} >0, \end{eqnarray*} where $\tilde{z}$ is between $x^k$ and $x^k+d_k z$, and $\hat{z}$ is between $x^k$ and $x^k-d_k z$. This contradicts (\ref{20171099}); hence the fractional inequality in (\ref{20171095}) holds, which completes the proof of the theorem.
\section{Introduction} \label{section:intro} The AdS/CFT correspondence has proven to be a fruitful avenue for probing various aspects of quantum gravity \cite{maldholo,wittenholo}. In particular, classical black hole spacetimes of non-trivial topology in three dimensions have been used to study partition functions and operators in 2d holographic CFTs \cite{MRW,toruspaper} as well as probe the entanglement structure of states in such theories \cite{MBW1, MBW2,cones}. These solutions arise as saddle points of the Euclidean Einstein-Hilbert action with boundary conditions given by $\partial M = X$, where $X$ is a compact Riemann surface. One can think of constructing them by filling in various cycles of $X$, and so they are often referred to as handlebody phases \cite{Brill1, Brill2, Skenderis, Krasnov1, Krasnov2}. We review some of the tools used in these works \cite{MRW,cones,toruspaper} to study these handlebody solutions of Einstein's equations. These techniques include the finite element method (FEM) for numerically solving differential equations as well as the mathematical framework of Schottky uniformization for characterizing Riemann surfaces. The focus will be on useful formulas for implementing these calculations, and as such we have attached a \textit{Mathematica} package to the electronic version of this work that implements many of these tools. The outline of this review is as follows. First, we review the aspects of finite element methods used in these constructions. Next, we give a rough overview of Schottky uniformization, focusing on explicitly writing down a uniformization for a given Riemann surface that describes a desired handlebody phase. We additionally give explicit formulas for computing the regularized action of these phases reduced by certain symmetries, which we conveniently encode in the attached \textit{Mathematica} package. Finally, we give a simple example to illustrate the concepts and techniques described. \section{Finite Element Methods} Finite element methods are numerical methods for solving differential equations which involve discretizing the domain with a set of finite ``elements'' \cite{FEMgentle,FEMlecture}. We will restrict our attention exclusively to equations in two dimensions of the form \ban{ \nabla^2 u(x,y) + f(x,y) u(x,y) = g(x,y) \, \label{FEMeq} . } In order to solve this equation, we will discretize our solution space and convert this equation into a finite dimensional matrix equation, which we can then easily solve by algebraic methods. \subsection{Discretization of the domain} First, we discretize the domain $D$ with a mesh made up of triangular elements. We will use elements with six nodes: one on each vertex and one on the midpoint of each edge. An example of a valid triangulation for a domain used in \cite{toruspaper} is shown in figure \ref{mesh}. \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{mesh} \caption{An example FEM mesh used in \cite{toruspaper}. \label{mesh} } \end{figure} We generate meshes using \textit{Mathematica}'s built-in \verb!ToElementMesh! function. Note that for numerical convenience, we approximate curved boundaries of $D$ by a large number of straight segments. We can estimate the error introduced by computing the length of $\partial D$ using the mesh and comparing it to the true value. The error introduced by this approximation can easily be made smaller by including more nodes on the boundary, and we always choose a sufficient number of nodes so that this error is always sub-leading. 
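As an illustration of this step, the following minimal \textit{Mathematica} sketch builds a second-order mesh on the unit disk (a stand-in for the domains of interest, not the domain of figure \ref{mesh}) and estimates the boundary-length error of the straight-segment approximation; the cell-size value is only a sample choice.
\begin{verbatim}
Needs["NDSolve`FEM`"];
(* second-order triangular mesh on the unit disk *)
mesh = ToElementMesh[Disk[], "MeshOrder" -> 2, MaxCellMeasure -> 0.01];
coords = mesh["Coordinates"];
(* endpoints of each boundary segment: the first two incidents of
   each second-order line element *)
segs = mesh["BoundaryElements"][[1, 1]][[All, 1 ;; 2]];
polygonLength = Total[EuclideanDistance @@@ (coords[[#]] & /@ segs)];
relativeError = Abs[polygonLength - 2 Pi]/(2 Pi)
\end{verbatim}
Refining \verb!MaxCellMeasure! (equivalently, adding boundary nodes) drives this error down, as described above.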
Given a valid mesh, we can define our solution space as the finite-dimensional space of continuous, piecewise second-order polynomials spanned by the set of functions $\psi_i$ on $D$ such that $\psi_i(n_j) = \delta_{ij} $. That is, we parameterize our solution space with a basis of second order polynomials such that $\psi_i$ is one on node $n_i$ and vanishes on all other nodes. In this way we can approximate any function as \ban{ u \approx \sum_{i=1}^N u_i \psi_i \, , \label{FEMapprox} } where $u_i = u(n_i)$ and $N$ is the number of nodes in the mesh. We can improve this approximation by increasing the number of elements in the mesh. Note that $\psi_i$ is non-vanishing only on the set of elements containing $n_i$. In the discussion below and in the attached code, we refer to such a set as the ``neighborhood'' of $n_i$, and we can simplify some of the computations by restricting only to the appropriate neighborhood. We plot an example $\psi_i$ and highlight its associated neighborhood in figure \ref{neighborhood}. \begin{figure}[h!] \centering \includegraphics[width=0.65\textwidth]{neighborhood} \caption{A contour plot of $\psi_i$ for a particular $n_i$ and FEM mesh. Note that $\psi_i$ is $1$ on $n_i$, $0$ on $n_j \neq n_i$, and non-vanishing only in the highlighted neighborhood $\mathscr N_i$. \label{neighborhood} } \end{figure} \subsection{Solving the differential equation} To convert the equation \eqref{FEMeq} to a matrix equation, we can integrate both sides against an arbitrary $\psi_i$. \ban{ \int_D \nabla^2 u \, \psi_i + \int_D f \, u\, \psi_i = \int _D g \, \psi_i \, . } Integrating by parts gives the equation \ban{ \int _{\partial D} \nabla_n u\, \psi_i - \int_D \nabla u \cdot \nabla \psi_i + \int _D f\, u \, \psi_i = \int_D g\, \psi_i\, . } Finally we can use the approximation eq. \eqref{FEMapprox} to convert this equation into a matrix equation: \ban{ \sum_{j=1}^N u_j \left[ \int_{\partial D} \psi_i \nabla_n \psi_j \right] - \sum_{j=1}^N u_j \left[ \int_{ D} \nabla \psi_i \cdot \nabla \psi_j \right] + \sum_{j=1}^N u_j f_j \left[ \int_{ D} \psi_i \psi_j \right] = \sum_{j=1}^N g_j \left[ \int_{ D} \psi_i \psi_j \right] \, . } This equation is a bit ugly, but we can clean it up by introducing the following notation: \ban{ M_{ij} =\int_{ D} \psi_i \psi_j \, \hspace{1cm}W_{ij} = \int_{ D} \nabla \psi_i \cdot \nabla \psi_j \hspace{1cm} K_{ij} = \int_{\partial D} \psi_i \nabla_n \psi_j } where $M$ and $W$ are often called the ``mass'' and ``stiffness'' matrices respectively. Note that $K_{ij}$ is non-zero only when both $n_i$ and $n_j$ are on the boundary.\footnote{Additionally it is often possible to rewrite $K_{ij}$ in a simpler manner using the boundary conditions. We do so in the applications of FEM in \cite{cones,toruspaper} and later in this section.} With these definitions we can write our equation as \ban{ \sum_{j=1}^N K_{ij} u_j - \sum_{j=1}^N W_{ij} u_j + \sum_{j,k=1}^N M_{ij} (f_{j}\delta_{kj}) u_k = \sum_{j=1}^N M_{ij} g_j \notag \\ \sum_{j=1}^N \left[ K_{ij} - W_{ij} + \sum_{k=1}^N M_{ik} (f_{k}\delta_{kj})\right] u_j = \sum_{j=1}^N M_{ij} g_j \, , \label{matrixEQ} } which now takes the form of a matrix equation $A \cdot \vec u = \vec b$. We can easily solve this equation using the \verb!LinearSolve! function in \textit{Mathematica}, after appropriately enforcing the boundary conditions. We will exclusively consider boundary conditions which can be converted into a Neumann-type form, as in \cite{MRW,cones,toruspaper}. 
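As an illustration of this last step, a minimal sketch of the assembly and solve in \textit{Mathematica} is given below; the sparse arrays \verb!Kmat!, \verb!Wmat!, \verb!Mmat! and the nodal coefficient vectors \verb!fvals!, \verb!gvals! are hypothetical placeholder names for the objects defined above.
\begin{verbatim}
(* sketch: solve the discretized system of eq. (matrixEQ), A.u = b *)
Amat = Kmat - Wmat + Mmat.DiagonalMatrix[SparseArray[fvals]];
bvec = Mmat.gvals;
uvals = LinearSolve[Amat, bvec];  (* nodal values u_i of the solution *)
\end{verbatim}
We now make the Neumann-type assumption on the boundary conditions precise.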
That is, we only consider cases where we can rewrite $K_{ij}$ using the boundary data $\nabla_n u = h$ (written $h$ to distinguish it from the coefficient $f$ in eq. \eqref{FEMeq}) in the manner \ban{ \sum_{j=1}^N K_{ij}u_j =\int_{\partial D} \psi_i \nabla_n u =\int_{\partial D} \psi_i h =\sum_{j=1}^N \int_{\partial D} \psi_i \psi_j h_j \, . } In this way, we have converted the term $K_{ij}$ in our matrix equation into a source term given by $\sum_{j=1}^N C_{ij} h_j$ where \ban{ C_{ij} = \int_{\partial D} \psi_i \psi_j \, . } This new source term enforces the appropriate boundary conditions, and so beyond this replacement no further modifications of eq. \eqref{matrixEQ} are needed to ensure the solution obeys them. The modified equation is given by \ban{ \sum_{j=1}^N \left[- W_{ij} + \sum_{k=1}^N M_{ik} (f_{k}\delta_{kj})\right] u_j = \sum_{j=1}^N\left[ M_{ij} g_j -C_{ij} h_j \right]\, .\label{matrixEQ2} } \subsection{Computation of matrix elements} In practice, we can compute the matrices $M$, $W$, and $C$ by deriving an analytic formula based on a unit ``reference element'' $R$ with vertices at $(0,0)$, $(0,1)$, and $(1,0)$ as drawn in figure \ref{elem}. \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{elem} \caption{The unit reference element with nodes labeled. \label{elem} } \end{figure} Note that as $\psi_i$ is non-vanishing only in the neighborhood of $n_i$ (denoted by $\mathscr N_i$) we can write \ban{ M_{ij} = \int_{ D} \psi_i \psi_j = \int_{ \mathscr N_i \cap \mathscr N_j} \psi_i \psi_j = \sum_{E \in \mathscr N_i \cap \mathscr N_j} \int_E \psi_i \psi_j\, . } Therefore we can decompose $M_{ij}$ (and similarly $W_{ij}$ and $C_{ij}$) as a sum of integrals over elements in $\mathscr N_i \cap \mathscr N_j$. It will therefore be useful to derive analytic formulas for the following integrals over an arbitrary element $E$ specified by the coordinates of its vertices: \ban{ m_{ij} =\int_{E} \psi_i \psi_j \, \hspace{1cm}w_{ij} = \int_{E} \nabla \psi_i \cdot \nabla \psi_j \hspace{1cm} {c_{ij}(s)} = \int_{s} \psi_i \psi_j \, , } where $E$ is assumed to contain nodes $n_i$ and $n_j$ and $s$ is a particular boundary segment of $E$. First we can compute the value of these integrals on the unit reference element, then transform to an arbitrary element $E$ using an appropriate change of coordinates.\footnote{One can also compute these integrals using Gaussian quadrature rules as in \cite{MRW}, but we choose to eliminate the need to compute any analytic derivatives of the $\psi_i$.} Quantities associated with the reference element we denote by a superscript $R$. First we can write $\psi_i$ for the reference element: \ban{ \begin{array}{lll} \psi_1^{(R)} = (x+y-1)(2x+2y-1)\, , & \psi_2^{(R)} = x (2x-1)\, , & \psi_3^{(R)} = y (2y-1)\, ,\\ \psi_4^{(R)} = 4x (1-x-y) \, ,&\psi_5^{(R)} =4xy \, ,& \psi_6^{(R)} = 4y (1-x-y)\, . \end{array} } One can easily see that these functions are second order polynomials that satisfy $\psi_i(n_j) = \delta_{ij} $ as required. Additionally, given these expressions we can analytically compute the 36 matrix elements of $m^{(R)}_{ij}$, $w^{(R)}_{ij}$, and each of the three $c_{ij}^{(R)}(s)$. To transform from the reference element to an arbitrary element with nodes $n_i = (x_i, y_i)$, we can perform the coordinate transformation \ban{ \pp {x'}{y'} = \begin{pmatrix} x_2-x_1 & x_3 - x_1 \\ y_2 -y_1& y_3 - y_1 \end{pmatrix}\pp x y + \pp{x_1}{y_1} \, . 
} Using the standard change of basis formulas for integrals and derivatives, we can derive analytic expressions for the matrix elements as functions of the vertices $(x_i,y_i)$ of $E$: \ban{ m_{ij}&= |J| \footnotesize \begin{bmatrix} \frac{1}{60} & -\frac{1}{360} & -\frac{1}{360} & 0 & -\frac{1}{90} & 0 \\ -\frac{1}{360} & \frac{1}{60} & -\frac{1}{360} & 0 & 0 & -\frac{1}{90} \\ -\frac{1}{360} & -\frac{1}{360} & \frac{1}{60} & -\frac{1}{90} & 0 & 0 \\ 0 & 0 & -\frac{1}{90} & \frac{4}{45} & \frac{2}{45} & \frac{2}{45} \\ -\frac{1}{90} & 0 & 0 & \frac{2}{45} & \frac{4}{45} & \frac{2}{45} \\ 0 & -\frac{1}{90} & 0 & \frac{2}{45} & \frac{2}{45} & \frac{4}{45} \\ \end{bmatrix} \notag \\ w_{ij} &= \frac 1 {6|J|} \footnotesize \begin{bmatrix} 3 \, \chi _{23} & \xi _3 & \xi _2 & -4 \, \xi _3 & 0 & -4 \, \xi _2 \\ \xi _3 & 3 \, \chi _{13} & \xi _1 & -4 \, \xi _3 & -4 \, \xi _1 & 0 \\ \xi _2 & \xi _1 & 3 \, \chi _{12} & 0 & -4 \, \xi _1 & -4 \, \xi _2 \\ -4 \, \xi _3 & -4 \, \xi _3 & 0 & 4 \left(\chi _{12}+\chi _{13}+\chi _{23}\right) & -8 \, \xi _2 & -8 \, \xi _1 \\ 0 & -4 \, \xi _1 & -4 \, \xi _1 & -8 \, \xi _2 & 4 \left(\chi _{12}+\chi _{13}+\chi _{23}\right) & -8 \, \xi _3 \\ -4 \, \xi _2 & 0 & -4 \, \xi _2 & -8 \, \xi _1 & -8 \, \xi _3 & 4 \left(\chi _{12}+\chi _{13}+\chi _{23}\right) \\ \end{bmatrix}\, , } where we have used the notations \ban{ |J| &= | \left(x_3-x_2\right) y_1+\left(x_1-x_3\right) y_2+\left(x_2-x_1\right) y_3 |\notag \\ \xi_1 &= (x_1-x_2)(x_1-x_3) +(y_1-y_2)(y_1-y_3) \notag\\ \xi_2 &= (x_2-x_1)(x_2-x_3) +(y_2-y_1)(y_2-y_3) \notag\\ \xi_3 &= (x_3-x_1)(x_3-x_2) +(y_3-y_1)(y_3-y_2)\notag\\ \chi_{12} &= (x_1-x_2)^2+(y_1-y_2)^2\notag\\ \chi_{23} &= (x_2-x_3)^2+(y_2-y_3)^2\notag\\ \chi_{13} &= (x_1-x_3)^2+(y_1-y_3)^2 \, . } For $c^{(s)}_{ij}$ we can write the matrix elements as \ban{ &c^{(s)}_{a_s a_s} = c^{(s)}_{b_s b_s} = 2/15 |s| \notag\\ &c^{(s)}_{a_s d_s} = c^{(s)}_{b_s d_s} = 1/15 |s|\notag\\ &c^{(s)}_{a_s b_s} =-1/30 |s|\notag\\ & c^{(s)}_{d_s d_s} = 8/15|s| \, , } where all matrix elements not implied by $i\leftrightarrow j$ symmetry vanish and segment $s$ extends between nodes $a_s$ and $b_s$ with midpoint $d_s$, and $|s|$ is the Euclidean length of segment $s$. Using these formulas provides an efficient way to compute $M_{ij}$, $W_{ij}$, and $C_{ij}$, and then numerically solve a given differential equation in terms of the matrix equation eq. \eqref{matrixEQ2}. \section{Handlebody Phases} All solutions of the vacuum Einstein equations with negative cosmological constant in 2+1 dimensions are quotients of AdS$_3$. These solutions provide a rich set of spacetimes for probing holography, as we are able to construct geometries with non-trivial topology simply by taking quotients. These geometries often arise as the gravitational duals of CFT states in two dimensions defined via Euclidean path integrals. For a state defined as a path integral over a genus $g$ Riemann surface $X$, the associated gravitational path integral with boundary conditions $\partial M = X$ has a set of Euclidean saddles which we can characterize by specifying a set of $g$ cycles on the boundary to be made contractible in the bulk. We refer to these saddles as handlebody phases, which have been extensively studied in \cite{Brill1, Brill2, Skenderis, Krasnov1, Krasnov2, MRW}, and which are the focus of this section. In this section, we review methods for constructing these handlebody phases and evaluating their actions. 
We focus on practical tools and formulas for doing computations, and we refer the reader to \cite{toruspaper} and the various references for more details on the rich mathematical theory underlying these methods. In particular, we will show how to compute the regularized Einstein-Hilbert action in the conformal frame where $R_\text{bndy} = -2/\ell^2$, and we will set $\ell=1$. \subsection{Schottky Uniformization} We can construct a convenient representation of a handlebody phase, called a Schottky uniformization, by starting with the boundary Riemann surface $X$ of genus $g$. To specify a handlebody phase, we need to choose a set of $g$ independent and non-intersecting cycles to be made contractible in the bulk. For example, given a basis $\{\alpha_i, \beta_j\}$ of the fundamental group of $X$ such that $\alpha_i \beta_j = \delta_{ij}$ and $\prod_i \alpha_i^{-1}\beta_i^{-1}\alpha_i \beta_i = 1$, we can choose the set of $g$ cycles $\{\alpha_i\}$, the cycles $\{\beta_i\}$, or any set of cycles given by the image of $\{\alpha_i\}$ under an element of the mapping class group. Having chosen a set of $g$ cycles, we now cut open the Riemann surface along each cycle and label each side of the cut $C_i$ and $C_i'$. The resulting surface is a Riemann sphere punctured by $2g$ circles that come in pairs. We can project this sphere into the complex plane, resulting in a Schottky domain for $X$. It is often useful to make sure that certain reflection and rotational symmetries of $X$ are preserved along the way, although it is sometimes not possible to preserve all such symmetries. Alternatively, one can begin with $2g$ circles in the complex plane and then reverse engineer the corresponding surface $X$ and handlebody phase, although we found this process to be more difficult in practice.\footnote{There is an additional complication that sometimes the symmetries of $X$ act in a non-trivial way on the Schottky uniformization, so determining the bulk geometry on a particular symmetry slice of the boundary can be difficult.} The region in $\mathbb C$ exterior to all the $C_i$ and $C_i'$ can be taken as a fundamental domain $D$ for the surface $X$. As $C_i$ and $C_i'$ are the same cycle on $X$, we can recover $X$ from the Schottky uniformization by taking the quotient by the subgroup of M\"obius transformations $\corr{L_i}$, where each $L_i$ maps the interior of $C_i$ to the exterior of $C_i'$. The Schottky domain resulting from this construction describes a bulk phase in which the initial cycles chosen on the boundary are contractible in the bulk. If we consider the upper half-space model of $\mathbb H^3$ with the complex plane as its boundary,\footnote{We remind the reader that Euclidean AdS$_3$ is $\mathbb H^3$ or three dimensional hyperbolic space.} we can extend the identifications on the boundary into the bulk along geodesic hemispheres. That is, the quotient group acts in the bulk by identifying the hemispheres anchored on $C_i$ and $C_i'$. In this way, the cycles homologous to $C_i$ on the boundary are contractible in the bulk, as they may be lifted off the boundary along the corresponding hemisphere and shrunk down to a point. The dual cycles running between $C_i$ and $C_i'$ remain non-contractible. Therefore, we have successfully described the handlebody phase with the requisite boundary cycles contractible in the bulk. One way to characterize a handlebody phase is by the topology of a particular slice through the bulk, often corresponding to a moment of time-reflection symmetry. 
When this slice is fixed by a reflection symmetry of the boundary $X$, we can compute the topology using the following formula: \ban{ g_\text{slice} = \frac 12 (n-b+1)\, , \label{sliceG} } where $b$ is the number of disconnected boundaries of the slice and $n$ is the number of pairs of circles that lie on the slice. Note that the assumption of reflection symmetry ensures that either both circles of a pair lie on the slice or neither do. For example, a slice intersecting $2$ pairs of circles that divides the boundary into $3$ disconnected circles has no topology in the interior, and so this slice describes a simple three boundary wormhole. To compare the gravitational action between different phases, we must numerically solve for a standard conformal frame on the boundary and regularize the action. Additionally, we must be sure to compare phases with the same boundary $X$, and so we will need to compute the moduli of the boundary for each phase, and match the moduli between phases. This process is computationally intensive, but we may sometimes use a heuristic to get a rough understanding of the phase diagram. In general the phase with minimal action will be the one in which the total length of the boundary cycles made contractible is minimized.\footnote{Note that we have fixed the conformal frame of the boundary to be $R_\text{bndy}=-2$.} Note that for many phases there is not a unique choice of $g$ cycles that yields that phase, and so when applying the heuristic one must use the choice of $g$ cycles that yields the minimal action. We can summarize this heuristic simply as: \textit{``Shorter cycles are more likely to pinch off than longer cycles''}. While this heuristic does not hold exactly (in fact we can construct cases where it fails), it is true approximately in the sense that as boundary cycles get longer the phase in which they are contractible becomes more subdominant. In this way, this heuristic is a useful shortcut for determining the general structure of the phase diagram. \subsection{The boundary metric} In order to fully specify the boundary Riemann surface $X$ and the corresponding handlebody phase, we need, along with the set of contractible cycles, to additionally specify the $3g-3$ moduli of the boundary. In the cases we consider, some of the moduli are fixed by symmetry, while others are computed by evaluating the lengths of certain geodesics on the boundary. Therefore, we need to specify a boundary metric before we can fully match a Schottky uniformization with its Riemann surface $X$. As detailed in \cite{Krasnov1}, to properly renormalize the gravitational action, we should choose a conformal frame on the boundary in which $R_\text{bndy}=-2$. As all metrics in 2d are conformally flat, we can write in general \ban{ ds^2 = e^{2\phi(w)} |dw|^2\, , } where $\phi(w)$ is an arbitrary function for which we will solve. The regularity of the metric under the quotient by the $L_i$ imposes the following boundary conditions on $\partial D$: \ban{ \phi(L_i(w)) = \phi(w) - \frac 12 \log |L'_i(w)|^2\, . \label{bcs} } Additionally, the requirement $R_\text{bndy}=-2$ yields the Liouville equation for $\phi$: \ban{ \nabla^2 \phi = e^{2 \phi}\, . \label{Leqn} } In all cases we consider, the circles $C_i$ and $C_i'$ are fixed point sets of a symmetry of $D$ given by inversion through some circle in the complex plane. 
Using polar coordinates $(r_I,\theta_I)$ centered on the circle of inversion with radius $R_I$, invariance of the metric under this symmetry requires that \ban{ \phi(R^2_I /r_I , \theta_I ) = \phi(r_I, \theta_I)+ \log(r_I^2/R_I^2)\, . } Differentiating with respect to the unit normal $\hat r_I$ we find \ban{ \partial_{r_I}\phi(R_I^2/r_I, \theta)= - \frac{r_I^2}{R_I^2}\left( \partial_{r_I} \phi(r_I, \theta_I) + \frac 2 {r_I}\right)\, . \label{inversionSym} } Evaluating this equation on $r_I = R_I$ we have the simple formula that on $C_I$ \ban{ \left . \partial_{r_I} \phi \right|_{C_I} = - \frac 1 {R_I} \, . \label{INVbc} } In fact, we can show that when $C_i$ and $C_{i'}$ are related by an involution symmetry, this equation also holds on $C_i$ and $C_{i'}$. First, we consider $C$ and $C'$ as concentric circles centered at the origin with radii $\lambda$ and $1/\lambda$ respectively with $\lambda >1$, and $L(w) = w/\lambda^2$. The domain $D$ is the region between the circles\footnote{Note that the ``outside'' of $C$ is the region including the origin.}. From the boundary conditions eq. \eqref{bcs} we have \ban{ \frac 1 {\lambda^2} \partial_r \phi(1/\lambda) = \partial_r \phi(\lambda)\, . } Additionally, $C$ and $C'$ are related by inversion through the unit circle, and so by eq. \eqref{inversionSym} we have \ban{ \partial_r \phi(1/\lambda) = - \lambda^2 \left(\partial_r \phi(\lambda) + \frac 2 \lambda \right)\, . } Solving these two equations and noting $\nabla_n = \pm \partial_r$ for $C$ and $C'$ respectively we have \ban{ \left. \nabla_n \phi\right|_C = - \frac 1 \lambda \hspace{1cm} \left. \nabla_n \phi\right|_{C'} = \lambda \, . \label{Cbc} } Or in general we have \ban{ \left. \nabla_n \phi \right|_{C_i} = \frac {\sigma_i}{R_i} \label{FEMbc}\, , } where $\sigma_i = \pm 1$ for $D$ outside or inside of $C_i$ respectively and $R_i$ is the radius of $C_i$. To show that the condition eq. \eqref{FEMbc} holds whenever $C_i$ and $C_{i'}$ are related by an involution symmetry, we can perform a M\"obius transformation to move the unit circle to the appropriate circle of involution. Let the appropriate transformation be given by \ban{ w' = \frac{a\,w + b}{c\,w+d} \label{Mtrans}\, . } Under this transformation we have \ban{ \vec\nabla' \left(\phi(w') -\frac 12 \log \left|\frac{bc-ad}{(a-c\, w')^2}\right|^2\right)= J^{-1}\cdot \vec \nabla \phi(w)\, , } where $J$ is the Jacobian of the transformation eq. \eqref{Mtrans}. As on $C$ and $C'$ we know $\vec \nabla \phi(w)$ from eq. \eqref{Cbc} and we can compute $J$, we can solve this equation for $\vec \nabla {}'\phi(w')$. Taking the inner product with the normal vector on the image of $C$ and $C'$ under the coordinate transformation yields eq. \eqref{FEMbc}. We can now solve eq. \eqref{Leqn} using the Newton-Raphson algorithm and the finite element methods described in the previous section. First, we write $\phi= \phi_{(n)} + \delta \phi_{(n)}$ and expand the Liouville equation to first order in $\delta\phi_{(n)}$: \ban{ \nabla^2 \delta\phi_{(n)} - 2 e^{2\phi_{(n)}}\delta\phi_{(n)} = -\left(\nabla^2 \phi_{(n)} - e^{2\phi_{(n)}} \right)\, . } With the assumption that all $C_i$ and $C_{i'}$ are related by a $\mathbb Z_2$ symmetry of the domain, we can rewrite the boundary conditions eq. \eqref{bcs} as Neumann-type conditions eq. \eqref{FEMbc}. 
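The elementary linear algebra leading to eq. \eqref{Cbc} is easily double-checked symbolically; in the following minimal sketch, \verb!a! stands for $\partial_r \phi(1/\lambda)$ and \verb!b! for $\partial_r \phi(\lambda)$.
\begin{verbatim}
(* sketch: solve the two linear relations preceding eq. (Cbc) *)
Solve[{a/lam^2 == b, a == -lam^2 (b + 2/lam)}, {a, b}]
(* -> {{a -> -lam, b -> -1/lam}}, i.e. the normal derivative is
   -1/lambda on C and +lambda on C' *)
\end{verbatim}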
In the manner discussed in the previous section, we can enforce these boundary conditions by introducing a source term in the integral form of our differential equation:\footnote{Note that in the last term the orientation $\sigma_i$ is absorbed into the orientation of $d\theta_i$ in the manner described in the next section.} \ban{ - \int_D \nabla \psi\cdot \nabla \delta \phi_{(n)}- 2 \int_D \psi \, e^{2\phi_{(n)}} \, \delta\phi_{(n)} = \int_D \nabla \psi \cdot \nabla \phi_{(n)} + \int_D \psi \, e^{2\phi_{(n)}} + \sum_{i}\frac {1}{R_i} \int_{\partial D_i} {\psi}\, d \theta_i \, . \label{inteq} } This equation is now in the form to apply the formulas from the previous section. Further, we can often use the symmetries of the Schottky uniformization to reduce $D$ down to a reduced domain $\tilde D$. In all cases we consider, we use at least one reflection symmetry to reduce $D$, and without loss of generality we can choose for this reflection symmetry to act as inversion through the unit circle. Therefore, we choose to always work with a finite domain $\tilde D$. Note that the boundary conditions on the unit circle are fixed by eq. \eqref{INVbc}, and are accounted for by the final term in eq. \eqref{inteq}. Using FEM to discretize this equation, we can then solve the appropriate matrix equation for $\delta\phi_{(n)}$. Then, we update our solution to $\phi_{(n+1)} = \phi_{(n)}+ \delta \phi_{(n)}$ and solve a similar equation for $\delta\phi_{(n+1)}$. Starting with an initial seed of $\phi_{(0)}=0$, we repeat this process until $||\delta\phi_{(n+1)}||_\infty<10^{-10}$ or another desired accuracy. Given the solution for $\phi$, we can use the metric to numerically compute the lengths of all segments of $\partial \tilde D$. However, to compute the lengths of geodesics that do not make up $\partial \tilde D$ we must use a different method. We note that the region $\tilde D$ with $R_\text{bndy}=-2$ can be represented as a region in $\mathbb H^2$, and so if we can construct this region we can use the known analytic properties of $\mathbb H^2$ to compute the lengths of geodesics. Given a region $\tilde D$ with boundary segments $\partial \tilde D_i$ given by geodesics that meet at right angles,\footnote{This condition will be guaranteed by our symmetry requirements.} we can construct a corresponding region in $\mathbb H^2$ by the following algorithm. First, we start with an arbitrary geodesic segment of length $|\partial \tilde D_1|$. Next, we solve for the geodesic in $\mathbb H^2$ that intersects it orthogonally at its endpoint, and we follow that geodesic for length $|\partial \tilde D_2|$. We continue this process until we have represented all boundary segments and form a closed region. Using this region in $\mathbb H^2$, we can now solve for the lengths of geodesics using well known formulas. In this way, we can compute all the remaining moduli of the boundary $X$. \subsection{The bulk action} We can now compute the Einstein-Hilbert action for the associated handlebody phase. In terms of the field $\phi$ it was shown in \cite{MRW} that the regularized action is given by \ban{ I = - \frac c{24\pi} \left[ I_\text{TZ}[\phi] - A - 4 \pi (g-1)(1-\log 4 \rho_0^2)\right]\, , \label{fullaction} } where $A$ is the area of the boundary, $c=3/(2G_N)$ is the central charge of the dual CFT, and $\rho_0$ is the radius of the sphere for which the partition function is one, and we set $\rho_0=1$. 
Additionally defining $R_i$ to be the radius of $C_i$ and $\Delta_i$ as the distance between the center of $C_i$ and the point $w_\infty^{(i)}$ mapped to $\infty$ by $L_i$, we have \ban{ I_{TZ}[\phi] = \int_D d^2 w\left( \left(\nabla \phi\right)^2 + e^{2\phi} \right) + \sum_i \left(\int_{C_i} 4 \phi\, d\theta_\infty^{(i)} - 4 \pi \log \left |R_i^2 - \Delta_i^2 \right|\right)\,, \label{TZaction} } where $\theta_\infty^{(i)}$ is the angle measured from the point $w_{\infty}^{(i)}$. In the rest of this section, we use our assumption of symmetries to simplify this action and derive useful formulas. First, we note that on shell we have the relation \ban{ A = \int_D d^2 w \, e^{2\phi}\, } and therefore the term $A$ in eq. \eqref{fullaction} cancels part of the integration in eq. \eqref{TZaction}. Further, we can reduce the remaining integral over $D$ to integrations over $\tilde D$ using the various inversion and reflection symmetries. Using the relation eq. \eqref{inversionSym} we have \ban{ \int_D d^2 w \left(\nabla \phi\right)^2 = 2 \int_{\tilde D}d^2 w \left[\left(\nabla \phi\right)^2 + \frac{2}{r_I}\partial_{r_I}\phi + \frac 2{r_I^2} \right]\, . } In practice, we only use reflections and inversion through the unit circle to reduce the Schottky domain.\footnote{Note that for some domains we consider inversion through the unit circle is not a symmetry, but the product of this inversion with a reflection is a symmetry. The discussion that follows also applies to this case.} We can think of a reflection as the limit of an inversion where $R_I \to \infty$, and so we see that there are no additional terms generated by this reduction (i.e. we can simply integrate over half the domain and multiply by a factor of $2$). Therefore reducing the domain by a product of $s$ reflections and an inversion through the unit circle yields \ban{ \int_D d^2 w \left(\nabla \phi\right)^2 & = 2^{s+1}\left[\int_{\tilde D}d^2 w \left(\nabla \phi\right)^2 + \int_{\tilde D}d^2 w \left(\frac{2}{r}\partial_{r}\phi + \frac 2{r^2} \right)\right]\notag\\ & = 2^{s+1} \int_{\tilde D}d^2 w \left(\nabla \phi\right)^2 + 2^{s+2} \int_{\partial \tilde D} \phi \, d\theta + 2^{s+2} \int_{\partial \tilde D} \log r\, d\theta \, . } We can additionally integrate by parts to get \ban{ \int_D d^2 w \left(\nabla \phi\right)^2 & = -2^{s+1} \int_{\tilde D}d^2 w \phi \nabla^2 \phi +2^{s+1} \int _{\partial \tilde D} \phi \nabla_n \phi + 2^{s+2} \int_{\partial \tilde D} \phi \, d\theta + 2^{s+2} \int_{\partial \tilde D} \log r\, d\theta \, ,\label{inversionCont} } which we can further simplify using the equations of motion $\nabla^2 \phi = e^{2\phi}$. Note that with our assumptions the boundary of $\tilde D$ consists of lines through the origin, the unit circle $U$, and some portion of the circles $C_i, C_i'$, and so we write $\partial \tilde D = \{\partial D_i\}$. It is thus convenient to write the integrals over $\partial \tilde D$ above as integrals over these lines and circles. All the integrals over the lines through the origin vanish due to either $d\theta$ or $\vec \nabla{}_n \phi$ vanishing, and so we are left with the integral over circles. Denoting $\mathscr I [\partial D_i]$ the contribution to the action from boundary segment $\partial D_i$, we can write the action as \ban{ - \frac{24\pi}{c} I = - 4 \pi (g-1)(1-\log 4) - 2 \int_{\tilde D}\phi\, e^{2\phi} \, d^2 w + \sum_i \mathscr I \left[\partial D_i\right]\, , } where $\mathscr I[\partial D_i]$ includes possible contributions coming from eq. \eqref{TZaction}. 
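Once the nodal values of $\phi$ are known, the bulk integral in this expression can be approximated with the same mass matrix used in the FEM solve, interpolating $e^{2\phi}$ at the nodes. In the following minimal sketch, \verb!Mmat! and \verb!phivals! are hypothetical names for the mass matrix assembled on the mesh of $\tilde D$ and the nodal solution vector.
\begin{verbatim}
(* sketch: quadrature for the bulk term, the integral of
   phi Exp[2 phi] over the reduced domain *)
bulkTerm = phivals . Mmat . Exp[2 phivals];
\end{verbatim}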
We now compute this contribution for each type of boundary segment. The following subsection is rather technical, and should be thought of as a compendium of useful formulas. The reader more interested in the overall narrative should skip to the example in \S\ref{sec:example}. \subsection{Boundary circle contributions} For simplicity, in this section we only compute the contribution reduced over inversion through the unit circle (or the product of this inversion and a reflection). Reducing over more reflections is straightforward and simply multiplies certain terms by factors of $2$. Throughout this section, we leave the sign inherited through the orientation of $\partial D$ implicit, i.e. we have \ban{ \int_{C_i} d\theta^{(i)}_0 = \pm\, 2 \pi } where we choose the positive or negative sign when $D$ lies inside or outside $C_i$ respectively. As previously mentioned, when $\partial D_i$ is a line the contribution $\mathscr I [\partial D_i]$ vanishes. For the unit circle $U$, we only have the contribution from eq. \eqref{inversionCont}: \ban{ \mathscr I [ U] = 2 \int_{U} \phi \nabla_n \phi \, d\theta + 4 \int_{U} \phi \, d\theta \, , } as the $\log r$ term vanishes on $U$. We can use eq. \eqref{INVbc} to rewrite the normal derivative and we have \ban{ \mathscr I [U] = 2 \int_{U} \phi \, d\theta \, . } The rest of the boundary segments are made up of parts or all of $C_i$ and $C_i'$. There are multiple cases depending on the positions of these circles, and we go through all of them in detail. Note that we only consider cases in which the domain can be reduced by at least inversion through the unit circle, and additionally in which $C_i$ and $C_i'$ are the fixed point set of a symmetry of the domain. In the simplest case, only one of $C_i$ or $C_i'$ is included in $\partial \tilde D$ and this circle does not intersect $U$. Without loss of generality, we can choose $C_i$ to be included in $\partial \tilde D$, so we have contributions to $\mathscr I [C_i]$ from eq. \eqref{inversionCont} and from the final summation in eq. \eqref{TZaction}. Using the boundary conditions eq. \eqref{FEMbc} we have \ban{ \mathscr I[ C_i] = -2 \int_{ C_i} \phi\, d\theta_0^{(i)} + 4 \int_{ C_i} \phi \, d \theta +4 \int_{ C_i} \log r \, d \theta + 4 \int_{ C_i} \phi\, d\theta_\infty^{(i)} - 4 \pi \log |R_i ^2 - \Delta_i ^2 | \, , } where $\theta_0^{(i)}$ is the angular coordinate measured from the center of $C_i$. Additionally, one can show \ban{ \int_{ C_i} \log r \, d \theta = \pi \log \left( 1- R_i^2/X_i^2\right)\, , } for $X_i > R_i$ where $X_i$ is the Euclidean distance of the center of $C_i$ from the origin. Putting everything together we have \ban{ \mathscr I[ C_i] = 2 \int_{ C_i} \phi\, \left( 2 d \theta +2 d\theta_\infty^{(i)}- d\theta_0^{(i)}\right) +4 \pi \log \frac{1- R_i^2/X_i^2}{|R_i ^2 - \Delta_i ^2 |} \, . } Further, it is only convenient numerically to integrate over $d\theta_0^{(i)}$, and so we introduce Jacobian factors to transform $d\theta$ and $d\theta_\infty^{(i)}$. In general, integrating on $C_i$ over an angle $\xi$ measured from a point along the axis connecting the origin and the center of $C_i$ introduces the factor\footnote{Note that using the signed Jacobian factor is more convenient numerically as a built-in way to keep track of possible orientation reversal.} \ban{ \frac{d\xi}{d\theta_0^{(i)}} = \frac{R_i (R_i-d \cos \theta_0^{(i)})}{d^2-2 \, d \, R_i \cos \theta_0^{(i)}+R_i^2}\, , \label{Jangle} } where $d$ is the signed distance between the center of $C_i$ and the point.
For example, applying this formula to $\theta$ we have $d= - X_i$ and \ban{ \frac{d\theta}{d\theta_0^{(i)}} = \frac{R_i (R_i+X_i \cos \theta_0^{(i)})}{X_i^2+2 \, X_i \, R_i \cos \theta_0^{(i)}+R_i^2}\, . } In the second case, we assume both $C_i$ and $C_{i'}$ are fully contained in $\partial \tilde D$. By the symmetry assumptions, there must be a conjugate pair $C_{\bar i}$, $C_{\bar i'}$ related by inversion through the unit circle. Therefore we must account for the contribution from this pair as well. Following similar arguments and using the transformation of $\phi$ under the inversion, we have \ban{ \mathscr I [C_i ] + \mathscr I [C_i'] = & 2 \int_{C_i} \phi \,(2d\theta+2d\theta_\infty^{(i)}-d\theta_0^{(i)}) +4 \int_{C_i}(\phi +2 \log|w|) \frac{d\theta_\infty^{(\bar i)}}{d \theta_0^{(\bar i)}}\frac{d\theta_0^{(\bar i)}}{d \theta_0^{(i)}} d\theta_0^{(i)} \notag \\ &+2 \int_{C_{i'}} \phi \,(2d\theta-d\theta_0^{(i')})+ 4\pi \log \frac{ (1- {R_i^2}/{X_i^2})(1- {R_{i'}^2}/{X_{i'}^2})}{\left |R_i^2 - \Delta_i^2 \right|\left |R_{\bar i}^2 - \Delta_{\bar i}^2 \right| } \, . } We can similarly introduce Jacobian factors of the form eq. \eqref{Jangle} to numerically evaluate these integrals. The Jacobian for transforming the integral on $C_{\bar i}$ to one on $C_{i}$ can be worked out geometrically as \ban{ \frac{d\theta_0^{(\bar i)}}{d \theta_0^{(i)}} = \frac{R_i^2 - X_i ^2}{{X_i^2+2 \, X_i \, R_i \cos \theta_0^{(i)}+R_i^2}} \, . \label{Jinv} } Finally, we have to consider the cases in which $C_i$ and $C_i'$ intersect the unit circle. First, we consider the case in which $C_i$ is mapped to itself under inversion through the unit circle. In this case the analytic formulas were worked out in \cite{MRW} and we have \ban{ \mathscr I[C_i] + \mathscr I[C_i'] = 2 \int _{\tilde C_i } \phi\, d\theta_0^{(i)}+2 \int _{\tilde C_i' } \phi\, d\theta_0^{(i')} - 8 \pi \log R_i + 8 \int_0 ^{2 \arctan R_i } \frac{x}{\sin x} dx\, , } where $\tilde C_i$ refers to the part of $C_i$ that lies in $\partial \tilde D$ and similarly for $\tilde C_i'$. Additionally, we can consider the case when inversion through the unit circle is not a symmetry, but the product of this inversion and a reflection is a symmetry. In this case, the part of $C_i$ outside of $\tilde D$ gets mapped to the part of $C_{i'}$ inside $\tilde D$, and so we must include the appropriate Jacobian factor for inversion as in eq. \eqref{Jinv}, with an extra minus sign to account for the reversal of orientation: \ban{ \mathscr I [C_i ] + \mathscr I [C_i'] = & 2 \int_{\tilde C_i} \phi \,(2d\theta+2d\theta_\infty^{(i)}-d\theta_0^{(i)}) + 4 \int_{\tilde C_i} \log |w| \, d\theta \notag \\ &+2 \int_{\tilde C_{i'}} \phi \,(2d\theta-d\theta_0^{(i')}) + 4 \int_{\tilde C_{i'}} \log |w| \, d\theta \notag \\ & +4 \int_{\tilde C_{i'}}(\phi + 2 \log|w|)\frac{d\theta_\infty^{(i)}}{d \theta_0^{(i)}} \frac{d\theta_0^{(i)}}{d \theta_0^{(i')}} d\theta_0^{(i')}- 4\pi \log {\left |R_i^2 - \Delta_i^2 \right|} \, . } All of the above formulas are included in the attached \textit{Mathematica} package, providing a convenient set of tools to study these phases. \section{A \textit{Mathematica} package} \label{sec:packages} In this section, we document the usage of the attached \textit{Mathematica} packages for computing the action and moduli of a handlebody phase.
There are two packages included: \textit{FEMfine.m} implements general finite element methods for numerically solving differential equations, and \textit{handlebodies.m} provides a framework for solving for the handlebody geometry. To load the packages, make sure both files are included in the same directory as the working notebook and execute
\begin{verbatim}
SetDirectory[NotebookDirectory[]];
<< "handlebodies.m";
\end{verbatim}
The \textit{FEMfine} package is automatically loaded as part of \textit{handlebodies}. To solve for a handlebody, one must first specify the circles $C_i$ and $C_i'$ in the domain and the symmetries. There are five allowed circle types, as documented in figure \ref{circletypes}, categorized according to the symmetry under which $C_i$ and $C_i'$ are exchanged. \begin{figure}[h!] \centering \includegraphics[width=0.75\textwidth]{circles} \caption{Illustration of the allowed circle types according to the symmetries that exchange $C_i$ and $C_i'$, as follows. ``R'': a reflection across $\hat x$ or $\hat y$. ``Inv'': inversion through the unit circle. ``InvR'': the product of a reflection and inversion through the unit circle. ``InvRU'': circles which are InvR and also intersect the unit circle. ``RU'' (not pictured): circles exchanged by a reflection which also intersect the unit circle; these must additionally be fixed under inversion through the unit circle. \label{circletypes} } \end{figure} Inversion in the unit circle must be a symmetry of the domain, and one can additionally specify reflection across the $x$ axis or $y$ axis as symmetries. This framework allows one to construct all of the handlebody phases considered in \cite{MRW,cones,toruspaper}, and additionally one can construct a large set of handlebodies for general application. In the \textit{handlebodies} package, one can specify the handlebody via the following code:
\begin{verbatim}
InitializeHandlebody[]
AddCircle[{c1, r1, t1}]
AddCircle[{c2, r2, t2}]
...
AddSymmetry["x"]
AddSymmetry["y"]
\end{verbatim}
where $c=\{c_x,c_y\}$ is the center of each circle, $r$ is the radius, and $t$ is the type. The function \verb!InitializeHandlebody[]! resets the list of circles and symmetries, and sets the mesh generation parameters in \textit{Mathematica}'s \verb!ToElementMesh! function as ``MaxCellMeasure''$\to 0.005$ and ``AccuracyGoal''$\to 4$. To increase the quality of the mesh, one can change the values of these parameters by resetting the variables \verb!mcm! and \verb!ag! to the desired values after the handlebody is initialized. Once the handlebody is specified, executing the command \verb!SolveHandlebody[name]! computes a set of quantities and stores them as \verb!name["Attribute"]!. If no variable \verb!name! is specified, the attributes are stored as \verb!Handlebody["Attribute"]!. The full list of quantities computed can be seen in the package documentation, and a few relevant ones are listed below.
\begin{itemize} \item \verb!name["genus"]!: Genus of the boundary Riemann surface \item \verb!name["mesh"]!: Finite element mesh used to discretize the domain $D$ \item \verb!name["CError"]!: Estimation of the numerical error due to discretization by the mesh \item \verb!name["AError"]!: Estimation of the numerical error in the computed area, as compared to the area determined from the genus by the Gauss-Bonnet theorem \item \verb!name["BoundaryLengths"]!: List of the length of each boundary segment $\partial D_i$, computed using the solution for the metric, in the order \{circle segments, $x$ segments, $y$ segments\}; the circle segments appear in the order in which they were added, with the unit circle first. \item \verb!name["Action"]!: The Einstein-Hilbert action for the handlebody \end{itemize} One can read off various moduli of the Riemann surface from the list of boundary segment lengths, and additionally one can use this list to construct the analogous region in $\mathbb H^2$ to compute the rest of the moduli. Additionally, one must match moduli between different phases to determine the dominant phase for given boundary conditions. The \verb!NM! and \verb!GradSearch! functions are included as part of \textit{handlebodies} as convenient ways to match moduli using Newton's method and a gradient-search method, respectively. The documentation for these functions can also be found in the package. \section{An example} \label{sec:example} As an example, we can use the \textit{handlebodies} package and the methods outlined above to study the toroidal geon phase originally studied in \cite{MRW}. First, we choose boundary conditions given by the genus $2$ Riemann surface drawn in figure \ref{X2} with three $\mathbb Z_2$ symmetries. \begin{figure}[h!] \centering \includegraphics[width=0.65\textwidth]{X2.pdf} \caption{Boundary Riemann surface with three $\mathbb Z_2$ symmetries given by reflection in each dashed line and the plane of the page. \label{X2} } \end{figure} This Riemann surface has a two-dimensional moduli space left over after imposing these symmetries. In order to specify a handlebody phase, we choose two independent cycles to make contractible in the bulk. There are three distinct choices that respect the $\mathbb Z_2$ symmetries of the boundary. Letting the $\alpha$ cycles go around the handles (red in fig.\ \ref{X2cycles}) and the $\beta$ cycles go around the holes (orange in fig.\ \ref{X2cycles}), the phases are defined by choosing $\{\alpha_1, \alpha_2\}$ contractible, choosing $\{\beta_1, \beta_2\}$, or choosing $\{\alpha_1 - \alpha_2, \beta_1+\beta_2\}$. Each of these choices results in a different handlebody phase. We choose to study the phase in which $\{\alpha_1 - \alpha_2, \beta_1+\beta_2\}$ are contractible. These cycles are drawn in blue and green in figure \ref{X2cycles}, respectively. To study this phase we must first cut the Riemann surface apart along these cycles and project it into the complex plane. First, cutting the Riemann surface along $\alpha_1-\alpha_2$ yields a square torus with two punctures related by a reflection symmetry. Next, we can cut this torus along its $\beta$ cycle (i.e. $\beta_1+\beta_2$) to yield the Riemann sphere with four punctures, where the punctures are identified by orthogonal reflection symmetries. Projecting this sphere into the plane gives the Schottky domain drawn in figure \ref{X2domain}.
Reflections about the $\hat x$ and $\hat y$ axes identify each pair $C_i$, $C_i'$, and inversion through the unit circle leaves the domain unchanged. Additionally, we can identify the cycles $\alpha_1$ and $\beta_1$ in this domain as the fixed point sets under the relevant symmetries. \begin{figure}[h!] \centering \subfloat[]{\includegraphics[width=0.65\textwidth]{X2cycles.pdf} \label{X2cycles} \put(-310,80){\makebox(0,0){$\alpha_1$}} \put(-225,50){\makebox(0,0){$\beta_1$}} \put(-150,60){\makebox(0,0){$\alpha_1-\alpha_2$}} \put(-150,108){\makebox(0,0){$\beta_1+\beta_2$}} }\\ \subfloat[]{\includegraphics[width=0.6\textwidth]{X2domain.pdf} \label{X2domain} \put(-148,210){\makebox(0,0){$\alpha_1$}} \put(-188,188){\makebox(0,0){$\beta_1$}} \put(-95,115){\makebox(0,0){\small$\alpha_1-\alpha_2$}} \put(-245,55){\makebox(0,0){$\beta_1+\beta_2$}} } \caption{(a) Cycles labeled on the boundary Riemann surface. (b) The Schottky uniformization of this Riemann surface used to compute the toroidal geon phase. } \end{figure} We can characterize the bulk geometry of this handlebody by considering the geometry of a particular time slice. Consider the slice given by the fixed point set of reflection across the vertical line in fig.\ \ref{X2}. This symmetry fixes the $\hat x$ axis of the Schottky domain in fig.\ \ref{X2domain}, and the topology of this slice is determined by eq. \eqref{sliceG}. The slice has $2$ pairs of circles, and the boundary consists of a single segment, giving a topology of $g_\text{slice}=1$. Therefore, in this phase this bulk time slice has the geometry of a single-boundary wormhole with a genus-one surface behind the horizon. As in \cite{toruspaper} we refer to this phase as the toroidal geon. Note that we could have chosen a different time slice to characterize the bulk geometry. A potential source of confusion is that doing so does not change the handlebody phase, but rather simply the bulk slice we are using to characterize it. If we had chosen either of the two remaining slices fixed by $\mathbb Z_2$ symmetries we would have obtained a geometry with three boundaries: two of them connected by a wormhole, and the third a copy of the Poincar\'e disk. In each of these cases it is important to take the entire fixed point set of the reflection symmetry as the boundary. For example, if we considered the fixed point set of the reflection across the horizontal line in fig.\ \ref{X2}, the boundary slice consists not only of the $\hat y$ axis but also of the cycle $\alpha_1-\alpha_2$. This statement is clear in fig.\ \ref{X2cycles} but more subtle in fig.\ \ref{X2domain}. Having specified the phase, we can now compute its action and moduli using the \textit{handlebodies} package. We can construct a general phase of this type via the following code:
\begin{verbatim}
InitializeHandlebody[]
AddCircle[{{0, 0}, r, "Inv"}]
AddCircle[{{Sec[a], 0}, Tan[a], "RU"}]
AddSymmetry["x"]; AddSymmetry["y"];
SolveHandlebody[geon]
\end{verbatim}
A sample mesh used for this phase is shown in figure \ref{Xmesh}. \begin{figure}[h!] \centering \includegraphics[width=0.35\textwidth]{Xmesh.pdf} \caption{A mesh used to compute the toroidal geon phase. \label{Xmesh} } \end{figure} Evaluating this code for different values of $r$ and $a$ computes the action of this phase at various points in moduli space. To parameterize the moduli space, we can use $|\alpha_1|$ and $|\beta_1|$, which, after reducing the Schottky domain by the three $\mathbb Z_2$ symmetries, correspond to boundary segments $\partial D_i$.
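Since such parameter scans are the package's main use case, it may help to see one written out. The following is a minimal sketch (the grids in \verb!r! and \verb!a! are illustrative assumptions, not the values used to produce the figures here) that tabulates the documented \verb!geon["BoundaryLengths"]! and \verb!geon["Action"]! attributes over a grid of domain parameters:
\begin{verbatim}
(* A minimal sketch of a moduli-space scan for the toroidal geon
   phase; the grids in r and a below are illustrative choices only. *)
scan = Table[
   InitializeHandlebody[];
   AddCircle[{{0, 0}, r, "Inv"}];
   AddCircle[{{Sec[a], 0}, Tan[a], "RU"}];
   AddSymmetry["x"]; AddSymmetry["y"];
   SolveHandlebody[geon];
   (* record the domain parameters, the boundary segment lengths
      (from which |alpha1| and |beta1| are read off), and the action *)
   {r, a, geon["BoundaryLengths"], geon["Action"]},
   {r, 0.30, 0.60, 0.05}, {a, Pi/8, 3 Pi/8, Pi/16}];
\end{verbatim}
Each entry of \verb!scan! then supplies one point of the moduli-space plot discussed below.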
In figure \ref{Xaction} we show a contour plot of the action in this moduli space. \begin{figure}[h!] \centering \includegraphics[width=0.6\textwidth]{Xaction.png} \put(-20,0){\makebox(0,0){$|\alpha_1|$}} \put(-285,225){\makebox(0,0){$|\beta_1|$}} \caption{The action $I/c$ for the toroidal geon as a function of moduli. We see the action decreases as $|\alpha_1|$ and $|\beta_1|$ increase. \label{Xaction} } \end{figure} \section*{Acknowledgments} We thank Don Marolf, Henry Maxfield, and Benson Way for useful conversations. This work was supported in part by the U.S. National Science Foundation under grant number PHY15-04541 and also by the University of California. \bibliographystyle{jhep} \phantomsection \renewcommand*{\bibname}{References}
\section{Introduction} \label{sec:introduction} One of the most pressing problems in quantum chemistry today is the challenge of predicting the detailed effects of electron correlation in systems far from the mean-field regime such as molecules with stretched bonds, transition metal oxide catalysts, and $\pi$-conjugated molecules with low-lying doubly-excited states. While traditional quantum chemistry methods that build up from a Hartree-Fock reference function are very effective at describing weak electron correlation (i.e.\ correlation that does not greatly alter the mean-field picture), and recent advances in density matrix renormalization group (DMRG) \cite{Chan:2011:dmrg_in_chem} and full configuration interaction quantum Monte Carlo (FCI-QMC) \cite{Booth:2015:mcscf_fciqmc} have greatly expanded the reach of active space approaches to strong correlation (i.e.\ correlation, typically within the valence electrons, that causes qualitatively non-mean-field effects), it remains difficult to affordably and accurately describe both weak and strong correlation simultaneously. Recently, we introduced \cite{Neuscamman:2013:cjagp} the cluster Jastrow antisymmetric geminal power (CJAGP) ansatz as a candidate to address this challenge by attempting to combine the strengths of cluster operators \cite{BARTLETT:2007:cc_review}, Hilbert space Jastrow factors \cite{Neuscamman:2013:jagp}, and pairing wave functions, but we were limited in our ability to test this new ansatz by the difficulty of combining quasi-Newton optimization techniques with variational Monte Carlo (VMC). In this paper, we present a more robust and efficient optimization scheme for the CJAGP based on the VMC linear method (LM) \cite{Nightingale:2001:linear_method,UmrTouFilSorHen-PRL-07,TouUmr-JCP-07,TouUmr-JCP-08} and use it to test this new ansatz on two challenging triple-bond dissociations that were inaccessible to the old optimization method. The ability of the CJAGP to encode strong correlation arises from its Jastrow-modified geminal power reference \cite{Neuscamman:2015:subtractive_jagp}, and so in a sense the theory can be seen as being part of the chemistry community's larger effort to construct ansatzes based on electron pairs. Indeed, the ubiquity of electron pairing in molecular physics has spurred the investigation of numerous pair-based approaches to electron correlation, in which the fundamental wave function building block is a two-electron geminal rather than a one-electron orbital. Early examples include perfect pairing (PP) \cite{POPLE:1953:agp,Beran:2005:upp}, the ``bare'' (i.e.\ not Jastrow-modified) antisymmetric geminal power (AGP) \cite{Bratoz:1965:AGP,COLEMAN:1965:AGP,Scuseria:2002:hfb}, and products of strongly orthogonal geminals \cite{Kutzelnigg:1964:apsg,Kutzelnigg:1965:apsg,Surjan:2012:apsg}. 
More recently, there has been renewed interest in pairing wave functions based on the idea of relaxing the strong orthogonality constraint, as in generalizations of PP \cite{Head-Gordon:2000:non_orth_pp,Head-Gordon:2000:imperfect_pairing,Head-Gordon:2002:gvb_cc}, the antisymmetric product of 1-reference-orbital geminals (AP1roG) \cite{Bultinck:2013:nonorth_gems,Ayers:2014:nonvar_oo_ap1rog,VanNeck:2014:ap1rog_on_hubbard,VanNeck:2014:seniority_2_ap1rog_oo,Ayers:2014:geminal_accuracy,Ayers:2015:ap1rog_lcc}, and extensions of the singlet-type strongly orthogonal geminal (SSG) approach \cite{Rassolov:2002:ssg,Rassolov:2004:pert_ssg,Rassolov:2007:ssg,Rassolov:2007:spin_proj_ssg,Rassolov:2014:sspg,Szabados:2015:spin_proj_ssg}. While the CJAGP has strong connections to these pairing theories, it is important to recognize that Jastrow modification can drastically change the ansatz, and it is actually the combination of Jastrow factor and geminal power that lies at the heart of the ansatz's ability to capture strong correlation \cite{Neuscamman:2015:subtractive_jagp}. For this reason, the pairing theory that most closely relates to CJAGP is JAGP with real space Jastrows \cite{Sorella:2003:agp_sr,Sorella:2004:agp_sr,Sorella:2007:jagp_vdw,Sorella:2009:jagp_molec}, although we must emphasize that real space and Hilbert space Jastrow factors are quite different, and so many of the approximations involved are distinct. The ability of the CJAGP to encode weak correlation arises from the fact that under a unitary orbital rotation, the Hilbert space Jastrow factor becomes a simplified coupled cluster (CC) doubles operator \cite{Neuscamman:2013:cjagp} similar in structure to the tensor hypercontraction representation of doubles amplitudes \cite{Martinez:2012:thc_correlated}. The variational freedom of the cluster-Jastrow (CJ) operator is much reduced compared to the traditional CC doubles operator \cite{BARTLETT:2007:cc_review}, and as we will discuss below this simplicity may limit the CJAGP's ability to encode the finer details of dynamic correlation. Note that the CJ operator is \textit{not} a pairing operator, and that the electron pairing qualities of CJAGP come instead from its AGP reference. One must therefore be careful not to confuse the CJ operator with the CC operator representations of various pairing theories, such as PP \cite{Ukrainskii:1977,Cullen:1996:gvb_from_cc}, some forms of the generalized valence bond \cite{Head-Gordon:2001:gvb_cc}, AP1roG \cite{Ayers:2014:nonvar_oo_ap1rog,VanNeck:2014:ap1rog_on_hubbard,VanNeck:2014:seniority_2_ap1rog_oo}, and pair CC doubles \cite{Scuseria:2014:sen_0_pair_ccd,Henderson:2014:pair_ccd_attractive,Scuseria:2014:seniority_cc}. Indeed, these theories often use their pairing ansatzes' cluster operator formulation to facilitate a non-variational, projective optimization scheme as in traditional CC theory, whereas CJAGP is evaluated using \textit{variational} Monte Carlo. As such, it may be conceptually more useful to see CJAGP as an attempt to achieve a type of variational, multi-reference CC, inspired by the accuracy seen in studies of variational and quasi-variational CC \cite{Head-Gordon:2000:var_cc,Knowles:2010:vcc,Knowles:2012:quasi_var_cc,Knowles:2012:qvcc_benchmark,Knowles:2012:qvcc_nonlin_optical,Knowles:2012:qvcc_pert_triples} and the extraordinary accuracies achievable by multi-reference CC \cite{PaldusLi:1999:cc_review}. The remainder of this paper is organized as follows.
We begin by defining the CJAGP ansatz (Section \ref{sec:basics}) and reviewing the typical formulation of the LM (Section \ref{sec:tlm}). We then show how the cost-scaling for applying the LM to the CJAGP may be reduced (Section \ref{sec:lsmb}), how the strong zero variance principle is maintained (Section \ref{sec:szv}), and how one can avoid constructing the LM matrices when desirable (Section \ref{sec:amb}). After presenting computational details (Section \ref{sec:comp_det}), we then present data on the improved optimization efficiency (Section \ref{sec:convergence}) as well as the accuracy of the method in the triple bond dissociations of N$_2$ (Section \ref{sec:n2}) and [ScO]$^+$ (Section \ref{sec:sco}), before concluding and offering remarks on possible future directions (Section \ref{sec:conclusions}). \section{Theory} \label{sec:theory} \subsection{Basics} \label{sec:basics} In this paper we seek to optimize the CJAGP ansatz, \begin{align} |\Psi\rangle = \exp(\hat{\mathcal{K}}) |\Phi\rangle, \label{eqn:cjagp} \end{align} in which the unitary orbital rotation operator $\exp(\hat{\mathcal{K}})$ is defined by the anti-Hermitian operator \begin{align} \hat{\mathcal{K}} = \sum_{p<q} K_{pq} ( a^+_p a_q - a^+_q a_p ) \label{eqn:k} \end{align} and \begin{align} |\Phi\rangle = \exp \left( \sum_{ij} J_{ij} \hat{n}_i \hat{n}_j \right) \left( \sum_{rs} F_{rs} a^+_r a^+_s \right)^{N/2} |0\rangle \label{eqn:jagp} \end{align} is the JAGP ansatz with pairing matrix $\bm{F}$ and Jastrow factor coefficients $\bm{J}$. In Eq.\ (\ref{eqn:jagp}), $N$ is the (even) number of electrons, $r$ and $s$ are restricted to $\alpha$ and $\beta$ spin-orbitals, respectively, and $i$ and $j$ range over all spin-orbitals. Note that unless otherwise stated, indices in this paper are assumed to range over all spin-orbitals. We will make use of the fermionic creation and destruction operators, $a^+_p$ and $a_p$, which create or destroy an electron in spin-orbital $p$ and which obey the usual anti-commutation rules. We also employ the number operators $\hat{n}_p = a^+_p a_p$. The development of improved optimization methods for the orbital rotation defined by $\hat{\mathcal{K}}$ is important because it is this rotation that allows the Jastrow factor to act as a limited CC doubles operator, \begin{align} e^{\hat{\mathcal{K}}} e^{\sum_{ij} J_{ij} \hat{n}_i \hat{n}_j} e^{-\hat{\mathcal{K}}} & = \exp \left( \sum_{ijkl} T_{ij}^{kl} a^+_k a_i a^+_l a_j \right), \label{eqn:cluster_op} \\ T_{ij}^{kl} & = \sum_{pq} U^*_{ip} U_{kp} J_{pq} U^*_{jq} U_{lq}, \label{eqn:cluster_amp} \end{align} where $\bm{U}$ results from exponentiating the antisymmetrization of the upper-triangular $\bm{K}$ \cite{Neuscamman:2013:cjagp}. Given the potentially highly multi-reference nature of the geminal power \cite{Neuscamman:2015:subtractive_jagp}, this raises the tantalizing question of whether the CJAGP can act as an effective surrogate for much more complex complete-active-space-based multireference CC ansatzes that have outstanding accuracy but steeply scaling computational costs. Although initial investigations into the CJAGP showed promise \cite{Neuscamman:2013:cjagp}, they were limited by the shortcomings of combining the quasi-Newton L-BFGS method with VMC. We will therefore turn our attention to creating a more effective optimization scheme in order to push CJAGP into larger and more interesting systems. 
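Before moving on, it is worth recording a simple consistency check on Eqs.\ (\ref{eqn:cluster_op}) and (\ref{eqn:cluster_amp}) (a one-line verification, not a new result): setting the orbital rotation to the identity, $\bm{U}=\bm{1}$, collapses the amplitudes to \begin{align} T_{ij}^{kl} = \sum_{pq} \delta_{ip} \delta_{kp} J_{pq} \delta_{jq} \delta_{lq} = \delta_{ik} \delta_{jl} J_{ij}, \notag \end{align} so that the cluster operator of Eq.\ (\ref{eqn:cluster_op}) reduces to the bare Hilbert space Jastrow factor $\exp(\sum_{ij} J_{ij} \hat{n}_i \hat{n}_j)$ of Eq.\ (\ref{eqn:jagp}), as it must when no orbital rotation is applied.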
\subsection{Traditional Linear Method} \label{sec:tlm} The LM \cite{Nightingale:2001:linear_method,UmrTouFilSorHen-PRL-07,TouUmr-JCP-07,TouUmr-JCP-08} optimization scheme works by solving the Schr\"{o}dinger equation (SE) in the subspace of Hilbert space spanned by the approximate wave function and its first derivatives with respect to its variables $\bm{\mu}$, which we write concisely as \begin{align} |\Psi^0\rangle \equiv |\Psi\rangle \qquad |\Psi^x\rangle \equiv \frac{\partial|\Psi\rangle}{\partial \mu_x} \quad x\in \{1,2,...,n_\mathrm{v}\}. \label{eqn:psi_var_der} \end{align} As these functions may not be orthogonal, the SE to be solved is a generalized eigenvalue problem, \begin{alignat}{2} \bm{H} \bm{c} &= E \bm{S} \bm{c} \label{eqn:lm_se} \\ H_{xy} &= \langle\Psi^x|\hat{H}|\Psi^y\rangle && \quad \forall \quad x,y\in\{0,1,2,...,n_\mathrm{v}\} \label{eqn:lm_h} \\ S_{xy} &= \langle\Psi^x|\Psi^y\rangle && \quad \forall \quad x,y\in\{0,1,2,...,n_\mathrm{v}\} \label{eqn:lm_s} \end{alignat} If the initial wave function is close to the energy minimum, then the ratios $c_x/c_0$ for $x>0$ can be expected to be small, as the optimal wave function in the LM subspace should be a small change from $|\Psi\rangle$ (this smallness can be ensured by penalizing the $x>0$ diagonal elements $H_{xx}$ \cite{TouUmr-JCP-07}). Having solved Eq.\ (\ref{eqn:lm_se}) for $\bm{c}$, we may then update our wave function by a reverse Taylor expansion, \begin{align} |\Psi(\bm{\mu})\rangle \rightarrow |\Psi(\bm{\mu} + \text{\reflectbox{$\bm{c}$}}/c_0)\rangle \approx |\Psi\rangle + \sum_{x=1}^{n_\mathrm{v}} \frac{c_x}{c_0}|\Psi^x\rangle , \label{eqn:reverse_taylor} \end{align} where \text{\reflectbox{$\bm{c}$}} is the length-$n_\mathrm{v}$ vector obtained by removing the first element ($c_0$) from $\bm{c}$. The key role of Monte Carlo is to evaluate the matrices $\bm{H}$ and $\bm{S}$, which is done by a resolution of the identity in terms of occupation number vectors $\bm{n}$ (in real space we would instead use an integral over positions) over which a stochastic sample is taken, \begin{align} A_{xy} &= \sum_{\bm{n}} \langle\Psi^x|\bm{n}\rangle \langle\bm{n}|\hat{A}|\Psi^y\rangle \notag \\ &= \sum_{\bm{n}} |\langle\bm{n}|\Psi\rangle|^2 \frac{\langle\Psi^x|\bm{n}\rangle}{\langle\Psi|\bm{n}\rangle} \frac{\langle\bm{n}|\hat{A}|\Psi^y\rangle}{\langle\bm{n}|\Psi\rangle} \notag \\ &\approx \sum_{\bm{n}\in\xi} \frac{\langle\Psi^x|\bm{n}\rangle}{\langle\Psi|\bm{n}\rangle} \frac{\langle\bm{n}|\hat{A}|\Psi^y\rangle}{\langle\bm{n}|\Psi\rangle} \label{eqn:trad_mc_mat} \end{align} For $\bm{H}$ we set $\hat{A}=\hat{H}$, while for $\bm{S}$ we set $\hat{A}$ to the identity operator. In this paper the sample of configurations $\xi$ will be drawn from the distribution $|\langle\bm{n}|\Psi\rangle|^2$, but any distribution $|Q(\bm{n})|^2$ can be used if the right-hand side of Eq.\ (\ref{eqn:trad_mc_mat}) is modified to $\sum_{\bm{n}\in\xi}\langle\Psi^x|\bm{n}\rangle\langle\bm{n}|\hat{A}|\Psi^y\rangle/|Q(\bm{n})|^2$. Note that the normalization constant for the sampled distribution may be ignored, as it will appear on either side of Eq.\ (\ref{eqn:lm_se}) and will thus not affect the solution $\bm{c}$. For CJAGP, we will retain the use of Eq.\ (\ref{eqn:trad_mc_mat}) for most but not all elements of $\bm{H}$ and $\bm{S}$, as shown in Figure \ref{fig:matrix_tiles}.
To see why we do not retain the traditional approach for all matrix elements, consider element $H_{xy}$ in which $\mu_y$ is the orbital rotation variable $K_{pq}$, in which case we must evaluate \begin{align} \langle\bm{n}|\hat{H}|\Psi^y\rangle = \frac{\partial \langle\bm{n}|\hat{H}|\Psi\rangle}{\partial K_{pq}} = \langle\bm{n}|\hat{H} ( a^{+}_p a_q - a^{+}_q a_p ) |\Psi\rangle, \label{eqn:hard_term} \end{align} in which the two-electron component of $\hat{H}$ combines with the $pq$-indexed excitations to create triple excitations acting on the configuration $\bm{n}$. While such triple-excitation terms may be evaluated using the same approach as for double excitations (as in the JAGP energy evaluation \cite{Neuscamman:2013:jagp}), the cost scaling for this approach is $N^6$, which is much higher than the $N^4$ scaling that can be achieved \cite{Neuscamman:2013:jagp} when $\mu_y$ corresponds to a Jastrow or AGP variable. (Note that to get the LM's overall cost scaling, one must add an additional factor of $N$ if the statistical uncertainty of extensive quantities is to be held constant due to the requisite increase in the sample length.) \subsection{Lower Scaling Matrix Builds} \label{sec:lsmb} For a general two-body operator of the form \begin{align} \hat{A} = A_0 + \sum_{pq} A^p_q a^+_p a_q + \sum_{pqrs} A^{pq}_{rs} a^+_p a^+_q a_s a_r \label{eqn:gen_2body_op} \end{align} and a wave function ansatz consisting of a JAGP augmented by an orbital rotation as in Eq.\ (\ref{eqn:cjagp}), the per-sample cost scaling to build the matrix $\bm{A}$ can be reduced to $N^5$ by working in the one-particle basis in which $\hat{\mathcal{K}}=0$ and by performing the Monte-Carlo-sampled resolution of the identity in a slightly different way. Note that an arbitrary rotation of the one-particle basis (after which $\hat{A}$ will have the same form but different coefficients) can be achieved by converting \begin{align} |\Psi\rangle \rightarrow e^{-\hat{\mathcal{L}}}|\Psi\rangle \qquad \hat{A} \rightarrow e^{-\hat{\mathcal{L}}}\hat{A}e^{\hat{\mathcal{L}}} \label{eqn:rotate_basis} \end{align} using an anti-Hermitian one-body operator $\hat{\mathcal{L}}$ that defines the rotation. At the end of each LM iteration, at which point $\hat{\mathcal{K}}$ may be nonzero due to the LM update of Eq.\ (\ref{eqn:reverse_taylor}), we may thus ``reset'' $\hat{\mathcal{K}}$ to 0 via a basis-rotation with $\hat{\mathcal{L}}=\hat{\mathcal{K}}$. The one- and two-electron coefficients needed to represent $\hat{A}$ in the new basis, i.e.\ $A^p_q$ and $A^{pq}_{rs}$ in Eq.\ (\ref{eqn:gen_2body_op}), can be evaluated at an $N^5$ cost as per a standard atomic-to-molecular-orbital conversion of the one- and two-electron integrals \cite{Helgaker_book}. As the basis rotation is required only once per LM iteration, rather than once per sample, its cost is negligible compared to the sampling effort involved in estimating the matrix $\bm{A}$. \begin{figure}[t] \centering \includegraphics[width=7.5cm,angle=0]{matrix_tiles} \caption{Equations used for evaluating different subsections of the LM matrices $\bm{H}$ and $\bm{S}$. 
} \label{fig:matrix_tiles} \end{figure} Working in the $\hat{\mathcal{K}}=0$ one-particle basis, we may express the difficult $\mu_y=K_{pq}$ matrix element as \begin{align} A_{xy} & = \langle\Psi^x|\hat{A}|\Psi^y\rangle \notag \\ & = \left[ \langle\Psi^x| \frac{\partial}{\partial K_{pq}} \left( \hat{A} e^{\hat{\mathcal{K}}} |\Phi\rangle \right) \right]_{\hat{\mathcal{K}}=0} \notag \\ & = \left[ \langle\Psi^x| \frac{\partial}{\partial K_{pq}} \left( e^{\hat{\mathcal{K}}} e^{-\hat{\mathcal{K}}} \hat{A} e^{\hat{\mathcal{K}}} |\Phi\rangle \right) \right]_{\hat{\mathcal{K}}=0} \notag \\ & = \langle\Psi^x| \hat{C} |\Phi\rangle + \langle\Psi^x| \hat{D} \hat{A} |\Phi\rangle \notag \\ & = \sum_{\bm{n}} \langle\Psi^x|\bm{n}\rangle \langle\bm{n}|\hat{C}|\Phi\rangle + \langle\Psi^x|\hat{D}|\bm{n}\rangle \langle\bm{n}|\hat{A}|\Phi\rangle \label{eqn:double_ri} \end{align} where we have defined \begin{align} \hat{C} & \equiv \left[ \frac{\partial (e^{-\hat{\mathcal{K}}} \hat{A} e^{\hat{\mathcal{K}}})}{\partial K_{pq}} \right]_{\hat{\mathcal{K}}=0} = \left[ \hat{A}, \hspace{1mm} a^+_p a_q - a^+_q a_p \right] \label{eqn:der_of_eKAeK_wrt_K} \\ \hat{D} & \equiv \left[ \frac{\partial e^{\hat{\mathcal{K}}}}{\partial K_{pq}} \right]_{\hat{\mathcal{K}}=0} = a^+_p a_q - a^+_q a_p \label{eqn:der_of_eK_wrt_K} \end{align} The rationale for these placements of the identity resolutions is that they isolate the difficult operators $\hat{A}$ and $\hat{C}$ such that no term involves more than a double excitation on $|\bm{n}\rangle$ (the uncontracted triple excitations in the commutator of $\hat{C}$ cancel each other as usual), thus avoiding the triple excitation in Eq.\ (\ref{eqn:hard_term}) that led to $N^6$ scaling. Having placed our identity resolutions, we may now evaluate them stochastically on a sample $\xi$ drawn from $|\langle\Phi|\bm{n}\rangle|^2$ in order to produce our Monte Carlo estimate of the matrix element: \begin{align} A_{xy} & \approx \sum_{\bm{n}\in\xi} \frac{\langle\Psi^x|\bm{n}\rangle}{\langle\Phi|\bm{n}\rangle} \frac{\langle\bm{n}|\hat{C}|\Phi\rangle}{\langle\bm{n}|\Phi\rangle} + \frac{\langle\Psi^x|\hat{D}|\bm{n}\rangle}{\langle\Phi|\bm{n}\rangle} \frac{\langle\bm{n}|\hat{A}|\Phi\rangle}{\langle\bm{n}|\Phi\rangle} \label{eqn:new_Axy} \end{align} It now remains to evaluate these matrix element estimates for the identity and Hamiltonian operators involved in the LM. For the overlap matrix $\bm{S}$, for which $\hat{A}$ is the identity, $\hat{C}$ vanishes and Eq.\ (\ref{eqn:new_Axy}) simplifies to \begin{align} S_{xy} & \approx \sum_{\bm{n}\in\xi} \frac{\langle\Psi^x|(a^+_p a_q - a^+_q a_p)|\bm{n}\rangle}{\langle\Phi|\bm{n}\rangle}. \label{eqn:new_Sxy} \end{align} As shown in Appendix \ref{sec:oroc}, the per-sample cost to evaluate these $\mu_y=K_{pq}$ matrix blocks (i.e. the Eq.\ (\ref{eqn:new_Axy}) blocks for $\bm{S}$ in Figure \ref{fig:matrix_tiles}) grows as only $N^4$. For the Hamiltonian matrix $\bm{H}$, for which $\hat{A}=\hat{H}$, things are not so simple, although it is possible to avoid an $N^6$ per-sample cost scaling. 
To begin, we may recognize that the right-hand part of Eq.\ (\ref{eqn:new_Axy}) becomes a simple modification of Eq.\ (\ref{eqn:new_Sxy}) in which each term is scaled by the JAGP local energy $\langle\bm{n}|\hat{H}|\Phi\rangle / \langle\bm{n}|\Phi\rangle$ (which can be evaluated at an $N^4$ per-sample cost \cite{Neuscamman:2013:jagp}), and so its contribution to $\bm{H}$ can be evaluated at an $N^4$ per-sample cost by a direct analogue of the approach for $\bm{S}$ given in Appendix \ref{sec:oroc}. In the left-hand part of Eq.\ (\ref{eqn:new_Axy}), consider first the derivative ratios \begin{align} \mathcal{D}_{\bm{n}}(\mu_x) \equiv \frac{\langle\Psi^x|\bm{n}\rangle} {\langle\Phi|\bm{n}\rangle}. \end{align} For $\mu_x$ either a pairing matrix element or a Jastrow coefficient, these ratios have been evaluated previously for the JAGP \cite{Neuscamman:2013:jagp}. When $\mu_x$ is an orbital rotation variable $K_{pq}$, the ratios are \begin{align} \mathcal{D}_{\bm{n}}(K_{pq}) = \frac{\langle\Phi|(a^+_p a_q - a^+_q a_p)|\bm{n}\rangle} {\langle\Phi|\bm{n}\rangle}, \end{align} which can be evaluated efficiently as shown in Appendix \ref{sec:oroc}, specifically in Eq.\ (\ref{eqn:pq_ratio}). The final term needed to construct $\bm{H}$, and the one responsible for the overall $N^5$ per-sample cost scaling of the construction, is the term in Eq.\ (\ref{eqn:new_Axy}) containing $\hat{C}$. We will worry only about the two-electron component of $\hat{H}$ (the reader may convince herself that the one-electron component is less expensive), for which we must evaluate \begin{align} \frac{1} {\langle\bm{n}|\Phi\rangle} \langle\bm{n}| \left[ \hspace{1mm} \sum_{ijkl} \hspace{1mm} g^{ij}_{kl} \hspace{1mm} a^+_i a^+_j a_l a_k \hspace{1mm}, \hspace{1mm} a^+_p a_q - a^+_q a_p \hspace{0.7mm} \right] |\Phi\rangle \label{eqn:two_elec_comm} \end{align} where $g^{ij}_{kl}$ are the usual two-electron integrals \cite{Helgaker_book}. Defining the double excitation ratios \begin{align} Q^{ij}_{kl} \equiv \frac{\langle\bm{n}| a^+_i a^+_j a_l a_k |\Phi\rangle} {\langle\bm{n}|\Phi\rangle}, \label{eqn:double_excite_ratios} \end{align} which are derivatives of the JAGP local energy with respect to $g^{ij}_{kl}$ (see Eq.\ (34) of Ref.\ \cite{Neuscamman:2013:jagp}) and can thus all be evaluated for the same $N^4$ cost-per-sample scaling as the local energy itself, one may expand Eq.\ (\ref{eqn:two_elec_comm}) as \begin{align} \sum_{ i j k } \Big( \hphantom{+} \hspace{1mm} & g^{ i j }_{ p k } Q^{ i j }_{ q k } + g^{ i j }_{ k p } Q^{ i j }_{ k q } - g^{ i q }_{ j k } Q^{ i p }_{ j k } - g^{ q i }_{ j k } Q^{ p i }_{ j k } \notag \\ + \hspace{1mm} & g^{ i j }_{ p k } Q^{ q k }_{ i j } + g^{ i j }_{ k p } Q^{ k q }_{ i j } - g^{ i q }_{ j k } Q^{ j k }_{ i p } - g^{ q i }_{ j k } Q^{ j k }_{ p i } \hspace{1.5mm} \Big). \label{eqn:expanded_commutator} \end{align} Each of these terms can clearly be evaluated for a per-sample cost scaling as $N^5$, giving the explicit construction of the CJAGP $\bm{H}$ matrix according to the scheme in Figure \ref{fig:matrix_tiles} an overall per-sample cost that scales as $N^5$. This is better than the $N^6$ per-sample cost resulting from a naive application of the traditional LM matrix build, but nonetheless a higher scaling than for JAGP.
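To make the counting behind this $N^5$ scaling explicit (the intermediate $M_{pq}$ below is our own illustrative notation and is not needed elsewhere), the first term of Eq.\ (\ref{eqn:expanded_commutator}) may be organized as \begin{align} M_{pq} \equiv \sum_{ijk} g^{ i j }_{ p k } \hspace{0.5mm} Q^{ i j }_{ q k }, \notag \end{align} which for each of the $N^2$ rotation-variable index pairs $(p,q)$ requires an $N^3$ contraction over $i$, $j$, and $k$, for an $N^5$ per-sample total; the remaining seven terms in Eq.\ (\ref{eqn:expanded_commutator}) are handled identically.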
\subsection{Strong Zero Variance} \label{sec:szv} In the traditional LM, the stochastic approximation to the generalized eigenvalue problem in Eq.\ (\ref{eqn:lm_se}) has the important property of satisfying what is known as the strong zero variance principle (SZVP), which says that the solution of the eigenproblem will have no statistical uncertainty if the exact wave function exists within the span of the current wave function and its first derivatives. In practice this means that as an accurate wave function is approached, statistical uncertainty in the LM is greatly reduced. This is a generalization of the standard VMC zero variance principle, in which the energy has no uncertainty if the wave function ansatz itself is exact. To see where the SZVP comes from, consider the following rearrangement of Eq.\ (\ref{eqn:lm_se}) in which the matrices have been approximated stochastically as in the traditional LM (i.e. via Eq.\ (\ref{eqn:trad_mc_mat})) \begin{align} 0 & = \sum_{y=0}^{n_\mathrm{v}} (H_{xy} - E \hspace{0.8mm} S_{xy}) c_y \label{eqn:lm_xy_pencil} \\ & \approx \sum_{y=0}^{n_\mathrm{v}} \sum_{\bm{n}\in\xi} \frac{\langle\Psi^x|\bm{n}\rangle} {\langle\Phi|\bm{n}\rangle} \frac{\langle\bm{n}| (\hat{H}-E) |\Psi^y\rangle} {\langle\bm{n}|\Phi\rangle} c_y \label{eqn:trad_szvp_middle} \\ & = \sum_{\bm{n}\in\xi} \frac{\langle\Psi^x|\bm{n}\rangle} {\langle\Phi|\bm{n}\rangle} \frac{\langle\bm{n}| (\hat{H}-E) \sum_{y=0}^{n_\mathrm{v}} |\Psi^y\rangle c_y} {\langle\bm{n}|\Phi\rangle} \label{eqn:trad_szvp} \end{align} If the exact wave function exists within the LM subspace, which is to say there is a vector $\bm{c}$ such that \begin{align} (\hat{H}-E) \sum_{y=0}^{n_\mathrm{v}} |\Psi^y\rangle c_y = 0, \label{eqn:exact_psi_in_fds} \end{align} then the terms in Eq.\ (\ref{eqn:trad_szvp}) vanish independently for every $\bm{n}$, and so the exact energy $E$ and the vector $\bm{c}$ giving the exact wave function will be found during the diagonalization of Eq.\ (\ref{eqn:lm_se}) regardless of which random sample $\xi$ is taken. In other words, they will be found with zero variance. Although the present approach does not satisfy the SZVP exactly, its deviation from the SZVP vanishes quadratically as the exact wave function is approached. 
To see why, replace $\bm{H}$ and $\bm{S}$ with Figure \ref{fig:matrix_tiles}'s stochastic approximations and (without loss of generality) set $c_0$ = 1, at which point the deviation of Eq.\ (\ref{eqn:lm_xy_pencil}) from zero (i.e.\ the deviation from the SZVP) becomes \begin{align} \eta_x & = \sum_{\bm{n}\in\xi} \frac{1} {|\langle\Phi|\bm{n}\rangle|^2} \Bigg[ \langle\Psi^x|\bm{n}\rangle \langle\bm{n}|(\hat{H}-E)|\Psi\rangle \hspace{1mm} + \notag \\ & \quad \quad \sum_{y=1}^{n_\mathrm{v}} \langle\Psi^x| \frac{\partial}{\partial \mu_y} \Bigg( e^{\hat{\mathcal{K}}}|\bm{n}\rangle\langle\bm{n}|e^{-\hat{\mathcal{K}}} (\hat{H}-E) |\Psi\rangle \Bigg) c_y \Bigg]_{\hat{\mathcal{K}}=0} \notag \\ & = \sum_{\bm{n}\in\xi} \frac{1} {|\langle\Phi|\bm{n}\rangle|^2} \Bigg[ \langle\Psi^x|\bm{n}\rangle \langle\bm{n}| (\hat{H}-E) \sum_{y=0}^{n_\mathrm{v}} |\Psi^y\rangle c_y \hspace{1mm} + \notag \\ & \quad \quad \sum_{y=1}^{n_\mathrm{v}} \langle\Psi^x| \frac{\partial}{\partial \mu_y} \Bigg( e^{\hat{\mathcal{K}}}|\bm{n}\rangle\langle\bm{n}|e^{-\hat{\mathcal{K}}} \Bigg) (\hat{H}-E) |\Psi\rangle c_y \Bigg]_{\hat{\mathcal{K}}=0} \notag \end{align} If we again assume that the (un-normalized) exact wave function $|\Psi_0\rangle = |\Psi\rangle + \sum_{z=1}^{n_\mathrm{v}} |\Psi^z\rangle c_z$ exists in the LM subspace, which implies that \begin{align} (\hat{H}-E) |\Psi\rangle & = - \sum_{z=1}^{n_\mathrm{v}} (\hat{H}-E) |\Psi^z\rangle c_z, \label{eqn:replace_exact_psi} \end{align} then the deviation from the SZVP simplifies to \begin{align} \eta_x & = - \sum_{y=1}^{n_\mathrm{v}} \sum_{z=1}^{n_\mathrm{v}} c_y c_z \sum_{\bm{n}\in\xi} \frac{Q^{(\bm{n})}_{xyz}} {|\langle\Phi|\bm{n}\rangle|^2} \label{eqn:quad_szvp_dev} \\ Q^{(\bm{n})}_{xyz} & \equiv \Bigg[ \langle\Psi^x| \frac{\partial}{\partial \mu_y} \Bigg( e^{\hat{\mathcal{K}}}|\bm{n}\rangle\langle\bm{n}|e^{-\hat{\mathcal{K}}} \Bigg) (\hat{H}-E) |\Psi^z\rangle \Bigg]_{\hat{\mathcal{K}}=0} \notag \end{align} Thus the deviation from the SZVP vanishes quadratically as \text{$|$\reflectbox{$\bm{c}$}$|^2$} with the difference \text{\reflectbox{$\bm{c}$}} between the current and exact wave functions. This is in stark contrast to the previous quasi-Newton optimization strategy \cite{Neuscamman:2013:cjagp} which lacked any kind of zero variance principle for the optimization updates, a fact that likely explains our previous observation that greatly increased sample lengths compared to the traditional LM were needed to stabilize the quasi-Newton approach. Note that while it is possible to approximate CJAGP's $\bm{S}$ matrix at an $N^4$ per-sample cost using the traditional LM's stochastic approach of Eq.\ (\ref{eqn:trad_mc_mat}), doing so would violate even this quadratic approach to the SZVP when used together with Figure \ref{fig:matrix_tiles}'s approximation for $\bm{H}$. Indeed, we have observed that drastically larger sample sizes are required when one mixes the traditional method for approximating $\bm{S}$ with our new method for approximating $\bm{H}$, and so we also approximate $\bm{S}$ via Figure \ref{fig:matrix_tiles} for the sake of reducing statistical uncertainty, even though this approximation is more complicated. 
\subsection{Avoiding Matrix Builds} \label{sec:amb} Although the LM typically works by first building the matrices $\bm{H}$ and $\bm{S}$ and then solving the generalized eigenvalue problem of Eq.\ (\ref{eqn:lm_se}), Krylov subspace (KS) methods \cite{eigenvalue_templates_2000} such as the Davidson \cite{Davidson:1975:davidson} or Arnoldi \cite{Arnoldi:1951:arnoldi} methods can be employed to eschew the matrix builds altogether. Such a KS approach has been used previously \cite{Neuscamman:2012:fast_sr} in the context of stochastic reconfiguration \cite{Sorella:2001:SR,Sorella:2004:agp_sr}, and here we give some details for how such approaches can be generalized to both the traditional LM and the newly proposed variant for CJAGP. Instead of requiring the matrices to be built, KS methods typically only require the ability to operate the matrix on an arbitrary vector, which in the context of either the LM or stochastic reconfiguration can be advantageous when the number of wave function variables $n_\mathrm{v}$ becomes large. In the traditional LM, a KS method will require evaluation of matrix-vector products $\bm{A} \bm{c}$ with the stochastic matrix approximation given in Eq.\ (\ref{eqn:trad_mc_mat}): \begin{align} \sum_y A_{xy} c_y &\approx \sum_{\bm{n}\in\xi} \frac{\langle\Psi^x|\bm{n}\rangle}{\langle\Psi|\bm{n}\rangle} \sum_{y} \frac{\langle\bm{n}|\hat{A}|\Psi^y\rangle}{\langle\bm{n}|\Psi\rangle} c_y \label{eqn:trad_lm_mv} \end{align} For wave functions like the JAGP \cite{Neuscamman:2013:jagp} for which the derivative vectors $\langle\Psi^x|\bm{n}\rangle / \langle\Psi|\bm{n}\rangle$ and $\langle\bm{n}|\hat{A}|\Psi^y\rangle / \langle\bm{n}|\Psi\rangle$ can be evaluated efficiently, each sample's contribution to the overall matrix-vector product can be computed via a simple dot product. If, for example, storing or communicating the matrix would be prohibitive, this approach offers a lower-memory, lower-communication alternative. For the approach proposed here for the CJAGP ansatz, the matrix-vector product splits into two parts. For the portion of the sum over $y$ covering the non-differentiated term, the pairing matrix derivatives, and the Jastrow derivatives, the evaluation is the same as in Eq.\ (\ref{eqn:trad_lm_mv}). For the portion of the sum in which $y$ runs over orbital rotation variables $K_{pq}$, we use Eq.\ (\ref{eqn:new_Axy}) to write \begin{align} & \sum_{y\in\mathrm{orb.\ rot.}} A_{xy} c_y \notag \\ & \hspace{6mm} \approx \sum_{\bm{n}\in\xi} \frac{\langle\Psi^x|\bm{n}\rangle}{\langle\Phi|\bm{n}\rangle} \frac{\langle\bm{n}| \breve{A} |\Phi\rangle}{\langle\bm{n}|\Phi\rangle} + \frac{\langle\Psi^x| \breve{B} |\bm{n}\rangle}{\langle\Phi|\bm{n}\rangle} \frac{\langle\bm{n}| \hat{A} |\Phi\rangle}{\langle\bm{n}|\Phi\rangle} \label{eqn:new_Axy_cy} \\ & \hspace{3mm} \breve{B} \equiv \sum_{p<q} c_{pq} ( a^+_p a_q - a^+_q a_p ) \label{eqn:breve_B} \\ & \hspace{3mm} \breve{A} \equiv \left[ \hat{A}, \hspace{1mm} \breve{B} \right] \label{eqn:breve_A} \end{align} In the definition of $\breve{B}$ we have relabeled the sum on $y$ over orbital rotations by the orbital indices $p$ and $q$ that label the individual orbital rotation variables. Crucially, because $\breve{B}$ is a one-electron operator, $\breve{A}$ is a two-electron operator with exactly the same form as $\hat{A}$.
Moreover, the coefficients for $\breve{A}$ are independent of $\bm{n}$ and can thus be precomputed at an $N^5$ cost \textit{before} the sample is taken, so that the actual per-sample cost of evaluating the first term in Eq.\ (\ref{eqn:new_Axy_cy}) scales as only $N^4$. The second term in Eq.\ (\ref{eqn:new_Axy_cy}) also has a per-sample cost scaling as $N^4$, as it amounts to a weighted sum over the matrix elements of Eq.\ (\ref{eqn:new_Sxy}) scaled either by one (if $\bm{A} = \bm{S}$) or by the JAGP local energy (if $\bm{A} = \bm{H}$). Thus we see that in contrast to building the CJAGP LM matrices, which due to Eq.\ (\ref{eqn:expanded_commutator}) has a per-sample cost scaling of $N^5$, operating these matrices on an arbitrary vector without building them has a per-sample cost scaling of only $N^4$. In large systems this reduced scaling could be an advantage, depending on how many matrix-vector products are required for the chosen Krylov subspace method. Systems studied in this work are too small for this reduced scaling to be beneficial, but we present the option of avoiding matrix builds anyway, as it should be useful in future work. \section{Results} \label{sec:results} \subsection{Computational Details} \label{sec:comp_det} CJAGP results were obtained using our own software for VMC in Hilbert space, with one- and two-electron integrals for the Hamiltonian taken from Psi3 \cite{Psi3}. Complete active space self-consistent field (CASSCF) \cite{Werner:1985_1:mcscf,Werner:1985_2:mcscf}, full configuration interaction (FCI) \cite{Handy:1984:fci,Handy:1989:fci}, complete active space second-order perturbation theory (CASPT2) \cite{Werner:1996:caspt2}, and size-consistency-corrected multi-reference configuration interaction (MRCI+Q) \cite{Knowles:1988:mrci,Werner:1988:mrci} results were obtained with Molpro \cite{MOLPRO_brief}. Except for the (6e,12o) CASSCF result displayed in Figure \ref{fig:sco_631g_absolute_energy}, all other cases of CASSCF, CASPT2, and MRCI+Q employed a minimal (6e,6o) active space containing the three pairs of bonding/antibonding orbitals for the triple bonds of N$_2$ and [ScO]$^+$. Results for restricted and unrestricted Hartree-Fock (RHF and UHF) \cite{Szabo-Ostland} and coupled cluster with singles, doubles, and perturbative triples (CCSD(T) and UCCSD(T)) \cite{BARTLETT:2007:cc_review} were obtained with QChem \cite{QChem:2006,QChem:2013}. A 6-31G \cite{POPLE:1972:6-31g_basis} basis was used in all cases, and post-CASSCF methods (as well as CJAGP) froze the N 1s, O 1s, and Sc 1s, 2s, and 2p orbitals. In the optimization of our CJAGP wave function, some constraints were placed on the wave function to improve the ease of convergence. For the Jastrow factor, the coefficients were constrained to be symmetric between $\alpha$ and $\beta$ electrons, so $J_{i_\alpha j_\alpha}=J_{i_\beta j_\beta}$ and $J_{i_\alpha j_\beta}=J_{i_\beta j_\alpha}$. For the pairing matrix, we constrained $\bm{F}$ to be symmetric, resulting in an AGP reference with singlet spin. Finally, we added further constraints to ensure Jastrow coefficients and pairing matrix elements that should be equal by molecular symmetry were indeed equal. For example, in N$_2$ and [ScO]$^+$ the Jastrow coefficients for equivalent s-p$_{\mathrm{x}}$ and s-p$_{\mathrm{y}}$ couplings were constrained to be equal.
\begin{figure}[t] \centering \includegraphics[width=7.5cm,angle=270]{h2o_631g_convergence} \caption{Convergence of the last few mE$_{\mathrm{h}}$ of correlation energy for the LM and L-BFGS optimization approaches for H$_2$O in a 6-31G basis with $r_{\mathrm{OH}}=1.0\hspace{0.4mm}\text{\AA}$ and $\angle HOH=109.57^{\circ}$, plotted against the number of LM matrix builds completed after passing a correlation energy of -133 mE$_{\mathrm{h}}$. The converged CJAGP correlation energy is -137.3 mE$_{\mathrm{h}}$. See Section \ref{sec:convergence} for further details. } \label{fig:h2o_631g_convergence} \end{figure} Note that the optimized CJAGP energy has two possible sources of statistical uncertainty. First, there is the usual uncertainty when estimating the final wave function's energy using VMC. Second, statistical uncertainty in the LM update direction $\text{\reflectbox{$\bm{c}$}}$ prevents the optimal variable values from being found precisely. In practice we observe the latter effect to be dominant, making the estimation of the overall method's statistical uncertainty somewhat difficult, as we do not wish to run a large number of separate optimizations at each molecular geometry to collect statistics. Instead, we have fit CJAGP's energy error over the dissociation curves to a smooth third-order polynomial (e.g.\ see Figure \ref{fig:n2_631g_shifted_error}) and then estimated the statistical uncertainty of the energies based on the deviations of the actual points from this smooth curve. Assuming these deviations are normally distributed, we find 95\% confidence intervals of $\pm 0.13$ kcal/mol in N$_2$ and $\pm 0.3$ kcal/mol in [ScO]$^+$. \subsection{Convergence} \label{sec:convergence} \begin{figure}[t] \centering \includegraphics[width=7.5cm,angle=270]{n2_631g_shifted_energy} \caption{Potential energy curves for N$_2$ dissociation in a 6-31G basis, with each curve shifted so that the zero of energy occurs at 1.15 \AA. See Section \ref{sec:n2} for further details. } \label{fig:n2_631g_shifted_energy} \end{figure} Figure \ref{fig:h2o_631g_convergence} shows, in H$_2$O near equilibrium, the convergence of the present LM approach compared to the previous \cite{Neuscamman:2013:cjagp} quasi-Newton L-BFGS optimization scheme. The idea of the L-BFGS approach was to optimize the orbital rotation $\hat{\mathcal{K}}$ on a surface $E(\bm{K})$ on which the other variables took on their optimal values (i.e.\ the Jastrow and pairing variables for a given $\bm{K}$ were taken as those that minimized the energy for that $\bm{K}$). In practice this surface was achieved by using the LM to reoptimize the Jastrow and pairing variables at each L-BFGS step, and so we are able to compare the number of LM matrix builds required in that scheme to the number required by the present full LM approach. While this comparison is somewhat imperfect, as the previous LM matrix builds were less expensive than the present ones that also include the orbital rotation variables, Figure \ref{fig:h2o_631g_convergence} nonetheless displays the stark contrast in optimization efficiency between the two approaches. Note that this example was carried out under what might be called ``exact sampling'' (meaning that each configuration $\bm{n}$ was visited exactly once and its contribution to averages scaled by the wave function weight $|\langle\bm{n}|\Psi\rangle|^2$) so that statistical uncertainty was not present.
In addition to being useful for debugging, such sampling allows us to test whether, independent of stochastic issues, the present LM outperforms L-BFGS, and indeed it clearly does. Note that the comparison becomes even more favorable for the present LM approach when a stochastic Markov-chain-based sample is used, as the LM obeys a zero variance principle while the L-BFGS approach does not. In practice, we therefore see that not only is the LM a superior optimization method, but that its inherently lower statistical uncertainty allows it to operate effectively with much smaller sample sizes than are required to stabilize the previous L-BFGS approach. One way to emphasize this advantage is to point out that in our previous study \cite{Neuscamman:2013:cjagp}, the stochastic sample sizes needed to stabilize the L-BFGS method were in all cases larger than the Hilbert spaces themselves (note that this is not uncommon for stochastic approaches in small systems), whereas the present study's sample lengths of 1.6$\times$10$^7$ for N$_2$ and 2.56$\times$10$^7$ for [ScO]$^+$ were in both cases smaller than the Hilbert spaces in question. \begin{table*}[t] \centering \caption{Energies for the N$_2$ stretch in a 6-31G basis. FCI is reported in E$_{\mathrm{h}}$, with other methods reported as the difference from FCI in mE$_{\mathrm{h}}$. The last row gives the non-parallelity errors in mE$_{\mathrm{h}}$. See Section \ref{sec:n2} for further details. } \begin{tabular*}{0.90\textwidth}{@{\extracolsep{\fill}} r r r r r r r r r } \hline \hline \multicolumn{1}{c}{$R$ (\AA)} & \multicolumn{1}{c}{FCI} & \multicolumn{1}{c}{RHF} & \multicolumn{1}{c}{UHF} & \multicolumn{1}{c}{CCSD(T)} & \multicolumn{1}{c}{UCCSD(T)} & \multicolumn{1}{c}{CASSCF} & \multicolumn{1}{c}{CASPT2} & \multicolumn{1}{c}{CJAGP} \\ \hline 1.00 & -109.0467 & 211.4 & 211.4 & -0.8 & -0.8 & 85.5 & 13.7 & 6.8 \\ 1.05 & -109.0857 & 223.3 & 223.3 & -0.5 & -0.5 & 86.5 & 14.0 & 7.4 \\ 1.10 & -109.1034 & 235.7 & 235.7 & -0.2 & -0.2 & 87.4 & 14.3 & 7.5 \\ 1.15 & -109.1059 & 248.8 & 247.5 & 0.2 & 0.9 & 88.3 & 14.5 & 7.4 \\ 1.20 & -109.0981 & 262.4 & 252.9 & 0.6 & 2.3 & 89.2 & 14.7 & 7.6 \\ 1.25 & -109.0835 & 276.6 & 252.9 & 1.1 & 3.5 & 90.0 & 14.8 & 7.6 \\ 1.30 & -109.0648 & 291.3 & 249.1 & 1.5 & 4.6 & 90.8 & 14.8 & 7.8 \\ 1.35 & -109.0438 & 306.6 & 242.7 & 2.1 & 5.8 & 91.6 & 14.8 & 7.9 \\ 1.40 & -109.0221 & 322.4 & 234.6 & 2.6 & 7.2 & 92.3 & 14.6 & 8.1 \\ 1.45 & -109.0005 & 338.8 & 225.3 & 3.0 & 8.7 & 93.0 & 14.4 & 8.5 \\ 1.50 & -108.9797 & 352.9 & 215.1 & -4.2 & 10.3 & 93.5 & 14.1 & 8.6 \\ 1.55 & -108.9602 & 363.1 & 203.9 & -16.3 & 11.9 & 93.9 & 13.6 & 8.5 \\ 1.60 & -108.9423 & 370.5 & 191.9 & -33.6 & 13.2 & 94.0 & 13.0 & 8.3 \\ 1.65 & -108.9260 & 376.2 & 179.1 & -56.4 & 13.9 & 94.0 & 12.4 & 8.8 \\ 1.70 & -108.9116 & 380.9 & 166.2 & -84.9 & 14.0 & 93.6 & 11.6 & 8.9 \\ 1.75 & -108.8989 & 385.2 & 153.5 & & 13.5 & 92.9 & 10.9 & 8.8 \\ 1.80 & -108.8880 & 389.6 & 141.5 & & 12.4 & 91.9 & 10.2 & 8.8 \\ 1.85 & -108.8788 & 394.3 & 130.6 & & 11.1 & 90.7 & 9.5 & 9.0 \\ 1.90 & -108.8711 & 399.5 & 120.9 & & 9.8 & 89.3 & 9.0 & 9.1 \\ 1.95 & -108.8648 & 405.2 & 112.5 & & 8.5 & 87.7 & 8.6 & 8.8 \\ 2.00 & -108.8597 & 411.4 & 105.2 & & 7.3 & 86.2 & 8.3 & 8.9 \\ 2.05 & -108.8556 & 418.0 & 99.1 & & 6.3 & 84.8 & 8.1 & 8.9 \\ 2.10 & -108.8523 & 424.9 & 93.9 & & 5.4 & 83.4 & 8.0 & 8.8 \\ 2.15 & -108.8496 & 432.1 & 89.6 & & 4.6 & 82.1 & 8.0 & 8.3 \\ 2.20 & -108.8476 & 439.4 & 86.1 & & 3.9 & 81.0 & 7.9 & 8.2 \\ 2.25 & -108.8459 & 446.7 & 83.1 & & 3.2 & 80.0 & 7.9 & 8.3 \\ 2.30 
& -108.8446 & 454.1 & 80.6 & & 2.6 & 79.2 & 8.0 & 7.9 \\ 2.35 & -108.8435 & 461.5 & 78.6 & & 2.1 & 78.5 & 8.0 & 7.4 \\ 2.40 & -108.8427 & 468.7 & 76.9 & & 1.6 & 77.8 & 8.0 & 7.3 \\ 2.45 & -108.8420 & 475.8 & 75.5 & & 1.2 & 77.3 & 8.1 & 7.1 \\ 2.50 & -108.8414 & 482.7 & 74.3 & & 0.8 & 76.9 & 8.1 & 7.2 \\ 2.55 & -108.8410 & 489.5 & 73.4 & & 0.5 & 76.5 & 8.1 & 6.8 \\ 2.60 & -108.8406 & 496.1 & 72.5 & & 0.2 & 76.1 & 8.2 & 6.4 \\ \hline NPE & N/A & 284.6 & 180.3 & 87.9 & 14.8 & 17.9 & 6.9 & 2.7 \\ \hline \hline \end{tabular*} \label{tab:n2_energies} \end{table*} \subsection{N$_2$} \label{sec:n2} \begin{figure}[t] \centering \includegraphics[width=7.5cm,angle=270]{n2_631g_shifted_error} \caption{Energy deviations from FCI during N$_2$ dissociation in a 6-31G basis, with each curve shifted by a constant so that it crosses zero at a bond distance of 1.15 \AA. For CJAGP, the line is a cubic polynomial fit to the points to give a sense of statistical uncertainty. See Section \ref{sec:n2} for further details. } \label{fig:n2_631g_shifted_error} \end{figure} The dissociation of the nitrogen dimer's triple bond has long been used as a benchmark for multi-reference methods in quantum chemistry. As was seen previously \cite{Neuscamman:2013:cjagp} in H$_2$O and HF, the limited CC-like nature of the CJAGP's orbital-rotated Jastrow factor appears to capture a large fraction of the dynamic correlation energy while maintaining the ability to help capture static correlation in conjunction with the geminal power \cite{Neuscamman:2015:subtractive_jagp}. As we see in the N$_2$ results (Figures \ref{fig:n2_631g_shifted_energy} and \ref{fig:n2_631g_shifted_error} and Table \ref{tab:n2_energies}) these features allow CJAGP to vastly outperform single-reference methods like CCSD(T) and UCCSD(T). Here the catastrophic failure of CCSD(T) may be attributed both to the poor quality of its RHF reference (whose instabilities towards spatial symmetry breaking are responsible for the kink in its potential curve) and to the tendency of spurious interactions between its singlet and triplet amplitude channels to overcorrelate in the strongly correlated regime \cite{Scuseria:2015:ccd0}. Note that the issue of spatial symmetry breaking in the RHF might be avoided by enforcing spatial symmetry throughout the dissociation, but for N$_2$ we have chosen to present the CCSD(T) results for the minimum energy RHF reference as found via stability analyses \cite{Pople:1977:hf_stability}. In contrast, CJAGP avoids these issues thanks to its more flexible reference function and the variational nature of its evaluation, which guarantees that spurious couplings between its cluster amplitudes cannot lead to an overcorrelation catastrophe. \begin{table*}[t] \centering \caption{Energies for the [ScO]$^+$ stretch in a 6-31G basis. MRCI+Q is reported in E$_{\mathrm{h}}$, with other methods reported as the difference from MRCI+Q in mE$_{\mathrm{h}}$. The last row gives the non-parallelity errors in mE$_{\mathrm{h}}$. See Section \ref{sec:sco} for further details. 
} \begin{tabular*}{0.90\textwidth}{@{\extracolsep{\fill}} r r r r r r r r r } \hline \hline \multicolumn{1}{c}{$R$ (\AA)} & \multicolumn{1}{c}{MRCI+Q} & \multicolumn{1}{c}{RHF} & \multicolumn{1}{c}{UHF} & \multicolumn{1}{c}{CCSD(T)} & \multicolumn{1}{c}{UCCSD(T)} & \multicolumn{1}{c}{CASSCF (6e,6o)} & \multicolumn{1}{c}{CASPT2} & \multicolumn{1}{c}{CJAGP} \\ \hline 1.5 & -834.6354 & 345.5 & 345.5 & 0.8 & 0.8 & 202.9 & 25.5 & 69.4 \\ 1.6 & -834.6631 & 356.3 & 356.3 & 0.6 & 0.6 & 206.0 & 26.0 & 69.6 \\ 1.7 & -834.6688 & 367.5 & 367.5 & 0.4 & 0.4 & 209.3 & 26.6 & 69.0 \\ 1.8 & -834.6607 & 379.6 & 357.9 & 0.1 & 8.9 & 212.8 & 27.1 & 68.3 \\ 1.9 & -834.6445 & 392.9 & 340.5 & -0.4 & 7.6 & 216.4 & 27.4 & 68.7 \\ 2.0 & -834.6243 & 407.0 & 324.2 & -0.9 & 5.3 & 219.9 & 27.5 & 69.5 \\ 2.1 & -834.6024 & 420.6 & 310.1 & 4.7 & 5.3 & 223.1 & 27.1 & 68.5 \\ 2.2 & -834.5806 & 427.4 & 298.7 & 14.4 & 8.7 & 225.7 & 26.1 & 69.3 \\ 2.3 & -834.5600 & 429.1 & 289.8 & 14.2 & 12.5 & 227.3 & 24.2 & 70.5 \\ 2.4 & -834.5413 & 427.9 & 281.8 & 7.6 & 15.2 & 228.3 & 21.6 & 69.7 \\ 2.5 & -834.5248 & 425.2 & 266.5 & -6.3 & 14.2 & 228.7 & 17.0 & 70.4 \\ 2.6 & -834.5106 & 421.6 & 252.7 & -34.4 & 12.0 & 228.2 & & 71.6 \\ \hline NPE & N/A & 83.5 & 114.7 & 48.8 & 14.9 & 25.7 & 10.5 & 3.3 \\ \hline \hline \end{tabular*} \label{tab:sco_energies} \end{table*} More significantly, CJAGP outperforms CASPT2, one of the most affordable and most commonly used multi-reference methods in quantum chemistry. Both its absolute and relative energies show improvements compared to those of CASPT2, with the relative energies being particularly accurate: the non-parallelity error (NPE, the difference between the highest and lowest deviations) relative to FCI is less than 2 kcal/mol and less than half that of CASPT2. These improvements are especially significant when one considers that CASPT2's cost scales exponentially due to its complete active space reference, while CJAGP's cost scales only polynomially. In light of the Jastrow factor's CC-like form and the geminal power's multi-reference nature, it is interesting to compare CJAGP to the performance one might expect from the ideal of a variational singles-and-doubles CC method based on a complete active space self consistent field (CASSCF) wave function reference. As such a theory should outperform even MRCI+Q, one would expect absolute accuracies to be within 1 or 2 mE$_{\mathrm{h}}$ of FCI (see e.g.\ \cite{Neuscamman:2009:qct}). Unsurprisingly, given that both its cluster operator and its AGP reference function are more constrained than this ideal, CJAGP does not achieve such accuracies in the absolute energy. Its relative energies are nonetheless quite accurate, suggesting that the missing details that would account for the last few percent of the correlation energy are being left out consistently at all geometries. If supplied with a trial function as accurate as CJAGP, diffusion Monte Carlo \cite{FouMitNeeRaj-RMP-01} would be well placed to capture these final details. One very interesting question going forward is thus whether a real-space Jastrow factor can be devised to replicate the CC qualities of the orbital-rotated Hilbert-space Jastrow. \subsection{ScO Cation} \label{sec:sco} \begin{figure}[b] \centering \includegraphics[width=7.5cm,angle=270]{sco_631g_absolute_energy} \caption{Total energies during the dissociation of [ScO]$^+$ in a 6-31G basis. See Section \ref{sec:sco} for further details. 
} \label{fig:sco_631g_absolute_energy} \end{figure} Due to the importance of transition metals in catalysis and materials science, and the tendency of metal-oxygen bonds to exhibit strong electron correlations, theoretical approaches that can deal successfully with such correlations are a high priority. As an initial foray into this regime, we have tested CJAGP on the triple-bond dissociation of [ScO]$^+$. At first glance, this cation appears quite similar to N$_2$ in that it also contains one $\sigma$ and two $\pi$ bonds. In practice, however, its dissociation is even more fraught, with UCCSD(T) becoming qualitatively unreliable and minimal-active-space CASPT2 exhibiting intruder state problems. As seen in Figure \ref{fig:sco_631g_absolute_energy}, CCSD(T) exhibits its typical failure during multiple bond stretching. UCCSD(T) fares little better, being beset both by a Coulson-Fischer point cusp near equilibrium, where RHF and UHF separate, and by multiple low-lying UHF determinants as the bond is stretched. If one uses stability analyses to ensure that UCCSD(T) is always based on the lowest energy UHF solution, the result is a UCCSD(T) curve (UCCSD(T) (stable)) with multiple discontinuities. These discontinuities can be avoided by always using the UHF solution with character most similar to the $R=1.9$ \AA\ UHF state, as we have done for the data labeled UCCSD(T) in Figures \ref{fig:sco_631g_absolute_energy}-\ref{fig:sco_631g_shifted_error} and Table \ref{tab:sco_energies}, but even in this case UCCSD(T) displays an NPE of 9.3 kcal/mol. One should bear in mind that without benchmark results it would be difficult to know whether this UHF determinant or the lower energy determinants found through stability analyses were the more reasonable starting points, and so it is hard to recommend the use of UCCSD(T) for predicting energy profiles when stretching transition-metal-oxide bonds. When based on the triple-bond's minimal (6e,6o) active space, CASPT2 proves more reliable than CC and achieves a smaller 6.6 kcal/mol NPE. However, this CASPT2 approach failed to converge at $R=2.6$ \AA\ due to the presence of an intruder state. One could overcome this problem either with a larger-than-minimal active space or through the use of level shifts \cite{Andersson:1995:caspt2_level_shift}, but the former may become untenable in larger transition metal systems while the latter introduces an uncontrolled free parameter.
\begin{figure}[t]
\centering
\includegraphics[width=7.5cm,angle=270]{sco_631g_shifted_energy}
\caption{Energy curves during the dissociation of [ScO]$^+$ in a 6-31G basis, with each curve shifted so that the zero of energy occurs at 1.7 \AA. See Section \ref{sec:sco} for further details.
}
\label{fig:sco_631g_shifted_energy}
\end{figure}
As in N$_2$, the active-space-free CJAGP improves on the relative energy of CASPT2 with an NPE of just 2.1 kcal/mol, as seen in Figures \ref{fig:sco_631g_shifted_energy} and \ref{fig:sco_631g_shifted_error}. However, as seen in Figure \ref{fig:sco_631g_absolute_energy} and Table \ref{tab:sco_energies}, the absolute energy errors for CJAGP are now much larger than they were in N$_2$. While Figure \ref{fig:sco_631g_absolute_energy} reveals that CJAGP recovers significantly more correlation energy than even a full valence (6e,12o) CASSCF approach, it is still missing roughly 70 mE$_{\mathrm{h}}$ relative to the benchmark MRCI+Q.
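For concreteness, the NPE quoted throughout is a simple function of the tabulated energies. A minimal sketch of its evaluation (an illustration only, with hypothetical array arguments rather than our production code; for the conversions used in the text, 1 kcal/mol $\approx$ 1.594 mE$_{\mathrm{h}}$):
\begin{verbatim}
import numpy as np

def non_parallelity_error(e_method, e_reference):
    """Non-parallelity error: the spread (highest minus lowest) of the
    deviation of a method's potential curve from a reference curve,
    evaluated on a common grid of bond lengths. Energies in Hartree;
    the result is returned in mE_h."""
    deviation = np.asarray(e_method) - np.asarray(e_reference)
    return 1000.0 * (deviation.max() - deviation.min())
\end{verbatim}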
As discussed in Section \ref{sec:n2}, this performance is inferior to what one would expect from a (currently non-existent) CASSCF-based variational CC. Given the excellent shape of CJAGP's potential energy curve (again, NPE is only 2.1 kcal/mol), we do not think the issue lies with the multi-reference Jastrow-AGP combination but instead suspect the missing correlation energy is due to the limited flexibility of the Jastrow operator's CC form (Eq.\ (\ref{eqn:cluster_amp})) when compared to a full CC doubles operator. In other words, we suspect that the limited CC flexibility leads to a limited dynamic correlation recovery, although one that is surprisingly well balanced across different geometries. As for N$_2$, these results strongly suggest that excellent accuracies could be achieved if DMC could use a trial function of CJAGP quality, as DMC is excellent at recovering dynamic correlation details when supplied with a qualitatively correct trial function \cite{TouUmr-JCP-08}. Indeed, based on its energy results, CJAGP should be an even better DMC trial function than full-valence CASSCF, and so we feel further motivated to investigate this exciting possibility. As a final note, we would like to point out that beyond 2.6\AA, the CJAGP optimization failed to converge to a good singlet, likely because at around this geometry a singlet-triplet crossing occurs and the singlet is no longer the ground state \cite{Mavridis:2010:tmo_ScO_TiO_CrO_MnO}. While we hope to investigate CJAGP's prospects for the direct, variational targeting of excited states \cite{Zhao:2016:evp_qmc} in the future, we have limited ourselves here to bond distances below 2.6\AA\ for which the singlet is the ground state.
\begin{figure}[t]
\centering
\includegraphics[width=7.5cm,angle=270]{sco_631g_shifted_error}
\caption{Energy deviations from MRCI+Q (6e,6o) during [ScO]$^+$ dissociation in a 6-31G basis, with each curve shifted by a constant so that it crosses zero at a bond distance of 1.7 \AA. For CJAGP, the line is a cubic polynomial fit to the points to give a sense of statistical uncertainty. See Section \ref{sec:sco} for further details.
}
\label{fig:sco_631g_shifted_error}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
We have presented an improved LM optimization scheme for the CJAGP ansatz that achieves an $N^5$ per-sample cost scaling that drops to $N^4$ if Krylov subspace methods are employed. This LM optimization obeys the strong zero variance principle in a quadratic sense, and is thus vastly more statistically efficient than the previously employed quasi-Newton approach. In practice, this improved optimization scheme has led to drastic reductions in the sample sizes and optimization steps required for variational energy minimization. The key theoretical development facilitating these improvements was the use of an alternative stochastic resolution of the identity in the estimation of the LM matrices or matrix-vector products. With this improved optimization scheme, we showed that CJAGP is vastly more reliable than traditional single-reference CC in two challenging triple-bond dissociations, one involving a transition metal. Further, we showed that for relative energies, the polynomial-cost, active-space-free CJAGP also outperformed the exponentially scaling, active-space-based CASPT2 method.
In both examples, the CJAGP relative energies were substantially more accurate than its absolute energies, suggesting to us that the limited flexibility of its cluster operator ($O(N^2)$ variables vs the traditional $O(N^4)$) prevented the capture of the finer details of dynamic correlation, albeit in a way that was well balanced across different geometries. Our findings in this study suggest two important avenues for future investigation. First, given that CJAGP appears to be a better trial function starting point than even a full-valence CASSCF reference, it would be highly desirable to combine it with diffusion Monte Carlo. This is not entirely trivial given that currently the CJ operator exists only in Hilbert (rather than real) space, but we look forward to investigating how its success may inform real space ansatz development. Second, our practical experience in applying CJAGP is making it increasingly clear that, in Hilbert space, the primary issue that will constrain the use of the CJAGP in the future is the fact that in its current form it must, at each sample, loop over a large slice of the two-electron integrals. As there has been much success in simplifying the handling of two-electron integrals in other areas of quantum chemistry, either by tensor decomposition or by screening, we look forward to the possibility of similar efficiency gains in the context of the CJAGP.
\section{Acknowledgments}
\label{sec:acknowledgments}
Part of this work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. We also acknowledge support from the University of California.
\subsection*{Abbreviations}$~$\\
RF: Rotating frame with the planet\\
AP: Averaged problem\\
RAP: Reduced averaged problem\\
RS: Retrograde satellite\\
TP: Tadpole\\
HS: Horseshoe\\
QS: Quasi-satellite\\
sRS: ``Satellized" retrograde satellite\\
\QSb : Binary quasi-satellite\\
\QSh : Heliocentric quasi-satellite
\subsection*{List of symbols}$~$\\
$L_1$, $L_2$, $L_3$: Circular Eulerian aligned configurations\\
$L_4$, $L_5$: Circular Lagrangian equilateral configurations\\
$\LQl$, $\LCl$: In the RF, long period families that originate from $L_4$ and $L_5$.\\
$\LT$, $\LQs$, $\LCs$: In the RF, short period families that originate from $L_3$, $L_4$ and $L_5$.\\[0.1cm]
Family $f$: In the RF, one-parameter family of simple-periodic symmetrical retrograde satellite orbits that extends from an infinitesimal neighbourhood of the planet to the collision with the Sun. For $\eps<0.0477$, it is stable but contains two particular orbits where the frequencies $\nu$ and $1-g$ are in $1:3$ resonance. These two orbits divide the neighbourhood of the family $f$ into three domains: sRS, \QSb and \QSh. \\
$1$, $\nu$, $g$: Frequencies respectively associated with the fast variations (the mean longitudes $\lam$ and $\lam'$), the semi-fast component of the dynamics (oscillation of the resonant angle $\theta$) and the secular evolution of a trajectory (precession of the periaster argument $\omega$).\\
$\NLQ$, $\NLC$: In the RAP, the AP and the RF, families of $2\pi/\nu$-periodic orbits parametrized by $u\geq 0$ that originate from $L_4$ and $L_5$. Moreover, they correspond to $\LQl$ and $\LCl$ in the RF.\\
$\GLT$, $\GLQ$, $\GLC$: In the RAP, families of fixed points parametrized by $e_0$ that originate from $L_3$, $L_4$ and $L_5$. In the AP and the RF, these fixed points correspond to periodic orbits of frequency $g$ and $1-g$, respectively. Moreover, they correspond to $\LT$, $\LQs$ and $\LCs$ in the RF.\\
$\GQS$: In the RAP, family of fixed points parametrized by $e_0$. In the AP and the RF, these fixed points correspond to periodic orbits of frequency $g$ and $1-g$, respectively. Moreover, this family corresponds to a part of the family $f$ that belongs to the \QSh domain.\\
$\GdLT$, $\GdLQ$, $\GdLC$, $\GdQS$: In the RAP, fixed points that belong to $\GLT$, $\GLQ$, $\GLC$ and $\GQS$ and are characterized by $g=0$. In the AP, sets of fixed points (also denoted as ``circles of fixed points") parametrized by $\omega(t=0)$. In the RF, sets of $2\pi$-periodic orbits parametrized by $\big(\lam'-\omega\big)_{t=0}$. \\
$\GdLTs$, $\GdLTu$, $\GdLQs$, $\GdLQu$, $\GdLCs$, $\GdLCu$, $\GdQSs$, $\GdQSu$: In the AP with $e'> 0$, families of fixed points that originate from the circles of fixed points $\GdLT$, $\GdLQ$, $\GdLC$, $\GdQS$ when $e'=0$.
\section{Introduction}
Following the discoveries, in 1899 and 1908, of the retrograde moons Phoebe and Pasiphae moving at great distances from their respective primaries Saturn and Jupiter, \cite{Ja1913} published the first study dedicated to the motion of retrograde satellites (RS). Seeking to understand how a moon could still be satellized at such a remote distance (close to the limit of the planet's Hill sphere), he highlighted in the Sun-Jupiter system that where ``\textit{[...] the solar forces would prohibit direct motion, [...] the solar and the Jovian forces would go hand in hand to maintain a retrograde satellite}".
Thus, by this remark, the author was the first to confirm the existence and stability of remote retrograde satellite objects in the solar system.\\
Afterwards, the existence and stability of some retrograde satellite orbits far from the secondary body have also been established in the planar restricted three-body problem with two equal masses \citep{St1933, Mo1935, He1965, He1965a}\footnote{ The first two are works of the Copenhagen group, which extensively explored periodic orbit solutions in the planar restricted three-body problem with two equal masses. The last two are the first numerical explorations of all the solutions of the restricted three-body problem, which recovered and completed the preceding works. } and in the Earth-Moon system \citep{Br1968}. In the framework of Hill's approximation, \cite{He1969} extended Jackson's study and highlighted that there exists a one-parameter family of simple-periodic symmetrical retrograde satellite orbits (denoted family $f$) that could exist beyond the Eulerian configurations $L_1$ and $L_2$. This has been confirmed in \cite{HeGu1970} in the restricted three-body problem. The authors showed, in the rotating frame with the planet (RF), that the family $f$ extends from the retrograde satellite orbits in an infinitesimal neighbourhood of the secondary to the collision orbit with the primary. Besides, they pointed out that if $\eps$, the ratio of the secondary mass over the sum of the system masses, is less than $0.0477$, the whole family is stable. \citet{Be1974,Be1975,Be1976} extended these results by studying the stability of the neighbourhood of the family $f$ in the configuration space for $0\leq\eps\leq 1$.
\begin{figure}
\begin{center}
\small
\def0.85\textwidth{0.85\textwidth}
\begingroup%
\makeatletter%
\providecommand\color[2][]{%
\errmessage{(Inkscape) Color is used for the text in Inkscape, but the package 'color.sty' is not loaded}%
\renewcommand\color[2][]{}%
}%
\providecommand\transparent[1]{%
\errmessage{(Inkscape) Transparency is used (non-zero) for the text in Inkscape, but the package 'transparent.sty' is not loaded}%
\renewcommand\transparent[1]{}%
}%
\providecommand\rotatebox[2]{#2}%
\ifx0.85\textwidth\undefined%
\setlength{\unitlength}{640.8bp}%
\ifx\svgscale\undefined%
\relax%
\else%
\setlength{\unitlength}{\unitlength * \real{\svgscale}}%
\fi%
\else%
\setlength{\unitlength}{0.85\textwidth}%
\fi%
\global\let0.85\textwidth\undefined%
\global\let\svgscale\undefined%
\makeatother%
\begin{picture}(1,0.35081149)%
\put(0,0){\includegraphics[width=\unitlength]{Fig1_DefQS.eps}}%
\put(0.09300874,0.16543423){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{Sun}}}%
\put(0.18164794,0.09427318){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{Planet}}}%
\put(0.38264669,0.16917955){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{Asteroid}}}%
\put(0.05680413,0.31025321){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{Rotating frame with the planet (RF)}}}%
\put(0.64481898,0.31025321){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{Heliocentric frame}}}%
\put(0.05305868,0.0380934){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{a.}}}%
\put(0.61985019,0.0380934){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{b.}}}%
\put(0.15667915,0.24783123){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{Hill's sphere}}}%
\end{picture}%
\endgroup%
\caption{\small Asteroid on a quasi-satellite orbit (QS). In the rotating frame with the planet (RF) (a.), the trajectory is that of a retrograde satellite (RS) outside the planet's Hill sphere.
In the heliocentric frame (b.), the trajectory is represented by heliocentric osculating ellipses with a non-zero eccentricity (in the circular case) and a resonant angle $\theta=\lam-\lam'$ that librates around zero.}
\label{DefQS}
\end{center}
\end{figure}
After these theoretical works, the study of retrograde satellite orbits was addressed from a more practical point of view, with the project of injecting a spacecraft into a circum-Phobos orbit. Note that since the Hill's sphere of Phobos is very close to its surface, remote retrograde satellites are particularly well-adapted trajectories. Hence, at the end of the eighties, the terminology ``quasi-satellite"\footnote{ Let us still mention that the ``quasi-satellite" terminology had already been used in the paper of \cite{DaIp1972}, but there it described the resonant behaviour of the near-Earth object 1685 Toro and was therefore completely disconnected from retrograde satellite motion. } (QS) appeared in the USSR astrodynamics community to define trajectories in the restricted three-body problem in the rotating frame that correspond to retrograde satellite orbits outside the Hill's sphere of the secondary body (see Fig.\ref{DefQS}a). The Phobos mission study led to the works of \cite{Ko1990} and \citet{LiVa1993,LiVa1994,LiVa1994a}. At the end of the nineties, quasi-satellite motion appeared in the celestial mechanics community in the context of asteroid trajectories in the solar system.\\
Let us suppose a QS-type asteroid far enough from the planet so that the influence of the Sun dominates its movement and the planet therefore acts as a perturber. Then, its trajectory can be represented by heliocentric osculating ellipses whose variations are governed by the influence of the planet. In this context, \cite{MiIn1997} remarked that the asteroid and the planet are in $1:1$ mean motion resonance and therefore that the quasi-satellite orbits correspond to a particular kind of configuration in the co-orbital resonance. Unlike the tadpole (TP) orbits that librate around the Lagrangian equilibria $L_4$ and $L_5$, or the horseshoes (HS) that encompass $L_3$, $L_4$ and $L_5$, the quasi-satellite orbits are characterized by a resonant angle $\theta = \lam - \lam'$ that librates around zero (where $\lam$ and $\lam'$ are the mean longitudes of the asteroid and the planet) and a non-zero eccentricity if the planet gravitates on a circle (see Fig.\ref{DefQS}b). In their paper, these authors also described a first perturbative treatment to study the long-term stability of quasi-satellites in the solar system.\\
At that time, no natural object was known to be in this configuration. However, they suggested that, at least, the Earth and Venus could have quasi-satellite companions. Following this work, \cite{WiInMi2000} also predicted, via a numerical investigation of the stability around the giant planets, that Uranus and Neptune could harbour QS-type asteroids, whereas they did not find stable solutions for Jupiter and Saturn. \\
Subsequently, \cite{Na1999} and \cite{NaChMu1999} became the reference works on co-orbital dynamics with close encounters. Using Hill's approximation, these authors highlighted that, in the spatial case, transitions between horseshoe and quasi-satellite trajectories can occur. They exhibited new kinds of compound trajectories denoted HS-QS, TP-QS or TP-QS-TP, which means that stable transitions exist between quasi-satellite, tadpole and horseshoe orbits.
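In practice, these regimes can be told apart directly from the temporal behaviour of the resonant angle. A rough sketch (ours, with arbitrary libration thresholds, and ignoring the transitions of the compound orbits just mentioned):
\begin{verbatim}
import numpy as np

def coorbital_regime(theta_deg):
    """Heuristic classification of a 1:1-resonant trajectory from a time
    series of the resonant angle theta = lambda - lambda', wrapped to
    (-180, 180] degrees: QS librates around 0, TP around +60 or -60,
    HS encompasses L3, L4 and L5 while avoiding 0. Thresholds arbitrary."""
    th = np.asarray(theta_deg, dtype=float)
    if np.all(th > 5.0) or np.all(th < -5.0):
        return "TP"                    # stays on one side of theta = 0
    if np.max(np.abs(th)) < 90.0:
        return "QS"                    # librates around theta = 0
    if np.min(np.abs(th)) > 5.0:
        return "HS"                    # crosses 180 deg but avoids 0 deg
    return "circulating or compound"
\end{verbatim}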
Later, \cite{NeThFe2002} recovered these new co-orbital structures in a global study of the co-orbital resonance phase space. By developing a perturbative scheme using numerical averaging techniques, they showed how the tadpole, horseshoe, quasi-satellite and compound orbits vary with the asteroid eccentricity and inclination in the planar-circular, planar-eccentric and spatial-circular models. In particular, they showed that the higher the asteroid's eccentricity, the larger the domain occupied by the quasi-satellite orbits in the phase space.\\
More recently, the long-term stability of quasi-satellites has been studied using perturbation theory in \cite{MiInWi2006} and \cite{SiNeAr2014}. The former developed a practical algorithm to detect QS-type asteroids in temporary or perpetual regimes, while the latter established conditions for the existence of quasi-satellite motion and also explored its different possible regimes. Following these theoretical works, many objects likely to be at least temporary quasi-satellites have been found in the solar system. The first confirmed minor body was 2002 VE68, shown to be in co-orbital motion with Venus by \cite{MiBrWi2004}. The Earth \citep{BrInCo2004, CoChMi2002, CoVeBr2004, DeDe2014, Wa2009, Wa2010} and Jupiter \citep{KiNa2007, WaKr2012} are the two planets with the largest number of documented QS-type objects. Likewise, Saturn \citep{Ga2006}, Uranus \citep{Ga2006, DeDe2014} and Neptune \citep{DeDe2012} possess at least one object of this type.\\
Finally, let us mention that quasi-satellite motion could play a role in other celestial problems: according to \cite{Ko2005} and \citeyearpar{Ko2013}, planetesimals could be trapped in quasi-satellite motion around a protoplanet, as could interplanetary dust particles around the Earth. Moreover, although no co-orbital exoplanet system has been found, several studies of the planar planetary three-body problem have shown the existence and stability of two co-orbital planets in quasi-satellite motion \citep{HaPsVo2009, HaVo2011, GiBeMi2010}. Over the last twenty years, even though the ``quasi-satellite" terminology has become dominant in the literature, some studies rather use ``retrograde satellite" \citep{Na1999, NeThFe2002}, in reference to the neighbourhood of the family $f$ in the restricted problem in the rotating frame with the planet. Hence, there exists an ambiguity of terminology, which is a consequence of the several approaches used to describe these orbits, depending on the distance between the two co-orbitals. One of our purposes is thus to clarify which terminology, ``quasi-satellite" or ``retrograde satellite", should be used. To this end, we revisit the classical works on the family $f$ \citep{HeGu1970,Be1974} in section \ref{sec:RF} and, through a study of its frequencies, we show that the neighbourhood of the family is split into three different domains connected by an orbit: one corresponding to the ``satellized" retrograde satellite orbits, and the two others to the quasi-satellites. Among these two quasi-satellite domains, we identify one that is associated with asteroid trajectories in the solar system. It is on this last one that the paper focusses. A usual approach to these co-orbital trajectories in the restricted \citep{MiInWi2006,NeThFe2002,SiNeAr2014} and planetary \citep{RoPo2013} problems consists in averaging the Hamiltonian over the fast angle of the system (the planet mean longitude) to reduce the study of the problem to its semi-fast and secular components.
This approach is generally denoted as the ``averaged problem" (AP). However, as mentioned in \cite{RoPo2013} and \cite{RoNiPo2015}, it has the important drawback of poorly reflecting the dynamics close to the singularity associated with the collision with the planet. Since some quasi-satellite trajectories experience close encounters with the planet, they are located close to this singularity in the averaged problem, which implies that this approach is not appropriate for them. Thus, in order to estimate a validity limit of the averaged problem for the study of quasi-satellite motion, we also revisit the co-orbital resonance via the averaged problem. First, in section \ref{sec:AP}, we develop the Hamiltonian formalism of the problem and introduce the averaged problem. Subsequently, in section \ref{sec:CC}, we focus on the circular case (i.e. a planet on a circular orbit), which allows a further reduction. We introduce the reduced averaged problem (RAP), which seems to be the best-adapted approach to understanding the dynamics in the co-orbital resonance. Focussing on quasi-satellite motion, we exhibit a family of fixed points in the reduced averaged problem representing the family $f$, which allows us to estimate the validity limit of the averaged problem. Next, to bridge the gap between the averaged problem and the works of \cite{HeGu1970} and \cite{Be1975}, we devote section \ref{sec:RF} to revisiting the motion in the rotating frame in the circular case, in order to describe the family $f$ as well as its reachable part in the averaged problem, and to characterize its neighbourhood. Through this study, we show how the quasi-satellite domain reachable in the averaged problem shrinks as $\eps$ increases. Finally, in section \ref{sec:EC}, we come back to the averaged problem with the aim of extending to the eccentric case (i.e. a planet on an eccentric orbit) a result on co-orbital frozen ellipses highlighted in section \ref{sec:FOP}.
\section{The averaged problem}
\label{sec:AP}
In the framework of the planar restricted three-body problem, we consider a primary with a mass $1-\eps$ (the Sun or a star), a secondary (a planet) with a mass $\eps$ small with respect to $1$, and a massless third body (particle or asteroid). We assume that the planet is in elliptic Keplerian motion whose eccentricity is denoted $e'$. Without loss of generality, we set its semi-major axis equal to $1$ and its argument of periaster equal to zero. Likewise, we fix its orbital period to $2\pi$ (and therefore its mean motion to $1$), which imposes that the gravitational constant be equal to $1$. In a heliocentric frame, the Hamiltonian of the problem reads
\begin{align}
\cH(\br, \brd, t) = \cH_K(\br,\brd) + \cH_P(\br, t)
\label{eq:Ham1}
\end{align}
with
$$\cH_K(\br,\brd) := \frac{1}{2}\norm{\brd}^2 - \frac{1}{\norm{\br}}$$
and
$$ \cH_P(\br,t) := \eps\bigg(-\frac{1}{\norm{\br - \br'(t)}} +\frac{1}{\norm{\br}} +\frac{\br \cdot \br'(t)}{\norm{\br'(t)}^3}\bigg).$$
In this expression, $\br$ is the heliocentric position of the particle, $\brd$ its conjugated variable, and $\brp(t)$ is the position of the planet at the time $t$. In order to work with an autonomous Hamiltonian, we extend the phase space by introducing $\Lam'$, the conjugated variable of $\lam':=t$ that corresponds to the mean longitude of the planet. As a consequence, the Hamiltonian becomes, on the extended phase space, equal to $\Lam' + \cH$.
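To fix ideas, a minimal sketch (ours, not taken from the references) of the evaluation of this Hamiltonian for a planar heliocentric state is given below; it specializes to a planet on a circular orbit ($e'=0$), so that $\br'(t)=(\cos t, \sin t)$ in the units chosen above.
\begin{verbatim}
import numpy as np

def hamiltonian(r, rdot, t, eps):
    """H = H_K + H_P for a planar heliocentric state (r, rdot) at time t,
    with a planet on a circular orbit (e' = 0): r'(t) = (cos t, sin t)
    in our units (a' = 1, n' = 1, G = 1, total mass 1)."""
    r, rdot = np.asarray(r, float), np.asarray(rdot, float)
    rp = np.array([np.cos(t), np.sin(t)])       # planet position, |r'| = 1
    H_K = 0.5 * (rdot @ rdot) - 1.0 / np.linalg.norm(r)
    H_P = eps * (-1.0 / np.linalg.norm(r - rp)  # attraction by the planet
                 + 1.0 / np.linalg.norm(r)      # mass correction of H_K
                 + (r @ rp))                    # indirect term, |r'|^3 = 1
    return H_K + H_P
\end{verbatim}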
In order to define a canonical coordinate system related to the elliptic elements $(a, e, \lam, \omega)$ (respectively semi-major axis, eccentricity, mean longitude and argument of the periaster) and adapted to the co-orbital resonance, we introduce the canonical coordinates $(\theta, u, -i\xb, x, \lam', \tLam')$ where
\begin{equation}
\theta := \lam - \lam' \qtext{and} u := \sqrt{a} - 1
\end{equation}
are the resonant variables,
\begin{equation}
x := \sqrt{\Gam}\exp(i\omega) \qtext{with} \Gam := \sqrt{a}\big(1- \sqrt{1-e^2}\big)
\end{equation}
is the Poincar\'e variable associated with the eccentricity $e$, and $\tLam'$ is the conjugated variable of $\lam'$ such that
\begin{equation}
\Lam' = \tLam' - u.
\end{equation}
If we denote by $\Phi$ the canonical transformation
\begin{equation}
\Phi : \quad \Bigg\{
\begin{array}{ccc}
{\mathbb T}\times{\mathbb R}\times{\mathbb C}^2\times{\mathbb T}\times{\mathbb R} &\longrightarrow & {\mathbb R}^{4}\times{\mathbb T}\times{\mathbb R} \\
(\theta,u, -i\xb,x,\lam', \tLam' ) &\longmapsto &(\br ,\brd ,\lam', \Lam')
\end{array}
\nonumber,
\end{equation}
the Hamiltonian of the problem reads $\tLam' + H$ with
\begin{equation}
H:=\big(\Lam' + \cH\big)\circ\Phi -\tLam'= H_K - u + H_P
\end{equation}
where
\begin{equation}
H_K := -\frac{1}{2(1+u)^2} \qtext{and} H_P := \cH_P\circ\Phi \nonumber.
\end{equation}
In these variables, the Hamiltonian possesses three degrees of freedom, each one corresponding to a particular component of the dynamics inside the co-orbital resonance. Indeed, the resonant angle $\theta$ varies slowly with respect to the fast angle $\lam'$. Thus the degree of freedom $(\theta,u)$ is generally known as the ``semi-fast" component of the dynamics, while the degree of freedom $(-i\xb, x)$ is associated with the ``secular" variations of the trajectory. As a consequence, a natural way to reduce the dimension of the problem in order to study the ``semi-fast" and ``secular" dynamics of the co-orbital motion is to average the Hamiltonian over $\lam'$. In the following, this averaged Hamiltonian will be denoted $\overline{H}$.
\subsection{The averaged Hamiltonian}
According to perturbation theory, there exists a canonical transformation
\begin{equation}
\cC : \quad \Bigg\{
\begin{array}{lcl}
{\mathbb T}\times{\mathbb R}\times{\mathbb C}^2\times{\mathbb T}\times{\mathbb R} &\longrightarrow & {\mathbb T}\times{\mathbb R}\times{\mathbb C}^2\times{\mathbb T}\times{\mathbb R}\\
(\und{\theta}, \und{u},-i\overline{\und{x}},\und{x}, \und{\lam'},\und{\tLam'}) &\longmapsto &(\theta,u,-i\xb,x, \lam', \tLam') ,
\end{array}\nonumber
\end{equation}
such that, in the averaged variables $(\und{\theta}, \und{u},-i\overline{\und{x}},\und{x}, \und{\lam'},\und{\tLam'})$, the Hamiltonian reads
\begin{equation}
\und{\tLam'} + {\bf{H}} = \big(\tLam' + H\big)\circ\cC \qtext{with } {\bf{H}}:= \Hb + H_*
\label{eq:canon_C}
\end{equation}
where
\begin{equation}
\Hb:= H_K -\und{u} + \Hb_P\nonumber
\end{equation}
with
\begin{equation}
\Hb_P(\und{\theta}, \und{u}, -i\und{\overline{x}}, \und{x}) :=\frac{1}{2\pi}{\displaystyle{\int_0^{2\pi}}}H_P (\und{\theta}, \und{u}, -i\und{\overline{x}}, \und{x},\lam')d\lam'.
\label{eq:moy_pert}
\end{equation}
$H_*$ is a remainder that is supposed to be small with respect to $\Hb_P$. More precisely, the transformation $\cC$ is close to the identity and can be constructed as the time-one map of the Hamiltonian flow generated by some auxiliary function $\chi$ \citep[for further details, see][]{RoNiPo2015}.
As a consequence, if $\{f,g\}$ represents the Poisson bracket of the two functions $f$ and $g$, and if $y$ stands for one of the variables $(\theta, u, -i\xb, x, \lam', \tLam')$, then the two coordinate systems are related by
\begin{equation}
y = \und{y} + \{\chi, \und{y}\} + \ode\label{eq:Tr}
\end{equation}
with
\begin{equation}
\chi(\theta, u, -i\xb, x, \lam') = \int_0^{\lam'}\Big[\Hb_P(\theta, u, -i\xb, x) - H_P(\theta, u, -i\xb, x, \tau)\Big] \,d\tau\nonumber.
\end{equation}
In this paper, we only consider the restriction at first order in $\eps$ of the Hamiltonian in the equation \eqref{eq:canon_C}. This approximation of the initial problem, which is described by $\Hb$, is generally known as the ``averaged problem" (AP). Thus, the averaged problem possesses two degrees of freedom and two parameters, $\eps$ and $e'$, respectively the planetary mass ratio and the eccentricity of the planet. \\
For the sake of clarity, the ``underdot" used to denote the averaged coordinates will be omitted below.
\subsection{Numerical averaging}
There exist at least two classical averaging techniques adapted to the co-orbital resonance: an analytical one based on an expansion of the Hamiltonian in power series of the eccentricity \citep[e.g.][]{Mo2001, RoPo2013}, and a numerical one consisting in a numerical evaluation of $\overline{H}$ and its derivatives \citep[e.g.][]{NeThFe2002, GiBeMi2010, BeRo2001, MiInWi2006, SiNeAr2014}. Whereas for low eccentricities the analytical technique is very efficient, reaching higher values of eccentricity requires high order expansions, which generate very heavy expressions. Thus, in this case, the use of numerical methods may be more convenient. Hence, in order to explore the phase space of the co-orbital resonance for all eccentricities lower than one, we use the numerical averaging method developed by \cite{NeThFe2002}.\\
This method consists in a numerical evaluation of the integral (\ref{eq:moy_pert}). More generally, let $F$ be a generic function depending on $(\theta, u, -i\xb, x,E, E')$ where $E$ and $E'$ are the eccentric anomalies of the particle and the planet. As its average over the mean longitude $\lam'$ is computed for a given fixed value of $\theta$, we have $d\lam' = d\lam = \big(1-e(x)\cos E\big) dE$. As
\begin{equation}
\theta = \lam - \lam' = E + \omega(x) - E' - e(x)\sin E + e'\sin E',
\end{equation}
the eccentric anomaly $E'$ can be expressed in terms of $(\theta,E,x,e')$. Eventually, the integral reads
\begin{equation}
\overline{F}(\theta, u, -i\xb,x) = \frac{1}{2\pi}\int_0^{2\pi} F\big(\theta, u, -i\xb,x, E, E'(\theta,E,x, e')\big)\big(1 - e(x)\cos E\big)dE,
\end{equation}
which can be computed by discretizing the variable $E$ as $E_k = \frac{2k\pi}{N}$ with $100\leq N\leq 300$ \citep[see][for more details]{NeThFe2002}.
\section{The co-orbital resonance in the circular case ($e'=0$)}
\label{sec:CC}
In the circular case -- that is the case where the planet gravitates on a circle -- the averaged problem defined by $\Hb$ is invariant under the action of the symmetry group $SO(2)$ associated with the rotations around the vertical axis. Thereby, in the vicinity of the quasi-circular orbits ($|x|\ll1$), the expansion of $\Hb$ in power series of $x$ and $\xb$ reads
\begin{equation}
\sum_{(p,\pb) \in{\mathbb N}^2} \Psi_{p,\pb}(\theta,u) x^p \xb^\pb
\end{equation}
where the integers occurring in these summations satisfy the relation
\begin{equation}
p - \pb = 0,
\end{equation}
which results from the d'Alembert rule.
Hence, we have
\begin{equation}
\dron{\Hb}{\omega}(\theta,u,-i\xb,x) = 0 \qtext{and thus} \dGam = \dot{x}\xb + x\dot{\xb} = 0,
\end{equation}
which implies that $\Gam$ is a first integral. As a consequence, in the averaged problem, the two degrees of freedom are separable and a reduction is possible.\\
By fixing the value of the parameter $\Gam=|x|^2$ and eliminating the cyclic variable $\omega=\arg(x)$, we remove one degree of freedom. We call this new problem the ``reduced averaged problem" (RAP). However, instead of using $\Gam$ as a parameter, we introduce $e_0$ such that
\begin{equation}
\Gam = (1 + u)\big(1-\sqrt{1-e^2}\big) = 1-\sqrt{1-e_0^2} .
\end{equation}
Then, if $u\ll1$, the parameter $e_0$, which is equal to $e + \cO(u)$, provides an approximation of the eccentricity value $e$ of the trajectory.
\subsection{The reduced Hamiltonian}
For a given value $e_0=a$ such that $0\leq a<1$, let us define $\MmoyI\subset{\mathbb T}\times{\mathbb R}\times{\mathbb C}^2$ as the intersection of the phase space of the averaged problem (denoted $\Mmoy\subset{\mathbb T}\times{\mathbb R}\times{\mathbb C}^2$) with the hyperplane $\{e_0 = a\}$, and $\MmoyIQ$ as the quotient space of this section by the symmetry group $SO(2)$. Under the action of the application
\begin{align}
\psi_{e_0}: \quad \Bigg\{
\begin{array}{ccc}
\MmoyI & \longrightarrow & \MmoyIQ\\
(\theta, u, -i\xb, x) &\longmapsto &(\theta, u)
\end{array},\label{eq:psie0}
\end{align}
the problem is reduced to one degree of freedom and is associated with the reduced Hamiltonian
\begin{equation}\Hb_{e_0}:= \Hb\big(\,\cdot, \, \cdot, -i\xb(e_0),x(e_0)\big).\end{equation}
Thus, for a fixed $e_0$, a trajectory in the RAP is generally a periodic orbit, but can also be a fixed point. As a consequence, the description of the RAP's phase portraits obtained for various values of $e_0$ allows one to understand the global dynamics of the co-orbital resonance in the circular case. Since the AP is better suited to illustrating the semi-fast and secular variations of the orbital elements, and the rotating frame (RF) is more classical for understanding the dynamics of the restricted three-body problem, we will see in the next section how a given orbit is represented in these three different points of view.
\subsection{Correspondence between the RAP, the AP and the RF}\label{sec:Intp}
For a given value of $e_0$, let us consider a periodic trajectory of frequency $\nu$ in the RAP. The correspondence between the RAP and the AP consists in taking the preimage under $\psi_{e_0}$ of a trajectory belonging to $\MmoyIQ$. However, $\omega=\arg(x)$ being ignorable in the RAP, $\psi_{e_0}$ is not injective, which implies that a whole set of orbits in the AP, parametrized by $\omega_0:= \omega(t=0)\in{\mathbb T}$, is mapped by $\psi_{e_0}$ to the initial trajectory. Furthermore, as
\begin{equation}
\dot{\omega}(t) = -\dron{}{\Gam}\Hb_{e_0}\big(\theta(t), u(t)\big),\nonumber
\end{equation}
$\dot{\omega}(t)$ is $2\pi/\nu$-periodic and can be decomposed as
\begin{equation}
\dot{\omega}(t) = g -\Big[\dron{}{\Gam}\Hb_{e_0}\big(\theta(t), u(t)\big) + g\Big]
\end{equation}
where
\begin{equation}
g :=\frac{\nu}{2\pi} \int_0^{2\pi/\nu} -\dron{}{\Gamma}\Hb_{e_0}\big(\theta(t),u(t)\big) dt\nonumber
\end{equation}
is the secular precession frequency of $\omega$. Thus, for each orbit of this set, the temporal evolution of the argument of its periaster is given by
\begin{equation}
\omega(t) = \omega_0 + gt - \int_0^t \left[\dron{}{\Gamma}\Hb_{e_0} \big(\theta(\tau),u(\tau)\big)+ g\right] d\tau .
\end{equation}
As a consequence, a given periodic trajectory in the RAP generally corresponds, in the AP, to a set of quasi-periodic orbits of frequencies $\nu$ and $g$. Nevertheless, $\omega$ being ignorable when the osculating ellipses are circles (i.e. $e_0=0$), the trajectories are fixed points or periodic orbits of frequency $\nu$ in both approaches. When $e_0>0$ and $g=0$, a periodic trajectory of the RAP provides a set of periodic orbits of frequency $\nu$ in the AP. Likewise, a fixed point corresponds to a set of degenerate fixed points, distributed along a circle in the phase space represented by the variables $(x, -i\xb)$; their set will be described as a ``circle of fixed points" in what follows. Next, to connect the AP with the RF, we first have to apply $\cC$ to the trajectory, which adds the fast frequency, i.e. the planet mean motion, to the variations of the orbital elements. In the circular case, the d'Alembert rule implies that $\tLam'+ H$ only depends on the angles $\lam'-\omega$ and $\theta$. Consequently, by defining the canonical transformation
\begin{equation}
\widehat{\psi}\ : \quad \Bigg\{
\begin{array}{ccc}
\sM &\longrightarrow & \widehat{\psi}(\sM) \\
(\theta,u,-i\xb,x,\lam', \tLam') &\longmapsto &(\theta,u,-i\overline{\xi},\xi,\lam', \tLam'-\Gam)
\end{array}\nonumber
\end{equation}
where $\sM$ corresponds to the non-averaged phase space\footnote{As we have to take into account the degree of freedom $(\lam', \tLam')$, we have $\sM\subset{\mathbb T}\times{\mathbb R}\times{\mathbb C}^2\times{\mathbb T}\times{\mathbb R}$.}, $\xi=\sqrt{\Gam}\exp(i\varphi)$ and $\varphi= \lam'-\omega$, the Hamiltonian $(\tLam' + H)\circ\widehat{\psi}^{-1}$ becomes autonomous with two degrees of freedom associated with the frequencies $\nu$ and $1-g$. Moreover, this Hamiltonian is related to that in the RF by the pullback by $\Phi^{-1}$, that is, the canonical transformation to Cartesian coordinates. Thus, a trajectory in the RF is generally quasi-periodic with two frequencies. As a consequence, a given trajectory of the RAP generally corresponds to a set of orbits in the RF parametrized by $\varphi_0:=\varphi(t=0)\in{\mathbb T}$ with one more frequency. For the sake of clarity, we summarize the status of the remarkable orbits in the three different approaches in the table \ref{tab:Orb}.
\begin{table}
\begin{center}
\small
\begin{tabular}{|C{1.6cm}||C{1.5cm}|C{1.5cm}|C{1.5cm}|C{1.5cm}|C{1.5cm}|C{1.5cm}|}
\hline
\multirow{2}*{\textbf{Approach}} & \multicolumn{2}{c|}{$e_0=0$} & \multicolumn{4}{c|}{$e_0>0$} \\
\cline{4-7}
& \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{$g\neq 0$} & \multicolumn{2}{c|}{$g= 0$} \\
\hline
\multirow{2}*{\textbf{RAP}} & FP & PO & FP & PO & FP & PO \\
\multirow{2}*{$\downarrow $ } & & $(\nu)$& & $(\nu)$ & & $(\nu)$\\
\cline{2-7}
\multirow{2}*{\textbf{AP}} & FP & PO & $S_{\omega_0}$PO & $S_{\omega_0}$QPO & $S_{\omega_0}$FP & $S_{\omega_0}$PO \\
\multirow{2}*{$\downarrow $ } & & $(\nu)$&$(g)$ & $(\nu,g)$ & & $(\nu)$\\
\cline{2-7}
\multirow{2}*{\textbf{RF}} & FP & PO & $S_{\varphi_0}$PO & $S_{\varphi_0}$QPO & $S_{\varphi_0}$PO & $S_{\varphi_0}$QPO \\
& & $(\nu)$& $(1-g)$ & $(\nu,1-g)$& $(1)$ & $(\nu, 1)$\\
\hline
\end{tabular}
\smallskip
\caption{\small Correspondence between the three approaches for a given trajectory in the RAP. $S_{\omega_0}$, $S_{\varphi_0}$: set of solutions parametrized by $\omega_0$ and $\varphi_0\in{\mathbb T}$. \textbf{FP}: Fixed point. \textbf{PO}: Periodic orbit. \textbf{QPO}: Quasi-periodic orbit.
Parenthesis: associated frequencies. }\label{tab:Orb} \end{center} \end{table} \subsection{Phase portraits of the RAP}\label{sec:Phase} \begin{figure} \begin{center} \small \def0.85\textwidth{1.\textwidth} \begingroup% \makeatletter% \providecommand\color[2][]{% \errmessage{(Inkscape) Color is used for the text in Inkscape, but the package 'color.sty' is not loaded}% \renewcommand\color[2][]{}% }% \providecommand\transparent[1]{% \errmessage{(Inkscape) Transparency is used (non-zero) for the text in Inkscape, but the package 'transparent.sty' is not loaded}% \renewcommand\transparent[1]{}% }% \providecommand\rotatebox[2]{#2}% \ifx0.85\textwidth\undefined% \setlength{\unitlength}{752bp}% \ifx\svgscale\undefined% \relax% \else% \setlength{\unitlength}{\unitlength * \real{\svgscale}}% \fi% \else% \setlength{\unitlength}{0.85\textwidth}% \fi% \global\let0.85\textwidth\undefined% \global\let\svgscale\undefined% \makeatother% \begin{picture}(1,0.78723404)% \put(0,0){\includegraphics[width=\unitlength]{Fig2_Phase.eps}}% \put(0.06783473,0.56884895){\makebox(0,0)[lb]{\smash{$-0.02$}}}% \put(0.06783473,0.60328837){\makebox(0,0)[lb]{\smash{$-0.01$}}}% \put(0.08374001,0.67223991){\makebox(0,0)[lb]{\smash{$0.01$}}}% \put(0.08374001,0.70667932){\makebox(0,0)[lb]{\smash{$0.02$}}}% \put(0.25355303,0.65395404){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$L_4$}}}% \put(0.39784131,0.62203915){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$L_5$}}}% \put(0.18235921,0.05609273){\makebox(0,0)[lb]{\smash{$-120$}}}% \put(0.25609487,0.05609273){\makebox(0,0)[lb]{\smash{$-60$}}}% \put(0.33478066,0.05609273){\makebox(0,0)[lb]{\smash{$0$}}}% \put(0.39682894,0.05609273){\makebox(0,0)[lb]{\smash{$60$}}}% \put(0.46139115,0.05609273){\makebox(0,0)[lb]{\smash{$120$}}}% \put(0.60776163,0.05612821){\makebox(0,0)[lb]{\smash{$-120$}}}% \put(0.68149729,0.05612821){\makebox(0,0)[lb]{\smash{$-60$}}}% \put(0.76018307,0.05612821){\makebox(0,0)[lb]{\smash{$0$}}}% \put(0.82223135,0.05612821){\makebox(0,0)[lb]{\smash{$60$}}}% \put(0.88679356,0.05612821){\makebox(0,0)[lb]{\smash{$120$}}}% \put(0.14717004,0.5419235){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$a.$}}}% \put(0.93448523,0.54352821){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$b.$}}}% \put(0.14717004,0.31426351){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$c.$}}}% \put(0.93440439,0.31480386){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$d.$}}}% \put(0.14717004,0.08660403){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$e.$}}}% \put(0.93440439,0.08501672){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$f.$}}}% \put(0.09849547,0.63779531){\makebox(0,0)[lb]{\smash{$0$}}}% \put(0.03248916,0.63489088){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$u$}}}% \put(0.06570707,0.34118936){\makebox(0,0)[lb]{\smash{$-0.02$}}}% \put(0.06570707,0.37562878){\makebox(0,0)[lb]{\smash{$-0.01$}}}% \put(0.08161235,0.44458034){\makebox(0,0)[lb]{\smash{$0.01$}}}% \put(0.08161235,0.47901975){\makebox(0,0)[lb]{\smash{$0.02$}}}% \put(0.09636779,0.41013572){\makebox(0,0)[lb]{\smash{$0$}}}% \put(0.0303615,0.40723127){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$u$}}}% \put(0.31106141,0.02448512){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$\theta$ $(\degre)$}}}% \put(0.73110061,0.02474781){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$\theta$ $(\degre)$}}}% \put(0.14717005,0.63799659){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$L_3$}}}% \put(0.06570707,0.11352912){\makebox(0,0)[lb]{\smash{$-0.02$}}}% \put(0.06570707,0.14796868){\makebox(0,0)[lb]{\smash{$-0.01$}}}% 
\put(0.08161235,0.21692055){\makebox(0,0)[lb]{\smash{$0.01$}}}%
\put(0.08161235,0.25136008){\makebox(0,0)[lb]{\smash{$0.02$}}}%
\put(0.09636779,0.18247576){\makebox(0,0)[lb]{\smash{$0$}}}%
\put(0.0303615,0.17957131){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$u$}}}%
\end{picture}%
\endgroup%
\caption{\small Phase portraits of a Sun-Jupiter like system in the circular case. For a, b, c, d, e and f, $e_0$ is equal to $0$, $0.25$, $0.5$, $0.75$, $0.85$ and $0.95$. The black dot (a.) and curves represent the collision with the planet. The blue, sky blue and red dots are level curves of TP, QS and HS orbits. For $e_0=0$, the blue triangles and red circles represent $L_4$, $L_5$ and $L_3$, while for $e_0>0$ they form the families $\GLQ$, $\GLC$ and $\GLT$. From $L_3$ and the unstable part of $\GLT$ originates a separatrix that is represented by a red curve. The sky blue diamonds form the family $\GQS$. Finally, the green squares represent the stable part of $\GLT$, around which trajectories represented by green dots librate.}\label{fig:phase}
\end{center}
\end{figure}
The figure \ref{fig:phase} displays the phase portraits of the RAP associated with six different values of the parameter $e_0$ for a Sun-Jupiter like system ($\eps =0.001$). In Fig.\ref{fig:phase}a, $e_0$ is equal to zero: the osculating ellipses of all the orbits are circles. The singular point located at $\theta = u = 0$ corresponds to the collision between the asteroid and the planet, where $\Hb$ is not defined (the integral (\ref{eq:moy_pert}) is divergent). The two elliptic fixed points, at $(\theta, u) = (\pm 60 \degre,0)$, correspond to the Lagrangian equilateral configurations $L_4$ and $L_5$, whereas the hyperbolic fixed point, close to $(\theta, u) = (180 \degre,0)$, is associated with the Eulerian aligned configuration $L_3$. \\
On the phase portraits described by \cite{NeThFe2002}, two additional equilibria appear, located at $\theta=0\degre$: the Eulerian aligned configurations $L_1$ and $L_2$. But as has been shown in \cite{RoPo2013}, there exists a neighbourhood of the collision singularity inside which the averaged Hamiltonian does not properly reflect the dynamics of the ``initial" problem. Indeed, a remainder which depends on the fast variable and is supposed to be small with respect to $\Hb_P$ is generated by the averaging process; we denote it $H_*$ in the expression (\ref{eq:canon_C}). Although $H_*$ is equal to $\ode$ in the major part of the phase space, when the distance to the collision is of order $\eps^{1/3}$ or less, $H_*$ is at least of the same order as the perturbation $\Hb_P$ \citep{RoNiPo2015}. Thus, this defines an ``exclusion zone" inside which the trajectories, and especially the equilibria $L_1$ and $L_2$, fall outside the scope of the averaged Hamiltonian. The orbits that librate around $L_4$ or $L_5$ while lying inside the separatrix originating from $L_3$ correspond to the tadpole (TP) orbits. For $e_0=0$, these two domains form two families of $2\pi/\nu$-periodic orbits originating in $L_4$ and $L_5$ and parametrized by $u\geq 0$. We denote them $\NLQ$ and $\NLC$. More precisely, they are the Lyapounov families of the Lagrangian equilateral configurations associated with the libration, generally known as the long period families $\LQl$ and $\LCl$ in the RF \citep[see][]{MeHa1992}. Finally, outside the separatrix lies the horseshoe (HS) domain: the orbits that encompass the three equilibria $L_3$, $L_4$ and $L_5$.
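Let us mention that such phase portraits can be reproduced directly as level curves of $\Hb_{e_0}$ computed with the quadrature of the numerical averaging described in section \ref{sec:AP}. A minimal sketch (ours; it is restricted to the circular case, and the function name, the grid size $N$ and the choice $\omega=0$, allowed since $\omega$ is ignorable here, are our own conventions):
\begin{verbatim}
import numpy as np

def averaged_H(theta, u, e0, eps, N=256):
    """Reduced averaged Hamiltonian H_bar_{e0}(theta, u), circular case
    (e' = 0). The average over lambda' is rewritten as an integral over
    the asteroid's eccentric anomaly E, with d lambda = (1 - e cos E) dE
    at fixed theta, and evaluated on the grid E_k = 2 k pi / N."""
    a = (1.0 + u) ** 2                        # u = sqrt(a) - 1
    Gam = 1.0 - np.sqrt(1.0 - e0 ** 2)        # first integral fixed by e0
    e = np.sqrt(1.0 - (1.0 - Gam / (1.0 + u)) ** 2)
    E = 2.0 * np.pi * np.arange(N) / N
    # heliocentric position of the asteroid (omega set to 0)
    x = a * (np.cos(E) - e)
    y = a * np.sqrt(1.0 - e ** 2) * np.sin(E)
    lam_p = (E - e * np.sin(E)) - theta       # lambda' = lambda - theta
    xp, yp = np.cos(lam_p), np.sin(lam_p)     # circular planetary orbit
    H_P = eps * (-1.0 / np.hypot(x - xp, y - yp)
                 + 1.0 / np.hypot(x, y)
                 + (x * xp + y * yp))         # indirect term with |r'| = 1
    H_P_bar = np.mean(H_P * (1.0 - e * np.cos(E)))
    return -0.5 / (1.0 + u) ** 2 - u + H_P_bar
\end{verbatim}
Level curves of this function on a $(\theta,u)$ grid at fixed $e_0$ then yield portraits analogous to those of Fig.\ref{fig:phase}, keeping in mind that the values computed inside the exclusion zone are not reliable.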
\begin{figure}
\begin{center}
\small
\includegraphics[width=0.85\textwidth]{Fig3_Families.eps}
\caption{\small Representation of the co-orbital families of periodic orbits from the three different points of view. From each Lagrangian triangular equilibrium originate two Lyapounov families, corresponding to a periodic orbit family and a fixed point family in the RAP. These families are associated with the long and short period families in the AP and the RF. $L_3$ being of saddle-center type in the RAP, only one Lyapounov family emanates from this equilibrium; it is a fixed point family in the RAP and a periodic orbit family in the AP and RF. Finally, for $e_0>0$, there exists a family of fixed points in the RAP that is not a Lyapounov family: $\GQS$. This family is associated with a periodic orbit family in the RF: the family $f$.}\label{fig:POF}
\end{center}
\end{figure}

While the domain of definition of $\Hb_{e_0}$ excludes the origin $\theta=u=0$ when $e_0=0$, the location of its singularities (associated with the collision with the planet) evolves with the parameter $e_0$. Indeed, as soon as $e_0>0$, the origin becomes a regular point while the set of singular points describes a curve that surrounds the origin. The phase space is now divided into two different domains.\\
For small $e_0$ (for example $e_0=0.25$, represented in Fig.\ref{fig:phase}b), the domain outside the collision curve has the same topology as for $e_0=0$: two stable equilibria close to the locations of $L_4$ and $L_5$, and a separatrix emerging from a hyperbolic fixed point close to $L_3$ that bounds the TP and HS domains. However, contrary to the case $e_0=0$, the fixed points do not correspond to equilibria in the AP and the RF but to periodic orbits of frequencies $g$ and $1-g$, respectively. Consequently, orbits in their vicinity correspond to quasi-periodic orbits. Thus, by varying $e_0$, these fixed points form three one-parameter families that we denote $\GLT$, $\GLQ$ and $\GLC$. In the RF, these are known as the short period families $\LQs$, $\LCs$ and $\LT$, the Lyapounov families associated with the precession, which emanate from $L_4$, $L_5$ and $L_3$ \citep[see][]{MeHa1992}.\\
Inside the collision curve appears a new domain containing orbits that librate around a fixed point with coordinates close to the origin: the QS domain. By varying $e_0$, these fixed points form a one-parameter family characterized by $\theta=0\degre$ that originates from the singular point for $e_0=0$; we denote it $\GQS$. In the RF, these fixed points correspond\footnote{See section \ref{sec:Intp}.} to periodic retrograde satellite orbits of frequency $1-g$. As a consequence, the family $\GQS$ is related to the family $f$, which is\footnote{See section \ref{sec:RF} for further details on the family $f$.} the one-parameter family of simple-periodic symmetrical retrograde satellite orbits.\\
Thus, for small eccentricities, the TP, HS and QS domains are structured around two periodic orbit families ($\NLQ$ and $\NLC$) and four fixed point families ($\GLT$, $\GLQ$, $\GLC$ and $\GQS$), which we outline in Fig.\ref{fig:POF} to clarify their representations in the different approaches.
\begin{figure}
\begin{center}
\small
\includegraphics[width=0.5\textwidth]{Fig4_Bifurcation.eps}
\caption{\small Representation of the result of \cite{DeJaPa1967} in the RF: the merge of the short period families $\LQs$ and $\LCs$ with $\LT$, and the bifurcation of the latter, which becomes stable.}\label{fig:bif}
\end{center}
\end{figure}

For higher values of $e_0$ (see Fig.\ref{fig:phase}c, d, e and f), the topology of the phase portraits does not change inside the collision curve: the QS domain is always present, but its size increases until it dominates the phase portrait for high eccentricity values. Outside the collision curve, the situation is different. As $e_0$ increases, the two stable equilibria get closer to the hyperbolic fixed point, which implies that the TP domains shrink and vanish when the three equilibria merge. This bifurcation generates a new domain inside of which the orbits librate around the fixed point close to $(\theta,u)=(180\degre,0)$ (see Fig.\ref{fig:phase}f). A similar result was found by \cite{DeJaPa1967} for an Earth-Moon like system in the circular case ($\eps = 1/81$). In the RF, the authors showed that the short period families $\LQs$ and $\LCs$ terminate on a periodic orbit of $\LT$ (see the outline in Fig.\ref{fig:bif}).

Now, let us focus on the QS domain. As mentioned above, there exists an exclusion zone in the vicinity of the collision curve such that the QS orbits there do not represent ``real'' trajectories of the initial problem. For high eccentricities, the QS domain dominates the phase portraits; the size of the intersection between the QS domain and the exclusion zone is small relative to the whole domain.
However, by decreasing $e_0$, the QS domain shrinks with the collision curve. As a consequence, the relative size of the intersection increases until a critical value of $e_0$ below which the exclusion zone contains all the QS orbits. In this case, the AP and a fortiori the RAP are not relevant to study the QS motion.\\
A simple way to estimate a validity limit of these two approaches is to consider that the whole QS domain is excluded if and only if $\GQS$ is inside the exclusion zone. Thus the study of the fixed point family $\GQS$ allows us to determine the eccentricity value below which the averaging method cannot be applied to QS motion.

\subsection{Fixed point families of the RAP}
\label{sec:FOP}

For a given value of $e_0$, let us consider a fixed point of the RAP, denoted $(\theta_0,u_0)$, such that
\begin{equation} \dron{}{\theta}\Hb_{e_0}(\theta_0,u_0) = 0 \qtext{and} \dron{}{u}\Hb_{e_0}(\theta_0,u_0) = 0.\end{equation}
The linear stability of this fixed point is deduced from the eigenvalues of the matrix\footnote{In practice, the matrix $\cM(\theta_0,u_0)$ is provided by a numerical differentiation of the equations of motion at the fixed point $(\theta_0,u_0)$.}
\begin{equation}
\cM :=\begin{pmatrix}
\dronss{\Hb_{e_0}}{\theta}{u} & \drons{\Hb_{e_0}}{u}\\
-\drons{\Hb_{e_0}}{\theta} & -\dronss{\Hb_{e_0}}{\theta}{u}
\end{pmatrix}\nonumber
\end{equation}
that comes from the variational equations
\begin{equation}
\begin{pmatrix}
\dot{\theta}\\\dot{u}
\end{pmatrix}
=
\cM(\theta_0,u_0)\begin{pmatrix}
\theta\\u
\end{pmatrix}\nonumber
\end{equation}
associated with the linearization of the equations of motion in the vicinity of $(\theta_0,u_0)$. When this fixed point is elliptic, its eigenvalues are equal to $\pm i\nu$, where the real number $\nu$ is the rotation frequency around the equilibrium.
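As stated in the footnote above, $\cM$ is evaluated by numerical differentiation. A minimal sketch of this computation, assuming a callable \texttt{Hbar(theta, u)} for the reduced averaged Hamiltonian (a hypothetical name), reads:
\begin{verbatim}
# Minimal sketch: linear stability of a fixed point (theta0, u0) of the
# RAP, with the matrix M of the variational equations built from centred
# finite differences of Hbar.
import numpy as np

def second_derivatives(Hbar, th0, u0, h=1e-5):
    Htt = (Hbar(th0 + h, u0) - 2*Hbar(th0, u0) + Hbar(th0 - h, u0))/h**2
    Huu = (Hbar(th0, u0 + h) - 2*Hbar(th0, u0) + Hbar(th0, u0 - h))/h**2
    Htu = (Hbar(th0 + h, u0 + h) - Hbar(th0 + h, u0 - h)
           - Hbar(th0 - h, u0 + h) + Hbar(th0 - h, u0 - h))/(4*h**2)
    return Htt, Huu, Htu

def stability(Hbar, th0, u0):
    Htt, Huu, Htu = second_derivatives(Hbar, th0, u0)
    M = np.array([[ Htu,  Huu],
                  [-Htt, -Htu]])       # matrix of the variational equations
    lam = np.linalg.eigvals(M)
    elliptic = np.allclose(lam.real, 0.0, atol=1e-10)
    nu = np.abs(lam.imag).max() if elliptic else None
    return elliptic, nu
\end{verbatim}
Purely imaginary eigenvalues $\pm i\nu$ signal an elliptic point and yield the libration frequency directly.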
Moreover, the secular precession frequency of its corresponding orbits in the AP is equal to
\begin{equation} g=-\dron{}{\Gamma}\Hb_{e_0}(\theta_0,u_0) .\end{equation}

\begin{figure}
\begin{center}
\small
\includegraphics[width=1.05\textwidth]{Fig5_Frequencies.eps}
\caption{\small Location in $\theta$ (a.) and $u$ (b.), and frequencies $|\nu|$ (c.) and $g$ (d.), of the fixed point families for a Sun-Jupiter like system ($\eps = 10^{-3}$). $\GLQ$ and $\GLC$ (blue curves) merge with $\GLT$ (red curve), which gives rise to a stable family of fixed points (green curve). The AP is relevant for QS motion when $\GQS$ (sky blue curve) is a continuous curve. On each family there are particular orbits without precession, which correspond to degenerate fixed points of the AP.}\label{fig:CIF}
\end{center}
\end{figure}

The evolution of the location and of the frequencies of the orbits associated with the families $\GQS$, $\GLT$, $\GLQ$ and $\GLC$ versus $e_0$ is described in Fig.\ref{fig:CIF} for a mass ratio equal to $\eps = 10^{-3}$ (a Sun-Jupiter like system). The red curve close to $(\theta,u) = (180\degre,0)$ represents the family $\GLT$, while the two blue curves that start at $L_4$ and $L_5$ correspond to $\GLQ$ and $\GLC$. As described in section \ref{sec:Phase}, by increasing $e_0$ these last two families merge with $\GLT$ for $e_0 \simeq 0.917$ (vertical dashed line). Above this critical value, the remaining family becomes stable (green curves in Fig.\ref{fig:CIF}).\\
The sky blue curve located near $(\theta,u) = (0\degre,0)$ represents the family $\GQS$. Along this family, for $0.4\leq e_0<1$, the frequencies $|\nu|$ and $|g|$ are of the same order as those of the TP equilibria, but the sign of $g$ is different. Then, by decreasing $e_0$, the moduli of the frequencies increase and tend to infinity. When the frequencies reach values of the same order as, or higher than, the fast frequency, $\GQS$ enters the exclusion zone and the averaged problem no longer describes accurately the quasi-satellite motion.\\
In order to estimate an eccentricity range where the averaged problem is adapted to QS motion, we consider that $\GQS$ is outside the exclusion zone when $\vert g\vert$ and $\vert\nu\vert$ are lower than $1/4$. Fig.\ref{fig:CIF} shows that this threshold corresponds to $e_0=0.18$ (vertical dashed line).
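In practice, this threshold can be located by sampling the frequencies along $\GQS$ and interpolating. A minimal sketch, assuming arrays \texttt{e0s}, \texttt{gs} and \texttt{nus} sampled along the family (hypothetical names), with \texttt{e0s} increasing and $\max(\vert g\vert,\vert\nu\vert)$ decreasing:
\begin{verbatim}
# Minimal sketch: eccentricity below which GQS enters the exclusion zone,
# defined here by max(|g|, |nu|) exceeding the bound 1/4.
import numpy as np

def validity_threshold(e0s, gs, nus, bound=0.25):
    crit = np.maximum(np.abs(gs), np.abs(nus))
    ok = crit < bound
    if ok[0]:
        return e0s[0]     # the whole sampled family satisfies the criterion
    if not ok.any():
        return None       # the whole sampled family is excluded
    k = np.argmax(ok)     # first sample satisfying the criterion
    x0, x1, y0, y1 = e0s[k-1], e0s[k], crit[k-1], crit[k]
    return x0 + (bound - y0)*(x1 - x0)/(y1 - y0)
\end{verbatim}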
Therefore, the AP and the RAP are relevant to study $\GQS$, and thus the QS motion, for $e_0\geq 0.18$ in the Sun-Jupiter system.

Now, we focus on the variations of $g$ along each family of fixed points. For each of them, the frequency is monotonic and crosses zero at a critical value of the eccentricity: $e_0 \simeq 0.8352$ for $\GQS$, $e_0 \simeq 0.8695$ for $\GLQ$ and $\GLC$, and $e_0 \simeq 0.9775$ for $\GLT$. According to section \ref{sec:Intp}, these particular trajectories in the RAP correspond to circles of fixed points in the AP, and to $2\pi$-periodic orbits in the RF, i.e. frozen ellipses in the heliocentric frame. We denote them $\GdQS$, $\GdLQ$, $\GdLC$ and $\GdLT$.

\begin{figure}
\begin{center}
\small
\includegraphics[width=0.7\textwidth]{Fig6_Orbits.eps}
\caption{\small Periodic orbits in the rotating frame associated with stable orbits of $\GLQ$, $\GLC$, $\GLT$ and $\GQS$ for $e_0= 0.25$ (a.), $0.5$ (b.), $0.75$ (c.) and $0.95$ (d.) (see Fig.\ref{fig:phase}b, c, d and f). The blue curves are associated with $\LQs$; the sky blue curve with the family $f$; and the green curve corresponds to $\LT$ after the bifurcation.}\label{fig:RF}
\end{center}
\end{figure}

To conclude this section, we connect the fixed point families in the RAP to the corresponding trajectories in the RF. Outside the exclusion zone, the transformation of these families by $\Phi\circ\widehat{\psi}\circ\cC\circ\psi_{e_0}^{-1}$ provides a first-order approximation of their initial conditions in the RF. Therefore, after improving these initial conditions with an iterative algorithm that removes the frequency $\nu$ \citep{CoLaCo2010}, we integrated the corresponding trajectories in the RF. Examples of stable trajectories are represented in Fig.\ref{fig:RF} for several values of $e_0$. \\
For a Sun-Jupiter like system, the families $\GLQ$ and $\GLC$ provide the entire short period families, from their respective equilibria to their merge with $\GLT$ and its collision orbit with the Sun. On the contrary, $\GQS$ provides only a part of the family $f$, from the collision with the Sun to the orbit with an eccentricity $e\simeq 0.18$. Figure \ref{fig:RF} shows that, as $e_0$ increases, the size of the periodic trajectories in the RF increases. As expected, the libration center of the family $f$ is located close to the planet, while those of $\LQs$ and $\LCs$ shift from $L_4$ and $L_5$ towards $L_3$, where they merge with that of $\LT$. After the bifurcation, only trajectories of the $f$ and $\LT$ families remain.

\section{Quasi-satellite's domains in the rotating frame with the planet}
\label{sec:RF}

\subsection{The family $f$ in the RF}

The RAP seems to be the most adapted approach to understand the co-orbital motion in the circular case. However, the averaged approaches have the drawback of being poorly significant in the exclusion zone that surrounds the singularity associated with the collision with the planet. For the QS motion, we showed in section \ref{sec:CC} that the whole domain cannot be reached by low-eccentricity orbits, that is, when the trajectories get close to the planet. As a consequence, to understand the QS dynamics near the planet and to connect our results in the averaged approaches, we chose to revisit the classical works in the RF \citep{HeGu1970, Be1975} on the simple-periodic symmetrical family of retrograde satellite orbits, generally known as the family $f$.
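The computations of this section rely on integrating the equations of motion in the rotating frame, which are recalled below. As an illustration, a minimal sketch of the vector field and of the Jacobi constant in the standard barycentric rotating frame (the planet-centred coordinates used in the text differ only by the constant shift $X \mapsto X + 1 - \eps$ along the Sun-planet axis) could read:
\begin{verbatim}
# Minimal sketch: restricted three-body vector field in the barycentric
# rotating frame, with the Sun of mass 1-eps at (-eps, 0) and the planet
# of mass eps at (1-eps, 0); the state is s = (X, Y, Xdot, Ydot).
import numpy as np

eps = 0.001

def omega_grad(X, Y):
    r1 = np.hypot(X + eps, Y)          # distance to the Sun
    r2 = np.hypot(X - 1 + eps, Y)      # distance to the planet
    Ox = X - (1 - eps)*(X + eps)/r1**3 - eps*(X - 1 + eps)/r2**3
    Oy = Y - (1 - eps)*Y/r1**3 - eps*Y/r2**3
    return Ox, Oy

def rhs(t, s):
    X, Y, Xd, Yd = s
    Ox, Oy = omega_grad(X, Y)
    return [Xd, Yd, 2*Yd + Ox, -2*Xd + Oy]

def jacobi(s):                          # C_J = 2*Omega - v^2, conserved
    X, Y, Xd, Yd = s
    r1 = np.hypot(X + eps, Y); r2 = np.hypot(X - 1 + eps, Y)
    return X**2 + Y**2 + 2*(1 - eps)/r1 + 2*eps/r2 - Xd**2 - Yd**2
\end{verbatim}
The conservation of \texttt{jacobi(s)} along a numerically integrated trajectory provides a convenient accuracy check.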
\begin{figure}
\begin{center}
\small
\includegraphics[width=0.52\textwidth]{Fig7_RS_trajectory.eps}
\caption{\small Representation of retrograde satellite trajectories in the RF. Red curve: simple-periodic symmetrical retrograde satellite trajectory that belongs to the family $f$. Black curve: trajectory in the neighbourhood of the previous one that intersects the Poincar\'e section (black circles).}
\label{fig:Sec}
\end{center}
\end{figure}

In the RF with the planet on a circular orbit, the problem has two degrees of freedom, which we represent by the position $\br = (X,Y)$ and the velocity $\brd=(\dot{X}, \dot{Y})$ in the frame whose origin is the planet position, whose horizontal axis is the Sun-planet alignment and whose vertical axis is its perpendicular (see Fig.\ref{fig:Sec}). This problem is autonomous and possesses a first integral $C_{J}$, generally known as the Jacobi constant. \\
For a given value of $C_{J}$, a simple-periodic symmetrical retrograde satellite orbit crosses the axis $\{Y = 0\}$ with $\dot{Y}<0$ and $\dot{X} = 0$ when $X>0$. By defining the Poincar\'e map $\Pi_T$ associated with the section $\{Y=0; \dot{Y}<0\}$, where $T$ is the time between two consecutive crossings, the problem can be reduced to one degree of freedom, represented by $(X,\dot{X})$ with $\dot{Y}=\dot{Y}(X,Y,\dot{X},C_{J})$. As a consequence, an orbit of the family $f$ corresponds to a fixed point in this Poincar\'e section, whose coordinates in the RF are $(X,0,0,\dot{Y})$, with
\begin{equation}
T=2\pi/(1-g)\label{eq:T}
\end{equation}
where $g$ is the precession frequency of the periaster argument $\omega$.\\
Moreover, the stability of the fixed point is deduced from the trace of the monodromy matrix $d\Pi_T(X,0)$ evaluated at the fixed point.
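In practice, a fixed point of $\Pi_T$ can be computed by exploiting the symmetry of the orbits: one starts perpendicular to the $\{Y=0\}$ axis and requires the next crossing of this axis to be perpendicular as well, which guarantees periodicity by the mirror symmetry of the problem. A minimal sketch of such a differential correction is given below; it assumes a hypothetical routine \texttt{next\_crossing(state)} returning the state at the next intersection with $\{Y=0\}$ (obtained, for example, with an event-detecting integrator applied to the vector field sketched above), and it is a standard corrector, not necessarily the refinement algorithm used in this work:
\begin{verbatim}
# Minimal sketch: Newton iteration on Ydot so that the next crossing of
# {Y = 0} is also perpendicular (Xdot = 0); by symmetry the resulting
# orbit is periodic. X is kept fixed as the family parameter.
import numpy as np

def refine_Ydot(X, Ydot, next_crossing, tol=1e-12, h=1e-8):
    for _ in range(50):
        s = np.array([X, 0.0, 0.0, Ydot])   # perpendicular departure
        xdot = next_crossing(s)[2]
        if abs(xdot) < tol:
            break
        ds = np.array([0.0, 0.0, 0.0, h])
        dxdot = (next_crossing(s + ds)[2] - xdot)/h
        Ydot -= xdot/dxdot                  # Newton step on Ydot
    return Ydot
\end{verbatim}
The monodromy matrix $d\Pi_T$ can then be estimated by finite differences of the return map around the converged fixed point.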
When the fixed point is stable, the frequency $\nu$ that characterizes the oscillation of the resonant angle $\theta$ is obtained\footnote{Floquet theory; for further details, see \cite{MeHa1992}.} from its two conjugate eigenvalues $(\kappa, \overline{\kappa})$ such that
\begin{equation} \kappa = \exp (i\nu T).\label{eq:nu}\end{equation}

\subsection{Application to a Sun-Jupiter like system}

\begin{figure}
\begin{center}
\small
\includegraphics[width=0.62\textwidth]{Fig8_Familyf.eps}
\caption{\small The family $f$ in the $(X,\dot{Y})$ plane (red curve) and its reachable part in the AP via $\GQS$ (sky blue curve). The two blue crosses indicate the particular orbits (whose fundamental frequencies are in $1:3$ resonance) that split the neighbourhood of the family $f$. The blue square indicates the collision orbit with the Sun, while $\{X = 0\}$ corresponds to the collision with the planet. The grey outline schematizes the three connected domains of the family $f$ neighbourhood.}
\label{fig:f1}
\includegraphics[width=0.62\textwidth]{Fig9_Familyf_Frequencies.eps}
\caption{\small (a.) Zoom of Fig.\ref{fig:f1} on the two particular orbits whose fundamental frequencies are in $1:3$ resonance. (b.) Variation of the frequencies of the system along the family $f$. In contrast to Fig.\ref{fig:CIF}b, $\nu$ does not tend to infinity when the periodic orbits get closer to the planet ($\{X=0\}$), but increases and tends to $1$. The $1:3$ resonance splits the neighbourhood into three domains neatly defined in terms of frequencies: ``satellized'' retrograde satellite (sRS), binary quasi-satellite (\QSb) and heliocentric quasi-satellite (\QSh).}\label{fig:f2}
\end{center}
\end{figure}

Figures \ref{fig:f1} and \ref{fig:f2}a represent the family $f$ in the $(X,\dot{Y})$ plane (red curve) and its reachable part in the averaged approaches (sky blue curve). Fig.\ref{fig:f1} shows that the family $f$ extends from the orbits in an infinitesimal neighbourhood of the planet ($X\simeq 0$) to the collision orbit with the Sun. Although the whole family is linearly stable, we cannot predict the size of the stable region surrounding it. Indeed, this domain depends strongly on the position of the resonances between the fundamental frequencies $1-g$ and $\nu$, which are themselves conditioned by the value of $X$. This is what occurs at two particular orbits of the family $f$ (blue crosses and dashed lines), where the diameter of the stability domain tends to zero. Consequently, these two orbits divide the neighbourhood of the family $f$ into three connected domains, which we outline in grey in Fig.\ref{fig:f1} and Fig.\ref{fig:f2}a.

Figure \ref{fig:f2}b exhibits the variations of the frequencies\footnote{In practice, the numerical algorithm of the Poincar\'e map provides $g$ via the equation \eqref{eq:T}, while the frequency $\nu$ is obtained from the monodromy matrix $d\Pi_T$ (see equation \eqref{eq:nu}), which is computed by numerical differentiation of the Poincar\'e map.} $\nu$ and $1-g$. Compared to Fig.\ref{fig:CIF}b, we remark that $\nu$ does not tend to infinity when the periodic orbits get closer to the planet, but increases and tends to $1$. Likewise, Fig.\ref{fig:f2}b highlights that the resonance between the frequencies of the system is $\nu/(1-g) = 1/3$ and that the three domains are neatly defined in terms of frequencies as follows (these conditions are transcribed in the short classification sketch below):
\begin{align}
\mbox{sRS}:\left\{\begin{array}{l}
3\nu < 1-g\\
|g|>1
\end{array}\right., \quad
&\mbox{\QSb}:\quad 3\nu > 1-g
\qtext{and}
&\mbox{\QSh}:\left\{\begin{array}{l}
3\nu < 1-g\\
|g|<1
\end{array}\right.\mbox{.}\nonumber
\end{align}

The closest domain to the planet corresponds to the ``satellized'' retrograde satellite orbits (sRS). Indeed, as the upper bound of this domain matches the $L_2$ position, we recover the notion of the Hill sphere in the context of retrograde satellite trajectories. Hence, this domain consists of trajectories dominated by the gravitational influence of the planet, whereas the star acts as a perturber. Therefore, the planetocentric osculating ellipses are the most relevant variables to represent the motion, and perturbative treatments are possible.\\
The domain outside the Hill sphere corresponds to the QS, which is divided into two other domains. \\
The \QSh domain, that of the heliocentric QS, is the farthest from the planet, which implies that this body acts as a perturber whereas the influence of the star dominates the dynamics. Therefore, the heliocentric orbital elements are well suited to the problem, and the perturbative treatment as well as the averaging over the fast angle are natural. As a consequence, it is the \QSh trajectories that are reachable in the averaged problem.
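As announced above, the frequency conditions translate directly into a classification of a given orbit in the neighbourhood of the family $f$. A minimal transcription (the function name is ours; the frequencies are expressed in units of the planet's mean motion):
\begin{verbatim}
# Minimal sketch: classification of the family-f neighbourhood from the
# two fundamental frequencies nu and 1 - g.
def classify(nu, g):
    if 3*nu > 1 - g:
        return "QSb"                       # binary quasi-satellite
    return "sRS" if abs(g) > 1 else "QSh"  # satellized RS / heliocentric QS
\end{verbatim}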
As the orbits of the family $f$ included in the \QSh domain cross the Poincar\'e section at their aphelion, the $X$ coordinate is related to $e_0$ by the expression
\begin{equation}
X= e = e_0 + \oue .\label{eq:Xe0}
\end{equation}
The third domain, which we call the binary QS domain (\QSb), is intermediate between the sRS and \QSh ones. In this region, neither of the two massive bodies has a dominant influence on the massless one. As a consequence, the frequency $g$ can be of the same order as, or even equal to, $1$, making any averaging method inappropriate.

Remark that in the planetary problem, \cite{HaVo2011} highlight a family of periodic orbits that corresponds to the family $f$. Indeed, along this family, which ranges from orbits for which the two planets collide with the star to orbits where the two planets are mutually satellized, all trajectories are stable and satisfy $\theta = 0\degre$. These authors also decomposed the family into three domains, denoted $A$, $B$ and planetary, which seem to correspond to our sRS, \QSb and \QSh domains.

\subsection{Extension to arbitrary mass ratio}

\begin{figure}
\begin{center}
\small
\includegraphics[width=0.8\textwidth]{Fig10_Familyf_Evo.eps}
\caption{\small Evolution of the sRS, \QSb and \QSh boundaries along the family $f$ as the mass ratio $\eps$ varies. For small $\eps$, the \QSh domain dominates the family $f$, implying that the AP and the RAP are fully adapted to the study of the QS motion. As $\eps$ increases, the size of the part associated with the sRS and \QSb trajectories increases, making the orbits with small eccentricities unreachable in the averaged problem. Finally, for $\eps>0.01$, the sRS and \QSb domains become dominant while the \QSh one shrinks so much that the averaged problem becomes useless for all values of $e_0$.}\label{fig:map}
\end{center}
\end{figure}

By varying the mass ratio $\eps$, we follow the evolution of the boundaries of the three domains along the family $f$, as well as the validity limit of the averaged problem. In Fig.\ref{fig:map}, the parameter $\eps$ ranges from $10^{-7}$ to $0.0477$, which is the critical mass ratio where a part of the family $f$ becomes unstable \citep[see][]{HeGu1970}. For Sun-terrestrial planet systems, the size of the \QSb and sRS domains is negligible with respect to the \QSh one. As a consequence, for these systems, the AP and RAP are fully adapted to describe the main part of the family $f$ and its neighbourhood (except for very small eccentricities). For Sun-giant planet systems as well as for the Earth-Moon system, the gravitational influence of the planet being stronger, the size of the \QSb domain increases until it is of the same order as that of the sRS domain, while the size of the \QSh domain decreases. Using equation \eqref{eq:Xe0}, we established that for the Sun-Uranus, Sun-Saturn, Sun-Jupiter and Earth-Moon systems, the \QSh orbits are reachable in the averaged problem for $e_0$ greater than $0.08$, $0.13$, $0.18$ and $0.5$, respectively. Then, by increasing $\eps$, the \QSb domain becomes dominant while the \QSh one shrinks so much that the averaged problem becomes useless for all values of $e_0$ ($\eps\simeq 0.04$). Consequently, for the Pluto-Charon system ($\eps\simeq 1/10$), no \QSh trajectory can be described in the averaged approaches. Moreover, according to the stability map of the family $f$ in \cite{Be1975}, this system cannot harbour a \QSh companion: only \QSb and sRS trajectories exist for this value of the mass ratio.
\section{On the frozen ellipses: an extension to the eccentric case ($e'\geq 0$)}
\label{sec:EC}

An important result of our study in the circular case has been to highlight the particular orbits $\GdQS$, $\GdLT$, $\GdLQ$ and $\GdLC$, which correspond\footnote{See the figure \ref{fig:CIF}.} to circles of fixed points in the averaged problem and therefore to frozen ellipses in the heliocentric frame. A natural question is whether these structures are preserved when a small eccentricity is given to the planetary orbit.

This question can be addressed in a perturbative way. Indeed, for sufficiently low values of the planet's eccentricity, the Hamiltonian of the problem reads $\Hb|_{e'=0} + e'R$, i.e. the Hamiltonian of the circular case perturbed by the first-order term in the planetary eccentricity. However, as $\omega=\arg(x)$ is no longer an ignorable variable in this Hamiltonian, the dimension of the phase space cannot be reduced as in section \ref{sec:CC}, and the persistence of the set of degenerate fixed points is not necessarily guaranteed. In the present paper, we limit our approach to numerical explorations of the phase space associated with $\Hb|_{e'\geq 0}$.

For a very low value of $e'$ in a Sun-Jupiter like system, numerically solving the equilibrium equations of the averaged problem,
\begin{equation}\left\{
\begin{split}
\dron{}{\theta}\Hb(\theta,u,-i\xb, x) = 0\\
\dron{}{u}\Hb(\theta,u,-i\xb, x) = 0\\
\dron{}{x}\Hb(\theta,u,-i\xb, x) = 0
\end{split}\right. ,
\end{equation}
shows that, although each circle of fixed points is destroyed, two fixed points survive the perturbation. One is stable and the other unstable. We denote these fixed points $G_{X,1}^{e'}$ and $G_{X,2}^{e'}$, with $X$ corresponding to QS, $L_4$, $L_5$ and $L_3$. By varying $e'$, we followed them and obtained families of fixed points of the averaged problem that originate from $\GdLT$, $\GdLQ$, $\GdLC$ and $\GdQS$. For a given $e'$ in the AP, the linear dynamics in the vicinity of a fixed point is given by two pairs of eigenvalues: $\pm\mu$ or $\pm i\nu$, and $\pm f$ or $\pm ig$, where $\mu$, $f$, $\nu$ and $g$ are real. If these eigenvalues are all imaginary, then they characterize an elliptic fixed point with libration and secular precession frequencies $\nu$ and $g$. Otherwise, the fixed point is unstable. Thus, we also characterized the stability variations of these families of fixed points by varying $e'$.\\
Their initial conditions and the moduli of the real and imaginary parts of the eigenvalues versus $e'$ are plotted in Figs.\ref{fig:FPQS}, \ref{fig:FPL3} and \ref{fig:FPL4}.
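In practice, the system above is solved numerically. A minimal sketch, assuming a callable \texttt{Hbar(theta, u, e, omega)} for the averaged Hamiltonian (a hypothetical name; we use the real pair $(e,\omega)$ instead of the complex variables $(x,-i\xb)$), searches for a critical point of $\Hb$ with a standard root finder:
\begin{verbatim}
# Minimal sketch: fixed points of the averaged problem as critical points
# of Hbar, located by solving grad Hbar = 0 with fsolve.
import numpy as np
from scipy.optimize import fsolve

def grad_Hbar(q, Hbar, h=1e-6):
    # centred finite-difference gradient at q = (theta, u, e, omega)
    g = np.empty(4)
    for i in range(4):
        dq = np.zeros(4); dq[i] = h
        g[i] = (Hbar(*(q + dq)) - Hbar(*(q - dq)))/(2.0*h)
    return g

def fixed_point(q0, Hbar):
    return fsolve(lambda q: grad_Hbar(q, Hbar), q0)
\end{verbatim}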
\begin{figure} \begin{center} \small \def0.85\textwidth{0.85\textwidth} \begingroup% \makeatletter% \providecommand\color[2][]{% \errmessage{(Inkscape) Color is used for the text in Inkscape, but the package 'color.sty' is not loaded}% \renewcommand\color[2][]{}% }% \providecommand\transparent[1]{% \errmessage{(Inkscape) Transparency is used (non-zero) for the text in Inkscape, but the package 'transparent.sty' is not loaded}% \renewcommand\transparent[1]{}% }% \providecommand\rotatebox[2]{#2}% \ifx0.85\textwidth\undefined% \setlength{\unitlength}{699.99995117bp}% \ifx\svgscale\undefined% \relax% \else% \setlength{\unitlength}{\unitlength * \real{\svgscale}}% \fi% \else% \setlength{\unitlength}{0.85\textwidth}% \fi% \global\let0.85\textwidth\undefined% \global\let\svgscale\undefined% \makeatother% \begin{picture}(1,0.73330313)% \put(0,0){\includegraphics[width=\unitlength]{Fig11_GQS.eps}}% \put(0.13324807,0.64010945){\makebox(0,0)[lb]{\smash{0}}}% \put(0.09334182,0.67610834){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{1e-10}}}% \put(0.13324805,0.42639401){\makebox(0,0)[lb]{\smash{0}}}% \put(0.09825925,0.51625451){\makebox(0,0)[lb]{\smash{0.001}}}% \put(0.12511078,0.2435401){\makebox(0,0)[lb]{\smash{0.5}}}% \put(0.12511078,0.30346763){\makebox(0,0)[lb]{\smash{0.7}}}% \put(0.1273965,0.36339509){\makebox(0,0)[lb]{\smash{0.9}}}% \put(0.25396054,0.33970541){\makebox(0,0)[lb]{\smash{$0.565$}}}% \put(0.13553377,0.10567965){\makebox(0,0)[lb]{\smash{0}}}% \put(0.1178251,0.18304123){\makebox(0,0)[lb]{\smash{180}}}% \put(0.08191325,0.60753664){\makebox(0,0)[lb]{\smash{-1e-10}}}% \put(1.03833511,0.68419386){\makebox(0,0)[lb]{\smash{}}}% \put(0.15761856,0.03923992){\makebox(0,0)[lb]{\smash{0}}}% \put(0.22598325,0.03923991){\makebox(0,0)[lb]{\smash{0.2}}}% \put(0.30291938,0.03923991){\makebox(0,0)[lb]{\smash{0.4}}}% \put(0.37995209,0.03923991){\makebox(0,0)[lb]{\smash{0.6}}}% \put(0.4550706,0.03923991){\makebox(0,0)[lb]{\smash{0.8}}}% \put(1.36154127,0.27933396){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\begin{minipage}{0.28218421\unitlength}\raggedright \end{minipage}}}% \put(0.29665837,0.00267315){\makebox(0,0)[lb]{\smash{$e'$}}}% \put(0.56439382,0.46216946){\makebox(0,0)[lb]{\smash{0.04}}}% \put(0.56439382,0.52859813){\makebox(0,0)[lb]{\smash{0.08}}}% \put(0.62043416,0.19458238){\makebox(0,0)[lb]{\smash{0}}}% \put(0.67960337,0.19458238){\makebox(0,0)[lb]{\smash{0.2}}}% \put(0.75286659,0.19458238){\makebox(0,0)[lb]{\smash{0.4}}}% \put(0.82605195,0.19458238){\makebox(0,0)[lb]{\smash{0.6}}}% \put(0.89932153,0.19458238){\makebox(0,0)[lb]{\smash{0.8}}}% \put(0.64563656,-0.25541861){\makebox(0,0)[lb]{\smash{}}}% \put(0.68277109,0.32705256){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$|g|$}}}% \put(0.68518721,0.23982497){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$|f|$}}}% \put(0.55354233,0.26602654){\makebox(0,0)[lb]{\smash{2e-02}}}% \put(0.55354233,0.31516717){\makebox(0,0)[lb]{\smash{4e-04}}}% \put(0.55354233,0.36431338){\makebox(0,0)[lb]{\smash{6e-04}}}% \put(0.59896248,0.21948864){\makebox(0,0)[lb]{\smash{0}}}% \put(0.60102058,0.40048382){\makebox(0,0)[lb]{\smash{0}}}% \put(0.6788059,0.48332936){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$|\nu|$}}}% \put(0.67907777,0.41867705){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$|\mu|$}}}% \put(0.752021,0.16263602){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$e'$}}}% \put(0.75957746,0.62963153){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$\GdQSs$}}}% \put(0.03023607,-0.26468991){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{}}}% 
\put(1.42151651,0.61728121){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{}}}% \put(0.01227473,0.60813832){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{}}}% \put(0.0099887,0.44871993){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$u$}}}% \put(0.0099887,0.30126125){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$e$}}}% \put(0.18980308,0.50788522){\makebox(0,0)[lb]{\smash{$\GdQSu$}}}% \put(0.3231338,0.4452602){\makebox(0,0)[lb]{\smash{$\GdQSs$}}}% \put(0.41771786,0.29841087){\rotatebox{38.15209713}{\makebox(0,0)[lb]{\smash{$e=e'$}}}}% \put(0.38123178,0.69485446){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{}}}% \put(0.0099887,0.64013844){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$\theta~(\degre)$}}}% \put(0.01227473,0.13042267){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{}}}% \put(0.0099887,0.14592195){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$\omega~(\degre)$}}}% \put(0.1853162,0.20650571){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{d.}}}% \put(0.64602004,0.58366627){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{e.}}}% \put(0.1853162,0.37107765){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{c.}}}% \put(0.1853162,0.54250686){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{b.}}}% \put(0.1853162,0.69107887){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{a.}}}% \put(0.64602004,0.37109402){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{f.}}}% \end{picture}% \endgroup% \caption{\small a, b, c and d: orbital elements of the families of fixed points $\GdQSu$ and $\GdQSs$ versus $e'$. e and f: variations of the moduli of the real and imaginary part of the eigenvalues of the Hessian matrix along $\GdQSs$. $\GdQSs$ describe a configuration of two ellipses with opposite periaster and $\theta=0\degre$. The fixed points are stable until $e'<0.8$. Moreover, this family possesses a particular orbit where $e=e'\simeq 0.565$. 
On the contrary, the family $\GdQSu$ is unstable and describe a configuration of two ellipses with aligned periaster and $\theta=0\degre$.}\label{fig:FPQS} \end{center} \end{figure} \begin{figure} \begin{center} \small \def0.85\textwidth{0.85\textwidth} \begingroup% \makeatletter% \providecommand\color[2][]{% \errmessage{(Inkscape) Color is used for the text in Inkscape, but the package 'color.sty' is not loaded}% \renewcommand\color[2][]{}% }% \providecommand\transparent[1]{% \errmessage{(Inkscape) Transparency is used (non-zero) for the text in Inkscape, but the package 'transparent.sty' is not loaded}% \renewcommand\transparent[1]{}% }% \providecommand\rotatebox[2]{#2}% \ifx0.85\textwidth\undefined% \setlength{\unitlength}{699.99995117bp}% \ifx\svgscale\undefined% \relax% \else% \setlength{\unitlength}{\unitlength * \real{\svgscale}}% \fi% \else% \setlength{\unitlength}{0.85\textwidth}% \fi% \global\let0.85\textwidth\undefined% \global\let\svgscale\undefined% \makeatother% \begin{picture}(1,0.72907706)% \put(0,0){\includegraphics[width=\unitlength]{Fig12_GL3.eps}}% \put(0.14691418,0.64305703){\makebox(0,0)[lb]{\smash{0}}}% \put(0.09329375,0.67905594){\makebox(0,0)[lb]{\smash{1e-10}}}% \put(1.03833512,1.40648129){\makebox(0,0)[lb]{\smash{}}}% \put(0.06455831,0.44948784){\makebox(0,0)[lb]{\smash{-7.5e-04}}}% \put(0.06540983,0.50334387){\makebox(0,0)[lb]{\smash{-7.0e-04}}}% \put(0.12734831,0.25662959){\makebox(0,0)[lb]{\smash{0.5}}}% \put(0.12734831,0.31148442){\makebox(0,0)[lb]{\smash{0.7}}}% \put(0.12506264,0.36627241){\makebox(0,0)[lb]{\smash{0.9}}}% \put(0.13548561,0.10405586){\makebox(0,0)[lb]{\smash{0}}}% \put(0.11777696,0.18370317){\makebox(0,0)[lb]{\smash{180}}}% \put(0.07957941,0.60134139){\makebox(0,0)[lb]{\smash{-1e-10}}}% \put(0.16353257,0.04063414){\makebox(0,0)[lb]{\smash{0}}}% \put(0.22863353,0.04062984){\makebox(0,0)[lb]{\smash{0.2}}}% \put(0.30548212,0.04062984){\makebox(0,0)[lb]{\smash{0.4}}}% \put(0.38242746,0.04062984){\makebox(0,0)[lb]{\smash{0.6}}}% \put(0.45745988,0.04062984){\makebox(0,0)[lb]{\smash{0.8}}}% \put(1.36154128,1.00162139){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\begin{minipage}{0.28218421\unitlength}\raggedright \end{minipage}}}% \put(0.30548212,0.00406237){\makebox(0,0)[lb]{\smash{$e'$}}}% \put(0.19101435,0.65688437){\color[rgb]{0,0,0}\rotatebox{0.37200928}{\makebox(0,0)[lb]{\smash{+180}}}}% \put(0.64563658,0.46686882){\makebox(0,0)[lb]{\smash{}}}% \put(0.56591071,0.47966255){\makebox(0,0)[lb]{\smash{0.04}}}% \put(0.56591071,0.56258666){\makebox(0,0)[lb]{\smash{0.08}}}% \put(0.55505918,0.2889471){\makebox(0,0)[lb]{\smash{4e-04}}}% \put(0.55505918,0.35544486){\makebox(0,0)[lb]{\smash{8e-04}}}% \put(0.59848777,0.22030316){\makebox(0,0)[lb]{\smash{0}}}% \put(0.59807669,0.4009665){\makebox(0,0)[lb]{\smash{0}}}% \put(0.62199655,0.1957577){\makebox(0,0)[lb]{\smash{0}}}% \put(0.68090768,0.19574187){\makebox(0,0)[lb]{\smash{0.2}}}% \put(0.75649367,0.19574187){\makebox(0,0)[lb]{\smash{0.4}}}% \put(0.83199931,0.19574187){\makebox(0,0)[lb]{\smash{0.6}}}% \put(0.90759184,0.19574187){\makebox(0,0)[lb]{\smash{0.8}}}% \put(0.75562145,0.16379677){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$e'$}}}% \put(0.76027911,0.63081726){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$\GdLTs$}}}% \put(0.03023608,0.45759751){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{}}}% \put(1.42151652,1.33956864){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{}}}% \put(0.33116892,0.32282687){\makebox(0,0)[lb]{\smash{$0.73$}}}% \put(0.24734016,0.42166593){\makebox(0,0)[lb]{\smash{$\GdLTu$}}}% 
\put(0.20369423,0.48779382){\makebox(0,0)[lb]{\smash{$\GdLTs$}}}% \put(0.33412174,0.248689){\rotatebox{38.18524256}{\makebox(0,0)[lb]{\smash{$e=e'$}}}}% \put(0.38123179,1.41714188){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{}}}% \put(0.77716022,0.24448478){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$|g|$}}}% \put(0.74288075,0.30267063){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$|f|$}}}% \put(0.65594886,0.49475936){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$|\nu|$}}}% \put(0.65622073,0.41639238){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$|\mu|$}}}% \put(0.01227476,0.60722011){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{}}}% \put(0.01456018,0.44323029){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$u$}}}% \put(0.01456018,0.30034336){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$e$}}}% \put(0.01456018,0.63464883){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$\theta~(\degre)$}}}% \put(0.01227476,0.12950508){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{}}}% \put(0.01456018,0.14500426){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$\omega~(\degre)$}}}% \put(0.10742923,0.66057307){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{}}}% \put(0.18927895,0.69495931){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{a.}}}% \put(0.58521013,0.1794866){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{b.}}}% \put(0.18927895,0.54181629){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{b.}}}% \put(0.19156467,0.37038705){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{c.}}}% \put(0.18927895,0.20124354){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{d.}}}% \put(0.6445721,0.58057303){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{e.}}}% \put(0.6445721,0.37028652){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{f.}}}% \end{picture}% \endgroup% \caption{\small a, b, c and d: orbital elements of the families of fixed points $\GdLTu$ and $\GdLTs$ versus $e'$. e and f: variations of the moduli of the real and imaginary part of the eigenvalues of the Hessian matrix along $\GdLTs$. $\GdLTs$ describe a configuration of two ellipses with aligned periaster and $\theta=180\degre$. The fixed points are stable for $ 0 \leq e'\leq 0.15$. Moreover, this family possesses a particular orbit with $e'=e\simeq 0.73$ where the planet and the particle share the same ellipses. On the contrary, $\GdLTu$ is unstable and describe a configuration of two ellipses with opposite periaster and $\theta=180\degre$. 
}\label{fig:FPL3}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\small
\includegraphics[width=0.85\textwidth]{Fig13_GL4.eps}
\end{picture}%
\endgroup%
\caption{\small a, b, c and d: orbital elements of the families of fixed points $\GdLQu$ and $\GdLQs$ versus $e'$. e and f: variations of the moduli of the real and imaginary parts of the eigenvalues of the Hessian matrix along $\GdLQs$. The whole family $\GdLQs$ is stable whereas $\GdLQu$ is unstable.}\label{fig:FPL4}
\end{center}
\end{figure}

According to the figures \ref{fig:FPQS}, \ref{fig:FPL3} and \ref{fig:FPL4}, we find eight families of fixed points in the averaged problem that correspond to frozen ellipses in the heliocentric frame. For $e'=0$, these equilibria of the averaged problem belong to the sets of degenerate fixed points or ``circles of fixed points'' that exist for $\omega\in{\mathbb T}$ and that we denoted $\GdLT$, $\GdLQ$, $\GdLC$ and $\GdQS$. Among these eight families of fixed points, two are more relevant: $\GdQSs$ and $\GdLTs$. The fixed points of $\GdQSs$ originate from $\GdQS$ and are stable until $e'\simeq 0.8$. They correspond to a configuration of two ellipses with opposite periaster ($\omega = 180\degre$), $\theta=0\degre$ and a very high eccentricity that decreases when $e'$ increases (the slope being close to $\der{e}{e'}= -1/2$). On the contrary, the fixed points of $\GdLTs$ originate from $\GLT$ and are only stable for $0 \leq e'\leq 0.15$. They describe a configuration of two ellipses with aligned periaster ($\omega = 0\degre$), $\theta=180\degre$ and a very high eccentricity that decreases when $e'$ increases.\\ Along these two families, there exists a critical value of $e'$ where the planet and the asteroid ellipses have the same eccentricities. The dashed lines of the figures \ref{fig:FPQS} and \ref{fig:FPL3} show that these particular orbits exist for $e'=e\simeq 0.565$ along $\GdQSs$ and $e'=e\simeq 0.73$ along $\GdLTs$. Let us notice that these two families of configurations have also been highlighted in the planetary problem. Indeed, these two families are certainly related to the stable and unstable families of periodic orbits described in \cite{HaPsVo2009} and \cite{HaVo2011}. As regards $\GdQSs$, it could also be associated with the QS fixed point family in \cite{GiBeMi2010}.
In \cite{GiBeMi2010} as well as in \cite{HaVo2011}, these authors remarked that the configuration described by $\GdQSs$ with two equal eccentricities exists with an eccentricity value close to $0.565$ for several planetary mass ratios. In our study, we establish that this particular orbit also exists in the restricted three-body problem for $e=e'\simeq 0.565$ (see Fig.~\ref{fig:FPQS}). Likewise, according to \cite{HaPsVo2009}, the configuration described by $\GdLTs$ with two equal eccentricities seems to exist in the planetary problem for an eccentricity value close to $0.73$. Consequently, this suggests that these two particular configurations are weakly dependent on the ratio of the planetary masses. Finally, we remark that the existence of some of these eight configurations has already been shown. Indeed, in the range $0.01 \leq e'\leq 0.5$, \cite{NeThFe2002} exhibited stable and unstable QS fixed points. In addition, these authors also found highly eccentric fixed points that correspond to the configurations of $\GdLQs$ and $\GdLQu$.\\ Likewise, \cite{Bi1978} and \cite{Ed1985} highlighted some frozen ellipses in co-orbital motion in the Sun-Jupiter system with $e'=e'_{Jupiter}\simeq 0.048$. The first author found six highly eccentric fixed points, denoted $P_1$, $Q_1$, $P_2$, $Q_2$, $P_3$, $Q_3$, that correspond to $\GdLQs$, $\GdLQu$, $\GdLCs$, $\GdLCu$, $\GdQSs$ and $\GdQSu$. The second found a frozen ellipse in co-orbital resonance with $e = 0.975$ and $\theta=180\degre$, that is, an orbit of $\GdLTs$. \section{Conclusions} In this paper, we clarify the definition of quasi-satellite motion and estimate a validity limit of the averaged approach by revisiting the planar and circular restricted three-body problem. First of all, we focussed on the co-orbital resonance via the averaged problem and showed that the phase portraits of the reduced averaged problem parametrized by $e_0$ allow one to understand its global dynamics. Indeed, they reveal that the tadpole, horseshoe and quasi-satellite domains are structured around four families of fixed points originating from $L_4$, $L_5$ ($\GLQ$ and $\GLC$), $L_3$ ($\GLT$) and the singularity point for $e_0=0$ ($\GQS$). By increasing $e_0$, the quasi-satellite orbits appear inside the domain opened by the collision curve for $e_0>0$ and become dominant for high eccentricities. On the contrary, the tadpole and horseshoe domains shrink and vanish when $\GLQ$ and $\GLC$ get closer and merge with $\GLT$. Moreover, we showed that this remaining family bifurcates and generates a new domain of highly eccentric orbits librating around $(\theta,u)=(180\degre,0)$. \\ However, since the averaged approaches have the drawback of being poorly significant in the exclusion zone, we highlighted that, for sufficiently small eccentricities, the whole quasi-satellite domain is contained inside it, which makes this type of motion not reachable by the averaging process. The study of the evolution of the libration and secular precession frequencies along $\GQS$ allowed us to show that the family $f$, and a fortiori the quasi-satellite domain, are not reachable for $0\leq e_0<0.18$ in a Sun-Jupiter like system.
In order to clarify the terminology to use between ``retrograde satellite'' and ``quasi-satellite'' when these orbits correspond to trajectories experiencing close encounters with the planet, we revisited the works in the rotating frame on the family of simple-periodic symmetrical retrograde satellite orbits, or family $f$.\\ We highlighted that the family $f$ possesses two particular orbits that divide its neighbourhood into three connected areas: the ``satellized'' retrograde satellite, binary quasi-satellite and heliocentric quasi-satellite domains. We established that the last one is the only one reachable in the averaged approaches. The study of the frequencies of the fixed point families of the reduced averaged problem has also revealed some frozen ellipses in the heliocentric frame which are equivalent to sets of degenerate fixed points (also denoted ``circles of fixed points'') in the averaged problem. In order to exhibit fixed points when the planet's orbit is eccentric, we highlighted numerically that from each circle of fixed points originate at least two families of fixed points parametrized by the planet's eccentricity. Among them, $\GdQSs$ is the most interesting, as it is in quasi-satellite motion with a configuration of two ellipses with opposite perihelia and is connected to the stable family described in \cite{HaPsVo2009} in the planetary problem. Moreover, $\GdQSs$, as well as the family in the planetary problem, possesses a configuration with equal eccentricities for any mass ratio, with an eccentricity value close to $0.565$. As a consequence, this suggests that this remarkable configuration is weakly dependent on the ratio between the planetary masses. Likewise, let us mention that this configuration is similar to that of the family ``A.1/1'' described in \cite{Br1975} in the general three-body problem with three equal masses, which suggests a connection between them. When $e_0>0.4$, we noted that the moduli of the libration frequency $\nu$ and of the secular precession frequency $g$ along the family $f$ are of the same order as those of the two tadpole periodic orbit families. Thus, in the framework of the long-term dynamics of the Jovian quasi-satellite asteroids in the solar system, we can assume that a study of the global dynamics by means of frequency map analysis will reveal resonant structures close to those of the trojans identified in \cite{RoGa2006}. However, since the direction of the perihelion precession is opposite to that of the planets in the solar system, resonances with these secular frequencies should be of higher order in comparison with those of the tadpole orbits. On the contrary, resonances with their node precession should be of lower order. These questions will be addressed in a forthcoming work. \section*{Acknowledgement} This work has been developed during the Ph.D. thesis of Alexandre Pousse at the ``Astronomie et Syst\`emes Dynamiques'', IMCCE, Observatoire de Paris. \bibliographystyle{apalike}
\section{Introduction} \label{intro} Despite its great success, the standard model (SM) needs some extension. In order to accommodate atmospheric and solar \cite{Fogli:2012ua},\cite{Gonzalez-Garcia:2014bfa} neutrino data, the neutrino masses and mixings should be generated via some reasonable extension. The see-saw mechanism \cite{{Minkowski:1977sc},{GellMann:1980vs},{Yanagida:1979as},{Glashow:1979nm},{Mohapatra:1979ia},{Schechter:1980gr},{Schechter:1981cv}}, realized by the introduction of heavy right-handed neutrinos (RHNs), is the simplest one for neutrino mass generation. An additional appealing feature of this extension is that it can also generate the needed amount of baryon asymmetry via leptogenesis \cite{Fukugita:1986hr} (for reviews see: \cite{{Giudice:2003jh},{Buchmuller:2004nz},{Davidson:2008bu}}). Since the neutrino sector involves CP phases and parameters (e.g. Dirac Yukawa couplings and heavy Majorana neutrino masses) which have not been measured so far, a priori it is impossible to make predictions unless some reduction of the model parameters is achieved. For this purpose, texture zero Yukawa and/or Majorana mass matrices have been investigated in the literature \cite{{Frampton:2002qc},{Frampton:2002yf},{Ibarra:2003up},{Branco:2005jr},{Fritzsch:2011qv},{Dev:2006qe},{Xing:2002ta},{Grimus:2011sf},{Dev:2014dla},{Zhou:2015qua},{Kitabayashi:2015jdj},{Meloni:2014yea},{Merle:2006du},{Lashin:2011dn},{Deepthi:2011sk},{Liao:2013rca},{Gautam:2015kya},{Nath:2015emg}}. This approach, besides some predictions, opens up the possibility of relating the phase $\delta $ (appearing in neutrino oscillations) to the CP asymmetry of thermal leptogenesis \cite{{Frampton:2002qc},{Frampton:2002yf},{Ibarra:2003up}},\cite{{Babu:2007zm},{Babu:2008kp},{Harigaya:2012bw},{Ge:2010js}}. Since supersymmetry appears to be a well motivated framework for a solution to the gauge hierarchy problem, we consider the MSSM augmented with two RHN states. The latter, being quasi-degenerate in mass, have the potential to realize a resonant leptogenesis scenario \cite{{Flanz:1996fb},{Pilaftsis:1997jf},{Pilaftsis:2003gt}} (and \cite{{Blanchet:2012bk},{Dev:2015wpa},{Dev:2014laa},{Dev:2014wsa}} for recent discussions on resonant leptogenesis) which would not suffer from the gravitino problem \cite{{Khlopov:1984pf},{Ellis:1984er},{Davidson:2002qv},{Kohri:2005wn}}. Noting also that low scale SUSY offers a dark matter candidate, the framework we are considering is well motivated from several viewpoints. With the two RHNs we investigate texture zero $3\times 2$ Dirac type Yukawa couplings, which lead to neutrino mass matrices with zero entries. On top of this, we augment the Lagrangian couplings with a single $\Delta L=2$ lepton number violating $\rm d=5$ operator, which allows us to keep some predictions and, at the same time, makes some mass matrices experimentally acceptable. It turns out that only three Yukawa textures (out of nine) possess an unremovable cosmological CP phase, which we relate to the neutrino CP phase $\delta$. All experimentally viable neutrino mass matrices lead to interesting predictions, which we investigate in detail. The paper is organized as follows. In section \ref{matrices}, we describe our framework and list all possible two texture zero $3\times 2$ Yukawa matrices. In section \ref{derivation}, resorting to the $\rm d=5$ operator and the $3\times 2$ Yukawa matrices, we construct the neutrino mass matrices. A simple example of a possible generation of the $\rm d=5$ operators we are exploiting is also outlined.
In section \ref{analysis}, the parametrization of the lepton mixing matrix is given and the experimentally acceptable mass matrices are identified. We investigate these neutrino mass matrices and derive predictive relations, some of which are exact and very useful for the analysis. In section \ref{cosmology}, the cosmological CP phase is related to the $\delta $ phase responsible for the CP violation in neutrino oscillations. In Sect. \ref{conclusion} we conclude. \section{Two texture zero $3\times2$ Yukawa matrices: $2T_0Y_{32}$'s} \la{matrices} \numberwithin{equation}{section} Let us consider the lepton sector of the MSSM augmented with two right-handed neutrinos $N_{1}$ and $N_{2}$. The relevant Yukawa superpotential couplings are given by: \begin{equation} W_{lept}=W_{e}+W_{\nu},\quad W_{e}=l^{T}Y_{e}^{\rm diag}e^{c}h_{d},\quad W_{\nu}=l^{T}Y_{\nu}Nh_{u}-\frac{1}{2}N^{T}M_{N}N \la{r21}, \end{equation} where $h_{d}$ and $h_{u}$ are the down and up type MSSM Higgs doublet superfields, respectively. $N$, $l$, $e^{c}$ denote: \begin{equation} N=\binom{N_{1}}{N_{2}}, \quad l^{T}=(l_{1}, l_{2}, l_{3}), \quad e^{cT}=(e^{c}_{1}, e^{c}_{2}, e^{c}_{3}). \end{equation} In the next section, upon deriving the neutrino mass matrices, together with the couplings of Eq. (\ref{r21}), a single $\rm d=5$ operator per neutrino mass matrix will be applied. Because of this, in comparison with the approach considered in \cite{Babu:2008kp}, more two texture zero $Y_{\nu }$ Yukawa matrices will be compatible with the current experiments. We will work in a basis in which the charged lepton Yukawa matrix is diagonal and real: \beq Y_{e}^{\rm diag}={\rm Diag}(\lambda_{e}, \lambda_{\mu}, \lambda_{\tau}). \eeq As far as the RHN mass matrix $M_{N}$ is concerned, we will assume that it has the form: \beq M_{N}= \left(\begin{array}{ccc} 0&1\\ 1&0 \end{array}\right)M. \la{m01} \eeq This form of $M_{N}$ is crucial for our studies: at tree level, (\ref{m01}) leads to the mass degeneracy of the RHNs (its eigenvalues are $\pm M$, i.e. two Majorana states with equal mass $|M|$); it has interesting implications for resonant leptogenesis \cite{Babu:2007zm},\cite{Babu:2008kp} and also, as we will see below, for building predictive neutrino scenarios. In a spirit of \cite{Babu:2008kp}, here we attempt to classify specific texture zero scenarios with degenerate RHNs which lead to predictions consistent with experiments. The matrix $Y_{\nu}$ contains two columns. Since, due to the form of $M_{N}$, there is an exchange invariance $N_{1}\rightarrow N_{2}$, $N_{2}\rightarrow N_{1}$, it does not matter in which column of $Y_{\nu}$ we set elements to zero.
Thus, starting with the Yukawa couplings, we consider the following nine different $3\times2$ Yukawa matrices with two zero entries: \\ \beqs T_{1}= \left(\begin{array}{ccc} \times&0\\ \times&0\\ \times&\times \end{array}\right),\quad T_{2}= \left(\begin{array}{ccc} \times&0\\ \times&\times\\ \times&0 \end{array}\right),\quad T_{3}= \left(\begin{array}{ccc} \times&\times\\ \times&0\\ \times&0 \end{array}\right), \eeqs \beqs T_{4}= \left(\begin{array}{ccc} 0&0\\ \times&\times\\ \times&\times \end{array}\right),\quad T_{5}= \left(\begin{array}{ccc} \times&0\\ 0&\times\\ \times&\times \end{array}\right),\quad T_{6}= \left(\begin{array}{ccc} \times&0\\ \times&\times\\ 0&\times \end{array}\right), \eeqs \beq T_{7}= \left(\begin{array}{ccc} \times&\times\\ 0&0\\ \times&\times \end{array}\right),\quad T_{8}= \left(\begin{array}{ccc} \times&\times\\ \times&0\\ 0&\times \end{array}\right),\quad T_{9}= \left(\begin{array}{ccc} \times&\times\\ \times&\times\\ 0&0 \end{array}\right),\la{xxx} \eeq where the ``$\times$''s stand for non-zero entries. Next, we factor out phases from these textures in such a way as to make the maximal number of entries real. As it turns out, only $T_4, T_7$ and $T_9$ will have unfactorable phases. The latter should be relevant to the lepton asymmetry. \\ \\ TEXTURE $T_{1}$ \\ Starting with the $T_{1}$ Yukawa matrix, we parameterize it and write it in a form with factored-out phases: \beq T_{1}=\begin{pmatrix} a_{1}e^{i\alpha_{1}} & 0\\ a_{2}e^{i\alpha_{2}} & 0\\ a_{3}e^{i\alpha_{3}} & b_{3}e^{i\beta_{3}} \end{pmatrix} = \begin{pmatrix} e^{ix} & 0&0\\ 0 & e^{iy}&0\\ 0 & 0&e^{iz} \end{pmatrix} \begin{pmatrix} a_{1} & 0\\ a_{2} & 0\\ a_{3} &b_{3} \end{pmatrix} \begin{pmatrix} e^{i\omega} & 0\\ 0 & e^{i\rho} \end{pmatrix}, \eeq {\rm with} \beq \omega= \rho+\alpha_{3}-\beta_{3},\quad x = \alpha_{1}+\beta_{3}-\alpha_{3}-\rho, \quad y= \alpha_{2}+\beta_{3}-\alpha_{3}-\rho,\quad z=\beta_{3}-\rho, \eeq where $a_{i}$, $b_{3}$ and all phases are real. Below, in a similar way, we write down the remaining Yukawa textures given in Eq. (\ref{xxx}). \\ \\ TEXTURE $T_{2}$ \\ \beq T_{2}=\begin{pmatrix} a_{1}e^{i\alpha_{1}} & 0\\ a_{2}e^{i\alpha_{2}} & b_{2}e^{i\beta_{2}}\\ a_{3}e^{i\alpha_{3}} & 0 \end{pmatrix} = \begin{pmatrix} e^{ix} & 0&0\\ 0 & e^{iy}&0\\ 0 & 0&e^{iz} \end{pmatrix} \begin{pmatrix} a_{1} & 0\\ a_{2} & b_{2}\\ a_{3} &0 \end{pmatrix} \begin{pmatrix} e^{i\omega} & 0\\ 0 & e^{i\rho} \end{pmatrix}, \eeq {\rm with} \beq \omega= \rho+\alpha_{2}-\beta_{2},\quad x= \alpha_{1}+\beta_{2}-\alpha_{2}-\rho,\quad y= \beta_{2}-\rho,\quad z= \alpha_{3}+\beta_{2}-\alpha_{2}-\rho. \eeq TEXTURE $T_{3}$ \\ \beq T_{3}=\begin{pmatrix} a_{1}e^{i\alpha_{1}} &b_{1}e^{i\beta_{1}}\\ a_{2}e^{i\alpha_{2}}& 0\\ a_{3}e^{i\alpha_{3}} &0 \end{pmatrix} = \begin{pmatrix} e^{ix} & 0&0\\ 0 & e^{iy}&0\\ 0 & 0&e^{iz} \end{pmatrix} \begin{pmatrix} a_{1} & b_{1}\\ a_{2} & 0\\ a_{3} &0 \end{pmatrix} \begin{pmatrix} e^{i\omega} & 0\\ 0 & e^{i\rho} \end{pmatrix}, \eeq {\rm with} \beq \omega= \rho+\alpha_{1}-\beta_{1},\quad x= \beta_{1}-\rho,\quad y= \alpha_{2}-\alpha_{1}+\beta_{1}-\rho,\quad z= \alpha_{3}-\alpha_{1}+\beta_{1}-\rho.
\eeq TEXTURE $T_{4}$ \\ \beq T_{4}=\begin{pmatrix} 0 & 0\\ a_{2}e^{i\alpha_{2}} & b_{2}e^{i\beta_{2}}\\ a_{3}e^{i\alpha_{3}} & b_{3}e^{i\beta_{3}} \end{pmatrix} = \begin{pmatrix} e^{ix} & 0&0\\ 0 & e^{iy}&0\\ 0 & 0&e^{iz} \end{pmatrix} \begin{pmatrix} 0 & 0\\ a_{2} & b_{2}\\ a_{3} &b_{3}e^{i\phi} \end{pmatrix} \begin{pmatrix} e^{i\omega} & 0\\ 0 & e^{i\rho} \end{pmatrix}, \la{t4212} \eeq {\rm with} \beq \omega=\alpha_{2}-\beta_{2}+\rho,\quad y= \beta_{2}-\rho,\quad z= \alpha_{3}-\alpha_{2}+\beta_{2}-\rho, \quad \phi=\alpha_{2}-\alpha_{3}+\beta_{3}-\beta_{2}. \la{t42121} \eeq TEXTURE $T_{5}$ \\ \beq T_{5}=\begin{pmatrix} a_{1}e^{i\alpha_{1}} & 0\\ 0 & b_{2}e^{i\beta_{2}}\\ a_{3}e^{i\alpha_{3}} & b_{3}e^{i\beta_{3}} \end{pmatrix} = \begin{pmatrix} e^{ix} & 0&0\\ 0 & e^{iy}&0\\ 0 & 0&e^{iz} \end{pmatrix} \begin{pmatrix} a_{1} & 0\\ 0 & b_{2}\\ a_{3} &b_{3} \end{pmatrix} \begin{pmatrix} e^{i\omega} & 0\\ 0 & e^{i\rho} \end{pmatrix}, \la{t5214} \eeq {\rm with} \beq \omega= \rho+\alpha_{3}-\beta_{3},\quad x= \alpha_{1}+\beta_{3}-\alpha_{3}-\rho,\quad y= \beta_{2}-\rho,\quad z= \beta_{3}-\rho. \eeq TEXTURE $T_{6}$ \\ \beq T_{6}=\begin{pmatrix} a_{1}e^{i\alpha_{1}} & 0\\ a_{2}e^{i\alpha_{2}}& b_{2}e^{i\beta_{2}}\\ 0 & b_{3}e^{i\beta_{3}} \end{pmatrix} = \begin{pmatrix} e^{ix} & 0&0\\ 0 & e^{iy}&0\\ 0 & 0&e^{iz} \end{pmatrix} \begin{pmatrix} a_{1} & 0\\ a_{2} & b_{2}\\ 0 &b_{3} \end{pmatrix} \begin{pmatrix} e^{i\omega} & 0\\ 0 & e^{i\rho} \end{pmatrix}, \eeq {\rm with} \beq \omega= \rho+\alpha_{2}-\beta_{2}, \quad x= \alpha_{1}+\beta_{2}-\alpha_{2}-\rho, \quad y= \beta_{2}-\rho, \quad z= \beta_{3}-\rho. \eeq TEXTURE $T_{7}$ \\ \beq T_{7}=\begin{pmatrix} a_{1}e^{i\alpha_{1}} & b_{1}e^{i\beta_{1}}\\ 0 & 0\\ a_{3}e^{i\alpha_{3}} & b_{3}e^{i\beta_{3}} \end{pmatrix} = \begin{pmatrix} e^{ix} & 0&0\\ 0 & e^{iy}&0\\ 0 & 0&e^{iz} \end{pmatrix} \begin{pmatrix} a_{1} & b_{1}\\ 0 & 0\\ a_{3} &b_{3}e^{i\phi} \end{pmatrix} \begin{pmatrix} e^{i\omega} & 0\\ 0 & e^{i\rho} \end{pmatrix}, \la{t7218} \eeq {\rm with} \beq \omega= \rho+\alpha_{1}-\beta_{1}, \quad x= \beta_{1}-\rho, \quad z= \alpha_{3}-\alpha_{1}+\beta_{1}-\rho,\quad \phi= \alpha_{1}-\alpha_{3}-\beta_{1}+\beta_{3}. \eeq TEXTURE $T_{8}$ \\ \beq T_{8}=\begin{pmatrix} a_{1}e^{i\alpha_{1}} &b_{1}e^{i\beta_{1}}\\ a_{2}e^{i\alpha_{2}}& 0\\ 0 &b_{3}e^{i\beta_{3}} \end{pmatrix} = \begin{pmatrix} e^{ix} & 0&0\\ 0 & e^{iy}&0\\ 0 & 0&e^{iz} \end{pmatrix} \begin{pmatrix} a_{1} & b_{1}\\ a_{2} & 0\\ 0 &b_{3} \end{pmatrix} \begin{pmatrix} e^{i\omega} & 0\\ 0 & e^{i\rho} \end{pmatrix}, \eeq {\rm with} \beq \omega= \rho+\alpha_{1}-\beta_{1}, \quad x= \beta_{1}-\rho, \quad y=\alpha_{2}-\alpha_{1}+\beta_{1}-\rho, \quad z= \beta_{3}-\rho. \eeq TEXTURE $T_{9}$ \\ \beq T_{9}=\begin{pmatrix} a_{1}e^{i\alpha_{1}} & b_{1}e^{i\beta_{1}}\\ a_{2}e^{i\alpha_{2}} & b_{2}e^{i\beta_{2}}\\ 0 & 0 \end{pmatrix} = \begin{pmatrix} e^{ix} & 0&0\\ 0 & e^{iy}&0\\ 0 & 0&e^{iz} \end{pmatrix} \begin{pmatrix} a_{1} & b_{1}\\ a_{2} & b_{2}e^{i\phi}\\ 0 &0 \end{pmatrix} \begin{pmatrix} e^{i\omega} & 0\\ 0 & e^{i\rho} \end{pmatrix}, \la{t9222} \eeq {\rm with} \beq \omega=\alpha_{1}-\beta_{1}+\rho, \quad x= \beta_{1}-\rho, \quad y= \alpha_{2}-\alpha_{1}+\beta_{1}-\rho, \quad \phi= \alpha_{1}-\beta_{1}-\alpha_{2}+\beta_{2}. \la{t9223} \eeq The phases $x, y$ and $z$ can be eliminated by a proper redefinition of the states $l$ and $e^c$. As far as the phases $\omega $ and $\rho $ are concerned, because of the form of the $M_N$ matrix (\ref{m01}), they too will turn out to be non-physical.
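As a quick numerical sanity check of this phase bookkeeping, the following short \texttt{Python} sketch verifies the factorization of $T_{4}$ given in Eqs. (\ref{t4212}), (\ref{t42121}); all numerical values below are arbitrary illustrative choices, not taken from data:
\begin{verbatim}
import numpy as np

# Illustrative (hypothetical) moduli and phases for the T4 entries:
a2, b2, a3, b3 = 0.7, 1.3, 0.4, 0.9
al2, be2, al3, be3 = 0.3, 1.1, -0.5, 2.0   # alpha_i, beta_i
rho = 0.25                                 # free phase

T4 = np.array([[0, 0],
               [a2*np.exp(1j*al2), b2*np.exp(1j*be2)],
               [a3*np.exp(1j*al3), b3*np.exp(1j*be3)]])

# Phase assignments as in the text:
om, y = al2 - be2 + rho, be2 - rho
z, phi = al3 - al2 + be2 - rho, al2 - al3 + be3 - be2
x = 0.0                     # undetermined: the first row of T4 vanishes

P1 = np.diag(np.exp(1j*np.array([x, y, z])))
YR = np.array([[0, 0], [a2, b2], [a3, b3*np.exp(1j*phi)]])
P2 = np.diag(np.exp(1j*np.array([om, rho])))

assert np.allclose(T4, P1 @ YR @ P2)   # T4 = P1 * Y_R * P2 holds
\end{verbatim}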
This removability of $\omega $ and $\rho $ is one main difference of our construction from the scenarios considered earlier \cite{Harigaya:2012bw}. As we see, in the textures $T_{4}$, $T_{7}$ and $T_{9}$ there remains one unremovable phase $\phi$ (i.e. in the second matrices on the r.h.s. of Eqs. (\ref{t4212}), (\ref{t7218}) and (\ref{t9222}), respectively). This physical phase $\phi$ is relevant to leptogenesis \cite{Babu:2008kp} and also, as we will see below, it will be related to the phase $\delta$ determined from the neutrino sector. \section{Neutrino mass matrices derived from $2T_0Y_{32}$'s and one ${\rm d}=5$ operator} \la{derivation} \numberwithin{equation}{section} Integrating out the RHNs from the superpotential couplings of Eq. (\ref{r21}), using the see-saw formula, we get the following contribution to the light neutrino mass matrix: \beq M^{ss}_{\nu}=\langle h^{0}_{u}\rangle^{2} Y_{\nu}M^{-1}_{N}Y^{T}_{\nu}. \la{seesaw} \eeq For $Y_{\nu }$ in (\ref{seesaw}) the textures $T_i$ listed in the previous section should be used in turn. All obtained matrices $M_{\nu }^{ss}$, if identified with light neutrino mass matrices, will give experimentally unacceptable results. The reason is the number of texture zeros which we have in the $T_{i}$ and $M_{N}$ matrices. In order to overcome this difficulty we include the following $\rm d=5$ operator: \beq {\cal O}^{5}_{ij}\equiv\frac{\tilde{d_{5}}e^{i{x_{5}}}}{2M_{*}}l_{i}l_{j}h_{u}h_{u} \la{d5} \eeq where $\tilde{d_{5}}$, $x_{5}$ and $M_{*}$ are real parameters. For each case, we will include a single term of the type of Eq. (\ref{d5}). The latter, together with (\ref{seesaw}), will contribute to the neutrino mass matrix. This will allow us to have viable models and, at the same time, because of the minimal number of additions, we will still have predictive scenarios. The operators (\ref{d5}) can be generated by another sector in such a way as not to affect the forms of the $T_{i}$ and $M_{N}$ matrices. We comment on this in Sect. \ref{origin}. Here, we just consider the operators (\ref{d5}) without specifying their origin and investigate their implications. Recall that, in the previous section, we have written the Yukawa textures in the form: \beq Y_{\nu}=P_{1}Y^{R}_{\nu}P_{2}, \eeq where $P_{1}, P_{2}$ are diagonal phase matrices and $Y^{R}_{\nu}$ is either a real matrix or contains only one phase (for $T_{4}$, $T_{7}$ and $T_{9}$). Making the field phase redefinitions: \beq l^{\prime}=P_{1}l, \quad N^{\prime}=P_{2}N, \quad (e^{\prime})^{c}=P^{*}_{1}e^{c} \eeq with: \beq P_{1}= \begin{pmatrix} e^{ix} & 0&0\\ 0 & e^{iy}&0\\ 0&0&e^{iz} \end{pmatrix}, \quad P_{2}= \begin{pmatrix} e^{i\omega} & 0\\ 0 & e^{i\rho} \end{pmatrix} \eeq the superpotential couplings will become: \begin{equation} W_{e}=(l^{\prime})^{T}Y_{e}^{\rm diag}(e^{\prime})^{c}h_{d},\quad W_{\nu}=(l^{\prime})^{T}Y^{R}_{\nu}N^{\prime}h_{u}-\frac{1}{2}(N^{\prime})^{T}M^{\prime}_{N}N^{\prime} \end{equation} with: \beq M^{\prime}_{N}= \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}\tilde{M}, \quad \tilde{M}=e^{-i(\omega + \rho)}M. \eeq Now, to simplify the notation, we get rid of the primes (i.e. perform $l^{\prime}\rightarrow l$, $e^{c \prime}\rightarrow e^{c}$, ...); using $Y_{\nu}^R$ instead of $Y_{\nu}$ in Eq. (\ref{seesaw}), from the different $T_{i}$ textures we get the corresponding $M_{\nu}^{ss}$, and then, adding the operator (\ref{d5}), we obtain the final neutrino mass matrix.
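To make this step concrete, here is a minimal numerical sketch of the see-saw contribution (\ref{seesaw}) for the texture $T_{4}$ (all input values are illustrative placeholders, not fits); it makes the zero pattern of $M^{ss}_{\nu}$, and hence the need for the $\rm d=5$ term, explicit:
\begin{verbatim}
import numpy as np

vu, M, phi = 174.0, 1.0e14, 1.2   # <h_u^0> (GeV), RHN scale, phase phi;
lam = 1.0e-2                      # overall Yukawa size; all illustrative

YR = lam*np.array([[0, 0],
                   [0.7, 1.3],
                   [0.4, 0.9*np.exp(1j*phi)]])   # T4 with one phase phi
MN = M*np.array([[0, 1], [1, 0]])

Mss = vu**2 * YR @ np.linalg.inv(MN) @ YR.T      # see-saw formula
print(np.round(Mss/np.abs(Mss).max(), 3))
# The first row and column vanish identically, so a single d=5 term in
# the (1,1), (1,2) or (1,3) entry is what renders the matrix viable.
\end{verbatim}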
\\ \\ From textures $T_{1,2,3}$ we obtain: \beq M_{T_{1}}= \begin{pmatrix} 0 & 0&a_{1}b_{3}\\ 0 & 0&a_{2}b_{3}\\ a_{1}b_{3}&a_{2}b_{3}&2a_{3}b_{3} \end{pmatrix}\bar m, \quad M_{T_{2}}= \begin{pmatrix} 0 & a_{1}b_{2}&0\\ a_{1}b_{2}&2a_{2}b_{2}&a_{3}b_{2}\\ 0&a_{3}b_{2}&0 \end{pmatrix}\bar m, \quad M_{T_{3}}= \begin{pmatrix} 2a_{1}b_{1} & a_{2}b_{1}&a_{3}b_{1}\\ a_{2}b_{1}& 0&0\\ a_{3}b_{1}&0&0 \end{pmatrix}\bar m, \eeq where $\bar m=\langle h^{0}_{u}\rangle^{2}/\tilde M$. It is easy to verify that adding one $\rm d=5$ operator mass term to any entry of these mass matrices will not make them experimentally acceptable. Thus, discarding them, we move to the remaining textures. \\ \\ From texture ${T_{4}}$: \beq M_{T_{4}}= \begin{pmatrix} 0 & 0&0\\ 0&2a_{2}b_{2}&a_{3}b_{2}+a_{2}b_{3}e^{i\phi}\\ 0&a_{3}b_{2}+a_{2}b_{3}e^{i\phi}&2a_{3}b_{3}e^{i\phi} \end{pmatrix}\bar m. \eeq \\ Adding the $\rm d=5$ operator to the zero entries of this matrix, we will get three different neutrino mass matrices. Therefore, the addition of a (\ref{d5})-type term will be performed in the (1,1), (1,2) and (1,3) entries, respectively. Since the phase $x$ in Eqs. (\ref{t4212}), (\ref{t42121}) is undetermined, we can shift the phase of the state $l_{1}$ in such a way as to match the phase of the (\ref{d5}) operator with the phase of $\bar m$. Thus, this addition will not introduce additional phases inside the neutrino mass matrices. The resulting matrices have the forms: \\ \beq M^{(11)}_{T_{4}}= \begin{pmatrix} d_{5}& 0&0\\ 0&2a_{2}b_{2}&a_{3}b_{2}+a_{2}b_{3}e^{i\phi}\\ 0&a_{3}b_{2}+a_{2}b_{3}e^{i\phi}&2a_{3}b_{3}e^{i\phi} \end{pmatrix}\bar m, \eeq \beq M^{(12)}_{T_{4}}= \begin{pmatrix} 0 & d_{5}&0\\ d_{5}&2a_{2}b_{2}&a_{3}b_{2}+a_{2}b_{3}e^{i\phi}\\ 0&a_{3}b_{2}+a_{2}b_{3}e^{i\phi}&2a_{3}b_{3}e^{i\phi} \end{pmatrix}\bar m, \la{t124} \eeq \beq M^{(13)}_{T_{4}}= \begin{pmatrix} 0 & 0&d_{5}\\ 0&2a_{2}b_{2}&a_{3}b_{2}+a_{2}b_{3}e^{i\phi}\\ d_{5}&a_{3}b_{2}+a_{2}b_{3}e^{i\phi}&2a_{3}b_{3}e^{i\phi} \end{pmatrix}\bar m, \la{t134} \eeq where $d_5$ is a real parameter: $d_5=-\tilde d_5 \tilde M/M_*$. In a similar way, we obtain the other neutrino mass matrices using the remaining Yukawa textures. Also, one can make sure that for those remaining cases there are undetermined phases [see Eqs. (\ref{t5214})-(\ref{t9223})] whose proper shift can match the phase of the term (\ref{d5}) with that of $\bar m$. Therefore, below, without loss of generality, we can take the parameter $d_5$ (in the neutrino mass matrices) to be real. \\ \\ From texture ${T_{5}}$: \beq M_{T_{5}}= \begin{pmatrix} 0 & a_{1}b_{2}&a_{1}b_{3}\\ a_{1}b_{2} & 0&a_{3}b_{2}\\ a_{1}b_{3}&a_{3}b_{2}&2a_{3}b_{3} \end{pmatrix}\bar m. \eeq \beq M^{(11)}_{T_{5}}= \left(\begin{array}{ccc} d_{5} & a_{1}b_{2}&a_{1}b_{3}\\ a_{1}b_{2} & 0&a_{3}b_{2}\\ a_{1}b_{3}&a_{3}b_{2}&2a_{3}b_{3} \end{array}\right)\bar m, \quad M^{(22)}_{T_{5}}= \left(\begin{array}{ccc} 0 & a_{1}b_{2}&a_{1}b_{3}\\ a_{1}b_{2} & d_{5}&a_{3}b_{2}\\ a_{1}b_{3}&a_{3}b_{2}&2a_{3}b_{3} \end{array}\right)\bar m. \eeq From texture ${T_{6}}$: \beq M_{T_{6}}= \begin{pmatrix} 0 & a_{1}b_{2}&a_{1}b_{3}\\ a_{1}b_{2} &2a_{2}b_{2}&a_{2}b_{3}\\ a_{1}b_{3}&a_{2}b_{3}&0 \end{pmatrix}\bar m. \eeq \beq M^{(33)}_{T_{6}}= \left(\begin{array}{ccc} 0 & a_{1}b_{2}&a_{1}b_{3}\\ a_{1}b_{2} &2a_{2}b_{2}&a_{2}b_{3}\\ a_{1}b_{3}&a_{2}b_{3}&d_{5} \end{array}\right)\bar m, \quad M^{(11)}_{T_{6}}= \left(\begin{array}{ccc} d_{5}& a_{1}b_{2}&a_{1}b_{3}\\ a_{1}b_{2} &2a_{2}b_{2}&a_{2}b_{3}\\ a_{1}b_{3}&a_{2}b_{3}&0 \end{array}\right)\bar m.
\eeq From texture ${T_{7}}$: \beq M_{T_{7}}= \begin{pmatrix} 2a_{1}b_{1} & 0&a_{3}b_{1}+a_{1}b_{3}e^{i\phi}\\ 0&0&0\\ a_{3}b_{1}+a_{1}b_{3}e^{i\phi}&0&2a_{3}b_{3}e^{i\phi} \end{pmatrix}\bar m. \eeq \beq M^{(22)}_{T_{7}}= \begin{pmatrix} 2a_{1}b_{1} & 0&a_{3}b_{1}+a_{1}b_{3}e^{i\phi}\\ 0&d_{5}&0\\ a_{3}b_{1}+a_{1}b_{3}e^{i\phi}&0&2a_{3}b_{3}e^{i\phi} \end{pmatrix}\bar m, \eeq \beq M^{(12)}_{T_{7}}=\begin{pmatrix} 2a_{1}b_{1} & d_{5}&a_{3}b_{1}+a_{1}b_{3}e^{i\phi}\\ d_{5}&0&0\\ a_{3}b_{1}+a_{1}b_{3}e^{i\phi}&0&2a_{3}b_{3}e^{i\phi} \end{pmatrix}\bar m, \eeq \beq M^{(23)}_{T_{7}}= \begin{pmatrix} 2a_{1}b_{1} & 0&a_{3}b_{1}+a_{1}b_{3}e^{i\phi}\\ 0&0&d_{5}\\ a_{3}b_{1}+a_{1}b_{3}e^{i\phi}&d_{5}&2a_{3}b_{3}e^{i\phi} \end{pmatrix}\bar m.\la{t237} \eeq From texture ${T_{8}}$: \beq M_{T_{8}}= \begin{pmatrix} 2a_{1}b_{1} & a_{2}b_{1}&a_{1}b_{3}\\ a_{2}b_{1}& 0&a_{2}b_{3}\\ a_{1}b_{3}&a_{2}b_{3}&0 \end{pmatrix}\bar m. \eeq \beq M^{(22)}_{T_{8}}= \left(\begin{array}{ccc} 2a_{1}b_{1} & a_{2}b_{1}&a_{1}b_{3}\\ a_{2}b_{1}& d_{5}&a_{2}b_{3}\\ a_{1}b_{3}&a_{2}b_{3}&0 \end{array}\right)\bar m, \quad M^{(33)}_{T_{8}}= \left(\begin{array}{ccc} 2a_{1}b_{1} & a_{2}b_{1}&a_{1}b_{3}\\ a_{2}b_{1}& 0&a_{2}b_{3}\\ a_{1}b_{3}&a_{2}b_{3}&d_{5} \end{array}\right)\bar m. \eeq From texture ${T_{9}}$: \beq M_{T_{9}}= \begin{pmatrix} 2a_{1}b_{1} & a_{2}b_{1}+a_{1}b_{2}e^{i\phi}&0\\ a_{2}b_{1}+a_{1}b_{2}e^{i\phi}&2a_{2}b_{2}e^{i\phi} &0\\ 0&0&0 \end{pmatrix}\bar m. \eeq \beq M^{(13)}_{T_{9}}= \begin{pmatrix} 2a_{1}b_{1} & a_{2}b_{1}+a_{1}b_{2}e^{i\phi}&d_{5}\\ a_{2}b_{1}+a_{1}b_{2}e^{i\phi}&2a_{2}b_{2}e^{i\phi} &0\\ d_{5}&0&0 \end{pmatrix}\bar m, \eeq \beq M^{(23)}_{T_{9}}= \begin{pmatrix} 2a_{1}b_{1} & a_{2}b_{1}+a_{1}b_{2}e^{i\phi}&0\\ a_{2}b_{1}+a_{1}b_{2}e^{i\phi}&2a_{2}b_{2}e^{i\phi} &d_{5}\\ 0&d_{5}&0 \end{pmatrix}\bar m, \la{t239} \eeq \beq M^{(33)}_{T_{9}}= \begin{pmatrix} 2a_{1}b_{1} & a_{2}b_{1}+a_{1}b_{2}e^{i\phi}&0\\ a_{2}b_{1}+a_{1}b_{2}e^{i\phi}&2a_{2}b_{2}e^{i\phi} &0\\ 0&0&d_{5} \end{pmatrix}\bar m. \eeq We have shown that only the $T_{4}$, $T_{7}$ and $T_{9}$ $2T_0Y_{32}$'s give rise to complex mass matrices, and that the complexity, i.e. the phase $\delta $ in the lepton mixing matrix, arises through (\ref{seesaw}), that is from the complex $2T_0Y_{32}$'s, and not from the $x_{5}$ phase. \subsection{Possible origin of $\rm d=5$ operators} \la{origin} The $\rm d=5$ operator coupling [see Eq. (\ref{d5})] has in our case been introduced directly in the neutrino mass matrices. Here we give one example of a possible generation of the $\rm d=5$ operators we are exploiting within our setup. Besides being of quantum gravity origin, such $\rm d=5$ couplings can be generated from a different sector via renormalizable interactions. For instance, introducing the pair of MSSM singlet states ${\cal N}$, $\overline{\cal N}$ and the superpotential couplings \begin{equation} \lam^{(i)}l_i{\cal N}h_u+\bar \lam^{(j)}l_j\overline{\cal N}h_u-M_*{\cal N}\overline{\cal N}~, \end{equation} it is easy to verify that integration of the heavy ${\cal N}$, $\overline{\cal N}$ multiplets leads to the operator in Eq. (\ref{d5}) with \begin{equation} \tilde {d_5}e^{ix_5}=2\lam^{(i)}\bar \lam^{(j)}~. \end{equation} An important ingredient here is to maintain the forms of the resulting mass matrices and not to mix the states ${\cal N}$, $\overline{\cal N}$ with the RHNs $N_{1,2}$. This can be achieved by some (possibly flavor) symmetries (which we do not pursue here).
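For the reader's convenience, we sketch the F-term bookkeeping behind this statement (displaying only the relevant terms). The supersymmetric equations of motion of the heavy singlets give
\beqs
\frac{\partial W}{\partial {\cal N}}=0~\Rightarrow~ \overline{\cal N}=\frac{\lam^{(i)}l_ih_u}{M_*}~,\qquad
\frac{\partial W}{\partial \overline{\cal N}}=0~\Rightarrow~ {\cal N}=\frac{\bar \lam^{(j)}l_jh_u}{M_*}~,
\eeqs
and substituting these back into the superpotential one obtains
\beqs
W_{\rm eff}=\frac{\lam^{(i)}\bar \lam^{(j)}}{M_*}\,l_il_jh_uh_u~,
\eeqs
which indeed matches Eq. (\ref{d5}) with $\tilde {d_5}e^{ix_5}=2\lam^{(i)}\bar \lam^{(j)}$.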
Perhaps a safer way to generate those $\Delta L=2$ effective couplings would be to proceed in the spirit of the type II \cite{{Magg:1980ut},{Lazarides:1980nt},{Mohapatra:1980yp}} or type III \cite{Foot:1988aq},\cite{Ma:1998dn} see-saw mechanisms, or to exploit alternative possibilities \cite{{Zee:1980ai},{Babu:1988ki},{Babu:2011vb},{Tavartkiladze:2001by},{Perez:2008ha},{Bajc:2007zf},{Babu:2009aq},{Bonnet:2009ej},{Kumericki:2012bh},{Xing:2009hx},{Gavela:2009cd}} through the introduction of appropriate extra states. Details of such scenarios should be pursued elsewhere. \section{Analyzing neutrino mass matrices} \la{analysis} \numberwithin{equation}{section} Since we are working in a basis with a diagonal charged lepton mass matrix, the lepton mixing matrix $U$ comes entirely from the neutrino sector. Therefore, the following equality holds: \beq M_{\nu}=PU^{*}P^{'}M_{\nu}^{\rm diag}U^{+}P \la{nu1} \eeq where \beq M_{\nu}^{\rm diag}=(m_{1},m_{2},m_{3}), \quad P={\rm Diag}(e^{i\omega_{1}},e^{i\omega_{2}},e^{i\omega_{3}}),\quad P^{'}={\rm Diag}(1,e^{i\rho_{1}},e^{i\rho_{2}})\la{nu2} \eeq \beq U= \left(\begin{array}{ccc} c_{13}c_{12} &c_{13}s_{12}&s_{13}e^{-i\delta}\\ -c_{23}s_{12}-s_{23}s_{13}c_{12}e^{i\delta}& c_{23}c_{12}-s_{23}s_{13}s_{12}e^{i\delta}&s_{23}c_{13}\\ s_{23}s_{12}-c_{23}s_{13}c_{12}e^{i\delta}&-s_{23}c_{12}-c_{23}s_{13}s_{12}e^{i\delta}&c_{23}c_{13} \end{array}\right)\la{nu3} \eeq where $m_{i}$ denote the neutrino masses. $U$ given in Eq. (\ref{nu3}) is the standard parametrization used in the literature (see for instance \cite{Fogli:2012ua}, \cite{Agashe:2014kda}). The relation (\ref{nu1}) turns out to be convenient and useful for the neutrino mass matrix analysis. Numerical values of the oscillation parameters, both for normal (NH) and inverted (IH) hierarchies, can be found in \cite{Gonzalez-Garcia:2014bfa}. Thus, for these mass orderings we will use the following notations: \\ \begin{center} {\centering For normal hierarchy (NH):} \end{center} \beq \Delta m_{sol}^{2}=m_{2}^{2}-m_{1}^{2},\quad \Delta m_{atm}^{2}=m_{3}^{2}-m_{2}^{2},\quad m_{1}=\sqrt{m_{3}^{2}-\Delta m_{atm}^{2}-\Delta m_{sol}^{2}},\quad m_{2}=\sqrt{m_{3}^{2}-\Delta m_{atm}^{2}} \la{nh1} \eeq \begin{center} {\centering For inverted hierarchy (IH):} \end{center} \beq \Delta m_{atm}^{2}=m_{2}^{2}-m_{3}^{2},\quad \Delta m_{sol}^{2}=m_{2}^{2}-m_{1}^{2},\quad m_{1}=\sqrt{m_{3}^{2}+\Delta m_{atm}^{2}-\Delta m_{sol}^{2}},\quad m_{2}=\sqrt{m_{3}^{2}+\Delta m_{atm}^{2}} \la{ih1} \eeq \subsection{Types of neutrino mass matrices} Complex $3\times3$ Majorana type neutrino mass matrices with more than two independent zero entries are all excluded by current experiments. As it turns out, the experimental data also exclude the possibility of real neutrino mass matrices with two independent zero entries. This was noticed earlier in studies of texture zero neutrino mass matrices \cite{Frampton:2002qc},\cite{Frampton:2002yf},\cite{Fritzsch:2011qv},\cite{Dev:2006qe}. Therefore, the experimentally viable neutrino mass matrices from our $3\times2$ Yukawa textures (listed in Sect. \ref{matrices}) should be produced by $T_4,\dots,T_9$, giving either neutrino mass matrices with two independent zero entries and a complex phase, or one zero entry real neutrino mass matrices (via the textures $T_{5}$, $T_{6}$, $T_{8}$ and one d=5 operator).
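For the numerical cross-checks quoted below, it is convenient to fix a small computational helper. The following \texttt{Python} sketch is our own addition and encodes nothing beyond Eqs. (\ref{nu3}) and (\ref{nh1}); it is reused in the later sketches:
\begin{verbatim}
import numpy as np

def pmns(s12sq, s13sq, s23sq, delta):
    """Lepton mixing matrix U in the standard parametrization."""
    s12, s13, s23 = np.sqrt([s12sq, s13sq, s23sq])
    c12, c13, c23 = np.sqrt([1 - s12sq, 1 - s13sq, 1 - s23sq])
    e = np.exp(1j*delta)
    return np.array(
      [[ c13*c12,                  c13*s12,                  s13/e  ],
       [-c23*s12 - s23*s13*c12*e,  c23*c12 - s23*s13*s12*e,  s23*c13],
       [ s23*s12 - c23*s13*c12*e, -s23*c12 - c23*s13*s12*e,  c23*c13]])

def masses_NH(m3, dm2_sol=7.5e-5, dm2_atm=2.382e-3):
    """(m1, m2, m3) for normal hierarchy from the free mass m3."""
    m2 = np.sqrt(m3**2 - dm2_atm)
    return np.sqrt(m2**2 - dm2_sol), m2, m3
\end{verbatim}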
The two zero entry complex neutrino mass matrices we have obtained have the forms: \beq P_{1}=\left(\begin{array}{ccc} 0 & \times&0\\ \times& \times&\times\\ 0&\times&\times \end{array}\right),\quad P_{2}=\left(\begin{array}{ccc} 0& 0&\times\\ 0&\times&\times\\ \times&\times&\times \end{array}\right),\quad P_{3}=\left(\begin{array}{ccc} \times &0&\times\\ 0& 0&\times\\ \times&\times&\times \end{array}\right),\quad P_{4}=\left(\begin{array}{ccc} \times& \times&0\\ \times& \times&\times\\ 0&\times&0 \end{array}\right). \la{pse} \eeq These types of textures correspond to the following mass matrices we have obtained: \\ \\ $P_{1}$-type:\quad$M^{(12)}_{T_{4}}$,\qquad$P_{2}$-type:\quad$M^{(13)}_{T_{4}}$,\quad$P_{3}$-type:\quad$M^{(23)}_{T_{7}}$,\qquad$P_{4}$-type:\quad$M^{(23)}_{T_{9}}$ \\ \\ As far as the one zero entry neutrino mass matrices are concerned, we get the following types of real mass matrices: \beq P_{5}=\left(\begin{array}{ccc} 0 & \times&\times\\ \times& \times&\times\\ \times&\times&\times \end{array}\right),\quad P_{6}=\left(\begin{array}{ccc} \times&\times&\times\\ \times&0&\times\\ \times&\times&\times \end{array}\right),\quad P_{7}=\left(\begin{array}{ccc} \times &\times&\times\\ \times& \times&\times\\ \times&\times&0 \end{array}\right). \la{ps1} \eeq Also here, we indicate the correspondence of the $P_{5,6,7}$ textures to the appropriate neutrino mass matrices we have obtained: $P_{5}$-type:\quad$M^{(22)}_{T_{5}}$,\quad$M^{(33)}_{T_{6}}$,\qquad$P_{6}$-type:\quad$M^{(11)}_{T_{5}}$,\quad$M^{(33)}_{T_{8}}$ and\qquad$P_{7}$-type:\quad$M^{(11)}_{T_{6}}$,\quad$M^{(22)}_{T_{8}}$. \subsection{Predictions from $P_{1,2,3,4}$ type neutrino mass matrices} Here we analyze the neutrino mass matrices with two independent zero entries. As we will see, for each case we will get several predictions. \\ \\ TYPE $P_{1}$ \\ The structure of $P_{1}$ in Eq. (\ref{pse}) imposes the following conditions: $M^{(1,1)}_{\nu}=0$ and $M^{(1,3)}_{\nu}=0$, which, taking into account (\ref{nu1})-(\ref{nu3}), give the following relations: \beq \frac{m_{1}}{m_{3}}c^{2}_{12}+\frac{m_{2}}{m_{3}}s^{2}_{12}e^{i\rho_{1}}=-t^{2}_{13}e^{i(\rho_{2}+2\delta)} \la{b} \eeq and \beq -\left(\frac{m_{1}}{m_{3}}-\frac{m_{2}}{m_{3}}e^{i\rho_{1}}\right)t_{23}s_{12}c_{12}-s_{13}e^{i(\rho_{2}+\delta)}+s_{13}e^{-i\delta}\left(\frac{m_{1}}{m_{3}}c^{2}_{12}+\frac{m_{2}}{m_{3}}s^{2}_{12}e^{i\rho_{1}}\right)=0\quad \la{c} \eeq Using (\ref{b}) in the last term of (\ref{c}) we obtain: \beq \left(\frac{m_{1}}{m_{3}}-\frac{m_{2}}{m_{3}}e^{i\rho_{1}}\right)t_{23}s_{12}c_{12}+s_{13}e^{i(\rho_{2}+\delta)}+s_{13}t^{2}_{13}e^{i(\rho_{2}+\delta)}=0 \la{d} \eeq which gives: \beq m_{3}s_{13}(1+t^{2}_{13})=|m_{1}-m_{2}e^{i\rho_{1}}|t_{23}s_{12}c_{12} \la{e} \eeq while from Eq. (\ref{b}) we have: \beq m_{3}t^{2}_{13}=|m_{1}c^{2}_{12}+m_{2}s^{2}_{12}e^{i\rho_{1}}|. \la{f} \eeq We can exclude the phase $\rho_{1}$ from (\ref{e}) and (\ref{f}) to obtain: \beq m^{2}_{3}(t^{4}_{13}+s^{2}_{13}\cot^{2}_{23}(1+t^{2}_{13})^{2})=m^{2}_{1}c^{2}_{12}+m^{2}_{2}s^{2}_{12} \la{g} \eeq From this, based on recent experimental data \cite{Gonzalez-Garcia:2014bfa}, the inverted hierarchical pattern (IH) is excluded. For normal hierarchical neutrinos, from (\ref{g}) with (\ref{nh1}) we get \beq m^{2}_{3}=\frac{\Delta m^{2}_{atm}+\Delta m^{2}_{sol}c^{2}_{12}}{1-s^{2}_{13}\cot^{2}_{23}(1+t^{2}_{13})^{2}-t^{4}_{13}}.
\la{h} \eeq Using $\sin^2\theta_{23}=0.49$, the best fit values for the remaining mixing angles \cite{Gonzalez-Garcia:2014bfa}, and also the best fit values for the atmospheric and solar neutrino mass squared differences: \beq \Delta m^{2}_{atm}=0.002382 ~\rm eV^{2},\quad \Delta m^{2}_{sol}=7.5\times10^{-5} ~\rm eV^{2} \la{i} \eeq from (\ref{h}) we obtain for NH: \beq m_{1}=0.00613 ~\rm eV,\quad m_{2}=0.0106 ~\rm eV,\quad m_{3}=0.0499 ~\rm eV. \la{k} \eeq Using these, from (\ref{f}) we predict: \beq \cos\rho_{1}=\frac{m^{2}_{3}t^{4}_{13}-m^{2}_{1}c^{4}_{12}-m^{2}_{2}s^{4}_{12}}{2m_{1}m_{2}c^{2}_{12}s^{2}_{12}} \Rightarrow \rho_{1}=\pm 3.036, \la{l} \eeq while from (\ref{b}) and (\ref{d}) we have: \beqs \delta=\arg[m_{1}c^{2}_{12}+m_{2}s^{2}_{12}e^{i\rho_{1}}]-\arg[m_{1}-m_{2}e^{i\rho_{1}}], \eeqs \beq \rho_{2}=\pm \pi-\arg[m_{1}c^{2}_{12}+m_{2}s^{2}_{12}e^{i\rho_{1}}]+2\arg[m_{1}-m_{2}e^{i\rho_{1}}]. \la{m} \eeq With the numbers given in (\ref{k}) and (\ref{l}), from (\ref{m}) we obtain: \beq \delta =\pm 0.378,\quad \rho_1=\pm 3.036,\quad \rho_2=\pm 2.696,\quad m_{\beta\beta}=0, \eeq where the neutrinoless double beta decay parameter $m_{\beta\beta }$ is defined as: $m_{\beta\beta}=|m_{1}c_{12}^{2}c_{13}^{2}+m_{2}e^{i\rho_{1}}c_{13}^{2}s_{12}^{2}+m_{3}e^{i\rho_{2}}s_{13}^{2}e^{2i\delta}|$. We summarize our results in Table \ref{tab1}. \begin{center} \begin{tabular}{|l|r|r|r|} \hline \multicolumn{1}{|c|}{\sffamily $\delta$} &\multicolumn{1}{|c|}{\sffamily $\rho_{1}$}&\multicolumn{1}{|c|}{\sffamily $\rho_{2}$}&\multicolumn{1}{|c|}{\sffamily works with}\\ \hline $\delta =\pm 0.378$&$\rho_1=\pm 3.036$&$\rho_2=\pm 2.696$&\makecell{NH, $\sin^{2}\theta_{23}=0.49$ and best\\ fit values for remaining oscillation parameters,\\ $(m_{1},m_{2},m_{3})=(0.00613,0.0106,0.0499)$, $m_{\beta\beta}=0$}\\ \hline \end{tabular} \captionof{table}{Results from $P_{1}$ type texture. Masses are given in eVs.} \label{tab1} \end{center} TYPE $P_{2}$ \\ In this case $M^{(1,1)}_{\nu}=0$ and $M^{(1,2)}_{\nu}=0$, and together with Eq. (\ref{b}) the following relation holds: \beq -\left(\frac{m_{1}}{m_{3}}-\frac{m_{2}}{m_{3}}e^{i\rho_{1}}\right)s_{12}c_{12}+s_{13}t_{23}e^{i(\rho_{2}+\delta)}-s_{13}t_{23}e^{-i\delta}\left(\frac{m_{1}}{m_{3}}c^{2}_{12}+\frac{m_{2}}{m_{3}}s^{2}_{12}e^{i\rho_{1}}\right)=0. \la{b1} \eeq Using (\ref{b}) in the last term of (\ref{b1}) we obtain: \beq -\left(\frac{m_{1}}{m_{3}}-\frac{m_{2}}{m_{3}}e^{i\rho_{1}}\right)s_{12}c_{12}+s_{13}t_{23}e^{i(\rho_{2}+\delta)}+s_{13}t_{23}t^{2}_{13}e^{i(\rho_{2}+\delta)}=0 \la{c1} \eeq which gives: \beq m_{3}s_{13}t_{23}(1+t^{2}_{13})=|m_{1}-m_{2}e^{i\rho_{1}}|s_{12}c_{12}. \la{d1} \eeq Excluding the phase $\rho_{1}$ from Eqs. (\ref{d1}) and (\ref{f}) [which is derived from Eq. (\ref{b}), i.e. the condition $M^{(1,1)}_{\nu}=0$] we obtain: \beq m^{2}_{3}(t^{4}_{13}+s^{2}_{13}t^{2}_{23}(1+t^{2}_{13})^2)=m^{2}_{1}c^{2}_{12}+m^{2}_{2}s^{2}_{12} \la{e1} \eeq The last relation makes it obvious that the IH case is excluded. On the other hand, for NH neutrinos, from (\ref{e1}) with (\ref{nh1}) we get: \beq m^{2}_{3}=\frac{\Delta m^{2}_{atm}+\Delta m^{2}_{sol}c^{2}_{12}}{1-s^{2}_{13}t^{2}_{23}(1+t^{2}_{13})^{2}-t^{4}_{13}}.\la{f1} \eeq After finding the value of $m_3$, the remaining masses follow: \beq (m_{1},m_{2},m_{3})=(0.00501,0.01,0.04982)~\rm eV. \eeq
Eqs. (\ref{b1}) and (\ref{c1}) then allow us to calculate the phases: \beq \cos\rho_{1}=\frac{m^{2}_{3}t^{4}_{13}-m^{2}_{1}c^{4}_{12}-m^{2}_{2}s^{4}_{12}}{2m_{1}m_{2}c^{2}_{12}s^{2}_{12}} \Rightarrow \rho_{1}=\mp 2.828,\la{i1} \eeq\beqs \delta=\pm\pi+\arg[m_{1}c^{2}_{12}+m_{2}s^{2}_{12}e^{i\rho_{1}}]-\arg[m_{1}-m_{2}e^{i\rho_{1}}], \eeqs \beq \rho_{2}=\mp \pi-\arg[m_{1}c^{2}_{12}+m_{2}s^{2}_{12}e^{i\rho_{1}}]+2\arg[m_{1}-m_{2}e^{i\rho_{1}}]. \la{j1} \eeq Using the best fit values of the measured parameters \cite{Gonzalez-Garcia:2014bfa} for NH, we obtain the results \beq \delta =\pm 1.924,\quad \rho_1=\mp 2.828,\quad \rho_2=\mp 1.715,\quad m_{\beta\beta}=0, \la{n} \eeq which are summarized in Table \ref{tab2}: \begin{center} \begin{tabular}{|l|r|r|r|} \hline \multicolumn{1}{|c|}{\sffamily $\delta$} &\multicolumn{1}{|c|}{\sffamily $\rho_{1}$}&\multicolumn{1}{|c|}{\sffamily $\rho_{2}$}&\multicolumn{1}{|c|}{\sffamily works with}\\ \hline $\delta =\pm 1.924$&$\rho_1=\mp 2.828$&$\rho_2=\mp 1.715$&\makecell{NH and best fit\\ values of oscillation parameters,\\ $(m_{1},m_{2},m_{3})=(0.00501,0.01,0.04982)$, $m_{\beta\beta}=0$}\\ \hline \end{tabular} \captionof{table}{Results from $P_{2}$ type texture. Masses are given in eVs.} \label{tab2} \end{center} The $P_1$ and $P_2$ neutrino textures were studied in \cite{{Fritzsch:2011qv},{Dev:2006qe}, {Xing:2002ta},{Grimus:2011sf},{Dev:2014dla},{Zhou:2015qua},{Kitabayashi:2015jdj}}. Our analytical expressions, allowing thorough investigations, are compact and exact. To analyze the textures $P_{3}$ and $P_{4}$ it is convenient to note that the equation $M^{(i,j)}_{\nu}=0$ can be written as: $A_{2}\times m_{2} e^{i\rho_{1}}+A_{3}\times m_{3} e^{i\rho_{2}}=A_{1}\times m_{1}$. When two mass matrix elements are equal to zero, we have a pair of similar equations which we write in matrix form: \beq \left(\begin{array}{ccc} A_{2} &A_{3}\\ B_{2}& B_{3} \end{array}\right) \binom{m_{2} e^{i\rho_{1}}}{m_{3} e^{i\rho_{2}}}= \binom{A_{1}m_{1}}{B_{1}m_{1}}.\eeq From these equations we have: \beq m_{2} e^{i\rho_{1}}=\frac{1}{A_{2}B_{3}-A_{3}B_{2}}(B_{3}A_{1}-A_{3}B_{1}) m_{1}, \quad m_{3} e^{i\rho_{2}}=\frac{1}{A_{2}B_{3}-A_{3}B_{2}}(A_{2}B_{1}-B_{2}A_{1}) m_{1}\la{p1}\eeq or, \beq m^{2}_{2}=\frac{|B_{3}A_{1}-A_{3}B_{1}|^{2}}{|A_{2}B_{3}-A_{3}B_{2}|^{2}}m^{2}_{1}, \quad m^{2}_{3}=\frac{|A_{2}B_{1}-B_{2}A_{1}|^{2}}{|A_{2}B_{3}-A_{3}B_{2}|^{2}}m^{2}_{1}\la{p2}\eeq and \beq \frac{\Delta m^{2}_{sol}}{\pm\Delta m^{2}_{atm}}=\frac{|B_{3}A_{1}-A_{3}B_{1}|^{2}-|A_{2}B_{3}-A_{3}B_{2}|^{2}}{|A_{2}B_{1}-B_{2}A_{1}|^{2}-|B_{3}A_{1}-A_{3}B_{1}|^{2}}, \la{p3} \eeq where the ``+'' and ``-'' signs correspond to normal and inverted hierarchies, respectively. Eq. (\ref{p3}) is the relation for calculating the value of $\delta $. At the same time (after knowing $\delta $), from Eqs. (\ref{p2}) and (\ref{nh1})/(\ref{ih1}) the neutrino masses can be calculated. After these, with the relations in Eq. (\ref{p1}), the phases $\rho_1$ and $\rho_2$ can be found. Below, we use this procedure for the textures $P_3$ and $P_4$. \\ \\ TYPE $P_{3}$ \\ For this case we have: \beqs A_{1}=-U^{\ast}_{11}U^{\dagger}_{12},\quad A_{2}=U^{\ast}_{12}U^{\dagger}_{22},\quad A_{3}=U^{\ast}_{13}U^{\dagger}_{32},\quad B_{1}=-U^{\ast}_{21}U^{\dagger}_{12},\quad B_{2}=U^{\ast}_{22}U^{\dagger}_{22},\quad B_{3}=U^{\ast}_{23}U^{\dagger}_{32}\eeqs and using these in Eqs. (\ref{p1})-(\ref{p3}), for NH and IH neutrino mass orderings, we get results which are summarized in Table \ref{tab3'}; a compact numerical sketch of this procedure is given just below.
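The sketch (our own; it reuses the \texttt{pmns} helper defined above and the best fit NH inputs of \cite{Gonzalez-Garcia:2014bfa}) simply scans $\delta$ for a solution of Eq. (\ref{p3}):
\begin{verbatim}
import numpy as np

def coeffs(U, i, j):
    """A1, A2, A3 of the condition (M_nu)_{ij} = 0 (0-based indices)."""
    Uc = np.conj(U)
    return -Uc[i,0]*Uc[j,0], Uc[i,1]*Uc[j,1], Uc[i,2]*Uc[j,2]

def ratio(delta, zeros, s12sq=0.304, s13sq=0.0218, s23sq=0.452):
    U = pmns(s12sq, s13sq, s23sq, delta)
    (A1,A2,A3), (B1,B2,B3) = [coeffs(U, *z) for z in zeros]
    r2 = abs(B3*A1 - A3*B1)**2      # proportional to m2^2
    r3 = abs(A2*B1 - B2*A1)**2      # proportional to m3^2
    d  = abs(A2*B3 - A3*B2)**2      # proportional to m1^2
    return (r2 - d)/(r3 - r2)       # l.h.s. of the ratio relation, NH sign

zeros  = [(0, 1), (1, 1)]           # P3: (M_nu)_{12} = (M_nu)_{22} = 0
target = 7.5e-5/2.382e-3            # Dm^2_sol / Dm^2_atm
ds   = np.linspace(0.01, np.pi - 0.01, 5000)
best = ds[np.argmin([abs(ratio(x, zeros) - target) for x in ds])]
print(best)                         # close to the value 1.547 quoted for P3
\end{verbatim}
The masses and the phases $\rho_{1,2}$ then follow from Eq. (\ref{p1}); $P_{1}$, $P_{2}$ and $P_{4}$ can be handled by the same routine with the appropriate pair of zero positions.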
\begin{center} \begin{tabular}{|l|r|r|r|} \hline \multicolumn{1}{|c|}{\sffamily $\delta$} &\multicolumn{1}{|c|}{\sffamily $\rho_{1}$}&\multicolumn{1}{|c|}{\sffamily $\rho_{2}$}&\multicolumn{1}{|c|}{\sffamily works with}\\ \hline $\delta =\pm 1.547$&$\rho_1=\pm 0.0615$&$\rho_2=\mp 3.098$&\makecell{NH and best fit values\\ of oscillation parameters,\\ $(m_{1},m_{2},m_{3})=$\\$(0.07213,0.07265,0.08752)$,\\ $m_{\beta\beta}=0.0726$}\\ \hline $\delta =\pm 1.579$&$\rho_1=\mp 0.0998$&$\rho_2=\pm 3.0726$&\makecell{IH and best fit values\\ of oscillation parameters,\\ $(m_{1},m_{2},m_{3})=$\\$(0.07195,0.07247,0.05294)$,\\ $m_{\beta\beta}=0.0716$}\\ \hline \end{tabular} \captionof{table}{Results from $P_{3}$ type texture. Masses are given in eVs.} \label{tab3'} \end{center} TYPE $P_{4}$ \\ For this case we have:\beqs A_{1}=-U^{\ast}_{11}U^{\dagger}_{13},\quad A_{2}=U^{\ast}_{12}U^{\dagger}_{23},\quad A_{3}=U^{\ast}_{13}U^{\dagger}_{33},\quad B_{1}=-U^{\ast}_{31}U^{\dagger}_{13},\quad B_{2}=U^{\ast}_{32}U^{\dagger}_{23},\quad B_{3}=U^{\ast}_{33}U^{\dagger}_{33}.\eeqs For this case, NH works with $\sin^2 \theta_{23}$ larger by $1\sigma $ than the best fit value, while the IH case requires a lower value of $\sin^2 \theta_{23}$. Using the above relations in Eqs. (\ref{p1})-(\ref{p3}), for the NH and IH cases we get results which are summarized in Table \ref{tab4}. \begin{center} \begin{tabular}{|l|r|r|r|} \hline \multicolumn{1}{|c|}{\sffamily $\delta$} &\multicolumn{1}{|c|}{\sffamily $\rho_{1}$}&\multicolumn{1}{|c|}{\sffamily $\rho_{2}$}&\multicolumn{1}{|c|}{\sffamily works with}\\ \hline $\delta =\pm 1.575$&$\rho_1=\mp 0.0127$&$\rho_2=\pm 3.133$&\makecell{NH and $\sin^2 \theta_{23}=0.51$ and best fit values\\ for remaining oscillation parameters,\\ $(m_{1},m_{2},m_{3})=$\\$(0.171701,0.171919,0.1787)$,\\ $m_{\beta\beta}=0.1719$}\\ \hline $\delta =\pm 1.5705$&$\rho_1=\pm 0.00622$&$\rho_2=\mp 3.137$&\makecell{IH and $\sin^2 \theta_{23}=0.495$ and best fit values\\ for remaining oscillation parameters,\\ $(m_{1},m_{2},m_{3})=$\\$(0.2513,0.25145,0.2465)$,\\ $m_{\beta\beta}=0.2512$}\\ \hline \end{tabular} \captionof{table}{Results from $P_{4}$ type texture. Masses are given in eVs.} \label{tab4} \end{center} Our results for the textures $P_3$ and $P_4$ are compatible with those of \cite{{Fritzsch:2011qv},{Dev:2006qe}, {Xing:2002ta},{Grimus:2011sf},{Dev:2014dla},{Zhou:2015qua},{Kitabayashi:2015jdj},{Meloni:2014yea}}, obtained before.\footnote{Some of these works used earlier experimental data. We have made sure that, with those inputs, we would get similar results.} \subsection{Predictions from real one zero entry neutrino textures - $P_{5,6,7}$} Now we turn to the analysis of the one texture zero neutrino mass matrices we have obtained in Section 3. They fall in the category of the $P_{5,6,7}$ type mass matrices given in Eq. (\ref{ps1}). One texture zero neutrino mass matrices were investigated in \cite{{Merle:2006du},{Lashin:2011dn},{Deepthi:2011sk},{Liao:2013rca},{Gautam:2015kya}}. In our construction, these mass matrices are real. This makes them more predictive. \\ \\ TYPE $P_{5}$ \\ In this case, our construction implies $\phi=0$ and all elements of the lepton mixing matrix are real (i.e. $\delta=0$ or $\pi$). Therefore, together with $M_{\nu }^{(1,1)}=0$, we have to match the phases of both sides of Eq. (\ref{nu1}). This turns out to be impossible for $\rho_{1},\rho_{2}$ not equal to either 0 or $\pi$, because we have only three free phases $\omega_{1,2,3}$.
Thus, it turns out that only the normal hierarchical scenario is allowed, with $\delta =0$ or $\pi $. With these, from the condition $M_{\nu }^{(1,1)}=0$, we get \beq \tan\theta_{13}=\left(-c_1c_2s_{12}^{2}\frac{m_{2}}{m_{3}}-c_2c_{12}^{2}\frac{m_{1}}{m_{3}}\right)^{\frac{1}{2}}, \la{tta} \eeq where $c_1$ and $c_2$ stand for $\cos \rho_1$ and $\cos \rho_2$, respectively. This relation can be satisfied by a special selection of the neutrino masses and $\rho_{1,2}=0$ or $\pi $. Since the two mass squared differences are fixed from the neutrino data, only one free mass is available, which we choose to be $m_{3}$. The latter is tightly constrained via Eq. (\ref{tta}). Thus, the model predicts the three neutrino masses and the phases. For the best fit values of the oscillation parameters \cite{Gonzalez-Garcia:2014bfa} for NH we obtain the solutions: \\ \beqs m_{1}=0.002268~\rm eV,\quad m_{2}=0.008952~\rm eV, \quad m_{3}=0.04962~\rm eV, \eeqs \beq {\rm with}\quad m_{\beta\beta}=0, \quad \delta=0 \quad{\rm or}\quad \pi, \quad \rho_{1}=\pi, \quad \rho_{2}=0 \eeq \\ and \beqs m_{1}=0.006245~\rm eV,\quad m_{2}=0.010677~\rm eV, \quad m_{3}=0.04996~\rm eV, \eeqs \beq {\rm with}\quad m_{\beta\beta}=0, \quad \delta=0\quad{\rm or} \quad\pi, \quad \rho_{1}=\pi, \quad \rho_{2}=\pi. \eeq By a similar analysis, we can easily make sure that the inverted hierarchy is not allowed within our construction for this $P_{5}$ type texture. \\ \\ TYPE $P_{6}$ \\ For this case, the condition $M_{\nu }^{(2,2)}=0$ gives the following expression for $\theta_{12}$: \beqs \tan \theta_{12}=\frac{c_{23}s_{23}\hat{s}_{13}(m_{2}c_{1}-m_{1})}{m_{1}c_{23}^{2}+m_{2}s_{23}^{2}s_{13}^{2}c_{1}+m_{3}s_{23}^{2}c_{13}^{2}c_{2}} \eeqs \beq \pm\frac{\sqrt{c^2_{23}s^2_{23}s^2_{13}(m_{2}c_{1}-m_{1})^2-(m_{1}c_{23}^{2}+m_{2}s_{23}^{2}s_{13}^{2}c_{1}+m_{3}s_{23}^{2}c_{13}^{2}c_{2})(m_{1}s_{23}^{2}s_{13}^{2}+m_{2}c_{23}^{2}c_{1}+m_{3}s_{23}^{2}c_{13}^{2}c_{2})}}{m_{1}c_{23}^{2}+m_{2}s_{23}^{2}s_{13}^{2}c_{1}+m_{3}s_{23}^{2}c_{13}^{2}c_{2}} \la{tpm1} \eeq where $c_{1}$ and $c_{2}$ stand for $\cos\rho_{1}$ and $\cos\rho_{2}$, respectively, $\hat{s}_{13}=\pm{s}_{13}$, and the ``+'' sign corresponds to $\delta=0$ and the ``-'' sign to $\delta=\pi$; thus, this equation includes all cases. Some cases work with the best fit values (BFV) of the oscillation parameters \cite{Gonzalez-Garcia:2014bfa}, while others work only with deviations from the BFV. We will allow some of these parameters to vary within a 3$\sigma$ range. Results are summarized in Table \ref{tab6}; a direct numerical cross-check is sketched just below.
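Rather than using the closed form (\ref{tpm1}), one can equivalently impose the defining condition $M_{\nu }^{(2,2)}=0$ directly and scan over the free mass $m_3$. A sketch of this (our own; it reuses the \texttt{pmns} helper above, with IH inputs as quoted in \cite{Gonzalez-Garcia:2014bfa}):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def M22_IH(m3, s1, s2, delta=0.0):
    """(M_nu)_{22} for real U (delta = 0 or pi), rho_1, rho_2 in {0, pi}."""
    U = pmns(0.304, 0.0219, 0.579, delta).real   # IH best-fit angles
    m2 = np.sqrt(m3**2 + 2.449e-3)               # IH mass relations
    m1 = np.sqrt(m2**2 - 7.5e-5)
    return U[1,0]**2*m1 + s1*U[1,1]**2*m2 + s2*U[1,2]**2*m3

# First row of the P6 results: delta = 0, rho_1 = 0 (s1 = +1),
# rho_2 = pi (s2 = -1):
m3 = brentq(lambda m: M22_IH(m, +1, -1), 1e-3, 0.2)
print(m3)   # ~ 0.0585 eV, consistent with the quoted masses
\end{verbatim}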
\begin{center} \begin{tabular}{|l|r|r|r|r|} \hline \multicolumn{1}{|c|}{\sffamily $\delta$} &\multicolumn{1}{|c|}{\sffamily p}&\multicolumn{1}{|c|}{\sffamily $\rho_{1}$}&\multicolumn{1}{|c|}{\sffamily $\rho_{2}$}&\multicolumn{1}{|c|}{\sffamily works with}\\ \hline 0&-&0&$\pi$&\makecell{IH, with best fit values of oscillation parameters,\\ $(m_{1},m_{2},m_{3})=(0.07613,0.07662,0.0585)$, $m_{\beta\beta}=0.0733$}\\ \hline $\pi$&-&0&$\pi$&\makecell{IH, with best fit values of oscillation parameters,\\ $(m_{1},m_{2},m_{3})=(0.07635,0.07684,0.05878)$, $m_{\beta\beta}=0.07354$}\\ \hline 0&-&0&$\pi$&\makecell{NH, with best fit values of oscillation parameters,\\ $(m_{1},m_{2},m_{3})=(0.06353,0.06412,0.08058)$, $m_{\beta\beta}=0.06056$}\\ \hline $\pi$&-&0&$\pi$&\makecell{NH, with best fit values of oscillation parameters,\\ $(m_{1},m_{2},m_{3})=(0.06315,0.06374,0.08028)$, $m_{\beta\beta}=0.0602$}\\ \hline $\pi$&+&$\pi$&0&\makecell{IH, with best fit values of oscillation parameters,\\ $(m_{1},m_{2},m_{3})=(0.05735, 0.058, 0.03024)$, $m_{\beta\beta}= 0.02246$}\\ \hline 0&+&$\pi$&0&\makecell{IH, with best fit values of oscillation parameters,\\$(m_{1},m_{2},m_{3})=(0.04879,0.04955, 0.002516)$, $m_{\beta\beta}=0.0185$}\\ \hline $\pi$&+&$\pi$&0&\makecell{NH, $\sin^2 \theta_{13}=0.0218$, $\sin^{2}\theta_{23}\in[0.382, 0.4]$, $m_{3}\in[0.12, 0.3]$, $\sin^2 \theta_{12}=[0.27, 0.297] $,\\ $m_{\beta\beta}\in[0.052, 0.14]$, $\sum m_{i}\in[0.34, 0.9]$}\\ \hline 0&+&$\pi$&$\pi$&\makecell{IH, $\sin^2 \theta_{13}=0.0218$, $\sin^{2}\theta_{23}\in[0.552, 0.644]$, $m_{3}\in[0, 0.002]$, $\sin^2 \theta_{12}=[0.313, 0.344] $,\\ $m_{\beta\beta}\in[0.0146, 0.0176]$}\\ \hline \end{tabular} \captionof{table}{Results from $P_{6}$ type texture. ``p'' stands for the sign of the square root in (\ref{tpm1}). Masses are given in eVs.} \label{tab6} \end{center} TYPE $P_{7}$ \\ For this case, the condition $M_{\nu }^{(3,3)}=0$ gives: \\ \beqs \tan\theta_{12}=\frac{c_{23}s_{23}\hat{s}_{13}(m_{1}-m_{2}c_{1})}{m_{1}s_{23}^{2}+m_{2}c_{23}^{2}s_{13}^{2}c_{1}+m_{3}c_{23}^{2}c_{13}^{2}c_{2}} \eeqs \beq \pm\frac{\sqrt{c^2_{23}s^2_{23}s^2_{13}(m_{1}-m_{2}c_{1})^2-(m_{1}s_{23}^{2}+m_{2}c_{23}^{2}s_{13}^{2}c_{1}+m_{3}c_{23}^{2}c_{13}^{2}c_{2})(m_{1}c_{23}^{2}s_{13}^{2}+m_{2}s_{23}^{2}c_{1}+m_{3}c_{23}^{2}c_{13}^{2}c_{2})}}{m_{1}s_{23}^{2}+m_{2}c_{23}^{2}s_{13}^{2}c_{1}+m_{3}c_{23}^{2}c_{13}^{2}c_{2}} \la{tpm2} \eeq Notations here are similar to those for case $P_6$ [see the comment after Eq. (\ref{tpm1})]. Results are summarized in Table \ref{tab7}. As above, we have used data from Ref. \cite{Gonzalez-Garcia:2014bfa}. \\ \begin{center} \begin{tabular}{|l|r|r|r|r|} \hline \multicolumn{1}{|c|}{\sffamily $\delta$} &\multicolumn{1}{|c|}{\sffamily p}&\multicolumn{1}{|c|}{\sffamily $\rho_{1}$}&\multicolumn{1}{|c|}{\sffamily $\rho_{2}$}&\multicolumn{1}{|c|}{\sffamily works with}\\ \hline 0&+&$\pi$&0&\makecell{IH, with best fit values of oscillation parameters,\\ $(m_{1},m_{2},m_{3})=(0.09997, 0.10034,0.08729)$, $m_{\beta\beta}=0.04$}\\ \hline 0&-&0&$\pi$&\makecell{IH, $\sin^{2}\theta_{23}\in[0.389, 0.487]$, and bfv for remaining osc. parameters, \\$m_{3}\in[0.04496, 0.4138]$, $m_{\beta\beta}\in[0.064, 0.398]$, $\sum m_{i}\in[0.178, 1.25]$}\\ \hline $\pi$&+&$\pi$&0&\makecell{IH, with best fit values of oscillation parameters,\\ $(m_{1},m_{2},m_{3})=(0.05004, 0.05078,0.01142)$, $m_{\beta\beta}= 0.019$}\\ \hline $\pi$&+&$\pi$&$\pi$&\makecell{IH, $\sin^{2}\theta_{23}\in[0.389, 0.448]$, $\sin^2 \theta_{12}=[0.325, 0.344]$ \\and bfv for remaining osc.
parameters, $m_{3}\in[0, 0.001379]$, $m_{\beta\beta}\in[0.0146, 0.0165]$}\\
\hline
$\pi$&-&0&$\pi$&\makecell{IH, $\sin^{2}\theta_{23}\in[0.389, 0.488]$, and bfv for remaining osc. parameters, \\$m_{3}\in[0.04473, 0.6183]$, $m_{\beta\beta}\in[0.064, 0.59]$, $\sum m_{i}\in[0.178, 1.86]$}\\
\hline
0&+&$\pi$&0&\makecell{NH, $\sin^{2}\theta_{23}\in[0.621, 0.643]$, and bfv for remaining osc. parameters, \\$m_{3}\in[0.1246, 0.5928]$, $m_{\beta\beta}\in[0.046, 0.24]$, $\sum m_{i}\in[0.354, 1.77]$}\\
\hline
0&-&0&$\pi$&\makecell{NH, $\sin^{2}\theta_{23}\in[0.49, 0.643]$, and bfv for remaining oscillation parameters, \\$m_{3}\in[0.05803, 0.5187]$, $m_{\beta\beta}\in[0.0286, 0.4938]$, $\sum m_{i}\in[0.1196, 1.551]$}\\
\hline
$\pi$&-&0&$\pi$&\makecell{NH, $\sin^{2}\theta_{23}\in[0.49, 0.643]$, and bfv for remaining oscillation parameters, \\$m_{3}\in[0.05821, 0.5209]$, $m_{\beta\beta}\in[0.02895, 0.4959]$, $\sum m_{i}\in[0.1205, 1.558]$}\\
\hline
\end{tabular}
\captionof{table}{Results from the $P_{7}$-type texture. ``p'' stands for the sign of the square root in (\ref{tpm2}). Masses are given in eV.}
\label{tab7}
\end{center}
\section{Relating cosmological CP and $\delta$}
\la{cosmology}
As we have already seen, for certain $2T_0Y_{32}$ textures the complex phases cannot be factored out. Such couplings are $T_{4},T_{7},T_{9}$, and they give rise to complex mass matrices. Here we calculate the phase $\phi$ in terms of the CP phase entering neutrino oscillations. Recall that $\delta $ is predicted from the neutrino mass matrices (\ref{t124}), (\ref{t134}), (\ref{t237}), (\ref{t239}), which we have considered. Keeping in mind (\ref{pse}), we use (\ref{nu1}) and (\ref{nu2}) to find the numerical value of the phase $\phi$ in each case.
\\
\\
Case of $M^{(12)}_{T_{4}}$ (Texture $P_1$): \\
Equating the (2,2), (3,3) and (2,3) matrix elements of both sides of Eq.~(\ref{nu1}), we get the relations:
\beq
2a_{2}b_{2}|\bar m|e^{i\phi_{\bar m}}=e^{2i\omega_{2}}\mathcal{A}_{22},\quad 2a_{3}b_{3}e^{i\phi}|\bar m|e^{i\phi_{\bar m}}=e^{2i\omega_{3}}\mathcal{A}_{33},\quad (a_{3}b_{2}+a_{2}b_{3}e^{i\phi})|\bar m|e^{i\phi_{\bar m}}=e^{i(\omega_{2}+\omega_{3})}\mathcal{A}_{23},
\la{q0}
\eeq
with
\beq
\mathcal{A}_{ij}=U^{\ast}_{i1}U^{\ast}_{j1}m_{1}+U^{\ast}_{i2}U^{\ast}_{j2}m_{2}e^{i\rho_{1}}+U^{\ast}_{i3}U^{\ast}_{j3}m_{3}e^{i\rho_{2}}.
\la{qq}
\eeq
Note that all the $\mathcal{A}_{ij}$ are determined from the neutrino sector. Dividing the last relation in (\ref{q0}) in turn by the first and the second relations, and then multiplying the two resulting equations, we get the following relation:
\beq
xe^{i\phi}=\left(\frac{\mathcal{A}_{23}}{\sqrt{\mathcal{A}_{22}\mathcal{A}_{33}}}\pm \sqrt{\frac{\mathcal{A}^{2}_{23}}{\mathcal{A}_{22}\mathcal{A}_{33}}-1}\right)^{2}, \quad x\equiv \frac{a_{2}b_{3}}{a_{3}b_{2}}.
\eeq
Therefore, we have:
\beq
\phi ={\rm Arg}\left[\left(\frac{\mathcal{A}_{23}}{\sqrt{\mathcal{A}_{22}\mathcal{A}_{33}}}\pm \sqrt{\frac{\mathcal{A}^{2}_{23}}{\mathcal{A}_{22}\mathcal{A}_{33}}-1}\right)^{2}\right].
\eeq
From here, using the results given in Table \ref{tab1}, we find the numerical value of $\phi$:
\beq
\phi=\pm1.287.
\eeq
\\
In a very similar way, for the remaining three neutrino mass matrices (\ref{t134}), (\ref{t237}), (\ref{t239}), we get for the phase $\phi$
\\
\beq
\phi ={\rm Arg}\left[\left(\frac{\mathcal{A}_{23}}{\sqrt{\mathcal{A}_{22}\mathcal{A}_{33}}}\pm \sqrt{\frac{\mathcal{A}^{2}_{23}}{\mathcal{A}_{22}\mathcal{A}_{33}}-1}\right)^{2}\right], \quad \phi ={\rm Arg}\left[\left(\frac{\mathcal{A}_{13}}{\sqrt{\mathcal{A}_{11}\mathcal{A}_{33}}}\pm \sqrt{\frac{\mathcal{A}^{2}_{13}}{\mathcal{A}_{11}\mathcal{A}_{33}}-1}\right)^{2}\right],
\eeq
\beq
\phi={\rm Arg}\left[\left(\frac{\mathcal{A}_{12}}{\sqrt{\mathcal{A}_{11}\mathcal{A}_{22}}}\pm \sqrt{\frac{\mathcal{A}^{2}_{12}}{\mathcal{A}_{11}\mathcal{A}_{22}}-1}\right)^{2}\right],
\eeq
which yield
\beqs
\phi=\pm1.169,\quad \phi^{\rm NH}=\pm2.957\quad {\rm and}\quad\phi^{\rm IH}=\pm3.124,\eeqs
\beq
\phi^{\rm NH}=\pm3.058\quad{\rm and}\quad \phi^{\rm IH}=\pm3.136
\eeq
respectively. For these we have used the results given in Tables \ref{tab2}, \ref{tab3'} and \ref{tab4}, respectively. Note that the $\phi$ phases could be determined in all four cases because, with a predictive neutrino sector, there is no undetermined parameter left. This makes the whole scenario very attractive for studying the baryon asymmetry via leptogenesis (for similar studies see \cite{Frampton:2002qc},\cite{Frampton:2002yf},\cite{{Babu:2007zm},{Babu:2008kp},{Harigaya:2012bw},{Ge:2010js}},\cite{Frampton:2004df}). As mentioned, since $\phi$ enters the coupling of the RHN states with $l$ and $h_{u}$ (\ref{r21}), it will control the CP asymmetric decays of the $N$ states. Thus, it is interesting to look into the details of leptogenesis within the scenarios considered here. This will be pursued in a subsequent work \cite{AZ}.
\section{Conclusions}
\la{conclusion}
Within the MSSM augmented with two quasi-degenerate right-handed neutrinos, we have analyzed all possible two texture zero $3\times 2$ Yukawa matrices, which together with minimal $\rm d=5$ operator couplings contribute to the light neutrino mass matrices. All viable neutrino mass matrices have been investigated and predictive relations have been derived. Cosmological CP violation has been related to the leptonic CP violating phase $\delta$. Further work will be focused on the details of realizations of resonant leptogenesis. It is also desirable to obtain the texture zeros with the help of flavor symmetries, in the spirit of Refs.~\cite{Fritzsch:2011qv},\cite{Babu:2007zm},\cite{{Pakvasa:1977in},{Binetruy:1996xk},{Lola:1998xp},{Vissani:1998xg},{Shafi:1998dv},{Barbieri:1999km},{Berezhiani:2000cg},{Ma:2001dn},{Chkareuli:2001dq},{Babu:2004tn},{Hagedorn:2006ug},{Nandi:2007cw},{King:2013eh}}. These and related issues will be addressed elsewhere.
\subsubsection*{Acknowledgments}
The research of Z.T. is partially supported by the Shota Rustaveli National Science Foundation (Contracts No. 31/89 and No. DI/12/6-200/13).
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:intro}
Deep, regularized neural networks work better in practice than shallow unconstrained neural networks \cite{deepbook}. This regularization takes classic forms such as L2-norm ridge regression, L1-norm LASSO, and architectural constraints such as convolutional layers \cite{lecun1998gradient}, but also uses modern techniques such as dropout \cite{srivastava2014dropout}. Recently, especially in the subfield of autoencoding neural networks, regularization has been accomplished with variational methods \cite{VAE}\cite{rezende2014stochastic}. In this paper we propose Information Theoretic-Learning \cite{itlbook} divergence measures for variational regularization.

In deep learning, variational regularization forces the function implemented by a neural network to be as close as possible to an imposed prior, which is a stronger restriction than that imposed by point-wise regularization methods such as L1 or L2 norms. Variational methods for deep learning were popularized by the variational autoencoder (VAE) framework proposed by \cite{VAE} and \cite{rezende2014stochastic}, which also brought the attention of deep learning researchers to the reparametrization trick. The Gaussian reparametrization trick works as follows: the encoder (deep) network outputs a mean $\mu$ and a standard deviation $\sigma$, from which we sample a latent factor $z=\mu + \sigma \cdot \epsilon$, where $\epsilon \sim N(0, 1)$. This latent factor is then fed forward to the decoder network, and the parameters $\mu$ and $\sigma$ are regularized using the KL-divergence $KL(N(\mu, \sigma) \| N(0, 1))$ between the inferred distribution and the imposed prior, which has a simple form \cite{VAE}. After training, one can generate data from a VAE by first sampling from the Gaussian prior distribution and feeding the sample to the VAE's decoder. This approach is similar to the inverse cumulative distribution method and does not involve estimation of the partition function, rejection sampling, or other complicated approaches \cite{mcmcbook}. VAE's methodology has been successfully extended to convolutional autoencoders \cite{kulkarni2015deep} and to more elaborate architectures such as Laplacian pyramids for image generation \cite{denton2015deep}.

Unfortunately, VAE cannot be used when there does not exist a simple closed form solution for the KL-divergence. To cope with that, generative adversarial networks (GAN) were proposed \cite{goodfellow2014generative}. GAN uses two neural networks that are trained competitively---a generator network $G$ for sampling data and a discriminator network $D$ for discerning the outputs of $G$ from real data. Unfortunately, training $G$ to match a high dimensional dataset distribution using only $D$'s binary ``fake'' or ``legit'' outputs is not a stable or simple process. Makhzani et al. proposed adversarial autoencoders \cite{makhzani2015adversarial}, which use an adversarial discriminator $D$ to tell the low dimensional codes in the output of the encoder from data sampled from a desired distribution. In this way adversarial autoencoders can approximate variational regularization as long as it is possible to sample from the desired distribution.
We note that although this partially solves the problem of generalized functional regularization for neural networks \footnote{We still have a problem when we cannot sample from the desired distribution.}, adversarial autoencoders require us to train a third network, the discriminator, in addition to the encoder and decoder already being trained. Here we observe that, assuming we can sample from the desired distribution, we can use empirical distribution divergence measures proposed by Information Theoretic-Learning (ITL) as a measure of how close the function implemented by an encoder network is to a desired prior distribution. Thus, we propose Information Theoretic-Learning Autoencoders (ITL-AE).

In the next section of this paper we review ITL's Euclidean and Cauchy-Schwarz divergence measures \cite{principe2000information}. In Section 3 we propose the ITL-AE and run experiments to illustrate the proposed method in Section 4. We conclude the paper afterwards.
\section{Information Theoretic-Learning}
\label{sec:itl}
Information-theoretic learning (ITL) is a field at the intersection of machine learning and information theory \cite{itlbook} which encompasses a family of algorithms that compute and optimize information-theoretic descriptors such as entropy, divergence, and mutual information. ITL objectives are computed directly from samples (non-parametrically) using Parzen windowing and Renyi's entropy \cite{renyi1961measures}.
\subsection{Parzen density estimation}
Parzen density estimation is a nonparametric method for estimating the pdf of a distribution empirically from data. For samples $x_i$ drawn from a distribution $p$, the Parzen window estimate of $p$ can be computed as
\begin{equation}\label{eq:parzen}
\hat{p}(x)=\frac{1}{N}\sum_{i=1}^N G_\sigma (x - x_i).
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[scale=0.37]{parzen_example.eps}
\caption{One-dimensional Parzen windowing example with Gaussian kernel functions. A small Gaussian (colored dashed lines) is placed at each sample, the sum of which (solid black line) is used to estimate the pdf from which samples (vertical tick marks) were drawn.}
\label{fig:parzen}
\end{figure}
Intuitively, as shown in Fig.~\ref{fig:parzen}, Parzen estimation corresponds to centering a Gaussian kernel at each sample $x_i$ drawn from $p$, and then summing to estimate the pdf. The optimal kernel size \cite{silverman1986density} depends on the density of the samples and approaches zero as the number of samples approaches infinity.
\subsection{ITL descriptors}
Renyi's $\alpha$-order entropy for a probability density function (pdf) $p$ is given by:
\begin{equation} \label{eq:renyi_entropy}
H_\alpha (X) = \frac{1}{1-\alpha} \log \int p^\alpha(x) dx
\end{equation}
where $p \in L^\alpha$. Renyi's $\alpha$-order entropy can be considered a generalization of Shannon entropy since $\lim_{\alpha \to 1}H_\alpha = -\int p(x)\log p(x)\, dx$, which is the Shannon entropy. For the case of $\alpha=2$, equation \eqref{eq:renyi_entropy} simplifies to $H_2 = -\log \int p^2(x) dx$, which is known as Renyi's quadratic entropy.
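As a concrete illustration of the estimate \eqref{eq:parzen} and of Fig.~\ref{fig:parzen}, here is a minimal Python sketch (our illustration, not code from the original experiments):
\begin{verbatim}
# Sketch: one-dimensional Parzen window estimate (eq:parzen).
import numpy as np

def parzen_pdf(x, samples, sigma):
    # average of Gaussian kernels centered at the samples
    d = x[:, None] - samples[None, :]
    k = np.exp(-d**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    return k.mean(axis=1)

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=200)    # draws from the "unknown" pdf
x = np.linspace(-4.0, 4.0, 401)
p_hat = parzen_pdf(x, samples, sigma=0.3)   # kernel size fixed by hand here
\end{verbatim}
The kernel size $\sigma$ plays the same role here as in the ITL descriptors discussed next.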
Plugging \eqref{eq:parzen} into \eqref{eq:renyi_entropy}, for $\alpha=2$, we obtain:
\begin{align}
\hat{H}_2(X) &= -\log \int \left( \frac{1}{N} \sum_{i=1}^N G_\sigma (x - x_i) \right)^2 dx \\
&= -\log \left( \frac{1}{N^2} \sum_{i=1}^N \sum_{j=1}^N G_{\sigma \sqrt{2}} (x_j - x_i) \right) \label{renyi_estimator}
\end{align}
where $G_\sigma(x,y) = \frac{1}{\sqrt{2\pi} \sigma} \exp \left( -\frac{\|x-y\|^2}{2\sigma^2} \right)$ is the Gaussian kernel, and $\sigma$ is the kernel size. The argument of the logarithm in equation \eqref{renyi_estimator} is called the \emph{information potential}, by analogy with potential fields from physics, and is denoted by $\hat{V}_\sigma(X)$. Another important ITL descriptor is Renyi's cross-entropy, which is given by:
\begin{equation} \label{eq:cross_ent}
H_2(X,Y) = -\log \int p_X(z)p_Y(z)dz
\end{equation}
Similarly to equation \eqref{eq:renyi_entropy}, the cross-entropy can be estimated by
\begin{equation} \label{eq:cross_est}
\hat{H}_2(X,Y) = -\log \frac{1}{N_X N_Y}\sum_{i=1}^{N_X} \sum_{j=1}^{N_Y} G_{\sqrt{2}\sigma} (x_i - y_j)
\end{equation}
The argument of the logarithm in equation \eqref{eq:cross_ent} is called the \emph{cross-information potential} and is denoted $\hat{V}_\sigma(X,Y)$. The cross-information potential can be viewed as the average sum of interactions of samples drawn from $p_X$ with the estimated pdf $\hat{p}_Y$ (or vice-versa). ITL has also described a number of divergences connected with Renyi's entropy. In particular, the Euclidean and Cauchy-Schwarz divergences are given by:
\begin{align} \label{eq:D_ED}
D_{ED}&(p_X\|p_Y) = \int \left( p_X(z) - p_Y(z) \right)^2 dz \\
& = \int p_X^2(z)dz + \int p_Y^2(z)dz - 2 \int p_X(z)p_Y(z)dz
\end{align}
and
\begin{equation} \label{eq:D_CS}
D_{CS}(p_X\|p_Y) = -\log \frac{\left( \int p_X(z) p_Y(z) dz \right)^2}{\int p_X^2(z) dz \int p_Y^2(z) dz},
\end{equation}
respectively. Equations \eqref{eq:D_ED} and \eqref{eq:D_CS} can be put in terms of information potentials:
\begin{align} \label{eq:D_pot}
D_{ED}(p_X||p_Y) &= V(X) + V(Y) - 2 V(X,Y) \\
D_{CS}(p_X||p_Y) &= \log \frac{V(X) V(Y)}{V^2(X,Y)}
\end{align}
The Euclidean divergence is so named because it is the squared Euclidean distance between the pdfs. Furthermore, it is equivalent to the maximum mean discrepancy (MMD) statistical test \cite{gretton2006kernel}. The Cauchy-Schwarz divergence is named for the Cauchy-Schwarz inequality, which guarantees that the divergence is zero only when the pdfs are equal almost everywhere. $D_{CS}$ is symmetric but, unlike $D_{ED}$, does not obey the triangle inequality. Minimizing either divergence over $p_X$, i.e., $\min_{p_X} D(p_X \|p_Y)$, is a tradeoff between minimizing the information potential (maximizing the entropy) of $p_X$ and maximizing the cross-information potential (minimizing the cross-entropy) of $p_X$ with respect to $p_Y$. Intuitively, minimizing the information potential encourages samples from $p_X$ to spread out, while maximizing the cross-information potential encourages samples from $p_X$ to move toward samples from $p_Y$.
\section{ITL Autoencoders}
\label{sec:itlae}
Let us define autoencoders as a 4-tuple $AE = \{E, D, L, R\}$, where $E$ and $D$ are the encoder and the decoder functions, here parameterized as neural networks, $L$ is the reconstruction cost function that measures the difference between original data samples $x$ and their respective reconstructions $\tilde{x} = D(E(x))$ (a typical reconstruction cost is the mean-squared error), and $R$ is a functional regularization.
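For concreteness, the sample-based estimators behind \eqref{eq:D_pot} take only a few lines. The following Python sketch is our illustration (it assumes a Gaussian kernel with the usual $d$-dimensional normalization and two batches of samples), not the authors' companion code:
\begin{verbatim}
# Sketch: empirical information potentials and the divergences (eq:D_pot).
import numpy as np

def gauss(x, y, sigma):
    # pairwise Gaussian kernel between the rows of x and y (n x d arrays)
    d2 = ((x[:, None, :] - y[None, :, :])**2).sum(axis=-1)
    dim = x.shape[1]
    return np.exp(-d2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)**(dim / 2)

def V(x, sigma):                 # information potential V(X)
    return gauss(x, x, np.sqrt(2) * sigma).mean()

def V_cross(x, y, sigma):        # cross-information potential V(X, Y)
    return gauss(x, y, np.sqrt(2) * sigma).mean()

def D_ED(x, y, sigma):           # Euclidean divergence (MMD-like)
    return V(x, sigma) + V(y, sigma) - 2 * V_cross(x, y, sigma)

def D_CS(x, y, sigma):           # Cauchy-Schwarz divergence
    return np.log(V(x, sigma) * V(y, sigma) / V_cross(x, y, sigma)**2)
\end{verbatim}
In the ITL-AE below, $x$ would be a batch of encoder outputs and $y$ a batch of samples from the prior $P$; minimizing \texttt{D\_ED} or \texttt{D\_CS} with respect to the encoder parameters implements the regularizer $R$.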
Here this functional regularization will only be applied to the encoder $E$. Nonetheless, although we only regularize the encoder $E$, the interested investigator could also regularize another intermediate layer of the autoencoder \footnote{For those interested in such an investigation we recommend modifying the companion code of this paper. In our method adding more regularization does not increase the number of adaptive weights.}. The general cost function for the ITL-AE can be summarized by the following equation:
\begin{equation}
\text{cost} = L \left(x,\tilde{x} \right) + \lambda R(E,P),
\end{equation}
where the strength of the regularization is controlled by the scale parameter $\lambda$, and $P$ is the imposed prior.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\textwidth]{model.eps}
\caption{Block diagram for the ITL autoencoder. L is the reconstruction cost function, while R is the functional regularization that uses information theoretic measures.}
\label{fig:varreg}
\end{figure}
The functional regularization costs investigated in this paper are the ITL Euclidean and Cauchy-Schwarz divergences. Both types of divergence encourage a smooth manifold similar to the imposed prior. That is, maximizing the latent code distribution entropy encourages code samples to spread out, while minimizing the cross-entropy between the latent code and the prior distributions encourages code samples to be similar to prior samples. Note that if the data dimensionality is too high, ITL divergence measures require larger batch sizes to be reliably estimated; this explains why Li et al. \cite{li2015generative} used batches of 1000 samples in their experiments, and also why they reduced the data dimensionality with autoencoders. In our own experiments (not shown), the Cauchy-Schwarz divergence worked better than the Euclidean one for high-dimensional data.
\section{Relation to other work}
Generative Moment Matching Networks (GMMNs) \cite{li2015generative} correspond to the specific case where the input of the decoder $D$ comes from a multidimensional uniform distribution and the reconstruction function $L$ is given by the Euclidean divergence measure. GMMNs can be applied to generate samples from the original input space itself or from the hidden space of a previously trained stacked autoencoder \cite{bengio2012better}. An advantage of our approach compared to GMMNs is that we can train all the elements of the 4-tuple $AE$ together, without the elaborate process of training layerwise stacked autoencoders for dimensionality reduction.

Variational Autoencoders (VAE) \cite{VAE} adapt a lower bound of the variational regularization, $R$, using parametric, closed form solutions for the KL-divergence. That divergence can be defined using Shannon's entropy, i.e., $H_{\alpha=1}$. Thus, we can also interpret the ITL-AE as a nonparametric variational autoencoder, where the likelihood of the latent distribution is estimated empirically using Parzen windows. Note that since we can estimate that distribution directly, we do not use the reparametrization trick. Here the reparametrization trick could possibly be used to impose extra regularization, just as adding dropout noise regularizes neural networks. Adversarial autoencoders (AA) \cite{makhzani2015adversarial} have the architecture that inspired our method the most.
Instead of using the adversarial trick to impose regularization on the encoder, we defined that regularization from first principles, which allowed us to train a competing method with far fewer trainable parameters. Our most recent experiments show that AA scales better than the ITL-AE for high dimensional latent codes. We leave the investigation of high dimensional ITL-AE for future work.
\begin{figure*}[!ht]
\centering
\includegraphics[width=.7\textwidth]{embedding.eps}
\caption{Effect of different priors on the function defined by the encoder neural network. a) Swiss-roll, b) Laplacian, c) Gaussian.}
\label{fig:embedding}
\end{figure*}
\section{Experiments}
\label{sec:exp}
In this section we show experiments using the ITL-AE architecture described in the previous section. First, for a visual interpretation of the effects of variational regularization, we trained autoencoders with 2-dimensional latent codes. We used as desired priors a Gaussian, a Laplacian, and a 2D swiss-roll distribution. The resulting codes are shown in Fig. \ref{fig:embedding}. Note that in all of these examples the autoencoder was trained in a completely unsupervised manner. However, given the simplicity of the data and the imposed reconstruction cost, some of the digits were clustered in separate regions of the latent space. Fig. \ref{fig:samples} shows some images obtained by sampling from a linear path on the swiss-roll and random samples from the Gaussian manifold.

For easier comparisons, and to avoid extensive hyperparameter search, we constrained our encoder and decoder, $E$ and $D$, to have the same architecture as those used in \cite{makhzani2015adversarial}, i.e., each network is a two hidden layer fully connected network with 1000 hidden neurons. Thus, the only hyperparameters investigated in this paper were the kernel size $\sigma$ and the scale parameter $\lambda$. For the MNIST dataset, the Euclidean divergence worked better with smaller kernels, such as $\sigma=1$ or $\sigma=5$, while the Cauchy-Schwarz divergence required a larger kernel, $\sigma=10$ for example. Nevertheless, here we will focus on regularizing the low dimensional latent codes and leave experiments using the Cauchy-Schwarz divergence for future work. Our best results for the small batch sizes common in deep learning had 3-dimensional latent codes in the output of the encoder, the Euclidean divergence as the regularization $R$, and the mean-squared error as the reconstruction cost $L$. As we will show in the next section, we were able to obtain competitive results and reproduce behaviors obtained by methods trained with larger networks or extra adversarial networks.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.3\textwidth]{samples.eps}
\caption{Samples from the embedded manifolds. a) samples from a linear walk over the swiss-roll manifold, b) random samples from a zero mean, std 5 Gaussian distribution.}
\label{fig:samples}
\end{figure}
\subsection{Log-likelihood analysis}
We followed the log-likelihood analysis on the MNIST dataset reported in the literature \cite{hinton2006fast}\cite{bengio2012better}\cite{bengio2013deep}. After training the ITL-AE on the training set, we generated $10^4$ images by inputting $10^4$ samples from a $N(0, 5)$ distribution to the decoder. Those generated MNIST images were used to estimate a distribution using Parzen windows on the high dimensional image space\footnote{Note that this is not an optimal benchmark due to the problems with Parzen estimators in high dimensional spaces we explained.
All the results, including ours, should be taken with a grain of salt.}. We calculated the log-likelihood of a separate set of 10k samples from the test set and report the results in Table \ref{table:ll}. The kernel size of that Parzen estimator was chosen using the best results on a held-out cross-validation dataset; that kernel size was $\sigma=0.16$. Note that our method obtained the second best results among all the compared fully connected generative models. Remember that this was obtained with about $10^6$ fewer adaptive parameters than the best method, adversarial autoencoders.
\begin{table}[ht]
\caption{Log-likelihood of the MNIST test dataset. Higher values are better.}
\centering
\begin{tabular}{c c}
\hline
Methods & Log-likelihood \\ [0.5ex]
\hline
Stacked CAE \cite{bengio2012better}& $121 \pm 1.6$ \\
DBN \cite{hinton2006fast}& $138\pm 2$ \\
Deep GSN \cite{bengio2013deep}& $214\pm 1.1$ \\
GAN \cite{goodfellow2014generative}& $225\pm 2$ \\
GMMN + AE \cite{li2015generative}& $282\pm 2$ \\
ITL-AE$^*$ & $300 \pm 0.5$ \\
Adversarial Autoencoder \cite{makhzani2015adversarial}& $340 \pm 2$ \\ [1ex]
\hline
$^*$ Proposed method.
\end{tabular}
\label{table:ll}
\end{table}
\section{Conclusions}
\label{sec:conclusions}
Here we derived and validated the Information Theoretic-Learning Autoencoders, a non-parametric and (optionally) deterministic alternative to Variational Autoencoders. We also revisited ITL for neural networks, but this time, instead of focusing on nonparametric cost functions for non-Gaussian signal processing, we focused on distribution divergences for regularizing deep architectures. Our results, using relatively small (by deep learning standards) 4-layer networks with 3-dimensional latent codes, were competitive in the log-likelihood analysis of the MNIST dataset. Although our results were competitive for fully connected architectures, future work should address the scalability of the ITL estimators for high dimensional latent spaces, which are common in large neural networks and convolutional architectures as well.
\subsubsection*{References}
\bibliographystyle{IEEEbib}
\section{Introduction and main results}
Important notions in spaces of analytic functions include zero-sets, Carleson measures, interpolation, sampling, frames, etc. Such properties have been studied for many well-known spaces of analytic functions in a deterministic setting. A canonical example is the Hardy space, where all these properties are well established, see \cite{Ga}. In other spaces such properties admit theoretical characterizations which are not checkable in general (e.g.\ interpolation in Dirichlet spaces), see e.g. \cite{S} for a general reference. There also exist situations where a general characterization is not available. In these circumstances it is useful to consider a random setting, which allows one to see whether certain properties are ``generic'' in a sense. The random model we are interested in here is the Poisson point process.

A \emph{Poisson point process} in the unit disk $\D$ is a random sequence $\Lambda$ defined in the following way: for any Borel set $A\subset \D$ the counting random variable $N_A=\# (A\cap\Lambda) $ is well defined and
\begin{itemize}
\item [(a)] $N_A$ is a Poisson random variable, i.e., there exists $\mu(A)\geq 0$ such that the probability distribution of $N_A$ is
\[
\P(N_A=k)=e^{-\mu(A)} \frac{(\mu(A))^k}{k!}\ ,\quad k\geq 0.
\]
In particular $\E[N_A]=\Var[N_A]=\mu(A)$.
\item [(b)] If $A,B\subset\D$ are disjoint Borel sets then the variables $N_A$, $N_B$ are independent.
\end{itemize}
It turns out that these two properties uniquely characterize the point process. Also, the values $\mu(A)$ define a $\sigma$-finite Borel measure on $\D$, which is called the \emph{intensity} of the process. The Poisson process is a well-known statistical model for point distributions with no (or weak) interactions, and it has multiple applications in a great variety of fields \cite{Wi}. Because of property (b), it is clearly not adequate for describing distributions in which the points are not statistically independent of one another. For such situations other models have been proposed (e.g. determinantal processes or zeros of Gaussian analytic functions for random sequences with repulsion, or Cox processes for situations with positive correlations and clumping \cite{HKPV}). It is also possible to create a Poisson process from a given, $\sigma$-finite, locally finite, positive Borel measure $\mu$ in $\D$, in the sense that there exists a point process $\Lambda_\mu$ with intensity $\mu$, i.e., whose counting functions satisfy properties (a) and (b) above. This is a well-known, non-trivial fact that can be found, for example, in \cite{La-Pe}*{Theorem 3.6}. Such a Poisson process $\Lambda_\mu$ is sometimes called \emph{inhomogeneous}, or non-stationary.

In this paper, given a positive Borel measure $\mu$ on $\D$, we study elementary geometric properties of the inhomogeneous Poisson process of intensity $\mu$, specifically in relation to conditions used to describe interpolating sequences for various spaces of analytic functions in $\D$. We shall always assume that $\mu(\D)=+\infty$, since otherwise $\Lambda_\mu$ would be finite almost surely. The probabilistic point of view has already been explored before in connection with interpolation. Here we mention Cochran \cite{Coc} and Rudowicz \cite{Ru}, who considered the probabilistic model $\Lambda=\{r_n e^{i\theta_n}\}_n$ in which the radii $\{r_n\}_n\subset(0,1)$ are fixed a priori and the arguments $\theta_n$ are chosen uniformly and independently in $[0,2\pi]$ (a so-called Steinhaus sequence).
For this model they established a zero-one condition on $\{r_n\}_n$ so that the resulting random sequence is almost surely interpolating for the Hardy spaces. In \cite{CHKW} similar results, for the same probabilistic model, were proven for the scale of weighted Dirichlet spaces between the Hardy space and the classical Dirichlet space. See also \cite{DWW} for related results in the unit ball and the polydisk. We express our results in terms of a dyadic discretization of $\mu$. Consider first the dyadic annuli \[ A_n=\{z\in \D:2^{-(n+1)}< 1-|z|\leq 2^{-n}\}, \quad n\geq 0. \] Each $A_n$ can be split into $2^n$ boxes of the same size $2^{-n}$: \[ T_{n,k}=\bigl\{z=re^{it}\in A_n: \frac{k}{2^n}\le \frac t{2\pi}<\frac{k+1}{2^n}\bigr\},\quad k=0,1,\ldots,2^n-1. \] These boxes can be viewed as the top halves of the Carleson windows \[ Q(I_{n,k})=\bigl\{z=re^{i\theta}\in \D : r>1-2^{-n}, \, e^{i\theta}\in I_{n,k}\bigr\} \] associated to the dyadic intervals \begin{equation}\label{dy-int} I_{n,k}=\bigl\{ e^{it}\in\T : \frac{k}{2^n}\le \frac t{2\pi}<\frac{k+1}{2^n}\bigr\}\ ,\quad n\geq 0\ ,\, k=0,1,\ldots, 2^{n}-1. \end{equation} \begin{figure}[H] \centering \begin{tikzpicture}[scale=0.6] \centering \draw [ultra thick] (-4,0) -- (4,0); \path [thin, blue,fill=blue!7] (-4,4)--(4,4)--(4,8)--(-4,8)--(-4,4); \draw [thin] (-5,0) -- (-4,0); \draw [thin] (4,0) -- (5,0); \draw [thin] (-4,8) -- (4,8); \draw [thin] (-4,4) -- (4,4); \draw [thin] (-4,2) -- (4,2); \draw [thin, dashed] (-4,1) -- (4,1); \draw [thin, dashed] (-3,0) -- (-3,1); \draw [thin, dashed] (-1,0) -- (-1,1); \draw [thin, dashed] (3,0) -- (3,1); \draw [thin, dashed] (1,0) -- (1,1); \draw [thin] (-4,0) -- (-4,8); \draw [thin] (4,0) -- (4,8); \draw [thin] (0,0) -- (0,4); \draw [thin] (-2,0) -- (-2,2); \draw [thin] (2,0) -- (2,2); \node [below] at (0,0) {$I_{n,k}$}; \node [right] at (4.2,4) {$Q(I_{n,k})$}; \node [above, blue] at (0,5.5) {$T_{n,k}$}; \end{tikzpicture} \caption{Carleson window $Q(I_{n,k})$ associated to the dyadic interval $I_{n,k}$ and its top half $T_{n,k}$.} \end{figure} Denote $X_{n,k}=N_{T_{n,k}}$, which by hypothesis is a Poisson random variable of parameter \[ \mu_{n,k}:=\E[X_{n,k}]=\Var [X_{n,k}]= \mu(T_{n,k}). \] In these terms, the assumption $\mu(\D)=+\infty$ is just \[ \mu(\D)=\sum_{n\in\N} \sum_{k=0}^{2^n-1} \mu_{n.k}=\sum_{n,k}\mu_{n,k}=+\infty. \] A first geometric property on random sequences we are interested in is separation. For this, we recall that the pseudo-hyperbolic distance in $\D$ is given by \[ \rho(z,w)=\left|\frac{z-w}{1-\bar w z}\right|\ \quad z,w\in\D. \] \begin{definition} A sequence $\Lambda=\{\lambda_k\}_{k\geq 1}\subset \D$ is \emph{separated} if there exists $\delta>0$ such that \[ \rho(\lambda_k,\lambda_l)\ge\delta,\quad k\neq l. \] When we need to specify the separation constant we say that $\Lambda$ is $\delta$-separated. \end{definition} We are now in a position to state our result characterizing those $\Lambda_\mu$ which can (almost surely) be expressed as finite unions of separated sequences. \begin{theorem}\label{thm:separation} Let $\Lambda_\mu$ be the Poisson process associated to a positive, $\sigma$-finite, locally finite measure $\mu$ and let $M\geq 1$ be an integer. Then \[ \P\bigl(\Lambda_\mu\ \textrm{union of $M$ separated sequences}\bigr)= \begin{cases} 1\quad \textrm{if}\quad \displaystyle\sum\limits_{n,k}\mu_{n,k}^{M+1}<\infty \\ 0\quad \textrm{if}\quad \displaystyle\sum\limits_{n,k}\mu_{n,k}^{M+1}=\infty. 
\end{cases}
\]
In particular,
\[
\P\bigl(\Lambda_\mu\ \textrm{separated}\bigr)=
\begin{cases}
1\quad \textrm{if}\quad \displaystyle\sum\limits_{n,k}\mu_{n,k}^{2}<\infty \\
0\quad \textrm{if}\quad \displaystyle\sum\limits_{n,k}\mu_{n,k}^{2}=\infty.
\end{cases}
\]
\end{theorem}
The characterization of a.s. separated sequences was first obtained, with a different proof, in \cite{Ap}*{Teorema 3.2.1}. Our second result deals with so-called $\alpha$-Carleson sequences. Given any arc $I\subset \T=\partial \D$ let $|I|$ denote its normalized length and consider the associated Carleson window
\[
Q(I)=\bigl\{z=re^{i\theta}\in \D : r>1-|I|, \, e^{i\theta}\in I\bigr\}.
\]
\begin{definition}
Let $\alpha\in (0, 1]$. The sequence $\Lambda$ satisfies the $\alpha$-\emph{Carleson} condition if there exists $C>0$ such that for all arcs $I\subset\T$
\[
\sum_{\lambda\in Q(I)} (1-|\lambda|)^\alpha \leq C |I|^\alpha.
\]
Such sequences will also be called $\alpha$-Carleson sequences.
\end{definition}
Observe that to check the $\alpha$-Carleson condition it is enough to test it on the dyadic intervals $I_{n,k}$ given in \eqref{dy-int}. The sequences $\Lambda$ satisfying the $1$-Carleson condition are by far the most studied, because of their r\^ole in the famous characterization of the interpolating sequences for the algebra $H^\infty$ of bounded holomorphic functions, given by L. Carleson \cite{Ca} (see Section~\ref{int_h}). They are sometimes found in the literature under the name of {\it Carleson-Newman} sequences. The $\alpha$-Carleson property above is a special case of a more general condition: a finite, positive Borel measure $\sigma$ on $\D$ is a Carleson measure of order $\alpha\in (0,1]$ if $\sigma(Q(I))\le C|I|^\alpha$ for some $C>0$ and all intervals $I$. As shown by L. Carleson (see e.g. \cite{Ga}), Carleson measures (of order $1$) are precisely those for which the embedding $H^2\subset L^2(\D,\sigma)$ holds; here $H^2$ is the classical Hardy space (see the definition in Subsection \ref{HardyBergman} below). Carleson measures of order $\alpha<1$ have been used, for example, in providing sufficient conditions for the solvability of the $\bar\partial_b$-equation in $L^p$, $L^{p,\infty}$ and in Lipschitz spaces of the boundary of strictly pseudoconvex domains \cite{Am-Bo}.
\begin{theorem}\label{thm:Carleson}
Let $\Lambda_\mu$ be the Poisson process associated to a positive, $\sigma$-finite, locally finite measure $\mu$. Then
\begin{itemize}
\item [(a)]
\[
\P\bigl(\Lambda_\mu\ \textrm{is a 1-Carleson sequence}\bigr) =
\begin{cases}
1\quad \textrm{if there exists $\gamma>1$ such that}\quad \displaystyle\sum\limits_{n,k}\mu_{n,k}^{\gamma}<\infty \\
0\quad \textrm{if for all $\gamma>1$}\quad \displaystyle\sum\limits_{n,k}\mu_{n,k}^{\gamma}=\infty.
\end{cases}
\]
\item [(b)] Let $\alpha\in (0,1)$. If there exists $1<\gamma<\frac 1{1-\alpha}$ such that $ \sum\limits_{n,k} \mu_{n,k}^{\gamma}<+\infty$, then
\[
\P\bigl(\Lambda_\mu\ \textrm{is $\alpha$-Carleson}\bigr)=1
\]
\item [(c)] There exists a positive, $\sigma$-finite, locally finite measure $\mu$ such that $ \sum\limits_{n,k} \mu_{n,k}^{1/(1-\alpha)}<+\infty$ and
\[
\P\bigl(\Lambda_\mu\ \textrm{is $\alpha$-Carleson}\bigr)=0.
\]
\item [(d)] For every $\gamma>1$ there exists a positive, $\sigma$-finite, locally finite measure $\mu$ such that $ \sum\limits_{n,k} \mu_{n,k}^{\gamma}=+\infty$ but
\[
\P\bigl(\Lambda_\mu\ \textrm{is $\alpha$-Carleson}\bigr)=1
\]
for all $\alpha\in (0,1)$.
\end{itemize}
\end{theorem}
\begin{remarks*}
1) The first statement in part (a) is connected with the first part of the statement in Theorem~\ref{thm:separation}, since it is a well-known fact that every 1-Carleson (or Carleson-Newman) sequence can be split into a finite number of separated sequences, each of which is of course 1-Carleson \cite{McDS}*{Lemma 21} (obviously a finite union of arbitrary separated sequences need not be Carleson-Newman). However, Theorem \ref{thm:Carleson}(a) does not give precise information on the number of separated sequences involved. It is also worth mentioning that the condition for a.s.\ separation from Theorem \ref{thm:separation} automatically implies the Carleson condition (picking $\gamma=2>1$). This is perhaps more surprising, and may be explained by the nature of the process: the independence of the different points allows for big fluctuations, so the probability of finding pairs of points arbitrarily close is quite big unless the number of points in the process is restricted severely (up to $ \sum_{n,k} \mu_{n,k}^{2}<\infty$).

2) It is interesting to point out that for the inhomogeneous Poisson process we have a characterization of $1$-Carleson sequences, while in the a priori simpler random model with fixed radii and random arguments there is only a sufficient -- still optimal -- condition (see \cite{CHKW}*{Theorem 1.4}).

3) In the case $\alpha\in (0,1)$ the results are less precise than when $\alpha=1$. The value $1/(1-\alpha)$ turns out to be an optimal breakpoint, but nothing specific can be said beyond this value without additional conditions on the distribution of $\mu$. The example given in (c) is part of a certain parameter dependent scale of measures which will be discussed in Section \ref{Examples}, and for which the $\alpha$-Carleson condition is characterized in terms of the parameter.

4) Our conditions, both here and in Theorem~\ref{thm:separation}, are expressed in terms of $\mu_{n,k}$; thus, redistributing $\mu$ continuously on each $T_{n,k}$ if necessary, we can always assume that $\mu$ is absolutely continuous with respect to the Lebesgue measure.
\end{remarks*}
The structure of the paper is as follows. In Section~\ref{theorems} we prove the main Theorems~\ref{thm:separation} and~\ref{thm:Carleson}. Section~\ref{int_h} deals with the consequences of these results in the study of interpolating sequences for various spaces of holomorphic functions. In particular, we find precise conditions so that a Poisson process $\Lambda_\mu$ is almost surely an interpolating sequence for the Hardy spaces $H^p$, $0<p\leq\infty$, the Bloch space $\mathcal B$, or the Dirichlet spaces $\mathcal D_\alpha$, $\alpha\in (1/2,1)$. A final section is devoted to providing examples of Poisson processes associated to some simple measures, and to giving integral (non-discrete) conditions on $\mu$ which are in some cases equivalent to the discrete versions used in the statements.

We finish this introduction by recalling the Borel-Cantelli lemma, which is a central tool in this paper. We refer to \cite{Bil} for a general source on probability theory. Given a sequence of events $A_k$ let $\limsup A_k=\{\omega:\omega\in A_k$ for infinitely many $k\}$.
\begin{lemma}
Let $(A_k)_k$ be a sequence of events in a probability space. Then
\begin{enumerate}
\item If $\sum \P(A_k)<\infty$, then $\P(\limsup A_k)=0$,
\item If the events $A_k$ are independent and $\sum \P(A_k)=\infty$, then $\P(\limsup A_k)=1$.
\end{enumerate}
\end{lemma}
{\bf Acknowledgements:} The authors would like to thank Joaquim Ortega-Cerd\`a for suggesting the consideration of Poisson processes and for helpful discussions.
\section{Proof of Theorems~\ref{thm:separation} and~\ref{thm:Carleson}}\label{theorems}
\subsection{Proof of Theorem~\ref{thm:separation}}
Assume first that $\sum_{n,k}\mu_{n,k}^{M+1}<+\infty$, and define the events
\[
A_{n,k}=\{X_{n,k}> M\}=\{X_{n,k}\ge M+1\}.
\]
Then
\[
\P(A_{n,k})=1-\sum_{j=0}^M \P(X_{n,k}=j)=1-e^{-\mu_{n,k}} \bigl(\sum_{j=0}^M \frac{\mu_{n,k}^j}{j!}\bigr).
\]
By hypothesis $\lim\limits_n(\sup_k\mu_{n,k})= 0$, so we can use Taylor's formula
\begin{equation}\label{tay}
1-e^{-x}(\sum_{j=0}^M\frac{x^j}{j!})=\frac{x^{M+1}}{(M+1)!}+o(x^{M+1})\qquad x\to 0
\end{equation}
to deduce that
\[
\sum_{n,k} \P(A_{n,k})\lesssim \sum_{n,k}\frac{\mu_{n,k}^{M+1}}{(M+1)!}<+\infty.
\]
By the Borel-Cantelli lemma, $X_{n,k}\leq M$ for all but at most a finite number of the $T_{n,k}$. In principle this does not imply that $\Lambda_\mu$ can be split into $M$ separated sequences, because it might happen that points of two neighboring $T_{n,k}$ come arbitrarily close. This possibility is excluded by applying the above arguments to a new dyadic partition, made of shifted boxes $\tilde{T}_{n,k}$ having their ``lower vertices'' (those closer to $\T$) at the centers of the $T_{n,k}$'s (see Figure \ref{Fig2} below); let
\[
\tilde T_{n,k}=\Bigl\{z=re^{it}: \frac 32 2^{-(n+2)}<1-r\leq \frac 32 2^{-(n+1)}\, ;\ \frac{k+1/4}{2^n}\le \frac t{2\pi}<\frac{k+3/4}{2^n}\Bigr\}.
\]
Since each $\tilde T_{n,k}$ is included in the union of at most four $T_{m,j}$, we still have $\sum_{n,k} \tilde\mu_{n,k}^{M+1}<\infty$, and therefore, as before, $\tilde X_{n,k}=N_{\tilde T_{n,k}}$ is at most $M$, except for maybe a finite number of indices $(n,k)$. This prevents more than $M$ points lying in two adjacent $T_{n,k}$ from getting arbitrarily close to each other. In conclusion, for all but a finite number of indices $X_{n,k}\leq M$, hence the part of $\Lambda_\mu$ in these boxes can be split into $M$ separated sequences. Adding the remaining finite number of points to any of these sequences may change the separation constant, but not the fact that they are separated.
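The dichotomy just proved is easy to observe numerically: since the counts $X_{n,k}$ are independent Poisson variables, one can sample them directly and record how many boxes hold more than $M$ points. A minimal Python sketch (ours; the model intensity $\mu_{n,k}=2^{-ns}$ and the truncation depth are assumptions chosen for illustration):
\begin{verbatim}
# Sketch: sample X_{n,k} ~ Poisson(mu_{n,k}) with mu_{n,k} = 2^{-n s} and
# count boxes holding more than M points. For M = 1, the critical sum
# sum_{n,k} mu_{n,k}^2 = sum_n 2^n 2^{-2ns} converges iff s > 1/2.
import numpy as np

rng = np.random.default_rng(1)
M, N = 1, 18                       # separation order and truncation depth

for s in (0.7, 0.3):               # s = 0.7: a.s. separated; s = 0.3: not
    bad = 0
    for n in range(N):
        X = rng.poisson(2.0**(-n*s), size=2**n)
        bad += int((X > M).sum())
    print(s, bad)                  # number of overfull boxes up to depth N
\end{verbatim}
For $s>1/2$ (so that $\sum_{n,k}\mu_{n,k}^2<\infty$ with $M=1$) only a few low-generation boxes are overfull---the finitely many exceptions absorbed at the end of the argument above---while for $s<1/2$ the number of overfull boxes keeps growing with the generation $n$.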
\begin{figure}[H] \centering \begin{tikzpicture}[scale=0.6] \centering \draw [ultra thick] (-4,0) -- (4,0); \path [thin, blue,fill=blue!7] (-4,4)--(4,4)--(4,8)--(-4,8)--(-4,4); \path [thin, blue,fill=blue!7] (4,1)--(6,1)--(6,2)--(4,2)--(4,1); \draw [thin, red,fill=red!7] (2,3)--(6,3)--(6,6)--(2,6)--(2,3); \draw [thin, red,fill=red!7] (-3,3)--(-3,1.5)--(-5,1.5)--(-5,3); \draw [thin] (-9,0) -- (-4,0); \draw [thin] (4,0) -- (9,0); \draw [thin] (-4,8) -- (4,8); \draw [thin] (-4,4) -- (4,4); \draw [thin] (-4,2) -- (4,2); \draw [thin, dashed] (-4,1) -- (4,1); \draw [thin, dashed] (-3,0) -- (-3,1); \draw [thin, dashed] (-1,0) -- (-1,1); \draw [thin, dashed] (3,0) -- (3,1); \draw [thin, dashed] (1,0) -- (1,1); \draw [thin] (-4,0) -- (-4,8); \draw [thin] (-8,0) -- (-8,4); \draw [thin] (8,0) -- (8,4); \draw [thin] (-8,4) -- (-4,4); \draw [thin, dashed] (-9,4) -- (-8,4); \draw [thin, dashed] (9,4) -- (8,4); \draw [thin, dashed] (9,8) -- (4,8); \draw [thin, dashed] (-9,8) -- (-4,8); \draw [thin] (8,4) -- (4,4); \draw [thin] (4,0) -- (4,8); \draw [thin] (0,0) -- (0,4); \draw [thin] (-2,0) -- (-2,2); \draw [thin] (-8,2) -- (-4,2); \draw [thin] (8,2) -- (4,2); \draw [thin] (2,0) -- (2,2); \draw [thin] (-6,0) -- (-6,2); \draw [thin] (6,0) -- (6,2); \draw [thin, dashed] (-8,1) -- (-4,1); \draw [thin, dashed] (8,1) -- (4,1); \draw [thin, dashed] (-7,0) -- (-7,1); \draw [thin, dashed] (-5,0) -- (-5,1); \draw [thin, dashed] (5,0) -- (5,1); \draw [thin, dashed] (7,0) -- (7,1); \draw [thin, red] (2,3)--(-2,3)--(-2,6)--(2,6); \draw [thin, red] (-2,3)--(-6,3)--(-6,6)--(-2,6); \draw [thin, red] (-8,3)--(-6,3); \draw [thin, red, dashed] (-8,3)--(-9,3); \draw [thin, red] (-8,6)--(-6,6); \draw [thin, red, dashed] (-8,6)--(-9,6); \draw [thin, red] (-5,3)--(-5,1.5)--(-7,1.5)--(-7,3)--(-5,3); \draw [thin, red] (-7,3)--(-8,3); \draw [thin, red, dashed] (-9,3)--(-8,3); \draw [thin, red] (-8,1.5)--(8,1.5); \draw [thin, red, dashed] (9,3)--(8,3); \draw [thin, red, dashed] (9,1.5)--(8,1.5); \draw [thin, red, dashed] (-9,1.5)--(-8,1.5); \draw [thin, red, dashed] (-9,1.5)--(-9,3); \draw [thin, red, dashed] (9,1.5)--(9,3); \draw [thin, red] (-1,1.5)--(-1,3); \draw [thin, red] (1,1.5)--(1,3); \draw [thin, red] (3,1.5)--(3,3); \draw [thin, red] (5,1.5)--(5,3); \draw [thin, red] (7,1.5)--(7,3); \draw [thin, red] (8,3)--(6,3); \draw [thin, red, dashed] (8,3)--(9,3); \draw [thin, red] (8,6)--(6,6); \draw [thin, red, dashed] (8,6)--(9,6); \node [below] at (0,0) {$I_{n,k}$}; \node [above, blue] at (0,6.5) {$T_{n,k}$}; \node [ red] at (5,5) {$\tilde T_{n,k}$}; \end{tikzpicture} \caption{Dyadic partitions: $\{T_{n,k}\}_{n,k}$ in blue, $\{\tilde T_{n,k}\}_{n,k}$ in red.}\label{Fig2} \end{figure} \medskip Assume now that $\sum_{n,k}\mu_{n,k}^{M+1}=+\infty$. We shall prove that for every $\delta_{l_0}=2^{-l_0}$, $l_0\in\N$, \[ \P\bigl(\text{$\Lambda$ union of $M$ $\delta_{l_0}$-separated sequences}\bigr)=0. \] Split each side of $T_{n,k}$ into $2^{l_0}$ segments of the same length. This defines a partition of $T_{n,k}$ in $2^{2l_0}$ small boxes of side length $2^{-n} 2^{-l_0}$, which we denote by \[ T_{n,k}^{l_0,j}\qquad j=1,\dots, 2^{2l_0}. \] Let $X_{n,k}^{l_0,j}=N_{T_{n,k}^{l_0,j}}$ denote the corresponding counting variable, which follows a Poisson law of parameter $\mu_{n,k,l_0,j}=\mu(T_{n,k}^{l_0,j})$. It is enough to show that for any $l_0$, \[ \P(X_{n,k}^{l_0,j}> M \text{ for infinitely many }n,k,j)=1. 
\]
By the second part of the Borel-Cantelli lemma, since the $X_{n,k}^{l_0,j}$ are independent, we shall be done as soon as we see that
\begin{equation}\label{BCSep}
\sum_{n,k}\sum_{j=1}^{2^{2 l_0}} \P\bigl(X_{n,k}^{l_0,j}\ge M+1\bigr)=+\infty.
\end{equation}
For any Poisson variable $X$ of parameter $\lambda$, the probability
\[
\P(X\ge M+1)=e^{-\lambda}\bigl(\sum_{m=M+1}^\infty \frac{\lambda^m}{m!}\bigr)=1-e^{-\lambda}\bigl(\sum_{m=0}^M \frac{\lambda^m}{m!}\bigr)
\]
increases in $\lambda$. Hence there is no restriction in assuming that $0\leq\mu_{n,k,l_0,j}\leq \mu_{n,k}\leq 1/2$ for all $n,k,j$. Then we can use Taylor's formula \eqref{tay} to deduce that
\[
\P\bigl(X_{n,k}^{l_0,j}\ge M+1\bigr)\simeq \frac{\mu_{n,k,l_0,j}^{M+1}}{(M+1)!},
\]
and therefore \eqref{BCSep} is equivalent to
\[
\sum_{n,k}\sum_{j=1}^{2^{2l_0}} \mu_{n,k,l_0,j}^{M+1}=+\infty.
\]
That this sum is infinite is just a consequence of the hypothesis and the elementary estimate
\begin{align*}
\mu_{n,k}^{M+1}=\Bigl(\sum_{j=1}^{2^{2l_0}} \mu_{n,k,l_0,j}\Bigr)^{M+1} \leq 2^{2l_0 (M+1)} \sum_{j=1}^{2^{2l_0}} \mu_{n,k,l_0,j}^{M+1}.
\end{align*}
\subsection{Proof of Theorem~\ref{thm:Carleson}}
(a) Assume first that $\sum_{n,k} \mu_{n,k}^{\gamma}<+\infty$ for some $\gamma>1$. It is enough to check the Carleson condition
\[
\sum_{\lambda\in Q(I)} (1-|\lambda|)\leq C |I|
\]
on the dyadic intervals $I_{n,k}$. Let $Q_{n,k}=Q(I_{n,k})$. Decomposing the sum over the different layers $A_m$, it is enough to show that almost surely there exists $C>0$ such that for all $n\geq 0$, $k=0,\dots, 2^{n}-1$,
\[
\sum_{\lambda\in Q_{n,k}} (1-|\lambda|)\simeq \sum_{m\geq n} \sum_{j: T_{m,j}\subset Q_{n,k}} 2^{-m} X_{m,j}\leq C 2^{-n} .
\]
This is equivalent to
\begin{equation}\label{eq:Carl-dyadic}
\sup_{n,k}\ 2^n\sum_{m\geq n} \sum_{j: T_{m,j}\subset Q_{n,k}} 2^{-m} X_{m,j}<\infty.
\end{equation}
Denote
\[
X_{n,m,k}=N_{Q_{n,k}\cap A_m}=\#(\Lambda\cap Q_{n,k}\cap A_m)=\sum_{j: T_{m,j}\subset Q_{n,k}} X_{m,j},
\]
which is a Poisson variable of parameter
\[
\mu_{n,m,k}=\mu(Q_{n,k}\cap A_m)=\sum_{j: T_{m,j}\subset Q_{n,k}} \mu_{m,j}.
\]
Set
\[
Y_{n,k}=2^n\sum_{m\ge n}2^{-m}X_{n,m,k}=\sum_{m\ge n}2^{n-m}X_{n,m,k},
\]
so that \eqref{eq:Carl-dyadic} becomes $\sup_{n,k} Y_{n,k}<+\infty$. Let $A>0$ be a big constant to be fixed later on. Again by the Borel-Cantelli lemma, it is enough to show that
\begin{equation}\label{est:ynk}
\sum_{n,k} \P\bigl(Y_{n,k}>A\bigr)<+\infty,
\end{equation}
since then $Y_{n,k}\leq A$ for all but maybe a finite number of $n,k$; in particular $\sup_{n,k} Y_{n,k}<\infty$. The first step of the following reasoning is an adaptation to the Poisson process of the proof given in \cite{CHKW}*{Theorem 1.1}, which improved the result on Carleson sequences for the probabilistic model with fixed radii and random arguments. However, while in the original proof the Carleson boxes $Q_{n,k}$ are decomposed into layers $Q_{n,k}\cap A_m$ ($m\ge n$), in this new situation (as well as for (b)) Carleson boxes are decomposed into the top-halves $T_{m,j}\subset Q_{n,k}$, which requires more delicate arguments to reach the convergence needed in the Borel-Cantelli lemma. Recall that the probability generating function of a Poisson variable $X$ of parameter $\lambda$ is $\E(s^X)=e^{\lambda(s-1)}$. By the independence of the different $X_{n,m,k}$, $m\ge n$,
\[
\E(s^{Y_{n,k}})=\prod_{m\ge n}\E((s^{2^{n-m}})^{X_{n,m,k}}) =\prod_{m\ge n}e^{\mu_{n,m,k}(s^{2^{n-m}}-1)}.
\]
Thus for any $s>1$, by Markov's inequality,
\[
\P(Y_{n,k}>A)=\P(s^{Y_{n,k}}>s^A) \le \frac{1}{s^A}\E(s^{Y_{n,k}})=\frac{1}{s^A}\prod_{m\ge n}e^{\mu_{n,m,k}(s^{2^{n-m}}-1)}.
\]
Using the estimate $x(a^{1/x}-1)\le a$, for $a,x>1$, with $a=s$ and $x=2^{m-n}$,
\begin{align*}
\log \P(Y_{n,k}>A)&\le -A\log s+\sum_{m\ge n}(s^{2^{n-m}}-1)\, \mu_{n,m,k}\\
&\le -A\log s+\sum_{m\ge n} s 2^{n-m} \, \mu_{n,m,k}\\
&=-A\log s+s\sum_{m\ge n} 2^{n-m} \sum_{j:T_{m,j}\subset Q_{n,k}}\mu_{m,j}.
\end{align*}
We want to optimize this estimate for $s>1$. Set
\[
B_{n,k}=\sum_{m\ge n} 2^{-(m-n)} \sum_{j:T_{m,j}\subset Q_{n,k}}\mu_{m,j}
\]
and define
\[
\phi (s)=-A\log s+s B_{n,k}.
\]
Let us observe first that the $B_{n,k}$ are uniformly bounded (they actually tend to 0). Indeed, let $\beta$ denote the conjugate exponent of $\gamma$ ($\frac 1{\gamma}+\frac 1{\beta}=1$). Since for $m\geq n$ there are $2^{m-n}$ boxes $T_{m,j}$ in $Q_{n,k}$, by H\"older's inequality on the sum in the index $j$ we deduce that
\begin{align*}
B_{n,k}&\le \sum_{m\ge n}2^{-(m-n)}\Bigl(\sum_{j:T_{m,j}\subset Q_{n,k}} \mu_{m,j}^{\gamma} \Bigr)^{1/\gamma}2^{(m-n)/\beta}\\
&=\sum_{m\ge n}2^{-(m-n)/\gamma}\Bigl(\sum_{j:T_{m,j}\subset Q_{n,k}} \mu_{m,j}^{\gamma} \Bigr)^{1/\gamma}<+\infty.
\end{align*}
Taking $A$ big enough we see that the minimum of $\phi$ is attained at $s_0=A/B_{n,k}>1$. Hence
\[
\log \P(Y_{n,k}>A) \le \phi(s_0)=-A\log\frac{A}{B_{n,k}}+A.
\]
Therefore
\[
\P\bigl(Y_{n,k}>A\bigr)\le \left(\frac{B_{n,k}}{A}\right)^Ae^A,
\]
and
\[
\sum_{n,k} \P(Y_{n,k}>A) \le \left(\frac{e}{A}\right)^A\sum_{n,k}B_{n,k}^A.
\]
The estimate on $B_{n,k}$ obtained previously is not enough to prove that this last sum converges. In order to obtain a better estimate, take $p>1$, to be chosen later on, and its conjugate exponent $q$ (i.e. $\frac 1p+\frac 1q=1$), and apply H\"older's inequality in the following way:
\begin{align*}
B_{n,k}&=\sum_{\substack{m\geq n\\j:T_{m,j}\subset Q_{n,k}}} 2^{-(m-n)}\mu_{m,j} =2^n \sum_{\substack{m\geq n\\j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac mp}2^{-\frac mq}\mu_{m,j}\\
&\le 2^n\Bigl(\sum_{\substack{m\geq n\\j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\beta}p}\Bigr)^{1/\beta} \times\Bigl(\sum_{\substack{m\geq n\\j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\gamma}q}\mu^{\gamma}_{m,j}\Bigr)^{1/\gamma}.
\end{align*}
Choose now $p$ so that $1<p<\beta$; then
\begin{align*}
\sum_{\substack{m\geq n\\j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\beta}p} &=\sum_{m=n}^{\infty} 2^{-\frac{m\beta}p} \ 2^{m-n} =2^{-n}\sum_{m=n}^{\infty}2^{-m(\frac{\beta}p-1)} \simeq 2^{-n}2^{-n(\frac{\beta}p-1)}=2^{-n\frac{\beta}p}.
\end{align*}
Thus, from the above estimate,
\[
B_{n,k}\lesssim 2^{\frac nq} \Bigl(\sum_{\substack{m\geq n\\j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\gamma}q}\ \mu^{\gamma}_{m,j} \Bigr)^{1/\gamma}.
\]
Choosing $A=\gamma$ yields
\[
\sum_{n,k}B_{n,k}^\gamma \lesssim \sum_{n,k} 2^{\frac{n\gamma}q} \sum_{\substack{m\geq n\\j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\gamma}q}\ \mu^{\gamma}_{m,j} .
\]
We now apply Fubini's theorem to exchange the sums. The important observation here is that each $T_{m,j}$ has only one ancestor at each level $n\le m$ (i.e., one $T_{n,k}$ containing $T_{m,j}$). Hence
\begin{align*}
\sum_{n,k} B_{n,k}^{\gamma}&\lesssim\sum_{m,j} 2^{-\frac{m\gamma}q}\ \mu^{\gamma}_{m,j} \sum_{\substack{n\leq m\\ k : Q_{n,k} \supseteq T_{m,j}}} 2^{\frac{n\gamma}q} =\sum_{m,j} 2^{-\frac{m\gamma}q}\ \mu^{\gamma}_{m,j} \sum_{n\le m} 2^{\frac {n\gamma} q}\\
&\lesssim \sum_{m,j} 2^{-\frac{m\gamma}q} \mu^{\gamma}_{m,j}\ 2^{\frac{m\gamma}q} =\sum_{m,j} \mu^{\gamma}_{m,j}.
\end{align*}
This finishes the proof of \eqref{est:ynk}, hence of this part of the theorem.
\medskip

Let us now assume that $\sum_{n,k}\mu_{n,k}^{\gamma} =+\infty$ for every $\gamma>1$. Suppose $M\ge 1$ is an integer. Since the sum diverges for $\gamma=M+1$, Theorem~\ref{thm:separation} implies that the sequence $\Lambda_{\mu}$ is almost surely not a union of $M$ separated sequences. In particular, a.s.\ there is $\lambda_0\in\Lambda_{\mu}$ such that $D_{\lambda_0}=\{z\in \D:\rho(\lambda_0,z)<1/2\}$ contains at least $M+1$ points of $\Lambda_{\mu}$. Then, letting $I_{\lambda_0}$ be the interval centered at $\lambda_0/|\lambda_0|$ with length $1-|\lambda_0|$, we have $\sum_{\lambda\in Q(I_{\lambda_0})}(1-|\lambda|)\gtrsim M |I_{\lambda_0}|$, where the underlying constant does not depend on $M$ or $\lambda_0$. This being true for every integer $M\ge 1$, the sequence cannot be 1-Carleson.
\medskip

(b) Proceeding as in the first implication of (a), we see that it is enough to prove that almost surely
\begin{equation}\label{eq:carl-alpha}
\sup_{n,k} Y_{n,k}<+\infty\ ,
\end{equation}
where now
\begin{equation}\label{ynk-alpha}
\quad Y_{n,k}=2^{n\alpha}\sum_{m\geq n} 2^{-m\alpha} \sum_{j: T_{m,j}\subset Q_{n,k}} X_{m,j}.
\end{equation}
The same estimates as in (a), based on the probability generating function, yield, for $s>1$,
\[
\log \P\bigl(Y_{n,k}\geq A\bigr)\leq \phi(s)=-A\log s+ sB_{n,k},
\]
where now
\[
B_{n,k}=\sum_{m\geq n} 2^{-(m-n)\alpha} \sum_{j: T_{m,j}\subset Q_{n,k}} \mu_{m,j}.
\]
As in (a), the hypotheses imply that $B_{n,k}$ is uniformly bounded: letting $\beta$ denote the conjugate exponent of $\gamma$ ($\frac 1{\gamma}+\frac 1{\beta}=1$) and noticing that $\alpha-1/\beta=1/\gamma-(1-\alpha)>0$,
\begin{align*}
B_{n,k}&\leq \sum_{m\geq n} 2^{-(m-n)\alpha} \Bigl(\sum_{j: T_{m,j}\subset Q_{n,k}} \mu_{m,j}^\gamma\Bigr)^{1/\gamma} \ 2^{(m-n)/\beta} \\
&= \sum_{m\geq n} 2^{-(m-n)(\alpha-1/\beta)} \Bigl(\sum_{j: T_{m,j}\subset Q_{n,k}} \mu_{m,j}^\gamma\Bigr)^{1/\gamma}.
\end{align*}
Therefore, optimizing the estimate for $s>1$ exactly as we did in (a), we obtain $\P\bigl(Y_{n,k}\geq A\bigr)\lesssim B_{n,k}^A$, and we are led to prove that for some $A>0$
\begin{equation}\label{sum-c-alpha}
\sum_{n,k} \P\bigl(Y_{n,k}\geq A\bigr)\lesssim \sum_{n,k} B_{n,k}^A<\infty.
\end{equation}
Again, we introduce an auxiliary exponent $p$ -- to be determined later -- and its conjugate exponent $q$. Split $2^{-m\alpha}=2^{-\frac{m\alpha}p}2^{-\frac{m\alpha}q}$ and use H\"older's inequality to obtain
\begin{align*}
B_{n,k}& \le 2^{n\alpha}\Bigl(\sum_{\substack{m\geq n\\j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\alpha\beta}p}\Bigr)^{1/\beta} \times\Bigl(\sum_{\substack{m\geq n\\j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\alpha\gamma}q}\mu^{\gamma}_{m,j}\Bigr)^{1/\gamma}.
\end{align*}
The first sum is finite: since by hypothesis $\alpha\beta=\frac{\alpha \gamma}{\gamma-1}>1$, there exists $1<p<\frac{\alpha \gamma}{\gamma-1}$, and
\begin{align*}
\sum_{\substack{m\geq n\\j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\alpha\beta}p}&=\sum_{m\geq n} 2^{-\frac{m\alpha\beta}p} 2^{m-n} =2^{-n} \sum_{m\geq n} 2^{-m(\frac{\alpha\beta}p-1)}\simeq 2^{-n\frac{\alpha\beta}p}.
\end{align*}
This implies that
\[
B_{n,k}^\gamma\lesssim 2^{n\alpha\frac{\gamma}q} \sum_{\substack{m\geq n\\j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\alpha\gamma}q}\mu^{\gamma}_{m,j}
\]
and we can conclude the proof of \eqref{sum-c-alpha} as before:
\begin{align*}
\sum_{n,k} B_{n,k}^\gamma &\lesssim\sum_{n,k} 2^{n\alpha\frac{\gamma}q} \sum_{\substack{m\geq n\\j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\alpha\gamma}q} \mu^{\gamma}_{m,j} =\sum_{m,j} \mu^{\gamma}_{m,j} 2^{-m\alpha\frac{\gamma}q} \sum_{\substack{n\leq m\\ k: Q_{n,k} \supseteq T_{m,j} }} 2^{n\alpha\frac{\gamma}q}\\
&=\sum_{m,j} \mu^{\gamma}_{m,j} 2^{-m\alpha\frac{\gamma}q} \sum_{n\leq m} 2^{n\alpha\frac{\gamma}q}\simeq \sum_{m,j} \mu^{\gamma}_{m,j} <+\infty.
\end{align*}
\medskip

(c) Here we give a measure $\mu$ for which $ \sum\limits_{n,k} \mu_{n,k}^{1/(1-\alpha)}<+\infty$ but $\P\bigl(\Lambda_\mu\ \textrm{is $\alpha$-Carleson}\bigr)=0$. Let
\[
d\mu(z)= \frac{dm(z)}{(1-|z|^2)^{1+\alpha} \log\bigl(\frac e{1-|z|^2}\bigr)},
\]
which is the measure $\mu=\mu(1+\alpha,1)$ given in the family of examples of Section~\ref{Examples}. By a simple computation (see \eqref{ex:radial}),
\[
\mu_{n,k}\simeq \frac{2^{-n(1-\alpha)}}n\qquad n\geq 1,\ k=0, \dots, 2^n-1,
\]
and therefore, since $k$ ranges over $2^n$ terms,
\begin{align*}
\sum_{n,k}\mu_{n,k}^{1/(1-\alpha)}\simeq \sum_{n\geq 1} \frac{1}{n^{1/(1-\alpha)}}<+\infty.
\end{align*}
On the other hand, letting $Y_{n,k}$ be as in the proof of part (b) (see \eqref{ynk-alpha}), we get
\beqa
\E(Y_{n,k})&=&2^{n\alpha} \sum_{m\ge n}2^{-m\alpha}\sum_{j:T_{m,j}\subset Q_{n,k}}\mu_{m,j} \simeq 2^{n\alpha} \sum_{m\ge n}2^{-m\alpha} 2^{m-n}\frac{2^{-(1-\alpha)m}}{m}\\
&=&2^{-(1-\alpha)n}\sum_{m\ge n}\frac{1}{m}=+\infty.
\eeqa
Thus the expected weight of any single Carleson window $Q_{n,k}$ is infinite and $\Lambda_{\mu}$ cannot be $\alpha$-Carleson.
\medskip

(d) One could think of considering a divergent series $\sum_{n,k}\mu_{n,k}^{\gamma}=+\infty$ such that $\sum_{n,k}\mu_{n,k}^{\gamma'} <+\infty$ for every $\gamma'>\gamma$, and then applying (b), showing that $\Lambda_{\mu}$ is $\alpha$-Carleson when $\gamma'<\frac{1}{1-\alpha}$, i.e. when $\alpha>1-\frac{1}{\gamma'}=\frac{\gamma'-1}{\gamma'}$. However, this does not yield the whole range $\alpha\in (0,1)$ for a fixed measure, as required by the statement. In order to construct an example working for all $\alpha\in (0,1)$, we pick a measure $\mu$ supported in a Stolz angle with vertex at 1, i.e. let, for $n\geq 1$,
\[
\mu_{n,k}=\begin{cases}
\dfrac{1}{n^{1/\gamma}}&\textrm{if }k=0\\
\quad 0 &\textrm{if } k\geq 1.
\end{cases}
\]
(We could equivalently take the measure $\tau(2,1/\gamma)$ given in Subsection~\ref{Examples}, Example 3). Then
\begin{equation}\label{SumDiv}
\sum_{n,k}\mu_{n,k}^{\gamma}=\sum_n \frac{1}{n}=\infty
\end{equation}
but for every $\gamma'>\gamma$,
\[
\sum_{n,k}\mu_{n,k}^{\gamma'}=\sum_n \frac{1}{n^{\gamma'/\gamma}}<+\infty.
\]
To prove that $\Lambda_\mu$ is almost surely $\alpha$-Carleson we argue as before. Set $Y_{n,k}$ as in the proof of (b) (see \eqref{ynk-alpha}) and follow the same steps to prove that
\[
\P(Y_{n,k}\ge A)\lesssim B_{n,k}^A,
\]
where
\[
B_{n,k}=\sum_{m\ge n} \sum_{j:T_{m,j}\subset Q_{n,k}} \mu_{m,j}2^{-(m-n)\alpha}.
\]
By construction $B_{n,k}=0$ for all $k>0$. On the other hand,
\[
B_{n,0}=2^{n\alpha}\sum_{m\ge n}2^{-m\alpha}\mu_{m,0} =2^{n\alpha}\sum_{m\ge n}\frac{2^{-m\alpha}}{m^{1/\gamma}} \lesssim \frac{1}{n^{1/\gamma}}.
\]
(Observe that this last expression is independent of $\alpha$.)
Hence \[ \sum_{n,k}B_{n,k}^{\gamma'}=\sum_nB_{n,0}^{\gamma'} \lesssim \sum_n \frac{1}{n^{\gamma'/\gamma}}<+\infty, \] and as in the proof of (b) the Borel-Cantelli lemma allows us to conclude that $\Lambda_\mu$ is almost surely $\alpha$-Carleson. \section{Random interpolating sequences}\label{int_h} In this section we discuss several consequences of Theorems~\ref{thm:separation} and~\ref{thm:Carleson} on random interpolating sequences $\Lambda_\mu$ for various spaces of holomorphic functions in $\D$. The results are rather straightforward consequences of the aforementioned theorems and the known conditions for such sequences. \subsection{Hardy (and Bergman) spaces}\label{HardyBergman} In this section we completely characterize the measures $\mu$ for which the associated Poisson process $\Lambda_\mu$ is almost surely an interpolating sequence for the Hardy spaces. Recall that a sequence $\Lambda=\{\lambda_n\}_{n\in\N}\subset\D$ is interpolating for \[ H^\infty=\bigl\{f\in H(\D) : \|f\|_\infty=\sup_{z\in \D} |f(z)|<\infty\bigr\} \] whenever for every bounded sequence of values $\{w_n\}_{n\in\N}\subset\C$ there exists $f\in H^\infty$ such that $f(\lambda_n)=w_n$, $n\in\N$. According to a famous theorem by L. Carleson, $\Lambda$ is $H^\infty$-interpolating if and only if it is separated and 1-Carleson \cite{Ca}. This characterization extends to all Hardy spaces \[ H^p=\Bigl\{f\in H(\D) : \|f\|_p=\sup_{r<1}\Bigl(\int_0^{2\pi} |f(re^{it})|^p\, \frac{dt}{2\pi}\Bigr)^{1/p}<+\infty\Bigr\}\qquad 0<p<\infty, \] for which the interpolation problem is defined in a similar manner (the data $w_n$ to be interpolated should satisfy $\sum_n(1-|\lambda_n|^2)|w_n|^p<+\infty$, see e.g. \cite{Du}*{Chapter 9}). The summability condition ensuring separation in Theorem~\ref{thm:separation} immediately implies, via Theorem~\ref{thm:Carleson}, that $\Lambda_\mu$ is 1-Carleson, and the following result follows. \begin{theorem}\label{thm:Hardy} Let $\Lambda_\mu$ be the Poisson process associated to a positive, $\sigma$-finite, locally finite measure $\mu$. Then, for any $0<p\leq \infty$, \[ \P\bigl(\Lambda_\mu\ \textrm{is $H^p$-interpolating}\bigr) = \begin{cases} 1\quad &\textrm{if $\ \displaystyle\sum\limits_{n,k}\mu_{n,k}^{2}<\infty$} \\ 0\quad &\textrm{if $\ \displaystyle\sum\limits_{n,k}\mu_{n,k}^{2}=\infty$}. \end{cases} \] \end{theorem} To complete the picture we discuss zero sequences $\Lambda$ for $H^p$, $0<p\leq \infty$. These are deterministically characterized by the Blaschke condition $\sum_{\lambda\in\Lambda}(1-|\lambda|)<\infty$. Noticing that $\{\sum_{\lambda\in\Lambda_\mu}(1-|\lambda|)<\infty\}$ is a tail event and using Kolmogorov's 0-1 law we get: \begin{proposition}\label{prop:Blaschke} Let $\Lambda_\mu$ be the Poisson process associated to a positive, $\sigma$-finite, locally finite measure $\mu$. Then, for any $0<p\leq \infty$, \[ \P\bigl(\Lambda_\mu\ \textrm{is a zero set for $H^p$}\bigr) = \begin{cases} 1\quad &\textrm{if $\ \displaystyle\sum\limits_{n,k}2^{-n}\mu_{n,k}<\infty$} \\ 0\quad &\textrm{if $\ \displaystyle\sum\limits_{n,k}2^{-n}\mu_{n,k}=\infty$}. \end{cases} \] \end{proposition} Observe that the condition is just \[ \E\bigl[\sum_{\lambda\in\Lambda_\mu}(1-|\lambda|)\bigr]=\E\bigl[\sum_{n,k}\sum_{\lambda\in T_{n,k}}(1-|\lambda|)\bigr] \simeq \sum\limits_{n,k}2^{-n}\E\bigl[X_{n,k}\bigr] =\sum\limits_{n,k}2^{-n}\mu_{n,k}<\infty. \] Observe also that $\sum_{k=0}^{2^n-1}\mu_{n,k}=\mu(A_n)$ for all $n\in\N$, hence \[ \sum\limits_{n,k}2^{-n}\mu_{n,k}=\sum_n 2^{-n}\mu(A_n).
\] \begin{proof}[Proof of Proposition~\ref{prop:Blaschke}] Denote $X_n=N_{A_n}=\sum_{k=0}^{2^n-1} X_{n,k}$ and $\mu_n=\E[X_n]=\mu(A_n)$. Assume first that $\sum_{n} 2^{-n} \mu_n<+\infty$. Set $Y=\sum_n 2^{-n} X_n$ and observe that, by the independence of the different $X_n$, \begin{align*} \E[Y]&=\sum_n 2^{-n} \mu_n<+\infty, && \Var(Y)=\sum_n 2^{-2n}\mu_n<+\infty. \end{align*} Then, by Markov's inequality, \[ \P(Y\ge 2\E(Y))\le \frac{1}{2}, \] so that $\P(Y=+\infty)\le 1/2$. Since $\{Y=\infty\}$ is a tail event, Kolmogorov's 0-1 law implies that $\P(Y=+\infty)=0$; in particular the Blaschke sum is finite almost surely. Assume now that $\sum_{n} 2^{-n} \mu_n=+\infty$. Split the sum in two parts: \[ \sum_n 2^{-n}\mu_n=\sum_{n:\mu_n\le 2^n/n^2}2^{-n}\mu_n+ \sum_{n:\mu_n> 2^n/n^2}2^{-n}\mu_n. \] It is enough to consider the second sum, since the first one obviously converges. Since $\Var[X_n]=\mu_n$, Chebyshev's inequality yields \[ \P(X_n\le \frac{1}{2}\mu_n) =\P(X_n\le \mu_n-\frac{\mu_n}{2})\leq \P(|X_n- \mu_n|\geq \frac{\mu_n}{2}) \le \frac{4}{\mu_n}. \] Hence \[ \sum_{n:\mu_n> 2^n/n^2} \P(X_n\le \frac{1}{2}\mu_n) \le \sum_{n:\mu_n> 2^n/n^2}\frac{4}{\mu_n}\le \sum_{n:\mu_n> 2^n/n^2}\frac{4n^2}{2^n} <+\infty. \] Now, by the Borel-Cantelli lemma, almost surely $X_n>\frac{1}{2}\mu_n$ for all but at most finitely many of the $n$ with $\mu_n>2^n/n^2$; hence \[ \sum_{n:\mu_n> 2^n/n^2} 2^{-n}X_n \gtrsim\sum_{n:\mu_n> 2^n/n^2}2^{-n}\mu_n, \] which diverges, by hypothesis. \end{proof} \subsubsection{Remark. Interpolation in Bergman spaces} Interpolating sequences $\Lambda$ for the (weighted) Bergman spaces \[ B_\alpha^p=\Bigl\{f\in H(\D) : \|f\|_{\alpha,p}^p=\int_{\D} |f(z)|^p (1-|z|^2)^{\alpha p-1} dm(z)<\infty\Bigr\}, \] with $0<\alpha$, $0<p\leq \infty$, are characterized by separation together with the upper density condition \[ D_+(\Lambda):=\limsup_{r\to 1^-} \sup_{z\in\D} \frac{\sum\limits_{1/2<\rho(z,\lambda)\leq r}\log\frac 1{\rho(z,\lambda)}}{\log(\frac 1{1-r})}<\alpha \] (see \cite{Se2} and \cite{HKZ}*{Chapter 5} for both the definitions and the results). Since every $1$-Carleson sequence has density $D_+(\Lambda)=0$, the same conditions of Theorem~\ref{thm:Hardy} also characterize a.s. Bergman interpolating sequences, regardless of the indices $\alpha$ and $p$. Again, because of the big fluctuations of the Poisson process, the conditions required to have separation a.s. are so strong that they can only produce sequences of zero upper density. Another indication of the big fluctuations of the Poisson process is the following. For the invariant measure $d\nu(z)=\frac{dm(z)}{(1-|z|^2)^2}$, which obviously satisfies $\nu_{n,k}\simeq 1$ for all $n$, $k$, it is not difficult to see that almost surely, \[ D_+(\Lambda_{\nu})=+\infty\qquad\textrm{and}\quad D_-(\Lambda_{\nu}):=\liminf_{r\to 1^-} \inf_{z\in\D} \frac{\sum\limits_{1/2<\rho(z,\lambda)\leq r}\log\frac 1{\rho(z,\lambda)}}{\log(\frac 1{1-r})}=0. \] Therefore there are far too many points for $\Lambda_\nu$ to be interpolating for any $B_\alpha^p$, but there are too few for it to be sampling, since these sets must have strictly positive lower density $D_-(\Lambda)$ (see \cite{HKZ}*{Chapter 5}). \subsection{Interpolation in the Bloch space} We consider now interpolation in the Bloch space $\mathcal B$, consisting of functions $f$ holomorphic in $\D$ such that \[ \|f\|_{\mathcal B}:=|f(0)|+\sup_{z\in\D}|f'(z)|(1-|z|^2)<+\infty.
\] Since Bloch functions satisfy the Lipschitz condition $|f(z)-f(w)|\leq \|f\|_{\mathcal B}\, \delta(z,w)$, where $\delta(z,w)=\frac{1}{2}\log\frac{1+\rho(z,w)}{1-\rho(z,w)}$ denotes the hyperbolic distance, A. Nicolau and B. B\o e defined interpolating sequences for $\mathcal B$ as those $\Lambda=\{\lambda_n\}_{n\in\N}$ such that for every sequence of values $\{v_n\}_{n\in\N}$ with $\sup\limits_{n\neq m}\frac{|v_n-v_m|}{\delta(\lambda_n,\lambda_m)}<\infty$ there exists $f\in\mathcal B$ with $f(\lambda_n)=v_n$, $n\in\N$ \cite{BN}. \begin{theorem*}[\cite{BN}*{p.~172}, \cite{S}*{Theorem 7}]\label{thmBN} A sequence $\Lambda$ of distinct points in $\D$ is an interpolating sequence for $\mathcal B$ if and only if: \begin{itemize} \item[(a)] $\Lambda$ can be expressed as the union of at most two separated sequences, \item[(b)] for some $0<\gamma<1$ and $C>0$, \[ \#\bigl\{\lambda\in\Lambda : \rho(z,\lambda)<r\bigr\}\le \frac{C}{(1-r)^{\gamma}} \] independently of $z\in\D$. \end{itemize} \end{theorem*} As explained in \cite{BN}, condition (b) can be replaced by: \begin{itemize} \item [(b)'] for some $0<\gamma<1$ and $C>0$, and for all Carleson windows $Q(I)$, \[ \#\bigl\{\lambda\in Q(I) : 2^{-(l+1)}|I|<1-|\lambda| < 2^{-l}|I|\bigr\}\leq C 2^{\gamma l}\ , \qquad l\geq 0. \] \end{itemize} In \cite{S}*{Corollary 2} it is mentioned that it can also be replaced by: \begin{itemize} \item [(b)''] there exists $0<\gamma<1$ such that $\Lambda$ is $\gamma$-Carleson. \end{itemize} In view of conditions (a) and (b)'' the following characterization of Poisson processes which are a.s. Bloch interpolating sequences follows from Theorems~\ref{thm:separation} and~\ref{thm:Carleson}(b) (with $\gamma\in (2/3,1)$). \begin{theorem} Let $\Lambda_\mu$ be the Poisson process associated to a positive, $\sigma$-finite, locally finite measure $\mu$. Then, \[ \P\bigl(\Lambda_\mu\ \textrm{is $\mathcal B$-interpolating}\bigr) = \begin{cases} 1\quad &\textrm{if $\ \displaystyle\sum\limits_{n,k}\mu_{n,k}^{3}<\infty$} \\ 0\quad &\textrm{if $\ \displaystyle\sum\limits_{n,k}\mu_{n,k}^{3}=\infty$}. \end{cases} \] \end{theorem} \emph{Note.} In case $\sum_{n,k}\mu_{n,k}^3<\infty$ it is also possible to prove (b)' directly, with the same methods employed in the proof of Theorem~\ref{thm:Carleson}. It is enough to prove the estimate for dyadic arcs $I_{n,k}$, and for those \[ \#\bigl\{\lambda\in Q(I_{n,k}) : 2^{-(l+1)}|I_{n,k}|<1-|\lambda| < 2^{-l}|I_{n,k}|\bigr\}\simeq \sum_{j : T_{n+l,j}\subset Q_{n,k}} X_{n+l, j}. \] In the above, the left hand side corresponds essentially to the number of points in the layer $Q(I_{n,k})\cap A_{n+l}$. Thus, with $m=n+l$, (b)' is equivalent to \[ \sup_{n,k}\sup_{m\geq n} 2^{-\gamma(m-n)} \sum_{j : T_{m,j}\subset Q_{n,k}} X_{m,j}<+\infty. \] Letting \[ Y_{n,k,m}=2^{-\gamma(m-n)} \sum_{j : T_{m,j}\subset Q_{n,k}} X_{m,j}\ ,\quad \E[Y_{n,k,m}]=2^{-\gamma(m-n)} \sum_{j : T_{m,j}\subset Q_{n,k}} \mu_{m,j} \] and proceeding as in the first part of the proof of Theorem~\ref{thm:Carleson}(a) we get (taking $A=3$): \begin{align*} \sum_{n,k}\sum_{m\geq n} \P\bigl(Y_{n,k,m}\geq 3\bigr)&\lesssim\sum_{n,k}\sum_{m\geq n} \Bigl[2^{-\gamma(m-n)} \sum_{j : T_{m,j}\subset Q_{n,k}} \mu_{m,j}\Bigr]^3\\ &\leq \sum_{n,k}\sum_{m\geq n} 2^{-3\gamma(m-n)} \sum_{j : T_{m,j}\subset Q_{n,k}} \mu_{m,j}^3\ 2^{2(m-n)}\\ &=\sum_{m,j} \mu_{m,j}^3 \sum_{n\leq m} \sum_{k : Q_{n,k} \supseteq T_{m,j}} 2^{-(3\gamma-2)(m-n)}.
\end{align*} For any $\gamma>2/3$ this sum is bounded by a constant multiple of $\sum_{m,j} \mu_{m,j}^3$, so we can conclude with the Borel-Cantelli lemma. \subsection{Interpolation in Dirichlet spaces} Our last set of results concerns interpolation in the Dirichlet spaces, \[ \mathcal D_\alpha=\bigl\{f\in H(\D) : \|f\|_{\mathcal D_\alpha}^2=|f(0)|^2+\int_{\mathbb D} |f'(z)|^2 (1-|z|^2)^{\alpha} dm(z)<\infty\bigr\}, \] with $\alpha\in(0,1)$. The limiting case $\alpha=1$ can be identified with the Hardy space $H^2$. In these spaces, interpolating sequences are characterized by the separation and a Carleson type condition. This was initially considered by W.S. Cohn, see \cite{Coh}; we refer also to the general result \cite{AHMR}. While separation is a simple condition, which in our random setting is completely characterized by Theorem \ref{thm:separation}, the characterization of Carleson measures in these spaces is much more delicate. This was achieved by D. Stegenga using the so-called $\alpha$-capacity \cite{St}. In our setting it is however possible to use an easier sufficient one-box condition that can be found in K. Seip's book, see \cite{S}*{Theorem 4, p.~38}, which we recall here for the reader's convenience. \begin{theorem}[Seip] A separated sequence $\Lambda$ in $\D$ is interpolating for $\mathcal D_{\alpha}$, $0<\alpha<1$, if there exists $0<\alpha'<\alpha$ such that $\Lambda$ is $\alpha'$-Carleson. \end{theorem} The reader should be alerted that in Seip's book the space $\mathcal D_{\alpha}$ is defined in a slightly different way, and that the above statement is adapted to our definition. For these spaces Theorems \ref{thm:separation} and \ref{thm:Carleson} lead to less precise conclusions. Indeed, in view of Theorem~\ref{thm:Carleson}(c),(d) we cannot hope for complete characterizations if we do not impose additional conditions on the measure $\mu$. \begin{theorem}\label{thm:Dirichlet} Let $\Lambda_\mu$ be the Poisson process associated to a positive, $\sigma$-finite, locally finite measure $\mu$. \begin{itemize} \item [(a)] If $1/2< \alpha<1$, then \[ P\bigl(\Lambda_\mu \text{ is interpolating for $\mathcal D_\alpha$}\bigr)= \begin{cases} 1\quad \textrm{if $\ \displaystyle\sum\limits_{n,k} \mu_{n,k}^{2}<+\infty$}\\ 0\quad \textrm{if $\ \displaystyle\sum\limits_{n,k} \mu_{n,k}^{2}=+\infty$}. \end{cases} \] \item [(b)] If $0\le \alpha<1/2$ and there exists $1<\gamma<\frac{1}{1-\alpha}$ such that $\sum\limits_{n,k} \mu_{n,k}^{\gamma}<+\infty$, then \[ P\bigl(\Lambda_\mu \text{ is interpolating for $\mathcal D_\alpha$}\bigr)=1. \] \end{itemize} \end{theorem} Clearly, the condition $\sum_{n,k}\mu_{n,k}^2<+\infty$ is also necessary in the case (b) (if the sum diverges, then $\Lambda_{\mu}$ is almost surely not separated). \begin{proof} (a) If $\sum_{n,k}\mu_{n,k}^2=+\infty$, then $\Lambda_{\mu}$ is almost surely not separated by Theorem \ref{thm:separation}, hence it is almost surely not interpolating. If $\sum_{n,k}\mu_{n,k}^2<+\infty$, Theorem \ref{thm:separation} shows again that the sequence $\Lambda_{\mu}$ is almost surely separated. By Seip's theorem, it remains to show that $\Lambda_{\mu}$ is almost surely $\alpha'$-Carleson for some $\alpha'<\alpha$. Pick $1/2<\alpha'<\alpha<1$, so that $1/(1-\alpha')>2$. Choosing $\gamma\in (2,1/(1-\alpha'))$ we get \[ \sum_{n,k}\mu_{n,k}^{\gamma}\lesssim \sum_{n,k}\mu_{n,k}^2<+\infty, \] and by Theorem \ref{thm:Carleson}(b) we conclude that $\Lambda_{\mu}$ is almost surely $\alpha'$-Carleson.
(b) If $\alpha<1/2$ then $1/(1-\alpha)<2$ and the value $\gamma$ given by the hypothesis satisfies $1<\gamma<2$. Therefore \[ \sum_{n,k}\mu_{n,k}^2\lesssim \sum_{n,k}\mu_{n,k}^{\gamma}<+\infty, \] which allows us to deduce from Theorem \ref{thm:separation} that $\Lambda_{\mu}$ is almost surely separated. Since the inequality $\gamma<1/(1-\alpha)$ is strict, we also have $\gamma<1/(1-\alpha')$ for some $\alpha'<\alpha$ sufficiently close to $\alpha$. Again, Theorem \ref{thm:Carleson}(b) shows that $\Lambda_{\mu}$ is almost surely $\alpha'$-Carleson, and Seip's theorem implies that $\Lambda_{\mu}$ is almost surely interpolating. \end{proof} \subsection{Additional remarks and comments} The above results show several applications of our Theorems \ref{thm:separation} and \ref{thm:Carleson}, but they also give rise to many challenging questions. Is it possible to get a necessary counterpart of Theorem~\ref{thm:Carleson}(b) under reasonable conditions on $\mu$ (more general than the class considered in Section \ref{Examples} below)? Is it possible to get precise statements when $\alpha=1/2$? Also, the case of the classical Dirichlet space seems to be largely unexplored for Poisson point processes, while the situation regarding interpolation, separation and zero-sets for the radial probabilistic model is completely known for all $\alpha\in [0,1]$ (see \cites{CHKW,Bog}). \section{Examples and integral conditions for the measure $\mu$} In the first part of this final section we illustrate the above results with three simple families of measures on $\D$. In the second part we briefly discuss alternative, non-discrete, formulations of the conditions given in the previous statements. \subsection{Examples}\label{Examples} \emph{1. Radial measures}. Let $dm$ denote the normalized Lebesgue measure and let $d\nu(z)=\frac{dm(z)}{(1-|z|^2)^2}$ be the invariant measure in $\D$. Define \[ d\mu(a,b)(z)= \frac{dm(z)}{(1-|z|^2)^{a} \log^b\bigl(\frac e{1-|z|^2}\bigr)}= \frac{d\nu(z)}{(1-|z|^2)^{a-2} \log^b\bigl(\frac e{1-|z|^2}\bigr)}, \] where either $a> 1$, $b\in\R$, or $a=1$ and $b\leq 1$ (so that $\mu(a,b)(\D)=+\infty$). Observe that \[ \mu(a,b)_{n,k}\simeq \frac{2^{-n(2-a)}}{n^b}\qquad n\geq 1,\ k=0, \dots, 2^n-1, \] and therefore, for $\gamma>0$, \begin{equation}\label{ex:radial} \sum_{n,k} \mu(a,b)_{n,k}^\gamma\simeq \sum_n 2^n \frac{2^{-n(2-a)\gamma}}{n^{b\gamma}}= \sum_n\frac{2^{-n[(2-a)\gamma-1]}}{n^{b\gamma}}. \end{equation} \begin{proposition} Consider the Poisson process $\Lambda_{a,b}$ associated to the measure $\mu(a,b)$, with either $a>1$ or $a=1$ and $b\leq 1$. \begin{itemize} \item [(a)] $\Lambda_{a,b}$ can a.s. be expressed as a union of $M$ separated sequences if and only if either $a<2-\frac 1{M+1}$ and $b\in\R$, or $a=2-\frac 1{M+1}$ and $b>\frac 1{M+1}$. \item [(b)] In particular, $\Lambda_{a,b}$ is a.s. separated if and only if either $a<3/2$ and $b\in\R$, or $a=3/2$ and $b> 1/2$. \item [(c)] $\Lambda_{a,b}$ is a.s. a 1-Carleson sequence if and only if $a<2$, $b\in\R$. \item [(d)] Let $\alpha\in (0,1)$. Then $\Lambda_{a,b}$ is a.s. an $\alpha$-Carleson sequence if and only if $a<1+\alpha$ or $a=1+\alpha$ and $b> 1$. \end{itemize} \end{proposition} \begin{proof} (a) is immediate from Theorem~\ref{thm:separation} and \eqref{ex:radial} with $\gamma=M+1$. (c) If $a\geq 2$ the series in \eqref{ex:radial} diverges for all $\gamma>1$, thus by Theorem~\ref{thm:Carleson}(a) $\Lambda_{a,b}$ is a.s. not 1-Carleson.
On the other hand, if $a<2$ there exists $\gamma$ such that $(2-a)\gamma-1>0$ (i.e., such that $\gamma>\frac{1}{2-a}$). For that $\gamma$ the series in \eqref{ex:radial} converges, and we can conclude again by Theorem~\ref{thm:Carleson}(a). (d) Suppose first that $a<1+\alpha$. As in the previous case, since $2-a>1-\alpha$ there exists $\gamma\in(\frac 1{2-a},\frac 1{1-\alpha})$. For this $\gamma$ the series in \eqref{ex:radial} converges and we can apply Theorem~\ref{thm:Carleson}(b). If $a>1+\alpha$ and $b\in\R$, then $\Lambda_{\mu(a,b)}$ contains in the mean more points than $\Lambda_{\mu(1+\alpha,1)}$, which we have shown in Theorem \ref{thm:Carleson}(c) to be almost surely not $\alpha$-Carleson. It remains to consider the case $a=1+\alpha$. Again, when $b=1$ --- and thus also when $b<1$ since then we have more points in the mean --- the proof of Theorem~\ref{thm:Carleson}(c) shows that the corresponding sequence is almost surely not $\alpha$-Carleson. Finally, suppose that $a=1+\alpha$ and $b>1$. Recall from \eqref{ynk-alpha} the notation \[ Y_{n,k}=2^{n\alpha}\sum_{m\geq n} 2^{-m\alpha} \sum_{j: T_{m,j}\subset Q_{n,k}} X_{m,j}. \] In the proof of Theorem \ref{thm:Carleson}(b) we have shown that \[ \P(Y_{n,k}\ge A)\lesssim B_{n,k}^A, \] where \[ B_{n,k}=\sum_{m\geq n} 2^{-(m-n)\alpha} \sum_{j: T_{m,j}\subset Q_{n,k}} \mu_{m,j}. \] From the explicit form of $\mu_{m,j}$ we get \[ B_{n,k}\simeq \sum_{m \ge n}2^{-(m-n)\alpha}\times 2^{m-n}\times \frac{2^{-m(2-a)}}{m^b} =2^{n(\alpha-1)}\sum_{m\ge n}\frac{2^{-m(\alpha+1-a)}}{m^b}, \] which converges exactly when $a<1+\alpha$ or when $a=1+\alpha$ and $b>1$, which is the case we are interested in here. In this situation, since $\sum_{m\ge n}m^{-b}\simeq n^{1-b}$ for $b>1$, we get \[ B_{n,k}\simeq \frac{2^{-n(1-\alpha)}}{n^{b-1}}. \] Clearly, when $A> 1/(2-a)=1/(1-\alpha)$, the sum $\sum_{n,k}B_{n,k}^A$ converges, and the Borel-Cantelli lemma shows that $Y_{n,k}\ge A$ can happen for at most finitely many Carleson windows $Q_{n,k}$. Hence $\Lambda_{\mu(a,b)}$ is a.s. $\alpha$-Carleson. \end{proof} \emph{2. Measures with a singularity on $\T$}. Define now \[ d\sigma(a,b)(z)= \frac{dm(z)}{|1-z|^{a} \log^b\bigl(\frac e{|1-z|}\bigr)}, \] where either $a>2$, $b\in\R$, or $a=2$ and $b\leq 1$ (so that $\sigma(a,b)(\D)=+\infty$). Here \begin{align*} \sigma (a,b)_{n,k}=\sigma (a,b)(T_{n,k})\simeq \frac{2^{-2n}}{[(k+1)2^{-n}]^a \log^b\bigl(\frac e{(k+1)2^{-n}}\bigr)},\quad n\in\N,\ k=0,\ldots, 2^{n-1}. \end{align*} Hence for $\gamma>1$, \begin{equation}\label{ex:radial1} \sum_{n,k} \sigma (a,b)_{n,k}^\gamma\simeq \sum_{n} {2^{-n\gamma(2-a)}} \sum_{k=1}^{2^n}\frac 1{\Big(k^a\log^b\bigl(\frac e{k2^{-n}}\bigr)\Big)^{\gamma}}. \end{equation} Let us examine the growth of the sum in $k$. For that, set \[ S_n(a,b,\gamma)=\sum_{k=1}^{2^n}\frac 1{k^{a\gamma}\log^{b\gamma}\bigl(\frac e{k2^{-n}}\bigr)} \simeq \int_1^{2^n}\frac{dx}{x^{a\gamma}\log^{b\gamma}\bigl(\frac e{x2^{-n}}\bigr)}. \] The change of variable $t=\log\bigl(\frac e{x2^{-n}}\bigr)$ leads to \[ S_n(a,b,\gamma)\simeq \int_{\log(2^ne)}^1 \left(\frac{e^t}{e2^n}\right)^{a\gamma-1}\frac{-dt}{t^{b\gamma}} =\frac{2^{-n(a\gamma-1)}}{e^{a\gamma -1}}\int_{1}^{\log(2^ne)}e^{t(a\gamma-1)}\frac{dt}{t^{b\gamma}}. \] Our standing assumption being $a>2$ or $a=2$ and $b\le 1$, we only need to consider these two cases.
In both cases, $e^{t(a\gamma-1)}/t^{b\gamma}\to +\infty$ when $t\to+\infty$, and the last integral is governed by the value of the integrand at the upper limit of integration: \[ \int_{1}^{\log(2^ne)}e^{t(a\gamma-1)}\frac{dt}{t^{b\gamma}} \simeq \frac{2^{n(a\gamma-1)}}{n^{b\gamma}}. \] Hence \[ S_n(a,b,\gamma)\simeq \frac{1}{n^{b\gamma}}, \] and \begin{equation}\label{ex:radial3} \sum_{n,k} \sigma (a,b)_{n,k}^\gamma\simeq \sum_n 2^{-n\gamma(2-a)}\times \frac{1}{n^{\gamma b}}=\sum_n \frac{2^{-n\gamma(2-a)}}{n^{\gamma b}}. \end{equation} We are now in a position to prove the following result. \begin{proposition}\label{Prop:Ex2} Consider the Poisson process $\tilde \Lambda_{a,b}$ associated to the measure $\sigma(a,b)$, with either $a>2$ or $a=2$ and $b\leq 1$. \begin{itemize} \item [(a)] For $a>2$ the process $\tilde \Lambda_{a,b}$ is a.s.\ neither a finite union of separated sequences nor an $\alpha$-Carleson sequence, for any $\alpha\in (0,1]$. \item [(b)] For $a=2$ the process $\tilde \Lambda_{2,b}$ is \begin{itemize} \item [(i)] the union of $M$ separated sequences if and only if $b>\frac 1{M+1}$, \item [(ii)] $\alpha$-Carleson for $\alpha\in (0,1)$ if $b>1-\alpha$. \end{itemize} \end{itemize} \end{proposition} \begin{proof} (a) is immediate from Theorems~\ref{thm:separation} and~\ref{thm:Carleson}, since \eqref{ex:radial3} diverges for all $\gamma>1$. (b) In this case the series \eqref{ex:radial3} is just $\sum_n 1/n^{b\gamma}$. The case (i) follows from Theorem~\ref{thm:separation} with $\gamma=M+1$. For (ii), by the hypothesis $1/b<1/(1-\alpha)$, there exists $1/b<\gamma<1/(1-\alpha)$, for which the series \eqref{ex:radial3} converges. We can conclude by Theorem~\ref{thm:Carleson}(b). \end{proof} \emph{3. Measures in a cone}. Given a point $\zeta\in\T$, consider a Stolz region \[ \Gamma(\zeta)=\bigl\{z\in \D :\frac{|\zeta-z|}{1-|z|}<2\bigr\}. \] We discuss the previous measures restricted to $\Gamma(\zeta)$. Without loss of generality we can assume that $\zeta=1$. Let thus \[ d\tau(a,b)(z)= \chi_{\Gamma(1)}(z) d\mu(a,b)(z)=\chi_{\Gamma(1)}(z)\frac{dm(z)}{(1-|z|^2)^{a} \log^b\bigl(\frac e{1-|z|^2}\bigr)}, \] where now either $a> 2$, $b\in\R$, or $a=2$ and $b\leq 1$ (so that $\tau(a,b)(\D)=+\infty$). Since in $\Gamma(1)$ the measures $d\mu(a,b)$ and $d\sigma(a,b)$ behave similarly, we could replace $d\mu(a,b)$ by $d\sigma(a,b)$ in the definition of $d\tau(a,b)$. Observe that $\tau(a,b)_{n,k}$ is non-zero only for a finite number $N$ of $k$ at each level $n$, and that for those $k$ \[ \tau(a,b)_{n,k}\simeq \frac{2^{-n(2-a)}}{n^b}\qquad n\geq 1,\ k=0, \dots, N. \] Hence \begin{equation}\label{ex:radial2} \sum_{n,k} \tau(a,b)_{n,k}^\gamma\simeq \sum_n \frac{2^{-n(2-a)\gamma}}{n^{b\gamma}}, \end{equation} which is exactly the same estimate as in \eqref{ex:radial3} and thus immediately leads to the same result as Proposition \ref{Prop:Ex2}. This might look surprising since $\sigma(a,b)$ (and a fortiori $\mu(a,b)$) puts infinite mass outside $\Gamma(\zeta)$ (actually outside Stolz angles at $\zeta$ with arbitrary opening). \begin{proposition} Consider the Poisson process $\hat \Lambda_{a,b}$ associated to the measure $\tau(a,b)$, with either $a>2$ or $a=2$ and $b\leq 1$. \begin{itemize} \item [(a)] For $a>2$ the process $\hat \Lambda_{a,b}$ is a.s.\ neither a finite union of separated sequences nor $\alpha$-Carleson for any $\alpha\in (0,1]$.
\item [(b)] For $a=2$ the process $\hat \Lambda_{2,b}$ is \begin{enumerate} \item the union of $M$ separated sequences if and only if $b>\frac 1{M+1}$, \item $\alpha$-Carleson for $\alpha\in (0,1)$ if $b>1-\alpha$. \end{enumerate} \end{itemize} \end{proposition} \subsection{Integral conditions on $\mu$} Given a locally finite, $\sigma$-finite measure $\mu$, a natural question is whether the discretized conditions appearing in Theorems \ref{thm:separation} and \ref{thm:Carleson} can be reformulated in terms of integrals. Let us assume that $\mu$ is absolutely continuous with respect to the Lebesgue (or the invariant) measure on $\D$. In view of the aforementioned discrete conditions this is not really restrictive, since in case $\mu$ had a singular part we could just redistribute its mass continuously on each $T_{n,k}$. Assume thus that $d\mu=h\, d\nu$, where $d\nu=\frac{dm(z)}{(1-|z|^2)^2}$ is the invariant measure, $h\geq 0$ and $h\in L^1_{loc}(\D; \nu)$ (but $h\notin L^1(\D; \nu)$, so that $\mu(\D)=\infty$). As a result of Jensen's inequality applied to $\nu$ on $T_{n,k}$, we deduce the following general observation. \begin{proposition} Let $d\mu=h\, d\nu$, where $h\geq 0$ and $h\in L^1_{loc}(\D;\nu)$. For every $\gamma>1$ there exists $C>0$ such that \begin{equation}\label{intcond} \sum_{n,k}\mu_{n,k}^{\gamma}\leq C \int_{\D} h^{\gamma}(z)\, d\nu(z). \end{equation} \end{proposition} Of course without additional conditions on $h$ the conditions on the sum and on the integral cannot be equivalent, and there are standard construction methods to find measures for which the sum is convergent while the integral diverges. We do not go into the details here. One is obviously more interested in situations where the sum and the integral conditions are equivalent. This is for instance the case when $h$ is radial with some regularity conditions (as in the first class of examples in the previous subsection) or when $h$ is subharmonic (as in the second class of examples). Another mildly regular case in which an equivalent reformulation in terms of integrals is possible is when $\mu$ is doubling, meaning that there exists $C>0$ such that $\mu(2D)\leq C \mu(D)$ for all open discs $D\subset\D$. Here $2D$ denotes the disc with the same center as $D$ but with double radius (in the pseudohyperbolic metric). Fixing any $c\in (0,1)$ and defining $F_\mu(z)=\mu(D(z,c))$ one immediately sees that for any $\gamma>1$ \[ \sum_{n,k}\mu_{n,k}^\gamma \simeq \int_{\D} F_\mu^\gamma (z)\, d\nu(z). \]
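For completeness, we sketch the computation behind the last display (our outline of the standard argument, with constants suppressed): for $z\in T_{n,k}$, the box $T_{n,k}$ and the disc $D(z,c)$ are each contained in a bounded pseudohyperbolic enlargement of the other, so iterating the doubling property yields $F_\mu(z)=\mu\bigl(D(z,c)\bigr)\simeq \mu_{n,k}$, uniformly in $n$ and $k$. Since $\nu(T_{n,k})\simeq 1$ for all $n,k$,
\[
\int_{\D} F_\mu^\gamma(z)\, d\nu(z)=\sum_{n,k}\int_{T_{n,k}} F_\mu^\gamma\, d\nu \simeq \sum_{n,k} \mu_{n,k}^{\gamma}\,\nu(T_{n,k}) \simeq \sum_{n,k} \mu_{n,k}^{\gamma}.
\]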
\section{Introduction} The equilibrium properties of interacting quantum systems at finite temperature can be described by the Matsubara formalism of quantum statistical mechanics.\cite{AGD75} In this formalism, single- and two-particle quantities are expressed in terms of Green's functions, self-energies, susceptibilities, and vertex functions in imaginary time. The imaginary time formalism has a long tradition in the calculation of properties of interacting systems,\cite{Bloch58,Luttinger60} and weak coupling methods such as the random phase approximation,\cite{Bohm53,GellMann57} the self-consistent second order approximation,\cite{Dahlen05,Phillips14,Phillips15,doi:10.1021/acs.jctc.5b00884,Kananenka16,Rusakov16,Welden16} or the $GW$ method\cite{Hedin65} can be formulated in terms of imaginary time Green's functions and self-energies. Numerical algorithms, including lattice Monte Carlo methods,\cite{Blankenbecler81} impurity solver algorithms,\cite{Hirsch86,Rubtsov05,Werner06,Gull08_ctaux,Gull11_RMP} and diagrammatic Monte Carlo methods,\cite{Prokofev07} are similarly based on the finite-temperature Green's function formalism, as are some implementations of the dynamical mean field theory\cite{Georges96} and its extensions.\cite{Toschi07,Rubtsov08,Maier05} Finite temperature fermionic (bosonic) imaginary time Green's functions are antiperiodic (periodic) functions with period $\beta$ and can be reduced to the interval $[0,\beta]$. In most of the applications mentioned above, they are sampled on a uniform grid, typically with $10^2$ to $10^4$ grid discretization points. However, a uniform representation of these functions is only efficient for effective model systems. In large multi-orbital systems, and especially in systems with realistic band structures, an accurate representation of Green's functions with a uniform discretization would require millions to billions of time slices per Green's function element, as the wide energy spacing of realistic Hamiltonians results in features on very small time scales. Therefore, more compact representations of Green's functions are needed in this context. A first attempt at constructing a more compact representation, the `uniform power mesh', was proposed by \textcite{Ku00}, see also Ref.~\onlinecite{Ku02}. There, a set of logarithmically spaced nodes is chosen on the imaginary time interval. The Green's function is then uniformly discretized between those nodes, using a constant number of points for each interval. This leads to a clustering of points near $0$ and $\beta$, where much of the rapid change of Green's functions for low-lying excitations takes place. Later, Legendre polynomial representations\cite{Boehnke11} were pioneered in the context of continuous-time Monte Carlo methods, where the compactness of the representation could reduce the number of observables that needed to be accounted for, and in the context of analytical continuation,\cite{Shinaoka17AC} where an intermediate basis of a singular value decomposed analytic continuation kernel\cite{Shinaoka17Basis} could further reduce the number of coefficients. This was followed by progress in the context of perturbation methods for realistic systems,\cite{doi:10.1021/acs.jctc.5b00884} where the combination of uniform power meshes and Legendre polynomial expansions drastically reduced the size of the imaginary time grid.
In Matsubara frequency space, Ref.~\onlinecite{Kananenka16} showed that much of the Matsubara frequency dependence of Green's functions and self-energies can be represented by interpolation functions, thereby vastly reducing the number of frequencies required to obtain accurate results. For practical use in real materials simulations, a set of basis functions for imaginary time Green's functions and self-energies should satisfy at least the following criteria. First and foremost, it should be possible to represent the large energy spread of typical interacting systems with a small number of coefficients. Second, it should be straightforward to confirm that the representation is fully converged, i.e. that basis truncation errors are small. Finally, typical operations on Green's functions, such as evaluating a self-energy, a polarization bubble, or a Dyson equation, or Fourier transforming data to frequency space and evaluating energies, should be straightforward to perform, both analytically and in terms of the numerical effort. The representations mentioned above satisfy some but not all of these requirements. In this paper, we therefore introduce an alternative representation of imaginary time Green's functions, based on approximating the Green's functions by a sum of scaled Chebyshev polynomials of the first kind. We test the performance of this expansion explicitly for a variety of systems in realistic basis sets, including periodic solids. We examine how the number of coefficients converges as a function of temperature, basis set size, and precision required, and we show how Fourier transforms can be performed and Dyson equations solved directly in Chebyshev space. The remainder of this paper is organized as follows. In section \ref{sec:method}, we present the detailed derivation of the method. In section \ref{sec:results}, we list and discuss the numerical results of our method as applied to realistic molecular and solid state calculations. We present conclusions in section \ref{sec:conclusions}. \section{Method}\label{sec:method} \subsection{Chebyshev expansion of response functions} Imaginary time Green's functions $G^{\nu\mu}(\tau)=-\langle c_\nu(\tau)c_\mu^\dagger(0)\rangle$ are defined for $0\leq \tau\leq\beta$, where $\beta$ is the inverse temperature and Greek letters correspond to orbital indices. Outside this interval, fermionic Green's functions satisfy $\beta$ anti-periodicity $G(-\tau)=-G(\beta-\tau)$, whereas bosonic response functions are $\beta$-periodic, $\chi(-\tau)=\chi(\beta - \tau)$. In the following we will work on the interval $[0,\beta]$, and use the mapping $x(\tau)=\frac{2\tau}{\beta}-1$ to map it to the interval $[-1,1]$. The Chebyshev polynomials of the first kind, $T_{j}(x)$,\cite{Mason03} form a complete basis for bounded functions in this interval.
Any Green's function, or other response function, can therefore be expanded into a sum of Chebyshev polynomials and approximated by a truncated Chebyshev series \begin{align} G^{\nu\mu}(x)&\approx\sumplim{j=0}{m} g_j^{\nu\mu}T_j(x)=\sum_{j=0}^{m} g_j^{\nu\mu}T_j(x)-\frac{1}{2}g_0^{\nu\mu}\label{Eq:chebyshev_series},\\ g_j^{\nu\mu}&=\frac{2}{\pi}\int_{-1}^1\frac{G^{\nu\mu}(x)T_j(x)}{\sqrt{1-x^2}}dx.\label{Eq:gdef} \end{align} The primed sum denotes the special treatment of the coefficient $g_0$ customary in this context.\cite{NR93} Based on the discrete orthogonality properties of the Chebyshev polynomials,\cite{NR93} if the values of $G^{\nu\mu}(x)$ are known at the zeros of the $m$th Chebyshev polynomial, $x_{k}=\cos\left(\frac{2k-1}{2m}\pi\right),\ \ k=1,\ldots,m$, the calculation of the coefficients in Eq.~\ref{Eq:gdef} simplifies to a discrete cosine transform. In addition, values of the Chebyshev approximant anywhere in the interval $0\leq\tau\leq\beta$ can be obtained from $g_j^{\nu\mu}$ using the Clenshaw recursion relations\cite{Clenshaw55} (see the sketch at the end of this subsection). Chebyshev representations are particularly efficient for approximating analytic functions on the interval $[-1,1]$, as approximation theory guarantees that the magnitude of the coefficients $g_j^{\nu\mu}$ decays at least exponentially as $j\rightarrow\infty$, and that the maximum difference between $G$ and its Chebyshev approximant decreases exponentially.\cite{Mason03} The fermionic and bosonic imaginary time Green's functions, polarization functions, self-energies, and response functions appearing in finite-temperature many-body theory are all analytic functions between $0$ and $\beta$. As we will show in Sec.~\ref{sec:results}, fast convergence of Green's functions and self-energies with the number of Chebyshev coefficients is observed, and the discrete cosine transforms and recursion relations allow for quick numerical operations on the data in practice. We find that our examples converge to a precision of $10^{-10}$ within about $40$ coefficients for simple realistic systems, such as hydrogen molecules, whereas around $500$ Chebyshev nodes are required to describe a krypton atom in a pseudopotential approximation. \subsection{Convolutions} Products of Matsubara Green's functions correspond to convolutions in imaginary time. The convolution \begin{align} A(t)=\int_0^\beta d\tau B(t-\tau)C(\tau)\label{Eq:conv} \end{align} with $A$, $B$, and $C$ Green's functions or self-energies, requires careful treatment of the discontinuity of $B(t-\tau)$ at $t=\tau$, so that standard Chebyshev convolution formulas \cite{Hale14} cannot be applied. Instead, we express Eq.~\ref{Eq:conv} by expanding the rescaled integral into Chebyshev components (appropriately rescaling the zero coefficients) \begin{align} \sum\nolimits'_ja_j^{\nu\mu} T_j(x)&=\sum_{kl\xi}b_k^{\nu\xi} c_l^{\xi\mu} I_{kl}(x),\\ I_{kl}(x)&=\frac{\beta}{2}\Big[\int_{-1}^x T_k(x-y-1)T_l(y)dy\\&\mp\int_x^1 T_k(x-y+1)T_l(y) dy\Big]\nonumber, \end{align} where the minus (plus) sign corresponds to a convolution of fermionic (bosonic) functions. Being an integral of polynomials, $I_{kl}(x)$ is itself a polynomial on $[-1,1]$ and can therefore be written as a Chebyshev series, \begin{align} I_{kl}(x)=\frac{\beta}{2}\sum\nolimits'_j t^j_{kl} T_j(x), \end{align} resulting in the formulation of the convolution as a matrix multiplication \begin{align} a_j^{\nu\mu}=\frac{\beta}{2}\sum_{kl\xi}b_k^{\nu\xi}c_l^{\xi\mu}t^j_{kl}. \end{align} This representation becomes more efficient than the Fourier representation whenever a very large number of Fourier components is required. A detailed derivation of recursion relations for bosonic and fermionic integrals $t^j_{kl}$ is provided in the appendix.
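To make these manipulations concrete, the following is a minimal Python sketch (our illustration, not the implementation used in this work; it assumes NumPy/SciPy, a scalar single-orbital function, and a precomputed tensor of the coefficients $t^j_{kl}$ from the appendix recursions):
\begin{verbatim}
import numpy as np
from scipy.fft import dct

def fit_chebyshev(g, beta, m):
    # Coefficients g_j of the truncated series, from samples of a
    # vectorized callable g(tau) at the zeros of T_m.
    k = np.arange(m)
    x = np.cos((2 * k + 1) * np.pi / (2 * m))  # zeros of T_m
    samples = g(0.5 * beta * (x + 1.0))        # map x in [-1,1] to tau
    # DCT-II returns 2*sum_k samples[k]*T_j(x[k]); dividing by m gives
    # the Gauss-Chebyshev quadrature of the coefficient integral.
    return dct(samples, type=2) / m

def eval_chebyshev(c, x):
    # Clenshaw evaluation of the primed sum  sum'_j c[j]*T_j(x).
    b1 = b2 = 0.0
    for cj in c[:0:-1]:                        # j = m-1, ..., 1
        b1, b2 = 2.0 * x * b1 - b2 + cj, b1
    return x * b1 - b2 + 0.5 * c[0]

def convolve(b, c, t, beta):
    # a_j = (beta/2) * sum_{k,l} b_k c_l t[j,k,l], t precomputed.
    return 0.5 * beta * np.einsum('k,l,jkl->j', b, c, t)
\end{verbatim}
The same contraction generalizes to the orbital-matrix case by promoting the coefficient arrays to shape $(m, n_{\mathrm{orb}}, n_{\mathrm{orb}})$ and contracting the orbital indices alongside, e.g.\ \texttt{np.einsum('kac,lcb,jkl->jab', b, c, t)}.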
\subsection{Dyson Equation}\label{sec:Dyson} Most diagrammatic algorithms are formulated in imaginary time, where the interaction vertex $V_{pqrs}$ is instantaneous. However, most contain a step for solving a Dyson equation, either for adjusting a chemical potential to the desired particle number or to obtain self-consistent propagators. This Dyson equation $G=G_0+G_0\Sigma G$ is most conveniently expressed in frequency space, where it can be solved for each frequency independently. In imaginary time, the Dyson equation determining $G$, given $G_0$ and $\Sigma$, corresponds to a Fredholm integral equation of the second kind.~\cite{Zemyan12,Sakkinen15} As in the case of the Fourier transforms and convolutions, the discontinuity at zero and the highly non-uniform nature of the Green's functions make uniform discretizations \cite{Dahlen05} inefficient. Defining $B(t)=\int d\tau G_0(t-\tau)\Sigma(\tau)$ and expanding into Chebyshev coefficients, we obtain the equation \begin{align} g_j^{\nu\mu}=g_{(0)j}^{\nu\mu}+\frac{\beta}{2}\sum_{kl\xi}b_k^{\nu\xi}g_l^{\xi\mu}t^j_{kl} \end{align} with $t^j_{kl}$ defined as above. This linear system can be recast as $\sum_{j\xi} A_{ij}^{\nu\xi}g_j^{\xi\mu}=g_{(0)i}^{\nu\mu}$ with a matrix $A_{ij}^{\nu\mu}=\delta_{ij}\delta_{\nu\mu}-\frac{\beta}{2}\sum_k b_k^{\nu\mu}t^i_{kj}$. The solution of the Fredholm integral equation is thereby mapped onto the solution of a system of linear equations, bypassing the Matsubara domain entirely.
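As an illustration of this bypass, a minimal sketch of the resulting linear solve in the scalar (single-orbital) case (again our own illustration, with the tensor $t^i_{kj}$ assumed precomputed):
\begin{verbatim}
import numpy as np

def solve_dyson(g0, b, t, beta):
    # Solve g_i = g0_i + (beta/2) * sum_{k,l} b[k] * g[l] * t[i,k,l],
    # i.e. the linear system A g = g0 with
    # A[i,l] = delta_{il} - (beta/2) * sum_k b[k] * t[i,k,l].
    m = len(g0)
    A = np.eye(m) - 0.5 * beta * np.einsum('k,ikl->il', b, t)
    return np.linalg.solve(A, g0)
\end{verbatim}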
\subsection{Fourier Transforms}\label{sec:Fourier} Fourier transforms between time and frequency domains require a careful treatment of the Green's function around $\tau=0$. At this point, fermionic Green's functions are discontinuous due to the fermion anticommutation relation. This discontinuity is usually absorbed in an explicit treatment of the short time (high frequency) behavior using high frequency expansions and suitable model functions.\cite{Comanac07,Rusakov14,Gull11_RMP} Even with this high-frequency treatment, the number of Matsubara frequencies required for accurate energies and spectra of realistic systems at low temperature is very large. This is because the spacing of the Matsubara points, $2\pi/\beta$, is set by the temperature, whereas the location of the main features of the function is given by the energy scale of the Hamiltonian. In realistic systems, these energy scales can differ by many orders of magnitude, requiring millions to billions of frequency points. Adaptive grids, such as the one developed in Ref.~\onlinecite{Kananenka16}, provide a partial solution to this problem. Fourier transforms of Chebyshev polynomials to Matsubara frequencies $\omega_n=\frac{(2n+\zeta)\pi}{\beta}$ ($\zeta=0$ for bosons and $\zeta=1$ for fermions) are obtained by evaluating the integral\cite{Fokas12} \begin{align}\label{Eq:ftdef} \mathcal{F}(T_{m})(i\omega_n) &=\int_{0}^{\beta}d\tau T_{m}(x(\tau))e^{i \omega_n \tau} \nonumber \\ & = \frac{\beta}{2}\int_{-1}^{1}dx T_{m}(x)e^{i\lambda_n \frac{x+1}{2}} = F^\zeta_{mn} \end{align} for dimensionless Matsubara frequencies $\lambda_n=(2n+\zeta)\pi$. In the special case of bosonic Matsubara frequency zero, we find that \begin{equation} F^0_{m0} = \frac{\beta}{2}\int_{-1}^1dx T_m(x) = \frac{\beta}{2} \begin{cases} \frac{1+(-1)^m}{1 - m^2}, &m\neq1,\\ 0, &m=1. \end{cases} \end{equation} For all non-zero $\lambda_n$, partial integration yields \begin{align}\label{eq:Imn} I^\zeta_{m}(n)&=\int_{-1}^{1}dxT_{m}(x)e^{i\lambda_n\frac{x+1}{2}} =\left.\frac{2}{i\lambda_n}e^{i\lambda_n\frac{x+1}{2}}T_{m}(x) \right|_{-1}^{1} \nonumber\\ &\quad-\frac{2}{i\lambda_n}\int_{-1}^{1}dxT_{m}'(x)e^{i\lambda_n\frac{x+1}{2}}, \end{align} where the boundary term evaluates to \begin{align} \left.\frac{2}{i\lambda_n}e^{i\lambda_n\frac{x+1}{2}}T_{m}(x)\right|_{-1}^{1}= 2i\frac{(-1)^{m} - (-1)^\zeta}{\lambda_n}. \end{align} Using $T_{m}'(x)=mU_{m-1}(x),$ where $U_m(x)$ are Chebyshev polynomials of the second kind related to $T_m(x)$ via \begin{align} U_{m}(x)=\begin{cases} 2\left(\sum_{j\ \text{odd}}^{m}T_{j}(x)\right) & m\ \text{odd},\\ 2\left(\sum_{j\ \text{even}}^{m}T_{j}(x)\right)-1 & m\ \text{even}, \end{cases} \end{align} we transform the second integral in (\ref{eq:Imn}) as \begin{align} &\frac{2}{i\lambda_n}\int_{-1}^{1}dxT_{m}'(x)e^{i\lambda_n\frac{x+1}{2}}= \frac{2m}{i\lambda_n}\int_{-1}^{1}dx\, U_{m-1}(x)e^{i\lambda_n \frac{x+1}{2}}\nonumber \\ &=\frac{2m}{i\lambda_n}\int_{-1}^{1}dx e^{i\lambda_n \frac{x+1}{2}} \begin{cases} 2\sum_{j,\text{odd}}^{m-1} T_j(x), &m\ \text{even},\\ 2\sum_{j,\text{even}}^{m-1} T_j(x) - 1, &m\ \text{odd}. \end{cases} \end{align} This results in a recursion relation with respect to index $m$, \begin{align} I^\zeta_m(n) &= 2i\frac{(-1)^{m}-(-1)^\zeta}{\lambda_n} \nonumber\\ &\quad+\frac{2im}{\lambda_n} \begin{cases} 2\left(\sum_{j\ \text{odd}}^{m-1}I^\zeta_{j}(n)\right), & m\ \text{even},\\ 2\left(\sum_{j\ \text{even}}^{m-1}I^\zeta_{j}(n)\right) -I^\zeta_0(n), & m\ \text{odd}, \end{cases} \end{align} which we start by explicitly computing $I^0_0(n) = 2\delta_{0,n}$ or $I^1_0(n) = 4i/\lambda_n$. This recursion relation is unstable\cite{doi:10.1093/imanum/drq036} and therefore has to be implemented in high precision arithmetic (a sketch is given at the end of this subsection). With Eq.~\ref{Eq:ftdef} we then write the Fourier transform as \begin{align}\label{eq:FTMatMul} \mathcal{F}(G)(i\omega_{n})=\sum_{j}g_{j}F^\zeta_{jn}, \end{align} where the Fourier matrix $F^\zeta_{jn}$ is computed once and tabulated. The inverse transform is done by evaluating the function at the Chebyshev nodes and using a discrete cosine transform to obtain the corresponding coefficients. Accurate Fourier transforms and energy evaluations in Fourier space require high frequency expansion coefficients for the Green's function to at least third order, so that $G(i\omega_n)=\frac{c_1}{i\omega_n}+\frac{c_2}{(i\omega_n)^2}+\frac{c_3}{(i\omega_n)^3}+O((i\omega_n)^{-4})$. For fermionic Green's functions, the Fourier transform implies that $c_1= - (G(0^+) + G(\beta^-))$; $c_2=G'(0^+)+G'(\beta^-)$; and in general $c_{k+1}=(-1)^{k+1}(G^{(k)}(0^+)+G^{(k)}(\beta^-))$. These expansion coefficients are available to any order due to the identities \begin{align} \left.\frac{d^{p}T_{n}}{dx^{p}}\right|_{x=\pm1}=(\pm1)^{n+p}\prod_{k=0}^{p-1}\frac{n^{2}-k^{2}}{2k+1}, \end{align} and in particular \begin{align} \left.\frac{dT_{n}}{dx}\right|_{x=\pm1}&=(\pm1)^{n}n^{2},\\ \left.\frac{d^{2}T_{n}}{dx^{2}}\right|_{x=\pm1}&=(\pm1)^{n}\frac{n^{4}-n^{2}}{3}. \end{align}
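To close this subsection, a minimal sketch of the unstable recursion for $I^\zeta_m(n)$ in extended-precision arithmetic, using the mpmath library (our own illustration; the precision setting is a placeholder and should be adapted to the number of coefficients):
\begin{verbatim}
from mpmath import mp, mpc, pi

mp.dps = 60  # extended precision; the recursion is unstable

def fourier_column(m_max, n, zeta):
    # I^zeta_m(n) for m = 0..m_max at fixed nonzero lambda_n; the
    # bosonic n = 0 case is covered by the closed formula for F^0_{m0}.
    lam = (2 * n + zeta) * pi
    I = [mpc(0)] * (m_max + 1)
    I[0] = mpc(0, 4) / lam if zeta == 1 else mpc(0)  # I^0_0 = 2*delta_{0,n}
    odd_sum, even_sum = mpc(0), I[0]   # running sums over previous I_j
    for m in range(1, m_max + 1):
        boundary = mpc(0, 2) * ((-1) ** m - (-1) ** zeta) / lam
        if m % 2 == 0:   # even m: twice the sum over odd j < m
            I[m] = boundary + mpc(0, 2 * m) / lam * (2 * odd_sum)
            even_sum += I[m]
        else:            # odd m: twice the sum over even j < m, minus I_0
            I[m] = boundary + mpc(0, 2 * m) / lam * (2 * even_sum - I[0])
            odd_sum += I[m]
    return I  # Fourier matrix entries: F^zeta_{mn} = (beta/2) * I[m]
\end{verbatim}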
\begin{figure}[bth] \includegraphics[width=\columnwidth]{Fig1_1.pdf} \includegraphics[width=\columnwidth]{Fig1_2.pdf} \caption{Convergence of the Hartree-Fock Green's function with the number of Chebyshev polynomials. Red curves correspond to the sum of all differences with respect to the exact result. Blue curves correspond to the maximum difference when compared to the exact result. Top panel: H$_2$ molecule (open square) and H$_{10}$ molecule (open circle). Bottom panel: periodic one-dimensional LiH (open diamond) and three-dimensional Si crystal (filled circle). Parameters as specified in Table~\ref{tab:Parameters}.} \label{fig:ChebDiff} \end{figure} \section{Results}\label{sec:results} To demonstrate the efficiency of the proposed method we consider four test systems. In order to make the results reproducible, we use electronic structure systems in standardized basis sets that are well documented\cite{szabo1996modern} and readily available.\cite{FellerBSE,SchuchardtBSE} The first two systems are hydrogen-based: the H$_2$ molecule and a one-dimensional chain of ten hydrogen atoms. We use the minimal STO-3g basis\cite{Hehre69} and place the atoms at an inter-atomic distance of $d=1.5$\AA{}. These cases are chosen as `easy' examples of realistic calculations. We also consider two periodic systems. First, a one-dimensional periodic LiH solid in the triple-zeta quality basis set (pob-TZVP) from Ref.~\onlinecite{doi:10.1002/jcc.23153} and, second, a three-dimensional Si crystal in the following basis set: the innermost 1s, 2s, and 2p shells are replaced with the LANL2DZ effective core potentials,\cite{doi:10.1063/1.448799,doi:10.1063/1.448800,doi:10.1063/1.448975} while the basis functions for the outer 3s, 3p, and 3d shells are taken from the Si\_88-31G*\_nada\_1996 basis.\cite{doi:10.1002/SICI,doi:10.1021/jp002353q,Prencipe2006} All systems were evaluated at an inverse temperature of $\beta=100$Ha$^{-1}$. The detailed parameters are shown in Table~\ref{tab:Parameters}. \begingroup \squeezetable \begin{table}[tbh] \begin{ruledtabular} \begin{center} \begin{tabular}{ l | c | c | c } \multicolumn{4}{c}{Molecular systems}\\ \hline & Basis & \multicolumn{2}{c}{inter-atomic distance, \AA{}} \\ \hline H$_{2}$& STO-3g& \multicolumn{2}{c}{1.5000} \\ \hline H$_{10}$& STO-3g& \multicolumn{2}{c}{1.5000} \\ \hline\hline \multicolumn{4}{c}{Periodic systems}\\ \hline & Basis & Unit cell coordinates, \AA{} & Translation vectors, \AA{} \\ \hline LiH& pob-TZVP& \begin{tabular}{@{}c@{}}Li 0.0 0.0 0.0\\H 1.671286 0.0 0.0 \end{tabular} & (3.342572, 0.0, 0.0) \\ \hline Si& Custom, see text& \begin{tabular}{@{}c@{}}Si 0.0000 0.0000 0.0000\\Si 1.3575 1.3575 1.3575\end{tabular} & \begin{tabular}{@{}c@{}} (0.0, 2.7150, 2.7150)\\ (2.7150, 0.0, 2.7150)\\ (2.7150, 2.7150, 0.0) \end{tabular}\\ \end{tabular} \end{center} \end{ruledtabular} \caption{\label{tab:Parameters}Geometries and basis sets for the systems used. All systems were evaluated at an inverse temperature of $\beta=100$Ha$^{-1}$.} \end{table} \endgroup The exponential convergence predicted by theory can be observed in practice. Fig.~\ref{fig:ChebDiff} shows the convergence of the Chebyshev Green's function to the exactly evaluated Hartree-Fock solution as a function of the number of coefficients.
Shown is the difference $\Delta$ between the interpolated Green's function and a reference Green's function evaluated analytically on a uniform-power grid (with 15 power points and 18 uniform points between each pair of power points) as a function of the number of coefficients, both as the sum of all deviations at all points (red curves) and as the maximum deviation (blue curves). It is evident that the Chebyshev approximation converges to the exactly evaluated Hartree-Fock solution as a function of the number of expansion coefficients, until numerical roundoff errors are reached at a precision of $10^{-12}$. For the hydrogen systems used here, between $30$ and $40$ coefficients lead to a maximum uncertainty of around $10^{-10}$. More complex systems require more coefficients, as illustrated by the one-dimensional LiH and three-dimensional Si, which only reach a precision of $10^{-8}$ within $180$ and $230$ coefficients, respectively (lower panel of Fig.~\ref{fig:ChebDiff}). This convergence stems from the fast decay of the Chebyshev expansion coefficients. Fig.~\ref{fig:ChebCoeff} illustrates this point by showing the maximum magnitude of the Chebyshev coefficients of a given polynomial order as a function of order. There, the noise floor is reached for around $40$ (H$_{2}$) or $80$ (H$_{10}$) coefficients, at a coefficient size of around $10^{-16}$. \begin{figure}[bth] \includegraphics[width=\columnwidth]{Fig2.pdf} \caption{Exponential decay of the Chebyshev coefficients for the Hartree-Fock Green's function. H$_2$ molecule (green) and H$_{10}$ molecule (red, dashed) with parameters as specified in Table~\ref{tab:Parameters}. }\label{fig:ChebCoeff} \end{figure} \begin{figure}[bth] \includegraphics[width=\columnwidth]{Fig3_1.pdf} \includegraphics[width=\columnwidth]{Fig3_2.pdf} \caption{Number of Chebyshev coefficients required to resolve the Hartree-Fock Green's function (top panel) and the bare second order self-energy (bottom panel) at temperature $T$ measured in Hartree up to the precision indicated, for one-dimensional H$_{10}$ and one-dimensional periodic LiH. Parameters as specified in Table~\ref{tab:Parameters}. }\label{fig:ConvWithJT} \end{figure} Fig.~\ref{fig:ConvWithJT} shows the number of Chebyshev coefficients required to reach a predetermined precision as temperature is varied. The top panel analyzes the Hartree-Fock Green's function as the temperature is lowered; the bottom panel analyzes the second order self-energy. The systems used are a linear chain of ten hydrogen atoms in the STO-3g basis and a periodic one-dimensional arrangement of LiH, both systems with parameters chosen as in Table~\ref{tab:Parameters}.
For the systems illustrated here, the log-log axes suggest power-law behavior, and a square-root fit shows that the number of required coefficients grows more slowly than $\sim T^{-1/2}$ as the temperature is lowered, similar to observations in the context of model calculations in a Legendre basis.\cite{2018arXiv180307257C} \begin{figure}[tbh] \includegraphics[width=\columnwidth]{Fig4_1.pdf} \includegraphics[width=\columnwidth]{Fig4_2.pdf} \caption{Total (red) and maximum (blue) difference between the Chebyshev Green's function and the exact Hartree-Fock Green's function as a function of the number of Chebyshev coefficients, for a Krypton atom in four different basis sets\cite{doi:10.1063/1.1622924,doi:10.1063/1.1622923,SchuchardtBSE} without (top panel) and with (bottom panel) effective core potentials.}\label{fig:BasisDependence} \end{figure} The number of Chebyshev points required is strongly system dependent, and depends in particular on the energy spread of the atomic orbitals. This is illustrated in the top panel of Fig.~\ref{fig:BasisDependence}, which shows the maximum and total difference between an exactly evaluated Hartree-Fock Green's function and its Chebyshev approximant for a Kr atom as a function of the number of coefficients. One can see that the maximum error remains $\sim 10^{-4}$ even when $700$ coefficients are used, independent of the basis. This slow convergence is due to low-lying core states, which are fully occupied and, in the $\tau$-domain, are represented by an exponential decay towards zero with a large decay constant. Alternatively, choosing effective core potentials (lower panel of Fig.~\ref{fig:BasisDependence}) eliminates these low-lying states and causes a more rapid convergence of the polynomial expansion with the number of coefficients (maximum difference of $10^{-9}$ at $\sim 450$ coefficients). Whether a different treatment of the core states, for example by analytically modeling them with a delta function in real frequency, is more effective than a brute force expansion into Chebyshev polynomials is an open question for future research. \begin{figure}[tbh] \includegraphics[width=\columnwidth]{Fig5.pdf} \caption{Convergence of the power grid interpolation of a Hartree-Fock Green's function with the total number of points in the grid for the periodic one-dimensional LiH solid. $p$ denotes the power discretization of the grid, parameters as specified in Table~\ref{tab:Parameters}.}\label{fig:PowerGridDiff} \end{figure} In order to contrast these results with the commonly used uniform power grids, Fig.~\ref{fig:PowerGridDiff} shows the convergence of power grid data to the exact result for the periodic one-dimensional LiH solid of Fig.~\ref{fig:ChebDiff} (lower panel). Data on the power grid is interpolated using cubic splines. This leads to an error decreasing as $\sim u^{4}$ with the uniform spacing $u$. It is evident that there is an `optimal' number of power points ($12$, in this case) which minimizes the prefactor of the convergence to the exact result (but does not change the scaling). For results accurate to $10^{-8}$, $350$ $\tau$-points were necessary for the optimal choice of power grid parameters, around twice as many as for the Chebyshev grid. \begin{figure}[tbh] \includegraphics[width=\columnwidth]{Fig6.pdf} \caption{Convergence of the Dyson equation solution for the H$_2$ molecule in discretized imaginary time using the method proposed in Ref.~\onlinecite{Stan09}.
$p$ denotes the number of power points of the grid and parameters are as specified in Table~\ref{tab:Parameters}.} \label{fig:DiscreteDysonDiff} \end{figure} The full power of the Chebyshev representation becomes apparent when Fourier integrals or a solution of the Dyson equation have to be computed for data known in the imaginary time domain. Fig.~\ref{fig:DiscreteDysonDiff} shows the convergence of the solution of the Dyson equation using trapezoidal integration in imaginary time, as originally proposed in Ref.~\onlinecite{Stan09}. The system studied is H$_2$ in the STO-3g basis; discrete imaginary time points are defined on a power grid with different numbers of power points. The precision of this method is limited by the convergence rate of the trapezoidal integration of the uniform part, which is only quadratic, such that even with $500$ time slices, only a precision of roughly $10^{-4}$ can be achieved. In contrast, Fig.~\ref{fig:ChebCollocationDiff} shows that with the method introduced in this paper, convergence is faster than exponential. Using around $50$ coefficients, a precision close to $10^{-10}$ can be reached. Similar behavior is obtained whenever Fourier integrals need to be computed from a uniform-power grid (not shown here). Data is usually first interpolated by a spline onto a uniform grid and then Fourier transformed to Matsubara frequencies using a fast Fourier transform. The convergence of the spline to the exact result is the leading contribution to the error of the transform and leads to inaccuracies in the intermediate-to-high frequency region. In contrast, the closed form of Eq.~\ref{eq:FTMatMul} avoids this interpolation step entirely. Finally, we summarize the different aspects of basis functions for imaginary time Green's functions in Tab.~\ref{tab:aspects}. The comparison is subjective by nature, and the suitability of a given basis set will very much depend on the application. We list the compactness (or suitability for large realistic systems), the cost of constructing the basis set (cheap or expensive), the ways of evaluating arbitrary imaginary-time points (via interpolation, recursion, analytic continuation formula, fast Fourier transform, or non-equidistant FFT), the ways of evaluating Matsubara points, and the preferred (or so far tested) ways of solving the Dyson equation. \begin{table}[tbh] \begin{center} \begin{tabular}{ | l || l | l | l | l | l |} \hline \hline Basis & Comp. & Const. & Imag & Mat & Dyson \\ \hline\hline\hline Uniform & No & cheap & interp. & FFT & Fourier\\ \hline Power & No & cheap & interp. & Fourier & Fourier\\ \hline\hline Chebyshev & Yes & cheap & Recursion & Sec.~\ref{sec:Fourier} & Sec.~\ref{sec:Dyson}\\ \hline Legendre & $\sim$Yes & cheap & Recursion & Ref.~\onlinecite{Boehnke11} & Fourier\\ \hline\hline IR \onlinecite{Shinaoka17Basis} & Very& exp. & AC & AC & AC\\ \hline\hline Matsubara & No& cheap & FFT & - & diag.\\ \hline Spline \onlinecite{doi:10.1021/acs.jctc.5b00884}& $\sim$Yes& exp. & NFFT & interp. &diag.\\ \hline \hline \end{tabular} \end{center} \caption{\label{tab:aspects}Subjective comparison of different aspects of the various basis sets for finite-T Green's functions. The basis sets vastly differ in their Compactness (Comp.), basis construction effort (Const.), way of evaluating imaginary time (Imag.) or Matsubara (Mat.)
Green's function values, and ways of solving the Dyson equation (via Fourier to Matsubara space, where the equation is diagonal, or as described in the main text).} \end{table} \begin{figure}[tbh] \includegraphics[width=\columnwidth]{Fig7.pdf} \caption{Convergence of the Dyson equation solution for H$_2$ and H$_{10}$ with the number of Chebyshev polynomials with parameters specified in Table~\ref{tab:Parameters}. Red lines: Sum of errors. Blue lines: Maximum error. Squares: H$_2$. Circles: H$_{10}$.}\label{fig:ChebCollocationDiff} \end{figure} \section{Conclusions}\label{sec:conclusions} In conclusion, we have explored the use of an orthogonal polynomial basis for imaginary time Green's functions in the context of realistic materials. We have observed in practice the exponential convergence guaranteed by the analytic nature of Green's functions, and shown that for typical systems, substantially fewer imaginary time points are needed than for a uniform-power grid. The convergence rate of the expansion depends on the system. While low-lying core states present a difficulty for this basis and lead to a slow convergence of the expansion, the complex spectral behavior near the chemical potential is well captured by the first $50$-$100$ coefficients in the systems examined here. We have also shown that convolutions, Dyson equations, and Fourier transforms, which correspond to commonly used operations on imaginary time Green's functions, can be performed accurately and efficiently. This paves the way for using Chebyshev-approximated imaginary time Green's functions in calculations of realistic and model systems, replacing the uniform and uniform-power grids that have so far been used in this context. \acknowledgments{ EG and SI were supported by the Simons Foundation via the Simons Collaboration on the Many-Electron Problem. IK was supported by DOE ER 46932, DZ and AAR by NSF-CHE-1453894. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. } \bibliographystyle{apsrev4-1}
\section{Introduction} Images captured in the wild are often degraded in visibility, colors, and contrast by haze, fog and smoke. Recovering high-quality clear images from degraded images (a.k.a. image dehazing) is beneficial for both low-level image processing and high-level computer vision tasks. For image processing tasks, dehazed images are more visually appealing outputs. Dehazed images can also improve the robustness of vision systems that often assume clear images as input. Typical applications that benefit from image dehazing include image super-resolution, visual surveillance, and autonomous driving. Image dehazing is highly desired because of the increasing demand for deploying visual systems in real-world applications. Image dehazing is a challenging problem. The effect of haze is caused by atmospheric absorption and scattering that depend on the distance of the scene points from the camera. In computer vision, the hazy image is often described by a simplified physical model, i.e., the atmospheric scattering model \cite{mccartney1976optics,narasimhan2002vision,he2011single,li2017reside}, \begin{equation} I(x) = J(x)t(x) + A(1-t(x)), \label{eq:phy} \end{equation} where $I(x)$ is the observed hazy image, $J(x)$ is the scene radiance (clear image), $t(x)$ is the medium transmission map, and $A$ is the global atmospheric light. When the atmosphere is homogeneous, $t(x)$ can be further expressed as a function of the scene depth $d(x)$ and the scattering coefficient $\beta$ of the atmosphere as $t(x) = \exp(-\beta d(x))$. The goal of image dehazing is to recover the clear image $J(x)$ from the hazy image $I(x)$. Single image dehazing is particularly challenging. It is under-constrained because haze depends on many factors, including the unknown depth information that is difficult to recover from a single image. The atmospheric scattering model \eqref{eq:phy} has been extensively used in previous methods for single image dehazing \cite{fattal2008single,tan2008visibility,tarel2009fast,he2011single,meng2013efficient,fattal2014dehazing,berman2016non,chen2016robust}. These works either separately or jointly estimate the transmission map $t(x)$ and the atmospheric light $A$ to generate the clear image from a hazy image. Due to the under-constrained nature of single image dehazing, the success of previous methods often relies on hand-crafted priors such as the dark channel prior \cite{he2011single}, color-lines \cite{fattal2014dehazing}, the color attenuation prior \cite{zhu2015fast}, and the non-local prior \cite{berman2016non}. However, it is difficult for these priors to be always satisfied in practice. For example, the dark channel prior is known to be unreliable for areas that are similar to the atmospheric light. More recent works learn convolutional neural networks (CNNs) to estimate components of the atmospheric scattering model for image dehazing \cite{cai2016dehazenet,ren2016single,li2017aod,li2018cascaded,zhang2018densely,yang2018towards}. These methods are often trained with limited (synthetic) images, and use only a few layers of convolutional filters. The learned shallow networks have limited capacity to represent or process images, making it difficult for them to surpass the prior-based methods. In contrast, training deep neural networks with large-scale data has made significant progress and achieved state-of-the-art performance in many vision tasks \cite{krizhevsky2012imagenet,simonyan2014very,he2016deep}.
Moreover, the deep features extracted by a pre-trained deep network are used as powerful image representations in many applications, such as domain invariant recognition \cite{donahue2014decaf}, perceptual evaluation \cite{zhang2018unreasonable}, and characterizing image statistics \cite{gatys2016image}. More recently, the architecture of CNNs itself has been recognized as a prior for image processing \cite{ulyanov2017deep}. In this paper, we study how to release the power of \emph{deep} networks for single image dehazing. We propose an encoder-decoder architecture as an end-to-end system for single image dehazing. We exploit the representation power of deep features by adopting the convolutional layers of the deep VGG net \cite{simonyan2014very} as our encoder, and pre-train the encoder on a large-scale image classification task \cite{russakovsky2015imagenet}. We add skip connections with instance normalization between the encoder and decoder, and then train the decoder with both an $\ell_2$ reconstruction loss and a VGG perceptual loss \cite{zhang2018unreasonable}. We show that the recently proposed instance normalization \cite{ulyanov2016instance}, which was designed for image style transfer, is also effective in image dehazing. The proposed method effectively learns the statistics of clear images based on the deep feature representation, which benefits the dehazing process on the input image. Our approach outperforms the state-of-the-art results by a large margin on a recently released benchmark dataset \cite{li2017reside}, and performs surprisingly well in several cross-domain experiments. Our method depends on neither the explicit atmospheric scattering model nor hand-crafted image priors, and only exploits the deep network architecture and pre-trained models to tackle the under-constrained single image dehazing problem. Our simple yet effective network can serve as a strong baseline for future study in this topic. \section{Related work} \label{sec:related} Traditional methods focus on representing human knowledge as priors for image processing. \citet{tan2008visibility} assumes higher contrast of clear images and proposes a patch-based contrast-maximization method. \citet{fattal2008single} assumes the transmission and surface shading are locally uncorrelated, and estimates the albedo of the scene. The dark channel prior (DCP)~\cite{he2011single} assumes local patches contain low intensity pixels in at least one color channel and hence estimates the transmission map. Fast visibility restoration (FVR)~\cite{tarel2009fast} is a filtering approach based on atmospheric veil inference and corner preserving smoothing. \citet{meng2013efficient} uses boundary constraints and contextual regularization (BCCR), and \citet{chen2016robust} uses gradient residual minimization (GRM) to suppress artifacts. \citet{tang2014investigating} combines priors by learning with a random forest model. The color attenuation prior (CAP) \cite{zhu2015fast} assumes a linear model of brightness and saturation and then learns the coefficients. \citet{berman2016non} assumes each color cluster in the clear image becomes a line in RGB space, and proposes non-local image dehazing (NLD). There is an increasing interest in applying convolutional neural networks (CNNs) to image dehazing. DehazeNet~\cite{cai2016dehazenet} and multi-scale convolutional neural networks (MSCNN) \cite{ren2016single} are trained to estimate the transmission map.
AOD-Net~\cite{li2017aod} estimates a new variable based on a transformation of the atmospheric scattering model. \citet{zhang2018densely} and \citet{li2018cascaded} estimate the transmission map and atmospheric light by separate CNNs. \citet{yang2018towards} adversarially train generators for the components of the atmospheric scattering model. These methods use relatively small CNNs and do not exploit pre-trained deep networks for image representation. A few days before our submission, we noticed a preprint \cite{cheng2018semantic} that also uses pre-trained deep networks. The proposed method is quite different from \cite{cheng2018semantic}: we use an encoder-decoder with skip connections, while \citet{cheng2018semantic} only use feature maps extracted from one layer of the pre-trained network as input; we study instance normalization and demonstrate its effectiveness; we train an end-to-end system from hazy image to clear image, while \citet{cheng2018semantic} estimate the transmission map and atmospheric light; we can generate impressive results without explicitly applying the atmospheric scattering model. Deep neural networks can be used as ``priors'' for image generation and image processing. The architecture of CNNs itself can be a constraint for image processing \cite{ulyanov2017deep} and image generation \cite{kingma2013auto,goodfellow2014generative}. Pre-trained deep networks can be used as general-purpose feature extractors \cite{donahue2014decaf} and as a perceptual metric \cite{zhang2018unreasonable}. The second-order information of the features extracted by a pre-trained network describes the style of images \cite{gatys2016image}. Instance normalization layers that effectively change the statistics of deep features are widely used for image style transfer \cite{ulyanov2016instance,dumoulin2016learned,ghiasi2017exploring,huang2017arbitrary}. \begin{figure}[t] \centerline{ \includegraphics[width=0.85\linewidth]{fig_net.pdf} } \vspace{0.2cm} \caption{ The proposed network: encoder-decoder with skip connections and instance normalization (IN); convolutional layers of the pre-trained VGG~\cite{simonyan2014very} are used as the encoder; an $\ell_2$ reconstruction loss and a VGG perceptual loss are used for training the decoder and IN layers.} \label{fig:net} \end{figure} \section{VGG-based U-Net with instance normalization} We propose an end-to-end encoder-decoder network architecture for single image dehazing, as shown in \cref{fig:net}. The input is a hazy image, and the output is the desired clear image. We introduce the different components of the network in the following paragraphs of this section. \textbf{Encoder. } Our encoder uses the convolutional layers of the VGG net \cite{simonyan2014very} pre-trained on the Imagenet large-scale image classification task \cite{russakovsky2015imagenet}. The VGG net contains five blocks of convolutional layers, and we use the first three blocks and the first convolutional layer of the fourth block. Each block contains several convolutional layers, and each convolutional layer is equipped with ReLU \cite{krizhevsky2012imagenet} as its activation function. The width (number of channels) and size (height and width) of the convolutional layers are shown in \cref{fig:net}. There is a maxpooling layer of stride two between blocks, which enlarges the receptive field of higher layers. The width of the convolutional layers is doubled after the subsampling of feature maps by maxpooling.
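To make the encoder wiring concrete, the following is a minimal PyTorch sketch (an illustration added here, not the authors' released code) of a frozen VGG feature extractor that records the activations feeding the skip connections. It assumes the VGG-16 variant as laid out in torchvision; the layer indices marking the block boundaries and skip taps are illustrative assumptions.
\begin{verbatim}
import torch
import torchvision

class VGGEncoder(torch.nn.Module):
    """Frozen VGG-16 feature extractor with block-level skip taps."""

    def __init__(self):
        super().__init__()
        # Layers up to and including conv4_1 + ReLU in torchvision's
        # VGG-16 layout (indices 0..18); an illustrative assumption.
        self.features = torchvision.models.vgg16(pretrained=True).features[:19]
        # Indices of the ReLU following the first conv layer of
        # blocks 1, 2, 3, whose outputs feed the skip connections.
        self.skip_ids = {1, 6, 11}
        for p in self.parameters():
            p.requires_grad = False  # the encoder stays fixed during training

    def forward(self, x):
        skips = []
        for i, layer in enumerate(self.features):
            x = layer(x)
            if i in self.skip_ids:
                skips.append(x)
        return x, skips  # bottleneck features and skip activations
\end{verbatim}
Freezing the parameters reflects the design choice discussed below: the pre-trained features act as a fixed ``prior'', and only the decoder and normalization layers are trained.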
The pre-trained VGG net is a powerful feature extractor for perceptual metrics \cite{zhang2018unreasonable} and image statistics \cite{gatys2016image}. Our encoder is deep and wide, and the extracted deep features are capable of capturing the semantic information of the input image. We fix the encoder during training to exploit the power of the pre-trained VGG net as a ``prior'', and to avoid overfitting to the relatively small number of samples in the image dehazing dataset. \textbf{Decoder and skip connection. } Our decoder is designed to be roughly symmetric to the encoder. The decoder also contains four blocks, and each block contains several convolutional layers. The last layer of each of the first three blocks of the decoder is a transposed convolutional layer that upsamples the feature maps. We use ReLU activations for the convolutional and transposed convolutional layers except for the last layer, where we use Tanh as the activation function. We add skip connections from the output of the first convolutional layer of encoder blocks 1, 2, 3 to the input of decoder blocks 4, 3, 2 by concatenating (cat) the feature maps, respectively. Hence our deep encoder-decoder network has a U-Net \cite{ronneberger2015u,isola2016image} structure, except that our skip connections are based on blocks instead of layers. We use trainable instance normalization for the skip connections, and have instance normalization before each convolutional layer in the decoder except the first one. Our deep encoder-decoder network has large capacity, and the skip connections let information flow smoothly, which makes the large network easy to train. \textbf{Instance normalization. } We briefly review instance normalization \cite{ulyanov2016instance}, and discuss our motivation for applying instance normalization to single image dehazing. Let $x\in {\mathbb R}^{N\times C\times H\times W}$ represent the feature map of a convolutional layer from a minibatch of samples, where $N$ is the batch size, $C$ is the width of the layer (number of channels), and $H$ and $W$ are the height and width of the feature map. $x_{nchw}$ denotes the element at height $h$, width $w$ of the $c$th channel from the $n$th sample, and the instance normalization layer can be written as \begin{equation} \begin{split} IN(x_{nchw}) & = \gamma_{nc} \left( \frac{x_{nchw} - \mu_{nc}}{\sigma_{nc}} \right) + \beta_{nc}, \text{ where } \\ \mu_{nc} & = \frac{1}{HW} \sum_{h=1}^{H} \sum_{w=1}^{W} x_{nchw}, \ \sigma_{nc} = \sqrt{\frac{1}{HW} \sum_{h=1}^{H} \sum_{w=1}^{W} (x_{nchw}-\mu_{nc})^2 + \epsilon}, \end{split} \end{equation} $\gamma_{nc}, \beta_{nc}$ are learnable affine parameters, $\epsilon$ is a very small constant, and $\mu_{nc}, \sigma_{nc}^2$ represent the mean and variance of each feature map per channel per sample. If we replace the instance-level variables $\gamma_{nc}, \beta_{nc}, \mu_{nc}, \sigma_{nc}^2$ with batch-level variables $\gamma_{c}, \beta_{c}, \mu_{c}, \sigma_{c}^2$ that are estimated over all samples of a minibatch, we get the well-known batch normalization layer \cite{ioffe2015batch}. We show in our experimental ablation study that instance normalization is preferable to batch normalization for single image dehazing. The learnable affine parameters $\gamma_{nc}, \beta_{nc}$ of instance normalization shift the first and second order statistics (mean and variance) of the feature maps. Instance normalization is effective for image style transfer, and the style of images can be represented by the learned affine parameters \cite{dumoulin2016learned}.
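As a concrete reference for the normalization formula above, here is a minimal sketch (an added illustration) of per-sample, per-channel instance normalization with learnable affine parameters; in practice one would use a library layer such as torch.nn.InstanceNorm2d, whose affine parameters are shared across samples and have shape $(C,)$ rather than the per-sample $(N,C)$ indexing of the text.
\begin{verbatim}
import torch

def instance_norm(x, gamma, beta, eps=1e-5):
    # x: feature maps of shape (N, C, H, W);
    # gamma, beta: affine parameters of shape (N, C), following the
    # per-sample indexing used in the equation above.
    mu = x.mean(dim=(2, 3), keepdim=True)                  # mean per sample, per channel
    var = x.var(dim=(2, 3), unbiased=False, keepdim=True)  # variance per sample, per channel
    x_hat = (x - mu) / torch.sqrt(var + eps)
    return gamma[..., None, None] * x_hat + beta[..., None, None]

# Usage: normalize a minibatch of 16 feature maps with 64 channels.
x = torch.randn(16, 64, 32, 32)
gamma, beta = torch.ones(16, 64), torch.zeros(16, 64)
y = instance_norm(x, gamma, beta)
\end{verbatim}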
Shifting the statistics of deep features extracted by pre-trained networks has achieved impressive results for arbitrary style transfer \cite{huang2017arbitrary}. Shifting the statistics of images is intuitive for dehazing; however, it can be difficult to decide the exact amount to change because haze depends on the unknown depth. The deep features extracted by a pre-trained VGG net contain semantic information from which the depth relevant for haze can be inferred, and hence the learned affine parameters can effectively shift the statistics of the images. We apply instance normalization to the deep features extracted by the pre-trained VGG net for single image dehazing. \textbf{Training loss. } Our network is trained with both a reconstruction loss and a VGG perceptual loss. Denoting the training pairs of hazy image and clear image as $(I_n, T_n), n=1,\ldots, N$, we use the mean squared loss, \begin{equation} \min_{F} \frac{1}{N} \sum_{n=1}^{N} \| F(I_n) - T_n \|^2 + \lambda \| g(F(I_n)) - g(T_n) \|^2, \label{eq:loss} \end{equation} where $F$ represents the trainable instance normalization and decoder layers in our network, $g$ represents the perceptual function, and $\lambda$ is a hyperparameter. We set $\lambda=1$, and use the features extracted by the first convolutional layer of the third block of the pre-trained VGG net as the perceptual function. \section{Experiments} In this section, we conduct various experiments on both synthetic and natural images to demonstrate the effectiveness of the proposed method. The atmospheric scattering model is widely used to synthesize images for both training and testing. The hazy images are synthesized from groundtruth clear images and groundtruth depth images \cite{li2017reside,ancuti2016d}, or estimated depth images \cite{sakaridis2017semantic}. We train our model on the recently released RESIDE-standard dataset \cite{li2017reside}. RESIDE-standard contains 13,990 images for training, and 500 images for testing. These images are generated from existing indoor depth datasets, NYU2~\cite{silberman2012indoor} and Middlebury stereo~\cite{scharstein2014high}. The atmospheric scattering model is used, where the atmospheric light $A$ is randomly chosen in $(0.7, 1.0)$ for each channel, and the scattering coefficient $\beta$ is randomly selected in $(0.6, 1.8)$. We also apply our model trained on RESIDE-standard for cross-domain evaluation on the D-Hazy~\cite{ancuti2016d}, I-Haze~\cite{ancuti2018ihaze} and O-Haze~\cite{ancuti2018ohaze} datasets. The D-Hazy dataset \cite{ancuti2016d} is another synthetic dataset, which contains 23 images synthesized from Middlebury and 1449 images synthesized from NYU2, with atmospheric light $A=(1,1,1)$ and scattering coefficient $\beta=1$. Though the D-Hazy dataset uses the same clean images as RESIDE-standard, the generated hazy images are quite different. I-Haze~\cite{ancuti2018ihaze} and O-Haze~\cite{ancuti2018ohaze} are two recently released datasets of natural indoor and outdoor images, respectively. I-Haze contains 35 pairs of indoor images and O-Haze contains 45 pairs of outdoor images, where the hazy images are generated by using a physical haze machine. We compare our results quantitatively and qualitatively with previous methods. We compare with the prior-based methods DCP~\cite{he2011single}, FVR~\cite{tarel2009fast}, BCCR~\cite{meng2013efficient}, GRM~\cite{chen2016robust}, CAP~\cite{zhu2015fast} and NLD~\cite{berman2016non}. We also compare with the learning-based methods DehazeNet~\cite{cai2016dehazenet}, MSCNN~\cite{ren2016single}, and AOD-Net~\cite{li2017aod}.
We have provided a brief review of these baseline methods in \cref{sec:related}. We use peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as metrics for quantitative evaluation. For the benchmark evaluation on RESIDE-standard, all the learning-based methods are trained on the same dataset. For the cross-domain evaluation on D-Hazy, O-Haze and I-Haze, we use the best released pre-trained model for the learning-based baseline methods. We train our model by SGD with minibatch size 16 and learning rate 0.1 for 60 epochs, and linearly decrease the learning rate after 30 epochs. We use momentum 0.9 and weight decay $10^{-4}$ for all our experiments. We will release our PyTorch code and pre-trained models. \subsection{Quantitative evaluation on benchmark dataset} \begin{table}[t] \centering \caption{Quantitative results on the RESIDE-standard dataset \cite{li2017reside}.} \vspace{0.3cm} \begin{tabular}{|c|c|c|c|c|c|} \hline & DCP~\cite{he2011single} & FVR~\cite{tarel2009fast} & BCCR~\cite{meng2013efficient} & GRM~\cite{chen2016robust} & CAP~\cite{zhu2015fast} \\ \hline PSNR & 16.62 & 15.72 & 16.88 & 18.86 & 19.05 \\ SSIM & 0.8179 & 0.7483 & 0.7913 & \underline{0.8553} & 0.8364 \\ \hline & NLD~\cite{berman2016non} & DehazeNet~\cite{cai2016dehazenet} & MSCNN~\cite{ren2016single} & AOD-Net~\cite{li2017aod} & \textbf{Ours} \\ \hline PSNR & 17.29 & \underline{21.14} & 17.57 & 19.06 & \textbf{27.79}\\ SSIM & 0.7489 & 0.8472 & 0.8102 & 0.8504 & \textbf{0.9556}\\ \hline \end{tabular}% \label{tab:soa}% \vspace{-0.3cm} \end{table}% We present the performance of our network and the baseline methods on the RESIDE-standard benchmark dataset \cite{li2017reside} in \cref{tab:soa}. Our network and the learning-based baselines \cite{cai2016dehazenet,ren2016single,li2017aod} are trained on the provided synthetic data, and evaluated on the separate testing set. We evaluate our results by the metrics provided by \cite{li2017reside}, and compare with the baseline results reported in \cite{li2017reside}. The learning-based methods perform slightly better than the prior-based methods. CAP~\cite{zhu2015fast} performs best among the prior-based methods; it has a learning phase for the coefficients of the linear model. DehazeNet~\cite{cai2016dehazenet} performs best among the baseline methods; it uses a relatively small network to predict components. Our approach outperforms all the baseline methods on both PSNR and SSIM by a large margin. The synthetic data for both training and testing are generated by the atmospheric scattering model, and the baseline methods explicitly use the atmospheric scattering model. In contrast, our approach only uses instance normalization to transform the statistics of deep features. The superior performance of our network on the benchmark dataset demonstrates the effectiveness of \emph{deep} networks and instance normalization for single image dehazing. \subsection{Ablation study} \begin{table}[tbhp] \centering \caption{Ablation study on the RESIDE-standard dataset.
} \vspace{0.3cm} \begin{tabular}{|c|c|c|c|c|c|} \hline Skip & NA & BN & IN & NA & BN \\ \hline Dec & NA & NA & NA & BN & BN \\ \hline PSNR & 18.24 & 25.67 & 26.00 & 25.99 & 26.38 \\ SSIM & 0.7945 & 0.9442 & 0.9414 & 0.9385 & 0.9519 \\ \hline \hline Skip & IN & NA & BN & IN & Perceptual\\ \cline{1-5} Dec & BN & IN & IN & IN & loss \\ \hline PSNR & 26.89 & 26.57 & 27.67 & \underline{27.75} & \textbf{27.79}\\ SSIM & 0.9535 & 0.9381 & 0.9543 & \underline{0.9549} & \textbf{0.9556}\\ \hline \end{tabular}% \label{tab:ablation}% \vspace{-0.2cm} \end{table}% We provide more discussion of the proposed network. We verify the effectiveness of instance normalization with an ablation study on network structures, as shown in \cref{tab:ablation}. We use no normalization (NA), batch normalization (BN), or instance normalization (IN) for the skip connections and the decoder, respectively. The normalization layers are added before each convolutional layer of the decoder except for the first layer. All the results in \cref{tab:ablation} are obtained by only using the reconstruction loss ($\lambda=0$ in the loss function \eqref{eq:loss}) except for the last one, where IN and the combined loss ($\lambda=1$) are used. We train and evaluate our network on the RESIDE-standard dataset. First, comparing the NA results in \cref{tab:ablation} with the previous best results in \cref{tab:soa}, our encoder-decoder alone only achieves competitive results. Second, adding normalization to either the skip connections or the decoder significantly improves the performance of our network. The normalization layers of the decoder are implicitly applied to the features from the skip connections, which makes the result of only normalizing the decoder slightly better than that of only normalizing the skip connections. Third, instance normalization works better than batch normalization, which demonstrates the effectiveness of shifting the mean and variance of deep features at the instance level. Finally, the perceptual loss only helps a little for the quantitative evaluation, but it can help generate more visually appealing output images. We show a qualitative example in \cref{fig:abla}, with the hazy input, the groundtruth clear image, and the outputs of our network without normalization layers and without perceptual loss (NA-NA), of our network with instance normalization and no perceptual loss (IN-IN), and of our network with instance normalization and perceptual loss (IN-IN-Percep). We enlarge the bottom left corner of the results to show more details. The results of IN-IN look much better than those of NA-NA. The enlarged area of the result with perceptual loss (IN-IN-Percep) looks sharper and clearer. \begin{figure}[t] \vspace{-0.1cm} \centerline{ \includegraphics[width=0.95\linewidth]{fig_abla.pdf} } \vspace{0.2cm} \caption{ An example of qualitative results in the ablation study.
We zoom in on the bottom left corner of the images to show more details in the second row.} \label{fig:abla} \vspace{-0.3cm} \end{figure}% \subsection{Cross-domain evaluation} \begin{table}[tbhp] \centering \caption{Quantitative results for cross-domain evaluation.} \vspace{0.2cm} \setlength\tabcolsep{3pt} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{D-Hazy-NYU~\cite{ancuti2016d}} & \multicolumn{2}{c|}{D-Hazy-MB~\cite{ancuti2016d}} & \multicolumn{2}{c|}{I-Haze~\cite{ancuti2018ihaze}} & \multicolumn{2}{c|}{O-Haze~\cite{ancuti2018ohaze}} \\ \hline & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM\\ \hline DCP~\cite{he2011single} & 11.56 & 0.6695 & 12.13 & 0.6752 & 13.41 & 0.4930 & 17.01 & 0.4875\\ \hline CAP~\cite{zhu2015fast} & 13.29 & 0.7266 & \underline{14.36} & \textbf{0.7526} & 15.27 & 0.5603 & 16.68 & 0.4810 \\ \hline DehazeNet~\cite{cai2016dehazenet} & 13.02 & 0.7256 & 13.78 & 0.7342 & \textbf{16.73} & \underline{0.6263} & \textbf{17.90} & \textbf{0.5514} \\ \hline MSCNN~\cite{ren2016single} & \underline{13.67} & \underline{0.7413} & 13.97 & \underline{0.7488} & 15.93 & 0.5896 & 16.27 & 0.4947 \\ \hline AOD-Net~\cite{li2017aod} & 12.44 & 0.7147 & 13.48 & 0.7470 & 15.00 & 0.5828 & 16.22 & 0.4142\\ \hline Ours & \textbf{18.11} & \textbf{0.8268} & \textbf{15.63} & 0.7338 & \underline{16.04} & \textbf{0.6332} & \underline{17.46} & \underline{0.5337} \\ \hline \end{tabular}% \label{tab:cross}% \vspace{-0.3cm} \end{table}% In this section, we focus on the cross-domain performance by evaluating our network trained on RESIDE-standard \cite{li2017reside} on the cross-domain datasets D-Hazy~\cite{ancuti2016d}, I-Haze~\cite{ancuti2018ihaze} and O-Haze~\cite{ancuti2018ohaze}. We compare with baseline methods that have publicly available code; these are strong baselines according to the benchmark evaluation in \cref{tab:soa}. For the learning-based methods DehazeNet~\cite{cai2016dehazenet}, MSCNN~\cite{ren2016single}, and AOD-Net~\cite{li2017aod}, we use the best model the authors have released. MSCNN~\cite{ren2016single} and AOD-Net~\cite{li2017aod} are trained with synthetic images similar to RESIDE-standard, while DehazeNet~\cite{cai2016dehazenet} is trained with patches of web images. We present the quantitative results in \cref{tab:cross}, where we use bold to label the best results and underline to label the second best results. Our approach achieves the best results, or results close to the best, for all the cross-domain evaluations. Our first observation is that the learning-based methods~\cite{cai2016dehazenet,ren2016single,li2017aod}, including ours, generalize reasonably well and perform as well as or better than the prior-based methods~\cite{he2011single,zhu2015fast}. Our network performs well on the cross-domain D-Hazy dataset~\cite{ancuti2016d}. In particular, our approach outperforms all baseline methods by a large margin on the images synthesized from the NYU depth dataset. The D-Hazy dataset is synthesized from the same clear images as our training data RESIDE-standard, but uses different parameters of the atmospheric scattering model. Our trained network has effectively captured the statistics of the deep features of the desired clear images. I-Haze~\cite{ancuti2018ihaze} and O-Haze~\cite{ancuti2018ohaze} images look quite different from our training images, and our network may have difficulty inferring the exact statistics of deep features for these images.
DehazeNet~\cite{cai2016dehazenet} may have gained some advantage on these two datasets because it is trained on patches of web images. Our approach still produces competitive results compared with DehazeNet~\cite{cai2016dehazenet}, and outperforms all the other baselines. Notice again that our network does not use the powerful atmospheric scattering model, and is only trained on a limited number of indoor synthetic images. The cross-domain evaluation further demonstrates the power of \emph{deep} features and instance normalization in our approach. \vspace{-0.1cm} \subsection{Qualitative evaluation} \begin{figure}[tbhp] \vspace{0.1cm} \centerline{ \includegraphics[width=1\linewidth]{fig_vis.pdf} } \vspace{0.25cm} \caption{ Qualitative evaluation on cross-domain datasets. The four examples are from D-Hazy-NYU~\cite{ancuti2016d}, D-Hazy-MB~\cite{ancuti2016d}, I-Haze~\cite{ancuti2018ihaze} and O-Haze~\cite{ancuti2018ohaze}, respectively. Best viewed in color and zoomed in. } \label{fig:vis} \end{figure} We present qualitative results from the cross-domain evaluation in \cref{fig:vis}. The images are from D-Hazy-NYU~\cite{ancuti2016d}, D-Hazy-MB~\cite{ancuti2016d}, I-Haze~\cite{ancuti2018ihaze} and O-Haze~\cite{ancuti2018ohaze}, respectively. We show the hazy image and the groundtruth clear image, and compare our results with DCP~\cite{he2011single}, CAP~\cite{zhu2015fast}, DehazeNet~\cite{cai2016dehazenet}, MSCNN~\cite{ren2016single}, and AOD-Net~\cite{li2017aod}. We use the best released model for the learning-based baselines~\cite{cai2016dehazenet,ren2016single,li2017aod}, and train our network on RESIDE-standard~\cite{li2017reside}. Our network does the best job of removing haze and recovering the true colors of the images, as shown in \cref{fig:vis}. The baseline results still contain a large amount of undesired haze and look blurry (rows 2, 3, 4). In particular, the baselines have difficulty in dark areas of the image, and DCP also has difficulty in areas of white and blue walls (rows 1, 3). For the outdoor image (row 4), our network produces minor artifacts due to the significant domain difference between the desired images and the training indoor images. Using regularizers such as total variation \cite{rudin1992nonlinear} may help reduce these artifacts, and we plan to investigate this in the future. Our simple yet effective network has generated visually appealing results, without depending on extra constraints like the atmospheric scattering model. \vspace{0.2cm} \section{Discussion} We proposed a simple yet effective end-to-end system for single image dehazing. Our network has an encoder-decoder architecture with skip connections. We manipulated the statistics of deep features extracted by a pre-trained VGG net and demonstrated the effectiveness of instance normalization for image dehazing. Moreover, without explicitly using the atmospheric scattering model, our approach outperforms previous methods by a large margin on the benchmark datasets. Notice that both the training and testing data are generated by the atmospheric scattering model, and the baseline methods all explicitly use the model. Our network effectively learns the transformation from hazy image to clear image with limited synthetic data, and generalizes reasonably well. The atmospheric scattering model is powerful and has been successfully deployed for image dehazing in the past decade.
However, the atmospheric scattering model, as a simplified model, also constrains the learnable components to be ``linearly'' combined by element-wise multiplication and summation, which may not be ideal for training deep models. Our study sheds light on the power of \emph{deep} neural networks and the \emph{deep} features extracted by a pre-trained network for single image dehazing, and encourages rethinking how to effectively exploit the physical model for haze. How will the physical model help when training powerful deep networks? It is still an open question, and our approach serves as a strong baseline for future study. Our network outperforms state-of-the-art methods by a large margin on the benchmark dataset, and achieves competitive results in cross-domain evaluation. The key idea of our approach is to apply instance normalization to shift the statistics of deep features for image dehazing. For cross-domain evaluation, it may be difficult to effectively infer the desired statistics of the deep features of clear images that are quite different from the training data. Our generalization ability can be significantly improved by training on large-scale natural images. In the future, we will explore adversarial training to use unpaired hazy and clear images, which are easier to collect from the web. \bibliographystyle{plainnat} \small
\section*{Introduction} The goal of this article is to add the generating function constraint to the Legendrian notion of cobordism initiated by Arnol'd in \cite{Arnold}, and to understand what kind of structure comes out. While the definitions work in the one-jet spaces $J^1(M)$ for any smooth manifold $M$, our main concern is the study of Legendrian cobordisms between (long) Legendrian knots in the $3$-dimensional standard contact space $J^1(\mathbb{R})$. Legendrian knots studied modulo Legendrian isotopy form a more subtle theory than the one of smooth knots. As a first approximation, one may combine the smooth theory with the \textit{stabilization operation}, which consists of adding a zig-zag in the front projection (see Figure \ref{fig1}).\\ \begin{figure}[ht] \begin{center} \includegraphics[scale=0.5]{stabilization} \end{center} \caption{Stabilizations of a Legendrian knot (front projection).} \label{fig1} \end{figure} This operation changes the Legendrian isotopy class of the knot, but not its smooth one. Such a theory can be reduced to the three classical invariants of a Legendrian knot: the topological type, the \textit{Thurston--Bennequin invariant} -- the self-linking number of the knot with a copy of it pushed upward -- and the \textit{Maslov index} -- the rotation number of its Lagrangian projection. We will say in this introduction that a Legendrian knot which cannot be obtained from another one by stabilization is \textit{maximal}. When all Legendrian representatives of a smooth isotopy class of a knot can be listed by applying successive stabilizations to a maximal Legendrian knot, the corresponding topological type is commonly called \textit{simple}. The Legendrian knot theory does not end here, as there exist different Legendrian isotopy classes of knots with the same classical invariants. It is in general difficult to determine when a Legendrian knot is maximal, and if the corresponding topological type is simple. The first counter-example is the one of Chekanov-Eliashberg \cite{Ch1}, with two distinct Legendrian representatives of the smooth knot type $5_2$, which have the same Thurston-Bennequin invariant and the same Maslov index. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.5]{CE-knots} \end{center} \label{fig2} \caption{Wave fronts of the two Chekanov-Eliashberg knots.} \end{figure} When working with Legendrian submanifolds in one-jet spaces of functions, generating functions (gf) appear quite automatically as a tool -- here we work more specifically with \textit{generating functions} which are \textit{quadratic at infinity} (\textit{gfqi}). Generating functions can be first seen as a Legendrian submanifold factory, constructing many but not all of them. Legendrian submanifolds constructed this way have specific features. In dimension $3$, gf Legendrian knots are maximal (i.e. they have no stabilizations), with Maslov index equal to zero. As a consequence, there exist topological types of knots which do not admit a Legendrian representative equipped with a gf. For instance, one of the two (long) trefoils has no Legendrian representative which has a gfqi, while the other does -- see Proposition \ref{prop1.18}. More generally, Morse theoretic techniques allowed by gf's are used efficiently to provide numerous invariants for Legendrian knots (rulings \cite{PuCh}, \cite{FuRu}, GF-homology \cite{Tr1}, \cite{Tr2}, etc). \\ The classical smooth notions of cobordism and concordance have found natural and relevant analogues in contact and symplectic topology.
For the last twenty years, the most studied notion has been the one of Lagrangian cobordism between Legendrian submanifolds. To the search for obstructions to the existence of such cobordisms, initiated by Baptiste Chantraine in \cite{ChB}, Lisa Traynor and her collaborators successfully added the ingredient of gf's and came up with the notion of \textit{gf-compatible Lagrangian cobordisms} \cite{BST}, \cite{ST}. For instance, Sabloff and Traynor in \cite{ST} found out that there is no gf-compatible Lagrangian cobordism between the two Chekanov--Eliashberg knots. Here we are interested in \textit{Legendrian cobordisms} (see Subsection \ref{ss1.2}), meaning an $(n+1)$-dimensional Legendrian submanifold of $\mathbb{R}^{2n+3}$ between two $n$-dimensional Legendrian submanifolds of $\mathbb{R}^{2n+1}$. Because of the extra dimension, this notion is significantly more flexible than the one of (exact) Lagrangian cobordism. The notion of Legendrian cobordism is, in some sense, an extension of the study of Legendrian isotopies: looking at $1$-parameter families of embedded Legendrian submanifolds, one extends the notion of Legendrian isotopy by adding natural accidents (such as immersion or cobordism moments of Legendrian type, see the local description in Figure \ref{fig5}), whereas the notion of Lagrangian cobordism refines the smooth cobordism theory by adding the symplectic constraint. A notable difference is that Lagrangian cobordisms induce embedded smooth cobordisms, while Legendrian cobordisms project onto immersed ones. The gf-equipment extends more naturally to Legendrian cobordisms than to Lagrangian ones. Additionally, it forces the study of $1$-parameter families of Legendrian submanifolds among maximal Legendrian submanifolds. Note that Legendrian knots modulo Legendrian cobordisms have been classified by Arnol'd \cite{Arnold}: two Legendrian knots are Legendrian cobordant if and only if they have the same Maslov index. Our issue is a subclassification, as it concerns Legendrian knots among those with fixed Maslov index equal to zero. In this note, we study Legendrian cobordisms with respect to the gfqi-equipment. In particular, we will see that the construction of the smooth concordance group -- the operation here is the connected sum of knots -- has a natural analogue, and this analogue respects the gfqi-equipment. \begin{thm}{(Theorem \ref{thm2.9}.)} The set of Legendrian knots equipped with a gfqi equivalence class up to Legendrian concordance which is compatible with the gfqi equipment forms a group. \end{thm} The paper is organized as follows: Section 1 is devoted to the necessary toolbox of Legendrian geometry. The definitions of Legendrian cobordism (Definition \ref{def1.8}) and gfqi (Definition \ref{def1.13}) are recalled, as well as fundamental results concerning gfqi's (Chekanov's Theorem \ref{thm1.16} and the Th\'{e}ret--Viterbo Theorem \ref{thm1.15}). We construct the concordance group of Legendrian knots -- \textit{gfqi-concordance} -- in Section 2, and show that the construction fits with gfqi's (Theorem \ref{thm2.9}). Section 3 is devoted to the notion of Legendrian homotopy -- which is a particular case of concordance -- with respect to gfqi's -- \textit{gfqi-homotopy}. We give an additional construction (Proposition \ref{prop3.4}) based on the sum operation \cite{Moi}. \\ The general motive behind those constructions is to discuss the rigidity of the following different notions: Legendrian concordance, Lagrangian concordance, and Legendrian homotopy, with respect to gf's.
At the end of this note, some questions remain open as far as the author knows: how does the set of gfqi's change along a gfqi-cobordism? Thanks to the results of Chekanov and Viterbo-Th\'{e}ret, we know for instance that the number of gfqi's does not change along a Legendrian isotopy (see Proposition \ref{prop2.12}), while it can pass from one to infinitely many through a gfqi-cobordism \cite{BST}. What about gfqi-concordances? And gfqi-homotopies? \section*{Acknowledgements} I'm grateful to my PhD advisor Emmanuel Ferrand for supervising my work and for his numerous pieces of advice during the writing of this note. I thank Lisa Traynor, Josh Sabloff and Samantha Pezzimenti for many interesting discussions concerning Lagrangian cobordisms and generating functions. I'm also grateful to Sylvain Courte for enlightening remarks and explanations concerning generating functions. \\ The author is now supported by the grant SFB \textit{Symplectic Structures in Geometry, Algebra and Dynamics} and the University of Cologne, Germany. \vspace*{1cm} \section{Preliminaries~: Legendrian things in one-jet spaces}\label{sec1} \vspace{0.2cm} \subsection{Contact structure} Consider a smooth manifold $M$ of dimension $m$, endowed with a system of local coordinates $x=(x_1, \dots , x_m)$. We refer to $M$ as the \textit{base space}. The space of $1$-jets of functions based on $M$, $J^1(M)=\mathbb{R} \times T^*M$, is a $(2m+1)$-dimensional manifold. By fixing a Riemannian metric on $M$ -- for $M=\mathbb{R}^m$, we choose the Euclidean metric -- $J^1(M)$ is locally endowed with coordinates $(u,x,y)$, with $u \in \mathbb{R}$ and $y=(y_1, \dots , y_m)$ canonically associated to $x$. It carries a natural contact structure $\xi$, which is the hyperplane field defined as $$\xi = \ker (\mathrm{d} u -y\mathrm{d} x)= \ker (\mathrm{d} u - \sum_{i=1}^m y_i \mathrm{d} x_i). $$ A contact structure is a maximally non-integrable distribution: a submanifold everywhere tangent to $\xi$ must have dimension no greater than $m$. \begin{df}\label{def1} A \textbf{Legendrian submanifold} $L \subset J^1(M)$ is an $m$-dimensional smooth submanifold of $J^1(M)$ which is everywhere tangent to the contact structure, i.e. $$(\mathrm{d} u - y \mathrm{d} x)_{\vert L}\equiv 0 . $$ \end{df} \begin{ex}\label{ex11} Let $f$ be a smooth function defined on $M$. Its \textbf{$1$-graph} $$j^1f=\lbrace ( u=f(x),x,y=\partial_x f (x) ) \ \vert \ x \in M\rbrace$$ is an elementary example of a Legendrian embedding of $M$ into $J^1(M)$. \end{ex} \begin{rk} The cotangent bundle $T^*M$ is naturally endowed with the standard symplectic form $\omega=\mathrm{d}( y \mathrm{d} x)$. Thus, the projection of a Legendrian submanifold $L \subset J^1(M)$ onto $(T^*M,\omega)$ is an immersed exact Lagrangian submanifold. \end{rk} To work with Legendrian geometry, it is convenient to use the \textit{front projection} \begin{align*} \mathbf{p_F} : \ J^1(M) \ & \longrightarrow \mathbb{R} \times M \\ (u,x,y) &\longmapsto \ (u,x) . \end{align*} The front projection of a Legendrian submanifold is called its \textbf{front} for short. We systematically picture Legendrian submanifolds of dimensions $1$ and $2$ via their fronts.\\ \begin{exs}${}$\\ $\bullet$ \ If a Legendrian submanifold is the $1$-graph of a function $f$, its front is the graph of $f$.\\ $\bullet$ \ In the case $m=1$, the ambient space has dimension $3$, and two types of singularities may appear generically in a front: double points and (right or left) cusps.
For $m=2$, a swallow tail may appear (see Figure \ref{fig3}), as well as lines of double points, lines of cusps, \dots \ (see \cite{2} for an exhaustive and detailed description of two-dimensional wave fronts).\\ \end{exs} \begin{df} A \textbf{Legendrian isotopy} is a smooth one-parameter family of Legendrian submanifolds. \end{df} \begin{ex}${}$ In the case of $m=1$, an analogue of the Reidemeister theorem holds: a Legendrian isotopy of knots or links can be seen as a succession of a finite number of local moves on the wave front. The three types of Legendrian Reidemeister moves are illustrated in Figure \ref{fig5} (a). \end{ex} \subsection{Legendrian cobordisms}\label{ss1.2} The notion of Legendrian cobordism as introduced by Arnol'd in \cite{Arnold} consists of an $(n+1)$-dimensional Legendrian submanifold between two $n$-dimensional Legendrian submanifolds. It requires a certain \textit{reduction operation} commonly used in Legendrian (and Lagrangian) geometry. To obtain this Legendrian notion of cobordism, one may consider $M=N\times [0,1]$, where $N$ is a smooth manifold of dimension $n$, and the $1$-jet space $J^1(N\times [0,1])$ endowed with local coordinates $(u,q,t,p,s)$, where $t \in [0,1]$ and $s$ is the dual coordinate of $t$. It carries the natural contact structure $\xi = \ker (\mathrm{d} u - p \mathrm{d} q - s \mathrm{d} t)$. \begin{df} Let $\mathcal{L}$ be a subset in $J^1(N \times [0,1])$, and set $t_0 \in [0,1]$. The \textbf{slice of $\mathcal{L}$ at time $t=t_0$} is the projection of $\mathcal{L} \cap \lbrace t=t_0 \rbrace $ on $J^1(N)$ forgetting $s$. We denote it by $\mathcal{L}_{\lceil t=t_0}$, $$\mathcal{L}_{\lceil t=t_0}= \lbrace (u,q,p) \ \vert \ \exists s \in \mathbb{R} \text{ s.t. } (u,q,t_0,p,s)\in \mathcal{L} \rbrace.$$ \end{df} As a consequence of Thom's Transversality Lemma \cite{Thom}, one can show that the slice at time $t=t_0$ of a Legendrian submanifold of $J^1(N \times [0,1])$ in generic position is a Legendrian submanifold of $J^1(N)$ for almost every $t_0 \in [0,1]$. At the level of fronts, it appears that a generic front in $\mathbb{R} \times N \times [0,1]$ intersected with the hyperplane $\mathbb{R} \times N \times \lbrace t=t_0 \rbrace$ gives a front in $\mathbb{R} \times N$ (see Figures \ref{fig3} and \ref{fig4}). \begin{df}\label{def1.8} A \textbf{Legendrian cobordism} consists of a Legendrian submanifold $\mathcal{L} \subset J^1(N \times [0,1])$ and two Legendrian submanifolds $L_0$ and $L_1 \subset J^1(N)$ such that $$L_0 = \mathcal{L}_{\lceil t=0} \quad \text{ and } \quad L_1 = \mathcal{L}_{\lceil t=1} .$$ \end{df} \begin{ex} A Legendrian isotopy $(L_t)_{t\in [0,1]}$ can be seen as a Legendrian cobordism $\mathcal{L}$ between $L_0$ and $L_1$ such that each slice $\mathcal{L}_{\lceil t=t_0}$, $t_0 \in [0,1]$, is an embedded Legendrian submanifold. \end{ex} \begin{ex} In the case of Legendrian cobordisms between Legendrian knots, the front projection allows us to make reliable illustrations. The first Reidemeister move for instance can be pictured as slices of a swallow tail (Figure \ref{fig3}). In addition to the three Reidemeister moves, three other local moves can be performed on a wave front to obtain Legendrian cobordisms. One is illustrated in Figure \ref{fig4}.
\begin{figure}[ht] \begin{center} \includegraphics[scale=0.5]{iso1} \end{center} \caption{The swallow tail and the first Reidemeister move: R I.} \label{fig3} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.5]{cob2} \end{center} \caption{Cobordism move: the Legendrian saddle point.} \label{fig4} \end{figure} \begin{figure}[ht] \begin{center} \subfigure[(a)]{\includegraphics[scale=0.35]{isomoves}} \qquad \qquad \subfigure[(b)]{\includegraphics[scale=0.35]{hom-cob-moves}} \end{center} \caption{The isotopy moves (red), the homotopy move (green) and the cobordism moves (blue).} \label{fig5} \end{figure} Comparing Legendrian cobordisms with their smooth analogues, one can observe that the two cobordism moves -- saddle point and spherical -- are complemented by a third, extra move: the homotopy move (see Figure \ref{fig5}). We will come back to the \textit{homotopy} notion in Section $3$. \end{ex} \subsection{Generating functions} The notion of generating function is based on another reduction operation. Let $k$ be a positive integer. Consider the space $J^1(M\times \mathbb{R}^k)= \mathbb{R} \times T^*(M\times \mathbb{R}^k)$, endowed with coordinates $(u,x,y,w,v)$, where $w \in \mathbb{R}^k$ stands for the extra variable in the base space, and $v \in \mathbb{R}^k$ for the corresponding dual coordinates. Let $\mathcal{L}$ be a subset of $J^1(M\times \mathbb{R}^k)$. \begin{df} The \textbf{contour of $\mathcal{L}$ in the direction of $M$} is the projection of $\mathcal{L} \cap \lbrace v=0 \rbrace$ on $J^1(M)$. \end{df} As for the slice operation, the contour of a Legendrian submanifold of $J^1(M \times \mathbb{R}^k)$ in generic position is a Legendrian submanifold of $J^1(M)$. In particular, it permits building numerous non-trivial Legendrian submanifolds from $1$-graphs of functions. \begin{df} A \textbf{generating function} (gf) for a Legendrian submanifold $L \subset J^1(M)$ is a function $F$ defined on a product $M \times \mathbb{R}^k$ such that $L$ is the contour of the $1$-graph of $F$ in the direction of $M$, $$L = \lbrace (u,x,y) \ \vert \ \exists w, \partial_w F(x,w)=0, u=F(x,w), y=\partial_x F(x,w) \rbrace .$$ \end{df} When a Legendrian submanifold has a gf, it has infinitely many of them, and it is not relevant to distinguish every one of them. There are two operations which do not change the underlying Morse dynamics of a gf: the \textit{stabilization} operation and the \textit{fiberwise diffeomorphism} operation. Together they define a notion of equivalence class for gf's such that each invariant constructed from the Morse dynamics of a gf is in fact an invariant of the equivalence class of the gf. \noindent $\bullet$ Consider $F$ a gf defined on $M \times \mathbb{R}^k$. Let $k'$ be an integer, and $Q'$ be a non-degenerate quadratic form defined on $\mathbb{R}^{k'}$. Then one may replace $F$ by $F \oplus Q'$, defined on $M \times \mathbb{R}^k \times \mathbb{R}^{k'}$ by $F \oplus Q' (q,w,w')=F(q,w)+Q'(w')$. This operation is called a \textbf{stabilization}. \noindent $\bullet$ Let $F$ be a gf defined on $M \times \mathbb{R}^k$, and $\Phi$ be a fiberwise diffeomorphism of $M \times \mathbb{R}^k$ -- $\Phi(q,w)=(q,\phi_q(w))$ with $\phi_q$ a diffeomorphism of $\mathbb{R}^k$ for every $q \in M$. One may replace $F$ by the composition $F \circ \Phi$. This operation is called a \textbf{fiberwise diffeomorphism}.
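Before turning to the notion of equivalence, the following elementary example -- added here for concreteness -- illustrates the contour construction. \begin{ex} Consider the function $F$ defined on $\mathbb{R} \times \mathbb{R}$ by $F(x,w)=\frac{w^3}{3}-xw$. The fiber critical set is $\lbrace \partial_w F(x,w) = w^2 - x =0 \rbrace$, so the contour of $j^1F$ in the direction of $\mathbb{R}$ is $$L=\lbrace (u,x,y)=(-\tfrac{2}{3}w^3,\, w^2,\, -w) \ \vert \ w \in \mathbb{R} \rbrace ,$$ whose front is a semicubical cusp. One checks directly that $\mathrm{d} u - y\mathrm{d} x = -2w^2\mathrm{d} w -(-w)(2w\,\mathrm{d} w)=0$, so $L$ is indeed Legendrian. Of course this $F$ is not quadratic at infinity; it only illustrates the contour operation. \end{ex}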
\\ If for two generating functions $F$ and $F'$, there exists a gf $F_0$ such that $F$ and $F'$ descend from $F_0$ by successive stabilization and fiberwise diffeomorphism operations, then $F$ and $F'$ are declared to be \textbf{equivalent}. \begin{df}\label{def1.13} A gf is said to be \textbf{quadratic at infinity} (\textbf{gfqi}) if it is equivalent to a gf of the form $f+Q$, where $f$, defined on $M\times \mathbb{R}^k$, is compactly supported, and $Q$ is a non-degenerate quadratic form on $\mathbb{R}^k$. \end{df} Note that a Legendrian submanifold which is the contour of a gfqi must be equal to the zero section outside a compact set of $J^1(M)$. \begin{ex} It is possible to realize a Legendrian representative of a long trefoil as the contour of a gfqi $F$ defined on $\mathbb{R} \times \mathbb{R}$. The movie of the one-parameter family of functions $(F(q,\cdot))_{q \in \mathbb{R}}$ is depicted in Figure \ref{figg}. The wave front of the contour of $F$ is also called the \textit{Cerf diagram} of the family $(F(q,\cdot))_{q \in \mathbb{R}}$, \cite{Cerf}. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.3]{trefoilmovie} \end{center} \caption{} \label{figg} \end{figure} \end{ex} In Sections 2 and 3, we will use two fundamental results concerning gfqi's. The first one is the persistence of gfqi's under Legendrian isotopies. \begin{thm}{(Chekanov \cite{Ch}.)}\label{thm1.16} If $(L_t)_{t\in [0,1]}$ is a Legendrian isotopy, and $L_0$ admits a gfqi $F$, then there exists $\mathcal{F}$ defined on $[0,1]\times M \times \mathbb{R}^k$ such that, for every $t\in [0,1]$, $\mathcal{F}(t, \cdot, \cdot) = F_t$ is a gfqi for $L_t$, and $F_0$ is equivalent to the initial $F$. \end{thm} The second result is the uniqueness of the gfqi class for Legendrian submanifolds isotopic to the zero section. \begin{thm}{(Th\'{e}ret--Viterbo \cite{Th}, \cite{Vi}.)}\label{thm1.15} If a Legendrian submanifold is isotopic to the zero section, then it admits a unique equivalence class of gfqi's. \end{thm} In particular, the zero section $\mathcal{O}$ admits a unique equivalence class of gfqi's, and we denote it by $F_{\mathcal{O}}$.\\ Back to Legendrian knots, a Morse theoretic argument leads to the observation that a stabilization can never appear when gfqi's -- or any gf's with reasonable behavior at infinity -- are involved. Thus, only maximal Legendrian knots are constructible by gfqi's. Another Morse theoretic observation is that such Legendrian knots must have Maslov index equal to zero. This permits us to conclude that not all maximal Legendrian knots are reachable by gfqi's, and to show the following result. \begin{prop}\label{prop1.18} The lefthand trefoil does not have any Legendrian long representative that admits a gfqi. \end{prop} \noindent \textit{Proof.} It follows from the following fact, proved by Etnyre and Honda in \cite{EtHo}: \textit{the two (topological) trefoil knots are simple.} As a consequence, all long Legendrian representatives of the lefthand trefoil can be obtained by stabilizing the maximal representative depicted in Figure \ref{fig6}, whose Maslov index is not zero. Thus all long Legendrian representatives of the lefthand trefoil with Maslov index zero present a zigzag, and therefore cannot be realized by a gfqi.
\begin{figure}[ht] \begin{center} \includegraphics[scale=0.3]{lefthand-trefoil} \end{center} \caption{The maximal (long) Legendrian lefthand trefoil.} \label{fig6} \end{figure} \qed \vspace*{0.5cm} \section{Concordance of gfqi-knots}\label{sec2} \subsection{Definitions} \begin{df} A \textbf{Legendrian knot} is a connected Legendrian submanifold of $J^1(\mathbb{R})$. \end{df} \begin{df} A \textbf{gfqi-equipped knot} is a pair $(L,F)$ where $L$ is a Legendrian knot and $F$ is a gfqi equivalence class for $L$. \end{df} Thus, a gfqi-equipped knot is equal to the zero section outside a compact set -- it is not a compact knot but a long knot. Consider the smallest connected open set $U$ in the base space $\mathbb{R}$ such that $L \cap {}^cU= \mathcal{O} \cap {}^cU$. We will refer to $U$ as the \textit{support} of $L$. \begin{df} A \textbf{gfqi-cobordism} between two gfqi-equipped knots $(L_0,F_0)$ and $(L_1,F_1)$ consists of a pair $(\mathcal{L},\mathcal{F})$ where $\mathcal{L}$ is a Legendrian cobordism between $L_0$ and $L_1$, and $\mathcal{F}$ is a gfqi equivalence class for $\mathcal{L}$ such that $$\mathcal{F}_{\vert t=0}=F_0 \quad \text{and} \quad \mathcal{F}_{\vert t=1}=F_1.$$ \end{df} All gfqi-equipped knots are gfqi-cobordant, as a consequence of the following lemma. \begin{lem}{(Bourgeois--Sabloff--Traynor \cite{BST}.\footnote{The proof, given for generating functions which are \textit{linear at infinity}, also applies to the gfqi case.})} \label{lem2.4} Let $(L,F)$ be a gfqi-equipped knot. There exists a gfqi-cobordism between $(L,F)$ and $(\mathcal{O},F_{\mathcal{O}})$. \end{lem} \subsection{The gfqi-concordance group} \begin{df} A \textbf{gfqi-concordance} is a gfqi-cobordism $(\mathcal{L},\mathcal{F})$ such that $\mathcal{L}$ is diffeomorphic to the base space $\mathbb{R} \times [0,1]$. \end{df} The notion of gfqi-concordance defines an equivalence relation on the set of gfqi-equipped knots. We denote by $[L,F]$ the equivalence class of a gfqi-equipped knot $(L,F)$ modulo gfqi-concordance. \subsubsection{The connected sum operation} Let $(L,F)$ and $(L',F')$ be two gfqi-equipped knots. \begin{df}\label{def2.6} The \textbf{connected sum} of $L$ with $L'$ consists of the concatenation of $L$ followed by $L'$ (see Figure \ref{fig7}). \end{df} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.6]{connectedsum} \end{center} \caption{The connected sum $L\# L'$.} \label{fig7} \end{figure} The resulting knot, denoted $L \# L'$, has a natural gfqi equipment obtained from $F$ and $F'$: $$F \# F' \ : \ (q,w,w') \mapsto F(q,w) + F'(q,w').$$ \begin{rk} Note that one may have to replace $F$ and $F'$ by $F(\cdot +T,\cdot )$ and $F'(\cdot -T, \cdot )$ respectively, with $T \in \mathbb{R}$ large enough, in order to disconnect the supports of $L$ and $L'$ in the base space. \end{rk} \begin{rk} Reduced to the Legendrian factor, the connected sum operation is commutative modulo isotopy \cite{FuTa}. However, it is not clear that the gfqi-equipment is compatible. Thanks to Chekanov's construction, one can follow this isotopy from $L \#L'$ to $L' \# L$ with a one-parameter family of gfqi's, starting with $F \# F'$, but maybe not ending with a gfqi in the same equivalence class as $F' \# F$. \end{rk} \begin{df} The \textbf{Legendrian mirror} of a Legendrian knot $L$ is the Legendrian knot $\bar{L}$ whose front is the mirror image of the front of $L$ with respect to the $u$-axis.
\end{df} If a Legendrian knot $L$ is equipped with a gfqi $F$, we naturally equip its mirror $\bar{L}$ with the gfqi $\bar{F}$ such that: $$\bar{F}(q,\bar{w})=F(-q,\bar{w}).$$ \begin{thm}\label{thm2.9} Let $(L,F)$ be a gfqi-equipped knot. Then $$[\bar{L},\bar{F}]+[L,F]=[L,F]+[\bar{L},\bar{F}]=[\mathcal{O},F_{\mathcal{O}}].$$ \end{thm} \noindent \textit{Proof.} We use a well-known construction in Legendrian geometry called \textit{front spinning}. It consists of including the wave front of a Legendrian submanifold in a bigger space, and then rotating it using the additional coordinates in order to create a bigger wave front (see Ekholm-Etnyre-Sullivan \cite{EES} and Golovko \cite{G}). Here, consider the wave front $\mathbf{p_F}(L)$ of $L$, which is one-dimensional. Suppose its support is located in the half space $\lbrace q\geqslant 0 \rbrace$. Let us add the $t$-coordinate and consider $\mathbf{p_F}(L) \cap \lbrace q \geqslant 0 \rbrace$ included in the subspace $\lbrace (u,q,t) \ \vert \ q \geqslant 0, \ t=0 \rbrace \subset \mathbb{R}^2 \times [0,1]$. Then make half a turn around the $u$-axis with this singular curve, and keep the trace of the rotation all along to create a $2$-dimensional object in $\mathbb{R}^2 \times [0,1]$ (see Figure \ref{fig8}).\\ \begin{figure}[ht] \begin{center} \includegraphics[scale=0.45]{spinning} \end{center} \caption{Construction of the concordance between $\bar{L}\# L$ and $\mathcal{O}$.} \label{fig8} \end{figure} The result of this operation is the wave front of a Legendrian surface $\Sigma L$ in $J^1(\mathbb{R} \times [0,1])$, with a two-component boundary. One component lives in the subspace $\lbrace t=0 \rbrace$ and corresponds to the connected sum $\bar{L} \# L$. The second is the subspace $\lbrace (u,q,t,p,s) \ \vert \ u=p=s=0, \ t=1 \rbrace$, and corresponds to the zero section $\mathcal{O} \subset J^1(\mathbb{R})$. Thus, the Legendrian surface $\Sigma L $ realizes a Legendrian cobordism between $\bar{L} \# L$ and the zero section. As $\Sigma L$ clearly has genus zero, it is a concordance.\\ If $F$ is a gfqi for $L$, then $\Sigma L$ has the following gfqi, $$\mathcal{F}: (q,t,w) \mapsto F(\sqrt{q^2+t^2},w).$$ For $t=1$, the restriction $\mathcal{F}_{\vert t=1} $ is a gfqi for $\mathcal{O}$. Thanks to the uniqueness result of Viterbo-Th\'{e}ret -- Theorem \ref{thm1.15} -- we obtain for sure the zero section endowed with its unique gfqi-equipment, $(\mathcal{O},F_{\mathcal{O}})$, at the end of this gfqi-concordance.\footnote{Note that there is no need to invoke the Viterbo-Th\'{e}ret theorem here, as this fact follows by construction.} Replace $F$ by an equivalent form $f+Q$, where $f$ is compactly supported and $Q$ is a non-degenerate quadratic form in the $w$-variable. For $t=0$, let us write $$\mathcal{F}_{t=0}(q,0,w)=f(-q,w)+f(q,w)+Q(w),$$ and compare with $$\bar{F}\# F (q,\bar{w},w)=f(-q,\bar{w})+Q(\bar{w})+ f(q,w)+Q(w).$$ In \cite{Th}, in order to prove the invariance of the uniqueness property under isotopies, Th\'{e}ret was led to prove the following result. \begin{lem}\label{lem2.10} If $(F_t)_{t\in [0,1]}$ is a smooth path of gfqi's which all have the same contour, $L_t=L$ for all $t$, then $F_0$ and $F_1$ are equivalent. \end{lem} Thanks to this technical lemma, in order to conclude it is sufficient to link the equivalence classes of $\mathcal{F}_{t=0}$ and $\bar{F}\# F$ by a path of gfqi's whose contour is constant and equal to $\bar{L}\# L$. Suppose $F$ is defined on $\mathbb{R}\times \mathbb{R}^k$.
We replace $\mathcal{F}_{\vert t=0}$ by the equivalent gfqi $F_0$ defined on $\mathbb{R} \times \mathbb{R}^k \times \mathbb{R}^k$ by $$F_0(q,w,\bar{w})=f(-q,w)+f(q,w)+Q(w)+Q(\bar{w}). $$ Then, let us define the path of gfqi's $(F_t)_{t\in [0,1]}$ by $$F_t(q,w,\bar{w})=f(-q,\cos(\tfrac{\pi}{2} t) w +\sin(\tfrac{\pi}{2} t)\bar{w})+f(q,w) + Q(w)+Q(\bar{w}).$$ One can check that the contour remains constant along this path, and at $t=1$ we obtain $$F_1(q,w,\bar{w})=f(-q,\bar{w})+f(q,w)+Q(w)+Q(\bar{w}),$$ which is $\bar{F} \# F$ up to the ordering of the fibre variables. \qed\\ \begin{rk} We wonder whether the gfqi-concordance group is trivial or not. Recall that Chekanov's theorem states that, if $(L_0, F_0)$ is a gfqi-equipped knot and $(L_t)_{t \in [0,1]}$ is a Legendrian isotopy from $L_0$ to $L_1$, then there exists a one-parameter family of gfqi's $(F_t)_{t\in [0,1]}$, ending with a gfqi $F_1$ for $L_1$, such that $[L_0,F_0]=[L_1,F_1]$. However, the gfqi class $F_1$ for $L_1$ cannot be fixed in advance. Moreover, a stronger version of Lemma \ref{lem2.10} holds: \end{rk} \begin{prop}\label{prop2.12} Let $(F_t)_{t\in [0,1]}$ and $(F_t')_{t\in [0,1]}$ be two smooth paths of gfqi's, such that the corresponding contours $(L_t)_{t\in [0,1]}$ form the same Legendrian isotopy. Suppose $F_0$ and $F_0'$ are equivalent. Then $F_1$ and $F_1'$ are equivalent. \end{prop} \noindent \textit{Proof.} In \cite{Th}, Th\'{e}ret proved that the set of gfqi's forms a Serre fibration over the set of Legendrian submanifolds in $J^1(M)$ which are diffeomorphic to the base space $M$.\footnote{More precisely, it is done for Lagrangian submanifolds in \cite{Th}, and adapted to the Legendrian case in \cite{14}.} Consider the path of gfqi's $F\star F'^{-1}$, formed by composing $(F_t)_{t\in [0,1]}$ with $(F_t')_{t\in [0,1]}$ traversed in the opposite direction. It gives a path from $F_1'$ to $F_1$ passing through $F_0=F_0'$, which projects onto the loop of Legendrian submanifolds formed by composing the path $(L_t)_{t\in [0,1]}$ with its inverse. This loop is contractible, and retracts onto the constant loop $(L_1)_{t\in [0,1]}$. Thus there exists a deformation retraction from the path $F \star F'^{-1}$ to a path $(\tilde{F}_t)_{t\in[0,1]}$, with $\tilde{F}_0=F_1'$ and $\tilde{F}_1=F_1$, which satisfies the assumption of Lemma \ref{lem2.10}. \qed\\ In other words, the number of equivalence classes of gfqi's for a Legendrian knot remains the same along Legendrian isotopies.\\ \noindent \textbf{Question:} How does this number change along a gfqi-concordance (or \textit{homotopy}, see the next Section)? \section{A gfqi-homotopy construction}\label{sec21} This third part is devoted to another notion of cobordism with respect to the existence of gfqi's, intermediate between Legendrian isotopy and gfqi-concordance. The following construction emphasizes the flexibility that exists among gfqi-concordances. \begin{df}\label{def21} A \textbf{Legendrian homotopy} is a Legendrian cobordism $\mathcal{L}$ such that all the $t_0$-slices $\mathcal{L}_{\vert t =t_0}$, $t_0 \in [0,1]$, are immersed Legendrian submanifolds. \end{df} A Legendrian homotopy is a particular case of a concordance, avoiding cobordism moments. In terms of local decomposition on a wave front (see Figure \ref{fig5}), a generic Legendrian homotopy can be decomposed using only Reidemeister moves I, II, III, and the homotopy move IV.\\ In \cite{EF}, E. Ferrand shows that the homotopy move can be avoided, by replacing it with a sequence of isotopy and cobordism moves.
In other words, if there is a Legendrian cobordism between two Legendrian knots, one may change it into another cobordism which projects onto a smooth embedded cobordism in $J^1(\mathbb{R}) \times [0,1]$. However, this sequence of isotopy and cobordism moves produces stabilised knots or links, which are forbidden in our context of working with generating functions. \begin{df} A \textbf{homotopy with a gfqi} is a Legendrian homotopy $\mathcal{L}$ such that there exists a gfqi $\mathcal{F}$ for $\mathcal{L}$. \end{df} Let $(L,F)$ be a gfqi-equipped knot. Recall that Theorem \ref{thm2.9} implies that there exists a concordance between the connected sum $\bar{L} \# L$ and the zero-section $\mathcal{O}$ which has a gfqi. \begin{prop}\label{prop3.4} Let $L$ be a Legendrian knot having a gfqi $F$. Then there exists a homotopy with a gfqi between $L \# \bar{L} \# L$ and $L$, $$L \# \bar{L} \# L \underset{\text{hom. gfqi}}{\sim} L.$$ \end{prop} \noindent \textit{Proof.} The construction is based on the \textit{sum} operation, defined in \cite{Moi}. Starting with two generating functions $F_1$ and $F_2$ for two Legendrian submanifolds $L_1$ and $L_2$ respectively, one obtains a third Legendrian object, denoted $L_1 \underset{\smile}{+} L_2$, by summing up the gf's over the base space. It is generically an embedded Legendrian submanifold, which admits a gf $F_1 \underset{\smile}{+} F_2$ defined as $$F_1 \underset{\smile}{+} F_2(q, w_1,w_2)=F_1(q,w_1)+F_2(q,w_2).$$ If $L_1$ and $L_2$ have disjoint supports over the base space, then the sum operation is nothing but the connected sum operation of Definition \ref{def2.6}. Note also that, if $F_1$ and $F_2$ are quadratic at infinity, so is $F_1 \underset{\smile}{+} F_2$. Consider now the Legendrian submanifold $L_H$, isotopic to the zero section, described by its wave front in red in Figure \ref{fig9}. It must lie in such a way that over the support of $L$, $L_H$ consists of three horizontal strings. We will adjust the spacing between the two upper horizontal strings. If the knot $L$ has height $h$, then we start with a spacing $H$ such that $H>h$. Summing up $L$ with $L_H$ then gives three copies of $L$ attached at each level of $L_H$. The resulting Legendrian submanifold is isotopic to the connected sum $L \# \bar{L} \# L$. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.9]{homotopy1} \end{center} \caption{The sum $L \underset{\smile}{+} L_H$, isotopic to $L \# \bar{L} \# L$.} \label{fig9} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.9]{homotopy2} \end{center} \caption{Decreasing the spacing of $L_H$ from $H$ to $\epsilon$.} \label{fig10} \end{figure} We then decrease the spacing between the two upper strings of $L_H$ from $H$ to $\epsilon$, for a sufficiently small $\epsilon$; see Figure \ref{fig10}. The Legendrian submanifold $L \underset{\smile}{+} L_{\epsilon}$ at the end of the procedure is isotopic to $L$. The $1$-parameter family $(L \underset{\smile}{+} L_{t})_{t\in [\epsilon, H]}$ is a homotopy, which, completed by these isotopies, gives a homotopy from $L \# \bar{L} \# L$ to $L$. There is a simple $1$-parameter family of gfqi's $(F_t)_{t \in [\epsilon, H]}$ for $(L_t)_{t\in [\epsilon , H]}$, which can be extended to a gfqi for the whole homotopy from $L \# \bar{L} \# L$ to $L$ thanks to Chekanov's theorem. \qed\\ The different constructions of Legendrian cobordisms with gfqi's should be compared with the notions of gf-compatible Lagrangian cobordisms of L. Traynor and collaborators (J. Sabloff, S. Pezzimenti). Note that the homotopy move corresponds to the immersed points of gf-compatible Lagrangian cobordisms.
It is known that homotopy moves cannot always be avoided in gf-compatible (immersed) Lagrangian cobordisms: for instance, there is no embedded gf-compatible Lagrangian cobordism between the two Chekanov knots \cite{ST}. On the one hand, one wonders when homotopy moves can be removed, with respect to gf's, to create embedded Lagrangian cobordisms. On the other hand, we ask whether it is possible to systematically remove cobordism moments rather than immersed points. In other words, using the terminology of this note, is there any obstruction to the existence of a gfqi-homotopy between two gfqi-equipped knots (i.e. with respect to the gfqi-equipment)? Note that if no such obstruction exists, then the gfqi-concordance group is trivial.\\ \noindent \textbf{Question:} Let $L$ be a gfqi-knot such that there exist two different gfqi classes $F_1$ and $F_2$ whose contour is $L$. Does there exist a gfqi-isotopy from $(L,F_1)$ to $(L,F_2)$? A gfqi-concordance? A gfqi-homotopy?
\section{Introduction} Topological methods play an important role in the modern study of infinite discrete groups. Recall that an Eilenberg--Mac Lane complex of type $K(G,1)$ is an aspherical CW complex with fundamental group $G$. For any group $G$ a $K(G,1)$ complex exists, and it is unique up to homotopy equivalence. While the existence of such spaces is elementary, it is often a much harder problem to find a $K(G,1)$ complex which is suitably `nice' to be used for doing calculations. This is important if one wants to compute homology and cohomology groups. This is part of the motivation for the study of higher order topological finiteness properties of groups, a topic which goes back to pioneering work of Wall~\cite{Wall1965} and Serre~\cite{Serre1971}. We recall that a group is of type ${\rm F}_n$ if there is a $K(G,1)$-complex with a finite $n$-skeleton. The property ${\rm F}_1$ is equivalent to finite generation, while a group is of type ${\rm F}_2$ if and only if it is finitely presented, so ${\rm F}_n$ gives a natural higher dimensional analogue of these two fundamental finiteness properties. The geometric dimension of a group $G$, denoted $\mathrm{gd}(G)$, is the minimum dimension of a $K(G,1)$ complex. The topological finiteness property ${\rm F}_n$ and geometric dimension correspond, respectively, to the homological finiteness property ${\rm FP}_n$ and the cohomological dimension of the group. The study of topological and homological finiteness properties is an active area of research. We refer the reader to~\cite[Chapter~8]{BrownCohomologyBook},~\cite[Chapters~6-9]{GeogheganBook} and~\cite{Brown2010} for more background on this topic. The homological finiteness properties ${\rm FP}_n$ and cohomological dimension have also been extensively studied more generally for monoids. One major motivation for studying homological finiteness properties of monoids comes from important connections with the theory of rewriting systems, and the word problem for finitely presented monoids. It is well known that there are finitely presented monoids with undecidable word problem. Given that the word problem is undecidable in general, a central theme running through the development of geometric and combinatorial group and monoid theory has been to identify and study classes of finitely presented monoids all of whose members have solvable word problem. A finite complete rewriting system is a finite presentation for a monoid of a particular form (both confluent and Noetherian) which gives a solution of the word problem for the monoid; see~\cite{BookAndOtto}. Complete rewriting systems are also of interest because of their close connection with the theory of Gr\"{o}bner--Shirshov bases; see~\cite{Ufnarovski1998}. The connection between complete rewriting systems and homological finiteness properties is given by the Anick-Groves-Squier Theorem which shows that a monoid that admits such a presentation must be of type ${\rm FP}\sb \infty$; see \cite{Anick1986, Squier1987} and \cite{Brown1989}. The property ${\rm FP}_n$ for monoids also arises in the study of Bieri--Neumann--Strebel--Renz invariants of groups; see~\cite{Bieri1987}. A number of other interesting homological and homotopical finiteness properties have been studied in relation to monoids defined by complete rewriting systems; see~\cite{AlonsoHermiller2003, Pride2005, Guiraud2012}. The cohomological dimension of monoids has also received attention in the literature; see for example~\cite{Cheng1980, Guba1998, Margolis2015}.
In fact, for monoids these properties depend on whether one works with left $\mathbb{Z}M$-modules or right $\mathbb{Z}M$-modules, giving rise to the notions of both left- and right-${\rm FP}_n$, and left and right cohomological dimension. In general these are independent of each other; see~\cite{Cohen1992, Guba1998, Pride2006}. Working with bimodule resolutions of the $({\mathbb{Z}M}, {\mathbb{Z}M})$-bimodule ${\mathbb{Z}M}$, one obtains the notion of bi-${\rm FP}\sb n$ introduced and studied in~\cite{KobayashiOtto2001}. This property is of interest from the point of view of Hochschild cohomology, which is the standard notion of cohomology for rings; see~\cite{HochschildCoh, Mitchell1972}. For more background on the study of homological finiteness properties in monoid theory, and the connections with the theory of string rewriting systems, see~\cite{Brown1989,Cohen1997,Otto1997}. While homological finiteness properties of monoids have been extensively studied, in contrast, until recently there was no corresponding theory of topological finiteness properties of monoids. The results in this paper are part of a research programme of the authors, initiated in \cite{GraySteinberg1}, aimed at developing such a theory. A central theme of this work is that the topological approach allows for less technical, and more conceptual, proofs than had previously been possible using only algebraic means. Other recent results in the literature where topological methods have been usefully applied in the study of monoids include, e.g. \cite{ Brittenham2009, Meakin2015, Margolis2017}. This paper is the sequel to the article~\cite{GraySteinberg1} where we set out the foundations of $M$-equivariant homotopy theory for monoids acting on CW complexes, and the corresponding study of topological finiteness properties of monoids. In that paper we introduced the notion of a left equivariant classifying space for a monoid, which is a contractible projective $M$-CW complex. A left equivariant classifying space always exists, for any monoid $M$, and it is unique up to $M$-homotopy equivalence. We then define the corresponding finiteness conditions left-${\rm F}_n$ and left geometric dimension in the obvious natural way in terms of the existence of a left equivariant classifying space satisfying appropriate finiteness properties. It follows easily from the definitions that left-${\rm F}_n$ implies left-${\rm FP}_n$, and that the left geometric dimension is an upper bound on the left cohomological dimension of the monoid. There are obvious dual definitions and statements working with right actions. We also developed a two-sided analogue of this theory in~\cite{GraySteinberg1}, with two-sided $M$-actions, defining the notion of a bi-equivariant classifying space for a monoid, and the resulting finiteness properties bi-${\rm F}_n$ and geometric dimension. It follows from the definitions that bi-${\rm F}_n$ implies bi-${\rm FP}_n$ (in the sense of~\cite{KobayashiOtto2001}) and that the geometric dimension is an upper bound for the Hochschild cohomological dimension. See Section~\ref{sec_prelims} below for full details and formal definitions of all of these notions. The aim of this paper is to apply the ideas and results from~\cite{GraySteinberg1} to solve some open problems concerning homological finiteness properties of monoids that seemed resistant to algebraic techniques. Let us begin with some history. An important open problem is whether every one-relator monoid has a decidable word problem.
While the question is open in general, it has been solved in a number of special cases; see Adjan~\cite{Adjan1966} and Adjan and Oganesyan~\cite{Adyan1987}. Related to this is another open question which asks whether every one-relator monoid admits a presentation by a finite complete rewriting system. Of course, a positive answer to this question would imply a positive solution to the word problem. In light of the Anick-Groves-Squier Theorem, which states that monoids which admit finite complete presentations are of type right- and left-${\rm FP}_\infty$, it is natural to ask whether all one-relator monoids are of type ${\rm FP}_\infty$. This question was posed by Kobayashi in~\cite[Problem~1]{Kobayashi2000}. The question is also natural given the fact that all one-relator groups are of type ${\rm FP}_\infty$, as a consequence of Lyndon's Identity Theorem for one-relator groups; see Lyndon \cite{Lyndon1950}. The first positive result concerning the word problem for one-relator monoids dealt with the case of so-called special one-relator monoids~\cite{Adjan1966}. A \emph{special} monoid is one defined by a finite presentation of the form $\langle A\mid w_1=1,\ldots,w_k=1\rangle$. They were first studied in the sixties by Adjan~\cite{Adjan1966} and Makanin~\cite{Makanin66}. Adjan proved that the group of units of a one-relator special monoid is a one-relator group and reduced the word problem of the monoid to that of the group, which has a decidable word problem by Magnus's theorem~\cite{LyndonAndSchupp}. Makanin proved more generally that the group of units of a $k$-relator special monoid is a $k$-relator group and reduced the word problem of the monoid to that of the group. See~\cite{Zhang} for a modern approach to these results. Thus there is a much closer connection for special monoids between the group of units and the monoid than is customary. One of the main results of this paper is that if $M = \langle A\mid w_1=1,\ldots,w_k=1\rangle$ and $G$ is the group of units of $M$, then whenever $G$ is of type ${\rm FP}\sb n$ with $1\leq n\leq \infty$, the monoid $M$ is also of type left- and right-${\rm FP}\sb n$. Moreover, we prove that both the left and right cohomological dimensions of $M$ are bounded below by $\mathop{\mathrm{cd}} G$, and are bounded above by $\max\{2,\mathop{\mathrm{cd}} G\}$. We shall also prove the topological analogues of these results, obtaining the corresponding statements for right and left-${\rm F}_n$ and geometric dimension. These results are obtained by proving new results about the geometry of Cayley digraphs of special monoids, including the observation that the quotient of the Cayley digraph by its strongly connected components is a regular rooted tree on which the monoid acts by simplicial maps. We use this to show how one can construct a left equivariant classifying space for a special monoid from an equivariant classifying space for its group of units. We shall then go on to apply these results to prove a Lyndon's Identity Theorem \cite{Lyndon1950} for one-relator monoids of the form $\langle A\mid w=1\rangle$. Specifically, we show that our results can be applied to construct equivariant classifying spaces for one-relator monoids of this form, which have finitely many orbits of cells in each dimension, and have dimension at most $2$ unless the monoid has torsion.
We apply this to give a positive answer to Kobayashi's question \cite[Problem~1]{Kobayashi2000} on homological finiteness properties of one-relator monoids, in the case of one-relator monoids of the form $\langle A\mid w=1\rangle$, by proving that all such monoids are of type left- and right-${\rm F}_\infty$ and ${\rm FP}_\infty$. We also show that if $M=\langle A\mid w=1\rangle$ with $w$ not a proper power then the left and right cohomological dimensions of $M$ are bounded above by $2$, and if $w$ is a proper power then they are both equal to $\infty$. The analogous topological result for the left and right geometric dimensions of a one-relator special monoid is also obtained. In fact, it will follow from our results that when $w$ is not a proper power then the Cayley complex of the one-relator monoid $M$ is an equivariant classifying space for $M$ of dimension at most $2$. This is the analogue, for one-relator special monoids, of the fact that the presentation complex of a torsion-free one-relator group is aspherical and is thus a $K(G,1)$ complex for the group of dimension at most $2$; see \cite{Cockcroft1954, DyerVasquez1973}. These results on special monoids, and one-relator monoids, will be given in Section~\ref{sec_special}. The results we obtain in this paper for special one-relator monoids form an important infinite family of base cases for the main result in our article \cite{GraySteinberg3} where we prove a Lyndon's Identity Theorem for arbitrary one-relator monoids $\langle A\mid u=v \rangle$. Applying this result, in \cite{GraySteinberg3} we give a positive answer to Kobayashi's question by showing that every one-relator monoid $\langle A \mid u=v \rangle$ is of type left- and right-${\rm FP}_{\infty}$. In Section~\ref{sec_amalg} below we prove several new results about the preservation of topological and homological finiteness properties for amalgamated free products of monoids. Monoid amalgamated products are far more complicated than group ones. For example, an amalgamated free product of finite monoids can have an undecidable word problem, and the factors do not necessarily embed, or intersect, in the base monoid; see~\cite{Sapir2000}. In particular there are no normal form results at our disposal when working with monoid amalgamated free products. We give a method for constructing an equivariant classifying space for an amalgamated free product of monoids $L = M_1 \ast_W M_2$ from equivariant classifying spaces of the monoids $M_1$, $M_2$ and $W$. To do this, we use homological ideas of Dicks~\cite{Dicks80} on derivations to construct a Bass--Serre tree $T$ for the amalgam $L$. We also develop an analogous theory in the two-sided case. These constructions are used to prove several results about the closure properties of ${\rm F}_n$, ${\rm FP}_n$, and geometric and cohomological dimension. Finally, in Section~\ref{sec_HNNOttoPride} we consider HNN extension constructions for monoids, in the sense of Otto and Pride \cite{Pride2004}, and those defined by Howie \cite{Howie1963}. As in the case of amalgamated free products, we give constructions of equivariant classifying spaces, and apply these to deduce results about the closure properties of topological and homological finiteness properties. This also involves constructing appropriate Bass--Serre trees. As special cases of our results we recover generalisations of a number of results of Otto and Pride from~\cite{Pride2004} and~\cite{Pride2005}.
\section{Preliminaries} \label{sec_prelims} In this section we recall some of the relevant background from~\cite{GraySteinberg1} needed for the rest of the article. For full details, and proofs of the statements made here, we refer the reader to~\cite[Sections~2-4]{GraySteinberg1}. For additional general background on algebraic topology, and topological methods in group theory, we refer the reader to~\cite{May1999} and~\cite{GeogheganBook}. \subsection{The category of $M$-sets} Let $M$ be a monoid. A \emph{left $M$-set} consists of a set $X$ and a mapping $M \times X \rightarrow X$ written $(m,x) \mapsto mx$ called a \emph{left action}, such that $1x=x$ and $m(nx) = (mn)x$ for all $m,n \in M$ and $x \in X$. Right $M$-sets are defined dually; they are the same thing as left $M^{op}$-sets, where $M^{op}$ is the \emph{opposite} of the monoid $M$, that is, the monoid with the same underlying set $M$ and multiplication given by $x \cdot y = yx$. A \emph{bi-$M$-set} is an $M\times M^{op}$-set. A mapping $f\colon X \rightarrow Y$ between $M$-sets is \emph{$M$-equivariant} if $f(mx) = mf(x)$ for all $x \in X$, $m \in M$, and $M$-sets together with $M$-equivariant mappings form a category. If $X$ is an $M$-set and $A\subseteq X$, then $A$ is said to be a \emph{free basis for $X$} if and only if each element of $X$ can be uniquely expressed as $ma$ with $m\in M$ and $a\in A$. The free left $M$-set on $A$ exists and can be realised as the set $M \times A$ with action $m(m',a) = (mm',a)$. Note that if $G$ is a group, then a left $G$-set $X$ is free if and only if $G$ acts freely on $X$, that is, each element of $X$ has trivial stabilizer. In this case, any set of orbit representatives is a basis. An $M$-set $P$ is \emph{projective} if any $M$-equivariant surjective mapping $f\colon X\to P$ has an $M$-equivariant section $s\colon P\to X$ with $f\circ s=1_P$. Every free $M$-set is projective, and an $M$-set is projective if and only if it is a retract of a free one. Each projective $M$-set $P$ is isomorphic to an $M$-set of the form $\coprod_{a\in A} Me_a$ (disjoint union, which is the coproduct in the category of $M$-sets) with $e_a\in E(M)$, where $E(M)$ denotes the set of idempotents of the monoid $M$. In particular, projective $G$-sets are the same thing as free $G$-sets for a group $G$. If $A$ is a right $M$-set and $B$ is a left $M$-set, then $A\otimes_M B$ is the quotient of $A\times B$ by the least equivalence relation $\sim$ such that $(am,b)\sim (a,mb)$ for all $a\in A$, $b\in B$ and $m\in M$. We write $a\otimes b$ for the class of $(a,b)$ and note that the mapping $(a,b)\mapsto a\otimes b$ is universal for mappings $f\colon A\times B\to X$ with $X$ a set and $f(am,b)=f(a,mb)$. If $M$ happens to be a group, then $M$ acts on $A\times B$ via $m(a,b)=(am^{-1},mb)$ and $A\otimes_M B$ is just the set of orbits of this action. The tensor product $A\otimes_M()$ preserves all colimits because it is a left adjoint to the functor $X\mapsto X^A$. If $B$ is a left $M$-set there is a natural preorder relation $\leq$ on $B$ where $x \leq y$ if and only if $Mx \subseteq My$. We write $x \approx y$ if there is a sequence $x=z_1, z_2, \ldots, z_n=y$ of elements of $B$ such that, for each $1 \leq i \leq n-1$, either $z_i \leq z_{i+1}$ or $z_i \geq z_{i+1}$. This is clearly an equivalence relation and we call the $\approx$-classes of $B$ the \emph{weak orbits} of the $M$-set. This corresponds to the notion of the weakly connected components in a directed graph.
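To make the notion of weak orbits concrete, here is a short computational sketch. It is purely illustrative and assumes the $M$-set is finite and the action is given as a function \texttt{act}; the function names are ours and are not taken from any library. Since $mx\leq x$ always holds, merging each point $x$ with each translate $mx$ generates exactly the equivalence $\approx$.

\begin{verbatim}
# Illustrative sketch: weak orbits of a finite left M-set X, where the
# action is given as a Python function act(m, x).
def weak_orbits(M, X, act):
    parent = {x: x for x in X}              # union-find forest on X

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    # mx <= x always holds, so x and act(m, x) lie in one weak orbit.
    for m in M:
        for x in X:
            rx, ry = find(x), find(act(m, x))
            if rx != ry:
                parent[rx] = ry

    orbits = {}
    for x in X:
        orbits.setdefault(find(x), []).append(x)
    return list(orbits.values())
\end{verbatim}

This is nothing more than computing the weakly connected components of the directed graph with an edge from $x$ to $mx$ for each $m\in M$ and $x\in X$.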
If $B$ is a right $M$-set then we use $B/M$ to denote the set of weak orbits of the $M$-set while if $B$ is a left $M$-set we use $M\backslash B$ to denote the set of weak orbits. Note that if $1$ denotes the trivial right $M$-set and $B$ is a left $M$-set, then we have $M\backslash B=1\otimes_M B$. Let $M,N$ be monoids. An \emph{$M$-$N$-biset} is an $M\times N^{op}$-set. If $A$ is an $M$-$N$-biset and $B$ is a left $N$-set, then the equivalence relation defining $A\otimes_N B$ is left $M$-invariant and so $A\otimes_N B$ is a left $M$-set with action $m(a\otimes b) = ma\otimes b$. \subsection{Projective $M$-CW complexes} A \emph{left $M$-space} is a topological space $X$ with a continuous left action $M\times X\to X$ where $M$ has the discrete topology. A right $M$-space is the same thing as an $M^{op}$-space and a \emph{bi-$M$-space} is an $M\times M^{op}$-space. Each $M$-set can be viewed as a discrete $M$-space. Colimits in the category of $M$-spaces are formed by taking colimits in the category of spaces and observing that the result has a natural $M$-action. Our main interest in this article will be in $M$-spaces $X$ where $X$ is a CW complex. Following~\cite{GraySteinberg1} we define a (projective) \emph{$M$-cell} of dimension $n$ to be an $M$-space of the form $Me\times B^n$ where $e\in E(M)$ is an idempotent and $B^n$ has the trivial action. In the special case $e=1$, we call it a \emph{free $M$-cell}. We then define a projective $M$-CW complex in an inductive fashion by imitating the usual definition of a CW complex but by attaching $M$-cells $Me\times B^n$ via $M$-equivariant maps from $Me\times S^{n-1}$ to the $(n-1)$-skeleton. Formally, a \emph{projective (left) relative $M$-CW complex} is a pair $(X,A)$ of $M$-spaces such that $X=\varinjlim X_n$ with $i_n\colon X_n\to X_{n+1}$ inclusions, $X_{-1}=A$, $X_0 = P_0\cup A$ with $P_0$ a projective $M$-set and where $X_n$ is obtained as a pushout of $M$-spaces \begin{equation}\label{eq:pushout} \begin{tikzcd}P_n\times S^{n-1}\ar{r}\ar[d,hook] & X_{n-1}\ar[d,hook]\\ P_n\times B^n\ar{r} & X_n \end{tikzcd} \end{equation} with $P_n$ a projective $M$-set and $B^n$ having a trivial $M$-action for $n\geq 1$. The space $X_n$ is the \emph{$n$-skeleton} of $X$ and if $X_n=X$ and $P_n\neq \emptyset$, then $X$ is said to have \emph{dimension} $n$. Since $P_n$ is isomorphic to a coproduct of $M$-sets of the form $Me$ with $e\in E(M)$, we are indeed attaching $M$-cells at each step. If $A=\emptyset$, we call $X$ a \emph{projective $M$-CW complex}. Note that a projective $M$-CW complex is a CW complex and the $M$-action is cellular (in fact, takes $n$-cells to $n$-cells). We can define projective right $M$-CW complexes and projective bi-$M$-CW complexes by replacing $M$ with $M^{op}$ and $M\times M^{op}$, respectively. We say that $X$ is a \emph{free $M$-CW complex} if each $P_n$ is a free $M$-set. A projective $M$-CW complex $X$ is of \emph{$M$-finite type} if $P_n$ is a finitely generated projective $M$-set for each $n$, and we say that $X$ is \emph{$M$-finite} if it is finite dimensional and of $M$-finite type (i.e., $X$ is constructed from finitely many $M$-cells). The degree $n$ component of the cellular chain complex for the projective $M$-CW complex $X$ is isomorphic to $\mathbb ZP_n$ as a $\mathbb ZM$-module, and hence is projective.
A \emph{projective $M$-CW subcomplex} of $X$ is an $M$-invariant subcomplex $A\subseteq X$ which is a union of $M$-cells of $X$. If $X$ is a projective $M$-CW complex then so is $Y=X\times I$ where $I$ is given the trivial action. If we retain the above notation, then $Y_0=X_0\times \partial I\cong X_0\coprod X_0$. The $n$-cells for $n\geq 1$ are obtained from attaching $P_n\times B^n\times \partial I\cong (P_n\coprod P_n)\times B^n$ and $P_{n-1}\times B^{n-1}\times I$. Notice that $X\times \partial I$ is a projective $M$-CW subcomplex of $X\times I$. An \emph{$M$-homotopy} between $M$-equivariant continuous maps $f,g\colon X\to Y$ between $M$-spaces $X$ and $Y$ is an $M$-equivariant mapping $H\colon X\times I\to Y$ with $H(x,0)=f(x)$ and $H(x,1)=g(x)$ for $x\in X$ where $I$ is viewed as having the trivial $M$-action. We write $f\simeq_M g$ in this case. We say that $X,Y$ are \emph{$M$-homotopy equivalent}, written $X\simeq_M Y$, if there are $M$-equivariant continuous mappings (called \emph{$M$-homotopy equivalences}) $f\colon X\to Y$ and $g\colon Y\to X$ such that $gf\simeq_M 1_X$ and $fg\simeq_M 1_Y$. Every $M$-equivariant continuous mapping of projective $M$-CW complexes is $M$-homotopic to a cellular one. This is the \emph{cellular approximation theorem}; see \cite[Theorem 2.8]{GraySteinberg1}. If $X$ is a left $M$-space and $A$ is a right $M$-set, then $A\otimes_M X$ is a topological space with the quotient topology. The following base change result will be used frequently below. \begin{Prop}\label{c:base.change.cw} \emph{\cite[Proposition~3.1 and Corollary~3.2]{GraySteinberg1}} If $A$ is an $M$-$N$-biset that is projective (free) as an $M$-set and $X$ is a projective (free) $N$-CW complex, then $A\otimes_N X$ is a projective (free) $M$-CW complex. If $A$ is in addition finitely generated as an $M$-set and $X$ is of $N$-finite type, then $A\otimes_N X$ is of $M$-finite type. Moreover, $\dim A\otimes_N X=\dim X$. \end{Prop} \begin{Rmk}\label{r:tensor.with.free} We shall use the observation that if $X$ is a free right $M$-set on $A$, then $A$ is in bijection with $X/M$ and hence $X\cong X/M\times M$ as a right $M$-set where $M$ acts trivially on $X/M$. Hence if $Y$ is a projective $M$-CW complex, then $X\otimes_M Y\cong \coprod_A Y\cong X/M\times Y$ where $X/M$ has the discrete topology. Moreover, these homeomorphisms come from isomorphisms of the CW structure. \end{Rmk} \subsection{Equivariant classifying spaces and topological finiteness properties for monoids} A \emph{(left) equivariant classifying space} $X$ for a monoid $M$ is a projective $M$-CW complex which is contractible. A right equivariant classifying space for $M$ will be a left equivariant classifying space for $M^{op}$. In some cases an equivariant classifying space for a monoid may be constructed using the Cayley digraph of the monoid as the $1$-skeleton. Recall that if $M$ is a monoid and $A\subseteq M$, then the (right) \emph{Cayley digraph} $\Gamma(M,A)$ of $M$ with respect to $A$ is the graph with vertex set $M$ and with edges in bijection with $M\times A$ where the directed edge (arc) corresponding to $(m,a)$ starts at $m$ and ends at $ma$. Note that $\Gamma(M,A)$ is a free $M$-graph and is $M$-finite if and only if $A$ is finite (see Section~\ref{sec_amalg} below for the definition of $M$-graph).
Equivariant classifying spaces of monoids always exist and are unique up to $M$-homotopy equivalence; see~\cite[Theorem~6.3 \& Corollary~6.5]{GraySteinberg1}. The definition of equivariant classifying spaces for monoids leads naturally to the definitions of the following topological finiteness properties. A monoid $M$ is of type \emph{left-${\rm F}_n$} (for a non-negative integer $n$) if there is an equivariant classifying space $X$ for $M$ such that $X_n$ is $M$-finite, i.e., such that $M\backslash X$ has finite $n$-skeleton. We say that $M$ is of type \emph{left-${\rm F}_{\infty}$} if $M$ has an equivariant classifying space $X$ that is of $M$-finite type, i.e., $M\backslash X$ is of finite type. The monoid $M$ is defined to be of type \emph{right-${\rm F}_n$} if $M^{op}$ is of type left-${\rm F}_n$ for $0\leq n\leq \infty$. The \emph{left geometric dimension} of $M$ is defined to be the minimum dimension of a left equivariant classifying space for $M$. The right geometric dimension is defined dually. The homological analogue of left-${\rm F}\sb n$ is the finiteness property left-${\rm FP}\sb n$, where a monoid $M$ is said to be of type left-${\rm FP}\sb n$ if there is a projective resolution $P = (P_i)_{i \geq 0}$ of the trivial left ${\mathbb{Z}M}$-module $\mathbb{Z}$ such that $P_i$ is finitely generated for $i \leq n$. There is a dual notion of right-${\rm FP}\sb n$, and we say a monoid is of type ${\rm FP}\sb n$ if it is both of type left- and right-${\rm FP}\sb n$. For any monoid $M$, if $M$ is of type left-${\rm F}_n$ for some $0 \leq n \leq \infty$ then it is of type left-${\rm FP}_n$.
Indeed, if $X$ is an equivariant classifying space for $M$ then the augmented cellular chain complex of $X$ gives a projective ${\mathbb{Z}M}$-resolution of the trivial ${\mathbb{Z}M}$-module $\mathbb{Z}$ with the desired finiteness properties. If $M$ is a monoid of type left-${\rm F}_2$, then $M$ is of type left-${\rm F}_n$ if and only if $M$ is of type left-${\rm FP}_n$ for $0\leq n\leq \infty$. In particular, for finitely presented monoids the conditions left-${\rm F}_n$ and left-${\rm FP}_n$ are equivalent. In the special case that the monoid $M$ is a group, the definition of left-${\rm F}_n$ above is easily seen to agree with the usual definition of ${\rm F}_n$ for groups. The left geometric dimension is clearly an upper bound on the left cohomological dimension, denoted $\mathop{\mathrm{left \; cd}} M$, of a monoid $M$ where the left \emph{cohomological dimension} of $M$ is the shortest length of a projective resolution of the trivial left ${\mathbb{Z}M}$-module $\mathbb Z$. To define the bilateral notion of a classifying space, first recall that $M$ is an $M\times M^{op}$-set via the action $(m_L,m_R)m=m_Lmm_R$. We say that a projective $M\times M^{op}$-CW complex $X$ is a \emph{bi-equivariant classifying space for $M$} if $\pi_0(X)\cong M$ as an $M\times M^{op}$-set and each component of $X$ is contractible; equivalently, $X$ has an $M\times M^{op}$-equivariant homotopy equivalence to the discrete $M\times M^{op}$-set $M$. We can augment the cellular chain complex of $X$ via the canonical surjection $\varepsilon\colon C_0(X)\to H_0(X)\cong \mathbb Z\pi_0(X)\cong \mathbb ZM$. Since each component of $X$ is contractible, this gives a projective bimodule resolution of $\mathbb ZM$. A bi-equivariant classifying space may be constructed for any monoid~\cite[Corollary~7.4]{GraySteinberg1}. As in the one-sided case, bi-equivariant classifying spaces are unique up to $M\times M^{op}$-homotopy equivalence; see~\cite[Theorem~7.2]{GraySteinberg1}. A monoid $M$ is said to be of type \emph{bi-${\rm F}_n$} if there is a bi-equivariant classifying space $X$ for $M$ such that $X_n$ is $M\times M^{op}$-finite, i.e., $M\backslash X/M$ has finite $n$-skeleton. We say that $M$ is of type \emph{bi-${\rm F}_{\infty}$} if $M$ has a bi-equivariant classifying space $X$ that is of $M\times M^{op}$-finite type, i.e., $M\backslash X/M$ is of finite type. We define the \emph{geometric dimension} of $M$ to be the minimum dimension of a bi-equivariant classifying space for $M$. The homological analogue of bi-${\rm F}_n$ is the property \emph{bi-${\rm FP}\sb n$} (in the sense of~\cite{KobayashiOtto2001}), where a monoid is said to be of type \emph{bi-${\rm FP}\sb n$} if there is a projective resolution \[ \cdots \rightarrow P_1 \rightarrow P_0 \rightarrow \mathbb ZM \rightarrow 0 \] of the $(\mathbb ZM,\mathbb ZM)$-bimodule $\mathbb ZM$, where $P_0, P_1, \ldots, P_n$ are finitely generated projective $(\mathbb{Z}M, \mathbb{Z}M)$-bimodules. For $0\leq n\leq \infty$, if $M$ is of type bi-${\rm F}_n$, then it is of type bi-${\rm FP}_n$. If $M$ is of type bi-${\rm F}_n$ for $0\leq n\leq\infty$, then $M$ is of type left-${\rm F}_n$ and type right-${\rm F}_n$. If $M$ is a monoid of type bi-${\rm F}_2$, then $M$ is of type bi-${\rm F}_n$ if and only if $M$ is of type bi-${\rm FP}_n$ for $0\leq n\leq \infty$; see~\cite[Theorem 7.15]{GraySteinberg1}. In particular, for finitely presented monoids bi-${\rm F}_n$ and bi-${\rm FP}_n$ are equivalent. 
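Although it will not be needed below, it may help to keep in mind the classical example of a projective bimodule resolution of $\mathbb ZM$, namely the unnormalised bar resolution from Hochschild cohomology theory, cf.~\cite{HochschildCoh}. Taking $B_n=\mathbb Z[M^{n+2}]$, which is a free $(\mathbb ZM,\mathbb ZM)$-bimodule, the boundary maps
\[ d_n(m_0\otimes m_1\otimes \cdots\otimes m_{n+1})=\sum_{i=0}^{n}(-1)^i\, m_0\otimes\cdots\otimes m_im_{i+1}\otimes\cdots\otimes m_{n+1}, \]
together with the multiplication map $B_0=\mathbb Z[M^2]\to \mathbb ZM$, give a free resolution of the bimodule $\mathbb ZM$. For an infinite monoid this resolution fails to be finitely generated in every positive degree, and the point of the property bi-${\rm FP}_n$ is precisely to replace it by a projective bimodule resolution that is finitely generated through degree $n$.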
The \emph{Hochschild cohomological dimension} of $M$, written $\dim M$, is the length of a shortest projective resolution of $\mathbb ZM$ as a $\mathbb Z[M\times M^{op}]$-module. The Hochschild cohomological dimension bounds both the left and right cohomological dimensions, and the geometric dimension bounds the Hochschild cohomological dimension. The geometric dimension also bounds both the left and right geometric dimensions because if $X$ is a bi-equivariant classifying space for $M$ of dimension $n$, then $X/M$ is an equivariant classifying space of dimension $n$. \subsection{A theorem of Brown} We end this section by recalling a result of Brown which will be useful for proofs of results about homological finiteness properties of monoids. Unless otherwise stated, all modules considered here are left modules. Let us say that a module $V$ over a (unital) ring $R$ is of type ${\rm FP}_n$ if it has a projective resolution that is finitely generated through degree $n$; this is equivalent to having a free resolution that is finitely generated through degree $n$; see~\cite[Proposition~4.3]{BrownCohomologyBook}. We say that $V$ is of type ${\rm FP}_{\infty}$ if it has a projective (equivalently, free) resolution that is finitely generated in all degrees. So a monoid is of type left ${\rm FP}_n$ if and only if the trivial left module is of type ${\rm FP}_n$. One says that $V$ has \emph{projective dimension} at most $d$ if it has a projective resolution of length $d$. Note that the left cohomological dimension of a monoid is the projective dimension of the trivial left module. Notice also that both the class of modules of type ${\rm FP}_n$ and the class of modules having projective dimension at most $d$ are closed under direct sum. The following is a lemma of K.~Brown~\cite{Brown1982}. Recall that a morphism of chain complexes is a \emph{weak equivalence} if it induces an isomorphism on homology. \begin{Lemma}\emph{ \cite[Lemma~1.5]{Brown1982} }\label{l:brown} Let $R$ be a ring and $C=(C_i)$ a chain complex of (left) $R$-modules and, for each $i$, let $(P_{ij})_{j\geq 0}$ be a projective resolution of $C_i$. Then one can find a chain complex $Q=(Q_n)$ with $Q_n = \bigoplus_{i+j=n} P_{ij}$ such that there is a weak equivalence $f\colon Q\to C$. \end{Lemma} \begin{Cor}\label{c:fp.resolved} Suppose that $R$ is a ring and \[C_n\longrightarrow C_{n-1}\longrightarrow\cdots\longrightarrow C_0\longrightarrow V\] is a partial resolution of an $R$-module $V$. \begin{enumerate} \item If $C_i$ is of type ${\rm FP}_{n-i}$, for $0\leq i\leq n$, then $V$ is of type ${\rm FP}_n$. \item Let $d\geq n$ and suppose that $C_n\to C_{n-1}$ is injective. If $C_i$ has projective dimension at most $d-i$, for $0\leq i\leq n$, then $V$ has projective dimension at most $d$. \end{enumerate} \end{Cor} \begin{proof} To prove the first item, put $C=(C_i)$ and let $(P_{ij})_{j\geq 0}$ be a projective resolution of $C_i$ that is finitely generated through degree $n-i$. Then the chain complex $Q$ from Lemma~\ref{l:brown} is a complex of projectives with $Q_k$ finitely generated, for $0\leq k\leq n$, with $H_0(Q)\cong H_0(C)=V$ and $H_q(Q)\cong H_q(C)=0$ for $0<q<n$. Thus if we augment \[Q_n\longrightarrow Q_{n-1}\longrightarrow \cdots \longrightarrow Q_0\] by the natural epimorphism $Q_0\to H_0(Q)\cong V$, we obtain a partial projective resolution of $V$ of length $n$ by finitely generated projectives.
For the second item, again let $C=(C_i)$ and let $(P_{ij})_{j\geq 0}$ be a projective resolution of $C_i$ of length at most $d-i$. Then the chain complex $Q$ from Lemma~\ref{l:brown} is a complex of projectives of length at most $d$ with $H_0(Q)\cong H_0(C)\cong V$ and $H_q(Q)=H_q(C)=0$ for $q>0$. Thus if we augment $Q$ by the canonical epimorphism $Q_0\to H_0(Q)\cong V$, we obtain a projective resolution of $V$ of length at most $d$. \end{proof} Next we show that projective dimension and ${\rm FP}_n$ are stable under flat base extension. \begin{Lemma}\label{l:flat.base} Suppose that $\varphi\colon R\to S$ is a ring homomorphism and that $S$ is flat as a right $R$-module. Let $V$ be a left $R$-module. \begin{enumerate} \item If $V$ is of type ${\rm FP}_n$, then $S\otimes_R V$ is of type ${\rm FP}_n$ as an $S$-module. \item If $V$ has projective dimension at most $d$, then $S\otimes_R V$ has projective dimension at most $d$ over $S$. \end{enumerate} \end{Lemma} \begin{proof} Since $S\otimes_R R\cong S$ and tensor products preserve direct sums and retracts, it follows that if $P$ is a (finitely generated) projective $R$-module, then $S\otimes_R P$ is a (finitely generated) projective $S$-module. If $(P_i)$ is a projective resolution of $V$, then by flatness of $S$ and the preceding observation, we obtain that $(S\otimes_R P_i)$ is a projective resolution of $S\otimes_R V$ with $S\otimes_R P_i$ finitely generated whenever $P_i$ is. The result follows. \end{proof} A typical way to apply Corollary~\ref{c:fp.resolved} in order to prove that a monoid $M$ is of type ${\rm FP}\sb n$ is to find an action of $M$ by cellular mappings on a contractible CW complex $X$ such that the $i^{th}$ cellular chain group $C_i(X)$ is of type ${\rm FP}_{n-i}$ as a $\mathbb ZM$-module for $0\leq i\leq n$. \section{Special monoids and one-relator monoids} \label{sec_special} Let $M$ be the monoid defined by the finite presentation $\langle A\mid w_1=1,\ldots,w_k=1\rangle$. Presentations of this form are called \emph{special}, and monoids which admit such presentations are called \emph{special monoids}. Special presentations were first studied by Adjan~\cite{Adjan1966} and Makanin~\cite{Makanin66}. The main aim of this section is to prove some results which relate the topological and homological finiteness properties of special monoids to the corresponding properties holding in their group of units. By specialising to the case of one-relator monoids and combining with results of Adjan~\cite{Adjan1966} and Lyndon~\cite{Lyndon1950} we then obtain a result characterising homological and cohomological finiteness properties of special one-relator monoids. These results answer an important case of the open problem of Kobayashi~\cite{Kobayashi2000} which asks whether all one-relator monoids are of type right- and left-${\rm FP}\sb \infty$. As discussed in the introduction to this paper, additional motivation for this question comes from its connection to the question of whether one-relator monoids admit presentations by finite complete rewriting systems which, in turn, relates to the longstanding open problem of whether such monoids have decidable word problem. For rewriting systems we follow~\cite[Chapter~12]{HoltBook}. We recall some basic definitions and notation here. Let $A$ be a non-empty set, known as an alphabet, and let $A^*$ denote the free monoid of all words over $A$.
If $w = a_1 a_2 \ldots a_n \in A^*$, with $a_i \in A$ for $1 \leq i \leq n$, then we write $|w| = n$ and call this the \emph{length} of the word $w$. A \emph{rewriting system} $\mathfrak{R}$ over $A$ is a subset of $A^* \times A^*$. The pair $\langle A \mid \mathfrak{R} \rangle$ is called a \emph{monoid presentation}. The elements of $\mathfrak{R}$ are called \emph{rewrite rules}. For words $u,v \in A^*$ we write $u \rightarrow_\mathfrak{R} v$ if there are words $\alpha, \beta \in A^*$ and a rewrite rule $(l,r)$ in $\mathfrak{R}$ such that $u = \alpha l \beta$ and $v = \alpha r \beta$. We use $\rightarrow_\mathfrak{R}^*$ to denote the reflexive transitive closure of $\rightarrow_\mathfrak{R}$, while $\leftrightarrow_\mathfrak{R}^*$ denotes the reflexive, symmetric and transitive closure of $\rightarrow_\mathfrak{R}$, that is, the smallest equivalence relation on $A^*$ containing $\rightarrow_\mathfrak{R}$. The relation $\leftrightarrow_\mathfrak{R}^*$ defines a congruence on $A^*$ and the quotient $A^* / \leftrightarrow_\mathfrak{R}^*$ is called the \emph{monoid defined by the presentation} $\langle A \mid \mathfrak{R} \rangle$. For any word $w \in A^*$ we use $[w]_\mathfrak{R}$ to denote the $\leftrightarrow_\mathfrak{R}^*$-class of the word $w$. So for words $u, v \in A^*$ when we write $u=v$ it means that $u$ and $v$ are equal as words in $A^*$, while $[u]_\mathfrak{R}=[v]_\mathfrak{R}$ means that $u$ and $v$ represent the same element of the monoid defined by the presentation. We also sometimes write $u=_{\mathfrak{R}} v$ to mean that $[u]_\mathfrak{R}=[v]_\mathfrak{R}$. When the set of rewrite rules with which we are working is clear from context, we shall often omit the subscript $\mathfrak{R}$ and simply write $[u]$, $\rightarrow$, $\rightarrow^*$ and $\leftrightarrow^*$. A word $u$ is called \emph{irreducible} if no rewrite rule can be applied to it, that is, there is no word $v$ such that $u \rightarrow v$. We use $\mathop{\mathrm{Irr}}\nolimits(\mathfrak{R})$ to denote the set of irreducible words of the system $\mathfrak{R}$. The rewriting system $\mathfrak{R}$ is \emph{Noetherian} if there is no infinite chain of words $u_i \in A^*$ with $u_i \rightarrow u_{i+1}$ for all $i \geq 1$. The system is \emph{confluent} if whenever $u \rightarrow^* u_1$ and $u \rightarrow^* u_2$ there is a word $v \in A^*$ such that $u_1 \rightarrow^* v$ and $u_2 \rightarrow^* v$. A rewriting system that is both Noetherian and confluent is called \emph{complete}. If $\mathfrak{R}$ is a complete rewriting system then each $\leftrightarrow^*$ equivalence class contains a unique irreducible word. Thus in this situation, $\mathop{\mathrm{Irr}}\nolimits(\mathfrak{R})$ provides a set of normal forms for the elements of the monoid defined by the presentation $\langle A \mid \mathfrak{R} \rangle$. Let $M=\langle A\mid w_1=1,\ldots,w_k=1\rangle = \langle A \mid T \rangle$ be the finitely presented special monoid defined above. The symbol $M$ will be used to denote this monoid for the remainder of this section. We call $w_1, w_2, \ldots, w_k$ the \emph{defining relators} of this presentation. Let $\Gamma(M,A)$ denote the right Cayley graph of $M$ with respect to $A$. The strongly connected components of $\Gamma(M,A)$ are called the \emph{Sch\"{u}tzenberger graphs} of $M$. Here we say that two vertices $u$ and $v$ of a directed graph belong to the same strongly connected component if and only if there is a directed path from $u$ to $v$, and also a directed path from $v$ to $u$.
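The rewriting notions introduced above ($\rightarrow$, $\rightarrow^*$ and normal forms) are easy to experiment with computationally. The following Python sketch is purely illustrative (the function names are ours): \texttt{rewrite\_once} performs a single rewriting step $u \rightarrow v$, and \texttt{normal\_form} rewrites until an irreducible word is reached. Termination is guaranteed only for Noetherian systems, and for complete systems the output is the unique irreducible word in the class of the input.

\begin{verbatim}
# Illustrative sketch: string rewriting for a system R, given as a
# list of pairs (l, r) of words over a common alphabet.
def rewrite_once(u, R):
    for (l, r) in R:
        i = u.find(l)
        if i != -1:
            return u[:i] + r + u[i + len(l):]   # u = alpha l beta
    return None                                 # u is irreducible

def normal_form(u, R):
    # Terminates only if R is Noetherian; if R is complete, the result
    # is the unique irreducible word equivalent to u.
    v = rewrite_once(u, R)
    while v is not None:
        u, v = v, rewrite_once(v, R)
    return u

# Example: the special presentation < a, b | ab = 1 > of the bicyclic
# monoid.  The single rule ab -> 1 forms a complete system, and the
# irreducible words are exactly those of the form b^i a^j.
R = [("ab", "")]
assert normal_form("babab", R) == "b"
\end{verbatim}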
Our aim is to prove that any two Sch\"{u}tzenberger graphs of $M$ are isomorphic to each other and that, modulo the Sch\"{u}tzenberger graphs, the Cayley graph of $M$ has a tree-like structure. We begin by summarising some results of Zhang~\cite{Zhang} on special monoids that will be used extensively below. Let $G$ be the group of units of $M$. By~\cite[Theorem~3.7]{Zhang}, we have that $G$ has a group presentation with $k$ defining relations. Let $R$ be the submonoid of right invertible elements. Then $R$ is isomorphic to a free product of $G$ with a finitely generated free monoid by~\cite[Theorem~4.4]{Zhang}. In more detail, we say that a word $u \in A^*$ is invertible if $[u] \in M$ is invertible. Let $u \in A^+$ be a non-empty invertible word. We say that the invertible word $u$ is \emph{indecomposable} if no non-empty proper prefix of $u$ is invertible. Every non-empty invertible word $v$ has a unique decomposition $v = v_1 v_2 \ldots v_l$ where each $v_i$ is indecomposable. To obtain this decomposition, first write $v = v_1 u_1$ where $v_1$ is the shortest non-empty invertible prefix of $v$. Since $v$ and $v_1$ are invertible it follows that $u_1$ is invertible. If $u_1$ is non-empty we repeat this process, writing $u_1 = v_2 u_2$ where $v_2$ is the shortest non-empty invertible prefix of $u_1$. Continuing in this way gives the decomposition $v = v_1 v_2 \ldots v_l$. It is unique since if $v_1' v_2' \ldots v_k'$ were some other such decomposition then $v_1 v_2 \ldots v_l = v_1' v_2' \ldots v_k'$; neither $v_1$ nor $v_1'$ can be a proper prefix of the other, hence $v_1 = v_1'$, and then inductively we see that $v_i = v_i'$ for all $i$. We call $u \in A^+$ a \emph{minimal invertible word} if it is indecomposable and invertible and the length of $u$ does not exceed the length of the longest relator in $T$. Each relation word $w_i$ in $T$ represents the identity of $M$ and thus is invertible. Therefore each relation word $w_i$ has a unique decomposition $ w_i = w_{i,1} w_{i,2} \ldots w_{i, n_i} $ into indecomposable invertible words. The words $w_{i,j}$ for $1 \leq i \leq k$, $1 \leq j \leq n_i$ are called the \emph{minimal factors} of the relators of the presentation. Each minimal factor is clearly a minimal invertible word. Let $\Delta$ be the set of all minimal invertible words $\delta \in A^*$ such that $\delta$ is equal in $M$ to at least one of the minimal factors $w_{i,j}$ of the relators. Clearly $\Delta$ is a finite set of words over $A$. It is also immediate from the definition that $\Delta$ contains in particular all of the minimal factors $w_{i,j}$ of the relators. It is also a consequence of the definitions that no non-empty proper prefix of a word from $\Delta$ can be equal to a non-empty proper suffix of a word from $\Delta$. On the other hand, a word from $\Delta$ can, in general, arise as a subword of a word from $\Delta$ (and there are examples where this happens). It also follows from the definitions that $\Delta$ is a prefix code, meaning that no word from $\Delta$ is a prefix of any other word from $\Delta$. It follows that $\Delta$ freely generates a free submonoid of $A^*$. The elements represented by the words from $\Delta$ give a finite generating set for the group of units $G$ of the monoid $M$. Indeed, it may be shown that every indecomposable invertible word $v$ is equal in $M$ to some word from $\Delta$; see~\cite[Lemma~3.4]{Zhang}, and every invertible word can be written as a product of indecomposable invertible words.
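The greedy procedure used above to decompose an invertible word is easily expressed in code. The following Python sketch is illustrative only: the oracle \texttt{is\_invertible}, deciding whether a word represents a unit of $M$, is an assumed black box and is not supplied here.

\begin{verbatim}
# Illustrative sketch: the unique factorisation v = v_1 v_2 ... v_l of
# an invertible word into indecomposable invertible words, obtained by
# repeatedly splitting off the shortest non-empty invertible prefix.
# The oracle is_invertible is an assumption, not part of any library.
def indecomposable_factors(v, is_invertible):
    factors = []
    while v:
        for i in range(1, len(v) + 1):
            if is_invertible(v[:i]):
                factors.append(v[:i])     # shortest invertible prefix
                v = v[i:]                 # the remainder is invertible
                break
        else:
            raise ValueError("the input word is not invertible")
    return factors
\end{verbatim}

If the input word is invertible then every remainder is again invertible, so an invertible prefix is always found and the loop terminates.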
A finite presentation for the group of units $G$ of $M$, with respect to the finite generating set $\Delta$, may be constructed in the following way. We partition the finite set of words $\Delta$ as the disjoint union $ \Delta = \Delta_1 \cup \Delta_2 \cup \ldots \cup \Delta_m $ of non-empty sets where two words belong to the same set $\Delta_j$ if and only if they represent the same element of the monoid $M$. Note that two distinct factors $w_{i,j}$ could well represent the same element of $M$ even if they are not equal as words. Set $B = \{b_1, b_2, \ldots, b_m\}$ and define a map $\phi$ from $\Delta$ to $B$ which maps every word from the set $\Delta_j$ to the letter $b_j$. Extend this to a surjective homomorphism $\phi\colon \Delta^* \rightarrow B^*$. Note that for any word $v \in A^*$, if $v \in \Delta^*$ then, as observed above, $v$ has a unique decomposition $v = v_1 v_2 \ldots v_l$ where each $v_i \in \Delta$, and thus the mapping $\phi$ is well-defined on the subset $\Delta^*$ of $A^*$. Let $T_0$ be the rewriting system over the alphabet $B$ given by applying $\phi$ to each of the relators from the presentation $\langle A \mid T \rangle$ (recall that each $w_j\in \Delta^*$) to obtain \[ T_0 = \{ (s,1): \mbox{$s$ is some cyclic permutation of some $\phi(w_j)$}\}. \] This means that for each relator $w_j$ from $T$, we decompose $w_j$ into its minimal factors, then read the factors recording the sets $\Delta_i$ to which each of them belongs, and then write down the corresponding word over $B$, and all of its cyclic conjugates. \begin{Thm} \emph{~\cite[Theorem~3.7]{Zhang} } Let $M$ be the monoid defined by a finite special presentation $\langle A \mid T \rangle$. Then $\langle B \mid T_0 \rangle$ is a finite monoid presentation for the group of units $G$ of $M$. \end{Thm} It follows that $\langle B\mid \phi(w_1)=1,\ldots, \phi(w_k)=1\rangle$ is a group presentation for the group of units of $M$ with the same number of defining relations as the presentation of $M$. Choose and fix some order on the finite alphabet $A$, and for words $x,y \in A^*$ write $x < y$ if $x$ precedes $y$ in the resulting shortlex ordering~\cite[Definition~2.60]{HoltBook}. Now define a rewriting system $S = S(T)$ over $A$ as follows: \[ S = \{ (u,v) \mid u, v \in \Delta^*\colon \phi(u) =_{T_0} \phi(v) \ \& \ u > v \}. \] In fact, it follows from the results of Zhang that the condition $\phi(u) =_{T_0} \phi(v)$ is equivalent to saying that $u =_T v$, that is, that $u$ and $v$ represent the same element of the group of units of the monoid $M$. So the condition $\phi(u) =_{T_0} \phi(v)$ could be replaced by the condition $u =_T v$ in the definition of $S$. \begin{Thm} \emph{ \cite[Proposition~3.2]{Zhang} } The infinite rewriting system $S$ is Noetherian and confluent, and the presentation $\langle A \mid S \rangle$ defines the monoid $M$. In fact, the rewriting systems $T$ and $S=S(T)$ are equivalent, that is, $\leftrightarrow_S^* = \leftrightarrow_T^*$. \end{Thm} We shall prove statements about $M$ by working with the irreducible words $\mathop{\mathrm{Irr}}\nolimits(S)$ associated with this infinite complete rewriting system. For the rest of this section, when we say a word over the alphabet $A$ is irreducible, we mean that it is irreducible with respect to the rewriting system $S$. The submonoid of right units $R$ is generated by the prefixes of the words from $\Delta$. Indeed, let $I$ be the set of non-empty prefixes of words from $\Delta$, that is, \[ I = \{ x \in A^+ \mid xy \in \Delta \ \mbox{for some} \ y \in A^* \}.
\] Clearly all words in the set $I$ represent right invertible elements of $M$. Conversely we have the following result. \begin{Lemma}\emph{\cite[Lemma~3.3]{Zhang}} Let $u \in A^*$ be irreducible modulo $S = S(T)$. If $[u]_T$ is right invertible, then $u \in I^*$. \end{Lemma} It follows from this lemma that $I$ constitutes a finite generating set for the submonoid $R$ of right units of the monoid $M$ (that is, the submonoid of all right invertible elements). Furthermore, Zhang proves that the subset \[ I_0 = \mathop{\mathrm{Irr}}\nolimits(S) \cap (I \setminus I^2) \] of $I$ is also a finite generating set for $R$. It may also be shown that for each $1 \leq i \leq m$ the set $I_0 \cap \Delta_i$ contains exactly one word. Zhang proves that for each $i$ the set $\Delta_i$ is closed under the rewriting relation $\rightarrow_S$, and it may be shown that the unique irreducible word in each set $\Delta_i$ belongs to $I_0$; see~\cite[Lemma~4.2]{Zhang}. Let $C$ be an alphabet in bijective correspondence with the set $I_0$. Write $C = B \cup Z$, where $B$ is the alphabet defined above, which is in natural bijective correspondence with the words in the sets $I_0 \cap \Delta_i$ for $1 \leq i \leq m$, and $Z$ corresponds to all the remaining words from the set $I_0$. \begin{Thm}\emph{ \cite[Theorem~4.4]{Zhang} } With the above notation, the submonoid of right units $R$ of $M$ is defined by the finite presentation $\langle B \cup Z \mid T_0 \rangle$. In particular, $R$ is a free product of the group of units $G$ and a finitely generated free monoid. \end{Thm} Monoid free products will be formally defined in Section~\ref{sec_amalg} below. Our next goal is to show that the Cayley graph of a special monoid has a tree-like structure. The action of the monoid on the corresponding tree will be used to construct a free resolution of the trivial module. Let $\mathcal T$ be the set of irreducible words in $A^*$ with no suffix in $I$. \begin{Lemma}\label{l:preserve.irred} Let $w\in\mathcal T$ and let $u\in A^*$ be irreducible. Then $wu$ is irreducible. \end{Lemma} \begin{proof} If $wu$ is not irreducible, then since both $w$ and $u$ are irreducible it follows that $w=xy$ and $u=zu'$ with $yz$ a left hand side of a rewrite rule and $y,z$ both non-empty. But every left hand side of a rewrite rule is in $\Delta^*$ and so $y$ has a non-empty suffix $v$ that is a prefix of an element of $\Delta$. But then $v\in I$, contradicting that $w\in \mathcal T$. \end{proof} We recall the definition of the pre-order $\leq_{\mathrel{\mathscr{R}}}$ on the monoid $M$. For all $m,n \in M$ we write $m \leq_{\mathrel{\mathscr{R}}} n$ if and only if $mM \subseteq nM$, and write $m \mathrel{\mathscr{R}} n$ if $m \leq_{\mathrel{\mathscr{R}}} n$ and $n \leq_{\mathrel{\mathscr{R}}} m$. Obviously $\mathrel{\mathscr{R}}$ is an equivalence relation on $M$, usually called Green's $\mathscr{R}$-relation, and $M / \mathrel{\mathscr{R}}$ is a poset with the order induced by $\leq_{\mathrel{\mathscr{R}}}$. In terms of the right Cayley graph $\Gamma(M,A)$ of $M$ we have $m \leq_{\mathrel{\mathscr{R}}} n$ if and only if there is a directed path from $n$ to $m$, while the $\mathrel{\mathscr{R}}$-classes are the vertex sets of the Sch\"{u}tzenberger graphs of the monoid. Let $\mathcal{L}$ be a subset of $A^*$ containing the empty word. For any two words $\alpha, \beta \in \mathcal{L}$ write $\alpha \preceq \beta$ if and only if $\beta$ is a prefix of $\alpha$. This defines a poset which we denote by $P_{\mathcal{L}}$.
The poset $P_{\mathcal{L}}$ is the reversal of the prefix order on the set of words $\mathcal{L}$. This poset is countable since $A$ is finite. The empty word is the unique maximal element of the poset. This poset is \emph{locally-finite} in the sense that every interval $[x,y]$ in this poset contains finitely many elements. In fact the principal filter of every element in this poset is finite since a word admits only finitely many prefixes. Recall that if $s$ and $t$ are elements of a poset $P$ then we say $s$ \emph{covers} $t$ if $t < s$ and $[t,s] = \{t,s\}$. A locally finite poset is completely determined by its cover relations. The \emph{Hasse diagram} of a poset $P$ is a graph whose edges are the cover relations. Hasse diagrams are drawn in such a way that if $s < t$ then $t$ is drawn with a higher vertical coordinate than $s$.
\begin{Prop}\label{prop:tree} Let $\mathcal L\subseteq A^*$ contain the empty word. Then the Hasse diagram of $P_{\mathcal{L}}$ is a rooted tree (with root the empty word). \end{Prop}
\begin{proof} For $n\geq 0$, let $\mathcal L_n$ consist of those words from $\mathcal L$ of length at most $n$. Let $\Lambda$ (respectively, $\Lambda_n$) be the Hasse diagram of $P_{\mathcal{L}}$ (respectively, $P_{\mathcal{L}_n}$). Then $\Lambda=\varinjlim \Lambda_n$ and hence, since a direct limit of trees is a tree, it suffices to handle the case that $\mathcal L$ is finite. We proceed by induction on $|\mathcal L|$. If $|\mathcal L|=1$, then $\Lambda$ consists of a single vertex and there is nothing to prove. Assume the statement is true for languages with at most $n$ elements and suppose that $\mathcal L$ has $n+1$ elements. Suppose that $w\in \mathcal L$ has maximum length. Let $v$ be the longest proper prefix of $w$ belonging to $\mathcal L$ (it could be the empty word). Let $\Lambda'$ be the Hasse diagram of $P_{\mathcal{L}\setminus \{w\}}$; it is a rooted tree with root the empty word by induction. Then there is an edge between $v$ and $w$ in $\Lambda$, and this is the only edge incident to $w$. Hence $\Lambda$ and $\Lambda'$ have the same Euler characteristic and so $\Lambda$ is a tree (as $\Lambda'$ was). \end{proof}
It is possible for an element of $P_{\mathcal{L}}$ to cover infinitely many distinct elements of $P_{\mathcal{L}}$. For example if $\mathcal{L}= \{\epsilon, ab, aab, aaab, aaaab, \ldots \}$ then $\epsilon$ covers all the other words in this set. The following fact is essentially established in~\cite[Lemma~5.2]{Zhang} and the discussion afterwards.
\begin{Prop}\label{p:transversal} Every element $m\in M$ can uniquely be expressed in the form $m=[w_m]u_m$ with $w_m\in \mathcal T$ and $u_m\in R$. Moreover, the irreducible word $v\in A^*$ representing $m$ is $w_mt$ where $t\in I^*$ is the longest suffix of $v$ in $I^*$ and $[t]=u_m$. Furthermore, if $m,n\in M$, then $m\leq_{\mathscr R} n$ if and only if $w_n$ is a prefix of $w_m$. Hence the Hasse diagram of $M/\mathscr R$ is a tree rooted at $1$. \end{Prop}
\begin{proof} Let $v\in A^*$ be the irreducible word with $[v]=m$. Then $v=v'v''$ where $v''$ is the longest suffix in $I^*$. It follows that $v'\in \mathcal T$ and $v''$ represents an element of $R$. This shows the existence of such a factorization. For uniqueness, let $w\in \mathcal T$ and $x\in A^*$ be an irreducible word representing an element of $R$. By~\cite[Lemma~3.3]{Zhang}, we have that $x\in I^*$. Then $wx$ is irreducible by Lemma~\ref{l:preserve.irred}. Thus $wx=v'v''$. By choice of $v''$, we must have $|x|\leq |v''|$.
If $|x|<|v''|$, then some non-empty prefix of $v''$ is a suffix of $w$. As $I$ is prefix-closed, and hence so is $I^*$, this prefix lies in $I^*$, so its final factor from $I$ is a non-empty suffix of $w$, contradicting that $w\in \mathcal T$. Thus $x=v''$ and hence $w=v'$. This establishes the uniqueness of the decomposition. Suppose now that $m=nn'$ with $n'\in M$. Let $z$ be a right inverse of $u_m$ and let $v$ be an irreducible word representing $u_nn'z$. Then $w_nv$ is an irreducible word representing $nn'z=mz=[w_m]u_mz=[w_m]$ by Lemma~\ref{l:preserve.irred}. Thus $w_m=w_nv$ and so $w_n$ is a prefix of $w_m$. Conversely, suppose that $w_n$ is a prefix of $w_m$. Clearly, $[w_n]\mathrel{\mathscr R} n$ and $[w_m]\mathrel{\mathscr R} m$ as $u_m,u_n$ are right invertible. So it suffices to observe that $[w_m]\leq_{\mathscr R} [w_n]$. The final statement follows from Proposition~\ref{prop:tree}. \end{proof}
Retaining the notation of Proposition~\ref{p:transversal} we obtain the following immediate corollary.
\begin{Cor}\label{c:free.right} The action of $R$ on the right of $M$ is free with transversal $\mathscr T=\{[w]\mid w\in\mathcal T\}$. Furthermore, $M/R\cong M/\mathscr R$. \end{Cor}
Another corollary is that all principal right ideals of $M$ are isomorphic as right $M$-sets.
\begin{Cor}\label{c:ISO.princ.right} Let $n\in M$. Then the mapping $\varphi_n\colon M\to nM$ given by $\varphi_n(m)=[w_n]m$ is an isomorphism of right $M$-sets. \end{Cor}
\begin{proof} As $nM=[w_n]M$, the map $\varphi_n$ is clearly a surjective homomorphism of right $M$-sets. To see that it is an isomorphism, suppose that $\varphi_n(m)=\varphi_n(m')$. Let $v,v'\in A^*$ be irreducible words representing $m,m'$, respectively. Then $w_nv$ and $w_nv'$ are irreducible by Lemma~\ref{l:preserve.irred}. As they represent the same element of $M$, we deduce that $v=v'$ and so $m=m'$. \end{proof}
We now generalize Corollary~\ref{c:ISO.princ.right} to show that every right ideal of $M$ is a free $M$-set.
\begin{Thm}\label{t:right.free} Let $M$ be a special monoid. Then every right ideal of $M$ is a free right $M$-set and dually every left ideal of $M$ is a free left $M$-set. \end{Thm}
\begin{proof} Let $X$ be a right ideal of $M$ and let $X'=\{w\in \mathcal T\mid [w]\in X\}$. Let $U'$ be the set of elements $w\in X'$ with no proper prefix in $X'$. We claim that $X$ is freely generated as an $M$-set by $U=\{[w]\mid w\in U'\}$. By Proposition~\ref{p:transversal} if $s,t\in U$ are distinct, then $sM\cap tM=\emptyset$. Indeed, if $m\in sM\cap tM$, then $w_m$ has both $w_s$ and $w_t$ as prefixes and hence either $w_s$ is a prefix of $w_t$, or vice versa, contradicting the definition of $U'$. Also, by Corollary~\ref{c:ISO.princ.right}, for each $s\in U$, we have that $sM\cong M$ as a right $M$-set. It follows that $U$ freely generates an $M$-subset $Y$ of $X$. We show that $Y=X$. If $m\in X$, then $m=[w_m]u_m$ with $w_m\in \mathcal T$ and $u_m\in R$. Then $[w_m]\in X$ as $[w_m]\mathrel{\mathscr{R}} m$. Let $w\in \mathcal T$ be the shortest prefix of $w_m$ with $[w]\in X$. Then $w\in U'$ and $m\in [w]M\subseteq Y$. This completes the proof. \end{proof}
\begin{Rmk}\label{r:fg} Note that if $X$ is a free right $M$-set on a subset $B$ and if $X$ has a finite generating set, then $B$ is finite. Indeed, if $C$ is a finite generating set for $X$, then there is a finite subset $B'\subseteq B$ such that $C\subseteq B'M$. But then $B\subseteq B'M$ and hence $B=B'$ by freeness of the action. \end{Rmk}
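To illustrate Proposition~\ref{p:transversal} and its corollaries, consider again the bicyclic monoid $\langle a,b\mid ab=1\rangle$; as before, we merely sketch the easily verified computations. The irreducible words are the words $b^ia^j$ with $i,j\geq 0$, so $\mathcal T=\{b^i\mid i\geq 0\}$ and the decomposition of Proposition~\ref{p:transversal} is $[b^ia^j]=[b^i][a^j]$ with $[a^j]\in R$. The $\mathscr R$-class of $[b^i]$ consists of the elements $[b^ia^j]$ with $j\geq 0$, and the Hasse diagram of $M/\mathscr R$ is the infinite descending chain $R_{1}>_{\mathscr R}R_{[b]}>_{\mathscr R}R_{[b^2]}>_{\mathscr R}\cdots$, that is, a rooted tree in which every vertex has exactly one child.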
Let $\Gamma(M,A)$ be the Cayley graph of $M$ with respect to $A$. Let $\Gamma(M,A,m)$ denote the strongly connected component of $m$ (also called the Sch\"utz\-en\-ber\-ger graph of $m$). An immediate geometric consequence of Corollary~\ref{c:ISO.princ.right} is the following.
\begin{Cor}\label{c:Cayley.geometry} Let $n\in M$. Then there is an isomorphism of $A$-labeled graphs $\Gamma(M,A,1)\to \Gamma(M,A,n)$ sending $1$ to $[w_n]$. If $\Gamma_n$ is the induced subgraph of $\Gamma(M,A)$ consisting of all vertices accessible from $n$, then $\Gamma(M,A)$ is isomorphic to $\Gamma_n$ as an $A$-labeled graph via an isomorphism taking $1$ to $[w_n]$. \end{Cor}
Corollary~\ref{c:Cayley.geometry} recovers as a special case the result~\cite[Theorem~4.6]{Malheiro2005} that all the maximal subgroups of a special monoid are isomorphic to each other. This is because the Sch\"{u}tzenberger group of a regular $\mathrel{\mathscr{R}}$-class is isomorphic to the automorphism group of its labelled Sch\"{u}tzenberger graph \cite[Theorem~3]{Stephen1996}. Next we wish to show that there is a unique edge entering any strongly connected component of $\Gamma(M,A)$ other than the strong component of $1$, and that it ends at an element of $\mathscr T$ (see Corollary~\ref{c:free.right} for the notation). Let us say that an edge of a digraph \emph{enters} a strong component $C$ of the graph if its initial vertex is not in $C$ and its terminal vertex is in $C$.
\begin{Prop}\label{p:entrance} Let $n\in \mathscr T\setminus \{1\}$ (and so $n=[w_n]$). Then if $w_n=xa$ with $a\in A$, we have that $[x]>_{\mathscr R} n$, $[a]\notin R$ and $[x]\xrightarrow{\,\,a\,\,}n$ is the unique edge entering $\Gamma(M,A,n)$. \end{Prop}
\begin{proof} Note that $x$ is irreducible. Let $x=x'x''$ with $x''$ the longest suffix of $x$ in $I^*$. Then $x'=w_{[x]}$ and $w_n$ is not a prefix of $x'$. Thus $[x]>_{\mathscr R} [w_n]=n$ by Proposition~\ref{p:transversal}. It follows that $[x]\xrightarrow{\,\,a\,\,}n$ enters $\Gamma(M,A,n)$ and hence $[a]\notin R$. Suppose that $m\xrightarrow{\,\,b\,\,}m'$ enters $\Gamma(M,A,n)$. Let $w$ be an irreducible word representing $m$. Then $w=w_my$ where $y\in I^*$ is the longest suffix of $w$ in $I^*$. We claim that $wb$ has no suffix in $I$. Indeed, if it did, then since $I$ is prefix-closed and $w_m$ has no suffix in $I$, we must have that $yb$ has a suffix in $I$. Then $yb=rs$ where $s\in I$. Since $r$ is a prefix of $y$ and $I$ (and hence $I^*$) is prefix-closed, we obtain that $yb=rs\in I^*$. Thus $yb$ represents an element of $R$ and so \[m'=[w_myb]\mathrel{\mathscr R}[w_m]\mathrel{\mathscr R} m,\] a contradiction. Thus $wb$ has no suffix in $I$. We claim that $wb$ is irreducible. Suppose that $wb$ is not irreducible. Since $w$ is irreducible, any left hand side occurring in $wb$ would have to be a suffix of $wb$; as every left hand side in the rewriting system belongs to $\Delta^*$ and $\Delta\subseteq I$, it would follow that $wb$ has a suffix in $I$, a contradiction. Putting it all together, we deduce that $wb\in \mathcal T$ and so $wb=w_n$ by Proposition~\ref{p:transversal}. It follows that $b=a$ and $w=x$, completing the proof. \end{proof}
Let $\Gamma$ be the directed graph obtained from $\Gamma(M,A)$ by collapsing each strongly connected component (and its internal edges) to a point. So the vertex set of $\Gamma$ is $M/\mathscr R$ and there is an edge $(m,a)$ from the $\mathscr{R}$-class $R_m$ of $m$ to the $\mathscr{R}$-class $R_{ma}$ of $ma$ if $m\in M$, $a\in A$ and $R_m\neq R_{ma}$. We aim to show that $\Gamma$ is a regular rooted tree isomorphic to the Hasse diagram of $M/\mathscr{R}$. Note that this tree can be of infinite degree.
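In the bicyclic example above, Proposition~\ref{p:entrance} is readily verified: the unique edge entering the strong component of $[b^i]$, for $i\geq 1$, is $[b^{i-1}]\xrightarrow{\,\,b\,\,}[b^i]$, and the graph $\Gamma$ is an infinite ray rooted at the strong component of $1$, illustrating the following theorem.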
\begin{Thm}\label{t:looks.like.Hasse} The graph $\Gamma$ is isomorphic as a digraph to the Hasse diagram of $M/\mathscr{R}$ ordered by $\geq_{\mathscr R}$. This graph is a regular rooted tree with root the strong component of $1$. \end{Thm}
\begin{proof} We retain the above notation. Suppose first that $w,w'\in \mathcal T$ and there is an edge from $\Gamma(M,A,[w'])$ to $\Gamma(M,A,[w])$; it is unique by Proposition~\ref{p:entrance}. Then, by Proposition~\ref{p:entrance}, we have that if $w=xa$ with $a\in A$, then $[x]\mathrel{\mathscr R} [w']$. Thus if $x'$ is the longest suffix of $x$ belonging to $I^*$, then $x=w'x'$ and $w=w'x'a$. Since $I$ is prefix-closed, it follows that if $y$ is any non-empty prefix of $x'$, then $w'y$ has a suffix in $I$ and hence does not belong to $\mathcal T$. Thus in the prefix order on $\mathcal T$, there is no element between $w'$ and $w$. It follows from Proposition~\ref{p:transversal} that in the Hasse diagram of $M/\mathscr{R}$ with respect to $\geq_{\mathscr R}$, there is an edge from $R_{[w']}$ to $R_{[w]}$. Conversely, suppose that there is an edge in the Hasse diagram from $R_{[w']}$ to $R_{[w]}$ with $w,w'\in \mathcal T$. Then $w'$ is a proper prefix of $w$ by Proposition~\ref{p:transversal} and so $w=w'y$ with $y\in A^*$ irreducible and non-empty. Let $a\in A$ be the last letter of $y$, so $y=y'a$. Then $[w]\leq_{\mathscr R}[w'y']\leq_{\mathscr R}[w']$ and so one of these inequalities is an equality. Since $w$ is not a prefix of $w'y'$, it follows from Proposition~\ref{p:transversal} (or by~\cite[Lemma~5.2]{Zhang}) that the first inequality is strict. Thus $[w'y']$ belongs to the strong component of $[w']$ and the image of the edge $[w'y']\xrightarrow{\,\,a\,\,}[w]$ connects the strong component of $[w']$ to the strong component of $[w]$ in $\Gamma$ (and is the only such edge by Proposition~\ref{p:entrance}). Since the Hasse diagram of the reverse prefix order on any set of words containing the empty word is a rooted tree (Proposition~\ref{prop:tree}), it follows that $\Gamma$ is a rooted tree with root the strong component of $1$. By construction of $\Gamma$ and Corollary~\ref{c:Cayley.geometry} it follows that all vertices have the same cardinality set of children. \end{proof}
Note that in general if $M$ is a monoid generated by a finite set $A$, and if $R'$ and $R''$ are $\mathrel{\mathscr{R}}$-classes of $M$ such that $R'$ covers $R''$ in the poset $M / \mathrel{\mathscr{R}}$, then there must exist elements $x \in R'$ and $y \in R''$ and a generator $a \in A$ such that $xa = y$ in $M$. The second part of the proof of the above theorem shows that in a finitely generated special monoid in this situation there are unique elements $x \in R', y \in R''$ and $a \in A$ satisfying these properties. We note that the left action of $M$ on $\Gamma(M,A)$ induces a left action of $M$ on $\Gamma$ by cellular mappings since strong components are mapped into strong components. However, elements of $M$ can collapse edges to a point. In fact, $\Gamma$ (being a tree) is a simplicial graph ($1$-dimensional simplicial complex) and $M$ acts by simplicial mappings. For example, consider the bicyclic monoid $B=\langle a,b\mid ab=1\rangle$. Then, since $a \mathrel{\mathscr R} 1$, left multiplication by $a$ collapses the vertices corresponding to the strong components of $1$ and $b$ and hence collapses the edge between these components.
We can view the vertex set of $\Gamma$ as $M/R$ and so if we use the simplicial chain complex for $\Gamma$, we have $C_0(\Gamma)\cong \mathbb Z[M/R]\cong \mathbb ZM\otimes_{\mathbb ZR}\mathbb Z$ as a $\mathbb ZM$-module. We can identify $C_1(\Gamma)$ as a $\mathbb ZM$-module with the quotient $C_1(\Gamma(M,A))/N$ where $N$ is the $\mathbb ZM$-submodule generated as an abelian group by edges $m\xrightarrow{\,\,a\,\,}ma$ with $a\in A$ and $m\mathrel{\mathscr R} ma$. Note that $C_1(\Gamma(M,A))$ is a free $\mathbb ZM$-module of rank $|A|$. We shall show that $N$ is a free $\mathbb ZM$-module of finite rank, as well. It will then follow that $C_1(\Gamma)$ is of type ${\rm FP}_{\infty}$ with projective dimension at most $1$. Note that $N$ is the direct sum over all $a\in A$ of the submodules $N_a$ spanned by edges $m\xrightarrow{\,\,a\,\,}ma$ with $m\mathrel{\mathscr R}ma$ and so it suffices to show that each of these submodules $N_a$ is a finitely generated free $\mathbb ZM$-module.
\begin{Prop}\label{p:loops.free} Let $a\in A$. Then $N_a$ is a finitely generated free $\mathbb ZM$-module. Consequently, $N$ is a finitely generated free $\mathbb ZM$-module. \end{Prop}
\begin{proof} Let $L=\{m\in M\mid m\mathrel{\mathscr R} ma\}$. Then $L$ is a left ideal of $M$ and $N_a\cong \mathbb ZL$. First observe that if $[a]\in R$, then $L=M$ and there is nothing to prove. So assume that $[a]\notin R$. By Theorem~\ref{t:right.free} we have that $L$ is a free left $M$-set. By Remark~\ref{r:fg} it suffices to prove that $L$ is finitely generated. We claim that $L$ is generated by $I'=\{[w]\in L\mid w\in I\}$, which is finite as $I$ is finite. Let $m\in L$ and let $w\in A^*$ be irreducible with $[w]=m$. There are two cases. Assume first that $wa$ is irreducible. Then since $ma\mathrel{\mathscr R}m$, it follows from Proposition~\ref{p:transversal} that $wa\notin\mathcal T$ (as $wa$ is not a prefix of $w$) and so $wa=sxa$ with $xa\in I$. Since $[a]\notin R$, we must have that $x$ is non-empty. Since $I$ is prefix-closed, $x\in I$. Thus $[x],[xa]\in R$ and hence $[x]\mathrel{\mathscr{R}}[x]a$. Then $m=[s][x]$ and $[x]\in I'$. So $m\in MI'$. Next assume that $wa$ is not irreducible. Then $wa=sxa$ with $xa\in \Delta$, as $w$ is irreducible. But $[a]\notin R$ and so $x$ is non-empty. Thus $x\in I$. Also $xa\in \Delta\subseteq I$. Thus $[x],[xa]\in R$ and so $[x]\mathrel{\mathscr R}[x]a$. Also, $m=[s][x]$ with $[x]\in I'$ and so $m\in MI'$. This completes the proof. \end{proof}
Now all is in place to prove the first main result of this section.
\begin{Thm}\label{t:special} Let $M$ be a finitely presented special monoid with group of units $G$.
\begin{enumerate}
\item If $G$ is of type ${\rm FP}_n$ with $1\leq n\leq \infty$, then $M$ is of type left-${\rm FP}_n$ and of type right-${\rm FP}_n$.
\item $\mathop{\mathrm{cd}} G\leq \mathop{\mathrm{left\;cd}} M \leq \max\{2,\mathop{\mathrm{cd}} G\}$ and $\mathop{\mathrm{cd}} G\leq \mathop{\mathrm{right\;cd}} M\leq \max\{2,\mathop{\mathrm{cd}} G\}$.
\end{enumerate}
\end{Thm}
\begin{proof} We retain the above notation. We prove the results for left-${\rm FP}_n$ and left cohomological dimension (the other results are dual). First note that if $L$ denotes the submonoid of left invertible elements, then $M$ is a free left $L$-set by the dual of Proposition~\ref{p:transversal}. If $B$ is the basis of $M$ as a left $L$-set, then each element $m\in M$ can be expressed uniquely as $u_mb_m$ with $b_m\in B$ and $u_m\in L$. But then if $g\in G$ with $gm=m$, we must have $gu_mb_m=u_mb_m$.
It follows that $gu_m=u_m$ by uniqueness. But since $L$ is a free product of $G$ with a finitely generated free monoid by~\cite[Theorem~4.4]{Zhang}, it follows that $G$ acts freely on the left of $L$ and so $g=1$. Thus $\mathbb ZM$ is a free left $\mathbb ZG$-module and so $\mathop{\mathrm {cd}} G\leq \mathop{\mathrm {left \; cd}} M$ as any projective resolution of $\mathbb Z$ over $\mathbb ZM$ is a projective resolution over $\mathbb ZG$. The graph $\Gamma$ is a tree with a simplicial action by $M$ described above. So we have an exact sequence of $\mathbb ZM$-modules \[0\longrightarrow C_1(\Gamma)\longrightarrow C_0(\Gamma)\longrightarrow \mathbb Z\longrightarrow 0.\] We have identified $C_1(\Gamma)\cong C_1(\Gamma(M,A))/N$ where $C_1(\Gamma(M,A))$ is free of rank $|A|$ and $N$ is a finitely generated free module by Proposition~\ref{p:loops.free}. Thus $C_1(\Gamma)$ is of type ${\rm FP}_{\infty}$ and has projective dimension at most $1$. On the other hand, $C_0(\Gamma)\cong \mathbb Z[M/R]\cong \mathbb ZM\otimes_{\mathbb ZR} \mathbb Z$. By Zhang's theorem~\cite[Theorem~4.4]{Zhang}, $R=G\ast C^*$ where $C$ is a finite alphabet, and hence $R$ is of type ${\rm FP}_n$ whenever $G$ is, and $\mathop{\mathrm{left \; cd}} R\leq \max\{1,\mathop{\mathrm{cd}} G\}$ by~\cite[Theorem~5.5]{CremannsOtto1998} (or see Corollaries~\ref{c:free.prod.fp} and~\ref{c:free.prod.geom} below). Note that a finitely generated free monoid is of type ${\rm FP}_{\infty}$ and of cohomological dimension $1$ because its Cayley graph is a tree and a free $M$-CW complex of finite type of dimension $1$. As $\mathbb ZM$ is a free, and hence flat, right $\mathbb ZR$-module by Corollary~\ref{c:free.right}, it follows from Lemma~\ref{l:flat.base} that $C_0(\Gamma)\cong \mathbb ZM\otimes_{\mathbb ZR}\mathbb Z$ is of type ${\rm FP}_n$ and of projective dimension at most $\max\{1,\mathop{\mathrm{cd}} G\}$. The result now follows from an application of Corollary~\ref{c:fp.resolved}. \end{proof}
In general the left- and right-cohomological dimensions of a monoid are not equal. In fact they are completely independent of each other; see~\cite{Guba1998}. One immediate corollary of the above result is that if $M$ is a finitely presented special monoid with left- and right-cohomological dimensions both at least equal to $2$, then the left cohomological dimension of $M$ is equal to its right cohomological dimension. As an application of Theorem~\ref{t:special} we now show how it can be used to prove that all special one-relator monoids are of type ${\rm FP}_{\infty}$, answering a case of a question of Kobayashi. We also recover Kobayashi's result (see~\cite[Theorem~7.2]{Kobayashi1998} and~\cite[Corollary 7.5]{Kobayashi2000}) that if the relator is not a proper power then the cohomological dimension is at most $2$. A word $u \in A^*$ is called \emph{primitive} if it is not a proper power in $A^*$.
\begin{lem}\emph{\cite[Corollary~4.2]{LyndonSchutz1962}} For every nonempty word $w \in A^*$ there is a unique primitive word $p$ and a unique integer $k \geq 1$ such that $w = p^k$. \end{lem}
The following lemma is well known. We include it here for completeness.
\begin{lem}\label{lem_proppower} Let $M=\langle A\mid w=1\rangle$. Write $w = p^k$ where $p$ is a primitive word and $k \geq 1$. The group of units $G$ of $M$ is a one-relator group with torsion if and only if $k > 1$. \end{lem}
\begin{proof} Since $p$ is both a prefix and a suffix of $w$, it is both right invertible and left invertible in $M$, and hence invertible.
Therefore, the decomposition of $w$ into indecomposable invertible factors has the form $w = (p_1 p_2 \ldots p_l)^k$ where $p_1 p_2 \ldots p_l$ is the decomposition of $p$ into indecomposable invertible factors. Let $P = \{ p_i : 1 \leq i \leq l \} \subseteq A^*$. Let $X = \{ x_{p} : p \in P \}$ be an alphabet in bijection with the set of words $P$, so distinct words $p_i$ and $p_j$ from $P$ correspond to distinct letters $x_{p_i}$ and $x_{p_j}$ from the alphabet $X$. It follows from~\cite[Lemma 96]{Adjan1966} that the group of units of the monoid $M$ is isomorphic to the group defined by the group presentation $\mathrm{Gp}\langle X \mid (x_{p_1} x_{p_2} \ldots x_{p_l})^k=1 \rangle$. Observe that $x_{p_1} x_{p_2} \ldots x_{p_l} \in X^*$, i.e. this is a positive word over the alphabet $X$. In particular the word $(x_{p_1} x_{p_2} \ldots x_{p_l})^k$ is cyclically reduced. Since the word $p_1 p_2 \ldots p_l$ is primitive by assumption, it follows that the word $x_{p_1} x_{p_2} \ldots x_{p_l} \in X^*$ is also primitive. Hence $(x_{p_1} x_{p_2} \ldots x_{p_l})^k$ is a proper power if and only if $k > 1$. But then by a well-known result of Karrass, Magnus and Solitar characterising elements of finite order in one-relator groups~\cite[Theorem~5.2]{LyndonAndSchupp} it follows that the group of units of $M$ is a one-relator group with torsion if and only if $k > 1$. \end{proof}
Well-written accounts of the result~\cite[Lemma 96]{Adjan1966} of Adjan used in the previous proof may be found in~\cite[Section~1]{Lallement1974} and~\cite[Section~2]{Lallement1988}. The following result gives a positive answer to Kobayashi's question \cite[Problem~1]{Kobayashi2000} in the case of special one-relator monoids.
\begin{Cor}\label{c:special.one} Let $M$ be the one-relator monoid $\langle A\mid w=1\rangle$. Then $M$ is of type left- and right-${\rm FP}_{\infty}$. Moreover, if $w$ is not a proper power then $\mathop{\mathrm{left \; cd}} M\leq 2$ and $\mathop{\mathrm{right \; cd}} M\leq 2$, and otherwise $\mathop{\mathrm{left \; cd}} M= \mathop{\mathrm{right \; cd}} M= \infty$. \end{Cor}
\begin{proof} We prove the results for left-${\rm FP}_\infty$ and left cohomological dimension (the other results are dual). The group of units $G$ of $M$ is a one-relator group by Adjan's theorem~\cite[Lemma 96]{Adjan1966} (this also follows from the results of Zhang described above), and hence of type ${\rm FP}_{\infty}$ by Lyndon's theorem~\cite{Lyndon1950}. This proves the first statement in light of Theorem~\ref{t:special}. The second statement follows since by Lemma~\ref{lem_proppower} the group $G$ is a one-relator group whose defining relator is not a proper power in the first case and is a proper power in the second. By a theorem of Lyndon~\cite{Lyndon1950}, $G$ has cohomological dimension at most $2$ in the first case and has infinite cohomological dimension in the second. The result now follows from Theorem~\ref{t:special}. \end{proof}
We now turn our attention to proving the topological analogue of Theorem~\ref{t:special}. We do this by showing how an equivariant classifying space for a special monoid may be constructed from an equivariant classifying space for its group of units. Note that while for finitely presented monoids it follows from~\cite{GraySteinberg1} that the properties left-${\rm FP}_n$ and left-${\rm F}_n$ are equivalent, in contrast it is not known whether $\mathrm{left \; cd}(M)$ and $\mathrm{left \; gd}(M)$ coincide (this is even open for groups).
Therefore, the second part of the following theorem is not an immediate consequence of Theorem~\ref{t:special}.
\begin{Thm}\label{t:special:topological} Let $M$ be a finitely presented special monoid with group of units $G$.
\begin{enumerate}
\item If $G$ is of type ${\rm F}_n$ with $1\leq n\leq \infty$, then $M$ is of type left- and right-${\rm F}_n$.
\item $\mathop{\mathrm{gd}} G\leq \mathop{\mathrm{left \; gd}} M \leq \max\{2,\mathop{\mathrm{gd}} G\}$ and $\mathop{\mathrm{gd}} G\leq \mathop{\mathrm{right \; gd}} M \leq \max\{2,\mathop{\mathrm{gd}} G\}$.
\end{enumerate}
\end{Thm}
\begin{proof} We prove the results for left-${\rm F}_n$ and left geometric dimension. The other results are dual. It is proved in~\cite[Section~6]{GraySteinberg1} that for finitely presented monoids the properties left-${\rm F}_n$ and left-${\rm FP}_n$ coincide. Now part (1) of the theorem follows from the first part of Theorem~\ref{t:special}. (One can also see this directly from the construction below.) To prove part (2), first note that we showed that $M$ was a free left $G$-set at the beginning of the proof of Theorem~\ref{t:special}. Hence any free $M$-CW complex is a free $G$-CW complex. Also note that Theorem~\ref{t:right.free} implies that every projective $M$-set is free, as $Me$ is a left ideal for any idempotent $e$. Thus any projective $M$-CW complex $X$ is a free $M$-CW complex and so it follows that $G\backslash X$ is a $K(G,1)$-space. The inequality $\mathop{\mathrm{gd}} G\leq \mathop{\mathrm{left \; gd}} M$ follows. We shall now explain how to construct an equivariant classifying space for $M$ of dimension $\max\{2, \mathrm{gd}(G) \}$. Let $X_G$ be an equivariant classifying space for the group $G$. Since $G$ is a group it follows that the projective $G$-CW complex $X_G$ is a free $G$-CW complex. By Zhang's theorem~\cite[Theorem~4.4]{Zhang}, the submonoid of right units $R$ of $M$ is isomorphic to the monoid free product $G\ast C^*$ where $C^*$ is a free monoid over a finite alphabet $C$. The right Cayley graph $\Gamma(C^*)$ of $C^*$ with respect to the generating set $C$ is a tree and thus is a free equivariant classifying space for the monoid $C^*$. In particular $C^*$ is of geometric dimension at most $1$. Let $X$ be the left equivariant classifying space for $R \cong G \ast C^*$ given by the construction in the proof of Theorem~\ref{t:bass.serre.free} in Section~\ref{sec_amalg} below. From the construction it follows that $X$ is a free $R$-CW complex and an equivariant classifying space for $R$. (If $X_G$ has a $G$-finite $n$-skeleton, then $X$ has an $R$-finite $n$-skeleton.) It also follows from the construction of $X$ that $\dim X \leq \max\{1, \dim X_G\}$ (compare with Theorem~\ref{t:amalg.cd}). Now $M$ is an $M$-$R$-biset, which is free as a left $M$-set and is also free as a right $R$-set by Corollary~\ref{c:free.right}, and $X$ is a free left $R$-CW complex. It follows from Proposition~\ref{c:base.change.cw} that $M \otimes_R X$ is a free left $M$-CW complex with $\dim M \otimes_R X=\dim X$. (It will have $M$-finite $n$-skeleton if $X$ has $R$-finite $n$-skeleton.) The complex $M \otimes_R X$ is a disjoint union of copies of $X$, one for each $\mathscr{R}$-class of $M$ by Remark~\ref{r:tensor.with.free}. To make this concrete, take the transversal $\mathscr{T}$ of the $\mathscr{R}$-classes of $M$ defined above, which is a basis for $M$ as a free right $R$-set.
Then each element of $M\otimes_R X$ can be uniquely written in the form $t\otimes x$ with $t\in \mathscr T$ and $x\in X$ and $M\otimes_R X=\coprod_{t\in \mathscr T} t\otimes X$. We say that two elements $m \otimes x$ and $m' \otimes x'$ of $M \otimes_R X$ belong to the same copy of $X$ in $M \otimes_R X$ if and only if $m \mathrel{\mathscr{R}} m'$. Fix a basepoint $x_0 \in \mathcal{Q} \subseteq X_0$. Next we connect the space $M \otimes_R X$ by attaching edges $m \otimes x_0 \rightarrow ma \otimes x_0$ for each $m \in M$ and $a \in A$. This is the same as attaching a free $M$-cell $M \times B^1$ of dimension $1$ based at $1 \otimes x_0 \rightarrow a \otimes x_0$ for each $a \in A$. Let $Y$ denote the resulting free $M$-CW complex. The $\mathscr{R}$-order in the monoid $M$ induces in a natural way an order on the copies of $X$ in $Y$, and there is an edge joining two distinct copies of $X$ in $Y$ if and only if there is an edge in the right Cayley graph of $M$ joining the corresponding $\mathscr{R}$-classes. Moreover, it follows from the definition of $Y$, and Proposition~\ref{p:entrance}, that there is at most one edge joining any pair of distinct copies of $X$ in $Y$. It follows that if we contract each of the copies of $X$ in $Y$ we obtain the graph $\Gamma$ in Theorem~\ref{t:looks.like.Hasse}, which is a regular rooted tree, together with possibly infinitely many loops at each vertex. These loops arise from the edges $m \otimes x_0 \rightarrow ma \otimes x_0$ where $m \mathrel{\mathscr{R}} ma$ added in the construction of $Y$. (Notice that if $M\otimes_R X$ has $M$-finite $n$-skeleton, then so does $Y$.) To turn $Y$ into an equivariant classifying space for $M$ we add $2$-cells to deal with these loops, in the following way. It follows from Proposition~\ref{p:loops.free} that for each $a \in A$, the set $L=\{m\in M\mid m\mathrel{\mathscr R} ma\}$ is a free left $M$-set generated by a finite set $F_a \subseteq L$ with $F_a \subseteq R$. For each $r \in F_a$, choose a path $p_r$ in $1 \otimes X$ from $1 \otimes x_0$ to $1 \otimes rx_0$, choose a path $q_r$ in $1 \otimes X$ from $1 \otimes x_0$ to $1 \otimes ra x_0$, and let $e_r$ denote the edge in $Y$ labelled by $a$ from $1 \otimes rx_0$ to $1 \otimes ra x_0$. Note that since $r \in F_a \subseteq L$ it follows that $r \in R$ and $ra \in R$ and so $1 \otimes rx_0 = r \otimes x_0$ and $1 \otimes ra x_0 = ra \otimes x_0$ and hence $e_r$ is indeed one of the edges that was added during the construction of $Y$. Now for each $a \in A$ and each $r \in F_a$ attach a free $2$-cell $M \times B^2$ to $Y$ by attaching a $2$-cell at $1 \otimes x_0$ with boundary path $p_r e_r q_r^{-1}$, together with all of its translates under the action of $M$. We do this for each such pair and call the resulting complex $Z$. Now if we contract the copies of $X$ in $Z$, we obtain the tree $\Gamma$, together with loops at each vertex, each of which bounds a single disk. Thus $Z$ is homotopy equivalent to the tree $\Gamma$, and hence is contractible. This shows that $Z$ is an equivariant classifying space for the monoid $M$. (Note that if $Y$ has $M$-finite $n$-skeleton, then so does $Z$, hence giving an alternative proof that if $G$ is of type ${\rm F}_n$, then $M$ is of type left-${\rm F}_n$.)
To complete the proof, since the free $M$-CW complex $Z$ was constructed from $M \otimes_R X$ by attaching $1$-cells and $2$-cells, and since we have already observed that $\dim M \otimes_R X = \dim X \leq \max \{ 1, \dim X_G \}$, it follows that $ \dim Z \leq \max \{ 2, \dim X_G \} $ and hence $\mathrm{left \; gd}(M) \leq \max\{2, \mathrm{gd}(G) \}$. \end{proof}
For special one-relator monoids we obtain the following corollary which is the topological analogue of Corollary~\ref{c:special.one}.
\begin{Cor}\label{c:special.one:topological} Let $M$ be the one-relator monoid $\langle A\mid w=1\rangle$. Then $M$ is of type left- and right-${\rm F}_{\infty}$. Moreover, if $w$ is not a proper power then $\mathop{\mathrm{left \; gd}} M\leq 2$ and $\mathop{\mathrm{right \; gd}} M\leq 2$, and otherwise $\mathop{\mathrm{left \; gd}} M= \mathop{\mathrm{right \; gd}} M = \infty$. \end{Cor}
In particular this result says that every special one-relator monoid whose defining relator is not a proper power admits an equivariant classifying space of dimension at most $2$. In fact, in this case it turns out that the Cayley complex of the monoid gives an equivariant classifying space of dimension at most $2$, as the following result demonstrates.
\begin{Thm}\label{thm_CayleyComplex} Let $M=\langle A\mid w=1\rangle$ such that $w$ is not a proper power. Let $X$ be the $2$-complex obtained by filling in each loop labeled by $w$ in the Cayley graph $\Gamma(M,A)$ of $M$. Then $X$ is a left equivariant classifying space for $M$ of dimension at most $2$. \end{Thm}
\begin{proof} It follows from the proof of \cite[Theorem~6.14]{GraySteinberg1} that $X$ is an $M$-finite simply connected free $M$-CW complex of dimension at most $2$. It is shown in \cite[Corollary~7.5]{Kobayashi2000} that the presentation $\langle A\mid w=1\rangle$ is strictly aspherical in the sense defined in \cite[Section~2]{Kobayashi1998}. The cellular chain complex of $X$ gives a free resolution displayed in Equation~(7.2) in \cite[Theorem~7.2]{Kobayashi1998}. This shows that $X$ is acyclic. Since $X$ is acyclic and simply connected it follows from the Whitehead and Hurewicz theorems that $X$ is contractible, and hence $X$ is a left equivariant classifying space for the monoid $M$. \end{proof}
The analogous result to Theorem~\ref{thm_CayleyComplex} is also known to hold for one-relator groups. This was first observed in~\cite{Cockcroft1954} and is a consequence of Lyndon's Identity Theorem~\cite{Lyndon1950}. A more topological proof is given in~\cite{DyerVasquez1973}. We currently do not know whether the two-sided analogues of the results proved in this section, for bi-${\rm F}_n$ and (two-sided) geometric dimension, hold. One way to establish these results might be to seek a better understanding of the two-sided Cayley graphs of special monoids. As mentioned in the introduction, building on the ideas presented in this section, in \cite{GraySteinberg3} we have extended these results to arbitrary one-relator monoids. In particular in \cite{GraySteinberg3} we give a positive answer to Kobayashi's question \cite[Problem~1]{Kobayashi2000} by showing that every one-relator monoid $\langle A \mid u=v \rangle$ is of type left- and right-${\rm F}_\infty$ and ${\rm FP}_{\infty}$.
\section{Amalgamated free products}
\label{sec_amalg}
For graphs of groups, including free products with amalgamation and HNN extensions, there are well-established methods for constructing a $K(G,1)$ from $K(G,1)$s of the vertex and edge groups; see for example~\cite[page~92]{Hatcher2002}. This can then be used to prove results for groups about the behaviour of the properties ${\rm F}_n$ and geometric dimension for amalgamated free products and HNN extensions. In this section, and the two sections that follow it, we use topological methods to investigate the behaviour of topological and homological finiteness properties of monoids, for free products with amalgamation, and HNN extension constructions. A \emph{monoid amalgam} is a triple $[M_1,M_2;W]$ where $M_1,M_2$ are monoids with a common submonoid $W$. The \emph{amalgamated free product} is then the pushout in the diagram
\begin{equation}\label{eq:pushout.mon}
\begin{tikzcd} W\ar{r}\ar{d} & M_1\ar{d}\\ M_2\ar{r} & M_1\ast_W M_2\end{tikzcd}
\end{equation}
in the category of monoids. Monoid amalgamated products are \textbf{much} more complicated than group ones. For instance, the amalgamated free product of finite monoids can have an undecidable word problem, and the factors do not have to embed or intersect in the base monoid; see~\cite{Sapir2000}. So there are no normal forms available in complete generality that allow one to construct a Bass--Serre tree. We use instead the homological ideas of Dicks. For more details about these methods we refer the reader to~\cite[Chapter 1, Sections 4-7]{DicksAndDunwoody}. An $M$-graph $X$ is a one-dimensional CW complex with a cellular action by $M$ sending edges to edges. Given an $M$-graph $X$ we use $V$ to denote its set of $0$-cells and $E$ to denote its set of $1$-cells. Given any $M$-graph, if we choose some orientation for the edges, then the attaching maps of the $1$-cells define functions $\iota, \tau$ from $E$ to $V$ where in $X$ each oriented edge $e$ starts at $\iota e$ and ends at $\tau e$. We call $V$ and $E$ the vertex set, and edge set respectively, of the $M$-graph $X$. We shall assume that the monoid action preserves the orientation. It shall sometimes be useful to think of an $M$-graph as given by a tuple $(X,V,E,\iota,\tau)$ where $X$ is an $M$-set, $X = V \cup E$ a disjoint union where each of $V$ and $E$ is closed under the action of $M$, and $\iota, \tau\colon E \rightarrow V$ are $M$-equivariant maps. Let $M$ be a monoid and let $X$ be an $M$-graph. Let $\mathbb{Z}V$ and $\mathbb{Z}E$ denote the free abelian groups on $V$ and $E$, respectively. The cellular boundary map of $X$ is the $M$-linear map $\partial\colon \mathbb{Z}E \rightarrow \mathbb{Z}V$ with $\partial(e) = \tau e - \iota e$ for all $e \in E$. The sequence
\[\mathbb{Z}E \xrightarrow{\,\,\partial\,\,} \mathbb{Z}V \xrightarrow{\,\,\epsilon\,\,} \mathbb{Z} \longrightarrow 0 \]
is the augmented cellular chain complex of $X$, where $\epsilon$ is the augmentation map sending $\sum_{v \in V} n_v v$ to $\sum_{v \in V} n_v$ (i.e., each element of the basis $V$ is mapped to $1$). Throughout this section we shall frequently be confronted with the task of showing that a given $M$-graph is a tree or a forest.
To do this, it is useful to recall that the $M$-graph $X$ is a forest if and only if $\partial\colon \mathbb{Z}E \rightarrow \mathbb{Z}V$ is injective; see~\cite[Lemma~6.4]{DicksAndDunwoody}. If, in addition, $X$ is connected (that is, $X$ is a tree), then the sequence
\[0\longrightarrow \mathbb{Z}E \xrightarrow{\,\,\partial\,\,} \mathbb{Z}V \xrightarrow{\,\,\epsilon\,\,} \mathbb{Z} \longrightarrow 0 \]
is exact. The results in this section improve, and give simpler proofs of, several results of Cremanns and Otto~\cite{CremannsOtto1998} on the behaviour of ${\rm FP}_n$ under free products and certain rather restricted free products of monoids with amalgamation. The proofs in Cremanns and Otto are quite long and technical, as is often the case for results in this area. The results in this section demonstrate the type of result our topological methods were introduced to prove. They show that the topological approach may be used to prove more general results in a less technical and more conceptual way. Our results also generalise and simplify proofs of some results of Kobayashi~\cite{Kobayashi2010} on preservation of left-, right- and bi-${\rm FP}_n$ under free products (see for example~\cite[Proposition 4.1]{Kobayashi2010}). There are no bi-${\rm FP}_n$ analogues in the literature of the two-sided results we obtain below on the behaviour of bi-${\rm F}_n$ and geometric dimension for free products with amalgamation. Also, as far as we are aware, the results that we obtain here are the first to appear in the literature on cohomological dimension of amalgamated free products of monoids. A monoid presentation is said to have finite homological type, abbreviated to FHT, if the so-called homotopy bimodule of the given presentation is finitely generated. The homotopy bimodule is a $\mathbb{Z}M$-bimodule constructed from a complex of $\mathbb{Z}A^*$-bimodules defined using the set of defining relations $\mathfrak{R}$ of the presentation $\langle A \mid \mathfrak{R} \rangle$ of the monoid $M$, and a particular family of disjoint circuits in the derivation graph associated with the presentation. The property FHT was originally introduced by Wang and Pride~\cite{WangPride2000}. We refer the reader to that paper, or to~\cite[Section~3]{KobayashiOtto2001}, for full details of the definition of FHT. It was proved in~\cite{KobayashiOtto2003} that for finitely presented monoids FHT and bi-${\rm FP}_3$ (equivalently bi-${\rm F}_3$) are equivalent. So some of the results below also have an interpretation in terms of FHT.
\subsection{The one-sided setting}
Let us define a tree $T$ for a pushout diagram \eqref{eq:pushout.mon}. Let us assume that $f_i\colon W\to M_i$, for $i=1,2$, are the homomorphisms in the diagram and put $L=M_1\ast_W M_2$ for the pushout. The right multiplicative actions of $M_1$, $M_2$ and $W$ give three different partitions of $L$ into weak orbits. Since $W \leq M_i$, the $W$-orbits give a finer partition than both the $M_1$- and $M_2$-orbits. We can then define a directed bipartite graph $T$ with one part given by the $M_1$-orbits and the other part given by the $M_2$-orbits. When an $M_1$-orbit intersects an $M_2$-orbit, that intersection will be a union of $W$-orbits, and in this case we draw directed edges from the $M_1$-orbit to the $M_2$-orbit labelled by the $W$-orbits in this intersection. In more detail, let $T$ be the $L$-graph with vertex set
\[V=L/M_1\coprod L/M_2\]
and edge set
\[E=L/W\]
where $M_1,M_2,W$ act on the right of $L$ by first applying the canonical map to the pushout and then right multiplying.
We write $[x]_K$ for the class of $x\in L$ in $L/K$. The edge $[x]_W$ connects $[x]_{M_1}$ with $[x]_{M_2}$ (and we usually think of it as oriented in this direction). The incidence here is easily seen to be well defined and the action of $L$ on the left of these sets is by cellular mappings sending edges to edges and preserving orientation. Hence $T$ is an $L$-graph.
\begin{Lemma}\label{l:connected.bs.tree} The graph $T$ is connected. \end{Lemma}
\begin{proof} The pushout $L$, being a quotient of the free product $M_1\ast M_2$, is generated by the images of $M_1$ and $M_2$ under the natural maps (which we omit from the notation even though they need not be injective). We define the length of $x\in L$ to be the minimum $k$ such that $x=x_1\cdots x_k$ with $x_i\in M_1\cup M_2$. We prove by induction on the length of $x$ that there is a path in $T$ from $[1]_{M_1}$ to $[x]_{M_1}$. If $x=1$, this is trivial, so assume the statement is true for length at most $k$ and $x=x_1\cdots x_{k+1}$. Let $p$ be a path from $[1]_{M_1}$ to $[x_2\cdots x_{k+1}]_{M_1}$. Then $x_1p$ is a path from $[x_1]_{M_1}$ to $[x]_{M_1}$. If $x_1\in M_1$, then $[x_1]_{M_1}=[1]_{M_1}$ and so $x_1p$ is a path from $[1]_{M_1}$ to $[x]_{M_1}$. If $x_1\in M_2$, then $[x_1]_W$ is an edge connecting $[x_1]_{M_1}$ and $[x_1]_{M_2}=[1]_{M_2}$ and $[1]_W$ is an edge connecting $[1]_{M_1}$ with $[1]_{M_2}$ and so there is a path from $[1]_{M_1}$ to $[x]_{M_1}$. Finally, if $x\in L$, then $[x]_{M_2}$ is connected by $[x]_W$ to $[x]_{M_1}$, which in turn is connected by a path to $[1]_{M_1}$. Thus $T$ is connected. \end{proof}
We aim to prove that $T$ is a tree by showing that the cellular boundary map $\partial\colon \mathbb{Z}E \rightarrow \mathbb{Z}V$ is injective. To prove this we shall make use of semidirect products of monoids and the concept of a derivation. An account of this theory for groups may be found in~\cite{DicksAndDunwoody} where it is applied to show that the standard graph of the fundamental group of a graph of groups is a tree; see~\cite[Theorem~7.6]{DicksAndDunwoody}. Let $M$ be a monoid and let $A$ be a left ${\mathbb{Z}M}$-module. Then we can form the semidirect product $A \rtimes M$, of the abelian group $A$ and the monoid $M$, with underlying set $A \times M$ and multiplication given by
\[ (a, m) (a', m') = (a + m a', mm'). \]
The natural projection $\pi\colon A \rtimes M \rightarrow M$, $(a,m) \mapsto m$ is clearly a monoid homomorphism. A \emph{splitting} of this projection is a monoid homomorphism $\sigma\colon M \rightarrow A \rtimes M$ such that $\pi(\sigma(m)) = m$ for all $m \in M$. Associated to any splitting $\sigma$ of $\pi$ is a mapping $d\colon M \rightarrow A$ defined as the unique function satisfying
\[ \sigma(m) = (d(m),m) \]
for all $m \in M$. It follows from the fact that $\sigma$ is a homomorphism that the function $d\colon M \rightarrow A$ must satisfy
\begin{equation}\label{eq_der}
d(mm') = d(m) + md(m')
\end{equation}
for all $m, m' \in M$. Any function $d\colon M \rightarrow A$ satisfying \eqref{eq_der} is called a \emph{derivation}. A derivation is called \emph{inner} if it is of the form $d(m)=ma-a$ for some $a\in A$ (such a map is indeed a derivation, since $d(m)+md(m')=ma-a+m(m'a-a)=mm'a-a=d(mm')$). It is easy to check that a mapping $d\colon M\to A$ is a derivation if and only if $m\mapsto (d(m),m)$ provides a splitting of the semidirect product projection $A\rtimes M\to M$.
\begin{Lemma}\label{l:acyclic.bs.tree} The graph $T$ is a tree.
\end{Lemma}
\begin{proof} Since $T$ is connected by Lemma~\ref{l:connected.bs.tree}, it suffices to show that the cellular boundary map $\partial\colon \mathbb ZE\to \mathbb ZV$ is injective. To show this, we define a left inverse $\beta\colon \mathbb ZV\to \mathbb ZE$. In what follows, we abuse notation by identifying an element of $M_1$, $M_2$ or $W$ with its image in $L$. First define $\varphi_1\colon M_1\to \mathbb ZE\rtimes L$ by $\varphi_1(m_1) = (0,m_1)$. Then $\varphi_1$ is clearly a monoid homomorphism. Define $\varphi_2\colon M_2\to \mathbb ZE\rtimes L$ by $\varphi_2(m_2) = ([1]_W-[m_2]_W,m_2)$. Notice that $m_2\mapsto [1]_W-[m_2]_W$ is the inner derivation of the $\mathbb ZM_2$-module $\mathbb ZE$ associated to $-[1]_W\in \mathbb ZE$ and hence $\varphi_2$ is a homomorphism. Next, we observe that $\varphi_1f_1=\varphi_2f_2$. Indeed, if $w\in W$, then $\varphi_1f_1(w)=(0,w)$ and $\varphi_2f_2(w) = ([1]_W-[w]_W,w)=(0,w)$ as $[1]_W=[w]_W$. Thus there is a well defined homomorphism $\varphi\colon L\to \mathbb ZE\rtimes L$ extending $\varphi_1,\varphi_2$ by the universal property of a pushout. This map must split the semidirect product projection by construction of $\varphi_1,\varphi_2$. Indeed, for all $m_1 \in L$ in the image of $M_1$ we have $ \varphi(m_1) = \varphi_1(m_1) = (0,m_1) $ and for all $m_2 \in L$ in the image of $M_2$ we have
\[ \varphi(m_2) = \varphi_2(m_2) = ([1]_W - [m_2]_W, m_2). \]
It follows that for all $m_1 \in L$ in the image of $M_1$ we have $\pi(\varphi(m_1))=m_1$, and for all $m_2 \in L$ in the image of $M_2$ we have $\pi(\varphi(m_2))=m_2$. Since, as already observed above, $L$ is generated by the images of $M_1$ and $M_2$ under the natural maps, and since $\pi$ and $\varphi$ are homomorphisms, we conclude that $\pi(\varphi(l))=l$ for all $l \in L$, as required. It follows that $\varphi(x) = (d(x),x)$ for some derivation $d\colon L\to \mathbb ZE$ with the property that $d(m_1)=0$ for $m_1\in M_1$ and $d(m_2) = [1]_W-[m_2]_W$ for $m_2\in M_2$. Define $\beta\colon \mathbb ZV\to \mathbb ZE$ by $\beta([x]_{M_1}) =d(x)$ and $\beta([x]_{M_2}) = d(x)+[x]_W$ for $x\in L$. We must show that this is well defined. First suppose that $x\in L$ and $m_1\in M_1$. Then $d(xm_1)= xd(m_1)+d(x)=d(x)$ because $d$ vanishes on the image of $M_1$. If $x\in L$ and $m_2\in M_2$, then
\begin{align*}
d(xm_2)+[xm_2]_W&=xd(m_2)+d(x)+[xm_2]_W \\&= x([1]_W-[m_2]_W)+d(x)+[xm_2]_W = d(x)+[x]_W.
\end{align*}
It follows that $\beta$ is well defined. We now compute
\[\beta\partial([x]_W) = \beta([x]_{M_2})-\beta([x]_{M_1}) = d(x)+[x]_W-d(x)=[x]_W\]
for $x\in L$. Thus $\beta\partial=1_{\mathbb ZE}$ and so $\partial$ is injective. This completes the proof that $T$ is a tree. \end{proof}
Since $T$ is a tree we obtain an exact sequence of ${\mathbb{Z}L}$-modules
\[ 0 \longrightarrow \mathbb{Z}E \xrightarrow{\,\,\partial\,\,} \mathbb{Z}V \xrightarrow{\,\,\epsilon\,\,} \mathbb{Z} \longrightarrow 0 \]
where $E,V$ are the edge and vertex sets of $T$, respectively. See~\cite[Theorem 6.6]{DicksAndDunwoody}. The exactness of this cellular chain complex of $T$ can be reformulated in the following manner.
\begin{Cor}\label{c:exact.mv.begin} There is an exact sequence of ${\mathbb{Z}L}$-modules
\[0\longrightarrow \mathbb ZL\otimes_{\mathbb ZW} \mathbb Z\longrightarrow (\mathbb ZL\otimes_{\mathbb ZM_1}\mathbb Z)\oplus (\mathbb ZL\otimes_{\mathbb ZM_2} \mathbb Z)\longrightarrow \mathbb Z\longrightarrow 0\]
where $L=M_1\ast_{W} M_2$ is the pushout.
\end{Cor}
\begin{proof} This follows from the definition of $T$, the fact that $T$ is a tree, and the observation that $\mathbb Z[L/K]\cong \mathbb ZL\otimes_{\mathbb ZK}\mathbb Z$ for $K=M_1,M_2,W$. \end{proof}
We call $T$ the \emph{Bass--Serre tree} of the pushout. If $f\colon X\to Y$ and $g\colon X\to Z$ are continuous mappings of topological spaces, the \emph{homotopy pushout} of $f,g$ is the space obtained by attaching $X\times I$ to $Y\coprod Z$ by the mapping $h\colon X\times \partial I\to Y\coprod Z$ with $h(x,0)=f(x)$ and $h(x,1)=g(x)$. If $X,Y,Z$ are CW complexes and $f,g$ are cellular mappings, then $h$ is cellular and so the homotopy pushout $U$ of $f$ and $g$ is a CW complex. If, in addition, $X,Y,Z$ are projective $M$-CW complexes and $f,g$ are cellular and $M$-equivariant, then $U$ is a projective $M$-CW complex by~\cite[Lemma~2.1]{GraySteinberg1}. Moreover, by the description of the cells coming from the proof of \cite[Lemma~2.1]{GraySteinberg1}, if $Y,Z$ have $M$-finite $n$-skeleton and $X$ has $M$-finite $(n-1)$-skeleton (whence $X\times I$ has $M$-finite $n$-skeleton), then $U$ has $M$-finite $n$-skeleton. The homotopy pushout construction is functorial with respect to commutative diagrams
\[
\begin{tikzcd} Y\ar{r}{f_1}\ar{rd}{r}\ar{d}[swap]{f_2}& X_1\ar{rd}{s} &\\ X_2\ar{rd}[swap]{t} &Y'\ar{r}{g_1}\ar{d}{g_2} & X_1'\\ & X_2' & &\end{tikzcd}
\]
Moreover, if $r,s,t$ are homotopy equivalences, then it is well known that the induced mapping of homotopy pushouts is a homotopy equivalence; see for example~\cite[Theorem~4.2.1]{tomDieckBook}, or~\cite[page 19]{DwyerHennBook} where it is observed that homotopy colimits have the strong homotopy equivalence property. For the reader's convenience, we shall prove a special case of this fact that will be crucial in what follows. Recall that if $Y$ is a space, the (unreduced) \emph{suspension} of $Y$ is the space $\Sigma Y$ obtained from $Y\times I$ by collapsing $Y\times \{0\}$ to a point and, separately, collapsing $Y\times \{1\}$ to a point. If $Y$ is contractible, then the mapping $\Sigma Y\to I$ induced by the projection $Y\times I\to I$ is a homotopy equivalence.
\begin{Lemma}\label{l:pushout.graph} Let $M$ be a monoid and $X_1,X_2,Y$ locally path connected $M$-spaces. Assume that the natural mappings $r_i\colon X_i\to \pi_0(X_i)$, for $i=1,2$, and $r\colon Y\to \pi_0(Y)$ are homotopy equivalences (where the set of path components is given the discrete topology). Let $f_i\colon Y\to X_i$ be $M$-equivariant continuous mappings, for $i=1,2$, and let $Z$ be the homotopy pushout of $X_1,X_2$ along $Y$, which is naturally an $M$-space. Let $\Gamma$ be the $M$-graph with vertex set $\pi_0(X_1)\coprod \pi_0(X_2)$ and edge set $\pi_0(Y)$ where the edge corresponding to $C\in \pi_0(Y)$ connects the component of $f_1(C)$ to the component of $f_2(C)$; this is the homotopy pushout of $\pi_0(X_1)$ and $\pi_0(X_2)$ along $\pi_0(Y)$. Then the natural $M$-equivariant mapping $h\colon Z\to \Gamma$ is a homotopy equivalence. \end{Lemma}
\begin{proof} The mapping $h$ takes an element of $X_i$ to its path component and an element $(y,t)\in Y\times I$ to $(C,t)$ where $C$ is the component of $y$. This is well defined, by construction of the homotopy pushout, and is $M$-equivariant. As the connected components of $X_i$, for $i=1,2$, are disjoint and contractible subcomplexes, $Z$ is homotopy equivalent to the space obtained by contracting each of these subcomplexes to a point.
Then $Z$ has the homotopy type of the CW complex obtained by adjunction of $\coprod_{C\in \pi_0(Y)} \Sigma C$ to the discrete set $\pi_0(X_1)\coprod \pi_0(X_2)$ where $\Sigma C$ is attached via the mapping sending $(y,0)$ to the component of $f_1(C)$ and $(y,1)$ to the component of $f_2(C)$. Since the mapping $\Sigma C\to I$ induced by the projection $C\times I\to I$ is a homotopy equivalence by contractibility of $C$, it follows that $h$ is a homotopy equivalence. This completes the proof. \end{proof} We now prove some preservation results for amalgamated free products. We shall apply the observation in Remark~\ref{r:tensor.with.free} without comment. \begin{Thm}\label{t:bass.serre.free} Let $[M_1,M_2;W]$ be an amalgam of monoids such that $M_1,M_2$ are free as right $W$-sets. If $M_1,M_2$ are of type left-${\rm F}_n$ and $W$ is of type left-${\rm F}_{n-1}$, then $M_1\ast_W M_2$ is of type left-${\rm F}_n$. \end{Thm} \begin{proof} Let $X_i$ be an equivariant classifying space for $M_i$ with $M_i$-finite $n$-skeleton, for $i=1,2$, and let $Y$ be an equivariant classifying space for $W$ with $W$-finite $(n-1)$-skeleton. By \cite[Lemma~6.2]{GraySteinberg1} and the cellular approximation theorem (\cite[Theorem 2.8]{GraySteinberg1}), we can find $W$-equivariant cellular mappings $f_i\colon Y\to X_i$, for $i=1,2$. Let $L=M_1\ast_W M_2$. By McDuff~\cite{McDuff1979}, $L$ is a free right $M_i$-set, for $i=1,2$, and a free right $W$-set. Then $X_i'=L\otimes_{M_i} X_i$, for $i=1,2$, is a projective $L$-CW complex with $L$-finite $n$-skeleton and $Y'=L\otimes_W Y$ is a projective $L$-CW complex with $L$-finite $(n-1)$-skeleton by Proposition~\ref{c:base.change.cw}. Let $\tilde{f}_i\colon Y'\to X_i'$ be the map induced by $f_i$, for $i=1,2$, and let $Z$ be the homotopy pushout of $\tilde{f}_1,\tilde{f}_2$. It is a projective $L$-CW complex. We claim that $Z$ is an equivariant classifying space for $L$. Note that $Z$ has an $L$-finite $n$-skeleton by construction. Our goal is to show that $Z$ is homotopy equivalent to the Bass--Serre tree $T$. By \cite[Proposition~3.4]{GraySteinberg1}, we have that $\pi_0(X_i')\cong L\otimes_{M_i} \pi_0(X_i)\cong L/M_i$ and $\pi_0(Y')\cong L\otimes_W \pi_0(Y)\cong L/W$ and $f_i$ induces the natural mapping $L/W\to L/M_i$ under these identifications, for $i=1,2$. As $X_i'\cong L/M_i\times X_i$ and $Y'\cong L/W\times Y$ (by freeness of $L$ as a right $K$-set for $K=M_1,M_2,W$) and $X_i$, for $i=1,2$, and $Y$ are contractible, the projections $X_i'\to \pi_0(X_i')$, for $i=1,2$, and $Y'\to \pi_0(Y')$ are homotopy equivalences. It follows that $Z$ is homotopy equivalent to $T$, by Lemma~\ref{l:pushout.graph}, and hence contractible. This completes the proof. \end{proof} Note that we do not assume that the monoids $M_1$ and $M_2$ are finitely generated, or finitely presented, in the above result. Recall that a monoid can be of type left-${\rm F}_2$ without being finitely presented, and can be of type left-${\rm F}_1$ without being finitely generated; see~\cite[Section~6]{GraySteinberg1}. The hypotheses of Theorem~\ref{t:bass.serre.free} hold if $W$ is trivial or if $M_1,M_2$ are left cancellative and $W$ is a group. As another example, if we consider $\mathbb N$, then, for any $k>0$, $\mathbb N$ is a free $k\mathbb N$-set with basis $\{0,1,\ldots, k-1\}$. 
Since $k\mathbb N\cong \mathbb N$, it follows from Theorem~\ref{t:bass.serre.free} that $\mathbb N\ast_{k\mathbb N=m\mathbb N} \mathbb N$ is of type left-${\rm F}_{\infty}$, as $\mathbb N$ is of type left-${\rm F}_{\infty}$, for any $k,m>0$. As a special case of Theorem~\ref{t:bass.serre.free} we obtain the following result as a corollary.
\begin{Cor}\label{c:free.prod.fp} A free product $M\ast N$ of monoids of type left-${\rm F}_n$ is of type left-${\rm F}_n$. If $M,N$ are finitely presented monoids, then $M\ast N$ is of type left-${\rm F}_n$ if and only if $M$ and $N$ are both of type left-${\rm F}_n$. \end{Cor}
\begin{proof} If $M$ and $N$ are of type left-${\rm F}_n$, then $M \ast N$ is of type left-${\rm F}_n$ by Theorem~\ref{t:bass.serre.free} as $M,N$ are free $\{1\}$-sets. Conversely, if $M,N$ are finitely presented, then so is $M\ast N$ and hence left-${\rm F}_n$ is equivalent to left-${\rm FP}_n$ for these monoids. A result of Pride~\cite{Pride2006} says that a retract of a left-${\rm FP}_n$ monoid is left-${\rm FP}_n$. As $M,N$ are retracts of $M\ast N$, the converse follows. \end{proof}
The fact that for finitely presented monoids $M,N$ of type left-${\rm FP}_n$, the free product $M\ast N$ is of type left-${\rm FP}_n$ was first proved in~\cite[Theorem~5.5]{CremannsOtto1998}. The following corollary is classical.
\begin{Cor} If $[G_1,G_2;H]$ is an amalgam of groups with $G_1,G_2$ of type left-${\rm F}_n$ and $H$ of type left-${\rm F}_{n-1}$, then $G_1\ast_H G_2$ is of type left-${\rm F}_n$. \end{Cor}
\begin{proof} Since $G_1,G_2$ are free right $H$-sets, this follows from Theorem~\ref{t:bass.serre.free}. \end{proof}
The homotopy pushout construction in the proof of Theorem~\ref{t:bass.serre.free} also serves to establish the following.
\begin{Thm}\label{t:amalg.cd} Let $[M_1,M_2;W]$ be an amalgam of monoids such that $M_1,M_2$ are free as right $W$-sets. Suppose that $d_i$ is the left geometric dimension of $M_i$, for $i=1,2$, and $d$ is the left geometric dimension of $W$. Then the left geometric dimension of $M_1\ast_W M_2$ is bounded above by $\max\{d_1,d_2,d+1\}$. \end{Thm}
\begin{Cor}\label{c:free.prod.geom} Let $M$ and $N$ be monoids of left geometric dimension at most $n$. Then $M\ast N$ has left geometric dimension at most $\max\{n,1\}$. \end{Cor}
We now wish to prove a homological analogue of Theorem~\ref{t:bass.serre.free}.
\begin{Thm}\label{t:bass.serre.flat} Let $[M_1,M_2;W]$ be an amalgam of monoids such that $\mathbb ZL$ is flat as a right $\mathbb ZM_1$-, $\mathbb ZM_2$- and $\mathbb ZW$-module, where $L=M_1\ast_W M_2$. If $M_1,M_2$ are of type left-${\rm FP}_n$ and $W$ is of type left-${\rm FP}_{n-1}$, then $M_1\ast_W M_2$ is of type left-${\rm FP}_n$. \end{Thm}
\begin{proof} By Lemma~\ref{l:flat.base} and the hypotheses, we deduce that $\mathbb ZL\otimes_{\mathbb ZM_i} \mathbb Z$ is of type ${\rm FP}_n$, for $i=1,2$, and $\mathbb ZL\otimes_{\mathbb ZW}\mathbb Z$ is of type ${\rm FP}_{n-1}$. The result now follows by applying Corollary~\ref{c:fp.resolved} to the exact sequence in Corollary~\ref{c:exact.mv.begin}. \end{proof}
\begin{remark} It is reasonable to consider whether it might be possible to weaken the hypothesis of Theorem~\ref{t:bass.serre.flat} to just assuming that $\mathbb ZM_1$ and $\mathbb ZM_2$ are flat as $\mathbb ZW$-modules.
In~\cite[Lemma~5.2(a)]{Fiedorowicz1984}, Fiedorowicz claims that if $[M_1,M_2;W]$ is an amalgam of monoids such that $\mathbb ZM_1$ and $\mathbb ZM_2$ are flat as left $\mathbb ZW$-modules, then $\mathbb ZL$ (where $L=M_1\ast_W M_2$) is flat as a left $\mathbb ZM_i$-module, for $i=1,2$, and as a left $\mathbb ZW$-module. Unfortunately his result is not correct. The following counterexample to~\cite[Lemma~5.2(a)]{Fiedorowicz1984} is due to Tyler Lawson (see~\cite{MOExample}), whom we thank for allowing us to reproduce it. Let \[ M_1 = \langle a, a^{-1} \mid aa^{-1} = 1, \; a^{-1}a = 1 \rangle, \quad W = \{ b \}^*, \quad \mbox{and} \quad M_2 = \{ c,d \}^*. \] So $M_1$ is isomorphic to the infinite cyclic group, and $W$ and $M_2$ are the free monoids of ranks $1$ and $2$, respectively. Let $f_1\colon W \rightarrow M_1$ be the homomorphism which maps $b \mapsto a$, let $f_2\colon W \rightarrow M_2$ be the homomorphism which maps $b \mapsto c$, and let $L$ be the amalgamated free product of the monoid amalgam $[M_1,M_2;W]$ with respect to the embeddings $f_1$ and $f_2$. Then $L$ is isomorphic to the monoid with presentation \[ \langle a,a^{-1}, d \mid aa^{-1} = 1, a^{-1}a = 1 \rangle, \] that is, to $\mathbb Z\ast \{d\}^*$. As the commutative ring $\mathbb ZM_1$ is a localization of $\mathbb ZW$, it is clearly flat as a left $\mathbb ZW$-module. Since $W$ is a free factor in $M_2$, we have that $M_2$ is a free left $W$-set and hence $\mathbb ZM_2$ is a free left $\mathbb ZW$-module (and thus flat). On the other hand, $\mathbb{Z}L$ is not flat as a left $\mathbb{Z}M_2$-module. This may be shown by considering the exact sequence of $\mathbb{Z}M_2$-modules \[ 0 \rightarrow \mathbb{Z}M_2 \oplus \mathbb{Z}M_2 \rightarrow \mathbb{Z}M_2 \rightarrow \mathbb{Z} \rightarrow 0, \] where the first map sends $(u,v)$ to $uc + vd$, and the second sends $c$ and $d$ to zero. Here $\mathbb{Z}$ is made a left $\mathbb{Z}M_2$-module by having $c$ and $d$ annihilate it rather than via the trivial module structure. Tensoring this sequence over $\mathbb{Z}M_2$ on the left by $\mathbb{Z}L$ gives the sequence \[ 0 \rightarrow \mathbb{Z}L \oplus \mathbb{Z}L \rightarrow \mathbb{Z}L \rightarrow 0 \rightarrow 0, \] which is not left exact since the first factor of the direct sum is taken isomorphically to the middle term by invertibility of $a$ (the image of $c$ in $L$); for instance, $(-da^{-1},1)$ is a nonzero element of the kernel of the first map. Hence $\mathbb{Z}L$ is not flat as a $\mathbb{Z}M_2$-module. A nearly identical proof was given by Bergman to show that universal localization does not preserve flatness in the non-commutative setting~\cite[Page~70]{Bergman74}. Since~\cite[Lemma~5.2(a)]{Fiedorowicz1984} does not hold, it cannot be used to weaken the hypothesis of Theorem~\ref{t:bass.serre.flat} to assuming only that $\mathbb ZM_1$ and $\mathbb ZM_2$ are flat as $\mathbb ZW$-modules. Similarly, \cite[Lemma~5.2(a)]{Fiedorowicz1984} cannot be used to weaken the hypotheses of any of Theorems~\ref{t:bass.serre.flat.cd}, \ref{t:bass.serre.flat.bi} or \ref{t:bass.serre.flat.bi2}. \end{remark} It follows from results of McDuff~\cite{McDuff1979} that the hypotheses of Theorem~\ref{t:bass.serre.flat} are satisfied when $M_1$ and $M_2$ are free as right $W$-sets, which gives the following corollary. \begin{Cor}\label{t:bass.serre.free.hom} Let $[M_1,M_2;W]$ be an amalgam of monoids such that $M_1,M_2$ are free as right $W$-sets. If $M_1,M_2$ are of type left-${\rm FP}_n$ and $W$ is of type left-${\rm FP}_{n-1}$, then $M_1\ast_W M_2$ is of type left-${\rm FP}_n$. \end{Cor} Corollary~\ref{t:bass.serre.free.hom} applies, in particular, when $W$ is trivial.
Thus we obtain the following improvement on~\cite[Theorem~5.5]{CremannsOtto1998} in which we do not need to assume the factors are finitely presented. \begin{Cor}\label{c:free.prod.hom} Let $M_1,M_2$ be monoids of type left-${\rm FP}_n$. Then $M_1\ast M_2$ is of type left-${\rm FP}_n$. \end{Cor} \begin{Thm}\label{t:bass.serre.flat.cd} Let $[M_1,M_2;W]$ be an amalgam of monoids such that $\mathbb ZL$, where $L=M_1\ast_W M_2$, is flat as a right $\mathbb ZM_1$-, $\mathbb ZM_2$- and $\mathbb ZW$-module. If $M_1,M_2$ have left cohomological dimension at most $d$ and $W$ has left cohomological dimension at most $d-1$, then $M_1\ast_W M_2$ has left cohomological dimension at most $d$. \end{Thm} \begin{proof} By Lemma~\ref{l:flat.base} and the hypotheses, we deduce that $\mathbb ZL\otimes_{\mathbb ZM_i} \mathbb Z$ is of cohomological dimension at most $d$, for $i=1,2$, and $\mathbb ZL\otimes_{\mathbb ZW}\mathbb Z$ is of cohomological dimension at most $d-1$. We deduce the theorem by applying Corollary~\ref{c:fp.resolved} to the exact sequence in Corollary~\ref{c:exact.mv.begin}. \end{proof} Again, combining this with results of McDuff~\cite{McDuff1979} gives the following. \begin{Cor}\label{t:amalg.cd.cohom} Let $[M_1,M_2;W]$ be an amalgam of monoids such that $M_1,M_2$ are free as right $W$-sets. Suppose that $d_i$ is the left cohomological dimension of $M_i$, for $i=1,2$, and $d$ is the left cohomological dimension of $W$. Then the left cohomological dimension of $M_1\ast_W M_2$ is bounded above by $\max\{d_1,d_2,d+1\}$. \end{Cor} \subsection{The two-sided setting} We need some preliminary properties of tensor products before investigating amalgams in the two-sided context. \begin{Prop}\label{p:tensor.id} If $f\colon M\to N$ is a monoid homomorphism, then there is an $N\times N^{op}$ isomorphism $F\colon N\otimes_M M\otimes_M N\to N\otimes_M N$ defined by $F(n\otimes m\otimes n')=nm\otimes n'$. \end{Prop} \begin{proof} The mapping $h\colon N\times M\times N\to N\otimes_M N$ given by $(n,m,n')\mapsto nm\otimes n'$ is $N\times N^{op}$-equivariant and satisfies $(nm',m,m''n')\mapsto nm'm\otimes m''n'=nm'mm''\otimes n'=h(n,m'mm'',n')$ and so the mapping $F$ is well defined. The mapping $k\colon N\times N\to N\otimes_M M\otimes_M N$ given by $(n,n')\mapsto n\otimes 1\otimes n'$ satisfies $k(nm,n')=nm\otimes 1\otimes n' = n\otimes m\otimes n'=n\otimes 1\otimes mn'=k(n,mn')$ for $m\in M$ and hence induces a mapping $N\otimes_M N\to N\otimes_M M\otimes_M N$. Clearly, $h$ and $k$ induce inverse mappings as $nm\otimes 1\otimes n'=n\otimes m\otimes n'$ for $m\in M$. \end{proof} The next proposition will frequently be used to decongest notation. \begin{Prop}\label{p:bi.tensor.iso} Let $A$ be a right $M$-set, $B$ a left $M$-set and $C$ a left $M\times M^{op}$-set. Then $A\otimes_M C\otimes_M B$ is naturally isomorphic to $(A\times B)\otimes_{M\times M^{op}} C$ in the category of sets where we view $A\times B$ as a right $M\times M^{op}$-set via the action $(a,b)(m,m') = (am,m'b)$. \end{Prop} \begin{proof} Define $f\colon A\times C\times B\to (A\times B)\otimes_{M\times M^{op}} C$ by $f(a,c,b)=(a,b)\otimes c$. Then $f(am,c,m'b) = (am,m'b)\otimes c=(a,b)\otimes mcm'$ and so $f$ induces a well-defined mapping $A\otimes_M C\otimes_M B\to (A\times B)\otimes_{M\times M^{op}} C$. Define $g\colon A\times B\times C\to A\otimes_M C\otimes_M B$ by $g(a,b,c) = a\otimes c\otimes b$.
Then $g(am,m'b,c) = am\otimes c\otimes m'b = a\otimes mcm'\otimes b= g(a,b,mcm')$ and so $g$ induces a well-defined mapping $(A\times B)\otimes_{M\times M^{op}} C\to A\otimes_M C\otimes_M B$. The maps induced by $f$ and $g$ are clearly mutually inverse and natural in $A,B,C$. \end{proof} \begin{Rmk}\label{r:same.arg} A nearly identical proof shows that if $A$ is a right $\mathbb ZM$-module, $B$ is a left $\mathbb ZM$-module and $C$ is a $\mathbb ZM$-bimodule, then we have that $A\otimes_{\mathbb ZM} C\otimes_{\mathbb ZM}B\cong (A\otimes B)\otimes_{\mathbb ZM\otimes \mathbb ZM^{op}} C$ as abelian groups and the isomorphism is natural. \end{Rmk} \begin{Prop}\label{p:tensor.free} Suppose that $A$ is a free right $M$-set, $B$ is a free left $M$-set and $C$ is an $M$-$M$-biset. Then $A\otimes_M C\otimes_M B$ is naturally isomorphic to $A/M\times C\times M\backslash B$ in the category of sets. \end{Prop} \begin{proof} By freeness, $A\otimes_M C\cong A/M\times C$ via $a\otimes c\mapsto ([a],c)$ where $[a]$ is the class of $a$ and, moreover, this is a right $M$-set isomorphism. Therefore, $A\otimes_M C\otimes_M B\cong (A/M\times C)\otimes_M B\cong A/M\times C\times M\backslash B$ because $B$ is a free left $M$-set on $M\backslash B$. The isomorphism is clearly natural in $A,B,C$. \end{proof} We now wish to consider a pushout diagram \eqref{eq:pushout.mon} in the bimodule setting. Let us assume that $f_i\colon W\to M_i$ is the homomorphism in the diagram, for $i=1,2$, and we continue to use $L$ to denote the pushout. Let us proceed to define a forest $T$. The vertex set of $T$ will be \[V=(L\otimes_{M_1} L)\coprod (L\otimes_{M_2} L)\] and the edge set will be \[E=L\otimes_W L.\] We shall write $[x,y]_{K}$ for the tensor $x\otimes y$ in $L\otimes_{K} L$ for $K=M_1,M_2,W$. The edge $[x,y]_W$ will connect $[x,y]_{M_1}$ to $[x,y]_{M_2}$, and we think of it as oriented in this direction. Note that $T$ is an $L\times L^{op}$-graph. Note that $[x,y]_K\mapsto xy$ is well defined for any of $K=M_1,M_2,W$. \begin{Lemma}\label{l:bass.serre.com} There is an $L\times L^{op}$-equivariant isomorphism $\pi_0(T)\to L$ induced by the multiplication map on vertices. \end{Lemma} \begin{proof} As an edge $[x,y]_W$ connects $[x,y]_{M_1}$ to $[x,y]_{M_2}$, we have that multiplication $[x,y]_{M_i}\mapsto xy$ on vertices induces an $L\times L^{op}$-equivariant surjective mapping $\pi_0(T)\to L$. To prove the injectivity, we first claim that $[1,x]_{M_1}$ is connected by an edge path to $[x,1]_{M_1}$ for all $x\in L$ by induction on the length of $x$. If $x=1$, there is nothing to prove. So assume the claim for length $k$ and let $x=x_1\cdots x_{k+1}$ with $x_i\in M_1\cup M_2$ (again abusing notation as $M_i$ need not embed in $L$). Let $p$ be a path from $[1,x_2\cdots x_{k+1}]_{M_1}$ to $[x_2\cdots x_{k+1},1]_{M_1}$. Then $x_1p$ is a path from $[x_1,x_2\cdots x_{k+1}]_{M_1}$ to $[x,1]_{M_1}$. If $x_1\in M_1$, then $[x_1,x_2\cdots x_{k+1}]_{M_1}=[1,x]_{M_1}$ and we are done. If $x_1\in M_2$, then $[x_1,x_2\cdots x_{k+1}]_W$ is an edge between $[x_1,x_2\cdots x_{k+1}]_{M_1}$ and $[1,x]_{M_2}$. But $[1,x]_{W}$ is an edge from $[1,x]_{M_1}$ to $[1,x]_{M_2}$ and so we are again done in this case. If $x=x_1x_2$ with $x_1,x_2\in L$, there is a path $p$ from $[1,x_1]_{M_1}$ to $[x_1,1]_{M_1}$ by the above claim. Then $px_2$ is a path from $[1,x]_{M_1}$ to $[x_1,x_2]_{M_1}$. Thus any two vertices $[u,v]_{M_1}$ and $[u',v']_{M_1}$ with $uv=u'v'$ are connected in $T$.
But $[u,v]_W$ connects $[u,v]_{M_1}$ to $[u,v]_{M_2}$ and hence any two vertices $[u,v]_{M_i}$ and $[u',v']_{M_j}$ with $uv=u'v'$ are connected for all $i,j\in \{1,2\}$. This completes the proof. \end{proof} Next we prove that $T$ is a forest. Note that $\mathbb ZE$ is a $\mathbb ZL$-bimodule. If $A$ is a bimodule over a monoid ring $\mathbb ZK$ then we can form the two-sided semidirect product $A\bowtie K$, of the abelian group $A$ and the monoid $K$, with underlying set $A \times K$ and multiplication given by \[ (a,k)(a',k') = (ak' + ka', kk'). \] A \emph{splitting} $\sigma$ of the projection $\pi\colon A \bowtie K \rightarrow K$ is a monoid homomorphism $\sigma\colon K \rightarrow A \bowtie K$ such that $\pi(\sigma(k))=k$ for all $k \in K$. A mapping $d\colon K\to A$ is a \emph{derivation} if \[d(kk') =kd(k') + d(k)k'\] for all $k, k' \in K$. A derivation is \emph{inner} if $d(k) = ka-ak$ for some $a\in A$; such a mapping is indeed a derivation, since $k(k'a-ak') + (ka-ak)k' = kk'a-akk'$. Derivations correspond to splittings of the two-sided semidirect product projection $A\bowtie K\to K$, each splitting being of the form $k\mapsto (d(k),k)$ with $d$ a derivation. \begin{Lemma}\label{l:forest} The graph $T$ is a forest. \end{Lemma} \begin{proof} A graph with vertex set $V$ and edge set $E$ is a forest if and only if the cellular boundary map $\partial\colon \mathbb ZE\to \mathbb ZV$ is injective. We again use derivations to construct a left inverse to $\partial$. As usual, we identify elements of $M_1$, $M_2$ and $W$ with their images in $L$ (abusing notation). Define $\varphi_1\colon M_1\to \mathbb ZE\bowtie L$ by $\varphi_1(m_1)=(0,m_1)$; this is clearly a homomorphism. Next define $\varphi_2\colon M_2\to \mathbb ZE\bowtie L$ by $\varphi_2(m_2) = ([1,m_2]_W-[m_2,1]_W,m_2)$. Note that $m_2\mapsto [1,m_2]_W-[m_2,1]_W$ is the inner derivation of the $\mathbb ZM_2$-bimodule $\mathbb ZE$ associated to the element $-[1,1]_W$ and hence $\varphi_2$ is a homomorphism. If $w\in W$, then \[\varphi_2f_2(w) = ([1,w]_W-[w,1]_W,w) = (0,w)=\varphi_1f_1(w)\] as $[1,w]_W=[w,1]_W$ for $w\in W$. Therefore, there is a homomorphism $\varphi\colon L\to \mathbb ZE\bowtie L$ extending $\varphi_1,\varphi_2$, which is a splitting of the projection by construction. Thus $\varphi(x)=(d(x),x)$ for some derivation $d\colon L\to \mathbb ZE$ satisfying $d(m_1)=0$ for $m_1\in M_1$ and $d(m_2) = [1,m_2]_W-[m_2,1]_W$ for $m_2\in M_2$. We now define $\beta\colon \mathbb ZV\to \mathbb ZE$ by $\beta([x,y]_{M_1}) = d(x)y$ and $\beta([x,y]_{M_2}) = d(x)y+[x,y]_W$. To show that this is well defined, we need that if $m_1\in M_1$, then $[xm_1,y]_{M_1}$ and $[x,m_1y]_{M_1}$ are sent to the same element and if $m_2\in M_2$, then $[xm_2,y]_{M_2}$ and $[x,m_2y]_{M_2}$ are sent to the same element. But $d(xm_1)y = xd(m_1)y+d(x)m_1y = d(x)m_1y$ because $d(m_1)=0$. Also, we compute \begin{align*} d(xm_2)y+[xm_2,y]_W&=xd(m_2)y+d(x)m_2y+[xm_2,y]_W\\ &=x([1,m_2]_W-[m_2,1]_W)y+d(x)m_2y+[xm_2,y]_W\\ &= d(x)m_2y+[x,m_2y]_W. \end{align*} We then obtain \[\beta\partial([x,y]_W)=\beta([x,y]_{M_2})-\beta([x,y]_{M_1}) = d(x)y+[x,y]_W-d(x)y=[x,y]_W.\] Thus $\beta\partial=1_{\mathbb ZE}$ and hence $\partial$ is injective. This completes the proof that $T$ is a forest. \end{proof} We call $T$ the \emph{Bass--Serre forest} of the pushout. Since $H_0(T)\cong \mathbb Z\pi_0(T)\cong \mathbb ZL$ as an $L\times L^{op}$-module (by Lemma~\ref{l:bass.serre.com}), Lemma~\ref{l:forest} has the following reinterpretation.
\begin{Cor}\label{c:exact.bimod.forest} There is an exact sequence of $L\times L^{op}$-modules \[0\longrightarrow \mathbb ZL\otimes_{\mathbb ZW} \mathbb ZL\longrightarrow (\mathbb ZL\otimes_{\mathbb ZM_1}\mathbb ZL)\oplus (\mathbb ZL\otimes_{\mathbb ZM_2} \mathbb ZL)\longrightarrow \mathbb ZL\longrightarrow 0\] where $L=M_1\ast_{W} M_2$ is the pushout. \end{Cor} \begin{proof} This follows by consideration of the cellular chain complex of the forest $T$ and using that $\mathbb ZV/\partial\mathbb ZE=H_0(T)\cong \mathbb ZL$, as observed before the corollary. \end{proof} \begin{Thm}\label{t:bass.serre.free.bi} Let $[M_1,M_2;W]$ be an amalgam of monoids such that $M_1,M_2$ are free as both left and right $W$-sets. If $M_1,M_2$ are of type bi-${\rm F}_n$ and $W$ is of type bi-${\rm F}_{n-1}$, then $M_1\ast_W M_2$ is of type bi-${\rm F}_n$. \end{Thm} \begin{proof} Let $X_i$ be a bi-equivariant classifying space for $M_i$ with $M_i \times M_i^{op}$-finite $n$-skeleton, for $i=1,2$, and $Y$ a bi-equivariant classifying space for $W$ with $W \times W^{op}$-finite $(n-1)$-skeleton. Fix bi-equivariant isomorphisms $r_i\colon M_i\to \pi_0(X_i)$ and $r\colon W\to \pi_0(Y)$. By \cite[Lemma~7.1]{GraySteinberg1} and the cellular approximation theorem \cite[Theorem 2.8]{GraySteinberg1}, we can find $W\times W^{op}$-equivariant cellular mappings $f_i\colon Y\to X_i$, for $i=1,2$, such that $r_i^{-1}(f_i)_{\ast}r$ is the inclusion $W\to M_i$, for $i=1,2$, where $(f_i)_{\ast}$ denotes the mapping induced by $f_i$ on path components. Let $L=M_1\ast_W M_2$. By McDuff~\cite{McDuff1979}, $L$ is free as both a left and a right $M_i$-set, for $i=1,2$, and as a left and right $W$-set. For $i=1,2$, $X_i'=L\otimes_{M_i} X_i\otimes_{M_i} L\cong (L\times L^{op})\otimes_{M_i\times M_i^{op}} X_i$ (the isomorphism by Proposition~\ref{p:bi.tensor.iso}) is a projective $L\times L^{op}$-CW complex with $L\times L^{op}$-finite $n$-skeleton and $Y'=L\otimes_W Y\otimes_W L\cong (L\times L^{op})\otimes_{W\times W^{op}} Y$ is a projective $L\times L^{op}$-CW complex with $L\times L^{op}$-finite $(n-1)$-skeleton by Proposition~\ref{c:base.change.cw}. Let $F_i\colon Y'\to X_i'$ be the mapping induced by $f_i$, for $i=1,2$, and let $Z$ be the homotopy pushout of $F_1,F_2$; it is a projective $L\times L^{op}$-CW complex. We claim that $Z$ is a bi-equivariant classifying space for $L$. Note that $Z$ has an $L\times L^{op}$-finite $n$-skeleton by construction. Our goal is to show that $Z$ is homotopy equivalent to the Bass--Serre forest $T$ via an $L\times L^{op}$-equivariant homotopy equivalence. By~\cite[Proposition~3.4]{GraySteinberg1} and Proposition~\ref{p:tensor.id}, we have that $\pi_0(X_i')\cong L\otimes_{M_i} M_i\otimes_{M_i} L\cong L\otimes_{M_i} L$, for $i=1,2$, and $\pi_0(Y')\cong L\otimes_W W\otimes_W L\cong L\otimes_W L$ and, moreover, $F_i$ induces the natural mapping $L\otimes_W L\to L\otimes_{M_i} L$, for $i=1,2$ (by construction). Thus, by Lemma~\ref{l:pushout.graph}, it suffices to show that the projections $X_i'\to \pi_0(X_i')$, for $i=1,2$, and $Y'\to \pi_0(Y')$ are homotopy equivalences. Since $L$ is free as a left and right $M_i$-set, for $i=1,2$, and as a left and right $W$-set, we have by Proposition~\ref{p:tensor.free} that $X_i'\cong L/M_i\times X_i\times M_i\backslash L$ (for $i=1,2$) and $Y'\cong L/W\times Y\times W\backslash L$. As $X_1,X_2,Y$ are homotopy equivalent to their sets of path components via the canonical projection, we deduce that the projections to path components are, indeed, homotopy equivalences for $X_1',X_2',Y'$.
This completes the proof. \end{proof} The hypotheses of Theorem~\ref{t:bass.serre.free.bi}, of course, hold if $W$ is trivial. They also hold if we amalgamate two copies of $\mathbb N$ along cyclic submonoids. So $\mathbb N\ast_{k\mathbb N=m\mathbb N}\mathbb N$ is of type bi-${\rm F}_{\infty}$ for any $m,k>0$. \begin{Cor} A free product $M\ast N$ of monoids of type bi-${\rm F}_n$ is of type bi-${\rm F}_n$. If $M,N$ are finitely presented monoids, then $M\ast N$ is of type bi-${\rm FP}_n$ if and only if $M$ and $N$ both are of type bi-${\rm FP}_n$. \end{Cor} \begin{proof} The first statement follows from Theorem~\ref{t:bass.serre.free.bi}. The second follows from the equivalence of bi-${\rm F}_n$ and bi-${\rm FP}_n$ for finitely presented monoids and the result of Pride~\cite{Pride2006} that the class of monoids of type bi-${\rm FP}_n$ is closed under retracts. \end{proof} The hypotheses of Theorem~\ref{t:bass.serre.free.bi} also hold if $M_1,M_2$ are cancellative and $W$ is a group. The homotopy pushout construction in the proof of Theorem~\ref{t:bass.serre.free.bi} yields the following theorem. \begin{Thm}\label{t:amalg.cd.bi} Let $[M_1,M_2;W]$ be an amalgam of monoids such that $M_1,M_2$ are free as left and right $W$-sets. Suppose that $d_i$ is the geometric dimension of $M_i$, for $i=1,2$, and $d$ is the geometric dimension of $W$. Then the geometric dimension of $M_1\ast_W M_2$ is bounded above by $\max\{d_1,d_2,d+1\}$. \end{Thm} Since only the trivial monoid has geometric dimension $0$, we obtain the following special case. \begin{Cor}\label{c:free.prod.geom.bi} Let $M$ and $N$ be monoids of geometric dimension at most $n$. Then $M\ast N$ has geometric dimension at most $n$. \end{Cor} Next we wish to consider the homological analogue. \begin{Prop}\label{p:flatness.bi} Suppose that $A$ is a flat right $\mathbb ZM$-module and $B$ is a flat left $\mathbb ZM$-module. Then $A\otimes B$ is a flat right $\mathbb ZM\otimes \mathbb ZM^{op}$-module (with respect to the structure $(a\otimes b)(m,m') = am\otimes m'b$). \end{Prop} \begin{proof} If $0\longrightarrow J\longrightarrow K\longrightarrow L\longrightarrow 0$ is a short exact sequence of $M$-bimodules, then $0\longrightarrow A\otimes_{\mathbb ZM} J\longrightarrow A\otimes_{\mathbb ZM}K\longrightarrow A\otimes_{\mathbb ZM}L\longrightarrow 0$ is exact by flatness of $A$. Therefore, \[0\longrightarrow A\otimes_{\mathbb ZM} J\otimes_{\mathbb ZM} B\longrightarrow A\otimes_{\mathbb ZM}K\otimes_{\mathbb ZM}B\longrightarrow A\otimes_{\mathbb ZM}L\otimes_{\mathbb ZM}B\longrightarrow 0\] is exact by flatness of $B$. The result now follows by Remark~\ref{r:same.arg}. \end{proof} \begin{Thm}\label{t:bass.serre.flat.bi} Let $[M_1,M_2;W]$ be an amalgam of monoids such that $\mathbb ZL$ is flat as both a left and right $\mathbb ZM_i$-module and $\mathbb ZW$-module, for $i=1,2$, where $L=M_1\ast_W M_2$. If $M_1,M_2$ are of type bi-${\rm FP}_n$ and $W$ is of type bi-${\rm FP}_{n-1}$, then $M_1\ast_W M_2$ is of type bi-${\rm FP}_n$. \end{Thm} \begin{proof} Note that $\mathbb Z[L\times L^{op}]\cong \mathbb ZL\otimes \mathbb ZL^{op}$ is a flat right $\mathbb Z[M_i\times M_i^{op}]$-module, for $i=1,2$, and a flat right $\mathbb Z[W\times W^{op}]$-module by Proposition~\ref{p:flatness.bi}.
By Lemma~\ref{l:flat.base} and the hypotheses, we deduce that $\mathbb Z[L\times L^{op}]\otimes_{\mathbb Z[M_i\times M_i^{op}]} \mathbb ZM_i$ is of type ${\rm FP}_n$, for $i=1,2$, and $\mathbb Z[L\times L^{op}]\otimes_{\mathbb Z[W\times W^{op}]}\mathbb ZW$ is of type ${\rm FP}_{n-1}$. The result now follows by applying Corollary~\ref{c:fp.resolved} to the exact sequence in Corollary~\ref{c:exact.bimod.forest}, in light of Proposition~\ref{p:tensor.id} and Proposition~\ref{p:bi.tensor.iso}. \end{proof} \begin{Thm}\label{t:bass.serre.flat.bi2} Suppose that $[M_1,M_2;W]$ is an amalgam of monoids such that $M_i$ has Hochschild cohomological dimension at most $d$, for $i=1,2$, $W$ has Hochschild cohomological dimension at most $d-1$, and $\mathbb ZL$ is flat as both a left and right $\mathbb ZM_i$-module and $\mathbb ZW$-module, for $i=1,2$, where $L=M_1\ast_W M_2$. Then $M_1\ast_W M_2$ has Hochschild cohomological dimension at most $d$. \end{Thm} As with the one-sided results, combining these results with results of McDuff~\cite{McDuff1979} gives the following corollaries. \begin{Cor}\label{t:bass.serre.free.bi.hom} Let $[M_1,M_2;W]$ be an amalgam of monoids such that $M_1,M_2$ are free as both left and right $W$-sets. If $M_1,M_2$ are of type bi-${\rm FP}_n$ and $W$ is of type bi-${\rm FP}_{n-1}$, then $M_1\ast_W M_2$ is of type bi-${\rm FP}_n$. This applies, in particular, to free products. \end{Cor} \begin{Cor}\label{t:amalg.cd.bi.cohom} Let $[M_1,M_2;W]$ be an amalgam of monoids such that $M_1,M_2$ are free as left and right $W$-sets. Suppose that $d_i$ is the Hochschild cohomological dimension of $M_i$, for $i=1,2$, and $d$ is the Hochschild cohomological dimension of $W$. Then the Hochschild cohomological dimension of $M_1\ast_W M_2$ is bounded above by $\max\{d_1,d_2,d+1\}$. \end{Cor} We remark that the results of this section and the previous section have analogues for the amalgamation of a finite family of monoids over a common submonoid. \section{HNN extensions} \label{sec_HNNOttoPride} In this section we shall present several new theorems about the behaviour of homological and topological finiteness properties for HNN extensions of monoids. Several natural HNN extension definitions for monoids have arisen in the literature in different contexts. First, we consider a generalization of a construction of Otto and Pride, which they used to distinguish finite derivation type from finite homological type~\cite{Pride2004}. Let $M$ be a monoid, $A$ a submonoid and $\varphi\colon A\to M$ a homomorphism. The free monoid generated by a set $X$ is denoted by $X^*$. Then the \emph{Otto-Pride extension} of $M$ with base monoid $A$ is the quotient $L$ of the free product $M\ast \{t\}^*$ by the smallest congruence such that $at=t\varphi(a)$ for $a\in A$, i.e., $L=\langle M,t\mid at=t\varphi(a), a\in A\rangle$. For example, if $A=M$ and $\varphi$ is the trivial homomorphism, then the Otto-Pride extension is the monoid $M\cup \ov M$ where $\ov M$ is an adjoined set of right zeroes in bijection with $M$. Otto and Pride have considered Otto-Pride extensions of groups where $\varphi$ is injective, in~\cite{Pride2004} and~\cite{Pride2005}. \subsection{The one-sided case} The following model for $L$ will be useful for constructing normal forms and for proving flatness results. \begin{Prop}\label{p:hnnlike.as.tensor} View $M$ as a right $A$-set via right multiplication and as a left $A$-set via the action $a\odot m=\varphi(a)m$ for $a\in A$.
Then $L$ is isomorphic to the monoid with underlying set $R=\coprod_{i=0}^{\infty}R_i$, where $R_0=M$ and $R_{i+1}=R_i\otimes_A M$, and with multiplication defined by \[(m_1\otimes \cdots\otimes m_k)(m_1'\otimes \cdots \otimes m'_\ell) = m_1\otimes \cdots \otimes m_{k-1}\otimes m_km_1'\otimes m_2'\otimes \cdots \otimes m'_{\ell}.\] In particular, $M$ and $\{t\}^*$ embed in $L$ (where $t$ is identified with $1\otimes 1\in R_1$). \end{Prop} \begin{proof} It is a straightforward exercise to verify that $R$ is a monoid with identity $1\in R_0=M$. Define $f\colon M\cup \{t\}\to R$ by $f(m)=m$ and $f(t)=1\otimes 1$. Then if $a\in A$, we have that $f(a)f(t)=a\otimes 1=1\otimes \varphi(a)=f(t)f(\varphi(a))$ and so $f$ induces a homomorphism $f\colon L\to R$. Note that $f$ is surjective. Indeed, $R_0$ is in the image of $f$ by construction. Assume that $R_{i-1}$ is in the image of $f$ and let $m_1\otimes\cdots \otimes m_{i+1}\in R_i$. If $f(x)=m_1\otimes \cdots\otimes m_i$ (by induction), then $f(xtm_{i+1}) = m_1\otimes \cdots \otimes m_i\otimes m_{i+1}$. Now define $g\colon R\to L$ by $g(m_1\otimes \cdots \otimes m_i) = m_1tm_2t\cdots tm_i$. It is easy to verify that this is well defined using the defining relations of $L$, and trivially $g$ is a homomorphism. Now $gf(m)=m$ for $m\in M$ and $gf(t)=g(1\otimes 1)=t$. Therefore, $gf=1_L$ and so $f$ is injective. This concludes the proof that $f$ is an isomorphism. \end{proof} As a corollary, we can deduce a normal form theorem for $L$ if $M$ is free as a right $A$-set. \begin{Cor}\label{c:normal.form} Let $\varphi\colon A\to M$ be a homomorphism with $A$ a submonoid of $M$. Let $L=\langle M,t\mid at=t\varphi(a), a\in A\rangle$ be the Otto-Pride extension. Suppose that $M$ is a free right $A$-set with basis $C$ containing $1$. Then every element of $L$ can be uniquely written in the form $c_0tc_1\cdots tc_ka$ with $k\geq 0$, $c_i\in C$ and $a\in A$. Consequently, $L$ is free both as a right $M$-set and a right $A$-set. \end{Cor} \begin{proof} Since $M$ is free as a right $A$-set on $C$, retaining the notation of Proposition~\ref{p:hnnlike.as.tensor}, we have that $R_i\cong C^{i+1}\times A$ via the mapping $(c_0,\ldots, c_i,a)\mapsto c_0\otimes c_1\otimes \cdots \otimes c_ia$. Composing this mapping with the isomorphism $g$ in the proof of Proposition~\ref{p:hnnlike.as.tensor} provides the desired normal form. Clearly, $L$ is a free right $M$-set on the normal forms with $c_k=1=a$ and $L$ is a free right $A$-set on the normal forms with $a=1$. This completes the proof. \end{proof} Note that if $M$ is left cancellative and $A$ is a group, then $M$ is a free right $A$-set. \begin{Cor}\label{c:flat.hnnlike} Let $M$ be a monoid, $A$ a submonoid and $\varphi\colon A\to M$ a homomorphism. Let $L=\langle M,t\mid at=t\varphi(a), a\in A\rangle$ be the Otto-Pride extension. Suppose that $\mathbb ZM$ is flat as a right $\mathbb ZA$-module. Then $\mathbb ZL$ is flat both as a right $\mathbb ZM$-module and a right $\mathbb ZA$-module. \end{Cor} \begin{proof} Put $V_0=\mathbb ZM$ and $V_{i+1}=V_i\otimes_{\mathbb ZA}\mathbb ZM$. Then by Proposition~\ref{p:hnnlike.as.tensor}, we have that as a right $\mathbb ZM$-module, $\mathbb ZL\cong \bigoplus_{i\geq 0} V_i$, so it suffices to show that $V_i$ is flat as both a right $\mathbb ZM$-module and a right $\mathbb ZA$-module. We prove this by induction. As $V_0$ is a free right $\mathbb ZM$-module and a flat right $\mathbb ZA$-module by assumption, this case is handled.
Assume that $V_i$ is flat both as a right $\mathbb ZM$-module and a right $\mathbb ZA$-module. Let $h\colon U\to W$ be an injective homomorphism of $\mathbb ZM$-modules (respectively, $\mathbb ZA$-modules). Then the induced mapping $\mathbb ZM\otimes_{\mathbb ZM} U\to \mathbb ZM\otimes_{\mathbb ZM} W$ (respectively, $\mathbb ZM\otimes_{\mathbb ZA} U\to \mathbb ZM\otimes_{\mathbb ZA} W$) is injective since $\mathbb ZM$ is flat as a right module over both $\mathbb ZM$ and $\mathbb ZA$. Then tensoring these injective mappings on the left with $V_i$ over $\mathbb ZA$ results in an injective mapping by flatness of $V_i$. Thus we see that $V_{i+1}$ is flat as a right $\mathbb ZM$-module and as a right $\mathbb ZA$-module. \end{proof} We now construct a Bass--Serre tree for Otto-Pride extensions. Again fix a monoid $M$ together with a homomorphism $\varphi\colon A\to M$ from a submonoid $A$ and let $L$ be the Otto-Pride extension. We define a graph $T$ with vertex set $V=L/M$ and edge set $E=L/A$. An edge $[x]_A$ connects $[x]_M$ to $[xt]_M$ (oriented in this way), where $[x]_K$ denotes the class of $x$ in $L/K$. This is well defined because if $a\in A$, then $[xa]_M=[x]_M$ and $[xat]_M = [xt\varphi(a)]_M=[xt]_M$. Clearly, the left action of $L$ is by cellular mappings sending edges to edges and so $T$ is an $L$-graph. We aim to prove that $T$ is a tree. \begin{Lemma}\label{l:bserre.op.conn} The graph $T$ is connected. \end{Lemma} \begin{proof} The monoid $L$ is generated by $M\cup \{t\}$. The length of an element $x$ is the length of its shortest expression as a product of these generators. We prove by induction on length that there is a path from $[1]_M$ to $[x]_M$. If $x=1$, there is nothing to prove. Assume that $x=yz$ with $y\in M\cup \{t\}$ and $z$ of length one shorter. Let $p$ be a path from $[1]_M$ to $[z]_M$. Then $yp$ is a path from $[y]_M$ to $[x]_M$. If $y\in M$, then $[y]_M=[1]_M$ and we are done. If $y=t$, then $[1]_A$ connects $[1]_M$ with $[t]_M=[y]_M$, and so we are done in this case as well. It follows that $T$ is connected. \end{proof} Next we use derivations to prove that $T$ is a tree. \begin{Lemma}\label{l:bserre.op.tree} The graph $T$ is a tree. \end{Lemma} \begin{proof} We prove that $\partial\colon \mathbb ZE\to \mathbb ZV$ is injective. It will then follow that $T$ is a tree, as it was already shown to be connected in Lemma~\ref{l:bserre.op.conn}. Define $\gamma\colon M\cup\{t\}\to \mathbb ZE\rtimes L$ by $\gamma(m) = (0,m)$ for $m\in M$ and $\gamma(t)=([1]_A,t)$. Then if $a\in A$, we have that $\gamma(a)\gamma(t) = (0,a)([1]_A,t) = ([a]_A,at)=([1]_A,t\varphi(a)) = ([1]_A,t)(0,\varphi(a))=\gamma(t)\gamma(\varphi(a))$. Therefore, $\gamma$ extends to a homomorphism $\gamma\colon L\to \mathbb ZE\rtimes L$ splitting the semidirect product. Thus $\gamma(x)=(d(x),x)$ for some derivation $d\colon L\to \mathbb ZE$ with $d(m)=0$ for $m\in M$ and $d(t)=[1]_A$. Define $\beta\colon \mathbb ZV\to \mathbb ZE$ by $\beta([x]_M) = d(x)$. This is well defined because if $m\in M$, then $d(xm)=xd(m)+d(x)=d(x)$ as $d(m)=0$. Now we compute that $\beta\partial([x]_A) = \beta([xt]_M)-\beta([x]_M) = d(xt)-d(x) = xd(t)+d(x)-d(x)=x[1]_A=[x]_A$. Therefore, $\beta\partial=1_{\mathbb ZE}$ and hence $\partial$ is injective. We conclude that $T$ is a tree. \end{proof} We call $T$ the \emph{Bass--Serre tree} of the extension. Lemma~\ref{l:bserre.op.tree} can be restated in terms of exact sequences using that $\mathbb Z[L/K]\cong \mathbb ZL\otimes_{\mathbb ZK} \mathbb Z$ for $K=M,A$.
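To spell out this identification, regard $\mathbb Z$ as a left $\mathbb ZK$-module via the trivial action of $K$. Then, in $\mathbb ZL\otimes_{\mathbb ZK}\mathbb Z$, \[ xk\otimes 1 = x\otimes k\cdot 1 = x\otimes 1 \qquad (x\in L,\ k\in K), \] so $x\otimes 1$ depends only on the class $[x]_K$, and the assignments $x\otimes 1\mapsto [x]_K$ and $[x]_K\mapsto x\otimes 1$ are mutually inverse isomorphisms of left $\mathbb ZL$-modules between $\mathbb ZL\otimes_{\mathbb ZK}\mathbb Z$ and $\mathbb Z[L/K]$.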
\begin{Cor}\label{c:exact.seq.OP} There is an exact sequence \[0\longrightarrow \mathbb ZL\otimes_{\mathbb ZA} \mathbb Z\longrightarrow\mathbb ZL\otimes_{\mathbb ZM}\mathbb Z\longrightarrow \mathbb Z\longrightarrow 0\] of left $\mathbb ZL$-modules. \end{Cor} The analogue of the homotopy pushout that we shall need in this context is the homotopy coequalizer. If $f,g\colon Y\to X$ are continuous mappings, then the \emph{homotopy coequalizer} $M(f,g)$ is the space obtained by gluing $Y\times I$ to $X$ via the mapping $h\colon Y\times \partial I\to X$ given by $h(y,0)=f(y)$ and $h(y,1)=g(y)$. If $X$ and $Y$ are CW complexes and $f,g$ are cellular, then $M(f,g)$ is a CW complex. If $X,Y$ are projective $M$-CW complexes and $f,g$ are $M$-equivariant and cellular, then $M(f,g)$ is a projective $M$-CW complex by \cite[Lemma~2.1]{GraySteinberg1}. Moreover, if $X$ has $M$-finite $n$-skeleton and $Y$ has $M$-finite $(n-1)$-skeleton, then $M(f,g)$ has $M$-finite $n$-skeleton. Homotopy coequalizers, like homotopy pushouts, are examples of homotopy colimits. If $f',g'\colon Y'\to X'$ are continuous mappings and $r\colon Y\to Y'$ and $s\colon X\to X'$ are continuous such that \[\begin{tikzcd} Y \arrow[yshift=0.7ex]{r}{f} \arrow[yshift=-0.7ex,swap]{r}{g}\arrow{d}[swap]{r} & X \arrow{d}{s} \\ Y'\arrow[yshift=0.7ex]{r}{f'} \arrow[yshift=-0.7ex]{r}[swap]{g'}&X' \end{tikzcd}\] commutes, then there is an induced continuous mapping $t\colon M(f,g)\to M(f',g')$ (which will be $M$-equivariant if all spaces are $M$-spaces and all maps are $M$-equivariant). Moreover, if $r,s$ are homotopy equivalences, then so is $t$; see~\cite[page 19]{DwyerHennBook}. For example, the graph $T$ is the homotopy coequalizer of $i,j\colon L/A\to L/M$ given by $i([x]_A) = [x]_M$ and $j([x]_A) = [xt]_M$ (where these sets are viewed as discrete spaces). \begin{Thm}\label{t:ottopride.one.side} Let $M$ be a monoid, $A$ a submonoid and $\varphi\colon A\to M$ a homomorphism. Let $L=\langle M,t\mid at=t\varphi(a), a\in A\rangle$ be the Otto-Pride extension. Suppose that $M$ is free as a right $A$-set. If $M$ is of type left-${\rm F}_n$ and $A$ is of type left-${\rm F}_{n-1}$, then $L$ is of type left-${\rm F}_n$. \end{Thm} \begin{proof} Let $X$ be an equivariant classifying space for $M$ with $M$-finite $n$-skeleton and let $Y$ be an equivariant classifying space for $A$ with $A$-finite $(n-1)$-skeleton. Using \cite[Lemma~6.2]{GraySteinberg1} and the cellular approximation theorem \cite[Theorem 2.8]{GraySteinberg1}, we can find continuous cellular mappings $f,g\colon Y\to X$ such that $f(ay)=af(y)$ and $g(ay)=\varphi(a)g(y)$ for all $a\in A$ and $y\in Y$. To construct $g$, we view $X$ as an $A$-space via the action $a\odot x=\varphi(a)x$ for $a\in A$. Let $X'=L\otimes_M X$ and $Y'=L\otimes_A Y$. These are projective $L$-CW complexes by Proposition~\ref{c:base.change.cw}; moreover, $X'$ has $L$-finite $n$-skeleton and $Y'$ has $L$-finite $(n-1)$-skeleton. Let $F\colon Y'\to X'$ be the mapping induced by $f$ and define $G\colon Y'\to X'$ by $G(u\otimes y) =ut\otimes g(y)$. The latter is well defined since if $a\in A$, then $uat\otimes g(y)=ut\varphi(a)\otimes g(y) = ut\otimes \varphi(a)g(y)=ut\otimes g(ay)$. Clearly, $G$ is $L$-equivariant, continuous and cellular. Let $Z=M(F,G)$ be the homotopy coequalizer. Then $Z$ is a projective $L$-CW complex with $L$-finite $n$-skeleton. We aim to show that $Z$ is homotopy equivalent to $T$ and hence contractible.
By~\cite[Proposition~3.4]{GraySteinberg1} we have that $\pi_0(Y')\cong L\otimes_A \pi_0(Y)\cong L/A$ and $\pi_0(X')\cong L\otimes_M \pi_0(X)\cong L/M$ as $X,Y$ are connected. By construction $F$ and $G$ induce the mappings $[u]_A\mapsto [u]_M$ and $[u]_A\mapsto [ut]_M$, respectively, on path components under these identifications. As the tree $T$ is the homotopy coequalizer of these two mappings, it suffices to show that the projections $X'\to \pi_0(X')$ and $Y'\to \pi_0(Y')$ are homotopy equivalences. Then $Z$ will be homotopy equivalent to $T$. Since $L$ is free as a right $M$-set and as a right $A$-set, we have that $X'\cong L/M\times X$ and $Y'\cong L/A\times Y$ as $L$-CW complexes. As $X$ and $Y$ are contractible and $L/M$ and $L/A$ are discrete, we deduce that the projections to connected components are homotopy equivalences in both cases. This completes the proof. \end{proof} The proof of Theorem~\ref{t:ottopride.one.side} can be used to show that if $M$ is free as a right $A$-set, $M$ has left geometric dimension $d$ and $A$ has left geometric dimension $d'$, then $L$ has left geometric dimension at most $\max\{d,d'+1\}$. The hypotheses of Theorem~\ref{t:ottopride.one.side} hold if $M$ is left cancellative and $A$ is a group or if $M=\mathbb N$ and $A$ is a cyclic submonoid. Next we prove the homological analogue of Theorem~\ref{t:ottopride.one.side} under the weaker assumption of flatness. \begin{Thm}\label{t:ottopride.one.side.flat} Let $M$ be a monoid and let $\varphi\colon A\to M$ be a homomorphism from a submonoid $A$ of $M$. Let $L=\langle M,t\mid at=t\varphi(a), a\in A\rangle$ be the Otto-Pride extension. Suppose that $\mathbb ZM$ is flat as a right $\mathbb ZA$-module. If $M$ is of type left-${\rm FP}_n$ and $A$ is of type left-${\rm FP}_{n-1}$, then $L$ is of type left-${\rm FP}_n$. \end{Thm} \begin{proof} By Corollary~\ref{c:flat.hnnlike}, $\mathbb ZL$ is flat as a right $\mathbb ZM$-module and as a right $\mathbb ZA$-module. It follows from Lemma~\ref{l:flat.base} and the hypotheses that $\mathbb ZL\otimes_{\mathbb ZM} \mathbb Z$ is of type ${\rm FP}_n$ and $\mathbb ZL\otimes_{\mathbb ZA}\mathbb Z$ is of type ${\rm FP}_{n-1}$. The result now follows by applying Corollary~\ref{c:fp.resolved} to the exact sequence in Corollary~\ref{c:exact.seq.OP}. \end{proof} One can prove similarly the following theorem. \begin{Thm}\label{t:ottopride.one.side.flat.cd} Let $M$ be a monoid and $\varphi\colon A\to M$ a homomorphism from a submonoid $A$ of $M$. Let $L=\langle M,t\mid at=t\varphi(a), a\in A\rangle$ be the Otto-Pride extension. Suppose that $\mathbb ZM$ is flat as a right $\mathbb ZA$-module. If $M$ has left cohomological dimension at most $d$ and $A$ has left cohomological dimension at most $d-1$, then $L$ has left cohomological dimension at most $d$. \end{Thm} \subsection{The two-sided case} It turns out that in the two-sided setting we shall need to consider Otto-Pride extensions corresponding to injective monoid homomorphisms $\varphi\colon A\to M$ from a submonoid $A$ of $M$ in order to make the construction left-right dual. Putting $B=\varphi(A)$, we have that $B$ is isomorphic to $A$. Otto and Pride considered the special case when $M$ and $A$ are groups (and hence so is $B$). We shall call an Otto-Pride extension \emph{HNN-like} if $\varphi$ is injective. Let $L$ be the Otto-Pride extension.
It is straightforward to check that $L=\langle M,t\mid tb=\varphi^{-1}(b)t, b\in B\rangle$ and hence left/right duals of Proposition~\ref{p:hnnlike.as.tensor} and Corollary~\ref{c:normal.form} are valid with $B$ in the role of $A$ and using left sets instead of right sets. Note that an HNN-like Otto-Pride extension of groups, which is the case considered by Otto and Pride, embeds as a submonoid of the corresponding group HNN extension (note that the Otto-Pride extension does not contain $t^{-1}$ and hence is a monoid, not a group). Our results give geometric proofs of a number of the results of~\cite{Pride2004} and~\cite{Pride2005}. In what follows, we shall always view $L$ as a right $A$-set via right multiplication and as a left $A$-set via $a\odot x= \varphi(a)x$. Therefore, we view $L\times L^{op}$ as a right $A\times A^{op}$-set via $(x,y)(a,a') = (xa,\varphi(a')y)$. \begin{Prop}\label{p:basic.tensor.ids} There is an isomorphism \[L\otimes_A L\cong L\otimes_A A\otimes_A L\cong (L\times L^{op})\otimes_{A\times A^{op}} A\] of left $L\times L^{op}$-sets. \end{Prop} \begin{proof} The first isomorphism is given by $x\otimes y\mapsto x\otimes 1\otimes y$ with inverse $x\otimes a\otimes y\mapsto xa\otimes y$ (the reader should check that these are well defined and equivariant). The second isomorphism sends $x\otimes a\otimes y$ to $(x,y)\otimes a$ with inverse mapping $(x,y)\otimes a$ to $x\otimes a\otimes y$. The reader should again check that this is well defined and equivariant. \end{proof} We now associate a Bass--Serre forest $T$ to an HNN-like Otto-Pride extension. The vertex set of $T$ is $V=L\otimes_M L$ and the edge set is $E=L\otimes_A L$. Again, we write $[x,y]_K$ for the tensor $x\otimes y$ of $L\otimes_K L$, for $K=M,A$. With this notation, the edge $[x,y]_A$ connects $[x,ty]_M$ to $[xt,y]_M$ (which we think of as oriented in this way). To check that this is well defined, observe that if $x,y\in L$ and $a\in A$, then $[xa,y]_A=[x,\varphi(a)y]_A$ and $[xa,ty]_M = [x,aty]_M = [x,t\varphi(a)y]_M$ and $[xat,y]_M=[xt\varphi(a),y]_M = [xt,\varphi(a)y]_M$. By construction, $T$ is an $L\times L^{op}$-graph. It is immediate from the definition of the incidences in $T$ that the multiplication mapping $L\otimes_M L\to L$ induces an $L\times L^{op}$-equivariant surjection $\pi_0(T)\to L$. We aim to show that it is an isomorphism. \begin{Lemma}\label{l:hnnlike.bi.iso.comp} The multiplication mapping $L\otimes_M L\to L$ induces an $L\times L^{op}$-equivariant isomorphism of $\pi_0(T)$ with $L$. \end{Lemma} \begin{proof} We first prove by induction on the length of $x$ as a product of elements of $M\cup\{t\}$ that there is a path from $[1,x]_M$ to $[x,1]_M$. If $x=1$, there is nothing to prove. Otherwise, assume $x=uy$ with $u\in M\cup \{t\}$ and $y$ of shorter length. Let $p$ be a path from $[1,y]_M$ to $[y,1]_M$. Then $up$ is a path from $[u,y]_M$ to $[x,1]_M$. If $u\in M$, then $[u,y]_M=[1,x]_M$ and we are done. If $u=t$, then $[1,y]_A$ is an edge connecting $[1,x]_M=[1,ty]_M$ to $[t,y]_M=[u,y]_M$ and so we are again done. Now if $x=uv$ in $L$, then by the above, there is a path $p$ from $[1,u]_M$ to $[u,1]_M$. Then $pv$ is a path from $[1,x]_M$ to $[u,v]_M$. It follows that all vertices $[u',v']_M$ with $u'v'=x$ are in a single connected component and hence the multiplication map induces an isomorphism from $\pi_0(T)$ to $L$. \end{proof} Next we use derivations to prove that $T$ is a forest. \begin{Lemma}\label{l:hnnlike.forest.bi} The graph $T$ is a forest.
\end{Lemma} \begin{proof} It suffices to prove that the cellular boundary map $\partial\colon \mathbb ZE\to \mathbb ZV$ is injective. Define a mapping $\gamma\colon M\cup \{t\}\to \mathbb ZE\bowtie L$ by $\gamma(m) = (0,m)$ for $m\in M$ and $\gamma(t) = ([1,1]_A,t)$. If $a\in A$, then we compute $\gamma(a)\gamma(t) = ([a,1]_A,at) = ([1,\varphi(a)]_A,t\varphi(a))=\gamma(t)\gamma(\varphi(a))$ and hence $\gamma$ extends to a homomorphism $\gamma\colon L\to \mathbb ZE\bowtie L$ splitting the two-sided semidirect product projection. Thus $\gamma(x)=(d(x),x)$ for some derivation $d\colon L\to \mathbb ZE$ such that $d(m)=0$ for $m\in M$ and $d(t)=[1,1]_A$. Define $\beta\colon \mathbb ZV\to \mathbb ZE$ by $\beta([x,y]_M) = d(x)y$. We must verify that $\beta$ is well defined. If $m\in M$, then $d(xm)y = xd(m)y+d(x)my = d(x)my$ because $d(m)=0$. This shows that $\beta$ is well defined. Next we compute that \begin{align*} \beta\partial([x,y]_A) &= \beta([xt,y]_M)-\beta([x,ty]_M) = d(xt)y-d(x)ty\\ &= xd(t)y+d(x)ty-d(x)ty= x[1,1]_Ay=[x,y]_A \end{align*} as $d(t)=[1,1]_A$. This establishes that $\beta\partial=1_{\mathbb ZE}$ and hence $T$ is a forest. \end{proof} We call $T$ the \emph{Bass--Serre forest} for $L$. The exactness of the sequence \[0\longrightarrow \mathbb ZE\longrightarrow \mathbb ZV\longrightarrow H_0(T)\longrightarrow 0,\] coming from $T$ being a forest, together with the isomorphism $\mathbb ZL\cong \mathbb Z\pi_0(T)\cong H_0(T)$ coming from Lemma~\ref{l:hnnlike.bi.iso.comp}, yields the following exact sequence. \begin{Cor}\label{c:low.exact.hnnlike} Let $L$ be the HNN-like Otto-Pride extension associated to a monomorphism $\varphi\colon A\to M$ with $A$ a submonoid of $M$. Then there is an exact sequence \[0\longrightarrow \mathbb ZL\otimes_{\mathbb ZA} \mathbb ZL\longrightarrow \mathbb ZL\otimes_{\mathbb ZM} \mathbb ZL\longrightarrow \mathbb ZL\longrightarrow0\] where $\mathbb ZL$ is viewed as a right $\mathbb ZA$-module via the inclusion and as a left $\mathbb ZA$-module via $\varphi$. \end{Cor} Suppose that we have an HNN-like Otto-Pride extension $L$ with base monoid $A$ and monomorphism $\varphi\colon A\to M$. Put $B=\varphi(A)$. \begin{Prop}\label{p:hhnlike.bi.free} If $M$ is free as a right $A$-set and as a left $B$-set, then $L$ is free as both a right and a left $M$-set. Moreover, $L$ is free as a right $A$-set and a left $B$-set. Hence $L$ is free as a left $A$-set via the action $a\odot x=\varphi(a)x$ for $a\in A$ and $x\in L$. \end{Prop} \begin{proof} This follows from Corollary~\ref{c:normal.form} and its dual. \end{proof} The flat version is the following. \begin{Prop}\label{p:hhnlike.bi.flat} If $\mathbb ZM$ is a flat right $\mathbb ZA$-module and a flat left $\mathbb ZB$-module, then $\mathbb ZL$ is flat as both a right and a left $\mathbb ZM$-module. Furthermore, $\mathbb ZL$ is flat as a right $\mathbb ZA$-module and a left $\mathbb ZB$-module. Thus $\mathbb ZL$ is flat as a left $\mathbb ZA$-module via the $\mathbb ZA$-module structure coming from $\varphi$. \end{Prop} \begin{proof} This follows from Corollary~\ref{c:flat.hnnlike} and its dual. \end{proof} We can now investigate the two-sided topological and homological finiteness of HNN-like Otto-Pride extensions. The following theorem generalises~\cite[Theorem~1]{Pride2004} and~\cite[Theorem~5]{Pride2005}. \begin{Thm}\label{t:op.hnn.bi} Let $L$ be an HNN-like Otto-Pride extension of $M$ with respect to an injective homomorphism $\varphi\colon A\to M$ and put $B=\varphi(A)$.
Suppose that $M$ is free as a right $A$-set and as a left $B$-set. If $M$ is of type bi-${\rm F}_n$ and $A$ is of type bi-${\rm F}_{n-1}$, then $L$ is of type bi-${\rm F}_n$. \end{Thm} \begin{proof} Let $X$ be a bi-equivariant classifying space for $M$ with $M\times M^{op}$-finite $n$-skeleton and $Y$ a bi-equivariant classifying space for $A$ with $A\times A^{op}$-finite $(n-1)$-skeleton. Let $r\colon M\to \pi_0(X)$ and $r'\colon A\to \pi_0(Y)$ be equivariant isomorphisms. By \cite[Lemma~7.1]{GraySteinberg1} and the cellular approximation theorem \cite[Theorem 2.8]{GraySteinberg1}, we can find cellular mappings $f_1,f_2\colon Y\to X$ such that $f_1(aya')=af_1(y)a'$ and $f_2(aya') =\varphi(a)f_2(y)\varphi(a')$ for $a,a'\in A$ and $y\in Y$ and, moreover, $r^{-1} (f_1)_{\ast}r'$ is the inclusion and $r^{-1} (f_2)_{\ast}r'=\varphi$ where $(f_i)_{\ast}$ is the induced mapping on the set of path components, for $i=1,2$. In what follows, we view $L$ as a (free) right $A$-set via the inclusion and a (free) left $A$-set via $\varphi$. Put $X'=L\otimes_M X\otimes_M L$ and $Y'=L\otimes_A Y\otimes_A L$. They are projective $L\times L^{op}$-CW complexes with $L\times L^{op}$-finite $n$- and $(n-1)$-skeletons, respectively, by Propositions~\ref{c:base.change.cw} and~\ref{p:bi.tensor.iso}. Define $F_1,F_2\colon Y'\to X'$ by $F_1(u\otimes y\otimes v) = u\otimes f_1(y)\otimes tv$ and $F_2(u\otimes y\otimes v) = ut\otimes f_2(y)\otimes v$. Let us verify that this is well defined. If $a,a'\in A$, then we have that $ua\otimes f_1(y)\otimes t\varphi(a')v = ua\otimes f_1(y)\otimes a'tv =u\otimes f_1(aya')\otimes tv$ and so $F_1$ is well defined. Also, we have that $uat\otimes f_2(y)\otimes \varphi(a')v = ut\varphi(a)\otimes f_2(y)\otimes \varphi(a')v=ut\otimes \varphi(a)f_2(y)\varphi(a')\otimes v=ut\otimes f_2(aya')\otimes v$ and so $F_2$ is well defined. Clearly, $F_1,F_2$ are continuous $L\times L^{op}$-equivariant cellular mappings. Let $Z=M(F_1,F_2)$ be the homotopy coequalizer. It is a projective $L\times L^{op}$-CW complex with $L\times L^{op}$-finite $n$-skeleton by construction. We prove that $Z$ is a bi-equivariant classifying space for $L$. To do this it suffices to construct an $L\times L^{op}$-equivariant homotopy equivalence to the Bass--Serre forest $T$. First note, by~\cite[Proposition~3.4]{GraySteinberg1}, that $\pi_0(X')\cong L\otimes_M M\otimes_M L\cong L\otimes_M L$ (by Proposition~\ref{p:tensor.id}) and $\pi_0(Y')\cong L\otimes_A A\otimes_A L\cong L\otimes_A L$ (by Proposition~\ref{p:basic.tensor.ids}). The mapping $L\otimes_A L\to L\otimes_M L$ induced by $F_1$ is $u\otimes v\mapsto u\otimes tv$ and the mapping induced by $F_2$ is $u\otimes v\mapsto ut\otimes v$. As $T$ is the homotopy coequalizer of these two mappings of discrete sets $L\otimes_A L\to L\otimes_M L$, to complete the proof it suffices to show that $X'$ and $Y'$ are homotopy equivalent to their sets of path components (via the natural projections). But this follows because $X$ and $Y$ are homotopy equivalent to their respective sets of path components and the isomorphisms $X'\cong L/M\times X\times M\backslash L$ and $Y'\cong L/A\times Y\times B\backslash L$ coming from $L$ being free as both a left and right $M$-set and as a right $A$-set and left $B$-set (cf.~Proposition~\ref{p:hhnlike.bi.free}). \end{proof} The hypotheses of Theorem~\ref{t:op.hnn.bi} hold if $M$ and $A$ are groups or, more generally, if $M$ is cancellative and $A$ is a group. They also hold if $M=\mathbb N$ and $A$ is a cyclic submonoid.
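To illustrate the latter case concretely, fix $k,m>0$, write $M=\{a\}^*\cong \mathbb N$, let $A=\{a^k\}^*$ and let $\varphi\colon A\to M$ be the monomorphism determined by $\varphi(a^k)=a^m$, so that $B=\{a^m\}^*$. Then $M$ is a free right $A$-set with basis $\{1,a,\ldots,a^{k-1}\}$ and a free left $B$-set with basis $\{1,a,\ldots,a^{m-1}\}$, and the HNN-like Otto-Pride extension is the monoid with Baumslag--Solitar-like presentation \[ L=\langle a,t\mid a^kt=ta^m\rangle. \] Since $\mathbb N$ (and hence $A\cong \mathbb N$) is of type bi-${\rm F}_{\infty}$, as used above for amalgams, Theorem~\ref{t:op.hnn.bi} shows that $L$ is of type bi-${\rm F}_{\infty}$.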
The proof of Theorem~\ref{t:op.hnn.bi} shows that if $M$ is free as a right $A$-set and a left $B$-set, $M$ has geometric dimension $d$ and $A$ has geometric dimension $d'$, then $L$ has geometric dimension at most $\max\{d,d'+1\}$. The flat homological analogue of Theorem~\ref{t:op.hnn.bi} has a similar proof. \begin{Thm}\label{t:op.hnn.bi.flat} Let $L$ be an HNN-like Otto-Pride extension of $M$ with respect to a monomorphism $\varphi\colon A\to M$ and put $B=\varphi(A)$. Assume that $\mathbb ZM$ is flat as a right $\mathbb ZA$-module and as a left $\mathbb ZB$-module. If $M$ is of type bi-${\rm FP}_n$ and $A$ is of type bi-${\rm FP}_{n-1}$, then $L$ is of type bi-${\rm FP}_n$. \end{Thm} \begin{proof} We have that $\mathbb ZL$ is flat as both a right and a left $\mathbb ZA$-module and as a right and a left $\mathbb ZM$-module by Proposition~\ref{p:hhnlike.bi.flat} (viewing $\mathbb ZL$ as a left $\mathbb ZA$-module via $\varphi$). Therefore, $\mathbb Z[L\times L^{op}]\cong \mathbb ZL\otimes \mathbb ZL^{op}$ is flat as both a right $\mathbb Z[M\times M^{op}]$-module and as a right $\mathbb Z[A\times A^{op}]$-module by Proposition~\ref{p:flatness.bi}. Applying Lemma~\ref{l:flat.base} and the hypotheses, we conclude that $\mathbb Z[L\times L^{op}]\otimes_{\mathbb Z[M\times M^{op}]} \mathbb ZM$ is of type ${\rm FP}_n$ and $\mathbb Z[L\times L^{op}]\otimes_{\mathbb Z[A\times A^{op}]}\mathbb ZA$ is of type ${\rm FP}_{n-1}$. The result now follows by applying Corollary~\ref{c:fp.resolved} to the exact sequence in Corollary~\ref{c:low.exact.hnnlike}, taking into account Proposition~\ref{p:tensor.id}, Proposition~\ref{p:bi.tensor.iso} and Proposition~\ref{p:basic.tensor.ids}. \end{proof} As an example, if $M$ is any group containing a copy of $\mathbb Z$ and $A=\mathbb N$, viewed as a submonoid of $M$, then since $\mathbb ZM$ is free as a module over the group ring of $\mathbb Z$, which in turn is flat over the monoid ring of $\mathbb N$, being a localization, we conclude that $\mathbb ZM$ is flat over the monoid ring of $\mathbb N$. One can similarly prove that if $L$ is an HNN-like Otto-Pride extension of $M$ with respect to a monomorphism $\varphi\colon A\to M$ such that $\mathbb ZM$ is flat as a right $\mathbb ZA$-module and as a left $\mathbb ZB$-module, where $B=\varphi(A)$, and if $M$ has Hochschild cohomological dimension at most $d$ and $A$ has Hochschild cohomological dimension at most $d-1$, then $L$ has Hochschild cohomological dimension at most $d$. We end this section by briefly explaining what happens for a different construction of HNN extensions of monoids, of the sort considered by Howie~\cite{Howie1963}. Suppose that $M$ is a monoid and $A,B$ are isomorphic submonoids via an isomorphism $\varphi\colon A\to B$. Let $C$ be an infinite cyclic group generated by $t$. The \emph{HNN extension} of $M$ with base monoids $A,B$ is the quotient $L$ of the free product $M\ast C$ by the congruence generated by the relations $at=t\varphi(a)$ for $a\in A$. In other words, $L=\langle M,t,t^{-1}\mid tt^{-1}=1=t^{-1} t, at=t\varphi(a), \forall a\in A\rangle$. The following results may be proved in a similar way to Theorems~\ref{t:ottopride.one.side} and~\ref{t:op.hnn.bi}, respectively, using suitably modified definitions of the Bass--Serre tree and Bass--Serre forest for these contexts. \begin{Thm}\label{t:hnn.official.one.side} Let $L$ be an HNN extension of $M$ with base monoids $A,B$. Suppose that, furthermore, $M$ is free as both a right $A$-set and a right $B$-set.
If $M$ is of type left-${\rm F}_n$ and $A$ is of type left-${\rm F}_{n-1}$, then $L$ is of type left-${\rm F}_n$. \end{Thm} \begin{Thm}\label{t:hnn.official.two.side} Let $L$ be an HNN extension of $M$ with base monoids $A,B$. Suppose that, furthermore, $M$ is free as both a right and a left $A$-set (via the inclusion) and as a right and a left $B$-set. If $M$ is of type bi-${\rm F}_n$ and $A$ is of type bi-${\rm F}_{n-1}$, then $L$ is of type bi-${\rm F}_n$. \end{Thm} Theorem~\ref{t:hnn.official.one.side} recovers the usual topological finiteness result for HNN extensions of groups. It also applies if $M$ is left cancellative and $A$ is a group. The analogue of Theorem~\ref{t:hnn.official.one.side} for left geometric dimensions states that if $M$ is free as both a right $A$-set and a right $B$-set, $M$ has left geometric dimension at most $d$ and $A$ has left geometric dimension at most $d-1$, then $L$ has left geometric dimension at most $d$. Theorem~\ref{t:hnn.official.two.side} applies if $M$ is cancellative and $A$ is a group. Similarly, if $M$ is free as both a right and a left $A$-set and as a right and a left $B$-set, $M$ has geometric dimension at most $d$ and $A$ has geometric dimension at most $d-1$, then $L$ has geometric dimension at most $d$.
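For a monoid example of Theorem~\ref{t:hnn.official.one.side}, one can again take $M=\{a\}^*\cong \mathbb N$ with $A=\{a^k\}^*$, $B=\{a^m\}^*$ and $\varphi(a^k)=a^m$: then $M$ is free as a right $A$-set and as a right $B$-set, so the HNN extension \[ L=\langle a,t,t^{-1}\mid tt^{-1}=1=t^{-1}t,\ a^kt=ta^m\rangle \] is of type left-${\rm F}_n$ for all $n$, that is, of type left-${\rm F}_{\infty}$, since $\mathbb N$ is of type left-${\rm F}_{\infty}$.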
\section{Introduction}\label{sec:introduction} Search engines play an important role in our everyday lives. One way to improve them is by getting a better understanding of user search behavior, such as clicks, dwell times, mouse movements, etc. So far, models of user behavior have focused on modeling and predicting single events, e.g., clicks~\citep{chuklin2015click} and mouse movements~\citep{huang-clicks-2011}, and properties of these events, e.g., time between clicks~\citep{borisov2016context}. In this paper, for the first time, we focus on modeling and predicting sequences of information interaction events and, in particular, sequences of clicks. Although people tend to make only one (or sometimes no) click on a \acf{SERP}, multi-click query sessions constitute a significant part of search traffic. For example, about 23\% of the query sessions in the Yandex relevance prediction challenge dataset contain multiple clicks (see \S\ref{sec:data} for details). It is commonly assumed that users traverse search results from top to bottom, which leads to the assumption that clicks are ordered by the position of search results. However, it was shown that in practice this assumption does not always hold, and that up to 27.9\%--30.4\% of multi-click sequences, depending on the dataset, are not ordered by position~\cite{wang2015incorporating}. We aim to create tools that help us understand, model and predict sequences of clicks on search engine results, which is important because it provides an opportunity for improving the user search experience. For example, knowing that a user is likely to click on many results, or that there is a high chance that the user will interact with the results in an order other than the one in which the results are presented on a \ac{SERP}, can be used by a search engine to proactively show advice or make a change in the ranking. We propose a \acfi{CSM} that predicts a probability distribution over click sequences. At the core of our model is a neural network with encoder-decoder architecture. We implement the encoder as a \ac{bi-RNN} that goes over the search engine result page from top to bottom and from bottom to top and outputs contextual embeddings of the results. We implement the decoder as a \ac{RNN} with an attention mechanism. The decoder is initialized with the final states of the forward and backward \acp{RNN} of the encoder. It is used to predict the sequence of positions of the clicked results. The whole network is trained by maximizing the likelihood of the observed click sequences. We evaluate our proposed \ac{CSM} using a publicly available click log and show that \ac{CSM} provides a good means of generating a short list of $K$ click sequences that contains the observed click sequence with high probability. We present an analysis of the performance of \ac{CSM} for query sessions with different numbers of clicks and query sessions in which clicks are ordered/not ordered by position. We measure the performance of \ac{CSM} on two new tasks: predicting the number of clicks and predicting ordered/unordered sequences of clicks. Additionally, we show that \ac{CSM} achieves state-of-the-art results on the standard click prediction task, which allows us to compare \ac{CSM} to traditional click models that model and predict single events, namely clicks. Overall, we make the following contributions: \begin{itemize} \item We formulate a novel problem of predicting click sequences. \item To solve this problem, we propose a \acf{CSM} based on neural networks.
\item We evaluate \ac{CSM} on a range of prediction tasks, namely predicting click sequences, predicting the number of clicks, predicting ordered/unordered sequences of clicks and, finally, predicting clicks themselves.
\end{itemize}
\noindent%
As to the potential impact of the proposed \ac{CSM} model, we believe it can be used to predict that
\begin{inparaenum}[(i)]
\item a user will click on more than one result, which may indicate that a user has a complex information need~\citep{kravi-one-2016}; or that
\item a user will interact with the results not in the order in which these results are presented on the SERP, which may indicate that a user is struggling and there are problems in the ranking of the results~\citep{odijk-struggling-2015}.
\end{inparaenum}
\ac{CSM} can help us identify queries for which there is room for improvement (in terms of user experience) and it can serve as a quick analysis tool to interpret how a particular change in the ranking of the results will influence user click behavior.
The rest of the paper is structured as follows. In Section~\ref{sec:problem statement} we provide a precise statement of the click sequence modeling and prediction problems that we are tackling. Section~\ref{sec:method} introduces our neural network based model for predicting click sequences. In Section~\ref{sec:experimental setup} we describe the setup of our experiments and Section~\ref{sec:results} presents the results of those experiments. We describe related work in Section~\ref{sec:related work} and conclude in Section~\ref{sec:conclusion and future work}.
\section{Problem Statement}\label{sec:problem statement}
In this section, we formulate the problem of click sequence prediction (\S\ref{sec:problem}) and propose three prediction tasks that can be solved by a model capable of predicting click sequences (\S\ref{sec:prediction tasks}).
\subsection{Problem}\label{sec:problem}
Since the number and order of clicks may vary even for the same query and ranking of results (e.g., due to different users and contexts), there exists no unique \emph{correct} click sequence, but a (possibly infinite) set of \emph{probably correct} sequences does exist. Therefore, the main goal of this paper is to build a model that, given a query and a ranking of results, describes these probably correct click sequences. To achieve this goal, we define the \emph{click sequence prediction problem} as follows. First, we learn a probability distribution $\mathcal{M}$ over all possible click sequences. Second, we use this learned distribution to obtain the $K$ most probable click sequences. These $K$ sequences are then used to reason about the properties of the set of probably correct sequences mentioned above, e.g., predicting the expected number of clicks, the expected order of clicks, etc. More formally, we define a click sequence model $\mathcal{M}$ as follows:
\begin{equation}
\mathcal{M}: \quad P(s \mid q, r_1, \dots, r_N),
\label{eq:problem}
\end{equation}
where $q$ is a query, $r_1, \dots, r_N$ is an ordered list of results and $s = (p_1, \dots, p_{|s|})$ is a sequence of positions of the clicked results.
\subsection{Prediction tasks}\label{sec:prediction tasks}
There are many possible applications for a model~$\mathcal{M}$ that is capable of
\begin{inparaenum}[(i)]
\item predicting a probability distribution over click sequences (Eq.~\ref{eq:problem}), and
\item retrieving the $K$ most probable click sequences.
\end{inparaenum}
It can be used to simulate user behavior, which is important in \emph{online learning to rank} research~\citep{hofmann2013balancing,schuth2016multileave}, or as a tool for analyzing how a particular change in the ranking of results will influence user click behavior. However, we do not investigate these applications in this work. Instead, we address three tasks that are both practically useful and help to evaluate the performance of the model~$\mathcal{M}$.
\smallskip\noindent
\textbf{Task 1 (predicting the number of clicks).} The goal of this task is to predict on how many results a user will click. Clicking on more than one result might indicate that a user has a complex information need. Clicking on more than three or four results might indicate that a user is struggling or doing an exhaustive search~\citep{hassan-awadallah-struggling-2014,odijk-struggling-2015}. Both signals can be used by a search system to proactively provide advice or adjust the ranking. Thus, we formally define Task~1 as predicting whether a user will click on $\le L$ results. To estimate the probability of clicking on $\le L$ results, we generate the $K$ (e.g., $K = 1024$) most probable click sequences $s_1, \dots, s_{K}$ and marginalize over those sequences that have $\le L$ clicks:
\begin{equation}
P(|s| \le L) = \sum_{s \in \{s_1, \dots, s_K\}} P(s) \mathbbm{1}[|s| \le L].
\label{eq:estimating probability of <= L clicks}
\end{equation}
\smallskip\noindent
\textbf{Task 2 (predicting non-consecutive click sequences).} The goal of this task is to predict whether a user will interact with results in the order these results are presented on a \ac{SERP} or in a different order, which we refer to as a \textit{non-consecutive} order. Interacting with results in a non-consecutive order might indicate that a user is struggling~\citep{scaria-last-2014}. As mentioned in Task~1, such a signal can be used by a search engine to proactively provide advice or adjust the ranking. Similarly to Task 1, we estimate the probability of clicking on results in a non-consecutive order by summing the probabilities of the $K$ most probable click sequences $s_1, \dots, s_K$ according to $\mathcal{M}$ in which a user clicks on a result $r_i$ after clicking on a result $r_j$ located below $r_i$ ($i < j$):
\begin{equation}
P(\downarrow\mathrel{\mspace{-1mu}}\uparrow) = \sum_{s \in \{s_1, \dots, s_K\}} P(s) \mathbbm{1}[s \text{ is non-consecutive}].
\label{eq:estimating probability that click sequence will be ordered by position}
\end{equation}
\smallskip\noindent
\textbf{Task 3 (predicting clicks).} The last task is a standard task solved by click models~\citep{chuklin2015click}. The goal is to predict the subset of the presented results~$r_1, \dots, r_{N}$ on which a user will click. Being able to predict that a user will not interact with a subset of results opens the door for reranking~\citep{yandex-personalized-2014}. Similarly to Tasks 1 and 2, we estimate the click probability for position $p$ by summing the probabilities of the $K$ most probable click sequences $s_1, \dots, s_K$ according to $\mathcal{M}$ in which a user clicks on that position:
\begin{equation}
P_{\text{click}}(p \mid q, r_1, \dots, r_N) = \sum_{s \in \{s_1, \dots, s_K\}} P(s) \mathbbm{1}[p \in s].
\label{eq:estimating unconditional click probabilities}
\end{equation}
In practice, it may be preferable to use simpler models to predict clicks, so we use this task mainly to compare with existing work.
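The three estimators in Eqs.~\ref{eq:estimating probability of <= L clicks}--\ref{eq:estimating unconditional click probabilities} share the same structure: sum the probabilities of those among the $K$ most probable sequences that satisfy an event-specific predicate. A minimal Python sketch of this marginalization (the representation of sequences as lists of positions and all helper names below are our own illustration, not part of the model):
\begin{verbatim}
# Estimate the probability of an event by marginalizing over the
# K most probable click sequences and their model probabilities.
def marginalize(sequences, probs, predicate):
    return sum(p for s, p in zip(sequences, probs) if predicate(s))

def is_non_consecutive(s):
    # True if some click lands above an earlier click (Task 2);
    # "ordered by position" means positions never decrease.
    return any(later < earlier for earlier, later in zip(s, s[1:]))

# Task 1: P(|s| <= L); Task 2: P(non-consecutive); Task 3: P(click at p).
def p_at_most_L(seqs, probs, L):
    return marginalize(seqs, probs, lambda s: len(s) <= L)

def p_non_consecutive(seqs, probs):
    return marginalize(seqs, probs, is_non_consecutive)

def p_click_at(seqs, probs, pos):
    return marginalize(seqs, probs, lambda s: pos in s)
\end{verbatim}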
We expect the results for a good model~$\mathcal{M}$ not to be much worse than the results for click models specifically developed for this task~\cite{chuklin2015click}. In fact, as we show in \S\ref{sec:results}, \ac{CSM} achieves state-of-the-art performance on this task.
\section{Method}\label{sec:method}
In this section we propose the \acfi{CSM}, a model for predicting click sequences. We use $s$ to denote a sequence of positions of the clicked results with a special \ac{EOS} token appended to it, i.e., $s = (p_1, \dots, p_k, \text{EOS})$. \ac{CSM} is a neural network that is trained to maximize the likelihood of observed click sequences:
\begin{equation}
\mathcal{L}(s_1, \dots, s_{|S|}) \rightarrow \max_{\Theta},
\label{eq:maximize_likelihood}
\end{equation}
where $\Theta$ denotes the network parameters and $S = (s_1, \dots, s_{|S|})$ denotes the click sequences used for training.
The network consists of two parts, called \emph{encoder} and \emph{decoder}. The encoder takes a user's query~$q$ and a list of search engine results $r_1, \dots, r_N$ as input and computes embeddings of the results, $\mathbf{r}_1, \dots, \mathbf{r}_N$. The embedded results $\mathbf{r}_1, \dots, \mathbf{r}_N$ are passed to the decoder, which at each timestep $t=0, 1, \dots$ outputs a probability distribution over $N+1$ positions. Positions $1, \dots, N$ correspond to clicking on the results $r_1, \dots, r_N$. The $(N + 1)$-th position corresponds to predicting that there will be no clicks (\ac{EOS}). Upon observing a click at timestep~$t$, the decoder updates its current state using the position $p_t$ of the clicked result. Figure~\ref{fig:sequence_diagram} illustrates the workflow in the form of a UML sequence diagram.
\begin{figure}[h!]
\includegraphics[width=\linewidth]{data/sequence_diagram.png}
\caption{Modeling click sequences with \ac{CSM}.}
\label{fig:sequence_diagram}
\end{figure}
In \S\ref{sec:network_architecture} we discuss the implementation of the encoder and decoder. Then, in \S\ref{sec:beam_search} we explain how to achieve the main goal of this study, i.e., predicting the $K$ most probable click sequences using \ac{CSM}. Finally, in \S\ref{sec:training} we specify training details.
\subsection{Network architecture}\label{sec:network_architecture}
\smallskip\noindent
\textbf{Encoder.} The aim of the encoder is to obtain information from a user's query~$q$ and the results~$r_1, \dots, r_N$ presented on a \ac{SERP}, and pass this information to the decoder. We represent the encoded information as a list of embeddings~$\mathbf{q}, \mathbf{r}_1, \dots, \mathbf{r}_N$, where each result embedding~$\mathbf{r}_i$ should contain information about:%
\begin{inparaenum}[(i)]
\item the result~$r_i$,
\item the results surrounding $r_i$ and
\item the query~$q$.
\end{inparaenum}
Below we sometimes use $\mathbf{r}_0$ instead of $\mathbf{q}$ to simplify the notation.
We propose to implement the encoder as a bidirectional recurrent neural network, which goes over the \ac{SERP} in top-down order, i.e., $q, r_1, \dots, r_N$, and in the reverse order, i.e., $r_N, \dots, r_1, q$. The first, the \emph{forward RNN}, produces embeddings $\overrightarrow{\mathbf{q}} (=\overrightarrow{\mathbf{r}_{0:0}}), \overrightarrow{\mathbf{r}_{0:1}}, \dots, \overrightarrow{\mathbf{r}_{0:N}}$. The second, the \emph{backward RNN}, produces embeddings $\overleftarrow{\mathbf{r}_{N:N}}$, \dots, $\overleftarrow{\mathbf{r}_{N:1}}$, $\overleftarrow{\mathbf{q}} (= \overleftarrow{\mathbf{r}_{N:0}})$.
These embeddings are concatenated to form the final embeddings $\mathbf{q} (=\mathbf{r}_0), \mathbf{r}_1, \dots, \mathbf{r}_N$ produced by the encoder.
We represent $q, r_1, \dots, r_N$ using the best-performing behavioral features proposed in~\citep{borisov2016neural}. These features count the number of times a particular click pattern, i.e., a set of positions of the clicked results, was observed on a \ac{SERP}. A query $q$ is represented as a $2^N$-dimensional vector, where each component counts the number of times a click pattern was observed in query sessions generated by $q$. The representation of a search result $r$ consists of two parts, both of size $N \cdot 2^N$. The components of the first part count, for each position $p=1, \dots, N$, the number of times a click pattern was observed in query sessions in which $r$ appears on position $p$. The components of the second part are similar, but include only query sessions generated by $q$. We apply a linear transformation to these sparse behavioral features to obtain the embeddings $\mathbf{x}_0, \dots, \mathbf{x}_N$ of $q, r_1, \dots, r_N$, which are passed to the \acp{RNN}.
We describe the encoder formally using Eqs.~\ref{eq:encoder_features}--\ref{eq:encoder_concatenation}:
\begin{eqnarray}
\mathbf{x}_i &=& \begin{cases} \text{Embed}(q) & i=0 \\ \text{Embed}(r_i) & i=1, \dots, N \end{cases} \label{eq:encoder_features} \\
\overrightarrow{\mathbf{r}_{0:0}}, \dots, \overrightarrow{\mathbf{r}_{0:N}} &=& \text{RNN}_{\text{forward}}(\mathbf{x}_0, \dots, \mathbf{x}_N) \label{eq:encoder_forward_rnn} \\
\overleftarrow{\mathbf{r}_{N:N}}, \dots, \overleftarrow{\mathbf{r}_{N:0}} &=& \text{RNN}_{\text{backward}}(\mathbf{x}_0, \dots, \mathbf{x}_N) \label{eq:encoder_backward_rnn} \\
\mathbf{r}_i &=& [\overrightarrow{\mathbf{r}_{0:i}}, \overleftarrow{\mathbf{r}_{N:i}}] \qquad (i = 0, \dots, N) \label{eq:encoder_concatenation}
\end{eqnarray}
\smallskip\noindent
\textbf{Decoder.} The aim of the decoder is to predict a probability distribution over $(N+1)$ positions at each timestep. (As mentioned at the start of \S\ref{sec:method}, the $(N+1)$-th position corresponds to predicting that there will be no clicks.) To make a good prediction at timestep $(t+1)$, we need to incorporate into the decoder the information about the position $p_t$ of the result clicked at timestep~$t$.
We propose to implement the decoder as an \ac{RNN} that at each timestep~$t=0, 1, \dots$ outputs a vector~$\mathbf{o}_t$ used to predict the probability distribution, and updates its hidden state using the position~$p_t$ of the observed click at timestep~$t$. We also use an attention mechanism~\citep{bahdanau2014neural} to help the decoder extract the most relevant information from the list of embeddings~$\mathbf{r}_0, \dots, \mathbf{r}_N$ at each timestep.
We initialize the hidden state of the decoder \ac{RNN} using the concatenation of the final states of the forward and backward \acp{RNN} of the encoder, $[\overrightarrow{\mathbf{r}_{0:N}}, \overleftarrow{\mathbf{r}_{N:0}}]$, passed through a linear transformation; we use $W_\text{init}$ to denote the transformation matrix.
To obtain the probability distribution over $(N+1)$ positions at timestep~$t$, we concatenate the vector $\mathbf{o}_t$ predicted by the decoder \ac{RNN} and the attention vector $\mathbf{a}_t$ computed at timestep~$t$, and pass the result through a linear transformation $W_\text{output}$ followed by softmax.\footnote{ Using the output of an \ac{RNN} together with an attention vector has been shown to improve prediction performance~\citep{bahdanau2014neural, pascanu2013construct}. In the literature, this idea is known as \emph{deep output}~\citep{pascanu2013construct}.}
We represent the position~$p_t$ of the observed click at timestep~$t$ as a one-hot vector~$\mathbf{p}_t$ of size~$N$, and apply a linear transformation~$W_{pos}$ to it before passing it to the decoder \ac{RNN}.
We describe the decoder formally using Eqs.~\ref{eq:decoder_init}--\ref{eq:decoder_softmax}:
\begin{eqnarray}
\mathbf{s}_0 &=& W_{\text{init}} [\overrightarrow{\mathbf{r}_{0:N}}, \overleftarrow{\mathbf{r}_{N:0}}] \label{eq:decoder_init} \\
\mathbf{a}_{t+1} &=& \text{Attention}(\mathbf{s}_t, [\mathbf{r}_0, \dots, \mathbf{r}_N]) \label{eq:decoder_attention} \\
\mathbf{s}_{t+1}, \mathbf{o}_{t+1} &=& \text{RNN}_{\text{step}}(\mathbf{s}_t, \mathbf{a}_t, W_{pos} \mathbf{p}_t) \label{eq:decoder_step} \\
P(p_{t+1} \mid \dots) &=& \text{Softmax}\left( W_\text{output}[\mathbf{o}_{t+1}, \mathbf{a}_{t+1}] \right) \label{eq:decoder_softmax}
\end{eqnarray}
To alleviate the \emph{vanishing gradient problem}~\citep{bengio1994learning}, we use \acp{GRU} in both the forward and backward \acp{RNN} of the encoder, and in the decoder \ac{RNN}.
\subsection{Beam search}\label{sec:beam_search}
As stated in~\S\ref{sec:problem}, our main goal is to predict the $K$ most probable click sequences for a given query and search results. These $K$ sequences are then used to reason about actual user click behavior, i.e., sequences of clicks that a user could actually perform for the given query and search results (we call them \textit{probably correct} sequences, see~\S\ref{sec:problem}).
\ac{CSM} defines a probability distribution over infinitely many click sequences. Extracting the $K$ most probable sequences in this case is not straightforward, since we cannot simply go over all sequences and pick the $K$ best ones. We need a means of generating the $K$ most probable sequences without having to calculate the probability of every possible click sequence. To do that, we suggest using beam search~\cite{graves2012sequence}. In our experiments, we use $K \le 1024$ and a \emph{beam size} of $K$. Setting the beam size to $K$ guarantees that the $K$ sequences generated by beam search have the highest probabilities according to \ac{CSM}, i.e., they are indeed the most probable click sequences according to \ac{CSM}. Using a smaller beam size allows us to generate $K$ sequences faster, but does not guarantee that these sequences are the most probable ones.
\subsection{Training}\label{sec:training}
We learn the parameters~$\Theta$ of the \ac{CSM} network (both the encoder and decoder parts) by maximizing the log-likelihood of observed click sequences. We optimize these parameters using \ac{SGD}. The learning rates for each parameter are adjusted according to the Adam~\citep{kingma2015adam} algorithm using the default values of $\beta_1 = 0.9$, $\beta_2 = 0.999$ and $\varepsilon = 10 ^ {-8}$.
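For concreteness, the objective behind Eq.~\ref{eq:maximize_likelihood} decomposes into a sum of log-probabilities assigned by Eqs.~\ref{eq:decoder_init}--\ref{eq:decoder_softmax} to the observed click positions (including the final \ac{EOS}). The NumPy sketch below implements a single decoder step with toy random parameters; the additive attention form and the choice of using the \ac{GRU} state as the output vector $\mathbf{o}_t$ are our assumptions, as the equations above do not fix these details:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)
d, N = 8, 10                     # toy state size and number of results
R = [rng.normal(size=d) for _ in range(N + 1)]  # encoder outputs r_0..r_N

def softmax(z):
    e = np.exp(z - z.max()); return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Additive attention over encoder outputs (Bahdanau-style; an assumption).
Wa, Ua, v = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)
def attention(s):
    scores = np.array([v @ np.tanh(Wa @ s + Ua @ r) for r in R])
    return softmax(scores) @ np.stack(R)        # context vector a_t

# One GRU step, standard gating; input x = [a_t, W_pos p_t] has size 2d.
W = {k: rng.normal(size=(d, 2 * d)) for k in 'zrh'}
U = {k: rng.normal(size=(d, d)) for k in 'zrh'}
def gru_step(x, h):
    z = sigmoid(W['z'] @ x + U['z'] @ h)
    r = sigmoid(W['r'] @ x + U['r'] @ h)
    h_new = np.tanh(W['h'] @ x + U['h'] @ (r * h))
    return (1.0 - z) * h + z * h_new

W_out = rng.normal(size=(N + 1, 2 * d))  # deep output over N results + EOS
W_pos = rng.normal(size=(d, N))          # embeds the one-hot click position

s = rng.normal(size=d)                   # decoder state (s_0 comes from W_init)
p_prev = np.zeros(N)                     # one-hot of the last observed click
a = attention(s)
s = gru_step(np.concatenate([a, W_pos @ p_prev]), s)  # o_t taken to equal s
probs = softmax(W_out @ np.concatenate([s, a]))       # P(p_{t+1} | ...)
nll = -np.log(probs[3])  # one term of the negative log-likelihood
\end{verbatim}
Summing such terms over the positions of one observed sequence plus \ac{EOS}, and over all training sessions, yields the (negated) objective that Adam minimizes.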
We also use \emph{gradient clipping}~\citep{pascanu2013difficulty} with the norm set to \num{1} to alleviate the \emph{exploding gradient problem}~\citep{bengio1994learning}; this complements the \acp{GRU}, which address the vanishing gradient problem.
\section{Experimental setup}\label{sec:experimental setup}
In this section we describe our experimental setup. We start by describing the data we use to conduct our experiments (\S\ref{sec:data}). Then we discuss our evaluation methodology (\S\ref{sec:evaluation_methodology}), formulate research questions (\S\ref{sec:research_questions}) and list the experiments we run to answer these research questions (\S\ref{sec:experiments}).
\subsection{Data}\label{sec:data}
We use the Yandex Relevance Prediction dataset\footnote{\url{https://academy.yandex.ru/events/data_analysis/relpred2011/} (last visited \today)} released in 2011 by Yandex, the major search engine in Russia. The dataset consists of \num{146278823} query sessions ordered by time. We use the first half of the dataset for training \ac{CSM}, and \num{100000} randomly selected query sessions from the second half of the dataset for evaluation. \citet{borisov2016neural} also use the first half of the dataset for training, which allows a direct comparison with their work. The statistics about the number of query sessions in the test set split by the number and order of clicks are given in Table~\ref{table:dataset_statistic}.
\begin{table}
\centering
\caption{The number of query sessions in the test set split by the number and order of clicks. By ordered click sequences we mean those where a user clicks on results in the order they appear on a SERP. The total number of click sequences in the test set is \num{100000}.}
\label{table:dataset_statistic}
\begin{tabular}{l r r}
\toprule
Number of clicks & Ordered sequences & Unordered sequences \\
\midrule
0 & \num{30466} & 0 \\
1 & \num{46550} & 0 \\
2 & \num{8851} & \num{2143} \\
3 & \num{3437} & \num{1856} \\
4 & \num{1564} & \num{1290} \\
5 & \num{751} & \num{814} \\
6 & \num{407} & \num{512} \\
7 & \num{244} & \num{305} \\
8 & \num{156} & \num{195} \\
9 & \num{85} & \num{137} \\
10+ & \num{73} & \num{164} \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Evaluation methodology}\label{sec:evaluation_methodology}
To properly evaluate the proposed \ac{CSM} model, we would need to know all (or at least a sample of) the probably correct click sequences for each test query and its search results. Then we could measure how well the $K$ most probable sequences predicted by \ac{CSM} describe the properties of the known probably correct sequences. In practice, however, we observe only one (or a few) of all probably correct click sequences and, therefore, we cannot directly reason about their properties. The best we can do is to check whether the observed click sequence appears in the list of $K$ sequences predicted by \ac{CSM}. In particular, we measure recall@K, i.e., the fraction of query sessions for which \ac{CSM} includes the observed click sequence in the list of $K$ most probable sequences.
Since \ac{CSM} is the first model for predicting click sequences, there are no baselines to compare against. However, we can use as a reference level the percentage of query sessions in which users interact with results in the order these results are presented on a \ac{SERP}. In our test data, this percentage equals \num{92.73}\%. This is an upper bound under the assumption that a user scans search results sequentially from top to bottom.
This means that a model that predicts only click sequences ordered by position can include the observed click sequence in its list of $K$ most probable sequences for at most \num{92.73}\% of query sessions.
\smallskip\noindent
\textbf{Task 1.} Predicting whether a user will click on $\le L$ results is a new task and, hence, there are no standard metrics to evaluate performance on this task and no existing baselines to compare to. We propose to evaluate the performance on this task using perplexity and, because this is a classification problem for a fixed $L$, \ac{AUC}. Perplexity measures how ``surprised'' a model is upon observing $\le L$ clicks. \ac{AUC} measures the model's discriminative power. We use a naive baseline that predicts that a user will make $\le L$ clicks with a constant probability calculated on the training set. The \ac{AUC} of such a method is $0.5$.
\smallskip\noindent
\textbf{Task 2.} Predicting whether a click sequence will be ordered by position is also a new task and, similarly to Task 1, there are no standard metrics to evaluate the performance on this task and no existing baselines to compare to. Similarly to Task 1, we use perplexity and \ac{AUC}. Our naive baseline predicts that a click sequence will be ordered by position with a constant probability calculated on the training set. The \ac{AUC} of such a method is also $0.5$, as in Task~1.
\smallskip\noindent
\textbf{Task 3.} Following~\citep{dupret2008user,grotov2015comparative,guo2009click,borisov2016neural,wang2015incorporating}, we evaluate the performance on the standard click prediction task using perplexity, which measures how ``surprised'' a model is upon observing an unordered set of clicks on search engine results. We use the following click models as our baselines: \ac{DBN}~\citep{chapelle2009dynamic}, \ac{DCM}~\citep{guo2009efficient}, \ac{CCM}~\citep{guo2009click}, \ac{UBM}~\citep{dupret2008user} and \ac{NCM}~\citep{borisov2016neural}. \citet{borisov2016neural} use the same data for training these click models, which allows us to compare with the results reported in their work.
\subsection{Research questions}\label{sec:research_questions}
We aim to answer the following research questions:
\smallskip\noindent
\begin{enumerate}[itemsep=0pt,topsep=0pt,itemindent=0pt, label=\bfseries RQ\arabic*]
\item How well does \ac{CSM}, described in \S\ref{sec:method}, predict probably correct click sequences?
\begin{enumerate}[itemsep=0pt,topsep=0pt]
\item For how many query sessions does the observed click sequence occur in the list of $K$ most probable click sequences predicted by \ac{CSM}? How fast does this number increase with~$K$?
\item How well does \ac{CSM} perform for query sessions in which clicks
\begin{inparaenum}[(i)]
\item follow the order in which results are presented on a \ac{SERP}, and
\item do not follow the order in which results are presented on a \ac{SERP}?
\end{inparaenum}
\item How well does \ac{CSM} perform for query sessions with different numbers of clicks?
\item Do the $K$ most probable click sequences provide good means to reason about the probability distribution over click sequences predicted by \ac{CSM}?
\end{enumerate}
\item How well does \ac{CSM} predict the number of clicks on search results (see Task~1 in \S\ref{sec:prediction tasks})?
\item How well does \ac{CSM} predict whether or not a user will click on results in the order they are presented on a SERP (see Task~2 in \S\ref{sec:prediction tasks})?
\item How well does \ac{CSM} predict clicks on search results (see Task~3 in \S\ref{sec:prediction tasks})? Does it reach the performance of the state-of-the-art click models?
\end{enumerate}
\subsection{Experiments}\label{sec:experiments}
We design our experiments to answer our research questions.
\smallskip\noindent%
\textbf{E1(a).} To answer RQ1(a), we measure the percentage of query sessions for which the observed click sequence occurs in the list of the $K$ most probable click sequences according to \ac{CSM}. We use $K = \{1, 2, 3, \dots, 1024\}$.
\smallskip\noindent%
\textbf{E1(b).} To answer RQ1(b), we measure the percentage of query sessions for which the observed click sequence occurs in the list of the $K$ most probable click sequences according to \ac{CSM} separately
\begin{inparaenum}[(i)]
\item for query sessions in which clicks are ordered by position, and
\item for query sessions in which clicks are not ordered by position.
\end{inparaenum}
We use $K = \{1, 2, 3, \dots, 1024\}$.
\smallskip\noindent%
\textbf{E1(c).} To answer RQ1(c), we measure the percentage of query sessions for which the observed click sequence occurs in the list of the $K$ most probable click sequences according to \ac{CSM} for query sessions with $\le L$ clicks. We use $K = \{1, 2, 3, \dots, 1024\}$ and $L=\{1, 2, 3, 4, 5\}$.
\smallskip\noindent%
\textbf{E1(d).} To answer RQ1(d), we compute the total probability of the $K$ most probable click sequences according to \ac{CSM}. If this probability is close to \num{1}, we conclude that using the $K$ most probable click sequences is enough to form a representative empirical distribution over click sequences and, thus, that the $K$ most probable click sequences provide good means to reason about the properties of the probability distribution over click sequences predicted by \ac{CSM}. If the total probability mass of the $K$ most probable click sequences is small, we conclude that using these sequences is not enough to reason about the probability distribution over click sequences predicted by \ac{CSM}. We use $K = \{1, 2, 3, \dots, 1024\}$.
\smallskip\noindent%
\textbf{E2.} To answer RQ2, we compute probabilities of clicking on $\le L$ results by marginalizing over the $K$ most probable click sequences according to \ac{CSM} (see Eq.~\ref{eq:estimating probability of <= L clicks}). We use these probabilities to compute perplexity and \ac{AUC}. We use $K=1024$ and $L=\{1, 2, 3, 4, 5\}$.
\smallskip\noindent%
\textbf{E3.} To answer RQ3, we compute the probability that a user will click on results in the order these results are presented on a \ac{SERP} by marginalizing over the $K$ most probable click sequences according to \ac{CSM} (see Eq.~\ref{eq:estimating probability that click sequence will be ordered by position}). We use this probability to compute perplexity and \ac{AUC} with $K = 1024$.
\smallskip\noindent%
\textbf{E4.} To answer RQ4, we compute probabilities of clicking on each result by marginalizing over the $K$ most probable click sequences according to \ac{CSM} (see Eq.~\ref{eq:estimating unconditional click probabilities}). We use these probabilities to compute perplexities for each position and average these perplexity values over positions to obtain the final score. We use $K = 1024$.
\smallskip\noindent%
In our experiments, we use embeddings of size \num{256} and the same number of \ac{GRU} units in all \acp{RNN}. We train \ac{CSM} using \acl{SGD} with mini-batches of 64 query sessions and the parameters specified in~\S\ref{sec:training}.
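All of the experiments above rely on extracting the $K$ most probable sequences with beam search (\S\ref{sec:beam_search}). The following sketch outlines that procedure; the \texttt{step} interface, which wraps the per-step distribution of Eq.~\ref{eq:decoder_softmax}, is our own abstraction:
\begin{verbatim}
import math

def beam_search(step, init_state, eos, K, max_len=20):
    # step(state, last_pos) -> (probs, new_state): probs is the model's
    # distribution over N result positions plus EOS for the next timestep.
    beams = [(0.0, [], init_state, False)]  # (log-prob, seq, state, done)
    for _ in range(max_len):
        if all(done for *_, done in beams):
            break
        candidates = []
        for logp, seq, state, done in beams:
            if done:                        # finished hypotheses compete as-is
                candidates.append((logp, seq, state, True))
                continue
            probs, new_state = step(state, seq[-1] if seq else None)
            for pos, p in enumerate(probs):
                if p > 0.0:
                    candidates.append((logp + math.log(p),
                                       seq if pos == eos else seq + [pos],
                                       new_state, pos == eos))
        beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:K]
    return [(seq, math.exp(logp)) for logp, seq, _, done in beams if done]
\end{verbatim}
Because extending a sequence can only decrease its probability, a beam of size $K$ gives the exactness property discussed in \S\ref{sec:beam_search}.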
\section{Results}\label{sec:results} In this section we present the results of the experiments described in \S\ref{sec:experiments} and provide answers to the research questions stated in \S\ref{sec:research_questions}. \subsection{Experiment 1(a)}\label{sec:results of experiment 1 (a)} Figure~\ref{fig:recall at K} shows recall at different values of $K$ (i.e., the percentage of query sessions for which the observed click sequence occurs in the list of $K$ most probable sequences predicted by \ac{CSM}) in linear and logarithmic scales. The percentage of query sessions in which clicks are ordered by position equals $92.73$\%. We show it on the plots as a reference level. \begin{figure}[h!] \begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture} \begin{axis}[ width=\linewidth, height=50mm, grid=major, grid style={dashed,gray!30}, xmin=1, xmax=1024, xlabel=$K$ (linear scale), ylabel=recall, ylabel near ticks, legend style={at={(1, 0)}, anchor=south east}, legend cell align=left, ] \addplot[mark=none, color=blue,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls.csv}; \addplot[color=red, dashed, very thick] coordinates {(1, 0.9273) (1024, 0.9273)}; \legend{CSM, Ordered by position} \end{axis} \end{tikzpicture} \label{fig:recall at K (linear scale)} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture} \begin{axis}[ width=\linewidth, height=50mm, grid=major, grid style={dashed,gray!30}, xmode=log, log basis x=2, xmin=1, xmax=1024, xlabel=$K$ (log scale), ylabel=recall, ylabel near ticks, legend style={at={(1, 0)}, anchor=south east}, legend cell align=left, ] \addplot[mark=none, color=blue,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls.csv}; \addplot[color=red, dashed, very thick] coordinates {(1, 0.9273) (1024, 0.9273)}; \legend{CSM, Ordered by position} \end{axis} \end{tikzpicture} \label{fig:recall at K (log scale)} \end{subfigure} \caption{Percentage of query sessions for which the observed click sequence occurs in the list of $K$ most probable click sequences predicted by \ac{CSM} (blue, solid) and percentage of click sequences ordered by position (red, dashed).} \label{fig:recall at K} \end{figure} We find that for \num{47.24}\% of query sessions, \ac{CSM} assigns the highest probability to the observed click sequence. For \num{62.76}\% of query sessions, the observed sequence appears in the list of two sequences with the highest probabilities according to \ac{CSM}. Since the curve on the logarithmic scale is concave (see Figure~\ref{fig:recall at K}, bottom plot), we conclude that recall of \ac{CSM} increases slower than logarithmically with~$K$. The percentage of query sessions in which clicks are ordered by position can be seen as an upper bound under the assumption that a user scans search results sequentially from top to bottom. \ac{CSM} does not make this assumption, and, as a result, is able to reach and surpass this upper bound, achieving \num{96.26}\% recall at $K=1024$. Answering RQ1(a), we conclude that recall of \ac{CSM} increases slower than logarithmically with $K$, starting from \num{47.24}\% at $K=1$ and reaching \num{96.26}\% at $K=1024$, which is higher than recall under the sequential assumption ($92.73$\%). 
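For reference, the recall@K reported throughout this section reduces to a membership test of the observed sequence in the model's top-$K$ list; a straightforward sketch (the data layout is our own):
\begin{verbatim}
def recall_at_k(observed, predicted, k):
    # observed: one click sequence per query session (e.g., as tuples);
    # predicted: per session, sequences ranked by model probability.
    hits = sum(obs in top[:k] for obs, top in zip(observed, predicted))
    return hits / len(observed)
\end{verbatim}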
\subsection{Experiment 1(b)}\label{sec:results of experiment 1 (b)}
Figure~\ref{fig:recall at K for ordered and unordered groups} shows recall at different values of $K$ (in linear and logarithmic scales) for
\begin{inparaenum}[(i)]
\item all query sessions (black, solid),
\item query sessions in which clicks are ordered by position (blue, solid), and
\item query sessions in which clicks are not ordered by position (red, solid).
\end{inparaenum}
Dashed lines show the percentages of query sessions in the corresponding groups in which clicks on results happen in the order these results are presented on a \ac{SERP}. Obviously, the second group has 100\% of query sessions with clicks ordered by position (and, hence, its blue dashed line lies at $1$), while the third group has no such sessions (and, hence, its red dashed line lies at~$0$).
\begin{figure}[h!]
\begin{subfigure}[b]{0.5\textwidth}
\begin{tikzpicture}
\begin{axis}[ width=\linewidth, height=50mm, grid=major, grid style={dashed,gray!30}, xmin=1, xmax=1024, xlabel=$K$ (linear scale), ylabel=recall, ylabel near ticks, legend style={at={(0.5,-.35)},anchor=north}, legend cell align=left, ]
\addplot[mark=none, color=black,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls.csv};
\addplot[color=black, dashed, very thick] coordinates {(1, 0.9273) (1024, 0.9273)};
\addplot[mark=none, color=blue,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls_in_ordered_group.csv};
\addplot[color=blue, dashed, very thick] coordinates {(1, 1.0) (1024, 1.0)};
\addplot[mark=none, color=red,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls_in_unordered_group.csv};
\addplot[color=red, dashed, very thick] coordinates {(1, 0.0) (1024, 0.0)};
\end{axis}
\end{tikzpicture}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\begin{tikzpicture}
\begin{axis}[ width=\linewidth, height=50mm, grid=major, grid style={dashed,gray!30}, xmode=log, log basis x=2, xmin=1, xmax=1024, xlabel=$K$ (log scale), ylabel=recall, ylabel near ticks, legend style={at={(1, -.75)}, anchor=south east}, legend cell align=left, ]
\addplot[mark=none, color=black,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls.csv};
\addplot[color=black, dashed, very thick] coordinates {(1, 0.9273) (1024, 0.9273)};
\addplot[mark=none, color=blue,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls_in_ordered_group.csv};
\addplot[color=blue, dashed, very thick] coordinates {(1, 1.0) (1024, 1.0)};
\addplot[mark=none, color=red,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls_in_unordered_group.csv};
\addplot[color=red, dashed, very thick] coordinates {(1, 0.0) (1024, 0.0)};
\legend{All query sessions, , Query sessions in which clicks are ordered by position, , Query sessions in which clicks are not ordered by position}
\end{axis}
\end{tikzpicture}
\end{subfigure}
\medskip
\caption{Recall at different values of $K$ for (i) all query sessions (black, solid), (ii) query sessions in which clicks are ordered by position (blue, solid), and (iii) query sessions in which clicks are not ordered by position (red, solid). Dashed lines show the percentages of query sessions in the corresponding groups in which clicks on results happen in the order these results are presented on a \ac{SERP}.}
\label{fig:recall at K for ordered and unordered groups}
\end{figure}
As we find in \S\ref{sec:results of experiment 1 (a)}, \ac{CSM} assigns the highest probability to the observed click sequence for \num{47.24}\% of query sessions.
If we consider only query sessions in which clicks are ordered by position, this percentage goes up to \num{50.94}\% and then increases slower than logarithmically with $K$ (see Figure~\ref{fig:recall at K for ordered and unordered groups}, bottom plot), achieving \num{99.62}\% at $K = 1024$. However, if we consider only query sessions in which clicks are not ordered by position, this percentage goes down to \num{0} and then increases logarithmically with $K$ for $K \ge 5$ (see Figure~\ref{fig:recall at K for ordered and unordered groups}, bottom plot), achieving \num{53.37}\% at $K = 1024$.
It is to be expected that predicting click sequences in which clicks are not ordered by position is a much more difficult task than predicting ordered clicks. First, in our training data the number of ordered click sequences is greater than the number of unordered click sequences (see Table~\ref{table:dataset_statistic} for the statistics computed on the test set; the training set shares the same distribution). Second, the number of possible ordered click sequences is smaller than the number of possible unordered click sequences: for click sequences of length $L$, there are ${N \choose L} = \frac{N!}{L!\,(N - L)!}$ possible ordered click sequences and $N^L$ possible click sequences overall (arbitrary order, allowing repeated clicks). And this is where \ac{CSM} makes a difference: in more than 50\% of cases, the observed click sequence appears in the top $K = 1024$ sequences predicted by \ac{CSM}. Note that under the assumption that a user scans search results sequentially from top to bottom such sequences cannot be predicted at all (see the red dashed line at zero recall). Even in the simpler case of predicting ordered click sequences, \ac{CSM} almost reaches the perfect recall of $1$ for $K = 1024$.
Answering RQ1(b), we conclude that recall of \ac{CSM} is much higher in query sessions in which clicks follow the presentation order than in those in which users click on higher ranked results after clicking on a lower ranked result.
\subsection{Experiment 1(c)}\label{sec:results of experiment 1 (c)}
Figure~\ref{fig:recall at K for query sessions with L clicks} shows recall at different values of $K$ for
\begin{inparaenum}[(i)]
\item all query sessions and
\item query sessions with $L$ clicks.
\end{inparaenum}
Dashed lines show the percentages of query sessions in the corresponding groups in which clicks on results happen in the order these results are presented on a \ac{SERP}.
\begin{figure}[h!]
\begin{subfigure}[b]{0.5\textwidth}
\begin{tikzpicture}
\begin{axis}[ width=\linewidth, height=50mm, grid=major, grid style={dashed,gray!30}, xmin=1, xmax=1024, xlabel=$K$ (linear scale), ylabel=recall, ylabel near ticks, legend style={at={(1, -1.2)}, anchor=south east}, legend cell align=left, ]
\addplot[color=black, dashed, very thick] coordinates {(1, 0.9273) (1024, 0.9273)};
\addplot[color=blue, dashed, very thick] coordinates {(1, 1) (1024, 1)};
\addplot[color=red, dashed, very thick] coordinates {(1, 1) (1024, 1)};
\addplot[color=magenta, dashed, very thick] coordinates {(1, 0.8203) (1024, 0.8203)};
\addplot[color=orange, dashed, very thick] coordinates {(1, 0.6840) (1024, 0.6840)};
\addplot[color=yellow, dashed, very thick] coordinates {(1, 0.5197) (1024, 0.5197)};
\addplot[color=teal, dashed, very thick] coordinates {(1, 0.4190) (1024, 0.4190)};
\addplot[mark=none, color=black,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls.csv};
\addplot[mark=none, color=blue,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls_in_group_0.csv};
\addplot[mark=none, color=red,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls_in_group_1.csv};
\addplot[mark=none, color=magenta,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls_in_group_2.csv};
\addplot[mark=none, color=orange,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls_in_group_3.csv};
\addplot[mark=none, color=yellow,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls_in_group_4.csv};
\addplot[mark=none, color=teal,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls_in_group_5.csv};
\end{axis}
\end{tikzpicture}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\begin{tikzpicture}
\begin{axis}[ width=\linewidth, height=50mm, grid=major, grid style={dashed,gray!30}, xmode=log, log basis x=2, xmin=1, xmax=1024, xlabel=$K$ (log scale), ylabel=recall, ylabel near ticks, legend style={at={(1, -1.2)}, anchor=south east}, legend cell align=left, ]
\addplot[color=black, dashed, very thick] coordinates {(1, 0.9273) (1024, 0.9273)};
\addplot[color=blue, dashed, very thick] coordinates {(1, 1) (1024, 1)};
\addplot[color=red, dashed, very thick] coordinates {(1, 1) (1024, 1)};
\addplot[color=magenta, dashed, very thick] coordinates {(1, 0.8203) (1024, 0.8203)};
\addplot[color=orange, dashed, very thick] coordinates {(1, 0.6840) (1024, 0.6840)};
\addplot[color=yellow, dashed, very thick] coordinates {(1, 0.5197) (1024, 0.5197)};
\addplot[color=teal, dashed, very thick] coordinates {(1, 0.4190) (1024, 0.4190)};
\addplot[mark=none, color=black,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls.csv};
\addplot[mark=none, color=blue,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls_in_group_0.csv};
\addplot[mark=none, color=red,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls_in_group_1.csv};
\addplot[mark=none, color=magenta,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls_in_group_2.csv};
\addplot[mark=none, color=orange,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls_in_group_3.csv};
\addplot[mark=none, color=yellow,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls_in_group_4.csv};
\addplot[mark=none, color=teal,very thick] table[x=rank,y=recall,col sep=comma] {data/recalls_in_group_5.csv};
\legend{,,,,,,,All query sessions, Query sessions with $0$ clicks, Query sessions with $1$ click, Query sessions with $2$ clicks, Query sessions with $3$ clicks, Query sessions with $4$ clicks, Query sessions with $5$ clicks}
\end{axis}
\end{tikzpicture}
\end{subfigure}
\medskip
\caption{Recall at different values of $K$ for (i) all query sessions and (ii) query sessions with $L$ clicks. Dashed lines show the percentages of query sessions in the corresponding groups in which clicks on results happen in the order these results are presented on a \ac{SERP}.}
\label{fig:recall at K for query sessions with L clicks}
\end{figure}
For click sequences of length $L = 0$ and $L = 1$, recall of \ac{CSM} approaches \num{1} (already for small values of $K$). Recall of \ac{CSM} for sequences of length $L=2, 3, 4$ at $K=1024$ is higher than the percentages of query sessions in which click sequences of length~$L$ are ordered by position. For $L=5$ and $K=1024$, recall of \ac{CSM} approaches the percentage of query sessions in which click sequences are of length \num{5} and are ordered by position. For sequences of length $\ge 2$, recall of \ac{CSM} first increases logarithmically with $K$ (for $K \ge K_0(L)$) and may then increase either faster or slower than logarithmically with $K$, depending on $L$ and the range of $K$.
We can see that the longer a click sequence is, the more difficult it is to predict. This is intuitive and can be explained similarly to \S\ref{sec:results of experiment 1 (b)}. First, we have more training data for shorter click sequences (see Table~\ref{table:dataset_statistic}). Second, the number of possible click sequences of length~$L$ increases exponentially with $L$, making the prediction task more difficult.
Answering RQ1(c), we conclude that recall of \ac{CSM} is very high in query sessions with a small number of clicks, and lower in query sessions with a larger number of clicks.
\subsection{Experiment 1(d)}\label{sec:results of experiment 1 (d)}
Figure~\ref{fig:probability_of_k_most_probable_sequences} plots, for different values of $K$, the total probability of the $K$ most probable click sequences predicted by \ac{CSM}, averaged over query sessions in the test set. We write $\sum_{i=1}^{K} P_\text{CSM}(s_i)$ to denote this probability.
\begin{figure}[h!]
\begin{tikzpicture}
\begin{axis}[ width=\linewidth, height=50mm, grid=major, grid style={dashed,gray!30}, xmin=1, xmax=1024, xlabel=$K$, ylabel=$\sum_{i=1}^{K} P_\text{CSM}(s_i)$, ylabel near ticks ]
\addplot[mark=none, color=blue,very thick] table[x=k,y=total_probability,col sep=comma] {data/total_probability_of_k_most_probable_sequences.csv};
\end{axis}
\end{tikzpicture}
\caption{Total probability of the $K$ most probable sequences predicted by \ac{CSM} for different values of~$K$.}
\label{fig:probability_of_k_most_probable_sequences}
\end{figure}
We find that the total probability of the $K$ most probable click sequences grows fast with $K$, starting from $46.29$\% at $K=1$ and $71.69$\% at $K=4$, and achieving $91.81$\% at $K=128$ and $96.47$\% at $K=1024$. Thus, even for modest values of $K$, the total probability of the $K$ most probable click sequences is not significantly less than~\num{1}.
Answering RQ1(d), we conclude that the $K$ most probable click sequences according to \ac{CSM} provide good means to reason about the whole probability distribution over click sequences predicted by \ac{CSM}.
\subsection{Experiment 2}\label{sec:results of experiment 3}
The results on Task 1 (\S\ref{sec:prediction tasks}) are given in Tables~\ref{table:perplexities of <= L clicks} and~\ref{table:auc for the task of predicting whether a user will click <= L results}.
Table~\ref{table:perplexities of <= L clicks} shows the perplexity of \ac{CSM} upon observing a sequence of $\le L$ clicks. Table~\ref{table:auc for the task of predicting whether a user will click <= L results} shows the \ac{AUC} of \ac{CSM} on the same task. Recall that the naive baseline predicts that a user will make $\le L$ clicks with a constant probability calculated on the training set.
\begin{table}[h]
\caption{Perplexity of observing a sequence of $\le L$ clicks. Lower values correspond to better prediction performance.}
\centering
\begin{tabular}{@{} l r r r r r r @{}}
\toprule
& \multicolumn{1}{c}{$L=0$} &\multicolumn{1}{c}{$L \le 1$} & \multicolumn{1}{c}{$L \le 2$} & \multicolumn{1}{c}{$L \le 3$} & \multicolumn{1}{c}{$L \le 4$} & \multicolumn{1}{c}{$L \le 5$} \\
\midrule
Baseline & $1.8512$ & $1.7169$ & $1.4450$ & $1.2779$ & $1.1784$ & $1.1068$ \\
\ac{CSM} & $1.7155$ & $1.6153$ & $1.3852$ & $1.2438$ & $1.1602$ & $1.1029$ \\
\bottomrule
\end{tabular}
\label{table:perplexities of <= L clicks}
\end{table}
\begin{table}[h]
\caption{\ac{AUC} for the task of predicting whether a user will click on $\le L$ results.}
\centering
\begin{tabular}{@{} l r r r r r r @{}}
\toprule
& \multicolumn{1}{c}{$L=0$} &\multicolumn{1}{c}{$L \le 1$} & \multicolumn{1}{c}{$L \le 2$} & \multicolumn{1}{c}{$L \le 3$} & \multicolumn{1}{c}{$L \le 4$} & \multicolumn{1}{c}{$L \le 5$} \\
\midrule
Baseline & \num{.5000} & \num{.5000} & \num{.5000} & \num{.5000} & \num{.5000} & \num{.5000} \\
\ac{CSM} & \num{.7362} & \num{.7278} & \num{.7353} & \num{.7535} & \num{.7566} & \num{.7795} \\
\bottomrule
\end{tabular}
\label{table:auc for the task of predicting whether a user will click <= L results}
\end{table}
Tables~\ref{table:perplexities of <= L clicks} and \ref{table:auc for the task of predicting whether a user will click <= L results} both show that \ac{CSM} predicts the number of clicks better than the baseline and that its performance increases with $L$ (lower perplexity and higher \ac{AUC}). The latter result is intuitive, as more sequences have $\le L$ clicks for larger $L$ and, thus, the prediction task becomes easier as $L$ grows.
Answering RQ2, we conclude that \ac{CSM} provides good means to predict the number of clicked results.
\subsection{Experiment 3}\label{sec:results of experiment 4}
The results on Task 2 (\S\ref{sec:prediction tasks}) are given in Table~\ref{table:perplexity of observing clicks ordered by position}. The table shows the perplexity and \ac{AUC} of \ac{CSM} when predicting whether a click sequence will be ordered by position.
\begin{table}[h]
\caption{Performance for the task of predicting whether a user will click on results in the order these results are presented on a \ac{SERP}. Lower values of perplexity and larger values of \ac{AUC} correspond to better prediction performance.}
\centering
\begin{tabular}{ l r r }
\toprule
& Perplexity & \ac{AUC} \\
\midrule
Baseline & \num{1.2984} & \num{.5000} \\
\ac{CSM} & \num{1.2788} & \num{.6826} \\
\bottomrule
\end{tabular}
\label{table:perplexity of observing clicks ordered by position}
\end{table}
Table~\ref{table:perplexity of observing clicks ordered by position} shows that \ac{CSM} outperforms the baseline in terms of both perplexity and \ac{AUC}. Thus, answering RQ3, we conclude that \ac{CSM} provides good means to predict whether a user will interact with results in the order these results are presented on a SERP.
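Both task-level scores above evaluate a single predicted probability of a binary event per query session. Assuming the standard base-2 perplexity for binary outcomes and the rank-statistic (Mann--Whitney) form of \ac{AUC} -- our reading, since the formulas are not spelled out above -- a compact sketch:
\begin{verbatim}
import math

def binary_perplexity(y_true, p_pred):
    # 2^(-mean log2-probability assigned to the observed outcome).
    logs = [math.log2(p if y else 1.0 - p) for y, p in zip(y_true, p_pred)]
    return 2.0 ** (-sum(logs) / len(logs))

def auc(y_true, p_pred):
    # Probability that a random positive outscores a random negative.
    pos = [p for y, p in zip(y_true, p_pred) if y]
    neg = [p for y, p in zip(y_true, p_pred) if not y]
    wins = sum((pp > pn) + 0.5 * (pp == pn) for pp in pos for pn in neg)
    return wins / (len(pos) * len(neg))
\end{verbatim}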
\subsection{Experiment 4}\label{sec:results of experiment 2}
The results on Task 3 (\S\ref{sec:prediction tasks}), i.e., the click prediction task, are given in Table~\ref{table:perplexity on the click prediction task}. The results for \ac{DBN}, \ac{DCM}, \ac{CCM}, \ac{UBM} and \ac{NCM} are according to~\citep{borisov2016neural}.
\begin{table}[h]
\caption{Perplexity for the click prediction task. Lower values correspond to better prediction performance. The results for \ac{DBN}, \ac{DCM}, \ac{CCM}, \ac{UBM} and \ac{NCM} are according to~\citep{borisov2016neural}.}
\centering
\begin{tabular}{ l r }
\toprule
Click model & Perplexity \\
\midrule
DBN & $1.3510$ \\
DCM & $1.3627$ \\
CCM & $1.3692$ \\
UBM & $1.3431$ \\
NCM & $1.3318$ \\
\midrule
\ac{CSM} & $1.3312$\\
\bottomrule
\end{tabular}
\label{table:perplexity on the click prediction task}
\end{table}
Table~\ref{table:perplexity on the click prediction task} shows that \ac{CSM} outperforms \ac{DBN}, \ac{DCM}, \ac{CCM} and \ac{UBM} by a large margin and matches the performance of \ac{NCM}, which is reported to be the state-of-the-art click model~\citep{borisov2016neural}.
Answering RQ4, we conclude that \ac{CSM} provides good means to predict clicks on search engine results, achieving state-of-the-art performance.
\section{Related Work}
\label{sec:related work}
We describe two types of related work: user interactions in search and user modeling.
\subsection{User interactions in search}
Log data from interactive systems such as search engines is one of the most ubiquitous forms of data available, as it can be recorded at little cost~\citep{white-interactions-2016}. These data have become a popular source for improving the performance of search engines. In particular, logged interactions have been successfully adopted to improve various aspects of search, including document ranking~\citep{joachims-optimizing-2002,agichtein2006improving,o2016leveraging}, query auto-completion~\citep{li2014two,jiang2014learning} and query suggestion~\citep{cao2008context,wu2013learning}, as well as to improve recommender systems~\citep{oard-implicit-1998}, to optimize result presentation~\cite{wang2016beyond}, and to support evaluation~\citep{hofmann-estimating-2012}.
In the context of web search, many types of implicit information interaction behavior have been studied over the years. Early work, e.g., by \citet{craswell2008experimental}, focuses on single clicks and, in particular, on the \emph{first} click, and assumes that a user abandons examination of web results upon the first click. \citet{guo2009efficient} expand on this by studying sessions with \emph{multiple} clicks, looking not just at the first click but also at follow-up clicks, the last click and dependencies between clicks, reflecting more complex information behavior.
There is a very broad spectrum of research that studies and tries to interpret information interaction behavior that involves multiple clicks, either by also taking additional signals into consideration or by zooming in on specific aspects of sequences of clicks. Examples of the former include work by \citet{huurnink-search-2010}, who examine click signals, download behavior, and purchase signals in vertical search and find high degrees of correlation between the three.
Time, such as dwell time or time between user actions such as clicks, has been found to be another important source of implicit signals~\citep{borisov2016context}: times elapsed between user actions provide means to measure user satisfaction at the result level~\citep{fox2005evaluating,kim2014modeling}, session level~\citep{fox2005evaluating,hassan2012semi} and system level~\citep{chapelle2012large,schuth_2015_predicting}. Beyond that, on mobile or screen-less devices there is a range of interaction signals that differ from the signals familiar from the desktop environment -- due to the context of use and due to gesture- and voice-based control, such as swipes, touch and voice conversations -- and that have not been studied extensively~\citep{kiseleva-evaluating-2017}. Our work differs from these publications as we remain focused on click signals only, and especially on sequences of click signals.
Relevant examples of the studies that zoom in on specific aspects of multiple click behavior include work on repeat behavior such as repeated examinations or clicks~\citep{oard-implicit-1998,xu2012incorporating}, which can be interpreted as strong evidence of the value ascribed to the result being examined or clicked again. In a similar vein, \citet{scaria-last-2014} consider back clicks and last clicks; in their view, back clicks suggest a lack of progress on the current navigational path and, depending on contextual factors, last clicks mark success or failure. \citet{hassan-awadallah-struggling-2014} and \citet{odijk-struggling-2015} focus on aspects of click sequences, including the number of clicks, their dwell time, and features to capture whether the user was clicking on the same results or results from the same domain multiple times, indicative of difficulty locating a particular resource. \citet{williams-does-2017} examine whether sequences of user interactions over time can be used to differentiate between good and bad abandonment and train an LSTM to distinguish between the two types of behavior.
Especially relevant for our paper is the work by \citet{wang2015incorporating}, who consider non-sequential examination and click behavior, both through an eye-tracking study and a log-based study. They arrive at several behavioral principles, for instance%
\begin{inparaenum}[(i)]
\item between adjacent clicks, users tend to examine search results in a single direction without changes, and the direction is usually consistent with that of clicks; and
\item although the examination behavior between adjacent clicks can be regarded as locally unidirectional, users may skip a few results and examine a result at some distance from the current one following a certain direction.
\end{inparaenum}
\subsection{Modeling user interactions}
To understand, describe and predict the various types of user interactions discussed above, a number of user interaction models have been proposed, aimed at modeling clicks~\cite{chuklin2015click}, mouse movements~\cite{diaz2013robust}, dwell time~\cite{kim2014modeling,liu2010understanding}, etc. So far, modeling user clicks in search has attracted the most attention~\cite{chuklin2015click}.
Click models usually represent clicks as binary random variables and construct a \ac{PGM} that describes the dependencies between clicks and other (usually hidden) random variables, such as attractiveness (i.e., whether a snippet is attractive to a user given a query) and examination (i.e., whether a snippet is examined by a user)~\cite{chapelle2009dynamic,craswell2008experimental,dupret2008user,guo2009click,guo2009efficient}. The advantage of \ac{PGM}-based click models is that they intuitively describe user click behavior and can predict future clicks based on past observations~\cite{chuklin2015click}.
Some click models take into account the order in which a user interacts with the results in order to better model and predict clicks~\cite{xu2010temporal,wang2010inferring,wang2015incorporating,liu2016time,xie-constructing-2018}. However, such models either do not aim at predicting click sequences~\cite{wang2010inferring,wang2015incorporating,liu2016time,xie-constructing-2018} or consider only very short sequences of clicks~\cite{xu2010temporal}.
Recently, neural click models have been proposed~\cite{zhang2014sequential,borisov2016neural}. The advantage of these models is that they do not require manually constructed \ac{PGM}s to describe and predict user clicks, but rely on raw click data to learn hidden click patterns. Neural click models have better click prediction accuracy, but suffer from the limited interpretability of the learned model, as opposed to the easily interpretable \ac{PGM}-based click models. Also, as before, neural click models cannot predict sequences of clicks.
In addition to clicks, mouse movements between search results and various search-related timings have been studied and modeled. The probability of hovering over one element of a \ac{SERP} after hovering over another element is predicted using the Farley-Ring model in~\cite{diaz2013robust}. Dwell time is modeled through Weibull and gamma distributions in~\cite{kim2014modeling,liu2010understanding}. More timings, such as time between clicks, time to first/last click, etc., are considered in~\cite{borisov2016context}, where, in addition to the above distribution-based models, a context bias is modeled using neural networks.
\smallskip\noindent
What our work adds on top of the work listed above is our focus on \emph{sequences} of clicks, and in particular on describing a set of \emph{probably correct} click sequences.
\section{Conclusion and future work}\label{sec:conclusion and future work}
In this paper, we studied the problem of predicting sequences of user interactions and, in particular, sequences of clicks. We formally defined the problem of click sequence prediction and introduced the notion of probably correct click sequences. Furthermore, we proposed \ac{CSM}, a neural network based model for predicting a probability distribution over click sequences. We advocated for using the $K$ most probable click sequences predicted by \ac{CSM} as a set of probably correct click sequences, and suggested using these $K$ click sequences to reason about the properties of the probability distribution over click sequences, such as the expected number of clicks and the expected order of clicks. We evaluated the quality of \ac{CSM} on a publicly available dataset.
First, we showed that even for modest values of $K$ the $K$ most probable click sequences predicted by \ac{CSM} constitute a substantial part of the total probability mass assigned by \ac{CSM} to all possible click sequences, and thus can be regarded as the probably correct click sequences predicted by \ac{CSM}.
We proposed to judge the success of a click sequence model~$\mathcal{M}$ by whether the observed click sequence occurs in the list of the $K$ most probable click sequences predicted by $\mathcal{M}$. We measured the performance of \ac{CSM} using recall@K, a metric that is also used to evaluate the performance of approximate nearest neighbor search methods~\citep{muja2009fast,jegou2011product}. Our results showed that recall@K grows fast with $K$, starting from \num{47.24}\% at $K = 1$ and reaching \num{96.26}\% at $K = 1024$. We also found that recall@K increases slower with $K$ in query sessions with a larger number of clicks and in query sessions where users click on higher ranked results after clicking on a lower ranked result.
We also evaluated \ac{CSM} on three prediction tasks:
\begin{inparaenum}[(i)]
\item predicting the number of clicks,
\item predicting non-consecutive click sequences, and
\item predicting clicks.
\end{inparaenum}
The first two tasks were proposed in our work for the first time and the last one is a standard task used to evaluate click models. We found that \ac{CSM} shows reasonable performance on the first two tasks, outperforming naive baselines that predict
\begin{inparaenum}[(i)]
\item that a user will click on $\le L$ results, and
\item that a user will click on the results in a non-consecutive order
\end{inparaenum}
with a constant probability optimized on the training set. Finally, we observed that \ac{CSM} reaches state-of-the-art performance on the task of predicting clicks, outperforming the \ac{PGM}-based click models \ac{DBN}~\citep{chapelle2009dynamic}, \ac{DCM}~\citep{guo2009efficient}, \ac{CCM}~\citep{guo2009click} and \ac{UBM}~\citep{dupret2008user} by a large margin, and matching the results of the recently proposed \ac{NCM}~\citep{borisov2016neural}, which is also implemented as a neural network.
In contrast to previous studies, which focus on modeling and predicting separate interaction events (e.g., a click on a result or a mouse movement between two results) and properties of these separate events (e.g., time between clicks), our work focuses on understanding, modeling and predicting sequences of these events.
As to future work, we see two main directions:%
\begin{inparaenum}[(i)]
\item to consider other representations of the user's query and the results returned by a search engine, and
\item to extend \ac{CSM} to non-linear \ac{SERP} layouts.
\end{inparaenum}
The user's query can be represented by its text, and the results by their content (title, snippet and main content). We believe that using content-based representations will allow us to learn more interesting dependencies between the results, and improve the performance for rare queries. The encoder proposed in \S\ref{sec:network_architecture} makes use of the fact that search results are presented as a list. Recommender systems present their results using non-linear layouts. Generalizing the encoder will make \ac{CSM} suitable for applications outside of web search.
\subsection*{Acknowledgements} This research was partially supported by Ahold Delhaize, Amsterdam Data Science, the Bloomberg Research Grant program, the China Scholarship Council, the Criteo Faculty Research Award program, Elsevier, the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement nr 312827 (VOX-Pol), the Google Faculty Research Awards program, the Microsoft Research Ph.D.\ program, the Netherlands Institute for Sound and Vision, the Netherlands Organisation for Scientific Research (NWO) under pro\-ject nrs CI-14-25, 652.\-002.\-001, 612.\-001.\-551, 652.\-001.\-003, and Yandex. All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors. \bibliographystyle{ACM-Reference-Format} \balance
\section{Introduction}\label{intro} Stellar mass is a fundamental parameter in galaxy evolution studies, presenting correlations with several galaxy properties such as star formation rate \citep[e.g.][]{noeske07,chang15}, gas-phase metallicity \citep[e.g.][]{tremonti04,mannucci09,lara10}, stellar content \citep[e.g.][]{gallazzi05,gallazzi14,diazgarcia18sp}, galaxy size \citep[e.g.][]{shen03,trujillo07,vanderwel14}, morphology \citep[e.g.][]{moffett16,huertas16}, or nuclear activity \citep[e.g.][]{kauffmann03,bongiorno16}. The measurement of stellar mass in modern photometric and spectroscopic surveys is mainly performed by comparing either an empirical or a theoretical library of templates with the observational spectral energy distribution (SED) of galaxies. The mass-to-light ratio associated with the templates, combined with the flux normalization, provides the stellar mass of a given source \citep[see][for a recent review on galaxy mass estimation]{courteau14}. Thus, understanding and characterising the mass-to-light ratio of different galaxy populations is important to derive reliable stellar masses, as well as to minimise systematic differences between data sets and template libraries. The mass-to-light versus colour relations (MLCRs) have been studied theoretically and observationally \citep{tinsley81, jablonka92, bell01, bell03, portinari04, gallazzi09, zibetti09, taylor11, into13, mcgaugh14, zaritsky14, vandesande15, roediger15, herrmann16} in the optical, the ultraviolet (UV), and the near-infrared (NIR). These studies find well-defined linear MLCRs with low scatter (< 0.2 dex) and focus on the low-redshift Universe ($z \lesssim 0.5$). We highlight the work of \citet[][T11 hereafter]{taylor11}. It is based on SED fitting of the $ugriz$ broad bands of the Sloan Digital Sky Survey (SDSS DR7, \citealt{sdssdr7}) available for the GAMA \citep[Galaxy And Mass Assembly, ][]{gama} survey area. They find a remarkably tight relation (0.1 dex dispersion) between the mass-to-light ratio in the $i$ band, noted $M_{\star}/L_{i}$, and the rest-frame colour $(g-i)$ at $z < 0.65$, with a median redshift of $\langle z \rangle = 0.2$ for the analysed global population. T11 argue that this small dispersion is driven by (i) the degeneracies of the galaxy templates in such a plane, which are roughly perpendicular to the MLCR, implying from the theoretical point of view $\sim$0.2 dex errors in the mass-to-light ratio even with large errors in the derived stellar population parameters; and (ii) the galaxy formation and evolution processes, which are encoded in the observed galaxy colours and only allow a limited set of solutions, making the observed relation even tighter than the theoretical expectations. In the present work, we expand the results from T11 with the multi-filter ALHAMBRA\footnote{\tt www.alhambrasurvey.com} (Advanced, Large, Homogeneous Area, Medium-Band Redshift Astronomical) survey \citep{alhambra}. ALHAMBRA provides stellar masses thanks to the application of the Multi Filter FITing (\texttt{MUFFIT}, \citealt{diazgarcia15}) code to 20 optical medium-band and 3 NIR photometric points. In addition, ALHAMBRA covers a wide redshift range, reaching $z = 1.5$ with a median redshift of $\langle z \rangle = 0.65$, and reliably classifies quiescent and star-forming galaxies thanks to dust de-reddened colours. We also refine the statistical estimation of the MLCRs.
Instead of performing an error-weighted fit to the data, we applied a Bayesian inference model that accounts for observational uncertainties and includes intrinsic dispersions in the relations \citep[see][for other applications of this kind of modelling]{taylor15,monterodorta16lf}. The paper is organised as follows. In Sect.~\ref{data}, we present the ALHAMBRA photometric redshifts, stellar masses, and luminosities. The derived $i-$band MLCRs for quiescent and star-forming galaxies and their modelling are described in Sect.~\ref{mlratio}. Our results are presented and discussed in Sect.~\ref{results}. Summary and conclusions are in Sect.~\ref{conclusions}. Throughout this paper we use a standard cosmology with $\Omega_{\rm m} = 0.3$, $\Omega_{\Lambda} = 0.7$, $\Omega_{\rm k} = 0$, $H_{0}= 100h$ km s$^{-1}$ Mpc$^{-1}$, and $h = 0.7$. Magnitudes are given in the AB system \citep{oke83}. The stellar masses, $M_{\star}$, are expressed in solar masses ($M_{\odot}$) and the luminosities, $L$, in units equivalent to an AB magnitude of 0. The derived mass-to-light ratios can be transformed into solar luminosities $L_{\odot}$ by subtracting 2.05, 1.90, and 1.81 from the presented MLCRs for the $g$, $r$, and $i$ bands, respectively. With the definitions above, stellar masses can be estimated from the reported mass-to-light ratios as \begin{equation} \log_{10} M_{\star} = \log_{10}\,(M_{\star}/L) - 0.4M, \end{equation} where $M$ is the absolute AB magnitude of the galaxy. \section{ALHAMBRA survey}\label{data} The ALHAMBRA survey provides a photometric data set over 20 contiguous, equal-width ($\sim$300\AA), non-overlapping, medium-band optical filters (3500\AA - 9700\AA) plus 3 standard broad-band NIR filters ($J$, $H$, and $K_{\rm s}$) over 8 different regions of the northern sky \citep{alhambra}. The final survey parameters and scientific goals, as well as the technical properties of the filter set, were described by \citet{alhambra}. The survey collected its data for the 20+3 optical-NIR filters with the 3.5m telescope at the Calar Alto observatory, using the wide-field camera LAICA (Large Area Imager for Calar Alto) in the optical and the OMEGA-2000 camera in the NIR. The full characterisation, description, and performance of the ALHAMBRA optical photometric system was presented in \citet{aparicio10}. A summary of the optical reduction can be found in \citet{molino13}, while that of the NIR reduction is in \citet{cristobal09}. \subsection{Bayesian photometric redshifts in ALHAMBRA} The Bayesian photometric redshifts ($z_{\rm b}$) of ALHAMBRA were estimated with \texttt{BPZ2}, a new version of the Bayesian photometric redshift (\texttt{BPZ}, \citealt{benitez00}) code. The \texttt{BPZ2} code is an SED-fitting method based on Bayesian inference, where a maximum likelihood is weighted by a prior probability. The template library comprises 11 SEDs, with four ellipticals, one lenticular, two spirals, and four starbursts. The ALHAMBRA photometry used to compute the photometric redshifts is PSF-matched, aperture-corrected, and based on isophotal magnitudes \citep{molino13}. In addition, a recalibration of the zero point of the images was performed to enhance the accuracy of the photometric redshifts. Sources were detected in a synthetic $F814W$ filter image defined to resemble the HST/$F814W$ filter. The total area covered by the current release of the ALHAMBRA survey after masking low signal-to-noise areas and bright stars is 2.38 deg$^{2}$ \citep{arnaltemur14}.
The full description of the photometric redshift estimation is detailed in \citet{molino13}. The photometric redshift accuracy, as estimated by comparison with spectroscopic redshifts ($z_{\rm s}$), is $\sigma_{\rm NMAD} = 0.012$ at $F814W \leq 23$. The variable $\sigma_{\rm NMAD}$ is the normalized median absolute deviation of the photometric vs. spectroscopic redshift distribution \citep[e.g.][]{ilbert06, molino13}. The fraction of catastrophic outliers with $|z_{\rm b} - z_{\rm s}|/(1 + z_{\rm s}) > 0.2$ is 2.1\%. We refer to \citet{molino13} for a more detailed discussion. \begin{figure}[t] \centering \resizebox{\hsize}{!}{\includegraphics{UVJ.pdf}} \caption{Rest-frame colour-colour plane $F365 - F551$ vs. $F551 - J$ for the 76642 ALHAMBRA galaxies with $F814W \leq 23$ at $z < 1.5$. This plane is equivalent to the commonly used $UVJ$ diagram, and the quiescent selection box from the literature \citep{williams09} is delimited by dashed lines. The coloured lines show level contours in the density of galaxies, starting at 0.1 galaxies dex$^{-2}$ and increasing in 0.6 galaxies dex$^{-2}$ steps. Red contours show the quiescent population, and blue contours show the star-forming one, as defined with dust-corrected colours by DG17. The side panels show the normalized distribution in $F365 - F551$ ({\it right panel}) and $F551 - J$ ({\it top panel}). In both panels the total distribution is presented in black, the quiescent population in filled red, and the star-forming population in filled blue.} \label{UVJ} \end{figure} \subsection{\texttt{MUFFIT}: stellar masses and rest-frame colours}\label{muffit} The \texttt{BPZ2} template library presented above is empirical, and the different templates do not have mass-to-light ratios assigned {\it a priori}. Hence, an alternative methodology is needed to compute the stellar mass of the ALHAMBRA sources. The \texttt{MUFFIT} code is specifically designed and optimized to deal with multi-photometric data, such as the ALHAMBRA dataset, through SED fitting (based on an error-weighted $\chi^2$ test) to mixtures of two single stellar populations (a dominant ``old'' component plus a posterior star formation episode, which can be related to a burst or a younger/extended tail in the star formation history). \texttt{MUFFIT} includes an iterative process for removing those bands that may be affected by strong emission lines, making it able to carry out a detailed analysis of the galaxy SED even when strong nebular or active galactic nuclei (AGN) emission lines are present, which may be especially troublesome for intermediate- and narrow-band surveys. ALHAMBRA sources with $F814W \leq 23$ are analysed with \texttt{MUFFIT} by \citet[][hereafter DG17]{diazgarcia18uvj}, retrieving ages, metallicities, stellar masses, rest-frame luminosities, and extinctions. \texttt{MUFFIT} also provides photometric redshifts, using the \texttt{BPZ2} solutions presented in the previous section as a prior to minimise degeneracies, improving the photometric redshift accuracy by $\sim20$\%. The retrieved parameters are in good agreement with both spectroscopic diagnostics from SDSS data and photometric studies in the COSMOS survey with shared galaxy samples (\citealp{diazgarcia15}, DG17). To study the MLCR of ALHAMBRA galaxies and its redshift evolution, we used the redshifts, stellar masses, and rest-frame luminosities in the $gri$ broad-bands derived by \texttt{MUFFIT}.
These parameters were estimated assuming \citet[][BC03]{bc03} stellar population models, \citet{fitzpatrick99} extinction law, and \citet{chabrier03} initial mass function (IMF). We refer the reader to \citet{diazgarcia15}, \citet{diazgarcia18sp} and DG17 for further details about \texttt{MUFFIT} and derived quantities. \subsection{Selection of quiescent and star-forming galaxies}\label{selection} Throughout this paper, we focus our analysis on the galaxies in the ALHAMBRA gold catalogue\footnote{\tt http://cosmo.iaa.es/content/ALHAMBRA-Gold-catalog}. This catalogue comprises $\sim100$k sources with $F814W \leq 23$ \citep{molino13}. We split our galaxies into quiescent and star-forming with the dust-corrected version of the $UVJ$ colour-colour plane selection presented in DG17, adapted to the ALHAMBRA medium-band filter system: we used $F365$ instead of the filter $U$ and $F551$ instead of the filter $V$. The ALHAMBRA filter $J$ is the standard one. As shown by DG17, quiescent and star-forming galaxies with $F814W \leq 23$ define two non-overlapping populations in the colour-colour plane after removing dust effects, with the selection boundary located at $(F365 - F551) = 1.5$. We refer the reader to DG17 for a detailed description of the selection process and the study of the stellar population properties of quiescent galaxies in the $UVJ$ colour-colour plane. We show the observed (i.e. reddened by dust) rest-frame distribution of the 76642 ALHAMBRA gold catalogue galaxies with $z < 1.5$ in Fig.~\ref{UVJ}. The quiescent population is enclosed by the common colour-colour selection box \citep{williams09}, but a population of dusty star-forming galaxies is also located in this area. DG17 show that a significant fraction ($\sim 20$\%) of the red galaxies are indeed dusty star-forming, contaminating the quiescent population. Thanks to the low-resolution spectral information from ALHAMBRA, the \texttt{MUFFIT} code is able to provide a robust quiescent vs. star-forming classification. The final sample, located at $z < 1.5$ with $F814W \leq 23$, comprises 12905 quiescent and 63737 star-forming galaxies. The stellar masses covered by our data span the $8 < \log_{10} M_{\star} / M_{\odot} < 11.5$ range. Further details about the stellar mass completeness and the redshift distribution of the sample are presented in DG17. We study the MLCR of these samples in the next section. \section{Mass-to-light ratio vs. colour relation at $z < 1.5$}\label{mlratio} In this section, we study the relation between the mass-to-light ratio in the $i$ band and the observed rest-frame $(g-i)$ colour of the ALHAMBRA galaxies with $z < 1.5$. In some cases, we denote $\Upsilon = \log_{10}(M_{\star}/L_i)$ and ${\mathcal C} = (g-i)$ for the sake of clarity. The redshift, stellar masses, and observed rest-frame (i.e. reddened by dust) luminosities were derived by the \texttt{MUFFIT} code (Sect.~\ref{muffit}). \begin{figure}[t] \centering \resizebox{\hsize}{!}{\includegraphics{ML_vs_gi_tot.pdf}}\\ \resizebox{\hsize}{!}{\includegraphics{DML_taylor.pdf}} \caption{{\it Top panel}: Mass-to-light ratio $M_{\star}/L_{i}$ as a function of the rest-frame colour $g-i$ for the 76642 ALHAMBRA galaxies with $F814W \leq 23$ at $z < 1.5$. The coloured lines show level contours in the density of galaxies, starting at 0.1 galaxies dex$^{-2}$ and increasing in 1.5 galaxies dex$^{-2}$ steps. Red contours show the quiescent population, and blue contours the star-forming one. 
The side panels show the normalized distribution in $g-i$ ({\it upper panel}) and $\log_{10}\,(M_{\star}/L_{i})$ ({\it right panel}). In both panels the total distribution is presented in black, the quiescent population in filled red, and the star-forming population in filled blue. The black dashed line marks the relation derived by \citet{taylor11} at $z < 0.65$ using GAMA galaxies and SDSS five-band photometry. {\it Bottom panel}: Comparison between the observed ALHAMBRA mass-to-light ratio and that expected from the \citet{taylor11} relation. The red solid line is the best Gaussian fit with median $\langle \Delta \Upsilon \rangle = 0.01$ dex and dispersion $\sigma_{\Delta \Upsilon} = 0.10$ dex.} \label{MLtot} \end{figure} We present the $M_{\star}/L_{i}$ vs. $(g-i)$ colour plane for both quiescent and star-forming galaxies in the {\it top panel} of Fig.~\ref{MLtot}. We find that, for both populations, the mass-to-light ratio increases for redder colours, in agreement with the literature (see references in Sect.~\ref{intro}). We describe the modelling of this dependence in Sect.~\ref{modelling}. In the figure, we also present the MLCR found by T11 in the GAMA survey. Their relation is in excellent agreement with our observed values: the comparison between our measurements and their predictions, $\Delta \Upsilon = \Upsilon - \Upsilon_{\rm T11}({\mathcal C})$, has no bias, $\langle \Delta \Upsilon \rangle = 0.01$ dex, and a small dispersion of $\sigma_{\Delta \Upsilon} = 0.1$ dex ({\it bottom panel} in Fig.~\ref{MLtot}), similar to the one found by T11 with GAMA data. We note that T11 use BC03 stellar population models and a \citet{chabrier03} IMF, as we did, but different extinction laws (\citealt{calzetti00} vs. \citealt{fitzpatrick99}) and star formation histories (SFHs; $e$-folding $\tau$ models vs. a two-stellar-population mix) were assumed. The T11 study is performed on a sample of $z < 0.65$ galaxies with a median redshift of $\langle z \rangle = 0.2$, while our data cover a wider redshift range ($z \leq 1.5$) with a median redshift of $\langle z \rangle = 0.65$. This suggests that the low-redshift relation measured by T11 in GAMA has not evolved significantly with redshift. We assume this redshift independence in the following and test it in Sect.~\ref{MLz}. \subsection{Modelling the intrinsic mass-to-light vs. colour relation}\label{modelling} The measurements presented in the previous section are affected by observational errors, which blur the information and bias our analysis. We are interested in the intrinsic distribution of our measurements in the mass-to-light ratio vs. colour plane, and in this section we detail the steps to estimate it. The results are presented in Sect.~\ref{results}. The intrinsic distribution of interest is noted $D$, and it provides the real values of our measurements for a set of parameters $\theta$, \begin{equation} D\,(\Upsilon_0, {\mathcal C}_0\,|\,\theta), \end{equation} where $\Upsilon_0$ and ${\mathcal C}_0$ are the real values of the mass-to-light ratio and the colour, unaffected by observational errors. We derive the posterior of the parameters $\theta$ that define the intrinsic distribution $D$ for both quiescent and star-forming galaxies with a Bayesian model.
Formally, \begin{equation} P\,(\theta\,|\,\Upsilon,{\mathcal C},\sigma_{\Upsilon},\sigma_{\mathcal C}) \propto {\mathcal L}\,(\Upsilon,{\mathcal C}\,|\,\theta, \sigma_{\Upsilon}, \sigma_{\mathcal C})\,P(\theta), \end{equation} where $\sigma_{\Upsilon}$ and $\sigma_{\mathcal C}$ are the uncertainties in the observed mass-to-light ratio and $(g-i)$ colour, respectively, ${\mathcal L}$ is the likelihood of the data given $\theta$, and $P(\theta)$ is the prior on the parameters. The posterior probability is normalised to one. The likelihood function associated with our problem is \begin{equation} {\mathcal L}\,(\Upsilon,{\mathcal C}\,|\,\theta, \sigma_{\Upsilon}, \sigma_{\mathcal C}) = \prod_k P_k\,(\Upsilon_k,{\mathcal C}_k\,|\,\theta, \sigma_{\Upsilon,k}, \sigma_{{\mathcal C},k}), \end{equation} where the index $k$ spans the galaxies in the sample, and $P_k$ traces the probability of the measurement $k$ for a set of parameters $\theta$. This probability can be expressed as \begin{align} P_k\,(&\Upsilon_k,{\mathcal C}_k\,|\,\theta, \sigma_{\Upsilon,k}, \sigma_{{\mathcal C},k}) = \nonumber\\ &\int \! D\,(\Upsilon_0, {\mathcal C}_0\,|\,\theta)\,P_G(\Upsilon_k\,|\,\Upsilon_0, \sigma_{\Upsilon,k})\,P_G({\mathcal C}_k\,|\,{\mathcal C}_0, \sigma_{{\mathcal C},k})\,{\rm d}\Upsilon_0\,{\rm d}{\mathcal C}_0,\label{eq_pk} \end{align} where the real values $\Upsilon_0$ and ${\mathcal C}_0$ derived from the model $D$ are affected by Gaussian observational errors, \begin{equation} P_G\,(x\,|\,x_0, \sigma_x) = \frac{1}{\sqrt{2 \pi}\,\sigma_x} \exp\bigg[-\frac{(x - x_0)^2}{2\sigma_x^2}\bigg], \end{equation} providing the likelihood of observing a magnitude given its real value and uncertainty. We have no access to the real values $\Upsilon_0$ and ${\mathcal C}_0$, so we marginalise over them in Eq.~(\ref{eq_pk}), and the likelihood is therefore expressed in terms of known quantities. We assumed no covariance between $\Upsilon$ and ${\mathcal C}$, although they share the $i$-band luminosity information. We checked by Monte Carlo sampling of the $M_{\star}$, $L_i$, and $L_g$ distributions that such covariance is small, with $\rho_{\Upsilon{\mathcal C}} \sim 0.05$. Hence, we disregard the covariance term for simplicity. We explore the posterior distribution of the parameters with the \texttt{emcee} \citep{emcee} code, a \texttt{Python} implementation of the affine-invariant ensemble sampler for Markov chain Monte Carlo (MCMC) proposed by \citet{goodman10}. The \texttt{emcee} code provides a collection of solutions in the parameter space, noted $\theta_{\rm MC}$, with the density of solutions being proportional to the posterior probability of the parameters. We obtained central values of the parameters as the median, noted $\langle \theta_{\rm MC} \rangle$, and their uncertainties as the range enclosing 68\% of the projected solutions around the median. We define in the following the distributions assumed for the quiescent and star-forming populations, and the prior imposed on their parameters. The quiescent (${\rm Q}$) population is described as \begin{equation} D_{\rm Q}\,(\Upsilon_0, {\mathcal C}_0\,|\,\theta_{\rm Q}) = P_G\,({\mathcal C}_0\,|\,\mu_{\rm Q}, s_{\rm Q})\,P_G\,(\Upsilon_0\,|\,A_{\rm Q} + B_{\rm Q}\,{\mathcal C}_0, \sigma_{\rm Q}), \end{equation} where $\mu_{\rm Q}$ and $s_{\rm Q}$ describe the intrinsic $(g-i)$ colour distribution, $A_{\rm Q}$ and $B_{\rm Q}$ are the coefficients that define the MLCR, and $\sigma_{\rm Q}$ is the intrinsic (i.e. related to physical processes) dispersion of such relation.
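As an illustration of this inference scheme, the following sketch (our own minimal code, not the published pipeline; mock data, starting values and sampler settings are assumptions) fits the quiescent model with \texttt{emcee}. For a linear relation the double integral of Eq.~(\ref{eq_pk}) is analytic: each observed pair $(\Upsilon_k, {\mathcal C}_k)$ follows a bivariate normal with mean $(A_{\rm Q} + B_{\rm Q}\mu_{\rm Q},\ \mu_{\rm Q})$, variances $B_{\rm Q}^2 s_{\rm Q}^2 + \sigma_{\rm Q}^2 + \sigma_{\Upsilon,k}^2$ and $s_{\rm Q}^2 + \sigma_{{\mathcal C},k}^2$, and covariance $B_{\rm Q} s_{\rm Q}^2$.
\begin{verbatim}
import numpy as np
import emcee

# mock data drawn from the quiescent model (assumed true values)
rng = np.random.default_rng(0)
C0 = rng.normal(1.0, 0.15, 1000)                   # intrinsic colours
U0 = 1.02 + 0.84 * C0 + rng.normal(0, 0.05, 1000)  # intrinsic log M/L
sC, sU = np.full(1000, 0.05), np.full(1000, 0.07)  # per-galaxy errors
C, U = C0 + rng.normal(0, sC), U0 + rng.normal(0, sU)

def log_posterior(theta, U, C, sU, sC):
    mu, s, A, B, sig = theta
    if s <= 0 or sig <= 0:               # flat priors, positive dispersions
        return -np.inf
    vC = s**2 + sC**2                    # variance of the observed colour
    vU = B**2 * s**2 + sig**2 + sU**2    # variance of the observed log M/L
    cov = B * s**2                       # covariance induced by the relation
    det = vC * vU - cov**2
    dU, dC = U - (A + B * mu), C - mu
    chi2 = (vC * dU**2 - 2 * cov * dU * dC + vU * dC**2) / det
    return -0.5 * np.sum(chi2 + np.log(det)) - U.size * np.log(2 * np.pi)

ndim, nwalkers = 5, 32
p0 = np.array([1.0, 0.15, 1.0, 0.85, 0.05]) \
     + 1e-3 * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior,
                                args=(U, C, sU, sC))
sampler.run_mcmc(p0, 5000)
# central values as the median of the chain, as in the text
theta_mc = np.median(sampler.get_chain(discard=1000, flat=True), axis=0)
\end{verbatim}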
We have a set of five parameters to describe the distribution of quiescent galaxies, $\theta_{\rm Q} = \{\mu_{\rm Q}, s_{\rm Q}, A_{\rm Q}, B_{\rm Q}, \sigma_{\rm Q}\}$. We used flat priors, $P(\theta_{\rm Q}) = 1$, except for the dispersions $s_{\rm Q}$ and $\sigma_{\rm Q}$, which we required to be positive. \begin{figure}[t] \centering \resizebox{\hsize}{!}{\includegraphics{ML_vs_gi_red.pdf}}\\ \resizebox{\hsize}{!}{\includegraphics{DML_red.pdf}} \caption{{\it Top panel}: Mass-to-light ratio $M_{\star}/L_{i}$ as a function of the rest-frame colour $g-i$ for the 12905 ALHAMBRA quiescent galaxies with $F814W \leq 23$ at $z < 1.5$. The solid lines show level contours in the density of galaxies as in Fig.~\ref{MLtot}. The grey scale shows the median fitting model to the data, $D_{\rm Q}\,(\Upsilon_0, \mathcal{C}_0\,|\,\langle \theta_{\rm Q} \rangle)$. The red area represents the derived MLCR, $\log_{10}\,(M_{\star}/L_{i}) = 1.02 + 0.84(g-i)$, and its $1\sigma$ intrinsic dispersion, $\sigma_{\rm Q} = 0.02$. The side panels show the normalized projected histogram in $g-i$ ({\it upper panel}) and $\log_{10}\,(M_{\star}/L_{i})$ ({\it right panel}). In both panels the observed distribution is presented in filled red, and the derived median model in solid black. {\it Bottom panel}: Comparison between the observed ALHAMBRA mass-to-light ratio and that expected from our median relation (red filled histogram). The solid red line is the best Gaussian fit with median $\langle \Delta \Upsilon \rangle = -0.01$ and $\sigma_{\Delta \Upsilon} = 0.07$. The solid black line illustrates the estimated intrinsic dispersion unaffected by observational uncertainties.} \label{MLred} \end{figure} \begin{figure}[t] \centering \resizebox{\hsize}{!}{\includegraphics{ML_vs_gi_blue.pdf}}\\ \resizebox{\hsize}{!}{\includegraphics{DML_blue.pdf}} \caption{{\it Top panel}: Mass-to-light ratio $M_{\star}/L_{i}$ as a function of the rest-frame colour $g-i$ for the 63737 ALHAMBRA star-forming galaxies with $F814W \leq 23$ at $z < 1.5$. The solid lines show level contours in the density of galaxies as in Fig.~\ref{MLtot}. The grey scale shows the median fitting model to the data, $D_{\rm SF}\,(\Upsilon_0, \mathcal{C}_0\,|\,\langle \theta_{\rm SF} \rangle)$. The blue area represents the derived MLCR, $\log_{10}\,(M_{\star}/L_{i}) = 1.411 + 0.212(g-i) + 0.144(g-i)^2$, and its $1\sigma$ intrinsic dispersion, $\sigma_{\rm SF} = 0.06$. The side panels show the normalized projected histogram in $g-i$ ({\it upper panel}) and $\log_{10}\,(M_{\star}/L_{i})$ ({\it right panel}). In both panels the observed distribution is presented in filled blue, and the derived median model in solid black. {\it Bottom panel}: Comparison between the observed ALHAMBRA mass-to-light ratio and that expected from our median relation (blue filled histogram). The blue solid line is the best Gaussian fit with median $\langle \Delta \Upsilon \rangle = -0.01$ and $\sigma_{\Delta \Upsilon} = 0.09$.
The black solid line illustrates the estimated intrinsic dispersion unaffected by observational uncertainties.} \label{MLblue} \end{figure} The star-forming (${\rm SF}$) population presents a more complex behaviour (Fig.~\ref{MLtot}), and we modelled it as \begin{align} D_{\rm SF}\,(\Upsilon_0, {\mathcal C}_0\,|\,\theta_{\rm SF}) =&\,P_G\,({\mathcal C}_0\,|\,\mu_{\rm SF}, s_{\rm SF})\,\bigg[1 + {\rm erf}\,\bigg(\alpha_{\rm SF}\,\frac{{\mathcal C}_0 - \mu_{\rm SF}}{\sqrt{2}s_{\rm SF}}\bigg)\bigg]\nonumber\\ &P_G\,(\Upsilon_0\,|\,A_{\rm SF} + B_{\rm SF}\,{\mathcal C}_0 + C_{\rm SF}\,{\mathcal C}_0^2, \sigma_{\rm SF}), \end{align} where $\mu_{\rm SF}$, $s_{\rm SF}$, and $\alpha_{\rm SF}$ describe the intrinsic $(g-i)$ colour distribution, $A_{\rm SF}$, $B_{\rm SF}$ and $C_{\rm SF}$ are the coefficients that define the MLCR for star-forming galaxies, and $\sigma_{\rm SF}$ is the intrinsic dispersion of such relation. There are important differences with respect to the quiescent population. First, the distribution of ${\mathcal C}_0$ is not symmetric (Fig.~\ref{MLtot}). We accounted for this asymmetry by adding the error function term and the parameter $\alpha_{\rm SF}$, which controls the skewness of the distribution \citep{azzalini05}. Second, we found that the dependence of $\Upsilon_0$ on the colour is not linear, but follows a second-order polynomial. This is motivated by the apparent curvature present at the redder colours in Fig.~\ref{MLtot}. To choose between the linear and the parabolic MLCR, we used the Bayesian information criterion (BIC, \citealt{schwarz78}), defined as \begin{equation} {\rm BIC} = N\log n - 2\log {\mathcal L}\,(\Upsilon, {\mathcal C}\,|\,\langle \theta_{\rm MC} \rangle), \end{equation} where $N$ is the number of parameters in the model and $n$ the number of galaxies in the sample. We find $\Delta {\rm BIC} = {\rm BIC}_{\rm par} - {\rm BIC}_{\rm lin} = -750$, favouring the inclusion of $C_{\rm SF}$ in the modelling. For consistency, we checked the application of a parabolic MLCR for quiescent galaxies. We found $\Delta {\rm BIC} = {\rm BIC}_{\rm par} - {\rm BIC}_{\rm lin} = 1.5$, thus favouring the simpler linear model. Figure~\ref{MLtot} also suggests an asymmetric distribution in $\Upsilon_0$, instead of the assumed Gaussian. We studied the inclusion of an additional skew parameter for $\Upsilon_0$, but it was consistent with zero, and in this case the BIC favours the simpler Gaussian model without the extra skew parameter. Finally, we have a set of seven parameters to describe the distribution of star-forming galaxies, $\theta_{\rm SF} = \{\mu_{\rm SF}, s_{\rm SF}, \alpha_{\rm SF}, A_{\rm SF}, B_{\rm SF}, C_{\rm SF}, \sigma_{\rm SF}\}$. We used flat priors, $P(\theta_{\rm SF}) = 1$, except for the dispersions $s_{\rm SF}$ and $\sigma_{\rm SF}$, which we required to be positive. We note that the redshift dimension is not included in our analysis because we are assuming that the MLCRs do not depend on redshift. This was initially motivated by the excellent agreement with the local relation from T11 shown in Fig.~\ref{MLtot}, and we further test this assumption in Sect.~\ref{MLz}. \section{Results}\label{results} We present the derived $i-$band MLCRs for both quiescent and star-forming galaxies in Sect.~\ref{mlratiomodel}, and explore the redshift dependence of the relations in Sect.~\ref{MLz}. The $g-$ and $r-$band MLCRs are presented in Sect.~\ref{MLgr}, and we compare our results with the literature in Sect.~\ref{mlratiolit}. \subsection{Mass-to-light ratio vs.
colour relation for quiescent and star-forming galaxies}\label{mlratiomodel} In this section, we present the results of our modelling. They are summarised in Fig.~\ref{MLred} for quiescent galaxies and Fig.~\ref{MLblue} for the star-forming ones. The derived parameters are compiled in Table~\ref{param_tab}. We find that, in both cases, the assumed model satisfactorily describes the observed distributions in colour and mass-to-light ratio space. We start by presenting the results for quiescent galaxies. We estimate \begin{equation} \Upsilon_{\rm Q} = 1.02 + 0.84\,(g-i)\label{eq_mlred} \end{equation} with a small intrinsic dispersion of $\sigma_{\rm Q} = 0.02$ dex. The observed dispersion, which includes the observational errors, was estimated from a Gaussian fit to the distribution of the variable $\Delta \Upsilon = \Upsilon - \Upsilon_{\rm Q}$, yielding $\sigma_{\Delta \Upsilon} = 0.07$ dex ({\it bottom panel} in Fig.~\ref{MLred}). This value is lower than the 0.1 dex obtained with the local MLCR from T11. \begin{figure}[t] \centering \resizebox{\hsize}{!}{\includegraphics{DML_red_z.pdf}}\\ \resizebox{\hsize}{!}{\includegraphics{DML_blue_z.pdf}} \caption{Comparison between the observed ALHAMBRA mass-to-light ratio and the expected one from our median relation as a function of redshift for quiescent ({\it top panel}) and star-forming ({\it bottom panel}) galaxies with $F814W \leq 23$ (gray dots). The solid lines show level contours in the number of galaxies, starting at one galaxy and increasing in steps of five galaxies for quiescent galaxies and of 15 galaxies for star-forming galaxies. The dashed lines mark null difference.} \label{DMLz} \end{figure} For star-forming galaxies, we find \begin{equation} \Upsilon_{\rm SF} = 1.411 + 0.212\,(g-i) + 0.144\,(g-i)^2\label{eq_mlblue} \end{equation} with an intrinsic dispersion of $\sigma_{\rm SF} = 0.06$ dex. The observed dispersion in this case is $\sigma_{\Delta \Upsilon} = 0.09$ dex ({\it bottom panel} in Fig.~\ref{MLblue}), similar to the 0.1 dex obtained with the T11 relation. The higher complexity of the star-forming population is not surprising, given the combination of an underlying old population that dominates the stellar mass, a young population that dominates the emission in the bluer bands, and the presence of different dust contents. Despite this fact, a well-defined MLCR with a small dispersion is inferred from our data. We conclude that the encouraging 0.1 dex precision in the mass-to-light ratio estimation from the optical colour $(g-i)$ found by T11 is even tighter after the observational uncertainties are accounted for. The dispersion derived with ALHAMBRA data at $z < 1.5$ is 0.02 dex for quiescent galaxies and 0.06 dex for star-forming galaxies. These small dispersions refer to the statistical analysis of the data, and systematic uncertainties related to the assumed stellar population models, IMF, SFHs, extinction law, etc. are not included in the analysis (see \citealt{portinari04}, \citealt{barro11mass}, and \citealt{courteau14} for a detailed discussion about systematics in stellar mass estimations). The similarity between T11 and our values suggests that the assumed extinction law and the SFHs are not an important source of systematics, with stellar population models and the IMF being the main contributors. The application of different stellar population models, such as those from \citet{vazdekis16}, \citet{maraston05}, or \citet{conroy10}, is beyond the scope of the present work.
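As a usage illustration of these relations, consider a hypothetical quiescent galaxy with rest-frame colour $(g-i) = 1.0$ and absolute magnitude $M_i = -22$ (AB); the input values are made up, while the coefficients come from Eq.~(\ref{eq_mlred}) and the stellar-mass relation of Sect.~\ref{intro}:
\begin{verbatim}
# Hypothetical example: quiescent galaxy, (g - i) = 1.0, M_i = -22 (AB)
ml = 1.02 + 0.84 * 1.0         # log10(M*/L_i) = 1.86, quiescent MLCR
log_mass = ml - 0.4 * (-22.0)  # log10(M*/Msun) = 10.66
\end{verbatim}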
\begin{table*}[t] \caption{ALHAMBRA mass-to-light ratio vs. $(g-i)$ colour relation.} \label{param_tab} \begin{center} \begin{tabular}{lcccccc} \hline\hline\noalign{\smallskip} Optical band & Galaxy type & $A$ & $B$ & $C$ & $\sigma_{\rm int}$ & $\sigma_{\Delta \Upsilon}$\\ \noalign{\smallskip} \hline \noalign{\smallskip} $i$ band & Quiescent & $1.02 \pm 0.01$ & $0.84 \pm 0.01$ & $\cdots$ & $0.022 \pm 0.001$ & $0.07$\\ & Star-forming & $1.411 \pm 0.003$ & $0.212 \pm 0.007$ & $0.144 \pm 0.005$ & $0.061 \pm 0.001$ & $0.09$\\ \noalign{\smallskip} \hline \noalign{\smallskip} $r$ band & Quiescent & $1.02 \pm 0.01$ & $1.01 \pm 0.01$ & $\cdots$ & $0.021 \pm 0.001$ & $0.08$\\ & Star-forming & $1.453 \pm 0.003$ & $0.373 \pm 0.008$ & $0.128 \pm 0.007$ & $0.063 \pm 0.001$ & $0.10$\\ \noalign{\smallskip} \hline \noalign{\smallskip} $g$ band & Quiescent & $0.98 \pm 0.02$ & $1.28 \pm 0.02$ & $\cdots$ & $0.014 \pm 0.001$ & $0.07$\\ & Star-forming & $1.386 \pm 0.003$ & $0.707 \pm 0.009$ & $0.078 \pm 0.007$ & $0.057 \pm 0.001$ & $0.09$\\ \hline \end{tabular} \end{center} \end{table*} \subsection{Redshift evolution of the mass-to-light ratio vs. colour relation}\label{MLz} The results presented in the previous section imply a tight relation of the mass-to-light ratio with the optical colour $(g-i)$. In our analysis, we assumed such a relation to be redshift independent, motivated by the good agreement with the $z \sim 0.2$ results from T11 (Fig.~\ref{MLtot}). We present the redshift evolution of $\Delta \Upsilon$ in Fig.~\ref{DMLz}, both for quiescent and star-forming galaxies. We find no evidence of redshift evolution either for quiescent or star-forming galaxies. The median $|\Delta \Upsilon|$ at any redshift is always below 0.02 dex, and a simple linear fit constrains the possible residual evolution with $z$ to less than $0.05$ dex since $z = 1.5$. We conclude therefore that the relations presented in Eq.~(\ref{eq_mlred}) and Eq.~(\ref{eq_mlblue}) have not changed appreciably during the last 9 Gyr of the Universe, with quiescent and star-forming galaxies evolving along the derived relations since $z = 1.5$. \subsection{Mass-to-light ratio vs. colour relation in the $r$ and $g$ bands}\label{MLgr} We complement the results in the previous sections with the estimation of the intrinsic relation between the mass-to-light ratio in the $r$ and $g$ bands and the $(g-i)$ colour, both for quiescent and star-forming galaxies. We confirm the tight relations found with the $i$-band luminosity and the curvature for the star-forming population. We present the estimated relations in Table~\ref{param_tab} for future reference. We find that the normalizations of the MLCRs are similar in the $gri$ bands at the 0.05 dex level. This is because our luminosities are expressed in AB units, so a null colour implies the same luminosity in all the bands, which share a common stellar mass. Regarding the slope for the quiescent population, it is larger for bluer bands. This implies that at the median colour of the quiescent population, $\langle (g-i) \rangle = 1$, the mass-to-light ratio decreases from $\log_{10}(M_{\star}/L_{g}) = 2.26$ to $\log_{10}(M_{\star}/L_{i}) = 1.86$, reflecting the larger contribution of redder low-mass stars to the stellar mass budget. In the case of the star-forming galaxies, the parameter $B_{\rm SF}$ is larger at bluer bands, but the parameter $C_{\rm SF}$ is smaller. This implies a lower curvature of the MLCR in the $g$ band.
We checked that the quadratic model is still favoured by the data even in the $g$-band case. The intrinsic dispersion in the MLCRs is still low and similar to the $i$-band values, with $\sigma_{\rm Q} \sim 0.02$ dex and $\sigma_{\rm SF} \sim 0.06$ dex. Finally, the observed dispersion, affected by observational errors, is also similar to the fiducial $i$-band values, as summarised in Table~\ref{param_tab}. We conclude that the MLCR holds in the optical range covered by the $gri$ bands, confirming the tight correlation between optical mass-to-light ratios and the rest-frame colour $(g-i)$. \begin{figure}[t] \centering \resizebox{\hsize}{!}{\includegraphics{MLgi_lit.pdf}} \caption{Comparison of the observed mass-to-light ratio vs. $(g-i)$ colour in ALHAMBRA with MLCRs from the literature. The contours are the same as in Fig.~\ref{MLtot}. The dashed green line is from the observational study of T11. The other black lines are from theoretical expectations: \citet[][dotted]{zibetti09} and \citet[][solid]{roediger15}. All the MLCRs have been scaled to a \citet{chabrier03} IMF and referred to BC03 models.} \label{MLlit} \end{figure} \subsection{Comparison with the literature}\label{mlratiolit} In addition to the T11 work, several studies in the literature have tackled the problem of the MLCR, both theoretically and observationally (see references in Sect.~\ref{intro}). We present the $i$-band mass-to-light ratio vs. $(g-i)$ colour from previous work in Fig.~\ref{MLlit}. We only present the colour range imposed by the ALHAMBRA data, $0 < (g-i) < 1.5$ (Fig.~\ref{MLtot}). All the MLCRs have been scaled to a \citet{chabrier03} IMF and referred to BC03 stellar population models to minimise systematic differences. We find a reasonably good agreement with the theoretical results from \citet{roediger15} and \citet{zibetti09}. The comparison of these predictions with our values yields a bias of $\langle \Delta \Upsilon \rangle = -0.01$ and $0.08$, and a dispersion of $\sigma_{\Delta \Upsilon} = 0.17$ and $0.19$, respectively. We highlight the predictions from \citet{roediger15}, which have no bias and only a factor of two larger dispersion than our optimal MLCRs. From the observational point of view, we recall the agreement with the results from T11 (Fig.~\ref{MLtot}). Their relation provides no bias and a dispersion of $\sigma_{\Delta \Upsilon} = 0.1$. We also compare our results with the popular work by \citet{bell03}. Their relation yields a bias of $\langle \Delta \Upsilon \rangle = -0.12$ and again a dispersion of $\sigma_{\Delta \Upsilon} = 0.1$. We note that the MLCR of \citet{bell03} was estimated with \texttt{PEGASE} \citep{pegase} stellar population models and a ``diet Salpeter'' IMF. Hence, we applied to the relations in \citet{bell03} a -0.10 dex offset to account for the difference in the stellar population models, as estimated by \citet{barro11mass}, and a -0.15 dex offset to scale the IMF. Following T11, we conclude that the range of colours covered by the observed galaxies, which is a consequence of their formation and evolution, restricts the parameter space of the models and provides tighter MLCRs than expected from theory. The bias with respect to previous work is at the $\sim 0.1$ dex level, supporting the tight relations derived from ALHAMBRA data.
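As a practical note, the rescaling applied above to the \citet{bell03} relation amounts to a constant offset; a one-line sketch (ours, with the offsets quoted in the text; the \citet{bell03} coefficients themselves are not reproduced here):
\begin{verbatim}
# Offsets applied to the Bell et al. (2003) MLCR before comparison:
# -0.10 dex for the PEGASE -> BC03 model change (Barro et al. 2011)
# and -0.15 dex for the "diet Salpeter" -> Chabrier IMF rescaling.
def rescale_bell03(log_ml_bell03):
    return log_ml_bell03 - 0.10 - 0.15
\end{verbatim}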
\section{Summary and conclusions}\label{conclusions} We used the redshifts, stellar masses and rest-frame colours derived with \texttt{MUFFIT} for 76642 ALHAMBRA sources at $z \leq 1.5$ to explore the relation between the $i$-band mass-to-light ratio and the rest-frame $(g-i)$ colour. As shown by T11, there is a tight (0.1 dex) MLCR in the GAMA survey at $z \sim 0.2$, and we expand their study up to $z = 1.5$. We found that the $i-$band MLCR is also present in ALHAMBRA at $z \leq 1.5$, for both quiescent and star-forming galaxies. The data suggest a linear MLCR for quiescent galaxies and a quadratic one for star-forming systems, as summarised in Table~\ref{param_tab}; these relations also hold for the $g$ and $r$ luminosities. The relations present an intrinsic dispersion, after accounting for observational uncertainties, of $\sigma_{\rm Q} = 0.02$ dex and $\sigma_{\rm SF} = 0.06$ dex. These dispersions are intrinsic, and must be accounted for in addition to the observational uncertainties of the colour. We also stress that they refer to statistical dispersions, and the final error budget in mass-to-light ratio predictions should account for systematic uncertainties ($\sim0.2$ dex; e.g. \citealt{barro11mass}) related to the assumed stellar population models, IMF, SFHs, extinction law, etc. Our measurements suggest that the estimated MLCRs are redshift-independent at least since $z \sim 1.5$. That is, quiescent and star-forming galaxies have evolved along the MLCRs in the last 9 Gyr of the Universe, preserving the observed relations with time. We compared our data with other MLCRs proposed in the literature. The observational relation of T11, based on GAMA survey data, reproduces our values with no bias and a dispersion of $\sigma_{\Delta \Upsilon} = 0.1$ dex. Regarding theoretical studies, the MLCR from \citet{roediger15} matches our measurements best: the bias is below 0.1 dex and the dispersion is $\sigma_{\Delta \Upsilon} = 0.17$ dex. Our results could be expanded in several ways. The analysis could be repeated using different stellar population models to test the redshift independence of the relations and the curvature of the star-forming MLCR. The study of the MLCR at higher redshifts would provide extra clues about the absence of redshift evolution, for which a NIR-selected ALHAMBRA sample is needed \citep{nieves17}. Finally, the study at masses lower than $\log_{10} M_{\star} \sim 8$ would test the robustness of the results at the bluer end of the relation, where intense star-forming episodes could compromise the stellar masses estimated with our current techniques. The derived relations can be used to estimate stellar masses with photometric redshift codes based on a limited set of empirical templates, such as \texttt{BPZ2}. The intrinsic MLCRs, unaffected by observational errors, are the priors needed to define the probability distribution function (PDF) of the stellar mass. The PDF-based estimator of the luminosity function was presented by \citet{clsj17lfbal} as part of the PROFUSE\footnote{\tt profuse.cefca.es} project, which uses PRObability Functions for Unbiased Statistical Estimations in multi-filter surveys, and was successfully applied to estimate the $B$-band luminosity function at $z < 1$ \citep{clsj17lfbal} and the $UV$ luminosity function at $2.5 \leq z < 4.5$ \citep{viironen18} in ALHAMBRA. The present paper is a fundamental step towards a PDF-based estimator of the stellar mass function.
\begin{acknowledgements} We dedicate this paper to the memory of our six IAC colleagues and friends who met with a fatal accident in Piedra de los Cochinos, Tenerife, in February 2007, with a special thanks to Maurizio Panniello, whose teachings of \texttt{python} were so important for this paper. We thank R. Angulo, S. Bonoli, A. Ederoclite, C. Hern\'andez-Monteagudo, A. Mar\'{\i}n-Franch, A. Orsi, and all the CEFCA staff, post-docs, and students for useful and productive discussions. This work has been mainly funded by the FITE (Fondos de Inversiones de Teruel) and the Spanish MINECO/FEDER projects AYA2015-66211-C2-1-P, AYA2012-30789, AYA2006-14056, and CSD2007-00060. We also acknowledge the financial support from the Arag\'on Government Research Groups E96 and E103. We acknowledge support from the Spanish Ministry for Economy and Competitiveness and FEDER funds through grants AYA2010-15081, AYA2010-22111-C03-01, AYA2010-22111-C03-02, AYA2012-39620, AYA2013-40609-P, AYA2013-42227-P, AYA2013-48623-C2-1, AYA2013-48623-C2-2, AYA2016-76682-C3-1-P, AYA2016-76682-C3-3-P, ESP2013-48274, Generalitat Valenciana project Prometeo PROMETEOII/2014/060, Junta de Andaluc\'{\i}a grants TIC114, JA2828, P10-FQM-6444, and Generalitat de Catalunya project SGR-1398. K. V. acknowledges the {\it Juan de la Cierva incorporaci\'on} fellowship, IJCI-2014-21960, of the Spanish government. A. M. acknowledges the financial support of the Brazilian funding agency FAPESP (Post-doc fellowship - process number 2014/11806-9). B. A. has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 656354. M. P. acknowledges financial support from the Ethiopian Space Science and Technology Institute (ESSTI) under the Ethiopian Ministry of Science and Technology (MoST). This research made use of \texttt{Astropy}, a community-developed core \texttt{Python} package for Astronomy \citep{astropy}, and \texttt{Matplotlib}, a 2D graphics package for \texttt{Python}, used for publication-quality image generation across user interfaces and operating systems \citep{pylab}. \end{acknowledgements}
\section{Introduction} During recent years, micro-blogging has become a source of current and topical news. Twitter\footnote{www.twitter.com} is one of the most popular micro-blogging services; it brings together millions of users and allows them to publish and exchange short messages, known as \emph{tweets}. Twitter has been a pioneer in providing APIs to access public data since 2006\footnote{https://developer.twitter.com/en.html}, enabling applications to retrieve tweets using a set of keywords. However, there is no control over the retrieved tweets, which are not always relevant. Reaching relevant answers requires multiple calls and filtering of the retrieved results. Several research works have aimed at retrieving tweets relevant to a given query~\cite{gabielkov2014sampling,gouriten2014scalable,li2013towards,safran2012improving}. Researchers have tried to improve retrieval based on features such as hashtags, retweets and mentions in order to obtain the most relevant tweets from Twitter. For example, one basic and simple method, when a query contains just one word, is to measure the frequency of this word in the different features of Twitter~\cite{nazi2015walk}. Researchers have also used external information extracted from sources such as Wikipedia to enrich a query with the most similar words, so as to retrieve the results most related to the given query~\cite{guisado2016enrich}. In this paper, we exploit different external resources, such as \textit{Wordnet}, \textit{Wordnik}, \textit{DBPedia}, and the \textit{New York Times (NYT)}, to obtain the most complete set of keywords similar to the original query keywords. We also define a smart crawling algorithm based on tweet features to reach the most relevant tweets as early as possible while going beyond the limitations of the Twitter API. More precisely, we define a new crawling algorithm called \textbf{Smart Twitter Crawling} (STiC) that considers two different aspects: (i) enriching the original query using external sources, and (ii) crawling the Twitter graph using a DFS search based on a new scoring function to reach the nodes (tweets) most related to the targeted topic. To measure the relevance of a tweet, the algorithm assigns it a score that takes into account its content, its hashtags and the users related to it, through relations such as posting, replying, being mentioned or retweeting. Given this score, we are able to select highly related tweets in each iteration and to continue by adding further valuable related tweets. Different experiments have been carried out on tweets collected for different kinds of query keywords to evaluate the precision of the STiC algorithm. Thanks to our approach, compared to a simple BFS search, we increased the precision of related retrieved tweets up to 86\% for some queries. We also obtained a significant improvement in the number of retrieved results in comparison with the simple BFS model. In this paper, we first present related work in Section~\ref{sec:relatedwork}. Then, we present in detail our smart Twitter crawling approach, including query enrichment and Twitter graph crawling, in Section~\ref{sec:approach}. In Section~\ref{sec:experiments}, we present and discuss our results using different queries. Finally, in Section~\ref{sec:conclusion} we give our conclusions and perspectives. \section{Related Work} \label{sec:relatedwork} Even before the emergence of social media, crawling Web pages was a common practice~\cite{li2013towards}.
Finding the Web pages related to a given topic has been one of the interesting problems to study~\cite{safran2012improving}. A commonly applied methodology used neural networks and vector space models to compute priority models~\cite{safran2012improving}. Diligenti et al.~\cite{diligenti2000focused}, in 2000, introduced a model for focused crawling based on context graphs, assigning appropriate credits to documents. Safran et al.~\cite{safran2012improving}, in 2012, proposed a new approach to improve relevance prediction in focused Web crawlers. They chose Na\"{\i}ve Bayes as the base prediction model and used four relevant attributes to create their prediction model: URL, anchor text, surrounding texts, and parent pages. They extended the list of keywords related to a topic by using WordNet and extracted relevant initial seed URLs automatically by choosing the top-$k$ URLs retrieved from the Google, Yahoo and MSN search engines. Gouriten et al.~\cite{gouriten2014scalable}, in 2014, introduced an adaptive, scalable and generic system for focused crawling to identify and optimize the relevant subsystems. Their algorithm was defined for focused Web crawling, topic-centered Twitter user crawling, deep Web siphoning through keyword search, gossip peer-to-peer search and real-world social network search to answer a query. Wang et al., in 2015~\cite{wang2015adaptive}, studied how to crawl microblog feeds in real time. They proposed an adaptive crawling model which iteratively extracts hashtags from Twitter to obtain a list of tweets relevant to a query. Cha et al.~\cite{cha2010measuring} worked on how to find the most influential users in Twitter, and their results can be useful to complement the idea of topic-focused crawling. In 2010, Wang et al.~\cite{wang2010unbiased} proposed a method for unbiased crawling of tweets based on a Metropolis-Hastings Random Walk (MHRW), using USDSG in the new method. Li et al., in 2013~\cite{li2013towards}, proposed a data platform to automatically monitor ``target'' tweets from the Twitter stream for any specific topic. They designed an Automatic Topic-focused Monitor (ATM), which first samples tweets from the Twitter stream and then selects a list of keywords to track based on the samples. Gabielkov et al., in 2014~\cite{gabielkov2014sampling}, worked on sampling techniques for studying online social networks (OSNs). They considered two scenarios for sampling and aimed to find the best technique for each of them: in the first, they look for the most popular users; in the second, they aim to obtain an unbiased sample of users. They showed that the classical sampling methods are highly biased by high-degree nodes~\cite{gabielkov2014studying}. In \cite{kwak2010twitter}, it was shown that BFS has a large bias when the number of requests to the API is limited. In RW, the choice of the next node to visit depends on the degree of the node. They used the USDSG (Unbiased Sampling for Directed Social Graphs) algorithm, proposed in \cite{wang2010unbiased}, which is a modification of RW that discards a random jump to a node with a probability proportional to the degree of the node and replaces arcs with undirected edges.\newline Selecting keywords to retrieve relevant documents has been studied in many academic works. As mentioned earlier, Safran et al.~\cite{safran2012improving}, in 2012, used WordNet to extend the extracted word set.
Li et al., in 2013~\cite{li2013towards}, proposed the ATM framework to select keywords through a constrained optimization approach, which finds near-optimal keywords with guarantees (e.g., that keywords are not too specific) and considers two types of costs. ATM also updates the keywords over iterations, monitoring the Twitter stream continuously. In 2015, Wang et al.~\cite{wang2015adaptive} reviewed the retrieved tweets to identify new keywords for automatic live event tweet collection; these new keywords were mostly based on the hashtags embedded inside the tweets. Guisado et al., in 2016~\cite{guisado2016enrich}, presented a query rewriting cloud service. Their aim is to solve the problem of vocabulary mismatch and topic inexperience of users. They proposed a method called ENRICH, which offers a generic solution by analyzing websites using Wikipedia and identifying entities. \\ \section{Smart Twitter Crawling Approach} \label{sec:approach} Monitoring the set of tweets related to a target topic is an unsolved problem~\cite{congosto2017t}. In this section, we present the Smart Twitter Crawling (STiC) approach that we defined as a solution to this problem. Figure~\ref{fig:architecture} gives an overview of our approach. The STiC algorithm enriches the initial keywords using external sources before querying the Twitter graph. It builds an initial sub-graph providing related seeds. The crawling is then based on a DFS search and exploits the features of each considered tweet to assign it a score and to select the most relevant tweets to be crawled. The result of the crawl is a sub-graph made of the crawled nodes and the edges between them. This sub-graph is stored in a graph database, which is Neo4j in our work. Before going any further into details, we first present the input and output data representation of the STiC algorithm. \begin{figure} \includegraphics[width=\linewidth, height=70mm]{Architecture.png} \caption{Architecture of our approach} \label{fig:architecture} \end{figure} \subsection{Input and Output of STiC Algorithm} \label{sec:representation} Twitter data can be represented as a graph $\mathcal{T} = <\mathcal{V} , \mathcal{U}> $, where $\mathcal{V}$ is the set of nodes and $\mathcal{U}$ is the set of directed edges between nodes. Different types of nodes are defined in $\mathcal{V}$: \begin{itemize} \item $t$ is a \emph{tweet}, accompanied by attribute values, which include the text of the tweet and its identifier. \item $h$ is a \emph{hashtag} extracted from the tweet. \item $u$ is a \emph{user}, accompanied by its identifier value. \end{itemize} Different types of relations are defined in $\mathcal{U}$: \begin{itemize} \item \textbf{$<t,h>$} edge called $Has\_Hashtag$, which relates a tweet $t$ to a hashtag $h$ it contains. \item \textbf{$<t,t^\prime>$} edges called: \begin{itemize} \item $Quotes$, which relates a tweet $t$ to a tweet $t^\prime$. In this case, the text of node $t$ contains the text of $t^\prime$ in addition to its own text. \item $Replies\_To$, which relates a reply tweet $t$ to the original tweet $t^\prime$. \item $ReTweets$, which relates a retweet $t$ to the original tweet $t^\prime$. In this case, the text of $t$ is exactly the same as the text of tweet $t^\prime$. \end{itemize} \item \textbf{$<t,u>$} edge called $Mentions$, which relates a tweet $t$ to the user $u$ mentioned in it. \item $\textbf{$<u,t>$}$ edges called: \begin{itemize} \item $Favorites$, which relates a user $u$ to a tweet $t$ and means that $u$ likes $t$.
\item $Posts$, which relates a user $u$ to a tweet $t$ and means that $u$ posted $t$. \end{itemize} \end{itemize} \begin{itemize} \item $\textbf{$<u,u\prime>$}$ edge called $Follows$, which relates a user $u$ to a user $u\prime$ he follows. \end{itemize} The input of the STiC algorithm is the list of keywords after enrichment and an initial sub-graph of Twitter, in which nodes carry no additional information beyond what is available from the Twitter API. The output of the algorithm is a sub-graph in which each node is accompanied by a score value. The STiC algorithm defines three main procedures: \begin{itemize} \item \textbf{Query enrichment procedure:} the first step of the algorithm, which enriches the list of keywords for the given query. \item \textbf{Select new node procedure:} the procedure for selecting the next node to visit in the crawling process. \item \textbf{Smart crawling procedure:} the main process of the algorithm, which uses the enriched list of keywords and the node selection procedure to visit new nodes and crawl Twitter. \end{itemize} \subsection{Query enrichment} \label{sec:queryEnrichment} The REST Twitter APIs offer the possibility to retrieve tweets using a set of keywords. When a user tries to retrieve tweets, he is not always aware of the best set of keywords to use in order to obtain the correct subset of tweets. In our research, we use external sources to enrich the set of keywords that the user specifies in the target query. Alg.~\ref{alg:CrawlAlgorithmQE} expresses the process of enriching a query. In this procedure, we collect all related words from different data sources, such as the NYT~\cite{zhao2011comparing}, Wikipedia~\cite{guisado2016enrich} and WordNet~\cite{safran2012improving} APIs. We identified a list of APIs that provide news, articles or synonyms as sources of information: \textit{New York Times (NYT)}, \textit{Wordnet}, \textit{Wordnik}, \textit{DBPedia}, \textit{Thesaurus}, \textit{Datamuse}, \textit{WordsAPI}, \textit{Pydictionary}. We also assign a weight to each keyword to specify its relevance to the original query keywords. To compute this weight, we consider the subset of keywords given by each external source as a separate set and we calculate the IDF of each keyword. For this aim, each external source is considered as a document, and the term frequency is calculated as the number of occurrences of the word in all documents. The weight of each word is then its term frequency in all documents multiplied by its IDF score. For instance, for the original query keyword \textit{obama}, we retrieve the following terms and their frequencies: (\textit{barack obama}: 4, \textit{barack hussein obama}: 3, \textit{barack}: 3, \textit{obama}: 3, \textit{community organizer in chief}: 2, \textit{barak obama}: 2, etc.) \\ Then the weight (total term frequency $\times$ IDF score) for each term is computed: (\textit{barack obama}: 0.96, \textit{barack hussein obama}: 0.89, \textit{community organizer in chief}: 0.78, \textit{barak obama}: 0.78, etc.)\\ Finally, we select the top-scoring keywords: (\textit{barack obama}, \textit{barack hussein obama}, \textit{community organizer in chief}, \textit{barak obama}, \textit{barrack obama}, \textit{president obama}, \textit{barackobama}, \textit{brack obama}, \textit{obama barack}, etc.) We finally merge all keywords extracted from all APIs with their calculated weights, and we sort them based on their weights and apply a threshold $\alpha$.
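A minimal sketch of this weighting scheme is given below. It is not the authors' implementation; in particular, the exact IDF variant is an assumption (here a smoothed $\log(N/\mathit{df}) + 1$, so that terms returned by every source, such as \textit{barack obama} in the example above, keep a non-zero weight).
\begin{verbatim}
# Sketch of the keyword weighting: each external source is one "document";
# a term's weight is its total frequency across sources times its IDF.
import math
from collections import Counter

def keyword_weights(source_keywords):
    # source_keywords: dict mapping source name -> list of suggested terms
    tf, df = Counter(), Counter()
    for terms in source_keywords.values():
        tf.update(terms)        # total term frequency over all sources
        df.update(set(terms))   # number of sources containing the term
    n = len(source_keywords)
    # smoothed IDF (assumed variant), avoids zeroing ubiquitous terms
    return {t: tf[t] * (math.log(n / df[t]) + 1.0) for t in tf}

def most_related_keywords(weights, alpha):
    # keep only keywords scoring above the threshold alpha, best first
    return sorted((t for t, w in weights.items() if w >= alpha),
                  key=weights.get, reverse=True)
\end{verbatim}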
The key factor for selecting $\alpha$ is that the keywords with a score above the threshold should not be irrelevant to the query. This threshold may vary, and depends on the type of the query and on the results of the query enrichment. Alg.\ref{alg:CrawlAlgorithmQE} describes this process; its function \textit{mostRelatedKeywords} returns the keywords whose weight is above the threshold $\alpha$. \begin{algorithm}[t] \DontPrintSemicolon \SetKwInput{KwInput}{Input} \SetKwInput{KwOutput}{Output} \SetKwProg{Fn}{Function}{:}{} \caption{STiC Algorithm - Query Enrichment} \label{alg:CrawlAlgorithmQE} \KwInput {$external\_sources\_list$ \tcp*{\textit{New York Times(NYT)}, \textit{Wordnet}, \textit{Wordnik},\textit{DBPedia}, \textit{Thesaurus}, \textit{Datamuse}, \textit{WordsAPI}, \textit{Pydictionary}}} \KwInput {$\alpha$ \tcp*{\textit{keyword relevance threshold}}} \KwInput {$query$} \KwOutput{$keywords\_list$} \BlankLine \For {$source$ in $external\_sources\_list$} { {$keywords\_list.add(related\_words(source, query))$} } {$calculateTFIDFScore(keywords\_list)$}\\ {\textbf{return} {$mostRelatedKeywords(keywords\_list,\alpha)$}} \end{algorithm} \subsection{Smart Crawling} \label{sec:smartCrawling} In Alg.\ref{alg:CrawlAlgorithmSC}, the list of keywords from Alg.\ref{alg:CrawlAlgorithmQE}, the initial node and the number of iterations are given to the procedure; in the first iteration, a sub-graph made of the neighbors of the initial node is created and the scores of the nodes are updated. The initialization of the graph is crucial for the success of the following iterations of the algorithm: a bad initialization can lead to a graph where no node is related to the query. Initially, we manually chose a single node we knew was relevant (e.g. the official hashtag if it is in the set of keywords specified by the user). While quite effective, this selection cannot be done transparently, since it requires a manual choice for each query and risks steering the crawl towards a specific part of the data. Instead, we initialize the graph automatically with the result of a simple Twitter API search using the enriched keywords. These preliminary results allow for the first round of the crawl. Then, for the given number of iterations, the crawler visits the node selected by Alg.\ref{alg:CrawlAlgorithmNS}, explained in Section \ref{sec:nodeSelection}, and adds its neighbors to the list of candidate nodes, which is given to Alg.\ref{alg:CrawlAlgorithmNS} in the next iteration. Before moving to the next iteration, the scores of the nodes are updated, as explained in Section \ref{sec:scoreCalculation}. In the end, the algorithm returns a sub-graph of Twitter created by crawling nodes based on the score defined for each one.
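Alg.\ref{alg:CrawlAlgorithmSC} below gives the pseudocode of this loop. As an illustration only, a minimal Python sketch could look as follows; the helpers \texttt{get\_neighbors}, \texttt{update\_scores} and \texttt{select\_new\_node} are assumptions of the sketch, standing for the Twitter API call, the score update of Section \ref{sec:scoreCalculation} and the node selection of Section \ref{sec:nodeSelection}.
\begin{verbatim}
def smart_crawl(initial_node, iterations,
                get_neighbors, update_scores, select_new_node):
    # Minimal sketch of the crawling loop; the helper functions are
    # placeholders for the API, scoring and selection procedures.
    # Assumes the queue always contains an unvisited candidate.
    queue, visited = set(), []
    current = initial_node
    for _ in range(iterations):
        queue.update(get_neighbors(current))  # one Twitter API call
        queue.discard(current)
        update_scores(current, visited)       # propagate feedback
        visited.append(current)
        current = select_new_node(queue)      # node selection step
        while current in visited:             # skip crawled nodes
            current = select_new_node(queue)
    return visited
\end{verbatim}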
\begin{algorithm}[t] \DontPrintSemicolon \SetKwInput{KwInput}{Input} \SetKwInput{KwOutput}{Output} \SetKwProg{Fn}{Function}{:}{} \caption{STiC Algorithm - Smart Crawling} \label{alg:CrawlAlgorithmSC} \KwInput {$keywords\_list$} \KwInput {$iterations$} \KwInput {$initial\_relevant\_node$} \KwOutput{$visited\_nodes$ } \BlankLine { $i \gets \textit{iterations}$ }\\ { ${ n_0 \gets \textit{initial\_relevant\_node} }$ }\\ { $current\_node \gets n_0$ }\\ { $update\_queue\_nodes(current\_node\_neighbors)$ }\\ { $update\_nodes\_scores()$ }\\ { $add\_to\_visited\_list(current\_node)$ }\\ { $\textit{i} \gets \textit{i - 1}$ }\\ \While {i $>$ 0} { { $current\_node \gets \textit{selectNewNode(queue\_nodes)}$ }\\ \While{$is\_visited(current\_node)$} { { $current\_node \gets \textit{selectNewNode(queue\_nodes)}$ }\\ } { $update\_queue\_nodes(current\_node\_neighbors)$ }\\ { $update\_nodes\_scores()$ }\\ { $add\_to\_visited\_list(current\_node)$ }\\ { $\textit{i} \gets \textit{i - 1}$ }\\ } \textbf{return} $\textit{visited\_list}$ \end{algorithm} \subsection{Node Selection} \label{sec:nodeSelection} Our crawling method selects, during each iteration, the next node from which to continue the crawling. On the one hand, we want to explore the most promising nodes, i.e. the ones with the highest estimated scores, and not waste queries on irrelevant nodes. On the other hand, we would also like to avoid remaining in one portion of the Twitter graph and missing other relevant nodes. The first objective can be understood as efficiency, the second as completeness. A common solution to this trade-off is the introduction of a randomized choice. In the equations below, the parameter $p$ (a real number between 0 and 1) parametrizes the probability distribution: the closer $p$ is to 1, the higher the probability of choosing a high-score node; on the contrary, if $p$ is close to 0, low-scored nodes have a higher probability of being chosen. This probability distribution is inspired by a multinomial distribution and a soft-max function.
For each node $i$, the probability $P_i$ of being selected is given by the non-normalized multinomial function $f_i$, which depends on the parameter $p$: \[ P_i = \dfrac{\exp\left(f_i\right)}{\sum_{j} \exp\left(f_j\right)} \] \[ \text{where: } f_i = \dfrac{x_i}{x_{\min}} \cdot p + \dfrac{x_{\max}}{x_i} \cdot (1-p) \] Using this formula, we are able to jump from one node to another one even if the score of a node is not large enough to be selected for crawling. Here, $x_i$ is the score of node $i$, $x_{\min}$ and $x_{\max}$ are the minimum and maximum scores of the crawled nodes, and $p$ is chosen arbitrarily. The probability $P_i$ gives the chance of a node being selected, based on the ratio of its $f_i$ to the sum of the $f_i$ of the other candidate nodes. In the next section we describe the process of calculating the score for the different types of nodes; the selection itself is detailed in Alg.\ref{alg:CrawlAlgorithmNS}. \subsubsection{Score Calculation} \label{sec:scoreCalculation} At the beginning of each iteration, a node \textit{$n_0$} is selected and all internal information about this node is retrieved from Twitter, including the \textit{id\_str} of its neighbors, which are then added to the queue for the next iteration. Then, the scores of all nodes are updated. For \textit{$n_0$} only the \textsc{text\_score} is available at the beginning and, as no other node has been visited before it, its \textsc{estimate\_score} is equal to 0. The final score is calculated from the various score attributes to assess the relevance of a node with respect to the initial query. The score related to the text of a tweet is defined as follows: \begin{itemize} \item \textsc{text\_score}$(t)$: is defined for a tweet node $t$ and represents the frequency of query keywords in the text body of the tweet. \subsubsection{Tweets content analysis} \label{sec:contentAnalysis} Contrary to the User and Hashtag nodes (the Hashtag nodes are merely a word or a name), a tweet is characterized by a textual content that allows us to use Natural Language Processing tools to judge its relevance to the target topic. We begin this step with a list of keywords. The analysis of the tweet consists in a lexical and semantic comparison between the keywords and the text body. This analysis begins with the lemmatization of both texts. This is a classic NLP tool that transforms words into their root form, which allows us to ignore plurality for nouns and tenses for verbs. Punctuation marks and linking words (e.g. the, and, a, of, \ldots) are removed because they usually do not convey useful semantic knowledge. Both texts are then compared lexically and semantically. The lexical comparison is done by counting the number of words the texts have in common. We note that this count is not normalized, but the 280-character limit of a tweet prevents a longer text from containing a large number of keywords. The semantic comparison is done using the WordNet database, in which words possess various relationships with each other. In particular, we utilize the hyponym relationship: the link between two words is measured from the depth of their closest common ancestor in the hyponymy graph. A keyword is considered to match a word with a semantic relation if the similarity value given by WordNet is higher than a threshold set beforehand.
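As an illustration, the following minimal Python sketch performs this lexical and semantic matching with NLTK's WordNet interface; the choice of the Wu--Palmer similarity (which is based on the depth of the least common subsumer in the hyponymy graph) and the threshold value are assumptions of the sketch, since the exact threshold is set beforehand and not reported here.
\begin{verbatim}
# Requires the NLTK 'wordnet' and 'stopwords' corpora.
from nltk.corpus import stopwords, wordnet as wn
from nltk.stem import WordNetLemmatizer

SIM_THRESHOLD = 0.8          # assumed value; set beforehand
lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words('english'))

def normalize(text):
    # Lemmatize and drop stop words and punctuation.
    tokens = (t.strip('.,;:!?#@"\'').lower() for t in text.split())
    return [lemmatizer.lemmatize(t) for t in tokens
            if t and t not in stop_words]

def semantic_match(word, keyword):
    # Wu-Palmer similarity grows with the depth of the closest
    # common ancestor of the two words in the hyponymy graph.
    return any((s.wup_similarity(k) or 0) >= SIM_THRESHOLD
               for s in wn.synsets(word)
               for k in wn.synsets(keyword))

def matched_keywords(tweet_text, keywords):
    words = normalize(tweet_text)
    return {kw for kw in keywords
            if kw in words
            or any(semantic_match(w, kw) for w in words)}
\end{verbatim}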
Finally, the score obtained from the text of a tweet is the sum of the weights of the matched keywords (matched either lexically or through a semantic relation).\\ \item \textsc{estimate\_score}: estimates the relevance of a node $n\in$ $\mathcal{V}$ based on the score of a direct predecessor node $n^\prime\in$ $\mathcal{V}$, i.e. a visited node that has a relation with $n$, where the edge $e\in$ $\mathcal{U}$ connects $n$ and $n^\prime$. \begin{itemize} \item {for a Tweet}: \textsc{tweet\_estimate\_coef}$ = [0.4, 0.6, 1.0, 1.0, 1.0, 0.5,0.5]$\\ These coefficients concern, in order, the user who posted the tweet, the users mentioned in this tweet, the original of this tweet if it replies to another one, the original of this tweet if it quotes another one, the original of this tweet if it is a retweet of another one, and the retweets of this tweet. \item {for a User}: \textsc{user\_estimate\_coef}$ = [1.0, 0.6, 0.5, 0.3]$\\ These coefficients concern, in order, the tweets posted by this user, his favorite tweets, his friends, and his followers. \end{itemize} \item \textsc{feedback\_score}: estimates the relevance of a node $n\in$ $\mathcal{V}$ based on the scores of its direct successor nodes $n^\prime\in$ $\mathcal{V}$, i.e. nodes visited after $n$, where an edge $e\in$ $\mathcal{U}$ between $n$ and $n^\prime$ represents their relation. \item \textsc{score}: this final score is computed after the crawling and feedback steps of the algorithm and is calculated from the three previous scores. \begin{itemize} \item {for a Tweet}: $Score$ = $text\_score$ + $feedback\_score$ \item {for a User}: $Score$ = $estimate\_score$ + $feedback\_score$ \item {for a Hashtag}: $Score$ = $estimate\_score$ + $feedback\_score$ + $Occurrence\_Count$ \end{itemize} \end{itemize} To obtain a node's {\sc estimate\_score}, we multiply its predecessor's {\sc score} by the corresponding coefficient. Thus, a tweet node has 4 score-related attributes whereas other node types have 3. These attributes exist regardless of the node's state. We assume at the start that we begin with some seed tweets, considered highly relevant. The precise way in which we obtain those tweets is detailed in Section \ref{sec:smartCrawling}. We evaluate the \textsc{text\_score} of these seeds using the strategy described in Section \ref{sec:contentAnalysis}, and we set their \textsc{estimate\_score} equal to their \textsc{text\_score} to allow our algorithm to run. At each iteration during the crawling, we begin by selecting a new node using the method described in Section \ref{sec:nodeSelection}. We then query Twitter to complete its information. We update the \textsc{score} of the nodes as follows: if the node is a tweet, we compute its \textsc{text\_score} and add the difference between the computed \textsc{text\_score} and its \textsc{estimate\_score} to its parent's \textsc{feedback\_score}. We then proceed to add this node's uncrawled neighbors to the graph, setting their \textsc{estimate\_score} to a fraction of the current node's score, based on the relationship they share. If the node is a User node, its score is the sum of its \textsc{estimate\_score} and \textsc{feedback\_score}. If it is a Hashtag node, in addition to the sum of its \textsc{estimate\_score} and \textsc{feedback\_score}, we count how often it appears and add this count to its score. Alg.\ref{alg:CrawlAlgorithmNS} defines the process of selecting a new node. In this procedure, the input is a list of candidate nodes; for each node in this list, the function $f$ is computed, and then its selection probability $P$; in the end, the node with the highest probability is returned.
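Alg.\ref{alg:CrawlAlgorithmNS} below gives the pseudocode; a minimal Python sketch of this computation (assuming strictly positive scores, so that the ratios in $f_i$ are well defined, and a \texttt{score} attribute on each candidate node) could be:
\begin{verbatim}
import math

def selection_probabilities(scores, p=0.7):
    # f_i and P_i as defined above; p = 0.7 as in our experiments.
    # Scores are assumed strictly positive.
    x_min, x_max = min(scores), max(scores)
    f = [x / x_min * p + x_max / x * (1 - p) for x in scores]
    z = sum(math.exp(fi) for fi in f)
    return [math.exp(fi) / z for fi in f]

def select_new_node(queue_nodes):
    # Return the candidate with the highest probability P_i,
    # as in the node selection algorithm.
    nodes = list(queue_nodes)
    probs = selection_probabilities([n.score for n in nodes])
    return max(zip(probs, nodes), key=lambda pair: pair[0])[1]
\end{verbatim}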
\begin{algorithm}[t] \DontPrintSemicolon \SetKwInput{KwInput}{Input} \SetKwInput{KwOutput}{Output} \SetKwProg{Fn}{Function}{:}{} \caption{STiC Algorithm - Node Selection} \label{alg:CrawlAlgorithmNS} \KwInput {$queue\_nodes$} \KwInput {$ p \gets 0.7$ \tcp*{probability of selecting high score node}} \KwOutput {$selected\_node$} \BlankLine { $max\_score \gets \textit{maximum score of queue nodes}$ }\\ { $min\_score \gets \textit{minimum score of queue nodes}$ }\\ \For {$node$ in $queue\_nodes$} { { ${ f[i] = calculate\_F(node.score, min\_score, max\_score, p) }$ }\\ { $P[i] = exp(f[i])/sum(exp(f[i]))$ }\\ } { \textbf{return} {$node\_with\_max\_P[i]$} }\\ \end{algorithm} \section{Experiments and Evaluation} \label{sec:experiments} The STiC algorithm is implemented in Python; we used tweepy\footnote{http://www.tweepy.org/} v3.5 to access the Twitter API, and neo4j-driver\footnote{https://neo4j.com/developer/python/} v1.0.2 and neo4jrestclient\footnote{https://pypi.org/project/neo4jrestclient/} v2.1.1 to communicate with Neo4j. For enriching the list of keywords we used different APIs, all accessed from Python\footnote{https://www.python.org/download/releases/3.4.0/} 3.4; in some cases we needed to create a new client library, while for others we used predefined ones. In order to evaluate how successful STiC is at increasing the precision of the retrieved tweets, we ran the experiments with a maximum of 100 crawling iterations and a maximum timeout of 720 seconds. The relevance threshold for keywords, $\alpha$, was set to 0.5 and the parameter for selecting high-score nodes, $p$, to 0.7; these values were chosen after observing a few iterations of crawling. We ran the model on each query separately and stored the results in order to compare them through statistics and manual checks. We selected four original queries from four different categories, including a proper noun: \textit{obama}, a general word: \textit{bank}, a concept: \textit{energy}, and a recent trend: \textit{birlinggap}. The reason for choosing these keywords is to cover different categories of queries, to evaluate the system with different inputs, and to decrease the bias toward a specific part of tweets or users. \textit{obama} refers to the former president of the United States, who has a huge number of followers, hashtags, mentions and tweets, which makes this query a very good option to start the crawling. \textit{bank} and \textit{energy} are very general and have a good number of relations and hashtags on Twitter; there are also many users with a significant number of related tweets, so we have a good chance of crawling a large enough subset of the crawling space.
\textit{birlinggap} was one of the recent trends at the time of the experiments, which gives us the chance to easily check the results manually. Fig.\ref{fig:birlinggap-new} shows the retrieved nodes for this query using STiC, after storing them in the database. Red nodes represent tweets, blue nodes show the hashtags found to be related to the query, and purple nodes indicate users crawled during the process. The edge labels define the type of relation between nodes. \begin{figure}[t] \includegraphics[width=0.85\linewidth, height=75mm]{birlinggap-new.png} \caption{Retrieved nodes for query \textit{birlinggap} using STiC} \label{fig:birlinggap-new} \end{figure} Figure \ref{fig:obama-nodes} and Figure \ref{fig:energy-nodes} show the number of nodes of each type for the queries \textit{obama} and \textit{energy} using STiC, respectively, and compare them with the results of a simple BFS API call for the same query. STiC used more API calls and found more related tweets. It decreased the variety of relations between nodes and brought in more Hashtag and User nodes compared to the simple BFS method.
\begin{figure}[t] \includegraphics[width=\linewidth]{Nodes-obama.png} \caption{Number of different nodes for query 'obama'} \label{fig:obama-nodes} \end{figure} \begin{figure}[t] \includegraphics[width=\linewidth]{Nodes-energy.png} \caption{Number of different nodes for query 'energy'} \label{fig:energy-nodes} \end{figure} Figure \ref{fig:bank-nodes} gives the comparison of the different nodes retrieved by STiC and by simple BFS for the query \textit{bank}. STiC gives more Tweet nodes and more User nodes, but the number of Hashtag nodes is smaller than with the simple method. The reason is that STiC builds a connected graph by crawling more users and tweets rather than jumping from one node to another without any relation. \begin{figure}[t] \includegraphics[width=\linewidth]{Nodes-bank.png} \caption{Number of different nodes for query 'bank'} \label{fig:bank-nodes} \end{figure} Figure \ref{fig:birlinggap-nodes} shows that STiC found the same number of tweets for the \textit{birlinggap} query while using more API calls and increasing the number of relationships between nodes. This increase is explained by comparing the numbers of Hashtags and Users: since STiC found more such nodes than simple BFS, these nodes brought more edges than before. \begin{figure}[t] \includegraphics[width=\linewidth]{Nodes-birlinggap.png} \caption{Number of different nodes for query 'birlinggap'} \label{fig:birlinggap-nodes} \end{figure} For the final evaluation, we calculated the precision of both STiC and the simple BFS API call to clearly quantify the improvement. In Tables \ref{table:Measures-obama}, \ref{table:Measures-birlinggap}, \ref{table:Measures-energy} and \ref{table:Measures-bank} we report the precision of the simple BFS API call model and of the STiC model for each topic.
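Precision here is the fraction of retrieved tweets that are judged relevant; as a quick sanity check, the percentages reported below can be reproduced directly from the raw counts in the tables:
\begin{verbatim}
# (retrieved relevant, retrieved) counts from the precision tables.
results = {
    ('obama',      'STiC'): (71, 87),  ('obama',      'BFS'): (16, 25),
    ('birlinggap', 'STiC'): (15, 21),  ('birlinggap', 'BFS'): (8, 22),
    ('energy',     'STiC'): (89, 113), ('energy',     'BFS'): (16, 29),
    ('bank',       'STiC'): (96, 111), ('bank',       'BFS'): (24, 31),
}
for (query, model), (rel, ret) in results.items():
    print("{:10s} {:4s} precision = {:5.2f}%".format(
        query, model, 100.0 * rel / ret))
\end{verbatim}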
The STiC model shows a significant improvement in finding related tweets, especially for the \textit{birlinggap} query, for which simple BFS could not find any Hashtag or User nodes, which are very important for building relations and making connections between nodes. \begin{center} \captionof{table}{Precision result comparison for 'obama'} \label{table:Measures-obama} \begin{tabu}to 0.95\linewidth{ | X[l] |X[c] |X[c] |X[C]|} \hline \textsc{} & Retrieved Relevant Tweet & Retrieved Tweet & Precision\\ \hline \textit{STiC} & 71 & 87 & 81.6\% \\ \hline \textit{Simple BFS} & 16 & 25 & 64\% \\ \hline \end{tabu} \end{center} \begin{center} \captionof{table}{Precision result comparison for 'birlinggap'} \label{table:Measures-birlinggap} \begin{tabu}to 0.95\linewidth{ | X[l] |X[c] |X[c] |X[C]|} \hline \textsc{} & Retrieved Relevant Tweet & Retrieved Tweet & Precision\\ \hline \textit{STiC} & 15 & 21 & 71.43\% \\ \hline \textit{Simple BFS} & 8 & 22 & 36.36\% \\ \hline \end{tabu} \end{center} \begin{center} \captionof{table}{Precision result comparison for 'energy'} \label{table:Measures-energy} \begin{tabu}to 0.95\linewidth{ | X[l] |X[c] |X[c] |X[C]|} \hline \textsc{} & Retrieved Relevant Tweet & Retrieved Tweet & Precision\\ \hline \textit{STiC} & 89 & 113 & 78.76\% \\ \hline \textit{Simple BFS} & 16 & 29 & 55.17\% \\ \hline \end{tabu} \end{center} \begin{center} \captionof{table}{Precision result comparison for 'bank'} \label{table:Measures-bank} \begin{tabu}to 0.95\linewidth{ | X[l] |X[c] |X[c] |X[C]|} \hline \textsc{} & Retrieved Relevant Tweet & Retrieved Tweet & Precision\\ \hline \textit{STiC} & 96 & 111 & 86.49\% \\ \hline \textit{Simple BFS} & 24 & 31 & 77.42\% \\ \hline \end{tabu} \end{center} Based on the results shown in the tables and figures above, we observe a significant improvement in terms of precision, number of retrieved tweets and diversity of node types when using the STiC algorithm. The method performs well at crawling Twitter and finding tweets related to a given query. Compared to a simple BFS API call, STiC retrieves more related tweets, while finding more Hashtags and Users and extending the list of nodes during the crawl.
STiC uses more API calls since it can find stronger relations between visited nodes and uncrawled ones. For some queries, such as the proper noun \textit{birlinggap}, for which there is only a small set of related words, the simple BFS API call cannot reach a well-connected graph and most of the nodes are not connected to each other, while STiC can build a graph with more edges between the nodes. For the other queries, STiC builds a connected graph with more nodes and less diversity in the number of relationship types. By comparing the results of the STiC model across all queries, we observed that queries having more related keywords also show a greater number of related nodes than queries having a smaller set of keywords. \section{Conclusion and perspective} \label{sec:conclusion} In this paper, we aimed at developing a system for crawling tweets relevant to a given topic using a keyword query. We considered two aspects of the problem: the keyword-set enrichment and the crawling of relevant tweets using the Twitter APIs.
First we focused on enriching queries: we used different external APIs (WordsAPI, Datamuse, Thesaurus, DBPedia, Wordnik, PyDictionary, Wordnet and the New York Times API) to identify related keywords. We calculated a TF-IDF score for these keywords and removed the ones scoring below a threshold. Our claim is that we retrieve more related tweets when we use more related keywords for a given topic. In the second step we defined a crawling algorithm and a scoring system in which each tweet is annotated with a score. Our crawling algorithm takes advantage of the textual content of tweets, using Natural Language Processing tools to deduce the relevance of each node. Overall, we obtain very satisfying results on well-known topics, as a large number of the retrieved tweets are related to the topic, and the number of tweets retrieved when running the model for a short period of time seems sufficient. Twitter is dynamic, as several thousands of tweets are posted each second, and the Twitter graph is in constant evolution; however, the solution we developed seems to be resilient to these changes. This work opens the door for further interactions between various data sources. We could also consider taking advantage of more than just the concepts from the APIs (e.g. the content of the articles). We would also have liked to test this process over a larger number of iterations, but we were limited by the manual aspect of our evaluation method. For future work, we plan to improve the performance of finding related new tweets by using machine learning algorithms: we will try to build a supervised learning system that classifies new tweets, using the collected tweets as a training set. In this way, the system can be reused for queries that overlap with the training set and the sample queries. To be more precise and to significantly improve the pruning of unrelated tweets, we could use the notion of popular and most influential users to improve the scoring system by weighting users as well. Another idea for smart tweet crawling is to follow the URLs provided in tweets and to apply NLP methods to the text of the linked Web pages, besides considering their metadata, in order to create a more relevant list of keywords and hashtags for crawling new tweets. \section{Acknowledgment} \label{sec:acknowledgement} Our contributions consist of (i) the usage of several data sources in order to enrich a keyword query; (ii) the definition of a Smart Crawling algorithm as the main method to access the tweets related to an original query, using the enriched keyword query and the REST Twitter APIs. We would like to thank V. Chetyrkine, C. Hamelain, X. Li for their work in the development of the tool which was the base model for STiC and Benoit Grotz and Silviu Maniu for many helpful discussions. \bibliographystyle{ACM-Reference-Format}
Reaching relevant answers requires multiple calls and filtering the retrieved results. There are several research works that have as objective searching relevant tweets according to a given query~\cite{gabielkov2014sampling,gouriten2014scalable,li2013towards,safran2012improving}. Researchers tried to improve the ways based on features such as hashtags, retweets and mentions to retrieve the most relevant tweets from Twitter. For example, one basic and simple method is if a query just contains one word, they find the frequency of this word in different features of Twitter\cite{nazi2015walk}. Researchers also tried to use external information extracted from sources such as Wikipedia to enrich a query by finding the most similar words to find the most related results to the given query \cite{guisado2016enrich}. {In this paper, we exploited different external resources such as \textit{Wordnet}, \textit{Wordnik}, \textit{DBPedia}, and \textit{New York Times(NYT)} to have the most complete set of keywords similar to the original query keywords. We also defined a smart crawling algorithm based on the tweet's features to reach the most relevant ones as early as possible and going beyond Twitter limitations. More precisely, we define a new crawling algorithm called \textbf{Smart Twitter Crawling} (STiC) by considering two different aspects: (i) Enriching the original query using external sources, (ii) crawling Twitter graph using a DFS search based on new scoring to reach the best nodes (tweets) related to the targeted topic. To measure the relevance of a tweet, the algorithm assigns a score to a tweet by taking into account its content, its hashtags and the user who has relation with it, such as posting, replying, being mention or retweeting. Given this score we are able to select highly related tweets in each iteration and continue by adding the relate valuable tweets. Different experiments have been achieved on tweets collected for different kind of query keywords to evaluate the precision of STiC algorithm. Thanks to our approach, compared to a simple BFS search, we increased the precision of related retrieved tweets up to 86\% for some queries. Also in case of number of retrieved results, we got significant improvement in compare with simple BFS model. In this paper, we first present related work in Section~\ref{sec:relatedwork}. Then, we present in detail our smart Twitter crawling approach including query enrichment and Twitter graph crawling in Section~\ref{sec:approach}. In section~\ref{sec:experiments}, we present and discuss our results using different queries. Finally,in Section~\ref{sec:conclusion} we give our conclusions and perspectives. \section{Related Work} \label{sec:relatedwork} Even before the emerging of social media, crawling the Web pages has been a common practice~\cite{li2013towards}. Finding the Web pages related to one topic was one of the interesting approaches to study~\cite{safran2012improving}. The common applied methodology, used neural network and vector space models to compute the priority models\cite{safran2012improving}. Deligenti\cite{diligenti2000focused} in 2000, introduced a model for focused crawling based on context graph by assigning appropriate credits to documents. Also Safran and et al.,\cite{safran2012improving} at 2012 proposed a new approach to improve relevance prediction in focused Web crawlers. 
They chose Na\"{\i}ve Bayesian as the base prediction model and they used four relevant attributes to create their prediction model: URL, anchor text, surrounding texts, and parent pages. They extended the list of keywords related to a topic by using WordNet and extracted relevant initial seed URLs automatically by choosing the top k-URLs retrieved from Google, Yahoo and MSN search engines. Gouriten \cite{gouriten2014scalable}, in 2014 introduced an adaptive, scalable and generic system for focused crawling to identify and optimized the relevant subsystems. Their algorithm was defined for focused Web crawling, topic-centered Twitter user crawl, deep Web siphoning through a keyword search, gossip peer-to-peer search and real-world social network to answer a query. Xinyue Wang and et al, in 2015\cite{wang2015adaptive} studied about finding a solution for crawling Microblog feeds in real time. They proposed an adaptive crawling model which extracts the hashtags from Twitter iteratively to achieve a list of relevant tweets to a query. Cha and et al, \cite{cha2010measuring} have worked on how to find most influential users in Twitter and his results could be useful when it be used to complete the idea for topic-focused crawling. In 2010, Tianyi and at al., \cite{wang2010unbiased} proposed a method to unbiased crawling the Tweets based on Metropolis-Hasting Random Walk(MHRW) using USDSG in the new method. Rui and et al., in 2013 \cite{li2013towards} proposed a data platform to automatically monitor “target” tweets from the Twitter stream for any specific topic. They designed Automatic Topic-focused Monitor (ATM), which first samples tweets from the Twitter stream and second selects a list of keywords to track based on the samples. GabielKov and et al.in 2014\cite{gabielkov2014sampling}, were working on sampling techniques for studying OSN. They have two scenarios for sampling and they want to find the best technique for each of them: first, they are looking for most popular users; the second one is that they have an aim to obtain unbiased sample of users. They showed that the classical sampling methods are highly biased by high degree nodes. \cite{gabielkov2014studying} In \cite{kwak2010twitter}, they proved that BFS will have a large bias when the number of requests to the API is limited. In RW, choosing the next node for visiting, depends on the degree of the node. They used USDSG (Unbiased sampling for directed social graphs) algorithm, proposed in \cite{wang2010unbiased}, which is a modification of RW and discards a random jump to a node with a probability proportional to the degree of the node and replace arcs with undirected edges.\newline Selecting keywords to retrieve relevant documents have been studied in lots of academic researches. As we mentioned earlier, Safran\cite{safran2012improving} at 2012 used WordNet to extend the extracted word set. Rui and co in 2013 \cite{li2013towards} proposed ATM Framework to select keywords in a constrained optimization approach, which finds near optimal keywords with guarantee (e.g., keywords are not too specific) and considers two types of costs. Also, ATM updates keywords in iterations which monitor the Twitter stream continuously. In 2015, Xinyue Wang and et al.,\cite{wang2015adaptive} reviewed the retrieved tweets to identify new keywords for automatically live event tweet collection, these new keywords were mostly based on the hashtags which was embedded inside the tweet. Gusiado and et al. 
in 2016 \cite{guisado2016enrich} presented a query rewriting cloud service. Their aim is solving the problem of vocabulary mismatch and topic inexperience of users. So, they proposed a method which offers a generic solution by analyzing the websites using Wikipedia and identifying the entities called ENRICH. \\ \section{Smart Twitter Crawling Approach} \label{sec:approach} Monitoring the set of tweets related to a target topic is an unsolved problem~\cite{congosto2017t}. In this section we present the Smart Twitter Crawling (STiC) approach we defined as a solution to this problem. The figure \ref{fig:architecture} describes the overall of our approach. STiC algorithm enriches initial keywords using external sources to query Twitter graph. It builds an initial sub-graph providing related seeds. The crawling is then based on a DFS search and exploits each considered tweet's features to assign a score and to select the most relevant tweets to be crawled. The results of the crawl will be a sub-graph made by different crawled nodes and the edges between them. This sub-graph will be stores in the a graph database, which is Neo4j in our work. Before going any further into details, we first present the input and output data representation of STiC algorithm. \begin{figure} \includegraphics[width=\linewidth, height=70mm]{Architecture.png} \caption{Architecture of our approach} \label{fig:architecture} \end{figure} \subsection{Input and Output of STiC Algorithm} \label{sec:representation} Twitter data can be represented as a graph $\mathcal{T} = <\mathcal{V} , \mathcal{U}> $ where $\mathcal{V}$ is the set of nodes and $\mathcal{U}$ is the set of directed edges between nodes. Different types of nodes are defined in $\mathcal{V}$: \begin{itemize} \item $t$ is a \emph{tweet}, accompanied by attribute values, which include the text of the tweet and its identifier. \item $h$ is a \emph{hashtag} extracted from the tweet. \item $u$ is a \emph{user} accompanied by its identifier value. \end{itemize} Different types of relations are defined in $\mathcal{U}$: \begin{itemize} \item \textbf{$<t,h>$} edge called $Has\_Hashtag$ which relates a tweet $t$ to a hashtag $h$ it contains. \item \textbf{$<t,t^\prime>$} edges called: \begin{itemize} \item $Quotes$ which relates a tweet $t$ to a tweet $t^\prime$. In this case, the text of node $t$ contains the text of $t^\prime$ in addition to its own text. \item $Replies\_To$ which relates a reply tweet $t$ to the origin tweet $t^\prime$. \item $ReTweets$ which relates a retweet $t$ to the origin tweet $t^\prime$. In this case, the text of $t$ is exactly the same as text of tweet $t^\prime$. \end{itemize} \item \textbf{$<t,u>$} called $Mentions$ which relates a tweet $t$ to the user $u$ mentioned in it. \item $\textbf{$<u,t>$}$ edges called: \begin{itemize} \item $Favorites$ which relates a user $u$ to a tweet $t$ which means $u$ likes $t$. \item $Posts$ which relates a user $u$ to a tweet $t$ which means $u$ posted $t$. \end{itemize} \end{itemize} \begin{itemize} \item $\textbf{$<u,u\prime>$}$ edge called $Follows$ which relates a user $u$ to a user $u\prime$ he follows. \end{itemize} The input of STiC algorithm is the list of keywords after enrichment and an initial sub-graph of Twitter in which nodes has no any additional information than what is available from Twitter API. The output of the algorithm is a sub-graph, in which each node is accompanied with a score value. 
STiC algorithm defines 3 main procedures: \begin{itemize} \item \textbf{Query enrichment procedure:} First step of algorithm to enrich list of keywords for the given query. \item \textbf{Select new node procedure:} The procedure for selecting next node for being visited in crawling process. \item \textbf{Smart crawling procedure:} The main process of algorithm to use the enriched list of keywords and node selection process for visiting new nodes and crawl Twitter. \end{itemize} \subsection{Query enrichment} \label{sec:queryEnrichment} The REST Twitter APIs offer the possibility to retrieve tweets using a set of keywords. When a user tries to retrieve tweets he is not always conscious of the best set of keywords to use in order to obtain the correct subset of tweets. In our research we use the external sources to enrich the set of keywords that the user specifies in the target query. Alg.\ref{alg:CrawlAlgorithmQE} expresses the process of enriching a query. In this procedure, we collect all related words from different data sources, such as NYT~\cite{zhao2011comparing}, Wikipedia~\cite{guisado2016enrich} and WordNet~\cite{safran2012improving} APIs. We identified a list of APIs that provide as source of information news, articles or synonyms and we identified the following ones: \textit{New York Times(NYT)}, \textit{Wordnet}, \textit{Wordnik}, \textit{DBPedia}, \textit{Thesaurus}, \textit{Datamuse}, \textit{WordsAPI}, \textit{Pydictionary}. We also give a weight to each keyword to specify the relevance of the keyword to the original query keywords. For assigning this weight, we consider the subset of keyword given by each external source as a separated set and we calculate the IDF of each keyword. For this aim, each external source is considered as a document and then we calculated term frequency as the number of occurrence of the word in all documents. Then we assign the weight of each word as term frequency in all documents multiply by its IDF score. For instance for \textit{obama} original query keyword we retrieve the following terms and their frequency: ({\textit{barack obama}: 4, \textit{barack hussein obama}: 3, \textit{barack}: 3, \textit{obama}: 3, \textit{community organizer in chief}: 2, \textit{barak obama}: 2, etc.}) \\ Then the weight (total term frequency*IDF score) for each term is computed: \textit{barack obama}: 0.96, \textit{barack hussein obama}: 0.89, \textit{community organizer in chief}: 0.78, \textit{barak obama}: 0.78, etc.})\\ Finally we select the top score keywords: (\textit{barack obama}, \textit{barack hussein obama}, \textit{community organizer in chief}, \textit{barak obama}, \textit{barrack obama}, \textit{president obama}, \textit{barackobama}, \textit{brack obama}, \textit{obama barack}, etc.) We finally merge the all keywords extracted from all APIs with their calculated weights and we sort them based on their weights and on a threshold $\alpha$. The key factor for selecting $\alpha$ is that the keywords with the score above the threshold should not be irrelevant to the query. This threshold may vary and depends on the type of the query and results of the query enrichment. 
The algorithm \ref{alg:CrawlAlgorithmQE} describes the function \textit{mostRelatedKeywords} which returns the \begin{comment} \begin{algorithm} \caption{STiC Algorithm - Query Enrichment} \label{alg:CrawlAlgorithmQE} \hspace*{\algorithmicindent} \textbf{Input} $external\_sources\_list$ as \textit{New York Times(NYT)}, \textit{Wordnet}, \textit{Wordnik},\textit{DBPedia}, \textit{Thesaurus}, \textit{Datamuse}, \textit{WordsAPI}, \textit{Pydictionary} \\ $\alpha$, \textit{keyword relevance threshould}, $query$\\ \hspace*{\algorithmicindent} \textbf{Output} $keywords\_list$ \begin{algorithmic}[1] \For {$source$ in $external\_sources\_list$} \State $keywords\_list.add(related\_words(source, query))$ \EndFor \State $calculateTFIDFScore(keywords\_list)$ \State \Return {$mostRelatedKeywords(keywords\_list,\alpha)$} \end{algorithmic} \end{algorithm} \end{comment} \begin{algorithm}[t] \DontPrintSemicolon \SetKwInput{KwInput}{Input} \SetKwInput{KwOutput}{Output} \SetKwProg{Fn}{Function}{:}{} \caption{STiC Algorithm - Query Enrichment} \label{alg:CrawlAlgorithmQE} \KwInput {$external\_sources\_list$ \tcp*{\textit{New York Times(NYT)}, \textit{Wordnet}, \textit{Wordnik},\textit{DBPedia}, \textit{Thesaurus}, \textit{Datamuse}, \textit{WordsAPI}, \textit{Pydictionary}}} \KwInput {$\alpha$ \tcp*{\textit{keyword relevance threshold}}} \KwInput {$query$} \KwOutput{$keywords\_list$} \BlankLine \For {$source$ in $external\_sources\_list$} { {$keywords\_list.add(related\_words(source, query))$} } {$calculateTFIDFScore(keywords\_list)$}\\ {\textbf{return} {$mostRelatedKeywords(keywords\_list,\alpha)$}} \end{algorithm} \subsection{Smart Crawling} \label{sec:smartCrawling} In the Alg.\ref{alg:CrawlAlgorithmSC}, the list of keywords from the Alg.\ref{alg:CrawlAlgorithmQE}, the initial node and number of iterations, will be given to the procedure and in first iteration, a sub-graph of nodes from neighbors of initial node will be created and scores of nodes will be updated. The initialization of the graph is crucial for the success of the following iterations of the algorithm: a bad initialization can lead to a graph where there is any node is related to query. In the beginning, we chose manually a single node we knew was relevant (e.g. we can take the official hashtag if it is in the set of keywords specified by the user). While quite effective, this selection cannot be transparently done since it needs manual selection for each different query and there is chance of leading the crawling to a specific part of data. We initialize the graph automatically with the result of a simple Twitter API search using the enriched keywords. This preliminarily results allow for the first round of the crawl. Then, as number of iterations, the crawler will visit the selected node from Alg.\ref{alg:CrawlAlgorithmNS}, which is explained in \ref{sec:nodeSelection} and it will add its neighbors to the list of candidate nodes, which will be given to the Alg.\ref{alg:CrawlAlgorithmNS} in the next iteration. Then, before going for the next iteration, we need to update the scores of nodes which is explained in \ref{sec:scoreCalculation}. In the end, a sub-graph of Twitter will be returned which has been created by crawling nodes based on the defined score for each one. 
\begin{comment} \begin{algorithm} \caption{STiC Algorithm - Smart Crawling} \label{alg:CrawlAlgorithmSC} \hspace*{\algorithmicindent} \textbf{Input}: keywords\_list, iterations, initial\_relevant\_node \\ \hspace*{\algorithmicindent} \textbf{Output}:$visited\_nodes$ \begin{algorithmic}[1] \State $i \gets \textit{iterations}$ \State ${ n_0 \gets \textit{initial\_relevant\_node} }$ \State $current\_node \gets n_0$ \State $update\_queue\_nodes(current\_node\_neighbors)$ \State $update\_nodes\_scores()$ \State $add\_to\_visited\_list(current\_node)$ \State $\textit{i} \gets \textit{i \- 1}$ \While {i $>$ 0}: \State $current\_node \gets \textit{selectNewNode(queue\_nodes)}$ \While{$is\_visited(current\_node)$} \State $current\_node \gets \textit{selectNewNode(queue\_nodes)}$ \EndWhile \State $update\_queue\_nodes(current\_node\_neighbors)$ \State $update\_nodes\_scores()$ \State $add\_to\_visited\_list(current\_node)$ \State $\textit{i} \gets \textit{i \- 1}$ \EndWhile \Return \textit{visited\_list} \end{algorithmic} \end{algorithm} \end{comment} \begin{algorithm}[t] \DontPrintSemicolon \SetKwInput{KwInput}{Input} \SetKwInput{KwOutput}{Output} \SetKwProg{Fn}{Function}{:}{} \caption{STiC Algorithm - Smart Crawling} \label{alg:CrawlAlgorithmSC} \KwInput {$keywords\_list$} \KwInput {$iterations$} \KwInput {$initial\_relevant\_node$} \KwOutput{$visited\_nodes$ } \BlankLine { $i \gets \textit{iterations}$ }\\ { ${ n_0 \gets \textit{initial\_relevant\_node} }$ }\\ { $current\_node \gets n_0$ }\\ { $update\_queue\_nodes(current\_node\_neighbors)$ }\\ { $update\_nodes\_scores()$ }\\ { $add\_to\_visited\_list(current\_node)$ }\\ { $\textit{i} \gets \textit{i - 1}$ }\\ \While {i $>$ 0} { { $current\_node \gets \textit{selectNewNode(queue\_nodes)}$ }\\ \While{$is\_visited(current\_node)$} { { $current\_node \gets \textit{selectNewNode(queue\_nodes)}$ }\\ } { $update\_queue\_nodes(current\_node\_neighbors)$ }\\ { $update\_nodes\_scores()$ }\\ { $add\_to\_visited\_list(current\_node)$ }\\ { $\textit{i} \gets \textit{i - 1}$ }\\ } \textbf{return} $\textit{visited\_list}$ \end{algorithm} \subsection{Node Selection} \label{sec:nodeSelection} Our crawling method selects the next node from which to continue the crawling during each iteration. On the one hand we want to explore the most promising nodes, i.e. the ones with the highest estimate scores, and not waste queries on irrelevant nodes. On the other hand, we would also like to avoid remaining in a portion of the Twitter graph and miss other relevant nodes. The first objective can be understood as efficiency whereas the second as completeness. A common solution to this trade off is the introduction of random based choice. In the equations below, the $p$ (a real number between 0 and 1) parametrizes the probability distribution. The closer $p$ is to 1, the higher the probability to chose a high score is. On the contrary, if $p$ is close to 0, the low scored nodes will have a higher probability of being chosen. This probability distribution is inspired by a multinomial distribution and a soft-max function. 
For each node $i$, the probability to be selected $P_i$ is given by the non-normalized multinomial function $f_i$, and depending of the parameter $p$: \[ P_i = \dfrac{ \exp \left( f_i \right)}{\underset{i}\sum \exp \left( f_i \right)} \] \[ \text{where: }f_i = \dfrac{x_i }{x_{\min}} \cdot p + \dfrac{x_{\max}}{x_i} \cdot (1-p) \] Using this formula, we are able to jump from one node to another one if the score of the node in not large enough to be selected for crawling. In this case, we use the minimum and maximum scores of the crawled nodes and choose $p$ arbitrary, to define the function $f_i$ for each node. The probability of $P_i$ shows the chance of a node to be selected based on the fraction of its $f_i$ to sum of $f_i$ for other crawled nodes. In next section we are going to describe the process of calculating the score for different types of nodes. The algorithm \ref{alg:CrawlAlgorithmNS} \subsubsection{Score Calculation} \label{sec:scoreCalculation} At the beginning of each iteration, a node, \textit{$n_0$}, is selected and all internal information about this node is retrieved from Twitter which includes the \textit{id\_str} of its neighbors, who are then added to the queue for the next iteration. Then, the scores of all nodes are updated. For \textit{$n_0$} only \textsc{text\_score} is available at the beginning and as there is no other node which has been visited before that, its \textsc{estimate\_score} is equal to 0. The final score is calculated according to the various score attributes to find the relevance of a node according to the initial query. The score related to a text of a tweet is defined as follows: \begin{itemize} \item \textsc{text\_score}$(t)$: is defined for a tweet node $t$ and is represent the frequency of query keywords in the text body of the tweet. \subsubsection{Tweets content analysis} \label{sec:contentAnalysis} Contrary to the User and Hashtag nodes (the Hashtag nodes are merely a word or a name), a tweet is characterized by a textual content that allows us to use Natural Language Processing tools to judge their relevance to the target topic. We begin this step with a list of keywords. The analysis of the tweet consists in a lexical and semantic comparison between the keywords and the text body. This analysis begins with the lemmatization of both texts. This is a classic NLP tool that transforms the words into their root form. This allows us to ignores plurality for nouns and tenses for verbs. Punctuation marks and linking words (e.g. the, and, a, of . . . ) are removed because they usually do not convey useful semantic knowledge. Both texts are then compared both lexically and semantically. The lexical comparison is done by counting the number of words the texts have in common. We note that this count is not normalized, but the limit of 280 characters of a tweet prevents the possibility of a longer text that contains a lot of keywords. The semantic comparison is done using the Word Net database. In this database, words possess various relationships with each other. In particular, we utilize the hyponym relationship: the link between two words is be measured as the depth of the closest common ancestor in the hyponymy graph. A keyword is considered to match a word with a semantical relation if the similarity value given by Word Net is higher than a threshold set beforehand. 
At last, the score from the text of a tweet is the sum of the weights of the keywords matched (either by semantic relation or lexically).\\
\item \textsc{estimate\_score}: estimates the relevance of a node $n\in \mathcal{V}$ based on the score of a direct predecessor node $n^\prime\in \mathcal{V}$, i.e. a visited node that has a relation with $n$, the edge $e\in \mathcal{U}$ connecting $n$ and $n^\prime$.
\begin{itemize}
\item {for a Tweet}: \textsc{tweet\_estimate\_coef}$ = [0.4, 0.6, 1.0, 1.0, 1.0, 0.5, 0.5]$\\
These coefficients concern, in order, the user who posts the tweet, the users mentioned in this tweet, the original tweet if this one replies to another, the original tweet if this one quotes another, the original tweet if this one is a retweet of another, and the retweets of this tweet.
\item {for a User}: \textsc{user\_estimate\_coef}$ = [1.0, 0.6, 0.5, 0.3]$\\
These coefficients concern, in order, the tweets posted by this user, his favorite tweets, his friends, and his followers.
\end{itemize}
\item \textsc{feedback\_score}: estimates the relevance of a node $n\in \mathcal{V}$ based on the scores of direct successor nodes $n^\prime\in \mathcal{V}$, i.e. nodes visited after $n$ for which an edge $e\in \mathcal{U}$ between $n$ and $n^\prime$ records the relation.
\item \textsc{score}: this final score is computed after the crawling and feedback steps of the algorithm and is calculated from the three previous scores.
\begin{itemize}
\item {for a Tweet}: $Score$ = $text\_score$ + $feedback\_score$
\item {for a User}: $Score$ = $estimate\_score$ + $feedback\_score$
\item {for a Hashtag}: $Score$ = $estimate\_score$ + $feedback\_score$ + $Occurrence\_Count$
\end{itemize}
\end{itemize}
To obtain a node's {\sc estimate\_score}, we multiply its predecessor's {\sc score} by the corresponding coefficient. Thus, a tweet node has 4 score-related attributes whereas other node types have 3. These attributes exist regardless of the node's state. We assume that we begin with some seed tweets, considered highly relevant; the precise way in which we obtain those tweets is detailed in Section~\ref{sec:smartCrawling}. We evaluate the \textsc{text\_score} of these seeds using the strategy described in Section~\ref{sec:contentAnalysis}, and we set their \textsc{estimate\_score} equal to their \textsc{text\_score} to bootstrap the algorithm. At each iteration during the crawling, we begin by selecting a new node using the method described in Section~\ref{sec:nodeSelection}. We then query Twitter to complete its information and update the \textsc{score} of the nodes as follows: if it is a tweet, we compute its \textsc{text\_score} and add the difference between the calculated \textsc{text\_score} and its \textsc{estimate\_score} to its parent's \textsc{feedback\_score}. We then proceed to add this node's uncrawled neighbors to the graph, setting their \textsc{estimate\_score} as a fraction of the current node's score, based on the relationship they share. If it is a User node, the score is the sum of its \textsc{estimate\_score} and \textsc{feedback\_score}. If it is a Hashtag node, in addition to the sum of its \textsc{estimate\_score} and \textsc{feedback\_score}, we count how often it appears and add this count to its score. Alg.\ref{alg:CrawlAlgorithmNS} defines the process of selecting a new node: the input is a list of candidate nodes, and for each node in this list the value $f_i$ is computed, followed by its selection probability $P_i$; in the end, the node with the highest probability is returned.
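To make the selection step concrete, a minimal Python sketch following the formulas for $f_i$ and $P_i$ is given below; the names are ours, and the sketch assumes strictly positive scores, since $f_i$ divides by $x_{\min}$ and by $x_i$:
\begin{verbatim}
# Illustrative sketch of selectNewNode; assumes node scores are
# strictly positive, since f_i divides by x_min and by x_i.
import math

def select_new_node(queue_nodes, p=0.7):
    scores = [node.score for node in queue_nodes]
    x_min, x_max = min(scores), max(scores)
    # f_i = (x_i / x_min) * p + (x_max / x_i) * (1 - p)
    f = [(x / x_min) * p + (x_max / x) * (1 - p) for x in scores]
    z = sum(math.exp(v) for v in f)        # soft-max normalisation
    P = [math.exp(v) / z for v in f]
    # as in the node selection algorithm, keep the most probable node
    return queue_nodes[max(range(len(P)), key=P.__getitem__)]
\end{verbatim}
Sampling from $P$ (e.g. with \texttt{random.choices}) instead of taking the maximum would realise the random jumps discussed above.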
\begin{algorithm}[t]
\DontPrintSemicolon
\SetKwInput{KwInput}{Input}
\SetKwInput{KwOutput}{Output}
\SetKwProg{Fn}{Function}{:}{}
\caption{STiC Algorithm - Node Selection}
\label{alg:CrawlAlgorithmNS}
\KwInput {$queue\_nodes$}
\KwInput {$ p \gets 0.7$ \tcp*{probability of selecting high score node}}
\KwOutput {$selected\_node$}
\BlankLine
{ $max\_score \gets \textit{maximum score of queue nodes}$ }\\
{ $min\_score \gets \textit{minimum score of queue nodes}$ }\\
\For {$node$ in $queue\_nodes$}
{
{ ${ f[i] = calculate\_F(node.score, min\_score, max\_score, p) }$ }\\
{ $P[i] = exp(f[i])/sum(exp(f[i]))$ }\\
}
{ \textbf{return} {$node\_with\_max\_P[i]$} }\\
\end{algorithm}
\section{Experiments and Evaluation}
\label{sec:experiments}
The STiC algorithm is implemented in Python. We used tweepy\footnote{http://www.tweepy.org/} v3.5 to access the Twitter API, and neo4j-driver\footnote{https://neo4j.com/developer/python/} v1.0.2 and neo4jrestclient\footnote{https://pypi.org/project/neo4jrestclient/} v2.1.1 to communicate with Neo4j. For enriching the list of keywords we used different APIs, all accessed from Python\footnote{https://www.python.org/download/releases/3.4.0/} 3.4; in some cases we needed to create a new library, while for others we used predefined libraries. Our aim was to increase the precision of the retrieved tweets. In order to evaluate how successful STiC is, we ran the experiments with a maximum of 100 crawling iterations and a maximum timeout of 720 seconds. The relevance threshold for keywords, $\alpha$, is set to 0.5 and the threshold for selecting a high-score node, $p$, is 0.7. These values were chosen after observing a few iterations of crawling. We ran the model on each query separately and stored the results in order to compare them through statistics and manual checks. We selected four original queries from four different categories: proper nouns (\textit{obama}), general words (\textit{bank}), concepts (\textit{energy}) and recent trends (\textit{birlinggap}). These keywords cover different categories of queries, which lets us evaluate the system with different inputs and decreases the bias toward a specific part of the tweets or users. \textit{obama} refers to the former president of the United States, who has a huge number of followers, hashtags, mentions and tweets, which makes it a very good starting point for crawling. \textit{bank} and \textit{energy} are very general terms with a good number of relations and hashtags on Twitter; there are also many users with a significant number of related tweets, so we have a good chance to crawl a sufficiently large subset of the crawling space.
\textit{birlinggap} was one of the recent trends at the time of the experiments, which makes a manual check of the results easy. Fig.\ref{fig:birlinggap-new} shows the nodes retrieved by STiC after they are stored in the database. Red nodes represent tweets, blue nodes show the hashtags found to be related to the query, and purple nodes indicate the users crawled during the process. The edge labels define the type of relation between nodes.
\begin{figure}[t]
\includegraphics[width=0.85\linewidth, height=75mm]{birlinggap-new.png}
\caption{Retrieved nodes for query \textit{birlinggap} using STiC}
\label{fig:birlinggap-new}
\end{figure}
Figure \ref{fig:obama-nodes} and Figure \ref{fig:energy-nodes} show the number of nodes of each type for the queries \textit{obama} and \textit{energy} using STiC, respectively, and compare them with the results of a simple BFS API call for the same query. STiC used more API calls and found more related tweets; it decreased the variety of relations between nodes and brought in more Hashtag and User nodes compared to the simple BFS method.
\begin{figure}[t]
\includegraphics[width=\linewidth]{Nodes-obama.png}
\caption{Number of different nodes for query 'obama'}
\label{fig:obama-nodes}
\end{figure}
\begin{figure}[t]
\includegraphics[width=\linewidth]{Nodes-energy.png}
\caption{Number of different nodes for query 'energy'}
\label{fig:energy-nodes}
\end{figure}
Figure \ref{fig:bank-nodes} gives the comparison of the different nodes retrieved by STiC and by simple BFS for the query \textit{bank}. STiC yields more Tweet nodes and more User nodes, but fewer Hashtag nodes than the simple method. The reason is that STiC builds a connected graph by crawling more users and tweets rather than jumping from one node to another without any relation.
\begin{figure}[t]
\includegraphics[width=\linewidth]{Nodes-bank.png}
\caption{Number of different nodes for query 'bank'}
\label{fig:bank-nodes}
\end{figure}
Figure \ref{fig:birlinggap-nodes} shows that STiC found the same number of tweets for the \textit{birlinggap} query while using more API calls and increasing the number of relationships between nodes. This increase is explained by the numbers of Hashtag and User nodes: since STiC found more such nodes than simple BFS, they brought more edges with them.
\begin{figure}[t]
\includegraphics[width=\linewidth]{Nodes-birlinggap.png}
\caption{Number of different nodes for query 'birlinggap'}
\label{fig:birlinggap-nodes}
\end{figure}
For the final evaluation, we calculated the precision of both STiC and the simple BFS API call to show the improvement clearly. In Tables \ref{table:Measures-obama}, \ref{table:Measures-birlinggap}, \ref{table:Measures-energy} and \ref{table:Measures-bank} we report the precision of the simple BFS API call model and of the STiC model for each topic.
The STiC model shows a significant improvement in finding related tweets, especially for the \textit{birlinggap} query, for which simple BFS could not find any Hashtag or User nodes, which are very important for building relations and making connections between nodes.
\begin{center}
\captionof{table}{Precision result comparison for 'obama'}
\label{table:Measures-obama}
\begin{tabu}to 0.95\linewidth{ | X[l] |X[c] |X[c] |X[C]|}
\hline
\textsc{} & Retrieved Relevant Tweet & Retrieved Tweet & Precision\\
\hline
\textit{STiC} & 71 & 87 & 81.6\% \\
\hline
\textit{Simple BFS} & 16 & 25 & 64\% \\
\hline
\end{tabu}
\end{center}
\begin{center}
\captionof{table}{Precision result comparison for 'birlinggap'}
\label{table:Measures-birlinggap}
\begin{tabu}to 0.95\linewidth{ | X[l] |X[c] |X[c] |X[C]|}
\hline
\textsc{} & Retrieved Relevant Tweet & Retrieved Tweet & Precision\\
\hline
\textit{STiC} & 15 & 21 & 71.43\% \\
\hline
\textit{Simple BFS} & 8 & 22 & 36.36\% \\
\hline
\end{tabu}
\end{center}
\begin{center}
\captionof{table}{Precision result comparison for 'energy'}
\label{table:Measures-energy}
\begin{tabu}to 0.95\linewidth{ | X[l] |X[c] |X[c] |X[C]|}
\hline
\textsc{} & Retrieved Relevant Tweet & Retrieved Tweet & Precision\\
\hline
\textit{STiC} & 89 & 113 & 78.76\% \\
\hline
\textit{Simple BFS} & 16 & 29 & 55.17\% \\
\hline
\end{tabu}
\end{center}
\begin{center}
\captionof{table}{Precision result comparison for 'bank'}
\label{table:Measures-bank}
\begin{tabu}to 0.95\linewidth{ | X[l] |X[c] |X[c] |X[C]|}
\hline
\textsc{} & Retrieved Relevant Tweet & Retrieved Tweet & Precision\\
\hline
\textit{STiC} & 96 & 111 & 86.49\% \\
\hline
\textit{Simple BFS} & 24 & 31 & 77.42\% \\
\hline
\end{tabu}
\end{center}
Based on these results, shown in the tables and figures above, we observe a significant improvement in terms of precision, number of retrieved tweets and diversity of node types when using the STiC algorithm. The method shows high performance for crawling Twitter and finding tweets related to a given query. Compared to a simple BFS API call, STiC retrieves more related tweets while finding more Hashtags and Users and extending the list of nodes during the crawl.
This method uses more API calls, since it can find stronger relations between visited nodes and uncrawled ones. For queries such as \textit{birlinggap}, which is a proper noun with only a small set of related words, a simple BFS API call cannot reach a well-connected graph and most of the nodes remain disconnected, while STiC builds a graph with more edges between the nodes. For the other queries, STiC builds a connected graph with more nodes and less diversity in the number of relationships. By comparing the results of the STiC model across all queries, we observed that queries with more related keywords also show a greater number of related nodes than queries with a smaller set of keywords.
\section{Conclusion and perspective}
\label{sec:conclusion}
In this paper, we aimed at developing a system for crawling tweets relevant to a given topic using a keyword query. We considered two aspects of the problem: the keyword-set enrichment and the crawling of relevant tweets using the Twitter APIs.
First we focused on enriching queries: we used different external APIs (WordsAPI, Datamuse, Thesaurus, DBPedia, Wordnik, PyDictionary, Wordnet and the New York Times API) to identify related keywords. We calculated a TF-IDF score for these keywords and removed the ones whose score was lower than a threshold. We claim that we can retrieve more related tweets when we use more keywords related to the given topic. In the second step we defined a crawling algorithm and a scoring system in which each tweet is annotated with a score. Our crawling algorithm takes advantage of the textual content of tweets, using Natural Language Processing tools to deduce the relevance of each node. Our contributions thus consist in (i) the usage of several data sources in order to enrich a keyword query, and (ii) the definition of a smart crawling algorithm as the main method to access the tweets related to an original query, using the enriched keyword query and the REST Twitter APIs. Overall, we obtain very satisfying results on well-known topics: a large number of the retrieved tweets are related to the topic, and the number of tweets retrieved when running the model for a short period of time appears to be sufficient. Twitter is dynamic, as several thousand tweets are posted each second, and the Twitter graph is in constant evolution; however, the solution we developed seems to be resilient to these changes.

This work opens the door for further interactions between various data sources. We could also consider taking advantage of more than just the concepts from the APIs (e.g. the content of the articles). We would also have liked to test this process on a larger number of iterations, but we were limited by the manual aspect of our evaluation method. For future work, we plan to improve the discovery of related new tweets by using machine learning algorithms: we will try to build a supervised learning system that classifies new tweets, using the collected tweets as a training set. Such a system would be applicable whenever new queries overlap with the training set and the sample queries. To prune unrelated tweets more precisely, we could use the notion of popular and most influential users to improve the scoring system and assign weights to users as well. Another idea for smart crawling is to use the URLs provided in tweets, applying NLP methods to the text of the linked Web pages, together with their metadata, to create a more relevant list of keywords and hashtags for crawling new tweets.
\section{Acknowledgment}
\label{sec:acknowledgement}
We would like to thank V. Chetyrkine, C. Hamelain and X. Li for their work in the development of the tool which was the base model for STiC, and Benoit Grotz and Silviu Maniu for many helpful discussions.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
During recent years, micro-blogging has become a source of current and topical news. Twitter\footnote{www.twitter.com} is one of the most popular micro-blogging services; it brings together millions of users and allows them to publish and exchange short messages, known as \emph{tweets}. Twitter was a pioneer in providing APIs to access public data, starting in 2006\footnote{https://developer.twitter.com/en.html}, enabling applications to retrieve tweets using a set of keywords. However, there is no control on the retrieved tweets, which are not always relevant.
Reaching relevant answers requires multiple calls and filtering of the retrieved results. Several research works have as their objective the search for relevant tweets according to a given query~\cite{gabielkov2014sampling,gouriten2014scalable,li2013towards,safran2012improving}. Researchers have tried to improve retrieval based on features such as hashtags, retweets and mentions. For example, one basic and simple method, when a query contains just one word, is to measure the frequency of this word in the different features of Twitter~\cite{nazi2015walk}. Researchers have also tried to use external information extracted from sources such as Wikipedia to enrich a query, finding the most similar words in order to obtain the results most related to the given query~\cite{guisado2016enrich}. In this paper, we exploit different external resources such as \textit{Wordnet}, \textit{Wordnik}, \textit{DBPedia}, and the \textit{New York Times (NYT)} to obtain the most complete set of keywords similar to the original query keywords. We also define a smart crawling algorithm based on the tweets' features to reach the most relevant ones as early as possible while going beyond Twitter's limitations. More precisely, we define a new crawling algorithm called \textbf{Smart Twitter Crawling} (STiC) built on two different aspects: (i) enriching the original query using external sources, and (ii) crawling the Twitter graph using a DFS search with a new scoring scheme to reach the nodes (tweets) best related to the targeted topic. To measure the relevance of a tweet, the algorithm assigns it a score that takes into account its content, its hashtags and the users related to it through posting, replying, being mentioned or retweeting. Given this score, we are able to select highly related tweets in each iteration and to continue by adding further valuable related tweets. Different experiments have been carried out on tweets collected for different kinds of query keywords to evaluate the precision of the STiC algorithm. Thanks to our approach, compared to a simple BFS search, we increased the precision of the related retrieved tweets up to 86\% for some queries, and we also obtained a significant improvement in the number of retrieved results compared to the simple BFS model. In this paper, we first present related work in Section~\ref{sec:relatedwork}. Then, we present in detail our smart Twitter crawling approach, including query enrichment and Twitter graph crawling, in Section~\ref{sec:approach}. In Section~\ref{sec:experiments}, we present and discuss our results using different queries. Finally, in Section~\ref{sec:conclusion} we give our conclusions and perspectives.
\section{Related Work}
\label{sec:relatedwork}
Even before the emergence of social media, crawling Web pages was a common practice~\cite{li2013towards}, and finding the Web pages related to one topic has been an interesting problem to study~\cite{safran2012improving}. A common methodology uses neural networks and vector space models to compute priority models~\cite{safran2012improving}. Diligenti et al.~\cite{diligenti2000focused} in 2000 introduced a model for focused crawling based on context graphs, assigning appropriate credits to documents. Safran et al.~\cite{safran2012improving} in 2012 proposed a new approach to improve relevance prediction in focused Web crawlers.
They chose Na\"{\i}ve Bayes as the base prediction model and used four relevant attributes to create it: URL, anchor text, surrounding texts, and parent pages. They extended the list of keywords related to a topic by using WordNet and extracted relevant initial seed URLs automatically by choosing the top-$k$ URLs retrieved from the Google, Yahoo and MSN search engines. Gouriten et al.~\cite{gouriten2014scalable} in 2014 introduced an adaptive, scalable and generic system for focused crawling that identifies and optimizes the relevant subsystems. Their algorithm addresses focused Web crawling, topic-centered Twitter user crawling, deep Web siphoning through keyword search, gossip peer-to-peer search and real-world social network search to answer a query. Xinyue Wang et al.~\cite{wang2015adaptive} in 2015 studied how to crawl microblog feeds in real time; they proposed an adaptive crawling model which extracts hashtags from Twitter iteratively to obtain a list of tweets relevant to a query. Cha et al.~\cite{cha2010measuring} worked on how to find the most influential users in Twitter, and their results can be useful to complete the idea of topic-focused crawling. In 2010, Tianyi Wang et al.~\cite{wang2010unbiased} proposed a method for unbiased crawling of tweets based on the Metropolis-Hastings Random Walk (MHRW), using USDSG in the new method. Rui et al.~\cite{li2013towards} in 2013 proposed a data platform to automatically monitor ``target'' tweets from the Twitter stream for any specific topic; they designed the Automatic Topic-focused Monitor (ATM), which first samples tweets from the Twitter stream and then selects a list of keywords to track based on the samples. Gabielkov et al.~\cite{gabielkov2014sampling} in 2014 worked on sampling techniques for studying online social networks, with two scenarios for which they wanted to find the best technique: looking for the most popular users, and obtaining an unbiased sample of users. They showed that the classical sampling methods are highly biased by high-degree nodes~\cite{gabielkov2014studying}. In \cite{kwak2010twitter}, it was shown that BFS has a large bias when the number of requests to the API is limited. In a random walk, the choice of the next node to visit depends on the degree of the node. The USDSG algorithm (unbiased sampling for directed social graphs), proposed in \cite{wang2010unbiased}, is a modification of the random walk which discards a random jump to a node with probability proportional to the node's degree and replaces arcs with undirected edges.

Selecting keywords to retrieve relevant documents has also been studied in many academic works. As mentioned earlier, Safran~\cite{safran2012improving} in 2012 used WordNet to extend the extracted word set. Rui et al.~\cite{li2013towards} in 2013 proposed the ATM framework to select keywords in a constrained optimization approach, which finds near-optimal keywords with guarantees (e.g., keywords are not too specific) and considers two types of costs; ATM also updates the keywords over iterations while monitoring the Twitter stream continuously. In 2015, Xinyue Wang et al.~\cite{wang2015adaptive} reviewed the retrieved tweets to identify new keywords for automatically collecting live event tweets, these new keywords being mostly based on the hashtags embedded inside the tweets. Finally, Guisado et al.
in 2016 \cite{guisado2016enrich} presented a query rewriting cloud service. Their aim is to solve the problem of vocabulary mismatch and of users' topic inexperience; they proposed a method, called ENRICH, which offers a generic solution by analyzing websites using Wikipedia and identifying the relevant entities.
\section{Smart Twitter Crawling Approach}
\label{sec:approach}
Monitoring the set of tweets related to a target topic is an unsolved problem~\cite{congosto2017t}. In this section we present the Smart Twitter Crawling (STiC) approach we define as a solution to this problem. Figure~\ref{fig:architecture} gives an overview of our approach. The STiC algorithm enriches the initial keywords using external sources before querying the Twitter graph. It builds an initial sub-graph providing related seeds. The crawling is then based on a DFS search and exploits each considered tweet's features to assign a score and to select the most relevant tweets to be crawled. The result of the crawl is a sub-graph made of the different crawled nodes and the edges between them. This sub-graph is stored in a graph database, Neo4j in our work. Before going any further into details, we first present the input and output data representation of the STiC algorithm.
\begin{figure}
\includegraphics[width=\linewidth, height=70mm]{Architecture.png}
\caption{Architecture of our approach}
\label{fig:architecture}
\end{figure}
\subsection{Input and Output of STiC Algorithm}
\label{sec:representation}
Twitter data can be represented as a graph $\mathcal{T} = <\mathcal{V}, \mathcal{U}>$ where $\mathcal{V}$ is the set of nodes and $\mathcal{U}$ is the set of directed edges between nodes. Different types of nodes are defined in $\mathcal{V}$:
\begin{itemize}
\item $t$ is a \emph{tweet}, accompanied by attribute values, which include the text of the tweet and its identifier.
\item $h$ is a \emph{hashtag} extracted from the tweet.
\item $u$ is a \emph{user}, accompanied by its identifier value.
\end{itemize}
Different types of relations are defined in $\mathcal{U}$:
\begin{itemize}
\item \textbf{$<t,h>$} edge called $Has\_Hashtag$, which relates a tweet $t$ to a hashtag $h$ it contains.
\item \textbf{$<t,t^\prime>$} edges called:
\begin{itemize}
\item $Quotes$, which relates a tweet $t$ to a tweet $t^\prime$; in this case, the text of node $t$ contains the text of $t^\prime$ in addition to its own text.
\item $Replies\_To$, which relates a reply tweet $t$ to the original tweet $t^\prime$.
\item $ReTweets$, which relates a retweet $t$ to the original tweet $t^\prime$; in this case, the text of $t$ is exactly the same as the text of tweet $t^\prime$.
\end{itemize}
\item \textbf{$<t,u>$} edge called $Mentions$, which relates a tweet $t$ to the user $u$ mentioned in it.
\item \textbf{$<u,t>$} edges called:
\begin{itemize}
\item $Favorites$, which relates a user $u$ to a tweet $t$ that $u$ likes.
\item $Posts$, which relates a user $u$ to a tweet $t$ that $u$ posted.
\end{itemize}
\item \textbf{$<u,u^\prime>$} edge called $Follows$, which relates a user $u$ to a user $u^\prime$ he follows.
\end{itemize}
The input of the STiC algorithm is the list of keywords after enrichment and an initial sub-graph of Twitter in which the nodes carry no information beyond what is available from the Twitter API. The output of the algorithm is a sub-graph in which each node is accompanied by a score value.
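To make this data model concrete, the following sketch encodes the node and edge types using networkx; this is only an illustration (STiC itself stores the graph in Neo4j), and the identifiers and attribute values are invented:
\begin{verbatim}
# Illustrative encoding of T = <V, U>; STiC stores this in Neo4j,
# networkx is used here only to keep the example self-contained.
import networkx as nx

G = nx.MultiDiGraph()

# Nodes of V: a tweet t, a hashtag h and a user u (invented values).
G.add_node("t1", kind="tweet", id_str="111", text="sample #energy tweet")
G.add_node("h1", kind="hashtag", name="energy")
G.add_node("u1", kind="user", id_str="222")

# Typed, directed edges of U.
G.add_edge("t1", "h1", key="Has_Hashtag")
G.add_edge("u1", "t1", key="Posts")
G.add_edge("t1", "u1", key="Mentions")
# Quotes, Replies_To, ReTweets, Favorites and Follows are added in
# the same way between the corresponding pairs of nodes.
\end{verbatim}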
The STiC algorithm defines three main procedures:
\begin{itemize}
\item \textbf{Query enrichment procedure:} the first step of the algorithm, which enriches the list of keywords for the given query.
\item \textbf{Select new node procedure:} the procedure for selecting the next node to be visited in the crawling process.
\item \textbf{Smart crawling procedure:} the main process of the algorithm, which uses the enriched list of keywords and the node selection process to visit new nodes and crawl Twitter.
\end{itemize}
\subsection{Query enrichment}
\label{sec:queryEnrichment}
The REST Twitter APIs offer the possibility to retrieve tweets using a set of keywords. When a user tries to retrieve tweets, he is not always conscious of the best set of keywords to use in order to obtain the correct subset of tweets. In our research we use external sources to enrich the set of keywords that the user specifies in the target query. Alg.\ref{alg:CrawlAlgorithmQE} expresses the process of enriching a query. In this procedure, we collect all related words from different data sources, such as the NYT~\cite{zhao2011comparing}, Wikipedia~\cite{guisado2016enrich} and WordNet~\cite{safran2012improving} APIs. We identified a list of APIs that provide news, articles or synonyms as sources of information: \textit{New York Times (NYT)}, \textit{Wordnet}, \textit{Wordnik}, \textit{DBPedia}, \textit{Thesaurus}, \textit{Datamuse}, \textit{WordsAPI} and \textit{Pydictionary}. We also give a weight to each keyword to specify its relevance to the original query keywords. To assign this weight, we consider the subset of keywords given by each external source as a separate set and calculate the IDF of each keyword: each external source is treated as a document, and the term frequency is the number of occurrences of the word across all documents. The weight of each word is then its term frequency across all documents multiplied by its IDF score. For instance, for the \textit{obama} original query keyword we retrieve the following terms and their frequencies: (\textit{barack obama}: 4, \textit{barack hussein obama}: 3, \textit{barack}: 3, \textit{obama}: 3, \textit{community organizer in chief}: 2, \textit{barak obama}: 2, etc.). Then the weight (total term frequency multiplied by IDF score) of each term is computed: (\textit{barack obama}: 0.96, \textit{barack hussein obama}: 0.89, \textit{community organizer in chief}: 0.78, \textit{barak obama}: 0.78, etc.). Finally we select the top-scoring keywords: (\textit{barack obama}, \textit{barack hussein obama}, \textit{community organizer in chief}, \textit{barak obama}, \textit{barrack obama}, \textit{president obama}, \textit{barackobama}, \textit{brack obama}, \textit{obama barack}, etc.). We finally merge all the keywords extracted from all the APIs with their calculated weights, sort them by weight and keep those above a threshold $\alpha$. The key factor for selecting $\alpha$ is that the keywords with a score above the threshold should not be irrelevant to the query. This threshold may vary and depends on the type of the query and on the results of the query enrichment.
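The weighting step can be sketched as follows, treating each external source as one document. Here \texttt{related\_words} is assumed to return a list of keywords per source, and the smoothing and normalisation are our own indicative choices, since the exact variant used by STiC is not specified:
\begin{verbatim}
# Illustrative keyword weighting: one "document" per external source;
# weight = total term frequency * IDF, normalised to [0, 1].
import math
from collections import Counter

def enrich_query(query, sources, related_words, alpha):
    docs = [related_words(source, query) for source in sources]
    tf = Counter(kw for doc in docs for kw in doc)   # overall frequency
    weights = {}
    for kw, freq in tf.items():
        df = sum(1 for doc in docs if kw in doc)     # document frequency
        idf = math.log(1 + len(docs) / df)           # smoothed (assumed)
        weights[kw] = freq * idf
    top = max(weights.values(), default=1.0)
    # keep keywords whose normalised weight passes the threshold alpha
    return sorted((kw for kw, w in weights.items() if w / top >= alpha),
                  key=lambda kw: weights[kw], reverse=True)
\end{verbatim}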
Algorithm \ref{alg:CrawlAlgorithmQE} describes the function \textit{mostRelatedKeywords}, which returns the keywords whose weight is above the threshold $\alpha$.
\begin{algorithm}[t]
\DontPrintSemicolon
\SetKwInput{KwInput}{Input}
\SetKwInput{KwOutput}{Output}
\SetKwProg{Fn}{Function}{:}{}
\caption{STiC Algorithm - Query Enrichment}
\label{alg:CrawlAlgorithmQE}
\KwInput {$external\_sources\_list$ \tcp*{\textit{New York Times(NYT)}, \textit{Wordnet}, \textit{Wordnik}, \textit{DBPedia}, \textit{Thesaurus}, \textit{Datamuse}, \textit{WordsAPI}, \textit{Pydictionary}}}
\KwInput {$\alpha$ \tcp*{\textit{keyword relevance threshold}}}
\KwInput {$query$}
\KwOutput{$keywords\_list$}
\BlankLine
\For {$source$ in $external\_sources\_list$}
{
{$keywords\_list.add(related\_words(source, query))$}
}
{$calculateTFIDFScore(keywords\_list)$}\\
{\textbf{return} {$mostRelatedKeywords(keywords\_list,\alpha)$}}
\end{algorithm}
\subsection{Smart Crawling}
\label{sec:smartCrawling}
In Alg.\ref{alg:CrawlAlgorithmSC}, the list of keywords from Alg.\ref{alg:CrawlAlgorithmQE}, the initial node and the number of iterations are given to the procedure; in the first iteration, a sub-graph made of the neighbors of the initial node is created and the node scores are updated. The initialization of the graph is crucial for the success of the following iterations of the algorithm: a bad initialization can lead to a graph where no node is related to the query. Initially, we chose manually a single node we knew was relevant (e.g. the official hashtag, if it is in the set of keywords specified by the user). While quite effective, this selection cannot be done transparently, since it requires a manual choice for each different query and risks steering the crawl toward a specific part of the data. Instead, we initialize the graph automatically with the result of a simple Twitter API search using the enriched keywords. These preliminary results allow for the first round of the crawl. Then, for the given number of iterations, the crawler visits the node selected by Alg.\ref{alg:CrawlAlgorithmNS}, as explained in Section~\ref{sec:nodeSelection}, and adds its neighbors to the list of candidate nodes, which will be given to Alg.\ref{alg:CrawlAlgorithmNS} in the next iteration. Then, before moving to the next iteration, the scores of the nodes are updated, as explained in Section~\ref{sec:scoreCalculation}. In the end, a sub-graph of Twitter is returned, built by crawling nodes according to the score defined for each one.
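Putting the pieces together, one run of the crawler can be sketched as follows. The helpers are passed in as functions with the semantics described above, so the sketch stays independent of the Twitter API and Neo4j plumbing:
\begin{verbatim}
# Illustrative driver mirroring Alg. Smart Crawling; the helper
# functions are assumed to implement the semantics from the text.
def smart_crawl(iterations, initial_node, neighbors,
                update_scores, select_new_node):
    visited, queue = [], []
    current = initial_node
    for step in range(iterations):
        if step > 0:
            current = select_new_node(queue)
            while current in visited:   # skip already visited nodes
                current = select_new_node(queue)
        queue.extend(n for n in neighbors(current)
                     if n not in visited and n not in queue)
        update_scores(current)          # text/estimate/feedback updates
        visited.append(current)
    return visited
\end{verbatim}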
This method use more API calls since it can find stronger relation between visited nodes and uncrawled ones. For some queries such as \textit{birlinggap} which is a proper noun and there is small set of related words for them, simple BFS API call can not reach to a well connected graph and most of the nodes are not connected to each other while the STiC can build a graph with more edges between the nodes. For other queries, STiC can build a connected graph with more nodes and less diversity in number of relationships. By comparing the results of STiC model for all queries, we observed that the number of queries having more related keywords how also a greater number of related nodes with respect to the queries having a smaller set of keywords. \begin{comment} \subsubsection{new Score System\newline} \begin{center} \captionof{table}{Comparison of New scoring system and Old scoring system for 'obama'} \begin{tabu} to \textwidth { | X[l] | X[c] | X[c] |X[c] |X[c] |X[c] |X[c] |X[c] | X[R] |} \hline \textbf{Model} & Queue list & Visited list & API request & Node & Relatio- nship & Hashtag & User & Tweet \\ \hline \textbf{New System} & 2212 & 48 & 79 & 300 & 25 & 2 & 213 & 87 \\ \hline \textbf{Old System} & 2218 & 48 & 79 & 300 & 25 & 2 & 213 & 87 \\ \hline \end{tabu} \end{center} \begin{center} \captionof{table}{Comparison of New scoring system and Old scoring system for 'birlinggap'} \begin{tabu} to \textwidth { | X[l] | X[c] | X[c] |X[c] |X[c] |X[c] |X[c] |X[c] | X[R] |} \hline \textbf{Model} & Queue list & Visited list & API request & Node & Relatio- nship & Hashtag & User & Tweet \\ \hline \textbf{New System} & 69 & 9 & 5 & 38 & 74 & 4 & 11 & 22 \\ \hline \textbf{Old System} & 65 & 8 & 5 & 36 & 73 & 4 & 11 & 21 \\ \hline \end{tabu} \end{center} \begin{center} \captionof{table}{Comparison of New scoring system and Old scoring system for 'energy'} \begin{tabu} to \textwidth { | X[l] | X[c] | X[c] |X[c] |X[c] |X[c] |X[c] |X[c] | X[R] |} \hline \textbf{Model} & Queue list & Visited list & API request & Node & Relatio- nship & Hashtag & User & Tweet \\ \hline \textbf{New System} & 2281 & 55 & 86 & 305 & 27 & 9 & 178 & 116 \\ \hline \textbf{Old System} & 2270 & 51 & 82 & 300 & 25 & 9 & 178 & 113 \\ \hline \end{tabu} \end{center} \begin{center} \captionof{table}{Comparison of New scoring system and Old scoring system for 'bank'} \begin{tabu} to \textwidth { | X[l] | X[c] | X[c] |X[c] |X[c] |X[c] |X[c] |X[c] | X[R] |} \hline \textbf{Model} & Queue list & Visited list & API request & Node & Relatio- nship & Hashtag & User & Tweet \\ \hline \textbf{New System} & 2219 & 53 & 79 & 302 & 25 & 7 & 182 & 113 \\ \hline \textbf{Old System} & 2216 & 51 & 79 & 300 & 25 & 7 & 182 & 111 \\ \hline \end{tabu} \end{center} These experiences seem to indicate that query enrichment has a huge influence on improving the results to find the related Tweets to an original query by using the smart crawling algorithm. Also, starting the crawling Tweets from a node which is more related to the original query can improve the final results. By comparing the achieved results for F-Measure of both models, the significant improvement in new model is observable. \end{comment} \section{Conclusion and perspective} \label{sec:conclusion} In this paper, we aimed at developing a system for crawling relevant tweets to a given topic using a keyword query. We considered two aspects of the problem: the keyword-set enrichment and the crawling of relevant tweets using the Twitter APIs. 
\section{Introduction}
During recent years, micro-blogging has become a source of current and topical news. Twitter\footnote{www.twitter.com} is one of the most popular micro-blogging services; it brings together millions of users and allows them to publish and exchange short messages, known as \emph{tweets}. Twitter has been a pioneer in providing APIs to access public data since 2006\footnote{https://developer.twitter.com/en.html}, enabling applications to retrieve tweets using a set of keywords. However, there is no control over the retrieved tweets, which are not always relevant.
Reaching relevant answers requires multiple API calls and filtering of the retrieved results. Several research works aim at retrieving the tweets relevant to a given query~\cite{gabielkov2014sampling,gouriten2014scalable,li2013towards,safran2012improving}. Researchers have tried to exploit features such as hashtags, retweets and mentions to retrieve the most relevant tweets from Twitter. For example, one basic and simple method, applicable when a query contains a single word, is to measure the frequency of this word in the different features of Twitter~\cite{nazi2015walk}. Researchers have also used external information extracted from sources such as Wikipedia to enrich a query with the most similar words and thus obtain the results most related to the given query~\cite{guisado2016enrich}. In this paper, we exploit different external resources, such as \textit{Wordnet}, \textit{Wordnik}, \textit{DBPedia} and the \textit{New York Times (NYT)}, to obtain the most complete possible set of keywords similar to the original query keywords. We also define a smart crawling algorithm based on tweet features to reach the most relevant tweets as early as possible and to go beyond the Twitter API limitations. More precisely, we define a new crawling algorithm called \textbf{Smart Twitter Crawling} (STiC) that combines two aspects: (i) enriching the original query using external sources, and (ii) crawling the Twitter graph using a DFS search based on a new scoring scheme to reach the nodes (tweets) most related to the targeted topic. To measure the relevance of a tweet, the algorithm assigns it a score that takes into account its content, its hashtags and the users related to it, e.g.~by posting, replying, being mentioned or retweeting. Given this score, we are able to select highly related tweets in each iteration and to continue the crawl by adding the most valuable related tweets. Different experiments have been carried out on tweets collected for different kinds of query keywords in order to evaluate the precision of the STiC algorithm. Thanks to our approach, compared to a simple BFS search, we increased the precision of the related retrieved tweets up to 86\% for some queries. We also obtained a significant improvement in the number of retrieved results compared with the simple BFS model. In this paper, we first present related work in Section~\ref{sec:relatedwork}. Then, we present in detail our smart Twitter crawling approach, including query enrichment and Twitter graph crawling, in Section~\ref{sec:approach}. In Section~\ref{sec:experiments}, we present and discuss our results using different queries. Finally, in Section~\ref{sec:conclusion} we give our conclusions and perspectives.
\section{Related Work}
\label{sec:relatedwork}
Even before the emergence of social media, crawling Web pages was a common practice~\cite{li2013towards}, and finding the Web pages related to a given topic has been a well-studied problem~\cite{safran2012improving}. A commonly applied methodology used neural networks and vector space models to compute priority models~\cite{safran2012improving}. Diligenti et al.~\cite{diligenti2000focused} introduced in 2000 a model for focused crawling based on context graphs, which assigns appropriate credits to documents. Safran et al.~\cite{safran2012improving} proposed in 2012 a new approach to improve relevance prediction in focused Web crawlers.
They chose Na\"{\i}ve Bayes as the base prediction model and used four relevant attributes to create their prediction model: URL, anchor text, surrounding texts, and parent pages. They extended the list of keywords related to a topic by using WordNet, and they extracted relevant initial seed URLs automatically by choosing the top-$k$ URLs retrieved from the Google, Yahoo and MSN search engines. Gouriten et al.~\cite{gouriten2014scalable} introduced in 2014 an adaptive, scalable and generic system for focused crawling that identifies and optimizes the relevant subsystems. Their algorithm targets focused Web crawling, topic-centered Twitter user crawling, deep Web siphoning through keyword search, gossip peer-to-peer search, and querying real-world social networks. Wang et al.~\cite{wang2015adaptive} studied in 2015 how to crawl microblog feeds in real time. They proposed an adaptive crawling model which iteratively extracts hashtags from Twitter in order to obtain a list of tweets relevant to a query. Cha et al.~\cite{cha2010measuring} worked on finding the most influential users in Twitter, and their results can usefully complement the idea of topic-focused crawling. In 2010, Wang et al.~\cite{wang2010unbiased} proposed a method for unbiased crawling of tweets based on the Metropolis-Hastings Random Walk (MHRW) using USDSG. Li et al.~\cite{li2013towards} proposed in 2013 a data platform to automatically monitor ``target'' tweets from the Twitter stream for any specific topic. They designed the Automatic Topic-focused Monitor (ATM), which first samples tweets from the Twitter stream and then selects a list of keywords to track based on the samples. Gabielkov et al.~\cite{gabielkov2014sampling} worked in 2014 on sampling techniques for studying online social networks. They considered two sampling scenarios and looked for the best technique for each of them: in the first one they look for the most popular users, while in the second one they aim at obtaining an unbiased sample of users. They showed that the classical sampling methods are highly biased by high degree nodes~\cite{gabielkov2014studying}. In \cite{kwak2010twitter}, the authors proved that BFS has a large bias when the number of requests to the API is limited. In a random walk (RW), the choice of the next node to visit depends on the degree of the node. They used the USDSG (Unbiased Sampling for Directed Social Graphs) algorithm proposed in \cite{wang2010unbiased}, a modification of RW which discards random jumps to a node with a probability proportional to the degree of the node and replaces arcs with undirected edges.\newline Selecting keywords to retrieve relevant documents has been studied in many academic works. As mentioned earlier, Safran et al.~\cite{safran2012improving} used WordNet in 2012 to extend the extracted word set. Li et al.~\cite{li2013towards} proposed in 2013 the ATM framework, which selects keywords through a constrained optimization approach that finds near-optimal keywords with guarantees (e.g., keywords that are not too specific) and considers two types of costs; ATM also updates the keywords over the iterations while monitoring the Twitter stream continuously. In 2015, Wang et al.~\cite{wang2015adaptive} reviewed the retrieved tweets to identify new keywords for automatic live event tweet collection; these new keywords were mostly based on the hashtags embedded inside the tweets. Guisado et al.
presented in 2016~\cite{guisado2016enrich} a query rewriting cloud service. Their aim is to solve the problem of vocabulary mismatch and the topic inexperience of users. They proposed a method, called ENRICH, which offers a generic solution by analyzing websites using Wikipedia and identifying entities. \\
\section{Smart Twitter Crawling Approach}
\label{sec:approach}
Monitoring the set of tweets related to a target topic is an unsolved problem~\cite{congosto2017t}. In this section we present the Smart Twitter Crawling (STiC) approach we defined as a solution to this problem. Figure~\ref{fig:architecture} gives an overview of our approach. The STiC algorithm enriches the initial keywords using external sources before querying the Twitter graph. It builds an initial sub-graph providing related seeds. The crawling is then based on a DFS search and exploits the features of each considered tweet to assign it a score and to select the most relevant tweets to be crawled. The result of the crawl is a sub-graph made of the different crawled nodes and the edges between them. This sub-graph is stored in a graph database, Neo4j in our work. Before going any further into details, we first present the input and output data representation of the STiC algorithm.
\begin{figure}
\includegraphics[width=\linewidth, height=70mm]{Architecture.png}
\caption{Architecture of our approach}
\label{fig:architecture}
\end{figure}
\subsection{Input and Output of STiC Algorithm}
\label{sec:representation}
Twitter data can be represented as a graph $\mathcal{T} = <\mathcal{V} , \mathcal{U}> $ where $\mathcal{V}$ is the set of nodes and $\mathcal{U}$ is the set of directed edges between nodes. Different types of nodes are defined in $\mathcal{V}$:
\begin{itemize}
\item $t$ is a \emph{tweet}, accompanied by attribute values, which include the text of the tweet and its identifier.
\item $h$ is a \emph{hashtag} extracted from the tweet.
\item $u$ is a \emph{user}, accompanied by its identifier value.
\end{itemize}
Different types of relations are defined in $\mathcal{U}$:
\begin{itemize}
\item \textbf{$<t,h>$} edges called $Has\_Hashtag$, which relate a tweet $t$ to a hashtag $h$ it contains.
\item \textbf{$<t,t^\prime>$} edges called:
\begin{itemize}
\item $Quotes$, which relates a tweet $t$ to a tweet $t^\prime$ it quotes. In this case, the text of node $t$ contains the text of $t^\prime$ in addition to its own text.
\item $Replies\_To$, which relates a reply tweet $t$ to the original tweet $t^\prime$.
\item $ReTweets$, which relates a retweet $t$ to the original tweet $t^\prime$. In this case, the text of $t$ is exactly the same as the text of tweet $t^\prime$.
\end{itemize}
\item \textbf{$<t,u>$} edges called $Mentions$, which relate a tweet $t$ to the user $u$ mentioned in it.
\item \textbf{$<u,t>$} edges called:
\begin{itemize}
\item $Favorites$, which relates a user $u$ to a tweet $t$ that $u$ likes.
\item $Posts$, which relates a user $u$ to a tweet $t$ that $u$ posted.
\end{itemize}
\item \textbf{$<u,u^\prime>$} edges called $Follows$, which relate a user $u$ to a user $u^\prime$ he follows.
\end{itemize}
The input of the STiC algorithm is the list of keywords after enrichment and an initial sub-graph of Twitter in which nodes carry no additional information beyond what is available from the Twitter API. The output of the algorithm is a sub-graph in which each node is accompanied by a score value.
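To make this data representation concrete, the following minimal Python sketch shows how such nodes and edges could be stored in Neo4j with the Python driver used in our implementation. The connection settings, the property names and the \texttt{merge\_tweet} helper are illustrative assumptions, not part of STiC itself.
\begin{verbatim}
from neo4j import GraphDatabase  # older neo4j-driver releases expose neo4j.v1

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))  # illustrative

def merge_tweet(session, tweet_id, text, hashtags, user_id):
    # Store one tweet, its hashtags and its author, mirroring the
    # Has_Hashtag and Posts edge types defined above.
    session.run("MERGE (t:Tweet {id_str: $tid}) SET t.text = $text",
                tid=tweet_id, text=text)
    session.run("MERGE (u:User {id_str: $uid}) WITH u "
                "MATCH (t:Tweet {id_str: $tid}) MERGE (u)-[:Posts]->(t)",
                uid=user_id, tid=tweet_id)
    for h in hashtags:
        session.run("MERGE (h:Hashtag {name: $h}) WITH h "
                    "MATCH (t:Tweet {id_str: $tid}) "
                    "MERGE (t)-[:Has_Hashtag]->(h)", h=h, tid=tweet_id)

with driver.session() as session:
    merge_tweet(session, "42", "example tweet #energy", ["energy"], "7")
\end{verbatim}
The \texttt{MERGE} clauses keep the sub-graph free of duplicate nodes when the same tweet, user or hashtag is reached through several relations.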
The STiC algorithm defines three main procedures:
\begin{itemize}
\item \textbf{Query enrichment procedure:} the first step of the algorithm, which enriches the list of keywords for the given query.
\item \textbf{Select new node procedure:} the procedure for selecting the next node to be visited in the crawling process.
\item \textbf{Smart crawling procedure:} the main process of the algorithm, which uses the enriched list of keywords and the node selection procedure to visit new nodes and crawl Twitter.
\end{itemize}
\subsection{Query enrichment}
\label{sec:queryEnrichment}
The REST Twitter APIs offer the possibility to retrieve tweets using a set of keywords. However, a user who tries to retrieve tweets is not always aware of the best set of keywords to use in order to obtain the desired subset of tweets. In our work we use external sources to enrich the set of keywords that the user specifies in the target query. Alg.~\ref{alg:CrawlAlgorithmQE} expresses the process of enriching a query. In this procedure, we collect all related words from different data sources, such as the NYT~\cite{zhao2011comparing}, Wikipedia~\cite{guisado2016enrich} and WordNet~\cite{safran2012improving} APIs. We identified a list of APIs that provide news, articles or synonyms as sources of information, namely: \textit{New York Times (NYT)}, \textit{Wordnet}, \textit{Wordnik}, \textit{DBPedia}, \textit{Thesaurus}, \textit{Datamuse}, \textit{WordsAPI} and \textit{Pydictionary}. We also give a weight to each keyword to specify its relevance to the original query keywords. To assign this weight, we consider the subset of keywords given by each external source as a separate set and we calculate the IDF of each keyword. To this end, each external source is considered as a document, and the term frequency of a keyword is the number of its occurrences over all documents. We then assign to each word a weight equal to its total term frequency multiplied by its IDF score. For instance, for the original query keyword \textit{obama} we retrieve the following terms and their frequencies: (\textit{barack obama}: 4, \textit{barack hussein obama}: 3, \textit{barack}: 3, \textit{obama}: 3, \textit{community organizer in chief}: 2, \textit{barak obama}: 2, etc.). \\
The weight (total term frequency multiplied by IDF score) of each term is then computed: (\textit{barack obama}: 0.96, \textit{barack hussein obama}: 0.89, \textit{community organizer in chief}: 0.78, \textit{barak obama}: 0.78, etc.). \\
Finally we select the top scored keywords: (\textit{barack obama}, \textit{barack hussein obama}, \textit{community organizer in chief}, \textit{barak obama}, \textit{barrack obama}, \textit{president obama}, \textit{barackobama}, \textit{brack obama}, \textit{obama barack}, etc.). We merge all the keywords extracted from all APIs together with their calculated weights, and we sort them based on their weights and on a threshold $\alpha$. The key factor in selecting $\alpha$ is that the keywords with a score above the threshold should not be irrelevant to the query. This threshold may vary, depending on the type of the query and on the results of the query enrichment.
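The following Python sketch illustrates this weighting scheme: each external API is treated as one document, and the weight of a keyword is its total term frequency multiplied by its IDF. The exact IDF variant, the smoothing and the normalization by the top score are assumptions, since several formulations lead to the same ranking.
\begin{verbatim}
import math
from collections import Counter

def keyword_weights(source_keywords):
    # source_keywords: one list of related keywords per external API
    # (NYT, Wordnet, Wordnik, DBPedia, ...)
    n_docs = len(source_keywords)
    tf = Counter()   # occurrences over all sources
    df = Counter()   # number of sources containing the keyword
    for keywords in source_keywords:
        tf.update(keywords)
        df.update(set(keywords))
    # weight = total term frequency * IDF (assumed smoothed variant)
    return {w: tf[w] * math.log(1 + n_docs / df[w]) for w in tf}

def most_related_keywords(weights, alpha=0.5):
    # keep keywords whose weight, normalized by the best one, reaches alpha
    top = max(weights.values())
    kept = [w for w, s in weights.items() if s / top >= alpha]
    return sorted(kept, key=lambda w: -weights[w])
\end{verbatim}
Applied to the \textit{obama} example above, this returns the highest-weighted variants, such as \textit{barack obama}, first.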
The function \textit{mostRelatedKeywords} in Alg.~\ref{alg:CrawlAlgorithmQE} returns the keywords whose score is above the threshold $\alpha$.
\begin{algorithm}[t]
\DontPrintSemicolon
\SetKwInput{KwInput}{Input}
\SetKwInput{KwOutput}{Output}
\SetKwProg{Fn}{Function}{:}{}
\caption{STiC Algorithm - Query Enrichment}
\label{alg:CrawlAlgorithmQE}
\KwInput {$external\_sources\_list$ \tcp*{\textit{New York Times(NYT)}, \textit{Wordnet}, \textit{Wordnik}, \textit{DBPedia}, \textit{Thesaurus}, \textit{Datamuse}, \textit{WordsAPI}, \textit{Pydictionary}}}
\KwInput {$\alpha$ \tcp*{\textit{keyword relevance threshold}}}
\KwInput {$query$}
\KwOutput{$keywords\_list$}
\BlankLine
\For {$source$ in $external\_sources\_list$}
{
{$keywords\_list.add(related\_words(source, query))$}
}
{$calculateTFIDFScore(keywords\_list)$}\\
{\textbf{return} {$mostRelatedKeywords(keywords\_list,\alpha)$}}
\end{algorithm}
\subsection{Smart Crawling}
\label{sec:smartCrawling}
In Alg.~\ref{alg:CrawlAlgorithmSC}, the list of keywords from Alg.~\ref{alg:CrawlAlgorithmQE}, the initial node and the number of iterations are given to the procedure; in the first iteration, a sub-graph made of the neighbors of the initial node is created and the scores of the nodes are updated. The initialization of the graph is crucial for the success of the following iterations of the algorithm: a bad initialization can lead to a graph in which no node is related to the query. In the beginning, we chose manually a single node we knew was relevant (e.g.~the official hashtag, if it is in the set of keywords specified by the user). While quite effective, this selection cannot be done transparently, since it requires a manual choice for each different query and it risks steering the crawl towards a specific part of the data. Instead, we initialize the graph automatically with the result of a simple Twitter API search using the enriched keywords. These preliminary results allow for the first round of the crawl. Then, for the given number of iterations, the crawler visits the node selected by Alg.~\ref{alg:CrawlAlgorithmNS}, which is explained in Section~\ref{sec:nodeSelection}, and adds its neighbors to the list of candidate nodes, which is given to Alg.~\ref{alg:CrawlAlgorithmNS} in the next iteration. Before moving to the next iteration, the scores of the nodes are updated, as explained in Section~\ref{sec:scoreCalculation}. In the end, a sub-graph of Twitter is returned, created by crawling nodes according to the score defined for each one.
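A compact Python rendering of this crawling loop (Alg.~\ref{alg:CrawlAlgorithmSC}) is sketched below. The helper functions (\textit{select\_new\_node}, \textit{neighbors}, \textit{update\_queue}, \textit{update\_scores}) are placeholders for the node selection of Alg.~\ref{alg:CrawlAlgorithmNS}, the Twitter API calls and the score updates of Section~\ref{sec:scoreCalculation}.
\begin{verbatim}
def smart_crawl(keywords, iterations, initial_node,
                select_new_node, neighbors, update_queue, update_scores):
    # Sketch of STiC's main loop; the helpers are passed in as callables.
    queue, visited = [], []
    current = initial_node
    update_queue(queue, neighbors(current))  # neighbors come from the API
    update_scores(queue)
    visited.append(current)
    for _ in range(iterations - 1):
        current = select_new_node(queue)     # probabilistic choice (Alg. 3)
        while current in visited:            # skip already-visited nodes
            current = select_new_node(queue)
        update_queue(queue, neighbors(current))
        update_scores(queue)
        visited.append(current)
    return visited
\end{verbatim}
Keeping the visited list separate from the queue mirrors the two roles in Alg.~\ref{alg:CrawlAlgorithmSC}: the queue holds scored candidates, while the visited list is the returned sub-graph.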
\begin{algorithm}[t]
\DontPrintSemicolon
\SetKwInput{KwInput}{Input}
\SetKwInput{KwOutput}{Output}
\SetKwProg{Fn}{Function}{:}{}
\caption{STiC Algorithm - Smart Crawling}
\label{alg:CrawlAlgorithmSC}
\KwInput {$keywords\_list$}
\KwInput {$iterations$}
\KwInput {$initial\_relevant\_node$}
\KwOutput{$visited\_list$}
\BlankLine
{ $i \gets \textit{iterations}$ }\\
{ ${ n_0 \gets \textit{initial\_relevant\_node} }$ }\\
{ $current\_node \gets n_0$ }\\
{ $update\_queue\_nodes(current\_node\_neighbors)$ }\\
{ $update\_nodes\_scores()$ }\\
{ $add\_to\_visited\_list(current\_node)$ }\\
{ $\textit{i} \gets \textit{i - 1}$ }\\
\While {i $>$ 0}
{
{ $current\_node \gets \textit{selectNewNode(queue\_nodes)}$ }\\
\While{$is\_visited(current\_node)$}
{
{ $current\_node \gets \textit{selectNewNode(queue\_nodes)}$ }\\
}
{ $update\_queue\_nodes(current\_node\_neighbors)$ }\\
{ $update\_nodes\_scores()$ }\\
{ $add\_to\_visited\_list(current\_node)$ }\\
{ $\textit{i} \gets \textit{i - 1}$ }\\
}
\textbf{return} $\textit{visited\_list}$
\end{algorithm}
\subsection{Node Selection}
\label{sec:nodeSelection}
Our crawling method selects, during each iteration, the next node from which to continue the crawling. On the one hand, we want to explore the most promising nodes, i.e.~the ones with the highest estimated scores, and not waste queries on irrelevant nodes. On the other hand, we would also like to avoid remaining in one portion of the Twitter graph and missing other relevant nodes. The first objective can be understood as efficiency, whereas the second as completeness. A common solution to this trade-off is the introduction of a random-based choice. In the equations below, $p$ (a real number between 0 and 1) parametrizes the probability distribution. The closer $p$ is to 1, the higher the probability of choosing a high-score node. On the contrary, if $p$ is close to 0, the low-scored nodes have a higher probability of being chosen. This probability distribution is inspired by a multinomial distribution and a soft-max function.
For each node $i$, the probability $P_i$ of being selected is obtained by applying a soft-max to the non-normalized function $f_i$, which depends on the parameter $p$:
\[ P_i = \dfrac{ \exp \left( f_i \right)}{\underset{i}\sum \exp \left( f_i \right)} \]
\[ \text{where: }f_i = \dfrac{x_i }{x_{\min}} \cdot p + \dfrac{x_{\max}}{x_i} \cdot (1-p) \]
Here $x_i$ denotes the score of node $i$, while $x_{\min}$ and $x_{\max}$ are the minimum and maximum scores among the candidate (queue) nodes; $p$ is chosen arbitrarily. Using this formula, we are able to jump from one node to another if the score of a node is not large enough for it to be selected for crawling. The probability $P_i$ gives the chance of a node being selected, as the ratio of $\exp(f_i)$ to the sum of $\exp(f_i)$ over all candidate nodes. This selection procedure is formalized in Alg.~\ref{alg:CrawlAlgorithmNS}. In the next section we describe the process of calculating the score for the different types of nodes.
\subsubsection{Score Calculation}
\label{sec:scoreCalculation}
At the beginning of each iteration, a node \textit{$n_0$} is selected, and all the information about this node, including the \textit{id\_str} of its neighbors, is retrieved from Twitter; the neighbors are then added to the queue for the next iteration. Then, the scores of all nodes are updated. For \textit{$n_0$}, only the \textsc{text\_score} is available at the beginning, and since no other node has been visited before it, its \textsc{estimate\_score} is equal to 0. The final score is calculated from the various score attributes to determine the relevance of a node to the initial query. The score related to the text of a tweet is defined as follows:
\begin{itemize}
\item \textsc{text\_score}$(t)$: is defined for a tweet node $t$ and represents the frequency of the query keywords in the text body of the tweet (a Python sketch of this text analysis is given after this list).
\paragraph{Tweet content analysis.}
\label{sec:contentAnalysis}
Contrary to the User and Hashtag nodes (the Hashtag nodes are merely a word or a name), a tweet is characterized by a textual content, which allows us to use Natural Language Processing tools to judge its relevance to the target topic. We begin this step with a list of keywords. The analysis of the tweet consists in a lexical and semantic comparison between the keywords and the text body. This analysis begins with the lemmatization of both texts, a classic NLP step that transforms words into their root form; this allows us to ignore plurality for nouns and tenses for verbs. Punctuation marks and linking words (e.g.~the, and, a, of, \ldots) are removed, because they usually do not convey useful semantic knowledge. Both texts are then compared lexically and semantically. The lexical comparison is done by counting the number of words the texts have in common. We note that this count is not normalized, but the 280-character limit of a tweet prevents a longer text from containing a large number of keywords. The semantic comparison is done using the WordNet database, in which words possess various relationships with each other. In particular, we utilize the hyponym relationship: the link between two words is measured via the depth of their closest common ancestor in the hyponymy graph. A keyword is considered to match a word through a semantic relation if the similarity value given by WordNet is higher than a threshold set beforehand.
Finally, the score obtained from the text of a tweet is the sum of the weights of the matched keywords (matched either lexically or by a semantic relation).\\
\item \textsc{estimate\_score}: estimates the relevance of a node $n\in$ $\mathcal{V}$ based on the score of a direct predecessor node $n^\prime\in$ $\mathcal{V}$, i.e.~a visited node that has a relation with $n$, where the edge $e\in$ $\mathcal{U}$ connects $n$ and $n^\prime$.
\begin{itemize}
\item {for a Tweet}: \textsc{tweet\_estimate\_coef}$ = [0.4, 0.6, 1.0, 1.0, 1.0, 0.5,0.5]$\\
These coefficients concern, in order, the user who posts the tweet, the users mentioned in this tweet, the original of this tweet if it replies to another one, the original of this tweet if it quotes another one, the original of this tweet if it is a retweet of another one, and the retweets of this tweet.
\item {for a User}: \textsc{user\_estimate\_coef}$ = [1.0, 0.6, 0.5, 0.3]$\\
These coefficients concern, in order, the tweets posted by this user, his favorite tweets, his friends, and his followers.
\end{itemize}
\item \textsc{feedback\_score}: estimates the relevance of a node $n\in$ $\mathcal{V}$ based on the scores of its direct successor nodes $n^\prime\in$ $\mathcal{V}$, i.e.~nodes visited after $n$, where an edge $e\in$ $\mathcal{U}$ between $n$ and $n^\prime$ represents their relation.
\item \textsc{score}: this final score is computed after the crawling and feedback steps of the algorithm, based on the three previous scores:
\begin{itemize}
\item {for a Tweet}: $Score$ = $text\_score$ + $feedback\_score$
\item {for a User}: $Score$ = $estimate\_score$ + $feedback\_score$
\item {for a Hashtag}: $Score$ = $estimate\_score$ + $feedback\_score$ + $Occurrence\_Count$
\end{itemize}
\end{itemize}
To obtain a node's {\sc estimate\_score}, we multiply its predecessor's {\sc score} by the corresponding coefficient. Thus, a tweet node has 4 score-related attributes, whereas the other node types have 3. These attributes exist regardless of the node's state. We assume that we begin with some seed tweets, considered highly relevant; the precise way in which we obtain those tweets is detailed in Section 3.4. We evaluate the \textsc{text\_score} of these seeds using the strategy described in Section 3.2, and we set their \textsc{estimate\_score} equal to their \textsc{text\_score} to allow our algorithm to run. At each iteration during the crawling, we begin by selecting a new node using the method described in Section 3.5. We then query Twitter to complete its information and update the \textsc{score} of the nodes as follows. If the node is a tweet, we compute its \textsc{text\_score} and add the difference between the computed \textsc{text\_score} and its \textsc{estimate\_score} to its parent's \textsc{feedback\_score}. We then add this node's uncrawled neighbors to the graph, setting their \textsc{estimate\_score} to a fraction of the current node's score, depending on the relationship they share. If it is a User node, the score is the sum of its \textsc{estimate\_score} and \textsc{feedback\_score}. If it is a Hashtag node, in addition to the sum of its \textsc{estimate\_score} and \textsc{feedback\_score}, we count how often it appears and add this count to its score. Alg.~\ref{alg:CrawlAlgorithmNS} defines the process of selecting a new node: the input is the list of candidate nodes; for each node in this list, the function $f$ is calculated, followed by its selection probability $P$; in the end, the node with the highest probability is returned.
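The following sketch illustrates how the \textsc{text\_score} computation described above could be implemented with NLTK's WordNet interface. The use of path similarity and the threshold value 0.3 are illustrative assumptions; the text only requires a WordNet-based similarity with a preset threshold.
\begin{verbatim}
# requires: nltk.download("wordnet"); nltk.download("stopwords")
from nltk.corpus import stopwords, wordnet as wn
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
stop = set(stopwords.words("english"))

def lemmas(text):
    # lemmatize and drop punctuation tokens and linking words
    return [lemmatizer.lemmatize(w) for w in text.lower().split()
            if w.isalnum() and w not in stop]

def semantic_match(word, keyword, threshold=0.3):
    # best WordNet similarity over all synset pairs (assumed measure)
    sims = [s1.path_similarity(s2) or 0.0
            for s1 in wn.synsets(word) for s2 in wn.synsets(keyword)]
    return bool(sims) and max(sims) >= threshold

def text_score(tweet_text, weighted_keywords):
    # sum of the weights of keywords matched lexically or semantically
    words = set(lemmas(tweet_text))
    return sum(weight for kw, weight in weighted_keywords.items()
               if kw in words or any(semantic_match(w, kw) for w in words))
\end{verbatim}
Similarly, the selection step of Alg.~\ref{alg:CrawlAlgorithmNS} can be sketched as follows; the \textit{score} attribute is assumed to hold the current, strictly positive, score of a queued node.
\begin{verbatim}
import math

def select_new_node(queue_nodes, p=0.7):
    # f combines exploitation (x/x_min * p) and exploration (x_max/x * (1-p));
    # scores are assumed strictly positive, as in the formula above
    scores = [n.score for n in queue_nodes]
    x_min, x_max = min(scores), max(scores)
    f = [x / x_min * p + x_max / x * (1 - p) for x in scores]
    z = sum(math.exp(v) for v in f)
    probs = [math.exp(v) / z for v in f]     # soft-max normalization
    # Alg. 3 returns the node with the highest probability; sampling from
    # probs instead would trade some precision for more exploration.
    return max(zip(queue_nodes, probs), key=lambda pair: pair[1])[0]
\end{verbatim}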
\begin{algorithm}[t]
\DontPrintSemicolon
\SetKwInput{KwInput}{Input}
\SetKwInput{KwOutput}{Output}
\SetKwProg{Fn}{Function}{:}{}
\caption{STiC Algorithm - Node Selection}
\label{alg:CrawlAlgorithmNS}
\KwInput {$queue\_nodes$}
\KwInput {$ p \gets 0.7$ \tcp*{probability of selecting high score node}}
\KwOutput {$selected\_node$}
\BlankLine
{ $max\_score \gets \textit{maximum score of queue nodes}$ }\\
{ $min\_score \gets \textit{minimum score of queue nodes}$ }\\
\For {$node$ in $queue\_nodes$}
{
{ ${ f[i] = calculate\_F(node.score, min\_score, max\_score, p) }$ }\\
{ $P[i] = exp(f[i])/sum(exp(f[i]))$ }\\
}
{ \textbf{return} {$node\_with\_max\_P[i]$} }\\
\end{algorithm}
\section{Experiments and Evaluation}
\label{sec:experiments}
The STiC algorithm is implemented in Python: we used tweepy\footnote{http://www.tweepy.org/} v3.5 to access the Twitter API, and neo4j-driver\footnote{https://neo4j.com/developer/python/} v1.0.2 and neo4jrestclient\footnote{https://pypi.org/project/neo4jrestclient/} v2.1.1 to communicate with Neo4j. For enriching the list of keywords we used the different external APIs, all accessed from Python\footnote{https://www.python.org/download/releases/3.4.0/} 3.4; in some cases we needed to create a new client library, while for the others we used predefined ones. Our aim was to increase the precision of the retrieved tweets. In order to evaluate how successful STiC is, we ran the experiments with a maximum of 100 crawling iterations and a maximum timeout of 720 seconds. The relevance threshold for keywords, $\alpha$, was set to 0.5 and the threshold for selecting a high score node, $p$, to 0.7. These values are arbitrary and were selected after observing a few iterations of crawling. We ran the model on each query separately and stored the results in order to compare them through statistics and manual checks. We selected four original queries from four different categories, namely proper nouns (\textit{obama}), general words (\textit{bank}), concepts (\textit{energy}) and recent trends (\textit{birlinggap}). These keywords were chosen to cover different categories of queries, so as to evaluate the system with different inputs and to decrease the bias towards a specific part of the tweets or users. \textit{obama} is the previous president of the United States; he has a huge number of followers, hashtags, mentions and tweets, which makes this query a very good option to start the crawling. \textit{bank} and \textit{energy} are very general and have a good number of relations and hashtags in Twitter; there are also many users with a significant number of related tweets, so we have a good chance to crawl a sufficiently large subset of the crawling space.
\textit{birlinggap} was one of the recent trends at the moment of the experiments, which gives us the chance to easily perform a manual check of the results. Fig.~\ref{fig:birlinggap-new} shows the retrieved nodes using STiC after storing them in the database. Red nodes represent tweets, blue nodes show the hashtags found to be related to the query, and purple nodes indicate the users crawled during the process. The edge labels define the type of relation between the nodes.
\begin{figure}[t]
\includegraphics[width=0.85\linewidth, height=75mm]{birlinggap-new.png}
\caption{Retrieved nodes for query \textit{birlinggap} using STiC}
\label{fig:birlinggap-new}
\end{figure}
Figure \ref{fig:obama-nodes} and Figure \ref{fig:energy-nodes} show the number of nodes of each type for the queries \textit{obama} and \textit{energy} using STiC, respectively, and compare them with the results of a simple BFS API call for the same query. STiC used more API calls and found more related tweets. It decreased the variety of relations between nodes and brought in more Hashtag and User nodes compared to the simple BFS method.
\begin{figure}[t]
\includegraphics[width=\linewidth]{Nodes-obama.png}
\caption{Number of different nodes for query 'obama'}
\label{fig:obama-nodes}
\end{figure}
\begin{figure}[t]
\includegraphics[width=\linewidth]{Nodes-energy.png}
\caption{Number of different nodes for query 'energy'}
\label{fig:energy-nodes}
\end{figure}
Figure \ref{fig:bank-nodes} gives the comparison of the different nodes retrieved by STiC and by simple BFS for the query \textit{bank}. STiC finds more Tweet nodes and more User nodes, but fewer Hashtag nodes than the simple method. The reason is that STiC builds a connected graph by crawling more users and tweets, rather than jumping from one node to another without any relation.
\begin{figure}[t]
\includegraphics[width=\linewidth]{Nodes-bank.png}
\caption{Number of different nodes for query 'bank'}
\label{fig:bank-nodes}
\end{figure}
Figure \ref{fig:birlinggap-nodes} shows that STiC found the same number of tweets for the \textit{birlinggap} query while using more API calls and increasing the number of relationships between nodes. This increase is explained by the numbers of Hashtags and Users: since STiC found more such nodes than simple BFS, these nodes induced more edges than before.
\begin{figure}[t]
\includegraphics[width=\linewidth]{Nodes-birlinggap.png}
\caption{Number of different nodes for query 'birlinggap'}
\label{fig:birlinggap-nodes}
\end{figure}
For the final evaluation, we computed the precision of both STiC and the simple BFS API call in order to quantify the improvement. In Tables \ref{table:Measures-obama}, \ref{table:Measures-birlinggap}, \ref{table:Measures-energy} and \ref{table:Measures-bank} we report the precision of the simple BFS API call model and of the STiC model for each topic.
The STiC model shows a significant improvement in finding related tweets, especially for the \textit{birlinggap} query, for which simple BFS could not find any Hashtag or User nodes; such nodes are very important for building relations and making connections between nodes.
\begin{center}
\captionof{table}{Precision result comparison for 'obama'}
\label{table:Measures-obama}
\begin{tabu}to 0.95\linewidth{ | X[l] |X[c] |X[c] |X[C]|}
\hline
Model & Retrieved Relevant Tweet & Retrieved Tweet & Precision\\
\hline
\textit{STiC} & 71 & 87 & 81.6\% \\
\hline
\textit{Simple BFS} & 16 & 25 & 64\% \\
\hline
\end{tabu}
\end{center}
\begin{center}
\captionof{table}{Precision result comparison for 'birlinggap'}
\label{table:Measures-birlinggap}
\begin{tabu}to 0.95\linewidth{ | X[l] |X[c] |X[c] |X[C]|}
\hline
Model & Retrieved Relevant Tweet & Retrieved Tweet & Precision\\
\hline
\textit{STiC} & 15 & 21 & 71.43\% \\
\hline
\textit{Simple BFS} & 8 & 22 & 36.36\% \\
\hline
\end{tabu}
\end{center}
\begin{center}
\captionof{table}{Precision result comparison for 'energy'}
\label{table:Measures-energy}
\begin{tabu}to 0.95\linewidth{ | X[l] |X[c] |X[c] |X[C]|}
\hline
Model & Retrieved Relevant Tweet & Retrieved Tweet & Precision\\
\hline
\textit{STiC} & 89 & 113 & 78.76\% \\
\hline
\textit{Simple BFS} & 16 & 29 & 55.17\% \\
\hline
\end{tabu}
\end{center}
\begin{center}
\captionof{table}{Precision result comparison for 'bank'}
\label{table:Measures-bank}
\begin{tabu}to 0.95\linewidth{ | X[l] |X[c] |X[c] |X[C]|}
\hline
Model & Retrieved Relevant Tweet & Retrieved Tweet & Precision\\
\hline
\textit{STiC} & 96 & 111 & 86.49\% \\
\hline
\textit{Simple BFS} & 24 & 31 & 77.42\% \\
\hline
\end{tabu}
\end{center}
Based on the results shown in the tables and figures above, we observe a significant improvement in terms of precision, number of retrieved tweets and number of different node types when using the STiC algorithm. The method shows high performance in crawling Twitter and finding the tweets related to a given query. Compared to a simple BFS API call, STiC is able to retrieve more related tweets, while finding more Hashtags and Users and extending the list of nodes during the crawl.
STiC uses more API calls, since it can find stronger relations between the visited nodes and the uncrawled ones. For queries such as \textit{birlinggap}, which is a proper noun with only a small set of related words, a simple BFS API call cannot reach a well-connected graph and most of the nodes are not connected to each other, while STiC can build a graph with more edges between the nodes. For the other queries, STiC builds a connected graph with more nodes and less diversity in the number of relationships. By comparing the results of the STiC model across all queries, we observed that the queries having more related keywords also obtain a greater number of related nodes than the queries having a smaller set of keywords.
\section{Conclusion and perspective}
\label{sec:conclusion}
In this paper, we aimed at developing a system for crawling the tweets relevant to a given topic, specified as a keyword query. We considered two aspects of the problem: the keyword-set enrichment and the crawling of relevant tweets using the Twitter APIs.
First, we focused on enriching queries: we used different external APIs (WordsAPI, Datamuse, Thesaurus, DBPedia, Wordnik, PyDictionary, Wordnet and the New York Times API) to identify related keywords. We computed a TF-IDF score for these keywords and removed the ones whose score fell below a threshold, on the premise that using more related keywords for a given topic allows us to retrieve more related tweets. In the second step, we defined a crawling algorithm and a scoring system in which each tweet is annotated by a score. Our crawling algorithm takes advantage of the text content of tweets, using Natural Language Processing tools to deduce the relevance of each node. Overall, we obtain very satisfying results on well-known topics: a large number of the retrieved tweets are related to the topic, and the number of tweets retrieved when running the model for a short period of time appears to be sufficient. Twitter is dynamic, as several thousand tweets are posted each second and the Twitter graph is in constant evolution; nevertheless, the solution we developed seems to be resilient to these changes.

This work opens the door for further interactions between various data sources. We could also consider taking advantage of more than just the concepts from the APIs (e.g., the content of the articles). We would also have liked to test this process over a larger number of iterations, but we were limited by the manual aspect of our evaluation method.

For future work, we plan to improve the retrieval of related new tweets by using machine learning algorithms: we will try to build a supervised learning system to classify new tweets, using the collected tweets as a training set. Such a system would be applicable in the many cases that overlap with the training set and the sample queries. To be more precise, and to achieve a significant improvement in pruning unrelated tweets, we could use the notion of popular and most influential users to improve the performance of the scoring system and assign weights to users as well. Another idea for smart crawling of tweets is to use the URLs provided in tweets and to apply NLP methods to the text of the linked Web pages, besides considering the metadata of those pages, in order to create a more relevant list of keywords and hashtags for crawling new tweets.

\section{Acknowledgment}
\label{sec:acknowledgement}
Our contributions consist in (i) the usage of several data sources in order to enrich a keyword query; and (ii) the definition of a smart crawling algorithm as the main method to access the tweets related to an original query, using the enriched keyword query and the REST Twitter APIs. We would like to thank V. Chetyrkine, C. Hamelain and X. Li for their work in the development of the tool which was the base model for STiC, and Benoit Grotz and Silviu Maniu for many helpful discussions.

\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
\label{s:introduction}

Let $\mathsf{RCA}_0$ be the system in the language of second-order arithmetic consisting of the usual arithmetic laws, the recursive comprehension axiom, and the $\Sigma_1$-induction scheme. The main objective of reverse mathematics is to investigate the proof-theoretic strength of a mathematical theorem formulated in this language over a base system such as $\mathsf{RCA}_0$. This approach has proved to be very fruitful, as can be seen from numerous examples in the literature. Of particular relevance to the subject of this paper are weak K\"onig's lemma ($\mathsf{WKL}_0$) and Ramsey's theorem for pairs ($\mathsf{RT}^2_2$), two theorems which, when coupled with $\mathsf{RCA}_0$, constitute major subsystems of second-order arithmetic.

$\mathsf{WKL}_0$ states that every infinite binary tree has an infinite path. Harrington (unpublished, see Simpson \cite{Sim2009}) showed that it is $\Pi^1_1$-conservative over $\mathsf{RCA}_0$, i.e.~every $\Pi^1_1$-sentence provable in $\mathsf{WKL}_0$ is already provable in $\mathsf{RCA}_0$. $\mathsf{RT}^2_2$ states that for every two-coloring of pairs of natural numbers, there is an infinite set in which all pairs of numbers have the same color. It was shown by Hirst \cite{Hirst1987} that $\mathsf{RCA}_0+\mathsf{RT}^2_2$ implies the $\Sigma_2$-bounding scheme $B\Sigma_2$, whose strength lies strictly between that of $\Sigma_1$- and $\Sigma_2$-induction (Paris and Kirby \cite{KP1978}). Principles related to, or inspired by, $\mathsf{RT}^2_2$ are arguably the most intensively studied combinatorial principles in reverse mathematics in the past 25 years. It is a long-standing open problem whether, over $\mathsf{RCA}_0$, $\mathsf{RT}^2_2$ is $\Pi^1_1$-conservative over $B\Sigma_2$. A significant step towards resolving the open problem was taken by Patey and Yokoyama \cite{PY} who showed that $\mathsf{RT}^2_2+\mathsf{WKL}_0$ is $\Pi^0_3$-conservative over $\mathsf{RCA}_0$. This result motivated the study of the corresponding problem, discussed in this paper, on finite colorings of the full binary tree.

Let $\mathsf{TT}^1$ be the principle stating that every finite coloring of nodes in the full infinite binary tree has an isomorphic subtree that is homogeneous (or monochromatic), i.e.~a tree in which all nodes have the same color. $\mathsf{TT}^1$ is a deceptively simple principle: Given an instance of $\mathsf{TT}^1$ in a standard model of $\mathsf{RCA}_0$, i.e.~one whose first-order part is the set of standard natural numbers, a straightforward argument shows that above some fixed node there is a dense set of nodes computable in the coloring (viewed as a function on the full binary tree) with the same color, and this offers an immediate solution to the instance. However, as proved in Corduan, Groszek and Mileti \cite{CGM2009}, the picture is very different for $\mathsf{TT}^1$ in a model of $\mathsf{RCA}_0$ in which $\Sigma_2$-induction ($I\Sigma_2$) does not hold. Recent studies show that $\mathsf{TT}^1$ shares an important property with $\mathsf{RT}^2_2$ (Chong, Li, Wang and Yang \cite{CLWY}, Chong, Slaman and Yang \cite{CSY2017}): Over $\mathsf{RCA}_0$, the inductive strength of $\mathsf{TT}^1$ and of $\mathsf{RT}^2_2$ each lies strictly below $I\Sigma_2$. In fact, by combining the constructions in \cite{CLWY} and \cite{CSY2017}, one can derive the same result for $\mathsf{TT}^1+\mathsf{RT}^2_2+\mathsf{WKL}_0$, although the two coloring principles have long been known to imply $B\Sigma_2$.
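To fix ideas, here is a simple concrete instance of $\mathsf{TT}^1$: for the $2$-coloring $C(\sigma) = |\sigma| \bmod 2$, the image of the embedding that doubles every bit of a string, $\tau \mapsto \tau(0)\tau(0)\tau(1)\tau(1)\cdots$, is an isomorphic subtree all of whose nodes have even length, and hence color $0$.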
On the other hand, it was shown in \cite{CLWY} that $\mathsf{RCA}_0+\mathsf{TT}^1$ is $\Pi^1_1$-conservative over $P\Sigma_1+B\Sigma_2$, where $P\Sigma_1$ is a $\Pi^0_3$-sentence shown by Kreuzer and Yokoyama \cite{KY} to be equivalent to a number of principles, including the totality of the Ackermann function as well as the bounded monotone enumeration principle $\mathsf{BME}_1$, introduced in Chong, Slaman and Yang \cite{CSY2014} for the separation of the stable Ramsey's theorem principle $\mathsf{SRT}^2_2$ from $\mathsf{RT}^2_2$. It is not known if $\mathsf{RCA}_0+\mathsf{RT}^2_2$ is $\Pi^1_1$-conservative over $P\Sigma_1+B\Sigma_2$.

In this paper, we show in Theorem \ref{thm:conservation-1} that $\mathsf{TT}^1+\mathsf{RT}^2_2+\mathsf{WKL}_0$ is $\Pi^0_3$-conservative over $\mathsf{RCA}_0$. This exhibits yet another common property shared by $\mathsf{TT}^1$ and $\mathsf{RT}^2_2$. It follows (Corollary \ref{cor:TT1-PSigma1}) that this system does not imply $P\Sigma_1$.

Apart from being of independent interest, Theorem \ref{thm:conservation-1} also sheds light on the problem of comparing the strengths of $\mathsf{TT}^1$ and $\mathsf{RT}^2_2$ over $\mathsf{RCA}_0$. First, since $\mathsf{TT}^1$ holds in every model of $\mathsf{RCA}_0+I\Sigma_2$, one concludes immediately that $\mathsf{RCA}_0+\mathsf{TT}^1\not\rightarrow\mathsf{RT}^2_2$. The converse is not known. One may investigate the problem from the angle of relative conservation strength. Thus, if $\mathsf{TT}^1$ were not $\Pi^0_3$-conservative over $\mathsf{RCA}_0$, then it would follow that $\mathsf{RT}^2_2$ does not imply $\mathsf{TT}^1$ over $\mathsf{RCA}_0$. Theorem \ref{thm:conservation-1} shows that this approach does not resolve the problem.

Our strategy to establish Theorem \ref{thm:conservation-1} is, broadly speaking, similar in outline to that in Ko{\l}odziejczyk and Yokoyama \cite{KoYo}, although the details are naturally different. In Section \ref{s:persistence}, we prove a combinatorial result (Theorem \ref{thm:large-prehomogeneous-trees}) which states that if a ``very large'' finite tree is finitely colored then it contains an almost homogeneous subtree which is ``sufficiently large''. The combinatorial theorem is then applied in Section \ref{s:conservation} to prove the conservation result.

We conclude this section by fixing or recalling some standard notation. Given a set $X$, let $[X]^n$ denote the collection of $n$-element subsets of $X$. When $n = 2$, elements of $[X]^2$ are identified with ordered pairs $(x,y)$ where $x < y$. Greek letters $\rho, \sigma, \tau, \eta, \ldots$ denote finite binary strings. Given $\sigma$ and $\tau$, write $\sigma \preceq \tau$ (respectively $\sigma \prec \tau$) if $\sigma$ is a (respectively proper) initial segment of $\tau$. The length of $\sigma$ is denoted $|\sigma|$, and the initial segment of $\sigma$ of length $n$ (if it exists) is denoted $\sigma \upharpoonright n$. Two strings are \emph{compatible} if one is an initial segment of the other. A \emph{tree} is a set of finite strings closed under initial segments. A subset $S$ is \emph{compatible} with $\sigma$ if all members of $S$ are compatible with $\sigma$. By abuse of notation, we write $2^i$ both for $2^i$ as a natural number and for the set of all strings of length $i$; it will be clear from the context which object is meant. Subsets of $\mathbb{N}$ are denoted $X, Y, Z, X_i, Y_i, Z_i$ etc.
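For instance, with this notation, $[\{1,2,3\}]^2 = \{(1,2),(1,3),(2,3)\}$; and for $\sigma = 011$ we have $|\sigma| = 3$ and $\sigma \upharpoonright 2 = 01 \prec \sigma$, so $01$ and $011$ are compatible, while $01$ and $001$ are not.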
Very often, we will restrict our attention to a finite section $\{\sigma\in 2^{<\omega}: |\sigma|\in X\}$ of a ``large'' finite set $X$, large in the sense of Ketonen and Solovay \cite{KS1981}. Recall that a nonempty set $X$ is \emph{$\omega$-large} if $|X| > \min X$. Given $x$, $X$ and $Y$, write $X < Y$ if $X = \emptyset \neq Y$ or $\max X < \min Y$, and write $x < X$ (respectively $X < x$) if $\{x\} < X$ (respectively $X < \{x\}$). Given $n\in\mathbb{N}$, a \emph{stack} of sets $\{X_m: m < n\}$ (of size $n$) is a collection of nonempty finite sets such that $X_m < X_{m+1}$ for $m < n-1$. We say that $X$ is \emph{$\omega^{d} \cdot n$-large} if $X$ is the union of a stack of $n$ many $\omega^d$-large sets; and $X$ is \emph{$\omega^{d+1}$-large} if $X = \{\min X\} \cup X_1$ where $\min X_1 > \min X$ and $X_1$ is $\omega^d \cdot \min X$-large. The following property of $\alpha$-largeness will be used, sometimes implicitly, in the subsequent discussion: \begin{fact} \label{fac:large-inj} If $X$ is $\omega^n$-large and $f: X \to Y$ is an injection such that $f(x) \leq x$ for all $x$, then $Y$ is also $\omega^n$-large. In particular, a superset of an $\omega^n$-large set is also $\omega^n$-large. \end{fact} Let $\exp(x) = 2^x$ and $\exp^{n+1}(x) = \exp(\exp^n(x))$. The following useful bound is given in \cite[\S 2]{KoYo}. \begin{fact}\label{fac:omega3-large} If $X$ is $\omega^3$-large then $|X| > \exp^{\min X}(\min X)$. \end{fact} In Section \ref{s:persistence}, we introduce two notions of largeness for finite trees: persistence and superpersistence. A tree with either of these properties is large in the sense that, even after a thinning operation, the resulting tree continues to contain a subtree that is homogeneous for a given coloring and of sufficiently large size. \section{Persistence and a key combinatorial theorem} \label{s:persistence} This section is devoted to proving Theorem \ref{thm:large-prehomogeneous-trees} which is used in an essential way to establish our main result in the next section. The proof of Theorem \ref{thm:large-prehomogeneous-trees} applies a number of combinatorial lemmata concerning a largeness notion for trees called {\it persistence} and its generalization {\it superpersistence}. For a model $\mathfrak{M}\models \mathsf{RCA}_0$, we let $\mathbb{N}$ denote the first-order part of $\mathfrak{M}$ and call it the set of natural numbers (in the model). Reference to standard natural numbers will be explicitly stated to avoid any possible ambiguity. The reader familiar with fragments of Peano arithmetic will observe that the discussion in this section can be carried out in any model of $\mathsf{RCA}_0$. We begin with defining some notions concerning trees. \begin{definition} \label{def:q-strong-tree} Let $X= \{x_0 < \ldots < x_n\}$ be a subset of $\mathbb{N}$. A finite binary tree $T$ is \emph{$X$-quasistrong} if \begin{enumerate} \item [(i)] $T \cap 2^{x_i} \neq \emptyset$ for every $x_i \in X$; \item [(ii)] for each $i <n$, every $\sigma \in T \cap 2^{x_i}$ has exactly two incompatible extensions in $T \cap 2^{x_{i+1}}$. \end{enumerate} \end{definition} Intuitively, the nodes on an $X$-quasistrong tree are required to split in a well-controlled manner determined by $X$. More precisely, every node of length $x_i$ has incompatible extensions of length $x_{i+1}$. This allows one to measure the largeness of such trees in terms of the $\alpha$-largeness of $X$. 
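To illustrate these notions with small numbers, the set
\[
X = \{2\} \cup \{3,4,5,6\} \cup \{7,8,\ldots,15\}
\]
is $\omega^2$-large, since $\{3,4,5,6\}$ and $\{7,\ldots,15\}$ form a stack of $\min X = 2$ many $\omega$-large sets. Likewise, for $X = \{1,3\}$, the tree consisting of all initial segments of $000$, $011$, $100$ and $111$ is $X$-quasistrong: it meets both $2^1$ and $2^3$, and each of the nodes $0$ and $1$ has exactly two incompatible extensions of length $3$.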
\begin{definition} \label{def:prehomogeneous} Given $X = \{x_0 < \ldots < x_n\} \subseteq \mathbb{N}$ and a coloring $C$ of $2^{<\mathbb{N}}$, a tree $T \subseteq 2^{<\mathbb{N}}$ is \emph{$(C,X)$-prehomogeneous for color $c$} if for every $i<n$, $\sigma \in T \cap 2^{x_i}$ and $\sigma \prec \tau \in T \cap 2^{x_{i+1}}$, there exists a $\zeta\in T$ such that $\sigma \preceq \zeta \preceq \tau$ and $C(\zeta) = c$. A tree is \emph{$(C,X)$-prehomogeneous} if it is $(C,X)$-prehomogeneous for some color. \end{definition} The main combinatorial theorem states: \begin{theorem} \label{thm:large-prehomogeneous-trees} Let $d,k\in\mathbb{N}$. If $X$ is an $\omega^{2d+1}$-large finite set with $\min X > \max\{4,k\}$ and $C$ is a $k$-coloring of an $X$-quasistrong finite binary tree $T$, then there exist an $\omega^d$-large $Y \subseteq X$ and an $S \subseteq T$ such that $S$ is both $Y$-quasistrong and $(C,Y)$-prehomogeneous. \end{theorem} The proof of the theorem requires the use of two types of tree thinning operation. The notion of persistence is related to the first type of thinning which is induced by coloring of the leaves. \begin{definition} \label{def:persistent} Let $n, d, k\ge 1$, $\alpha=\omega^d \cdot n$ and let $X$ be a nonempty finite subset of ${\mathbb{N}}$. Then \begin{enumerate} \item [(i)] $X$ is \emph{$(\alpha,k,0)$-persistent} if $X$ is $\alpha$-large; \item [(ii)] $X$ is \emph{$(\alpha,k,i)$-persistent}, for $i\geq 1$, if $X$ contains an $(\alpha,k,i-1)$-persistent subset $Y$ such that for any $X$-quasistrong tree $T$ and any coloring of the leaves $C: T \cap 2^{\max X} \to k$, there exist a $c < k$ and a $Y$-quasistrong finite tree $S \subseteq T$, such that every leaf of $S$ has an extension in $C^{-1}(c)$. \end{enumerate} \end{definition} From now on, $\alpha$ is always of the form $\omega^d \cdot n$. The persistence property bears some resemblance to that of $\alpha$-largeness: \begin{proposition} \label{fac:persistent} \ \begin{enumerate} \item [(i)] If $k' \geq k$, $i' \geq i$ and $X$ is $(\alpha,k',i')$-persistent, then $X$ is $(\alpha,k,i)$-persistent. \item [(ii)] Every $(\alpha,k,i)$-persistent set is $\alpha$-large. \item [(iii)] Every superset of an $(\alpha,k,i)$-persistent set is $(\alpha,k,i)$-persistent. \item [(iv)] Suppose that $X$ is $(\alpha,k,i)$-persistent, $f: X \to Y$ is injective and $f(x) \leq x$ for all $x \in X$. Then $Y$ is also $(\alpha,k,i)$-persistent. \end{enumerate} \end{proposition} \begin{proof} (i) and (ii) are immediate. (iii). Suppose $X \supseteq Y$ and $Y$ is $(\alpha,k,i)$-persistent. The case where $i = 0$ follows from Fact \ref{fac:large-inj}. Assume that $i > 0$. Let $Z \subseteq Y$ witness $Y$ being $(\alpha,k,i)$-persistent as in Definition \ref{def:persistent}, in particular, $Z$ is $(\alpha,k,i-1)$-persistent. Let $T$ be an $X$-quasistrong tree and $C: T \cap 2^{\max X} \to k$. Let $T_Y = T \cap 2^{\leq \max Y}$. Then, we may assume that $T_Y$ is $Y$-quasistrong (by deleting some nodes if necessary). For $\sigma$ a leaf of $T_Y$, let $\zeta(\sigma) \in T \cap 2^{\max X}$ be the leftmost extension of $\sigma$. Define $$ C_Y(\sigma) = C(\zeta(\sigma)). $$ By the persistence of $Y$, there exist $S$ and $c < k$ such that $S$ is a $Z$-quasistrong subtree of $T_Y$ and every leaf of $S$ has an extension in $C_Y^{-1}(c)$. By the definition of $C_Y$, every leaf of $S$ has an extension in $C^{-1}(c)$. Hence $Z$ witnesses $X$ being $(\alpha,k,i)$-persistent. (iv). By (iii), we may assume that $f$ is bijective. 
We prove (iv) by induction on $i$. The case where $i = 0$ follows from Fact \ref{fac:large-inj}. Below, assume that $i > 0$. Let $Z \subseteq X$ witness $X$ being $(\alpha,k,i)$-persistent. We prove that $f(Z)$ witnesses $Y$ being $(\alpha,k,i)$-persistent. As $Z$ is $(\alpha,k,i-1)$-persistent, $f(Z)$ is $(\alpha,k,i-1)$-persistent as well by the induction hypothesis. Let $T$ be a $Y$-quasistrong tree. As $f$ is bijective, there is an $X$-quasistrong tree $T_X$ and a bijection $g$ such that $g: T_X \cap \bigcup_{x \in X} 2^x \to T \cap \bigcup_{y \in Y} 2^y$ and $\sigma \prec \tau$ in $T_X$ if and only if $g(\sigma) \prec g(\tau)$ in $T$. Thus $g$ is, in a sense, an isomorphism (once we ignore some irrelevant nodes). For $C: T \cap 2^{\max Y} \to k$, define $C_X: T_X \cap 2^{\max X} \to k$ by
$$
C_X(\sigma) = C(g(\sigma)).
$$
Since $Z$ witnesses the persistence of $X$, there exist $c < k$ and $S_X$ such that $S_X$ is a $Z$-quasistrong subtree of $T_X$ and each leaf of $S_X$ has an extension in $C_X^{-1}(c)$. Let
$$
S = \{\sigma \in T: \sigma \text{ has an extension in } g(S_X)\}.
$$
Then $S$ is $f(Z)$-quasistrong and each of its leaves has an extension in $C^{-1}(c)$.
\end{proof}

To prove the existence of $(\omega,\cdot,\cdot)$-persistent sets, we begin with a lemma whose proof is inspired by an idea from Ku\u{c}era-G\'{a}cs coding \cite{Kuc1984,Gac1986} in algorithmic randomness.

\begin{lemma} [Ku\u{c}era-G\'{a}cs coding] \label{lem:Kucera-Gacs}
There exists a primitive recursive function $g: \mathbb{N} \times \mathbb{Q} \to \mathbb{N}$ such that if $n = g(m,\delta) \geq m \geq 1 \geq \delta > 0$ and $X \in [\mathbb{N}]^{\geq n}$, then there exists a $Y \in [X]^m$ such that
\begin{enumerate}
\item [(i)] $\min Y = \min X$, and
\item [(ii)] for every $X$-quasistrong tree $T$ and every $A \subseteq T \cap 2^{\max X}$ with $|A| \geq \delta |T \cap 2^{\max X}|$, there exists a $Y$-quasistrong tree $S$ such that $S \subseteq T$ and every leaf of $S$ has an extension in $A$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $e:=e(\delta)$ be the least integer with
\begin{equation}\label{eq:pos-q-strong-e}
2^{-e} < \delta.
\end{equation}
Define a sequence $(e_p)_{p\in \mathbb{N}}$ by
$$
e_0 = 0, \ e_p = \sum_{r = 1}^{p} (e+r) = e_{p-1} + e + p.
$$
Let
\begin{equation} \label{eq:pos-q-strong-f}
g(m,\delta) = e_{m-1}= \sum_{r=1}^{m-1} (e+r) = (m-1) e + \frac{(m-1)m}{2}.
\end{equation}
Fix $n = g(m,\delta)$ and $X \in [\mathbb{N}]^{\geq n}$. We may further assume that $X$ has exactly $n$ elements
$$
x_0 < x_1 < \ldots < x_{n-1}.
$$
For $i < m$, let $y_i = x_{e_i}$, and let
$$
Y = \{y_i: i < m\}.
$$
Let $x = \max X$. Fix an $X$-quasistrong tree $T$ and $A \subseteq T \cap 2^{x}$ such that $|A| \geq \delta |T \cap 2^{x}| > 2^{-e} |T \cap 2^{x}|$. For each $\rho \in T$, let
$$
T_{\rho} = \{\sigma \in T: \sigma \text{ is compatible with } \rho\}.
$$
Let $\rho \in T \cap 2^{y_0}$ be such that
$$
|A \cap T_\rho \cap 2^{x}| > 2^{-e} |T_\rho \cap 2^x|;
$$
such a $\rho$ exists because $|A| \geq \delta |T \cap 2^{x}|$ and because of the additivity of the measure. Let
$$
S_0 = \{\sigma: \sigma \preceq \rho\}.
$$
Clearly $S_0$ is $\{y_0\}$-quasistrong. Suppose that $p < m-1$, $S_p$ is a subtree of $T \cap 2^{\leq y_p}$ such that $S_p$ is $\{y_i: i \leq p\}$-quasistrong, and
\begin{equation}\label{eq:pos-q-strong-T}
|A \cap T_\sigma \cap 2^{x}| > 2^{-e-p} |T_\sigma \cap 2^{x}|
\end{equation}
for each $\sigma \in S_p \cap 2^{y_p}$.
For each $\sigma \in S_p \cap 2^{y_p}$, there exist incompatible $\tau(\sigma,0)$ and $\tau(\sigma,1) \in T \cap 2^{y_{p+1}}$ such that $$ |A \cap T_{\tau(\sigma,i)} \cap 2^{x}| > 2^{-e-p-1} |T_{\tau(\sigma,i)} \cap 2^{x}|. $$ The reason is that if not, the total number of nodes in $A\cap T_{\sigma}\cap 2^x$ would be at most \[ t+\frac{2^{e+p+1}-1}{2^{e+p+1}}t=2t-\frac{1}{2^{e+p+1}}t \] where $t=|T_{\tau}\cap 2^x|$ for some (or any) $\tau \in T \cap 2^{y_{p+1}}$, contradicting $$ |A \cap T_\sigma \cap 2^{x}| > 2^{-e-p} |T_\sigma \cap 2^x|=2t. $$ Let $$ S_{p+1} = \{\rho: \exists \sigma, i(\sigma \in S_p \cap 2^{y_p} \wedge i = 0,1 \wedge \rho \preceq \tau(\sigma,i))\}. $$ Then $S_{p+1}\subseteq T$ is quasistrong with respect to $\{y_i: i \leq p+1\}$. It follows that $S = S_{m-1}$ satisfies the requirement. \end{proof} Applying the above, we now show that $(\omega,\cdot,\cdot)$-persistent sets exist. \begin{lemma}\label{lem:persistent-existence} Let $g$ be as in Lemma \ref{lem:Kucera-Gacs}. Define $$ \bar{g}(x,k,0) = x+k+1,\ \bar{g}(x,k,i+1) = g(\bar{g}(x,k,i),1/k). $$ Then for each $i$, \begin{enumerate} \item [(i)] $\bar{g}(x,k,i) \leq (x+k+i+1)^{2^{i}}$ for $x \geq k > 0$; \item [(ii)] If $|X|>\bar{g}(\min X, k, i)$, then $X$ is $(\omega,k,i)$-persistent. \end{enumerate} \end{lemma} \begin{proof} We prove the lemma by induction on $i$. (i). First, $$ \bar{g}(x,k,0) = x+k+1 \leq (x+k+1)^{2^0}. $$ Suppose $i>0$ and let $e$ be the least such that $2^{-e} < 1/k$. Then $e \leq k$; and an easy induction on $i$ shows that $k< \bar{g}(x,k,i)/2$. Hence \begin{align*} \bar{g}(x,k,i) &= g(\bar{g}(x,k,i-1),1/k) \\ &= (\bar{g}(x,k,i-1)-1) \frac{2e + \bar{g}(x,k,i-1)}{2} \mbox{\ \ \ by (\ref{eq:pos-q-strong-f})}\\ &< \bar{g}(x,k,i-1)^{2} \mbox{\ \ \ by $2e\leq 2k<\bar{g}(x,k,i-1)$}\\ &\le ((x+k+i)^{2^{i-1}} )^2< (x+k+i+1)^{2^{i}}. \end{align*} (ii). If $|X| \geq \bar{g}(\min X,k,0) \geq |\min X| + 1$, then $X$ is $\omega$-large by definition, and thus $(\omega,k,0)$-persistent. Suppose that $i>0$ and $|X| \geq \bar{g}(\min X,k,i)$. Let $m = \bar{g}(\min X,k,i-1)$. Let $Y \in [X]^m$ be as in the conclusion of Lemma \ref{lem:Kucera-Gacs}. Note that $m = \bar{g}(\min Y,k,i-1)$ since $\min Y = \min X$. By induction hypothesis, $Y$ is $(\omega,k,i-1)$-persistent. Let $T$ be an $X$-quasistrong tree and let $C: T \cap 2^{\max X} \to k$ be a $k$-coloring for some positive $k$. Choose $c < k$ such that $$ |C^{-1}(c)| \geq |T \cap 2^{\max X}|/k. $$ By Lemma \ref{lem:Kucera-Gacs}, there exists a $Y$-quasistrong tree $S \subseteq T$ whose leaves have extensions in $C^{-1}(c)$. Thus $X$ is $(\omega,k,i)$-persistent. \end{proof} Next, we construct $(\omega^{d+1},\cdot,\cdot)$-persistent sets through the process of stacking. We introduce a notion generalizing that of quasistrongness. \begin{definition} \label{def:stack-qstrong} For a stack $\vec{X} = (X_m: m < n)$, a tree is \emph{$\vec{X}$-quasistrong} if it is $X_m$-quasistrong for each $m < n$. \end{definition} Note that an $\vec{X}$-quasistrong tree $T$ may not be $\bigcup \vec{X}$-quasistrong since an element of $T \cap 2^{\max X_0}$ may not have distinct extensions in $2^{\min X_1}$. However, the lack of splits between $X_i$ and $X_{i+1}$ is the only missing ingredient. \begin{lemma} \label{lem:persistent-stack-naive} Suppose that $i > 0$, $\vec{X} = (X_m: m < n)$ is a stack of $(\alpha,k,i)$-persistent sets and the persistence of each $X_m$ is witnessed by $Y_m$. 
If $T$ is $\vec{X}$-quasistrong and $C: 2^{\max X_{n-1}} \to k$, then there exist a $c < k$ and a tree $S \subseteq T$ such that $S$ is $(Y_m: m < n)$-quasistrong and every leaf of $S$ has an extension in $C^{-1}(c)$.
\end{lemma}
\begin{proof}
We may assume that $n > 0$. Let $X_{-1} = \{0\}$. For $0 \leq m < n$ and $\rho \in 2^{\max X_{m-1}}$, let
$$
T_\rho (=T_{\rho}^m):= \{\sigma \in T: |\sigma| \leq \max X_m\wedge \sigma \text{ is compatible with } \rho\}.
$$
Note that $T_\rho$ is $X_m$-quasistrong. For $0\le m\le n-1$, define in succession colorings $C_m$. Let $C_{n-1} = C$. Suppose that $C_m: 2^{\max X_m} \to k$ is defined and $m-1\ge 0$. As $Y_m$ witnesses the persistence of $X_m$ and each $T_\rho$ with $\rho \in 2^{\max X_{m-1}}$ is $X_m$-quasistrong, we can select $c_\rho < k$ and a tree $U_\rho \subseteq T_\rho$ such that $U_\rho$ is $Y_m$-quasistrong and every leaf of $U_\rho$ has an extension in $C_m^{-1}(c_\rho)$. Define $C_{m-1}: 2^{\max X_{m-1}} \to k$ by $C_{m-1}(\rho) = c_\rho$.

Define a sequence of trees $\{S_m: m<n\}$ as follows: Let $c = c_{\emptyset}$, and let $S_0 = U_{\emptyset}$. For $m < n$, suppose that
\begin{itemize}
\item $S_m$ is a subtree of $T \cap 2^{\leq \max X_m}$;
\item $S_m$ is $(Y_\ell: \ell \leq m)$-quasistrong;
\item every leaf of $S_m$ has an extension in $C_m^{-1}(c)$.
\end{itemize}
Suppose that $m < n-1$. By the definition of $C_m$, for each leaf $\sigma$ of $S_m$, we can select $\rho(\sigma) \in 2^{\max X_{m}}$ such that $\sigma \preceq \rho(\sigma)$ and $c_{\rho(\sigma)} = c$. Let $S_{m+1}$ be the union of $S_m$ and the trees $U_{\rho(\sigma)}$ for $\sigma$ ranging over the leaves of $S_m$. Then $S=S_{n-1}$ is the required tree.
\end{proof}

To obtain an $(\alpha \cdot n, k, i)$-persistent set by the process of stacking, we may need more than $n$-many $(\alpha,k,i)$-persistent sets to succeed. The additional stacks are used to supply the missing splits mentioned after Definition \ref{def:stack-qstrong}. Roughly speaking, we keep the even layers in the stack and discard the odd layers to produce the splits, which reduces the size by a factor of two. The next lemma provides a sufficient condition.

\begin{lemma} \label{lem:persistent-stack}
Suppose that $(X_m: m < 2^i n - 2^i + 1)$ is a stack of $(\alpha,k,i)$-persistent sets. Then $\bigcup_m X_m$ is $(\alpha \cdot n,k,i)$-persistent.
\end{lemma}
\begin{proof}
For $i = 0$, $(\alpha \cdot n,k,i)$-persistence and $\alpha \cdot n$-largeness are the same, and the conclusion holds trivially. Hence assume that $i > 0$. Let $\bar{n} = 2^i n - 2^i + 1$ and $X = \bigcup_{m < \bar{n}} X_m$. For each $m$, let $Y_m$ be a subset of $X_m$ witnessing the persistence of $X_m$. Thus, each $Y_m$ is $(\alpha,k,i-1)$-persistent. By induction hypothesis, $Y = \bigcup_{2\ell < \bar{n}} Y_{2\ell}$ is an $(\alpha \cdot n, k, i-1)$-persistent subset of $X$.

Let $C: 2^{\max X} \to k$ and let $T$ be an $X$-quasistrong tree. Then $T$ is $(X_m: m < \bar{n})$-quasistrong. By Lemma \ref{lem:persistent-stack-naive}, there exist $c < k$ and a tree $S' \subseteq T$ such that $S'$ is $(Y_m: m < \bar{n})$-quasistrong and every leaf of $S'$ has an extension in $C^{-1}(c)$.

Let $S_0 = S' \cap 2^{\leq \max Y_0}$, which is $Y_0$-quasistrong. Suppose that $2m < \bar{n}$ and $S_m$ is a $\bigcup_{\ell \leq m} Y_{2\ell}$-quasistrong subtree of $S' \cap 2^{\leq \max Y_{2m}}$. If $2m = \bar{n}-1$ then let $S = S_m$. Suppose that $2m < \bar{n}-1$. As $\bar{n}$ is odd, $2m + 1 < \bar{n}-1$ as well.
If $\sigma$ is a leaf of $S_m$, then $|\sigma| = \max Y_{2m}$. For each such $\sigma$, select two distinct extensions of $\sigma$ in $S' \cap 2^{\max Y_{2m+1}}$, and denote them by $\rho(\sigma,0)$ and $\rho(\sigma,1)$. The $\rho(\sigma,j)$'s exist because $S'$ is $Y_{2m+1}$-quasistrong. Let
\[
S_{m+1} = \{\tau \in S': |\tau| \leq \max Y_{2m+2}, \tau \text{ is compatible with some } \rho(\sigma,j)\}.
\]
Then $S = S_{(\bar{n}-1)/2}$ is the desired $Y$-quasistrong subtree.
\end{proof}

The lemma below allows us to construct $(\omega^{d+1},\cdot, \cdot)$-persistent sets by stacking a sufficient number of $(\omega^{d},\cdot, \cdot)$-persistent sets.

\begin{lemma} \label{lem:persistent-induction}
If $X$ is $(\omega^d \cdot (\min X + 1), k, i)$-persistent, then it is $(\omega^{d+1},k,i)$-persistent.
\end{lemma}
\begin{proof}
If $X$ is $(\omega^d \cdot (\min X + 1), k, 0)$-persistent, then $X$ is $\omega^d \cdot (\min X + 1)$-large and thus $\omega^{d+1}$-large. So $X$ is $(\omega^{d+1},k,0)$-persistent.

Suppose that $i > 0$ and $X$ is $(\omega^d \cdot (\min X + 1), k, i)$-persistent. Let $Y \subseteq X$ witness the persistence of $X$. Then $Y$ is $(\omega^d \cdot (\min X+1),k,i-1)$-persistent. Let
$$
Z = \{\min X\} \cup (Y - \{\min Y\}).
$$
By Proposition \ref{fac:persistent}, $Z$ is $(\omega^d \cdot (\min X+1),k,i-1)$-persistent as well. By the induction hypothesis and the fact that $\min Z = \min X$, $Z$ is $(\omega^{d+1},k,i-1)$-persistent.

Let $T$ be an $X$-quasistrong tree and $C: T \cap 2^{\max X} \to k$. As $Y$ witnesses the persistence of $X$, there exist a $c < k$ and an $S$ such that $S$ is a $Y$-quasistrong subtree of $T$ and every leaf of $S$ has an extension in $C^{-1}(c)$. Let
$$
y_1 = \min (Y - \{\min Y\}).
$$
By the $Y$-quasistrongness of $S$, we can select two distinct elements in $S \cap 2^{y_1}$, say $\sigma_0$ and $\sigma_1$, which extend a string in $S \cap 2^{\min Y}$. Then $\sigma_0$ and $\sigma_1$ extend a common string in $S \cap 2^{\min X}$ as well. Let
$$
U = \{\tau \in S: \tau \text{ is compatible with one of } \sigma_0, \sigma_1\}.
$$
Then $U$ is $Z$-quasistrong and has all its leaves extended by elements of $C^{-1}(c)$. It follows that $Z$ witnesses $X$ being $(\omega^{d+1},k,i)$-persistent.
\end{proof}

The following, which shows a connection between $\alpha$-largeness and persistence, is of independent interest. It will not be used in our subsequent discussion.

\begin{corollary}\label{cor:persistent-largeness}
Suppose that $d > 0$. If $X$ is $\omega^{2d+1}$-large and $\min X > \max\{k,i,2\}$, then $X$ is $(\omega^d,k,i)$-persistent.
\end{corollary}
\begin{proof}
Suppose that $d = 1$, $X$ is $\omega^{3}$-large and $x_0 = \min X > \max\{k,i,2\}$. Then
\[
X = \{x_0\} \cup X_1 \cup X_2 \cup X_3,
\]
where $x_0 < X_1 < X_2 < X_3$ and the $X_j$'s are $\omega^2$-large. For $j = 1,2,3$, let $x_j = \min X_j$. Straightforward calculations show that $x_2 > 2^{x_1}$ and
\[
|X_2 \cup X_3| > 2^{(x_2+2) 2^{x_2}} > (4x_2)^{2^{x_2}} \geq \bar{g}(x_2,k,i),
\]
where $\bar{g}$ is as in Lemma \ref{lem:persistent-existence}. By Lemma \ref{lem:persistent-existence}, $X_2 \cup X_3$ is $(\omega,k,i)$-persistent. Thus $X$ is $(\omega,k,i)$-persistent as well, by Proposition \ref{fac:persistent}.

Now suppose that $d > 1$, $X$ is $\omega^{2d + 1}$-large and $x_0 = \min X > \max\{k,i,2\}$. Then $X = X_0 \cup X_1$, where $X_0$ and $X_1$ are $\omega^{2d}$-large sets such that $X_0 < X_1$. Let $n = 2^i (\min X + 1) - 2^i + 1$.
As $\min X_0 = \min X \geq i$, $\max X_0 \geq n$ by an easy calculation, using that $2d > 2$. Then $X_1$ is the union of the following sets
$$
\{\min X_1\} < Y_0 < \ldots < Y_{n-1} < Y_{n},
$$
where each $Y_m$ is $\omega^{2d-1}$-large. Let $f$ be the map sending every $j+1$-th element of $X_1$ to the $j$-th element of $X$. For $m < n$, each $f(Y_m)$ is $\omega^{2d-1}$-large with $\min f(Y_m) > \max\{k,i,2\}$, and thus $(\omega^{d-1},k,i)$-persistent; and $\min f(Y_0) = \min X_0 = \min X$. By Lemma \ref{lem:persistent-stack}, $Z = \bigcup_{m < n} f(Y_m)$ is $(\omega^{d-1} \cdot (\min Z + 1), k, i)$-persistent. By Lemma \ref{lem:persistent-induction}, $Z$, and thus $X$, is $(\omega^{d},k,i)$-persistent.
\end{proof}

To incorporate the property of persistence into the construction of prehomogeneous trees, we introduce another persistence notion similar to that of $\alpha$-largeness. This notion will handle colorings of an entire tree, instead of simply its leaves as in Definition \ref{def:persistent}.

\begin{definition} \label{def:superpersistent}
A set $X$ is \emph{$(\alpha,k,i)$-superpersistent}, if for any collection $\{T_\rho: \rho \in 2^{\min X}\}$ of $X$-quasistrong trees and any $C: 2^{<\mathbb{N}} \to k$, there exist $Y \subseteq X$ and $S_\rho\subseteq T_\rho$ for each $\rho$ such that $Y$ is $(\alpha,k,i)$-persistent, and $S_\rho$ is $Y$-quasistrong as well as $(C,Y)$-prehomogeneous.
\end{definition}

Observe that, apart from the obvious difference that superpersistence of $X$ concerns families of trees $\{T_\rho: \rho\in 2^{\min X}\}$ while persistence concerns only single trees $T$, there is the additional point that for the former one considers colorings of each $T_\rho$ rather than just its leaves, which is the case for the latter, i.e.~the leaves of $T$.

A superpersistent set shares properties similar to those given in Proposition \ref{fac:persistent} for persistent sets. We state them below without proof:

\begin{proposition} \label{fac:superpersistent} \
\begin{enumerate}
\item [(i)] If $k' \geq k$, $i' \geq i$ and $X$ is $(\alpha,k',i')$-superpersistent, then $X$ is $(\alpha,k,i)$-superpersistent.
\item [(ii)] Every $(\alpha,k,i)$-superpersistent set is $\alpha$-large.
\item [(iii)] Every superset of an $(\alpha,k,i)$-superpersistent set is also $(\alpha,k,i)$-superpersistent.
\item [(iv)] Suppose that $X$ is $(\alpha,k,i)$-superpersistent, $f: X \to Y$ is injective and $f(x) \leq x$ for all $x \in X$. Then $Y$ is also $(\alpha,k,i)$-superpersistent.
\end{enumerate}
\end{proposition}

We begin proving the existence of $(\omega,\cdot, \cdot)$-superpersistent sets with the following two lemmata.

\begin{lemma} \label{lem:intersection}
Let $Y\subset\mathbb{N}$, let $\epsilon, \delta\in (0,1)$ be rational, let $S$ be a finite set and let $\{R_y: y\in Y\}$ be a family of subsets of $S$ with $|R_y| \geq \delta |S|$. For $p \in [S]^2$, let $Y_p = \{y \in Y: p \in [R_y]^2\}$. Let $P = \{p \in [S]^2: |Y_p| \geq \epsilon |Y|\}$. Then
\[
|P| > \frac{(\delta |S| - 1)^2 - \epsilon |S|^2}{2(1-\epsilon)}.
\]
In particular, if $(\delta - \sqrt{\epsilon}) |S| > 1$ then there is a $p$ such that $|Y_p| \geq \epsilon|Y|$.
\end{lemma}
\begin{proof}
We use a counting argument. Consider the set
\[
Q = \{(p,y): p \in [R_y]^2\}.
\]
On the one hand, as $|R_y| \geq \delta |S|$, there are at least $2^{-1}\delta |S| (\delta |S| - 1)$ many elements in each $[R_y]^2$, thus
\[
|Q| \geq 2^{-1} \delta |S| (\delta |S| - 1) |Y| > 2^{-1} (\delta |S| - 1)^2 |Y|.
\] On the other hand, $Q=\{(p,y): p\in [R_y]^2\}=\{(p,y): y\in Y_p\}$, thus \begin{align*} |Q| = \sum_{p \in P} |Y_p| + \sum_{p \not\in P} |Y_p| < |P| |Y| + (2^{-1}|S|^2 - |P|) \epsilon |Y|. \end{align*} Hence \[ |P| > \frac{(\delta |S| - 1)^2 - \epsilon |S|^2}{2(1-\epsilon)}. \] In particular, when $(\delta - \sqrt{\epsilon}) |S| > 1$, $P$ is nonempty, the result follows. \end{proof} Fix a subset $X$ of $\mathbb{N}$. \begin{lemma}\label{lem:pos-antichains} There is a primitive recursive function $h: \mathbb{N}^2 \times \mathbb{Q} \to \mathbb{N}$ such that for any $Y$, $\{T_k: k < \ell\}$ and $\{A_{k,y}: k < \ell, y \in Y\}$ satisfying \begin{enumerate} \item [(a)] $Y$ is a (finite) subset of $X$ and $|Y| \geq h(\ell,m,\delta)$; \item [(b)] $\{T_k: k < \ell\}$ is a family of $X$-quasistrong trees and $T_k \cap 2^{\min X}$ has a single element $\rho_k$; \item [(c)] $A_{k,y}\subseteq T_k \cap 2^y$ and $|A_{k,y}| \geq \delta |T_k \cap 2^y|$, \end{enumerate} there exist a $Z \in [X]^{m}$ and a family of trees $\{S_k: k < \ell\}$ such that \begin{enumerate} \item [(i)] $\min Z = \min X$; \item [(ii)] $S_k$ is a $Z$-quasistrong subtree of $T_k$, and \item [(iii)] if $(x,z) \in [Z]^2$ and $\sigma \in S_k \cap 2^z$, then $\sigma \upharpoonright y \in A_{k,y}$ for some $y \in Y \cap [x,z]$. \end{enumerate} \end{lemma} In fact the most important property of $h$ that we will need is the following: for all $m, \ell>0$, $0<\delta\le 1$ rational, \begin{equation}\label{eq:pos-antichains} h(\ell,m,\delta) \geq h(2\ell, m-1, \delta/4) 2^{3 \ell (e+2)} + e + 3, \end{equation} where $e$ is the least such that $2^{-e} < \delta$. \vskip.2in \noindent {\it Remark.} The informal idea is that $h(\ell,m,\delta)$ offers enough space to accommodate a desired $Z$ of size $m$. If $m-1$-many members of $Z$ are determined, then in the course of obtaining the $m^{\rm th}$-member of $Z$, the number $\ell$ of leaves of $S_k$ is doubled to $2\ell$ and the density $\delta$ may be reduced by some factor, say $1/4$. In the above inequality, the numbers $2^{3 \ell (e+2)}$ and $e+3$ come from combinatorial calculations that show up in the proof below. There exist primitive recursive functions satisfying \eqref{eq:pos-antichains}. For instance, we may take \begin{equation}\label{eq:h} h(\ell,m,\delta) = \exp\{e \ell \exp(5m)\}, \end{equation} where $e$ is the least integer such that $2^{-e} < \delta$. If $\ell$ and $m$ are positive, then \begin{align*} h(2\ell, m-1, \delta/4) 2^{3 \ell (e+2)} &= \exp\{(e+2) \ell \exp(5m-4) + 3 (e+2) \ell\} \\ &\leq \exp\{e \ell (\exp(5m-2) + 9)\} \\ &\leq h(\ell,m,\delta) - (e + 3). \end{align*} Hence $h$ satisfies \eqref{eq:pos-antichains}. \begin{proof} Suppose that $Y$, $\{T_k\}$ and $\{A_{k,y}\}$ satisfy (a)--(c). Then $|Y| \geq h(\ell,m,\delta) > e+3$. Let $Y_0$ be obtained by removing the first $e+3$ many elements from $Y$, and let $y_0 = \min Y_0$. Clearly, by \eqref{eq:pos-antichains} \begin{equation}\label{eq:pos-antichains-Y0} |Y_0| \geq h(\ell, m, \delta)-(e+3)\geq h(2\ell,m-1,\delta/4) 2^{3\ell(e+2)}. \end{equation} Note that by the quasistrongness of $T_k$, \begin{equation}\label{eq:pos-antichains-w} y \in Y_0 \to |T_k \cap 2^{y}| \geq 2^{e+3}. \end{equation} For each $y \in Y_0$, let $$ B_{k,y} = \{\sigma \in T_k \cap 2^y: \exists x \in Y_0 \cap [0,y] (\sigma \upharpoonright x \in A_{k,x})\}. 
$$
We chop the (density) interval $[0,1]$ into $2^{e+2}$ many subintervals and consider the interval $[\frac{v}{2^{e+2}}, \frac{v+1}{2^{e+2}})$ that $\frac{|B_{k,y}|}{|T_k \cap 2^y|}$ falls into, and define:
$$
f_y: \ell \to 2^{e+2}, \ f_y(k) = \max\left\{v < 2^{e+2}: |B_{k,y}| \geq \frac{v}{2^{e+2}} |T_k \cap 2^y| \right\}.
$$
Since there are $2^{\ell(e+2)}$ many such $f_y$'s, by \eqref{eq:pos-antichains-Y0} and the Pigeonhole Principle we can select $Y_1 \subseteq Y_0$ such that
\begin{equation}\label{eq:pos-antichains-Y1-size}
|Y_1| \geq h(2\ell, m-1, \delta/4) 2^{2\ell(e+2)},
\end{equation}
and for all $y,y' \in Y_1$, $f_{y} = f_{y'}$.

Let $y_1 = \min Y_1$ and $B_k = B_{k,y_1}$. Note that for each $y \in Y_1$ and $k < \ell$,
\begin{equation}\label{eq:pos-antichains-Y1}
|\{\sigma \in A_{k,y}: \sigma \upharpoonright y_1 \in B_{k}\}| > (\delta - 2^{-e-2}) |T_k \cap 2^y| > \frac{3\delta}{4} |T_k \cap 2^y|.
\end{equation}
The first inequality uses the fact that
\[
|\{\sigma\in A_{k,y}: \sigma \upharpoonright y_1 \not\in B_{k}\}|<\frac{1}{2^{e+2}}|T_k \cap 2^y|,
\]
since otherwise, the number $\frac{|B_{k,y}|}{|T_k \cap 2^y|}$ would fall into some interval $[\frac{v'}{2^{e+2}}, \frac{v'+1}{2^{e+2}})$ with $v'>v$, contradicting the definition of $Y_1$.

For each $\zeta \in T_k \cap 2^{y_1}$, let
$$
T_{k,\zeta} = \{\sigma \in T_k: \sigma \text{ is compatible with } \zeta\},
$$
and let
$$
A_{k,\zeta,y} = A_{k,y} \cap T_{k,\zeta}.
$$
For $k < \ell$ and $y \in Y_1$, let
\[
D_{k,y} = \left\{ \zeta \in B_k: |A_{k,\zeta,y}| \geq \frac{\delta}{4} |T_{k,\zeta} \cap 2^y| \right\}.
\]
Then for $y \in Y_1$,
\begin{align*}
|\{\sigma \in A_{k,y}: \sigma \upharpoonright y_1 \in B_{k}\}| &= \big| \bigcup_{\zeta \in D_{k,y}} A_{k,\zeta,y} \big| + \big| \bigcup_{\zeta \in B_k - D_{k,y}} A_{k,\zeta,y} \big| \\
&< \left( \frac{|D_{k,y}|}{|T_k \cap 2^{y_1}|} + \frac{\delta}{4} \right) |T_k \cap 2^y|.
\end{align*}
Putting this together with \eqref{eq:pos-antichains-Y1}, we obtain
\begin{equation}\label{eq:pos-antichains-Dky-size}
|D_{k,y}| \geq 2^{-1}\delta |T_k \cap 2^{y_1}|.
\end{equation}

By \eqref{eq:pos-antichains-w} and \eqref{eq:pos-antichains-Dky-size}, we can apply Lemma \ref{lem:intersection} to $Y_1$, $\epsilon=\frac{\delta^2}{4^2}$, $\frac{\delta}{2}$, $S = T_0 \cap 2^{y_1}$ and the family $(D_{0,y}: y \in Y_1)$. Since $(\frac{\delta}{2} - \sqrt{\epsilon})|S|>1$, because $|S| \geq 2^{e+3}$ by \eqref{eq:pos-antichains-w} and $2^{-e} < \delta$, we obtain $p=(\zeta(0,0),\zeta(0,1))$ and $Z_0$, which is the $Y_p$ in Lemma \ref{lem:intersection}, such that $\zeta(0,0)$ and $\zeta(0,1)$ are distinct elements of $T_0 \cap 2^{y_1}$, $\zeta(0,i) \in D_{0,y}$ for $y \in Z_0 \subseteq Y_1$ and $|Z_0| \geq 2^{-4} \delta^2 |Y_1|$.

By $(\ell-1)$-many inductive applications of Lemma \ref{lem:intersection}, from $T_1$ to $T_{\ell-1}$ consecutively, we have $Z_\ell \subseteq Z_0 \subseteq Y_1$ and $(\zeta(k,i): k < \ell, i < 2)$ such that $\zeta(k,0)$ and $\zeta(k,1)$ are distinct elements of $T_k \cap 2^{y_1}$, $\zeta(k,i) \in D_{k,y}$ for $y \in Z_\ell$ and $|Z_\ell| \geq 2^{-4\ell} \delta^{2\ell} |Y_1|$.

Let $X' = X \cap [y_1,\infty)$, $Y' = Z_\ell$. By \eqref{eq:pos-antichains-Y1-size} and $2^{-e} < \delta$,
\[
|Y'| \geq h(2\ell,m-1,\delta/4).
\]
Applying the induction hypothesis to $X',Y'$, the $T_{k,\zeta(k,i)}$'s and $A_{k,\zeta(k,i),y}$'s, we conclude that there exist $Z' \in [X']^{m-1}$ and $S_{k,\zeta(k,i)}$ ($i= 0,1$) such that
\begin{itemize}
\item $\min Z' = \min X' = y_1$;
\item $S_{k,\zeta(k,i)}$ is a $Z'$-quasistrong subtree of $T_{k,\zeta(k,i)}$;
\item If $(x,z) \in [Z']^2$ and $\sigma \in S_{k,\zeta(k,i)}$ then $\sigma \upharpoonright y \in A_{k,\zeta(k,i),y}$ for some $y \in Y' \cap [x,z]$.
\end{itemize}
Finally, let $Z = \{\min X\} \cup Z' \in [X]^{m}$, and for each $k < \ell$, let
$$
S_k = \{\sigma: \exists i < 2 (\sigma \prec \zeta(k,i))\} \cup S_{k,\zeta(k,0)} \cup S_{k,\zeta(k,1)}.
$$
It is straightforward to verify that $Z$ and the $S_k$'s satisfy conditions (ii) and (iii).
\end{proof}

\begin{lemma} \label{lem:prehom-multiple}
There is a primitive recursive function $\bar{h}: \mathbb{N}^3 \to \mathbb{N}$ such that for all $X$, $(T_\rho: \rho \in 2^{\min X})$ and $C$ such that
\begin{enumerate}
\item[(a)] $X$ is a finite set with $|X| \geq \bar{h}(\min X, n, k)$,
\item[(b)] $(T_\rho: \rho \in 2^{\min X})$ is a collection of trees where each $T_\rho$ is $X$-quasistrong and compatible with $\rho$,
\item[(c)] $C: \bigcup_\rho T_\rho \to k$ is a coloring,
\end{enumerate}
there exist a $Z \in [X]^n$ and a family $(S_\rho \subseteq T_\rho:\rho \in 2^{\min X})$ satisfying
\begin{enumerate}
\item [(i)] $\min Z = \min X$;
\item [(ii)] Each $S_\rho$ is $Z$-quasistrong and $(C,Z)$-prehomogeneous.
\end{enumerate}
\end{lemma}
\begin{proof}
We verify that
$$
\bar{h}(x,n,k) = h(2^x,n,1/k) k^{2^x}
$$
works. For each $x \in X$, define $f_x: 2^{\min X} \to k$ as follows:
$$
f_x(\rho) = \min \left\{c < k: |T_\rho \cap 2^x \cap C^{-1}(c)| \geq \frac{|T_\rho \cap 2^x|}{k} \right\}.
$$
By (a) and the Pigeonhole Principle, we can select $Y \subseteq X$ such that $|Y| \geq h(2^{\min X},n,1/k)$ and $f_x = f_y$ for all $(x,y) \in [Y]^2$. Let $f = f_y$ for any $y \in Y$. For each $\rho \in 2^{\min X}$ and $y \in Y$, let
$$
A_{\rho,y} = T_\rho \cap 2^y \cap C^{-1}(f(\rho)).
$$
It is easy to verify that $X, Y$, and the $T_\rho$'s as well as the $A_{\rho,y}$'s satisfy the hypothesis of Lemma \ref{lem:pos-antichains} for $\ell = 2^{\min X}$, $m = n$ and $\delta = 1/k$. Hence there exist $Z \in [X]^{n}$ and $(S_\rho: \rho \in 2^{\min X})$ satisfying the conclusions of Lemma \ref{lem:pos-antichains}. Then $Z$ and the $S_\rho$'s are as required.
\end{proof}

We are now ready to show the existence of $(\omega,\cdot,\cdot)$-superpersistent sets.

\begin{lemma}\label{lem:superpersistent-omega}
If $|X| \geq \bar{h}(\min X, \bar{g}(\min X, k, i), k)$ then $X$ is $(\omega,k,i)$-superpersistent.
\end{lemma}
\begin{proof}
This is a direct consequence of Lemmata \ref{lem:persistent-existence} and \ref{lem:prehom-multiple}.
\end{proof}

Lemmata \ref{lem:persistent-existence} and \ref{lem:superpersistent-omega} imply the following.

\begin{corollary}\label{cor:superpersistent-omega}
Suppose that $k > 0$, $X$ is $\omega^{3}$-large and $\min X > \max\{k, i, 4\}$. Then $X$ is $(\omega,k,i)$-superpersistent.
\end{corollary}
\begin{proof}
Let $k > 0$ and $X$ be $\omega^3$-large with $x = \min X > \max\{k,i,4\}$. Let $e$ be the least such that $2^{-e} < 1/k$. So $e \leq k$. Clearly, $\bar{h}$ is increasing in the second variable. By Lemma \ref{lem:persistent-existence},
\begin{align*}
\bar{h}(x, \bar{g}(x, k, i), k) &\leq h(2^x, (x+k+i+1)^{2^i}, 1/k) k^{2^x} \\
& \leq \exp \{k 2^x \exp(5(x+k+i+1)^{2^i}) + k 2^x\}.
\end{align*}
Since $x > \max\{k, i, 4\}$,
\begin{align*}
k 2^x \exp(5(x+k+i+1)^{2^i}) + k 2^x &< x \exp((3x)^{2^x} + x) \\
&< \exp^2 (x 2^x) < \exp^4(x).
\end{align*}
Thus $\bar{h}(x,\bar{g}(x,k,i),k) < \exp^5(x)$. By Fact \ref{fac:omega3-large} and $X$ being $\omega^3$-large,
\[
|X| \geq \exp^x(x) > \bar{h}(x,\bar{g}(x,k,i),k).
\]
Hence $X$ is $(\omega,k,i)$-superpersistent by Lemma \ref{lem:superpersistent-omega}.
\end{proof}

Next, we climb up the ladder of $(\omega^e,\cdot,\cdot)$-superpersistence by the process of stacking. We need the following variation of prehomogeneity regarding stacks.

\begin{definition} \label{def:stack-prehomogeneous}
Let $C$ be a coloring on a tree $T$ and $\vec{X} = (X_m: m < n)$ be a stack. We say that $T$ is \emph{$(C,\vec{X})$-prehomogeneous}, if it is $(C,X_m)$-prehomogeneous for each $m < n$.
\end{definition}

Note that a $(C,\vec{X})$-prehomogeneous tree $T$ may not be $(C,\bigcup \vec{X})$-prehomogeneous, since $T$ could be $(C,X_0)$-prehomogeneous with color $c_0$, but $(C,X_1)$-prehomogeneous with a different color $c_1$.

\begin{lemma} \label{lem:superpersistent-stack-naive}
Suppose that
\begin{enumerate}
\item[(a)] $\vec{X} = (X_m: m < n)$ is a stack of $(\alpha,k,i+n-1)$-superpersistent sets;
\item[(b)] $\{T_\rho: \rho \in 2^{\min X_0}\}$ is a collection of $\vec{X}$-quasistrong trees where each $T_\rho$ is compatible with $\rho$;
\item[(c)] $C$ is a $k$-coloring on $\bigcup_\rho T_\rho$.
\end{enumerate}
Then there exist a stack $\vec{Y} = (Y_m: m < n)$ and a collection $\{S_\rho: \rho \in 2^{\min X_0}\}$ such that
\begin{enumerate}
\item [(i)] $Y_m$ is an $(\alpha,k,i)$-persistent subset of $X_m$,
\item [(ii)] $S_\rho$ is a $\vec{Y}$-quasistrong and $(C,\vec{Y})$-prehomogeneous subtree of $T_\rho$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $x_0 = \min X_0$. If $m < n$ and $\zeta \in T_{\rho} \cap 2^{\min X_m}$, let
$$
\hat{T}_\zeta = \{\eta \in T_{\rho}: |\eta| \leq \max X_m \wedge \eta \text{ is compatible with } \zeta\}.
$$
As $X_m$ is $(\alpha,k,i+n-1)$-superpersistent for all $m < n$, select $Y_m^{0}$ and $U_\zeta$, where $\zeta \in 2^{\min X_m}$, such that
\begin{itemize}
\item $Y_m^{0}$ is an $(\alpha,k,i+n-1)$-persistent subset of $X_m$;
\item $U_\zeta$ is a $Y_m^0$-quasistrong subtree of $\hat{T}_\zeta$ and
\item $U_\zeta$ is $(C,Y_m^0)$-prehomogeneous with color $c_\zeta$.
\end{itemize}
For $\rho$ with length $x_0$, let $S_\rho^0 = U_\rho$ and $c_\rho^0 = c_\rho$.

Suppose that $m < n$ and we have the following data:
\begin{itemize}
\item $Y_\ell^{m}$ is an $(\alpha,k,i+n-1-m)$-persistent subset of $X_\ell$;
\item $\vec{Y}_{\leq m}^m = (Y_\ell^m: \ell \leq m)$;
\item For each $\rho \in 2^{x_0}$, $S_\rho^m$ is a subtree of $T_\rho \cap 2^{\leq \max X_m}$;
\item $S_\rho^m$ is $\vec{Y}_{\leq m}^m$-quasistrong and $(C,\vec{Y}_{\leq m}^m)$-prehomogeneous.
\end{itemize}
If $m = n-1$, let $Y_\ell = Y_\ell^{n-1}$ and $S_\rho = S_\rho^{n-1}$. Then the $Y_\ell$'s and $S_\rho$'s are as required.

Suppose that $m < n-1$. We construct the $Y_\ell^{m+1}$'s and $S_\rho^{m+1}$'s as follows. Let $Y_\ell^{m+1}$ witness the $(\alpha,k,i+n-1-m)$-persistence of $Y_\ell^m$. So $Y_\ell^{m+1}$ is an $(\alpha,k,i+n-1-m-1)$-persistent subset of $X_\ell$. For $\sigma$ a leaf of $S_\rho^m$, select $\zeta(\sigma)$ such that $\sigma \prec \zeta(\sigma) \in T_\rho \cap 2^{\min X_{m+1}}$. Let
$$
C_m(\sigma) = c_{\zeta(\sigma)}.
$$
Then $C_m$ is a $k$-coloring on the leaves of the $S_\rho^m$'s.
By Lemma \ref{lem:persistent-stack-naive}, for each $S_\rho^m$, there exist $c_{\rho}^m < k$ and $\hat{S}_\rho^m$ such that $\hat{S}_\rho^m$ is a subtree of $S_\rho^m$, $\hat{S}_\rho^m$ is $(Y_\ell^{m+1}: \ell \leq m)$-quasistrong, and every leaf $\tau$ of $\hat{S}_\rho^m$ has an extension $\sigma(\tau)$ in $C_m^{-1}(c_\rho^m)$. By the definition of $C_m$, each $\sigma(\tau)$ corresponds to $\zeta(\sigma(\tau))$ such that $|\zeta(\sigma(\tau))| = \min X_{m+1}$ and $c_{\zeta(\sigma(\tau))} = c_\rho^m$. Let $S_\rho^{m+1}$ be the union of $\hat{S}_\rho^m$ and $U_{\zeta(\sigma(\tau))}$'s, where $\tau$ ranges over the leaves of $\hat{S}_\rho^m$. Let $\vec{Y}_{\leq m+1}^{m+1} = (Y_\ell^{m+1}: \ell \leq m+1)$. As each $U_{\zeta(\sigma(\tau))}$ is $Y_{m+1}^{m+1}$-quasistrong, $S_\rho^{m+1}$ is $\vec{Y}_{\leq m+1}^{m+1}$-quasistrong; and as each $U_{\zeta(\sigma(\tau))}$ is $(C,Y_{m+1}^{m+1})$-prehomogeneous with color $c_\rho^m$, $S_\rho^{m+1}$ is $(C,\vec{Y}_{\leq m+1}^{m+1})$-prehomogeneous. \end{proof} With the above lemma and the pigeonhole principle, we can construct $(\alpha \cdot n,\cdot, \cdot)$-superpersistent sets by stacking $(\alpha, \cdot,\cdot)$-superpersistent sets. \begin{lemma}\label{lem:superpersistent-stack} Suppose that $\hat{n} = (2^{i+1} n - 2^{i+1}) k^{2^{x_0}} + 1$ and $j = i + \hat{n} - 1$. If $(X_m: m < \hat{n})$ is a stack of $(\alpha,k,j)$-superpersistent sets and $x_0 = \min X_0$, then $X = \bigcup_{m < \hat{n}} X_m$ is $(\alpha \cdot n, k, i)$-superpersistent. \end{lemma} \begin{proof} Fix a family $\{T_\rho: \rho \in 2^{x_0}\}$ of $X$-quasistrong trees and let $C$ be a $k$-coloring. Then each $T_\rho$ is $(X_m: m < \hat{n})$-quasistrong. By Lemma \ref{lem:superpersistent-stack-naive}, there exist $\vec{Y} = (Y_m: m < \hat{n})$ and $\hat{S}_\rho$'s such that $Y_m$ is a $(\alpha,k,i)$-persistent subset of $X_m$, $\hat{S}_\rho$ is a $\vec{Y}$-quasistrong subtree of $T_\rho$, and $\hat{S}_\rho$ is $(C,\vec{Y})$-prehomogeneous. Let $c_\rho^m$ be such that if $(x,z) \in [Y_m]^2$ and $\sigma \in \hat{S}_\rho \cap 2^z$ then $C(\sigma \upharpoonright y) = c_\rho^m$ for some $y \in [x,z]$. For each $m < \hat{n}$ and $\rho\in 2^{x_0}$, we have $c_\rho^m \in k$. Hence there exist $L \subseteq [0,\hat{n}-1]$ and $\{c_\rho: \rho \in 2^{x_0}\}$ such that $|L| = 2^{i+1} n - 2^{i+1} + 1$ and $c_\rho^\ell = c_\rho$ for each $\ell \in L$ and $\rho \in 2^{x_0}$. List the members of $L$ as $$ m_0 < m_1 < \ldots < m_{2^{i+1} n - 2^{i+1}}. $$ Let $Z_\ell = Y_{m_{2\ell}}$. By Lemma \ref{lem:persistent-stack}, $Z = \bigcup_{\ell \leq 2^i n - 2^i} Z_\ell$ is $(\alpha \cdot n,k,i)$-persistent. Now, it is easy to find for each $\rho$ a tree $S_\rho \subseteq \hat{S}_\rho$, which is $Z$-quasistrong and $(C,Z)$-prehomogeneous with color $c_\rho$. \end{proof} Lemma \ref{lem:superpersistent-induction} below is an analog of Lemma \ref{lem:persistent-induction}. \begin{lemma}\label{lem:superpersistent-induction} If $X$ is $(\omega^e \cdot (\min X + 1), k, i)$-superpersistent, then $X$ is $(\omega^{e+1},k,i)$-superpersistent. \end{lemma} \begin{proof} Let $\{T_\rho: \rho \in 2^{\min X}\}$ be a family of $X$-quasistrong trees such that $T_\rho \cap 2^{\min X} = \{\rho\}$, and let $C$ be a $k$-coloring on $\bigcup_\rho T_\rho $. 
By the $(\omega^e \cdot (\min X + 1), k, i)$-superpersistence of $X$, there exist $Z \subseteq X$ and $\{\hat{S}_\rho: \rho \in 2^{\min X}\}$ such that $Z$ is $(\omega^e \cdot (\min X + 1), k, i)$-persistent, and each $\hat{S}_\rho$ is a $Z$-quasistrong and $(C,Z)$-prehomogeneous subtree of $T_\rho$. For each $\rho$, select $\sigma(\rho) \in \hat{S}_\rho \cap 2^{\min Z}$. Let $S_\rho$ be the subtree of $\hat{S}_\rho$ consisting of elements compatible with $\sigma(\rho)$. Let
$$
Y = \{\min X\} \cup (Z - \{\min Z\}).
$$
By Proposition \ref{fac:persistent}, $Y$ is $(\omega^e \cdot (\min X + 1), k, i)$-persistent as well. By Lemma \ref{lem:persistent-induction} and the fact that $\min Y = \min X$, $Y$ is $(\omega^{e+1},k,i)$-persistent. Moreover, the $S_\rho$'s are $Y$-quasistrong and $(C,Y)$-prehomogeneous. Thus $X$ is $(\omega^{e+1},k,i)$-superpersistent.
\end{proof}

We now establish a connection between $\alpha$-largeness and superpersistence, similar to that shown in Corollary \ref{cor:persistent-largeness}.

\begin{lemma}\label{lem:superpersistent-large}
If $d > 0$ and $X$ is an $\omega^{2d+1}$-large set with $\min X > \max\{4,k,i\}$, then $X$ is $(\omega^{d},k,i)$-superpersistent.
\end{lemma}
\begin{proof}
We prove the lemma by induction on $d$. The case where $d = 1$ follows from Corollary \ref{cor:superpersistent-omega}.

Suppose that $d > 1$ and $X$ is $\omega^{2d+1}$-large. Then $X = X_0 \cup X_1$, where $X_0 < X_1$ and both $X_i$'s are $\omega^{2d}$-large. Let $x_0 = \min X$, $n = x_0 + 1$, $\hat{n} = (2^{i+1} n - 2^{i+1}) k^{2^{x_0}} + 1$, and $j = i + \hat{n} - 1$. By easy calculations and Fact \ref{fac:omega3-large},
$$
\min X_1 > \max\{k,\hat{n},j\}.
$$
So $X_1 - \{\min X_1\}$ is a union of a stack $(Y_m: m < \hat{n})$ of $\omega^{2d-1}$-large sets. By the induction hypothesis, each $Y_m$ is $(\omega^{d-1},k,j)$-superpersistent. By Lemma \ref{lem:superpersistent-stack}, $X_1 - \{\min X_1\}$ is $(\omega^{d-1} \cdot (x_0 + 1),k,i)$-superpersistent. By Proposition \ref{fac:superpersistent} and Lemma \ref{lem:superpersistent-induction}, $X$ is $(\omega^{d},k,i)$-superpersistent.
\end{proof}

Theorem \ref{thm:large-prehomogeneous-trees} follows immediately from Lemma \ref{lem:superpersistent-large}.

\section{$\Pi^0_3$-Conservation}\label{s:conservation}

In this section we prove the following conservation theorem for $\mathsf{TT}^1$. Unless otherwise indicated, $\mathbb{N}$ will denote the set of standard natural numbers.

\begin{theorem}\label{thm:conservation}
$\mathsf{WKL}_0 + \mathsf{TT}^1$ is $\Pi^0_3$-conservative over $\mathsf{RCA}_0$.
\end{theorem}
\begin{proof}
Suppose that $\mathsf{RCA}_0 \not\vdash \forall x \exists y \forall z R(x,y,z)$, where $R$ is a $\Sigma^0_0$-predicate. We prove that $\mathsf{RCA}_0 + \mathsf{WKL}_0 + \mathsf{TT}^1$ does not imply $\forall x \exists y \forall z R(x,y,z)$ either, by exhibiting a model of $\mathsf{WKL}_0 + \mathsf{TT}^1$ which does not satisfy $\forall x \exists y \forall z R(x,y,z)$.

Let $\mathfrak M=(M,\mathcal{S})$ be a countable model of $\mathsf{RCA}_0 + \exists x \forall y \exists z \neg R(x,y,z)$, and let $a \in M$ be such that $\mathfrak M \models \forall y \exists z \neg R(a,y,z)$. Select an $\mathfrak{M}$-infinite $B \in \mathcal{S}$ such that if $(b_0,b_1) \in [B]^2$ then
\begin{equation}\label{eq:conservation-B}
\mathfrak M \models \forall y < b_0 \exists z < b_1 \neg R(a,y,z).
\end{equation}
Let $X$ be an $\mathfrak{M}$-finite subset of $B$ such that $X$ is $\omega^d$-large for some $d \in M \setminus \mathbb{N}$ and $\min X > a$.
By \cite{KoYo} and Theorem \ref{thm:large-prehomogeneous-trees}, we can define a sequence $(X_i: i \in \mathbb{N})$ of $\mathfrak{M}$-finite sets such that
\begin{itemize}
\item $X = X_0 \supseteq\cdots\supseteq X_i \supset X_{i+1}\supseteq\cdots$;
\item In $M$, $X_i$ is $\omega^{d_i}$-large for some $d_i \in M \setminus \mathbb{N}$;
\item $\min X_{i+1} > \min X_i$;
\item If $E$ is an $\mathfrak{M}$-finite set with $|E|^{\mathfrak{M}} < \min X_i$ then there exists a $j > i$ such that $[\min X_j, \max X_j] \cap E = \emptyset$;
\item If $C \in \mathcal{S}$ is a $k$-coloring of $2^{< M}$ for some $k < \min X_i$ then there exist $j > i$ and $S$ such that $S$ is an $\mathfrak{M}$-finite $X_j$-quasistrong and $(C,X_j)$-prehomogeneous tree.
\end{itemize}
Let
$$
I = \bigcup_{i \in \mathbb{N}} [0,\min X_i].
$$
As in \cite{KoYo}, it is easy to verify that $I$ is a semi-regular cut of $M$, and thus $\mathfrak{N} = (I,\operatorname{Cod}(M/I)) \models \mathsf{WKL}_0$. By \eqref{eq:conservation-B} and the fact that $a < \min X_0$,
$$
\mathfrak{N} \models \exists x \forall y \exists z \neg R(x,y,z).
$$
To see that $\mathfrak{N} \models \mathsf{TT}^1$, fix $k \in I$ and a $k$-coloring $\hat{C}$ of $2^{< I}$. Then there exist $i \in \mathbb{N}$ and $C \in \mathcal{S}$ such that $k < \min X_i$, $\hat{C} = C \cap I$ and $C$ is a $k$-coloring of $2^{< M}$. By the construction of the $X_i$'s, let $j > i$ and $S \in M$ be such that $S$ is an $X_j$-quasistrong and $(C,X_j)$-prehomogeneous tree. Then $\hat{S} = S \cap I \in \operatorname{Cod}(M/I)$ is a $\hat{C}$-prehomogeneous perfect tree in $\mathfrak{N}$. As $\mathfrak{N} \models I\Sigma^0_1$, in $\mathfrak{N}$ there exists a $\hat{C}$-homogeneous perfect subtree of $\hat{S}$. Hence, $\mathfrak{N} \models \mathsf{TT}^1$.
\end{proof}

For readers familiar with \cite{KoYo}, it is not difficult to see that the proof of the above theorem can be combined with the proof of \cite[Theorem 3.3]{KoYo} to yield a stronger result:

\begin{theorem}\label{thm:conservation-1}
$\mathsf{WKL}_0 + \mathsf{RT}^2_2 + \mathsf{TT}^1$ is $\Pi^0_3$-conservative over $\mathsf{RCA}_0$.
\end{theorem}

In \cite{CLWY}, it is proved that $\mathsf{TT}^1$ is $\Pi^1_1$-conservative over $\mathsf{RCA}_0 + B\Sigma^0_2 + P\Sigma^0_1$. Since $P\Sigma^0_1$ is a $\Pi^0_3$ sentence, the following is a direct consequence of Theorem \ref{thm:conservation-1}.

\begin{corollary}\label{cor:TT1-PSigma1}
$\mathsf{WKL}_0 + \mathsf{RT}^2_2 + \mathsf{TT}^1 \not\vdash P\Sigma^0_1$.
\end{corollary}

\bigskip

We end this paper with two questions. The second question generalizes the longstanding open question for $\mathsf{RT}^2_2$:
\begin{enumerate}
\item Does $\mathsf{RT}^2_2$ imply $\mathsf{TT}^1$ over $\mathsf{RCA}_0$?
\vskip.15in
\item Is $\mathsf{RCA}_0 +\mathsf{RT}^2_2 +\mathsf{TT}^1$ a $\Pi^1_1$-conservative system over $\mathsf{RCA}_0+B\Sigma_2$?
\end{enumerate}
\section{Introduction} The total angular momentum operators, which contain, besides the orbital angular momentum, also a spin angular momentum term, occur as symmetries of a Dirac Hamiltonian (see the introduction of \cite{DBOVJ18} for a brief overview). When one considers a Dunkl-deformed version of a Dirac equation or operator, its symmetries form a deformation of the total angular momentum algebra. In this article, we continue the study of the Dunkl total angular momentum algebra for arbitrary real reflection groups. This is related to the theory of Howe dual pairs, which we will now explain. Let $(V_0,B_0)$ be a Euclidean pair with $V_0 \cong \mathbb{R}^d$ a real vector space and let $(V,B)$ be its complexification. Let also $\mathsf{O}(d) = \mathsf{O}(V,B) \subset GL(V)$ denote the orthogonal group of the pair $(V,B)$. Denote by $\mathcal{W} = \mathcal{W}(V)$ the Weyl algebra of polynomial-coefficient partial differential operators acting on the polynomial space $\mathbb{C}[V]$. As is well known (see, e.g., \cite[Section 4, item (a)]{Ho89}), the Laplacian and its dual symbol, the squared norm, are $\mathsf{O}(d)$-invariant elements inside $\mathcal{W}$, and they generate a Lie algebra isomorphic to $\mathfrak{sl}(2,\mathbb{C})$. Moreover, the associative subalgebra generated by this realisation of $\mathfrak{sl}(2,\mathbb{C})$ gives all $\mathsf{O}(d)$-invariants in $\mathcal{W}$. The pair $(\mathsf{O}(d),\mathfrak{sl}(2))$ just described is one of the simplest examples in the theory of Howe dual pairs. Together, they give a multiplicity-free decomposition of $\mathbb{C}[V]$ in irreducible $(\mathsf{O}(d),\mathfrak{sl}(2))$-bimodules, where the linked $\mathsf{O}(d)$- and $\mathfrak{sl}(2)$-modules uniquely determine each other. The $\mathsf{O}(d)$-modules in this decomposition are precisely the spherical harmonics. In the Weyl-Clifford algebra $\mathcal{W}\clif = \mathcal{W}\otimes\clif$, where $\clif = \clif(V,B)$ is the Clifford algebra attached to $(V,B)$, the $\mathsf{O}(d)$-invariants include square roots of the squared norm and the Laplacian, with the square root of the latter being the Dirac operator. These square roots generate a Lie superalgebra isomorphic to $\mathfrak{osp}(1|2,\mathbb{C})$, which contains the above-mentioned copy of $\mathfrak{sl}(2,\mathbb{C})$ as its even subalgebra. Also in this case, the associative subalgebra generated by this realisation of $\mathfrak{osp}(1|2,\mathbb{C})$ gives all $\mathsf{O}(d)$-invariants in $\mathcal{W}\clif$. Here, the relevant Howe dual pair is $(\mathsf{Pin}(d),\mathfrak{osp}(1|2,\mathbb{C}))$, to properly account for the spin-representations of $\mathsf{O}(d)$ occurring in a similar multiplicity-free decomposition and correspondence of irreducible modules, now of the space of spinor-valued polynomials (see \cite{Ni91}, \cite{CJHW10}, and also \cite{BDSES10} for classical dualities involving the Pin-group). Deformations of these Howe dual pairs occur when the action of the partial derivatives is replaced by the divided-difference operators introduced by Dunkl \cite{Du89}.
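For orientation, we recall the general shape of these operators; the normalisation below is one common choice, and the precise conventions we use are fixed in Section \ref{s:Prelims}. For a finite real reflection group $W$ with positive system $R^+$, a $W$-invariant parameter function $c$, and a direction $\xi \in V_0$, the Dunkl operator acts on a polynomial $f$ by
\[
T_\xi f(x) = \partial_\xi f(x) + \sum_{\alpha \in R^+} c(\alpha)\, \langle \alpha, \xi \rangle\, \frac{f(x) - f(s_\alpha x)}{\langle \alpha, x \rangle},
\]
a directional derivative corrected by divided differences across the reflection hyperplanes. The fundamental result of \cite{Du89} is that the operators $T_\xi$, $\xi \in V_0$, pairwise commute, which is what allows them to replace the partial derivatives in the constructions above.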
In other words, we are interested in seeing the dual pairs of the previous paragraphs in the context of a rational Cherednik algebra $\mathsf{H}_{t,c}(V,W)$ (see \cite{EG02} and \cite{DO03}) at a parameter function $c$, $t\neq 0$ and associated with $(V,W)$, where we now see $V$ as a (faithful) representation $W \hookrightarrow \mathsf{O}(V,B) \subset GL(V)$ of a real reflection group $W$ (in general, this action does not need to be irreducible or essential). By the well-known PBW properties of these algebras, there is a linear embedding $\mathcal{W} \hookrightarrow \mathsf{H}_{t,c}(V,W)$ (respectively, $\mathcal{W}\clif \hookrightarrow \mathsf{H}\clif_{t,c} := \mathsf{H}_{t,c}(V,W)\otimes \clif$) and the image of $\mathfrak{sl}(2,\mathbb{C})$ (respectively, $\mathfrak{osp}(1|2,\mathbb{C})$) under this embedding still closes into a Lie algebra (respectively, Lie superalgebra) in the Cherednik context (see \cite{He91}, \cite{DBOSS12}). Since the full action of $\mathsf{O}(d)$ is not present in $\mathsf{H}_{t,c}(V,W)$ (only $W\subseteq\mathsf{O}(d)$ acts on the space of Dunkl operators), to make sense of dual pairs, one must first compute the centraliser algebra of the Lie (super)algebra in question. In \cite{CD20b} (see also \cite{BSO06}) the case of $\mathfrak{sl}(2)$ was considered and results on the joint decomposition of the polynomial space for the action of the dual pair were obtained. Similar results for $\mathfrak{osp}(1|2)$ were obtained in \cite{DBOJ18}, while in \cite{Os21} the symmetry algebra was completely determined (see also \cite{OSS09} for an initial overview of the theory of deformations of the Howe dual pairs we consider in the Cherednik context, \cite{DBGV16} for the case when $W$ is a product of groups of type $A_1$, which is used to define a higher rank Bannai--Ito algebra, and \cite{DBOVJ18}, \cite{DBLROVJ22} for other specific choices of the reflection group $W$). In the present work, we focus on the $\mathbb{Z}_2$-graded algebra $O_{t,c}:=O_{t,c}(V,W)$ that is defined as the symmetry algebra of the above-mentioned realisation of $\mathfrak{osp}(1|2)$ inside $\mathsf{H}\clif_{t,c}$. We shall refer to this algebra as the Dunkl total angular momentum algebra, since it contains the total angular momentum operators when $c=0$ and $t=1$. Our main result (Theorem \ref{t:TamaCent}) is a full description of the centre of $O_{t,c}$ for any real reflection group $W$ acting on $V$ and for any parameter function $c$. As an application of the determination of the centre, and inspired by the successful Dirac theories for Drinfeld algebras \cite{Ci16}, we prove in Theorems \ref{t:Vogan} and \ref{t:CentChar} results analogous to the celebrated ideas of Vogan on Dirac cohomology (see \cite{Vo97}, \cite{HP02}, \cite{Ci16}). Our results build on Dirac theories for subalgebras of the Cherednik algebra (see \cite{Ca20} and \cite{CDM22}). The theory in this paper shares similarities with the Dunkl angular momentum algebra \cite{CDM22}, in the sense that we consider a family of operators depending on certain ``admissible'' elements (see Definition \ref{d:Dirac_C}).
We remark that, in the present case, instead of enlarging the algebra in question with a suitably defined Clifford algebra and defining a theory using a Dirac element, we do not tensor $O_{t,c}$ with another Clifford algebra; we use a natural element inside the algebra $O_{t,c}$ itself (see Definition \ref{d:WGamma} -- classically, when $c=0$, the eigenvalues of this element are precisely the square roots of the total angular momentum quantum numbers). This reflects the theory for Hecke-Clifford algebras as in \cite{Ch17}. Finally, we give a breakdown of the contents of the paper: in Section \ref{s:Prelims}, we recall the basic definitions of the rational Cherednik algebra, Clifford algebra and double cover of the reflection group. In this section, we also recall the precise realisation of the $\mathfrak{osp}(1|2)$ Lie superalgebra and we discuss the several notions of ``centre'' that we will consider in the context of $\mathbb{Z}_2$-graded algebras. Next, in Section \ref{s:TAMA}, we define the algebra $O_{t,c}$ and we recall some structural properties such as the set of generators, based on recent results obtained in \cite{Os21}. In Section \ref{s:centre}, we prove our main result on the description of the centre. The key idea in our arguments is to compare with the result when $c=0$ by introducing a parameter $\mathbf{q}$ and using generic versions of the Weyl and rational Cherednik algebras. Further, in Section \ref{s:Vogan}, we discuss the application to the Vogan morphism and Dirac cohomology in our context, and in the last section, we discuss in detail the set of admissible elements of the group algebra $\mathbb{C}\tilde W$. \section{Preliminaries}\label{s:Prelims} We start this section by defining the bilinear products and variations of centre that we will use throughout this paper. \begin{definition} Let $A = A_{\overline{0}}\oplus A_{\overline{1}}$ be a $\mathbb{Z}_2$-graded associative algebra. Throughout this work we will be interested in four different bilinear products on $A$. The first is the associative multiplication of $A$, which will be denoted by juxtaposition. Given homogeneous elements $x,y\in A$ of degrees $|x|$ and $|y|$, we write \begin{align*} \llbracket x,y \rrbracket &= xy -(-1)^{|x||y|} yx,\\ [x,y] &= xy - yx,\\ \{x,y\} &= xy + yx. \end{align*} Accordingly, we define the \textbf{ungraded, graded} and \textbf{anti} centres, respectively denoted by $\centre^{\textup{ug}}(A), \centre^{\textup{gr}}(A)$ and $\centre^{\textup{anti}}(A)$, as \begin{align*} \centre^{\textup{ug}}(A) &= \{z \in A \mid [ z,x ]=0 \textup{ for all } x\in A \},\\ \centre^{\textup{gr}}(A) &= \{z \in A \mid \llbracket z,x \rrbracket=0 \textup{ for all } x\in A \},\\ \centre^{\textup{anti}}(A) &= \centre^{\textup{anti}}_{\overline{0}}(A) \oplus \centre^{\textup{anti}}_{\overline{1}}(A), \end{align*} where \begin{align*} \centre^{\textup{anti}}_{\overline{0}}(A) &= \{z \in A_{\overline{0}} \mid \{ z,x \} =0\textup{ for all } x\in A_{\overline{1}},[ z,x ] =0\textup{ for all } x\in A_{\overline{0}} \},\\ \centre^{\textup{anti}}_{\overline{1}}(A) &= \{z \in A_{\overline{1}} \mid [ z,x ] =0\textup{ for all } x\in A\}.
\end{align*} \end{definition} \begin{remark} Note that $\centre^{\textup{gr}}(A) = \centre^{\textup{gr}}_{\overline{0}}(A) \oplus \centre^{\textup{gr}}_{\overline{1}}(A)$ where \begin{align*} \centre^{\textup{gr}}_{\overline{0}}(A) &= \{z \in A_{\overline{0}} \mid [ z,x ] =0\textup{ for all } x\in A\},\\ \centre^{\textup{gr}}_{\overline{1}}(A) &=\{z \in A_{\overline{1}} \mid [ z,x ] =0\textup{ for all } x\in A_{\overline{0}}, \{ z,x \} =0\textup{ for all } x\in A_{\overline{1}}\}, \end{align*} which justifies the terminology anti-centre. \end{remark} \subsection{Rational Cherednik algebras} Fix a vector space $V\cong \mathbb{C}^d$ with $d\geq 3$ and a non-degenerate symmetric bilinear form $B$ on $V$. We view $B$ as the complexification of a Euclidean structure on $V_0\cong \mathbb{R}^d$. When needed, $\{y_1,\dotsc,y_d\}\subset V$ and $\{x_1,\dotsc,x_d\}\subset V^*$ will denote real orthonormal and dual bases, so that $\delta_{j,k} =\langle x_j,y_k\rangle = B(y_j,y_k)$ for $j,k\in\{1,\dotsc,d\}$, where $\langle-,-\rangle:V^*\times V\to \mathbb{C}$ denotes the natural bilinear pairing. Let $\mathsf{O}(d) \colonequals \mathsf{O}(V,B)$ denote the orthogonal group of the pair $(V,B)$ and $ \mathsf{SO}(V,B)$ denote its identity component. We consider a finite real reflection group $W \subset \mathsf{O}(d)$. Let $R\subset V^*$ denote the root system of $W$ and fix a positive system $R^+ \subset R$, and a compatible choice of simple roots $\Delta$. Let $c\colon R^+ \to \mathbb{C}$ be a $W$-invariant parameter function and denote $c(\alpha) = c_\alpha$ where $c_{w\alpha} = c_\alpha$ for all $w\in W$ and $\alpha \in R$. For a root $\alpha\in R$, denote by $s_{\alpha}\in W$ the reflection in the hyperplane perpendicular to $\alpha$, and by $\alpha^\vee\in V$ the coroot such that $\langle \alpha^\vee,\alpha \rangle = 2$. Fix $t\in \mathbb{C}^\times$. \begin{definition}\label{d:RCA} Define $\mathsf{H}_{t,c} \colonequals\mathsf{H}_{t,c}(V,W)$ to be the quotient of $T(V \oplus V^*) \rtimes W$ by the following relations for $y,v \in V$ and $x,u\in V^*$: \begin{equation} \label{e:RC} [x,u] = 0 = [y,v], \qquad [y,x] = t\langle y, x\rangle - \sum_{\alpha>0} \langle y, \alpha\rangle\langle \alpha^{\vee}, x \rangle c(\alpha) s_{\alpha} . \end{equation} \end{definition} \begin{remark} When $c=0$ and $t=1$, we have that $\mathsf{H}_{1,0}=\mathcal{W}\rtimes W$, where $\mathcal{W}$ is the Weyl algebra of polynomial-coefficient differential operators associated with $V\oplus V^*$. Note that for a general $t\in \mathbb{C}^\times$, we have $\mathsf{H}_{t,0} \cong \mathcal{W}\rtimes W$. We shall refer to the $c=0$ case as the classical or undeformed case. \end{remark} \subsection{Clifford algebras} We consider the Clifford algebra $ \clif \colonequals \clif (V,B)$ with canonical map $\gamma \colon V\to \clif $. Let $e_j \colonequals \gamma(y_j) $ for $j\in\{1,\dotsc,d\}$; then $ \clif $ is generated by $\{e_1,\dotsc,e_d\}$ satisfying \begin{equation}\label{e:clifrel} \{e_j,e_k\} =2 B(y_j,y_k) = 2 \delta_{jk}\,. \end{equation} The Clifford algebra is naturally $\mathbb{Z}_2$-graded with $\gamma(V)$ having degree $\overline{1}$. We shall denote by $\hc_{t,c}$ the tensor product $\mathsf{H}_{t,c}\otimes \clif$. For a subset $A \subset \{1,\dotsc,d\}$, with elements $A= \{ i_{1},i_{2},\dotsc,i_{k}\} $ such that $1\leq i_{1}<i_{2}<\cdots <i_{k}\leq d$, we denote $ e_A = e_{i_{1}}e_{i_{2}}\cdots e_{i_{k}} $.
Let $e_{\emptyset} = 1$; then a basis for $\clif$ as a vector space is given by $\{e_A \mid A \subset \{1,\dotsc,d\} \}$. We denote the chirality operator or pseudo-scalar of the Clifford algebra as \begin{equation}\label{e:Gamma} \Gamma \colonequals i^{d(d-1)/2} e_1 \dotsm e_d \in \clif; \end{equation} it satisfies $\Gamma^2 = 1$. In the Clifford algebra, there is a realisation of the group $\mathsf{Pin} \colonequals \mathsf{Pin}(V,B)$, which is a double cover of the orthogonal group via a map $p \colon \mathsf{Pin} \to \mathsf{O}(d)$. A double cover of the group $W$ is defined as $\tilde W = p^{-1}(W)$. For a reflection $s \in W$ let $\tilde s$ denote a preimage in $\tilde W$, so $p(\tilde s) = s$. Let $\theta$ be the nontrivial preimage of $1$ in $\tilde{W}$. The element $\theta$ is central in $\tilde{W}$ and has order two: $\theta^2 =1$. The group $W$ has the presentations \[ W = \langle s_\alpha,\alpha\in R\mid s_\alpha^2=1,s_\alpha s_\beta s_\alpha = s_\gamma,\gamma = s_\alpha(\beta)\rangle, \] \[ W = \langle s_\alpha,\alpha\in \Delta \mid (s_\alpha s_\beta)^{m_{\alpha,\beta}} = 1 \rangle, \] while the double cover has the corresponding presentations \begin{equation}\label{e:PinPresentation} \tilde{W} = \langle \theta, \tilde{s}_\alpha,\alpha\in R\mid \tilde{s}_\alpha^2=1=\theta^2,\tilde{s}_\alpha \tilde{s}_\beta \tilde{s}_\alpha = \theta\tilde{s}_\gamma,\gamma = s_\alpha(\beta), \theta \text{ central}\rangle, \end{equation} \begin{equation}\label{e:PinPresentationmalpha} \tilde{W} = \langle \theta, \tilde{s}_\alpha,\alpha\in \Delta \mid (\tilde{s}_\alpha \tilde{s}_\beta)^{m_{\alpha,\beta}} = (\theta)^{m_{\alpha,\beta}-1}, \theta \text{ central}\rangle. \end{equation} The group algebra $\mathbb{C} \tilde{W}$ splits into two subalgebras \begin{equation}\label{e:idemptildeW} \mathbb{C}\tilde{W} = \frac{1}{2}(1+\theta) \mathbb{C} \tilde{W} \oplus \frac{1}{2}(1-\theta)\mathbb{C}\tilde{W}. \end{equation} We shall denote the algebras $\frac{1}{2}(1\pm \theta)\mathbb{C}\tilde{W}$ by $\mathbb{C}\tilde{W}_\pm$, respectively. The algebra $\mathbb{C}\tilde{W}_+$ is isomorphic to $\mathbb{C} W$. We define a diagonal map $\rho$ from $\tilde{W}$ to $\hc_{t,c} = \mathsf{H}_{t,c} \otimes \clif$: \begin{equation}\label{e:rho} \rho \colon \tilde W \to \mathsf{H}_{t,c} \otimes \clif \colon \tilde w \mapsto p(\tilde w) \otimes \tilde w \end{equation} which is extended linearly to a homomorphism on the group algebra $\mathbb{C}\tilde W$. \begin{proposition}\label{p:rhoiso} The image $\rho(\mathbb{C}\tilde W)$ is isomorphic to $\mathbb{C}\tilde W_-$. \end{proposition} \begin{proof} Using the first isomorphism theorem, it is sufficient to prove that the kernel of $\rho$ is $\mathbb{C}\tilde{W}_+ = \frac{1}{2}(1+\theta)\mathbb{C}\tilde{W}$. Note that $\rho(\theta) = 1 \otimes -1$, therefore $\rho(1+\theta) = 0$ and hence $\mathbb{C}\tilde{W}_+$ is contained in the kernel. Furthermore, the image of $\rho$ contains the vectors $w \otimes \tilde{w}$, where for each $w \in W$ we choose a single element $\tilde{w} \in p^{-1}(w)$; these are $|W|$ linearly independent vectors, so the dimension of $\rho(\mathbb{C}\tilde{W})$ is at least $|W| = \dim \mathbb{C}\tilde{W}_-$. Hence the full kernel is $\mathbb{C}\tilde{W}_+$. \end{proof} The tensor product $\mathsf{H}_{t,c}\otimes \clif $ is $\mathbb{Z}_2$-graded, inheriting the $\mathbb{Z}_2$-grading from $\clif$.
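\begin{remark} To illustrate the role of the central element $\theta$, suppose that $\alpha, \beta \in R$ are orthogonal roots, so that $s_\alpha(\beta) = \beta$ and $s_\alpha s_\beta = s_\beta s_\alpha$ in $W$. The presentation \eqref{e:PinPresentation} yields $\tilde{s}_\alpha \tilde{s}_\beta \tilde{s}_\alpha = \theta \tilde{s}_\beta$, and multiplying on the right by $\tilde{s}_\alpha$ and using $\tilde{s}_\alpha^2 = 1$ gives
\[
\tilde{s}_\alpha \tilde{s}_\beta = \theta\, \tilde{s}_\beta \tilde{s}_\alpha.
\]
Since $\rho(\theta) = 1 \otimes -1$, the images $\rho(\tilde{s}_\alpha)$ and $\rho(\tilde{s}_\beta)$ anticommute in $\hc_{t,c}$, even though the underlying reflections commute in $W$. \end{remark}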
\subsection{The realisation of \texorpdfstring{$\mathfrak{osp}(1|2)$}{osp(1|2)}} In the undeformed case, the invariants for the action of $\mathsf{O}(d)$ in the Weyl-Clifford algebra $\mathcal{W}\otimes \clif$ are generated by the scalar products (the symmetric tensor corresponding to the bilinear form $B$, for a copy of $S^2(V^*)$ in $\mathcal{W}\clif$), see \cite[Theorem 2.1, p.~390]{Pr07} or \cite[Theorem 4.19]{CW12}. These elements form a realisation of the Lie superalgebra $\mathfrak{osp}(1|2)$ in $\mathcal{W}\otimes \clif$, and this realisation is preserved in the deformation $\mathsf{H}_{t,c}\otimes \clif$. In terms of the $B$-orthonormal bases for $V$ and $V^*$, we can write these as \begin{equation}\label{e:osp12} \begin{aligned} F^+ & = \frac{1}{\sqrt{2t}}\sum_{p=1}^d x_p e_p ,& F^- & = \frac{1}{\sqrt{2t}}\sum_{p=1}^d y_p e_p,\\ E^+& = \frac1{2t}\sum_{p=1}^d x_p^2,& E^-& = -\frac1{2t}\sum_{p=1}^d y_p^2,\\ H & = \frac{1}{t}\left(\sum_{p=1}^d x_py_p + \frac{td}{2} - \css\right), \end{aligned} \end{equation} where the element $\css \colonequals \sum_{\alpha>0} c(\alpha)s_\alpha$ is central in $\mathbb{C} W$. These elements satisfy the relations \begin{equation}\label{e:osp12re} \begin{aligned} \llbracket F^{+} , F^{-} \rrbracket & = H ,\quad & \llbracket H , F^{\pm} \rrbracket &= \pm F^{\pm}, & \llbracket F^{\pm} , F^{\pm} \rrbracket & = \pm 2\,E^{\pm}, \\ \llbracket E^{+} , E^{-} \rrbracket & = H, & \llbracket H , E^{\pm} \rrbracket & = \pm2\,E^{\pm}, \quad & \llbracket F^{\pm},E^{\mp} \rrbracket & = F^{\mp}. \end{aligned} \end{equation} The following elements in $U(\mathfrak{osp}(1|2))$ will play an important role in this manuscript. \begin{definition} The $\mathfrak{osp}(1|2)$ Scasimir operator is given by \begin{equation}\label{e:Scasiosp} \mathcal{S} = (F^-F^+ - F^+F^- - 1/2) \in U(\mathfrak{osp}(1|2)), \end{equation} while the $\mathfrak{osp}(1|2)$ Casimir operator is given by \begin{equation}\label{e:Casiosp} \Omega_\mathfrak{osp} = H^2 +2(E^+E^-+E^-E^+) -(F^+ F^- -F^-F^+) \in U(\mathfrak{osp}(1|2)) . \end{equation} \end{definition} The Scasimir $\mathcal{S}$ is in the anti-centre of $U(\mathfrak{osp}(1|2))$ and the quadratic Casimir element $\Omega_\mathfrak{osp}$ is in the graded centre of $ U(\mathfrak{osp}(1|2))$. These two elements are related in the following well-known way. \begin{proposition}\label{p:scasisquare} The Scasimir $\mathcal{S}$ squares to $\Omega_\mathfrak{osp}+\frac14$. \end{proposition} \begin{proof} The above proposition is stated in \cite[Example 2, p. 9]{Fr96} with a different normalisation. \end{proof} \begin{remark} Note that $\Omega_{\mathfrak{sl}(2)} = H^2+2(E^+E^-+E^-E^+)$ is the quadratic Casimir element of the even subalgebra $\mathfrak{sl}(2)$ spanned by $H,E^+,E^-$. \end{remark} \section{Centraliser algebra of \texorpdfstring{$\mathfrak{osp}(1|2)$}{osp(1|2)}}\label{s:TAMA} \begin{definition} The $\mathbb{Z}_2$-graded algebra $O_{t,c} \colonequals O_{t,c}(V,W)$ is the graded centraliser of $\mathfrak{osp}(1|2)$, given by~\eqref{e:osp12} inside $\mathsf{H}_{t,c}\otimes \clif $: \[ O_{t,c}(V,W) \colonequals \{ \, a \in \mathsf{H}_{t,c}\otimes \clif \mid \llbracket a ,b \rrbracket = 0 \text{ for all } b \in \mathfrak{osp}(1|2) \,\} . \] \end{definition} The elements of $O_{t,c}$ were described in~\cite{Os21}. We have that $\rho( \mathbb{C} \tilde W) \subset O_{t,c}$.
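\begin{remark} As a sanity check on the normalisations in \eqref{e:osp12}, note that the relation $\llbracket F^{+}, F^{+} \rrbracket = 2E^{+}$ of \eqref{e:osp12re} follows directly from the Clifford relations \eqref{e:clifrel} together with the commutativity of the $x_p$:
\[
\llbracket F^{+}, F^{+} \rrbracket = 2(F^+)^2 = \frac{1}{t}\sum_{p,q=1}^d x_p x_q\, e_p e_q = \frac{1}{2t}\sum_{p,q=1}^d x_p x_q \{e_p, e_q\} = \frac{1}{t}\sum_{p=1}^d x_p^2 = 2E^+.
\]
In particular, this relation holds for every parameter function $c$; the deformation enters \eqref{e:osp12re} only through the relations that mix $x$'s and $y$'s, such as $\llbracket F^{+}, F^{-} \rrbracket = H$. \end{remark}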
Moreover, there is an isomorphism as $W$-modules ($\mathsf{O}(d)$-modules when $c=0$) from $\bigwedge (V)$ to a subspace of $O_{t,c}$, which sends $y_{i_{1}} \wedge y_{i_{2}} \wedge\cdots \wedge y_{i_{k}} \in \bigwedge^k( V)$, where $A= \{ i_{1},i_{2},\dotsc,i_{k}\} \subset \{1,\dotsc,d\}$, to \begin{equation}\label{e:OA} O_A = O_{i_{1}i_{2}\dotsm i_{k}} = \bigg(\frac{|A| -1}{2}t + \sum_{a\in A} \oO_{a} e_{a} +\sum_{\{a,b\}\subset A } {M}_{ab} e_{ab} \bigg)e_A \in O_{t,c}, \end{equation} where $M_{ij} = x_iy_j - x_jy_i$ and the elements $\oO_j$ are defined as \begin{equation}\label{e:Oj} \oO_j \colonequals \frac12\sum_{\alpha>0} \langle y_j,\alpha \rangle \, c(\alpha) \,s_\alpha \, \gamma({\alpha^{\vee}}) \in \rho( \mathbb{C} \tilde{W} ). \end{equation} Note that $y_j\in \bigwedge^1 (V)$ is sent to $O_j = \oO_j$, and that an element of the form $O_{u_1\dotsm u_n} $ for $u_1,\dotsc,u_n\in V$ is skew-symmetric multilinear in its indices. \begin{remark} The notation $\oO_j$ is used instead of $O_j$ to emphasize that they are elements of $\rho( \mathbb{C} \tilde{W} )$, and also to more easily distinguish their occurrence in the algebra relations of Section~\ref{s:rels}. \end{remark} \begin{remark} When $A= \{1,\dotsc,d\}$ in~\eqref{e:OA}, the element $O_{1\dotsm d}$ and the Scasimir $\mathcal{S}$ can be related by $\Gamma$ (see \cite[Section~3.1, page 1922]{DBOJ18} or \cite{Os21}), \begin{equation}\label{e:SGamma} \mathcal{S} \,\Gamma = \frac{i^{d(d-1)/2}}{t}O_{1\dotsm d}. \end{equation} Furthermore, the square of the Scasimir can be written as (see \cite{Os21}) \begin{equation} \mathcal{S}^2 = \Omega_\mathfrak{osp} +\frac{1}{4} = \frac{(d-1)(d-2)}{8} - \frac{(d-2)}{t^2}\sum_{j=1}^d (\oO_{j})^2 -\frac{1}{t^2}\sum_{1\leq j<k \leq d } ({O}_{jk})^2. \end{equation} When $c=0$, the right-hand side is the quadratic Casimir of $\mathfrak{so}(d)$ (see Remark \ref{rem:classicalsod} below). Its eigenvalues give the total angular momentum quantum numbers. \end{remark} \noindent As a $\mathbb{Z}_2$-graded algebra, $O_{t,c}$ is generated by (see \cite{Os21}) \begin{itemize} \item $\rho(\tilde W)$, which has its usual $\mathbb{Z}_2$-degree, \item the even elements \begin{equation}\label{e:Oij} O_{ij} = M_{ij} +t e_ie_j/2 + \oO_ie_j - \oO_je_i, \end{equation} \item and the odd elements \begin{equation}\label{e:Oijk} O_{ijk} = M_{ij} e_k -M_{ik} e_j +M_{jk} e_i + te_ie_je_k + \oO_i e_je_k-\oO_j e_ie_k+\oO_k e_ie_j. \end{equation} \end{itemize} In particular, we have~\cite{Os21} \begin{align*} O_{klmn} & = 6 \, \mathcal A (O_{kl} O_{mn})- 8\, \mathcal A ( O_{klm}\oO_n) \\ & = \{O_{kl} ,O_{mn}\}- \{O_{km},O_{ln}\} + \{O_{kn},O_{lm}\} \\ & \quad -2( O_{klm}\oO_n- O_{kln}\oO_m+ O_{kmn}\oO_l - O_{lmn}\oO_k )\\ O_{jklmn} & = 4 \mathcal A (O_{jkl} O_{mn})+ 48\mathcal A (O_{jkl} \oO_{m}\oO_{n}) - 36 \mathcal A (O_{jk} O_{lm}\oO_{n}) \end{align*} where $\mathcal A$ denotes the antisymmetriser or antisymmetrizing operator, which has the following action on a multilinear expression with $n$ indices \begin{equation}\label{e:asym} \mathcal A (f_{u_1u_2\dotsm u_n}) = \frac{1}{n!} \sum_{s \in \mathrm{S}_n} \sgn (s)\, f_{u_{s(1)}u_{s(2)}\dotsm u_{s(n)}}.
\end{equation} \subsection{Relations}\label{s:rels} In the algebra $O_{t,c}$, we have the following relations. These relations differ slightly from \cite{Os21} inasmuch as here we define the Cherednik algebra with a general parameter $t \neq 0$. For $\rho(\tilde w ) \in \mathbb{C} \tilde W_-$ with $\tilde w \in \tilde W$, \begin{equation} \rho(\tilde w ) O_{u_1 \dotsb u_n} = (-1)^{|\tilde w|n} O_{p(\tilde w)\cdot u_1 \dotsb p(\tilde w)\cdot u_n} \rho(\tilde w ) . \end{equation} For $i,j,k,l,m,n$ distinct elements of the set $\{1,\dotsc,d\}$, using the non-graded commutator $[A,B] = AB - BA$ and anticommutator $\{A,B\} = AB + BA$, we have the relations \begin{align}\label{e:15} & [O_{ij},\oO_{k}] - [O_{ik},\oO_j] + [ O_{jk},\oO_i] = 0,\\ & \{O_{ijk},\oO_l\} - \{O_{ijl},\oO_k\} + \{O_{ikl},\oO_j\} - \{O_{jkl},\oO_i\} = 0,\label{e:16} \end{align} \begin{align} [O_{ij},O_{ki}] &= tO_{jk} + [\oO_i,\oO_j] +\{O_{ijk},\oO_i\},\label{e:17} \\ [O_{ij},O_{kl}] &= \{\oO_i,O_{jkl}\} -\{\oO_j,O_{ikl}\},\label{e:18} \end{align} \begin{align} [O_{jk},O_{lmn} ] & = [\oO_j, O_{klmn}] - [\oO_k,O_{jlmn}],\label{e:19}\\ [O_{jk},O_{jlm} ] & = -tO_{klm} - \{\oO_k,O_{lm}\} - [\oO_j,O_{jklm}],\label{e:20}\\ [O_{jk},O_{jkl} ] & = -\{\oO_j,O_{jl}\} - \{\oO_k,O_{kl}\}, \label{e:21} \end{align} \begin{align} \{O_{ijk},O_{ijk}\} & = 2\left( \oO_{i}^2+ \oO_{j}^2 + \oO_{k}^2 + O_{ij}^2 + O_{ik}^2 + O_{jk}^2\right) -\frac{t^2}{2}\label{e:22},\\ \{O_{ijk},O_{ijl}\} & = \{\oO_k,\oO_l\} + \{O_{ik},O_{il}\}+ \{O_{jk},O_{jl}\},\label{e:23}\\ \{O_{ijk},O_{imn}\} & = t O_{jkmn} + \{O_{jk},O_{mn}\} +\{\oO_i,O_{ijkmn}\}, \label{e:24}\\ \{O_{ijk},O_{lmn}\} & = \{\oO_i,O_{jklmn}\} - \{\oO_j ,O_{iklmn}\} +\{\oO_k , O_{ijlmn}\}. \label{e:25} \end{align} \begin{remark}\label{rem:classicalsod} When $c=0$, the commutation relations~\eqref{e:17} and~\eqref{e:18} show that the linear span of the 2-index symmetries $O_{ij}$ forms a realisation of the Lie algebra $\mathfrak{so}(d)$. \end{remark} \section{Centre of \texorpdfstring{$O_{t,c}$}{Oc}}\label{s:centre} In order to determine the graded centre of $O_{t,c}$, we shall first look at the classical graded centre, that is, the case $c=0$. For $c=0$, $O_{t,0}$ is realised inside $\hc_{t,0}=(\mathcal{W}\rtimes W)\otimes\clif$. As a $\mathbb{Z}_2$-graded algebra, the Weyl-Clifford algebra $\mathcal{W}\otimes \clif$ is generated by $\mathcal{V} = \mathcal{V}_{\bar0} \oplus \mathcal{V}_{\bar1}$ where $ \mathcal{V}_{\bar0} = V\oplus V^*$ and $\mathcal{V}_{\bar1}=V$. As an $\mathsf{O}(V,B)$-module, $\mathcal{W}\otimes \clif$ is isomorphic to the supersymmetric algebra $S(\mathcal{V}) = S(\mathcal{V}_{\bar0}) \otimes \bigwedge (\mathcal{V}_{\bar1})$, via the quantisation maps (see \cite[Proposition 5.4]{CW12}). \begin{lemma}\label{lemma:Gamma} The algebra of invariants $(\mathcal{W}\otimes\clif)^{\mathsf{O}(V,B)}$ is generated by the realisation of $\mathfrak{osp}(1|2)$ given by~\eqref{e:osp12}.
The algebra of invariants $(\mathcal{W}\otimes\clif)^{\mathsf{SO}(V,B)}$ is generated by $\mathfrak{osp}(1|2)$ and the Clifford algebra pseudo-scalar $\Gamma\in\clif$ given by~\eqref{e:Gamma}. \end{lemma} \begin{proof} By \cite[Theorem 4.19]{CW12}, the invariants for $\mathsf{O}(V,B)$ in $S(\mathcal{V})$ are generated by the quadratic invariants (the symmetric tensor corresponding to the bilinear form $B$, for a copy of $S^2(V^*)$ in $S^2(\mathcal{V})$). In $\mathcal{W}\otimes\clif$, these can all be written in terms of the realisation of $\mathfrak{osp}(1|2)$ given in~\eqref{e:osp12}, and the constants. By \cite[Theorem 2.1, p.~390]{Pr07}, the invariants for $\mathsf{SO}(V,B)$ are generated by the scalar products (the quadratic invariants) and the determinants (the alternating tensor corresponding to the map $\det$, for a copy of $\bigwedge^d (V^*)$ in $\bigwedge^d(\mathcal{V})$). In $\mathcal{W}\otimes \clif$, the determinant tensors can all be written as products of quadratic invariants and the pseudo-scalar $\Gamma$ (which is the determinant tensor for the copy of $\bigwedge^d (V^*)$ inside $\bigwedge (V^*) \cong \clif$). \end{proof} \begin{proposition}\label{p:ClassCenter} When $c=0$, the graded centre of $O_{t,0}$ in $\hc_{t,0}$ is the univariate polynomial ring in $\mathbb{S}$, where $\mathbb{S}$ is the Casimir $\Omega_{\mathfrak{osp}}$ of $\mathfrak{osp}(1|2)$ when $(-1)_V\notin W$, and $\mathbb{S}=\mathcal{S}(-1)_V$ with $\mathcal{S}$ the Scasimir of $\mathfrak{osp}(1|2)$ when $(-1)_V\in W$. \end{proposition} \begin{proof} For $c=0$, the linear span $\mathfrak{o} := \langle O_{ij} \mid 1\leq i,j\leq d \rangle$ of the 2-index symmetries $O_{ij}$ forms a realisation of $\mathfrak{so}(d)$ inside $(\mathcal{W}\rtimes W)\otimes\clif$, as was noted in Remark \ref{rem:classicalsod}. For the subalgebra $\mathcal{W}\otimes\clif \subset (\mathcal{W}\rtimes W)\otimes\clif$, by exponentiation, we have $\Cent_{\mathcal{W}\otimes\clif}(\mathfrak{o}) = (\mathcal{W}\otimes \clif)^{\mathsf{SO}(V,B)}$, which is generated by $ \mathfrak{osp}(1|2)$ and $ \Gamma $, as given by Lemma~\ref{lemma:Gamma}. From the action of $W$ on $\mathfrak{o}$, it follows that $\Cent_{(\mathcal{W}\rtimes W)\otimes\clif}(\mathfrak{o})$ is generated by $ \mathfrak{osp}(1|2)$, $\Gamma $ and $W \cap \{1,(-1)_V\} $. Now, $ \mathfrak{osp}(1|2)$ and $W \cap \{1,(-1)_V\} $ supercommute with the elements $O_{ijk} \in O_{t,0}$ and $\rho(\tilde W)$. However, $\Gamma$ is in the anti-centre, and not the graded centre, of the Weyl-Clifford algebra. Hence, $\Gamma$ does not supercommute with elements that have odd $\mathbb{Z}_2$-grading, such as $O_{ijk} \in O_{t,0}$. Since $\Gamma^2 = 1$, it follows that $\Cent_{(\mathcal{W}\rtimes W)\otimes\clif}(O_{t,0} )$ is generated by $ \mathfrak{osp}(1|2)$ and $W \cap \{1,(-1)_V\} $. The claim now follows from intersecting $\Cent_{(\mathcal{W}\rtimes W)\otimes\clif}(O_{t,0} )$ with $O_{t,0}$: if $w_0 = (-1)_V$, then $\mathcal{S} (-1)_V = \mathcal{S} \,\Gamma\,\wlong \in O_{t,0}$, by~\eqref{e:SGamma}. \end{proof} In order to determine the graded centre of $O_{t,c}$ when $c\neq 0$, it is convenient to introduce a formal central parameter $\mathbf{q}$ and define the Weyl and the rational Cherednik algebras as algebras over the ring of Laurent polynomials $\mathbb{C}[\mathbf{q},\mathbf{q}^{-1}]$. We will proceed in a similar fashion as in \cite{CD20b}.
To that end, we define the generic Weyl algebra $\mathcal{W}_\mathbf{q}$ as the unital associative algebra over $\mathbb{C}$ generated by $\mathbf{q},\mathbf{q}^{-1}$, $x\in V^*$ and $y\in V$ subject to the relations $[\mathbf{q}^n,x] = 0 = [\mathbf{q}^n,y] = [x,x'] = [y,y']$ for all $n\in \mathbb{Z}$ and \begin{equation}\label{eq:gradedWeyl} [y,x] = \mathbf{q}^2 \langle y,x \rangle \end{equation} for all $x,x' \in V^*$ and $y,y' \in V$. In the next proposition, we will use the multiindex notation $x^\alpha = x_1^{a_1}\cdots x_d^{a_d}$ for any $\alpha = (a_1,\cdots,a_d)\in\mathbb{N}^d$. \begin{proposition}\label{p:genWeyl} The set $\{\mathbf{q}^nx^\alpha y^\beta\mid n\in \mathbb{Z}, \alpha,\beta\in\mathbb{N}^d\}$ forms a $\mathbb{C}$-linear basis of $\mathcal{W}_\mathbf{q}$. Furthermore, if we define, for each $m\in \mathbb{Z}$ \[ \mathcal{W}_\mathbf{q}^m := \textup{span}_\mathbb{C}\{\mathbf{q}^nx^\alpha y^\beta\mid n+|\alpha|+|\beta| = m\}, \] then the generic Weyl algebra $\mathcal{W}_\mathbf{q} = \oplus_{m\in\mathbb{Z}} \mathcal{W}_\mathbf{q}^m$ is a $\mathbb{Z}$-graded $\mathbb{C}$-algebra. \end{proposition} \begin{proof} The claim about the linear basis is immediate from the well-known linear isomorphism between the Weyl algebra and the symmetric algebra on $V\oplus V^*$. The description of the grading amounts to declaring $\mathbf{q}, S^1(V\oplus V^*)$ to be of degree $1$ and $\mathbf{q}^{-1}$ to be of degree $-1$. The result follows by observing that the defining relation (\ref{eq:gradedWeyl}) is a graded relation in $\mathcal{W}_\mathbf{q}$. \end{proof} Similarly, we define the generic rational Cherednik algebra $\mathsf{H}_{\mathbf{q},c}(V,W)$ by introducing the central parameter $\mathbf{q}$ and requiring that the defining relation satisfy \begin{equation}\label{eq:genRC} [y,x] = \mathbf{q}^2\langle y, x\rangle - \sum_{\alpha>0} \langle y, \alpha\rangle\langle \alpha^{\vee}, x \rangle c(\alpha) s_{\alpha} \end{equation} for all $x\in V^*,y\in V$. Note that the non-generic Cherednik algebra of Definition \ref{d:RCA} can be obtained from the generic Cherednik algebra by sending $\textbf{q}^2$ to $t$. By means of the well-known PBW linear basis of $\mathsf{H}_{t,c}(V,W)$, it is straightforward to check that monomials of the type $\mathbf{q}^nx^\alpha y^\beta w$, with $n\in \mathbb{Z}, \alpha,\beta$ multi-indices and $w\in W$ form a linear basis of the generic rational Cherednik algebra. This $\mathbb{C}$-linear basis is independent of the choice of the parameter function $c$, and note that when $c=0$, we get $\mathsf{H}_{\mathbf{q},0}(V,W) = \mathcal{W}_\mathbf{q} \rtimes W$. Now define a filtration $\mathcal{F}^{(m)}$ on $\mathsf{H}_{\mathbf{q},c}(V,W)$ in the following way. We declare $\mathbf{q},S^1(V+V^*)$ to be of degree $1$, $w \in W$ to be of degree $0$ and $\mathbf{q}^{-1}$ to be of degree $-1$. Let \begin{equation}\label{eq:filtration} \mathcal{F}^{(m)} = \textup{span}_\mathbb{C}\left\{\mathbf{q}^n x^\alpha y^\beta w \mid n+|\alpha|+|\beta| \leq m\right\}. \end{equation} \begin{proposition}\label{p:bracket} Given any $\xi\in\mathcal{F}^{(m)},\eta\in \mathcal{F}^{(n)}$ we have that \[ [\xi,\eta] \equiv [\xi,\eta]_0 \] modulo $\mathcal{F}^{(m+n-1)}$, where $[\xi,\eta]_0$ denotes the commutator product in the algebra $\mathsf{H}_{\mathbf{q},0}(V,W) = \mathcal{W}_\mathbf{q} \rtimes W$ at $c=0$. 
\end{proposition} \begin{proof} It suffices to prove the result when $\xi$ is a monomial in $S^m(V+V^*)$, since $[\mathbf{q}^n\xi w,\eta] = \mathbf{q}^n(\xi[w,\eta] + [\xi,\eta]w)$ and $[w,\eta] = [w,\eta]_0$, for all $w \in W$. Let $p\in S(V^*)$ and $q\in S(V)$. For any $y\in V, x\in V^*$, it is known that (see \cite{Gr10} and \cite[Propositions 2.5, 2.6]{CD20a}) \begin{align*} [y,p] &= \mathbf{q}^2\partial_y(p) - \sum_{\alpha >0} c(\alpha) \langle \alpha,y \rangle \frac{p-s_\alpha(p)}{\alpha}s_\alpha,\\ [q,x] &= \mathbf{q}^2\partial_x(q) - \sum_{\alpha >0} c(\alpha) \langle x,\alpha^\vee \rangle \frac{q-s_\alpha(q)}{\alpha^\vee}s_\alpha. \end{align*} Hence, the claim holds when $\xi\in S^1(V+V^*)$. Now, given any $\nu \in S^{1}(V+V^*)$ and $\xi\in S^m(V+V^*)$, from $[\nu\xi,\eta] = \nu[\xi,\eta] + \xi[\nu,\eta]$, the result is proved by induction on the monomial degree. \end{proof} \begin{corollary}\label{c:genRCA} Let $\mathsf{Gr}(\mathsf{H}_{\mathbf{q},c}(V,W))$ be the associated graded algebra with respect to the filtration defined in (\ref{eq:filtration}). Then, as $\mathbb{Z}$-graded $\mathbb{C}$-algebras, we have $\mathsf{Gr}(\mathsf{H}_{\mathbf{q},c}(V,W))\cong \mathsf{H}_{\mathbf{q},0}(V,W)=\mathcal{W}_\mathbf{q}\rtimes W$. \end{corollary} Now let $\hc_{\mathbf{q},c} = \mathsf{H}_{\mathbf{q},c}(V,W)\otimes \clif$. We define a filtration $\mathcal{G}^{(m)}$ on $\hc_{\mathbf{q},c}$ similar to the filtration $\mathcal{F}^{(m)}$ of (\ref{eq:filtration}), but requiring that the Clifford elements are of degree $0$. \begin{corollary}\label{c:GrgenRCA} When $c=0$, the algebra $\hc_{\mathbf{q},0} = (\mathcal{W}_\mathbf{q}\rtimes W)\otimes \clif$ is a $\mathbb{Z}$-graded $\mathbb{C}$-algebra. For any $c$, with respect to the filtration $\mathcal{G}^{(m)}$, the associated graded object $\mathsf{Gr}(\hc_{\mathbf{q},c})$ is isomorphic to $\hc_{\mathbf{q},0}$ as $\mathbb{Z}$-graded $\mathbb{C}$-algebras. \end{corollary} \begin{proof} The set \[\{\mathbf{q}^nx^\alpha y^\beta w \otimes e_A\mid n\in \mathbb{Z},w \in W, A \subset \{1,\ldots,d\},\alpha,\beta\textup{ multiindices}\}\] is a $\mathbb{C}$-linear basis of $\hc_{\mathbf{q},c}$, for all $c$. Furthermore, the product of two such monomials $\mu_1=\mathbf{q}^{n_1}x^{\alpha_1} y^{\beta_1} w_1 \otimes e_{A_1}$ and $\mu_2=\mathbf{q}^{n_2}x^{\alpha_2} y^{\beta_2} w_2 \otimes e_{A_2}$ can be written as \[ \mu_1\mu_2 = \mathbf{q}^{n_1+n_2}x^{\alpha_1}\big(w_1(x^{\alpha_2})y^{\beta_1}+[y^{\beta_1},w_1(x^{\alpha_2})]\big)w_1(y^{\beta_2}) w_1w_2 \otimes e_{A_1}e_{A_2}. \] The filtration degree of such an expression depends on the commutator in $\mathsf{H}_{\mathbf{q},c}(V,W)$, so our claims follow from Proposition \ref{p:genWeyl} and Corollary \ref{c:genRCA}. \end{proof} Next, let $\mathfrak{g}\subset \mathsf{H}_{t,c}(V,W)\otimes\clif$ denote the realisation of the $\mathfrak{osp}(1|2)$ Lie superalgebra of (\ref{e:osp12}) and let $\mathfrak{g}_{\overline{0}}$ and $\mathfrak{g}_{\overline{1}}$ denote the even and odd parts of $\mathfrak{g}$. Denote $\mathsf{A}_{t,c} = \Cent_{\mathsf{H}_{t,c}}(\mathfrak{g}_{\bar{0}})$. Slightly abusing the notation, we still denote by $\mathfrak{g}$ the $5$-dimensional vector subspace of $\hc_{\mathbf{q},c}$ spanned by the elements of (\ref{e:osp12}) with $\sqrt{t}$ replaced by $\mathbf{q}$.
Note that when $c=0$, $\mathfrak{g}$ is a Lie superalgebra concentrated in degree $0$ inside $\hc_{\mathbf{q},0}$, and we still denote the even and odd parts by $\mathfrak{g}_{\overline{0}}$ and $\mathfrak{g}_{\overline{1}}$ (but we remark that the $\mathbb{Z}_2$-grading of $\mathfrak{g}$ is not compatible with the $\mathbb{Z}$-grading of $\hc_{\mathbf{q},c}$). Let $\mathsf{A}_{\mathbf{q},c} = \Cent_{\mathsf{H}_{\mathbf{q},c}}(\mathfrak{g}_{\overline{0}})$ and $O_{\mathbf{q},c} = \Cent_{\hc_{\mathbf{q},c}}(\mathfrak{g})$. It is straightforward to check that $\Cent_{\hc_{\mathbf{q},c}}(\mathfrak{g}_{\bar0}) = \mathsf{A}_{\mathbf{q},c}\otimes \clif$ and that the following assertions hold true (the equivalent proofs in \cite{Os21} generalise to include $\mathbf{q}$ in a straightforward way): \begin{itemize} \item $\Cent_{\hc_{\mathbf{q},c}}(\mathfrak{g}) = P(\mathsf{A}_{\mathbf{q},c}\otimes \clif)$, where $P = \textup{Id} - \ad(F^-)\ad(F^+)$ is the projection operator $P:\Cent_{\hc_{\mathbf{q},c}}(\mathfrak{g}_{\overline{0}})\to \Cent_{\hc_{\mathbf{q},c}}(\mathfrak{g})$, \item $\rho(\tilde W)$ and the elements $O_A = -\tfrac{\mathbf{q}^2}{2}P(e_A)$, with $A\subseteq \{1,\ldots, d\}$, generate $O_{\mathbf{q},c}$ as an associative algebra over $\mathbb{C}[\mathbf{q},\mathbf{q}^{-1}]$. \end{itemize} \begin{proposition}\label{p:tamaiso} When $O_{\mathbf{q},c}$ is equipped with the filtration induced by the filtration $\mathcal{G}^{(m)}$ on $\hc_{\mathbf{q},c}$ as in Corollary \ref{c:GrgenRCA}, there is an isomorphism $\mathsf{Gr}(O_{\mathbf{q},c})\cong O_{\mathbf{q},0}$ of $\mathbb{Z}$-graded $\mathbb{C}$-algebras. \end{proposition} \begin{proof} We note that for any $A\subset\{1,\ldots,d\}$ we have \begin{align*} -\frac{\mathbf{q}^2}{2}P(e_{A}) &= O_{A}\\ &=\left( \sum_{\{a,b\} \subset A}M_{ab}e_{ab} + \sum_{a \in A} \oO_ae_a + \frac{\mathbf{q}^2(|A|-1)}{2} \right) e_A\\ &\equiv \left( \sum_{\{a,b\} \subset A}M_{ab}e_{ab} + \frac{\mathbf{q}^2(|A|-1)}{2} \right) e_A \end{align*} modulo $\mathcal{G}^{(1)}$. The algebras $O_{\mathbf{q},c}$ and $O_{\mathbf{q},0}$ are in $\hc_{\mathbf{q},c}$ and $\hc_{\mathbf{q},0}$, respectively, and from Corollary \ref{c:GrgenRCA}, we have $\mathsf{Gr}(\hc_{\mathbf{q},c}) \cong \hc_{\mathbf{q},0}$. Therefore, $\mathsf{Gr}(O_{\mathbf{q},c})$ is a subalgebra of $\hc_{\mathbf{q},0}$. The equation above shows that the subalgebras $\mathsf{Gr}(O_{\mathbf{q},c})$ and $O_{\mathbf{q},0}$ of $\hc_{\mathbf{q},0}$ coincide, from which the result follows. \end{proof} \begin{theorem}\label{t:TamaCent} The graded centre of $O_{t,c}$ is the polynomial ring $\mathbb{C}[\mathbb{S}]$, where $\mathbb{S} =\Omega_\mathfrak{osp}$ if $w_0 \neq(-1)_V$ and $\mathbb{S} =\mathcal{S} w_0$ if $w_0 = (-1)_V$; here $w_0$ denotes the longest element of $W$. \end{theorem} \begin{proof} From Proposition \ref{p:ClassCenter}, the centre $Z^{\textup{gr}}(O_{t,0})$ is generated by $\mathbb{S}$. We argue inclusion in both directions to show $Z^{\textup{gr}}(O_{t,c}) \cong Z^{\textup{gr}}(O_{t,0})$. Any polynomial in $\mathbb{S}$ is central in $O_{t,c}$ and powers of $\mathbb{S}$ are linearly independent. Thus, there is an injective map from $Z^{\textup{gr}}(O_{t,0})$ to $Z^{\textup{gr}}(O_{t,c})$. Any element $z$ in the centre $Z^{\textup{gr}}(O_{\mathbf{q},c})$ must be such that $\llbracket z,a \rrbracket = 0$ for all $a \in O_{\mathbf{q},c}$. Using Proposition \ref{p:bracket} (which extends immediately to the graded bracket), $\llbracket z,a \rrbracket_0 = 0$ for all $a \in O_{\mathbf{q},0}$. Hence, $z$ is in the centre of the associated graded algebra.
Because $\mathsf{Gr}(O_{\mathbf{q},c})$ is isomorphic to $O_{\mathbf{q},0}$, the centre $Z^{\textup{gr}}(\mathsf{Gr}(O_{\mathbf{q},c}))$ is isomorphic to $Z^{\textup{gr}}(O_{\mathbf{q},0})$. Therefore, we have the inclusion $Z^{\textup{gr}}(O_{\mathbf{q},c}) \subset Z^{\textup{gr}}(O_{\mathbf{q},0})$. Specialising $\mathbf{q}$ to $\sqrt{t}$ proves that $Z^{\textup{gr}}(O_{t,c}) \subset Z^{\textup{gr}}(O_{t,0})$. \end{proof} \begin{corollary} The projection map $P = \textup{Id} - \ad(F^-)\ad(F^+)$ is a vector space isomorphism between $Z(\mathsf{A}_{t,c})$ and $\centre^{\textup{gr}}(O_{t,c})$. \end{corollary} \begin{proof} Recall that $w_0$ denotes the longest element of $W$. In \cite{CDM22,FH15} (see also \cite[Remark 3.3]{FH22}) it was proved that $Z(\mathsf{A}_{t,c})$ is the univariate polynomial ring $\mathcal{R}[\Omega_{\mathfrak{sl}(2)}]$, where $\mathcal{R}=\mathbb{C}$ if $ (-1)_V$ is not in $W$ and $\mathcal{R} = \mathbb{C}[w_0]$ if $w_0=(-1)_V$ is in $W$. One computes \begin{equation}\label{e:projScasi} P(\mathcal{S}) = (-2)( \Omega_{\mathfrak{osp}} + \tfrac{1}{4}), \end{equation} so that using $\Omega_{\mathfrak{sl}(2)} = \Omega_{\mathfrak{osp}} - \mathcal{S} -\tfrac{1}{2}$, one obtains \begin{equation}\label{e:projslcasi} P(\Omega_{\mathfrak{sl}(2)}) = 3 \Omega_{\mathfrak{osp}}. \end{equation} Furthermore, when $w_0 = (-1)_V$, \begin{equation}\label{e:projlongest} P(w_0) = (-2)\mathcal{S} w_0. \end{equation} So from Theorem \ref{t:TamaCent}, in any case, the generators of $Z(\mathsf{A}_{t,c})$ are sent to generators of $\centre^{\textup{gr}}(O_{t,c})$. However, the projection operator is not an algebra homomorphism when restricted to $Z(\mathsf{A}_{t,c})$. Notwithstanding, we claim that, for all $m\in\mathbb{Z}_{\geq 1}$, there exists $a_m\neq 0$ and a polynomial $q_{m-1}\in\mathbb{C}[\Omega_{\mathfrak{osp}}]$ of degree strictly smaller than $m$ such that $P(\Omega_{\mathfrak{sl}(2)}^m) = a_m\Omega_{\mathfrak{osp}}^m + q_{m-1}$. Indeed, the base case $m=1$ is (\ref{e:projslcasi}) with $a_1=3$. Assuming it holds true for $m$, note that \begin{equation}\label{e:mpower} \Omega_{\mathfrak{sl}(2)}^{m+1} = (\Omega_{\mathfrak{osp}} - \mathcal{S} -\tfrac{1}{2})\Omega_{\mathfrak{sl}(2)}^{m} = \Omega_{\mathfrak{osp}}\Omega_{\mathfrak{sl}(2)}^{m} - \mathcal{S}\Omega_{\mathfrak{sl}(2)}^{m} -\tfrac{1}{2}\Omega_{\mathfrak{sl}(2)}^{m}. \end{equation} The important property we shall use is that $P(ST) = P(S)T$, whenever $T$ is already an element of $O_{t,c}$. Now, since $\mathcal{S}+\tfrac{1}{2}$ commutes with $\Omega_{\mathfrak{osp}}$, we can use the binomial formula to expand $\Omega_{\mathfrak{sl}(2)}^{m} =( \Omega_{\mathfrak{osp}} - (\mathcal{S} + \tfrac{1}{2}))^m$. Using (\ref{e:projScasi}), we get \begin{equation}\label{e:Smpower} P(\mathcal{S}\Omega_{\mathfrak{sl}(2)}^{m}) = (-2)\Omega_{\mathfrak{osp}}^{m+1} + p_{m}, \end{equation} where $p_m$ is a polynomial in $\Omega_{\mathfrak{osp}}$ of degree at most $m$. Using (\ref{e:mpower}), (\ref{e:Smpower}) and the inductive hypothesis, we get \[ P(\Omega_{\mathfrak{sl}(2)}^{m+1}) = a_{m+1}\Omega_{\mathfrak{osp}}^{m+1} + q_{m} \] with $a_{m+1} = a_m + 2$, proving our claim. We thus conclude that $P$ maps $\mathbb{C}[\Omega_{\mathfrak{sl}(2)}]$ isomorphically onto $\mathbb{C}[\Omega_{\mathfrak{osp}}]\subseteq \centre^{\textup{gr}}(O_{t,c})$, as a linear map. This settles the proof in the case when $(-1)_V$ is not in $W$. Now suppose that $w_0 = (-1)_V$.
The above argument shows that $P$ induces a linear isomorphism $\mathbb{C}[\Omega_{\mathfrak{sl}(2)}] \cong \mathbb{C}[\mathbb{S}^2]$. To conclude our proof, we need to show that $P$ maps $\Omega_{\mathfrak{sl}(2)}^mw_0$ to $a_m\mathbb{S}^{2m + 1} + q_{m}$, with $a_m\neq 0$ and $q_{m}$ a polynomial in $\mathbb{S} = \mathcal{S} w_0$ of degree\footnote{We can conclude that $q_m$ is an \emph{odd} polynomial in $\mathbb{S}$, but this is not essential in this proof.} at most $2m$. We use, once again, the binomial formula to expand $\Omega_{\mathfrak{sl}(2)}^{m}w_0 =( \Omega_{\mathfrak{osp}} - (\mathcal{S} + \tfrac{1}{2}))^mw_0 = \Omega_{\mathfrak{osp}}^mw_0 + p_m$, where we can interpret $p_m$ as a polynomial in $\Omega_{\mathfrak{osp}},\mathcal{S}$ and $w_0$. Noting that $\Omega_{\mathfrak{osp}},\mathbb{S} = \mathcal{S} w_0\in O_{t,c}$, we apply $P$ to $\Omega_{\mathfrak{sl}(2)}^{m}w_0$. From $\Omega_\mathfrak{osp}^m = \mathbb{S}^{2m} + r_m$ (with $r_m\in \mathbb{C}[\mathbb{S}]$ of degree less than $2m$) and (\ref{e:projlongest}), the result follows. \end{proof} \section{Analogue of the Vogan morphism}\label{s:Vogan} The rational Cherednik algebra $\mathsf{H}_{t,c}$ is endowed with an anti-involution $\ast$ defined as follows: \begin{equation}\label{eq:bullet} w^\ast = w^{-1}, \quad x_i^\ast = y_i, \quad y_i^\ast = x_i, \end{equation} for all $ w \in W$, with $\{y_1,\ldots,y_d\}$ and $\{x_1,\ldots,x_d\}$ any fixed pair of dual bases for $V$ and $V^*$. We define an anti-involution on $\clif$ by $\gamma^*= (-1)^{|\gamma|}\gamma^t$. Here, if $\gamma = \eta_1 \dotsm \eta_p$ then $\gamma^t= \eta_p \dotsm \eta_1$. We extend these anti-involutions to $\mathsf{H}_{t,c} \otimes \clif$ by defining $\bullet: \mathsf{H}_{t,c} \otimes \clif \to \mathsf{H}_{t,c} \otimes \clif$ where $\bullet = \ast \otimes \ast$. The algebra $O_{t,c}$ inherits the anti-involution $\bullet$ from $\mathsf{H}_{t,c} \otimes \clif$. \begin{definition}\label{d:WGamma} We define the element $\D$ by \[ \D = \Gamma \mathcal{S} \in O_{t,c}. \] \end{definition} \begin{remark} In the classical case $c =0$ (and trivial $W$), the eigenvalues of the element $\D$ acting on the appropriate polynomial-spinor space are square roots of the total angular momentum quantum numbers. \end{remark} \begin{remark} In \cite{Os21}, a projection operator $P$ (which we use in Proposition \ref{p:tamaiso}) is defined from the centraliser of $\mathfrak{sl}(2)$ to $O_{t,c}$. We have that $\D = -P(\Gamma)/2$, the projection of the chirality (or pseudo-scalar) element~\eqref{e:Gamma}. For this reason we shall refer to $\D$ as the projected chirality operator. \end{remark} \noindent Because the projection $P$ takes $\Cent_{\hc_{t,c}}(\mathfrak{g}_{\overline{0}})$ to $O_{t,c}$ and $\Gamma$ is in $\clif \subset \Cent_{\hc_{t,c}}(\mathfrak{g}_{\overline{0}})$, the projected chirality operator $\D$ is in $O_{t,c}$. \begin{proposition} The element $\D$ is self-adjoint, that is, \[ {\D}^\bullet = \D. \] \end{proposition} \begin{proof} Note that $\Gamma^\bullet= \Gamma^* = \Gamma$. Furthermore $(F^\pm)^\bullet = -F^\mp$. Hence, \[ \mathcal{S}^\bullet = \left(F^-F^+- F^+F^- - \frac{1}{2}\right)^\bullet = \left( (F^+)^\bullet(F^-)^\bullet-(F^-)^\bullet(F^+)^\bullet -\frac{1}{2} \right) = \mathcal{S}. \] The projected chirality operator is a product of two commuting self-adjoint operators and is therefore self-adjoint.
\end{proof} \begin{proposition}\label{p::Dsquare} In $O_{t,c}$, the element $\D$ is a square root of $\Omega_{\mathfrak{osp}} + \frac{1}{4}$: \[ (\D)^2 = \Omega_{\mathfrak{osp}} + \frac{1}{4}. \] \end{proposition} \begin{proof} The square of $\D$ is equal to the square of $\mathcal{S}$. The proposition follows from Proposition \ref{p:scasisquare}, which is the corresponding statement for the Scasimir element $\mathcal{S}$ of $\mathfrak{osp}(1|2)$. \end{proof} We define a function $\epsilon: \mathbb{C}\tilde{W}_- \to \{\pm 1\}$ such that, for every homogeneous $\rhow \in \mathbb{C}\tilde{W}_-$, \[ \D \rhow= \epsilon (\rhow) \rhow\D. \] If the dimension $d$ of $V$ is odd, then $\epsilon(\rhow) = 1$ for all $\rhow \in \mathbb{C}\tilde{W}_-$. If $d$ is even, then $\epsilon(\rhow) = (-1)^{|\rhow|}$ for homogeneous $\rhow\in \mathbb{C}\tilde{W}_-$, where $|\rhow|$ is the $\mathbb{Z}_2$-grading of $\rhow$. \begin{definition} We define the $\epsilon$-centre of $\mathbb{C}\tilde{W}_-$ to be: \[ Z^\epsilon(\mathbb{C}\tilde{W}_-) = \{ a \in \mathbb{C}\tilde{W}_- : a b = \epsilon(b) b a, \text { for all } b \in \mathbb{C}\tilde{W}_-\}. \] Furthermore, we say an element is $\epsilon$-central if it is contained in the $\epsilon$-centre. \end{definition} \noindent Since $\epsilon$ takes values in $\{-1,1\}$, if two elements $\C,\nu$ are $\epsilon$-central, then their product $\C\nu$ is central (in the ungraded sense). Furthermore, if both $\C,\nu$ have the same $\mathbb{Z}_2$-degree, then $\C\nu$ is even and also central, in the graded sense. \begin{definition}\label{d:Dirac_C} A homogeneous element $\C\in \mathbb{C}\tilde{W}_-$ is called {\bf admissible} if $\C$ is $\epsilon$-central and $ \C^\bullet = \C$. We will denote by $\mathfrak{A} = \mathfrak{A}(\mathbb{C}\tilde{W}_-)$ the set of admissible elements. For any admissible $\C\in \mathfrak{A}$, define \begin{equation}\label{e:CDirac} \D_\C := \D + \rho(\C) \in O_{t,c}. \end{equation} \end{definition} \begin{lemma} The square of $\D_\C$ can be written as \[ (\D_\C)^2 = \Omega_\mathfrak{osp} + \rho(\C)^2 + (1+\epsilon(\rho(\C))) \rho(\C) \D + \frac{1}{4}. \] \end{lemma} \begin{proof} The following calculation uses Proposition \ref{p::Dsquare} to compute the square of $\D_\C$.
\begin{equation} \begin{aligned} (\D_\C)^2 &= (\D + \rho(\C))^2 \\ &= (\D)^2 + \rho(\C)^2 + \D \rho(\C) + \rho(\C) \D \\ &= (\D)^2 + \rho(\C)^2 + \epsilon(\rho(\C))\rho(\C)\D + \rho(\C) \D \\ &= \Omega_\mathfrak{osp} + \frac{1}{4} + \rho(\C)^2 + (\epsilon(\rho(\C))+1)\rho(\C)\D, \end{aligned} \end{equation} finishing the proof. \end{proof} \begin{remark} In the equation for $\D_\C^2$ above, the element $\Omega_\mathfrak{osp}$ is central in $O_{t,c}$ and $\rho(\C)^2$ is central in $\mathbb{C}\tilde{W}_-$. \end{remark} We now prove an analogue of Vogan's conjecture for the algebra $O_{t,c}$. Thus, for every choice of $\C$ we can relate the centre of $O_{t,c}$ with the ungraded centre $\centre^{\textup{ug}}(\mathbb{C}\tilde{W}_-)$. This will allow us, once we have constructed the $\D_\C$-cohomology (Definition \ref{d:DiracCoh}), to relate the action of the centres of these algebras on $O_{t,c}$-modules if the cohomology is non-zero. \begin{theorem}\label{t:Vogan} Given an admissible $\C \in \mathbb{C}\tilde{W}_-$, there is an algebra homomorphism \[ \zeta_\C: \centre^{\textup{gr}}(O_{t,c}) \to \centre^{\textup{ug}}(\mathbb{C}\tilde{W}_-) \] such that, for all $z \in \centre^{\textup{gr}}(O_{t,c})$ there exists $a\in O_{t,c}$ satisfying \begin{equation}\label{e:zetaC} z = \zeta_\C(z) + \D_\C a + a\D_\C. \end{equation} \end{theorem} \begin{proof} The proof here employs identical ideas to the proof of \cite[Theorem 5.4]{CDM22}. By Theorem \ref{t:TamaCent}, the centre of $O_{t,c}$ is polynomial in the element \[ \mathbb{S} = \begin{cases} \Omega_\mathfrak{osp} & \text{ if } w_0 \neq (-1)_V, \\ \D\wlong & \text{ if } w_0 = (-1)_V. \end{cases} \] Furthermore, we have shown that \begin{align*} \Omega_\mathfrak{osp} &= (\D_\C)^2 +\rho(\C)^2 - \frac{1}{4} -\D_\C \rho(\C) -\rho(\C)\D_\C\\ &= \D_\C (\tfrac{1}{2} \D_\C - \rho(\C)) + (\tfrac{1}{2}\D_\C - \rho(\C))\D_\C + \rho(\C)^2 - \frac{1}{4}. \end{align*} We split the proof, depending on $w_0$. If $w_0 \neq (-1)_V$, we conclude that if $\zeta_\C$ exists it must be such that $\zeta_\C(\Omega_\mathfrak{osp}) = \rho(\C)^2 - \frac{1}{4}$. We define an algebra homomorphism $\zeta_\C: \centre^{\textup{gr}}(O_{t,c}) \to \centre^{\textup{ug}}(\mathbb{C}\tilde{W}_-)$ by $\zeta_\C(\Omega_\mathfrak{osp}^k) = (\rho(\C)^2- \frac{1}{4})^k$ and extend linearly. We will prove that $\zeta_\C$ satisfies condition (\ref{e:zetaC}). Since this condition is linear and the powers $\Omega_\mathfrak{osp}^k$ form a basis for $\centre^{\textup{gr}}(O_{t,c})$, we are left to prove that there exists an $a_k \in O_{t,c}$ such that \[ \Omega_\mathfrak{osp}^k = \D_\C a_k + a_k\D_\C + \left(\rho(\C)^2-\frac{1}{4}\right)^k. \] We prove this by induction: let $a_{k-1}$ be such that $\Omega_\mathfrak{osp}^{k-1} = \D_\C a_{k-1} + a_{k-1}\D_\C + (\rho(\C)^2-\frac{1}{4})^{k-1}$. Multiplying by the equality $\Omega_\mathfrak{osp}= \D_\C a_1 + a_1\D_\C + \rho(\C)^2 - \frac{1}{4}$, we now have that $\Omega_\mathfrak{osp}^k$ is equal to \[ \left(\D_\C a_{k-1} + a_{k-1}\D_\C + (\rho(\C)^2-\tfrac{1}{4})^{k-1}\right)\left( \D_\C a_1 + a_1\D_\C + \rho(\C)^2 -\frac{1}{4}\right ) \] so that \[ \Omega_\mathfrak{osp}^k = \D_\C a_k + a_k\D_\C + (\rho(\C)^2-\frac{1}{4})^k \] where $a_k = a_{k-1} (\rho(\C)^2- \frac{1}{4}) + a_1 (\rho(\C)^2- \frac{1}{4})^{k-1} + 2 a_{k-1}\D_\C a_1 + a_{k-1}(1-\epsilon(\rho(\C)))\rho(\C)\D_\C $.
We have proved that, if $w_0 \neq (-1)_V$, then $\zeta_\C$ satisfies condition (\ref{e:zetaC}). If $w_0 = (-1)_V$, then the generator of $\centre^{\textup{gr}}(O_{t,c})$ is equal to $\D\wlong$. Using Definition \ref{d:Dirac_C} of $\D_\C$, we have the equality \[\mathbb{S} =\D\wlong = -\rho(\C)\wlong + \D_\C\wlong. \] Here $-\rho(\C)\wlong \in \centre^{\textup{ug}}(\mathbb{C}\tilde{W}_-)$. We conclude that if $\zeta_\C$ exists it must be such that $\zeta_\C(\D\wlong) = -\rho(\C)\wlong \in \centre^{\textup{ug}}(\mathbb{C}\tilde{W}_-)$. The check that $\zeta_\C$ can be extended to a homomorphism with property (\ref{e:zetaC}) is identical to the case above where $w_0$ was not equal to $(-1)_V$, and we omit the details. \end{proof} \subsection{Unitary structures} In this section we define the notion of $\bullet$-Hermitian forms, introduce the $\D_\C$-cohomology and prove that, when the $\D_\C$-cohomology is non-zero, the central character of an $O_{t,c}$-module $(\pi,X)$ matches the character associated with any irreducible $\tilde{W}$-submodule of the given cohomology. \begin{definition} Let $(\pi,X)$ be an $O_{t,c}$-module. A sesquilinear form $(-,-)_X$ on $X$ is called $\bullet$-Hermitian if \[ ( \pi(\eta)x_1,x_2)_X = ( x_1,\pi(\eta^\bullet) x_2)_X \text{ for every } x_1,x_2 \in X, \eta \in O_{t,c}. \] Furthermore, $(\pi,X)$ is called $\bullet$-unitary if there exists a positive definite $\bullet$-Hermitian form on $X$. \end{definition} \begin{remark} If $(\pi,X)$ is a module restricted from a $\bullet$-Hermitian $\mathsf{H}_{t,c}\otimes \clif$-module then $(\pi,X)$ is also $\bullet$-Hermitian. \end{remark} \begin{proposition} If $(\pi,X)$ is a $\bullet$-Hermitian $O_{t,c}$-module, then the operators $\pi(\D)$ and $\pi(\D_\C)$, for admissible $\C$, are self-adjoint. Furthermore, if $(\pi,X)$ is $\bullet$-unitary, then \[ (\D_\C^2(x),x)_X \geq 0 \] for all $x \in X$. \end{proposition} \begin{proof} Self-adjointness of $\pi(\D)$ and $\pi(\D_\C)$ follows from ${\D}^\bullet = \D$, the admissibility condition $\C^\bullet = \C$ and the $\bullet$-Hermitian property of the form. For $x \in X$, since $(-,-)_X$ is positive definite, $(\D_\C(x),\D_\C(x))_X \geq 0$. Using that $\D_\C$ is self-adjoint proves that $(\D_\C^2(x),x)_X \geq 0$. \end{proof} \begin{definition}\label{d:DiracCoh} Let $(\pi,X)$ be an $O_{t,c}$-module and $\C$ be an admissible element in $Z^\epsilon(\mathbb{C}\tilde{W}_-)$. The {\bf $\D_\C$-cohomology} is defined by \[ H(X,\C) = \frac{\ker(\pi\D_\C)}{\ker(\pi\D_\C)\cap \im(\pi\D_\C)}. \] \end{definition} \begin{proposition}\label{p:unitarycoh} The $\D_\C$-cohomology $H(X,\C)$ is a $\tilde W$-module. Moreover, if $X$ is a $\bullet$-unitary $O_{t,c}$-module, then $H(X,\C) = \ker(\pi\D_\C)$. \end{proposition} \begin{proof} Since the element $\D_\C$ $\epsilon$-commutes with $\mathbb{C}\tilde{W}_-$, the vector spaces $\ker(\pi\D_\C)$ and $ \im(\pi\D_\C)$ carry a $\tilde{W}$-action. Hence $H(X,\C)$ is a $\tilde{W}$-module. If $X$ is $\bullet$-unitary, then the image and kernel of $\pi\D_\C$ are orthogonal with respect to $(-,-)_X$ and hence $\ker(\pi\D_\C)\cap \im(\pi\D_\C)=\{0\}$. \end{proof} \begin{definition} Let $\C$ be an admissible element and $\zeta_\C:\centre^{\textup{gr}}(O_{t,c})\to \centre^{\textup{ug}}(\mathbb{C}\tilde{W}_-)$ be the homomorphism of Theorem \ref{t:Vogan}. For any irreducible $\tilde{W}$-representation $\tilde{\tau}$, define the homomorphism $\chi_{\tilde{\tau}}: \centre^{\textup{gr}}(O_{t,c})\to\mathbb{C}$ via \[ \chi_{\tilde{\tau}}(z) = \frac{1}{\dim\tilde{\tau}}\Tr( \tilde{\tau}(\zeta_\C(z)) ), \] for any $z$ in $\centre^{\textup{gr}}(O_{t,c})$.
\end{definition} \begin{theorem}\label{t:CentChar} Let $\C$ be an admissible element, $\tilde{\tau}$ be an irreducible $\mathbb{C}\tilde{W}_-$ representation and $(\pi,X)$ be an $O_{t,c}$-module with central character $\chi$. Suppose that \[ \Hom_{\tilde{W}}(\tilde{\tau},H(X,\C))\neq 0. \] Then, $\chi = \chi_{\tilde{\tau}}$. \end{theorem} \noindent Given Theorem \ref{t:Vogan}, the proof of this theorem is identical to those of \cite[Theorem 5.11]{CDM22} and \cite[Theorem 4.5]{BCT12}; we refer the reader to \cite{CDM22} for the details. We conclude that if the $\D_\C$-cohomology of $X$ is non-zero then we can describe the central character $\chi$ using the $\tilde{W}$-module structure of $H(X,\C)$. \section{Admissible elements} The final part of this paper compiles descriptions of the admissible elements and a criterion for non-zero $\D_\C$-cohomology of $\bullet$-unitary modules. \subsection{The \texorpdfstring{$\epsilon$}{epsilon}-centre of \texorpdfstring{$\mathbb{C}\tilde{W}_-$}{W-}} Before describing the admissible elements we start with a description of the $\epsilon$-centre $Z^\epsilon(\mathbb{C}\tilde{W}_-)$. We break this section into two parts, depending on whether $d$ is even or odd. Throughout we shall need the notion of conjugacy classes ``splitting'' in different contexts (see \cite{St89}), so we formalize what we need below. It is worth noting that parity is used in two independent senses: for the integer $d$, and for elements $\tilde{g} \in \tilde{W}$. Recall that $\mathbb{C} \tilde{W}$ and $\mathbb{C}\tilde{W}_-$ are $\mathbb{Z}_2$-graded algebras; a homogeneous element $\tilde{g}$ of $\mathbb{C}\tilde{W}$ or $\mathbb{C}\tilde{W}_-$ is even (respectively odd) when its grading is $0$ (respectively $1 \in \mathbb{Z}_2$). \begin{definition} Given $g\in W$, we say that the conjugacy class $C(g)$ of $g$ splits in $\tilde W$ if $p^{-1}(C(g))$ consists of two conjugacy classes in $\tilde{W}$. The conjugacy class $C(g)$ does not split when $p^{-1}(C(g))$ is a single conjugacy class. Furthermore, considering the subgroup $\tilde{W}_{\overline{0}}\subseteq \tilde{W}$, we say that the conjugacy class $C(\tilde{g})$ of $\tilde{g}$ in $\tilde{W}$ splits in $\tilde{W}_{\overline{0}}$ if it breaks into more than one conjugacy class in the subgroup $\tilde{W}_{\overline{0}}$. \end{definition} Recall that $\epsilon$ is uniformly $1$ when $d$ is odd. \begin{proposition} Let $d$ be odd. Then the $\epsilon$-centre of $\mathbb{C}\tilde{W}_-$ is equal to its ungraded centre. Furthermore, $Z^\epsilon(\mathbb{C}\tilde{W}_-)$ is spanned by the elements \[ \{\frac{1-\theta}{2} T_{\tilde{g}} = \sum_{\tilde{w} \in \tilde{W}} \rho(\tilde{w}^{-1} \tilde{g} \tilde{w}) \in \mathbb{C}\tilde{W}_- : \tilde{g} \in \tilde{W}\}. \]\end{proposition} \begin{proof} The ungraded centre of the group algebra $\mathbb{C}\tilde{W}$ is spanned by conjugacy class sums, that is, by the elements \[ T_{\tilde{g}} = \sum_{\tilde{w} \in \tilde{W}} \tilde{w}^{-1} \tilde{g} \tilde{w} \in \mathbb{C}\tilde{W} \] for $\tilde{g} \in \tilde{W}$. The ungraded centre of $\mathbb{C}\tilde{W}$ projects onto $\centre^{\textup{ug}}(\mathbb{C}\tilde{W}_-)$, and $\frac{1-\theta}{2}T_{\tilde{g}}$ is non-zero exactly when the elements in $p^{-1}(p(\tilde{g}))=\{\tilde{g}, \theta \tilde{g}\}$ lie in different conjugacy classes of $\tilde{W}$. \end{proof} For the rest of this section, we assume $d$ is even. Recall that we then have $\epsilon(\tilde{w})= (-1)^{|\tilde{w}|}$.
To study this case, we need to introduce a new subalgebra of $\mathbb{C}\tilde{W}$. \begin{definition} Let us define the $\theta$-centre of $\mathbb{C}\tilde{W}$, \[ Z^\theta(\mathbb{C}\tilde{W}) = \{ a \in \mathbb{C}\tilde{W} \mid a \tilde{w}= \theta^{|\tilde{w}|}\tilde{w}a \text{ for all } \tilde{w} \in \tilde{W}\}, \] where $|\tilde{w}|$ is the parity of $\tilde{w}$. \end{definition} \begin{proposition} The $\theta$-centre of $\mathbb{C}\tilde{W}$ is spanned by the elements \[ T^\theta_{\tilde{g}} =\sum_{\tilde{w} \in \tilde{W}}\theta^{|\tilde{w}|} \tilde{w}^{-1} \tilde{g} \tilde{w} = \sum_{\tilde{w} \in \tilde{W}_{\overline{0}}} \tilde{w}^{-1} \tilde{g} \tilde{w}+ \theta \sum_{\tilde{w} \in \tilde{W}_{\overline{1}}} \tilde{w}^{-1} \tilde{g} \tilde{w} \] for $\tilde{g} \in \tilde{W}$. \end{proposition} \begin{proof} For any $\tilde{g}$, the element $T^\theta_{\tilde{g}}$ lies in $Z^\theta( \mathbb{C}\tilde{W})$. Conversely, if $a \in Z^\theta (\mathbb{C}\tilde{W})$ has a non-zero coefficient of $\tilde{g}$, then there exists a non-zero scalar $x$ such that $a - xT^\theta_{\tilde{g}}$ is $\theta$-central with zero coefficient of $\tilde{g}$. Continuing this process shows that $a$ lies in the span of the elements $T^\theta_{\tilde{g}}$. \end{proof} \begin{theorem} When $d$ is even, so that $\epsilon(\tilde{w}) = (-1)^{|\tilde{w}|}$, the $\epsilon$-centre of $\mathbb{C}\tilde{W}_-$ is the projection of the $\theta$-centre of $\mathbb{C}\tilde{W}$: \[ Z^\epsilon(\mathbb{C}\tilde{W}_-) = \rho(Z^\theta( \mathbb{C}\tilde{W})). \] \end{theorem} \begin{proof} Recall from (\ref{e:idemptildeW}) that the algebra $\mathbb{C}\tilde{W}$ comes equipped with the idempotents $\theta_{\pm} := \tfrac{1}{2}(1\pm \theta)$ and that the homomorphism $\rho$ of (\ref{e:rho}) satisfies $\rho(\theta) = 1\otimes -1$. We will write here $\rho(\theta) = -1$. Furthermore, let $[-,-]_\theta$ denote the bracket $[a,b]_\theta = ab - \theta^{|b|}ba$, defined for any $\mathbb{Z}_2$-homogeneous elements $a,b\in \mathbb{C}\tilde{W}$. Note that $a\in Z^\theta(\mathbb{C}\tilde W)$ if and only if $[a,\tilde w]_\theta = 0$ for all $\tilde w\in \tilde W$. All that said, we shall show, by double inclusion, that \[ \rho(Z^\theta(\mathbb{C}\tilde W)) = Z^\epsilon(\rho(\mathbb{C}\tilde W)). \] Given $a\in Z^\theta(\mathbb{C}\tilde W)$, note that for any $\tilde w\in \tilde{W}$ \[ 0 = \rho([a,\tilde w]_\theta) = \rho(a)\rho(\tilde w) - \rho(\theta)^{|\tilde w|} \rho(\tilde w)\rho(a) = \rho(a)\rho(\tilde w) - (-1)^{|\tilde w|} \rho(\tilde w)\rho(a), \] showing that $\rho(a)$ is $\epsilon$-central. Conversely, let $\rho(a) \in Z^\epsilon(\rho(\mathbb{C}\tilde W))$. Note that this implies that, for all $\tilde w \in \tilde W$, we have \[ 0 = \rho(a)\rho(\tilde w) - (-1)^{|\tilde w|}\rho(\tilde w)\rho(a) = \rho([a,\tilde w]_\theta) = \rho([\theta_-a,\theta_-\tilde w]_\theta), \] since $\rho(\theta_-)=1$. It follows that $[\theta_-a,\theta_-\tilde w]_\theta$ is in $\theta_+(\mathbb{C}\tilde W)$, the kernel of $\rho$, and hence we must have $[\theta_-a,\theta_-\tilde w]_\theta\in \mathbb{C}\tilde W_+\cap \mathbb{C}\tilde W_- = \{0\}$, for any $\tilde w\in\tilde W$. Now choose any element $a'\in Z^\theta(\mathbb{C} \tilde W)$ and define $b = \theta_+a' + \theta_-a \in \mathbb{C}\tilde W$.
Then, for any $\tilde w\in \tilde W$, we have \begin{align*} [b,\tilde w]_\theta &= [\theta_+a' + \theta_-a,\theta_+\tilde w + \theta_-\tilde w]_\theta\\ &= [\theta_+a',\theta_+\tilde w]_\theta+[\theta_-a,\theta_-\tilde w]_\theta\\ &= 0 , \end{align*} where we used $\theta_+\theta_- = \theta_-\theta_+=0$, $[\theta_+a',\theta_+\tilde w]_\theta = \theta_+[a',\tilde w]_\theta =0$ (as $a'\in Z^\theta(\mathbb{C} \tilde W)$) and also $[\theta_-a,\theta_-\tilde w]_\theta = 0$ (from the assumption we made). Hence, $b\in Z^\theta(\mathbb{C}\tilde W)$ and $\rho(b) = \rho(a) \in Z^\epsilon(\rho(\mathbb{C}\tilde W))$, finishing the proof. \end{proof} \noindent In particular (when $d$ is even), the $\epsilon$-centre of $\mathbb{C}\tilde{W}_-$ is spanned by elements of the form \[ T^{(-1)}_{\tilde{g}} = \frac{1-\theta}{2}T^\theta_{\tilde{g}} = \rho(T^\theta_{\tilde{g}})= \sum_{\tilde{w} \in \tilde{W}}(-1)^{|\tilde{w}|} \rho(\tilde{w}^{-1} \tilde{g} \tilde{w}) \in \mathbb{C}\tilde{W}_-. \] These elements are signed sums of conjugacy class elements and are homogeneous. We now describe precisely when these spanning elements are non-zero. \begin{lemma} \label{l:oddadm1} If $\tilde{g}$ is odd then $T^{(-1)}_{\tilde{g}} =0$. \end{lemma} \begin{proof} If $\tilde{g}$ is odd then $\rho(\tilde{g})^{-1} T^{(-1)}_{\tilde{g}} \rho(\tilde{g}) = - T^{(-1)}_{\tilde{g}}$. Isolating the coefficient of $\rho(\tilde{g})$ in $T^{(-1)}_{\tilde{g}}$ shows that it must be zero, and hence $T^{(-1)}_{\tilde{g}}$ is zero for every odd element $\tilde{g}$. \end{proof} \begin{lemma} \label{l:splitsplit} Suppose that $\tilde{g} \in \tilde{W}$ is even and let $C(\tilde{g}) = \{ \tilde{w }\in \tilde{W}: \tilde{w} = \tilde{h}^{-1}\tilde{g}\tilde{h}, \tilde{h} \in \tilde{W}\}$. Then the element $T^{(-1)}_{\tilde{g}}$ is non-zero if and only if the conjugacy class $C(\tilde{g})$ splits into two conjugacy classes in $\tilde{W}_{\overline{0}}$. \end{lemma} \begin{proof} First assume that $C(\tilde{g})$ splits in $\tilde{W}_{\overline{0}}$ and let $g = p(\tilde{g})$. We divide the proof of this implication depending on whether or not $C(g)$ splits in $\tilde{W}$. First assume that $C(g)$ does not split in $\tilde{W}$ but $C(\tilde{g})$ splits in $\tilde{W}_{\overline{0}}$; then the conjugacy class of $\tilde{g}$ in $\tilde{W}$ is $C(\tilde{g}) = C^+(\tilde{g}) \cup C^-(\tilde{g})$, where $C^\pm(\tilde{g})$ are conjugacy classes in $\tilde{W}_{\overline{0}}$ and $C^\pm(\tilde{g})= \theta C^\mp(\tilde{g})$. Then \[ T^\theta_{\tilde{g}} =\sum_{\tilde{h} \in C^+(\tilde{g})}\tilde{h} + \theta\sum_{\tilde{h} \in C^-(\tilde{g})} \tilde{h}= 2\sum_{\tilde{h} \in C^+(\tilde{g})}\tilde{h} \] and the projection $T^{(-1)}_{\tilde{g}} = 2\sum_{\tilde{h} \in C^+(\tilde{g})}\rho(\tilde{h})$ is non-zero in $Z^\epsilon(\mathbb{C}\tilde{W}_-)$. Now suppose that $C(g)$ splits in $\tilde{W}$ into $C(\tilde{g})$ and $C(\theta\tilde{g}) = \theta C(\tilde{g})$, where $\{\tilde{g},\theta\tilde{g}\}= p^{-1}(g)$. Then, by the assumptions of the lemma, each of these conjugacy classes splits into two conjugacy classes $C^\pm(\tilde{g})$ and $C^\pm(\theta\tilde{g})$ in $\tilde{W}_{\overline{0}}$.
Then $T^\theta_{\tilde{g}} = \sum_{\tilde{h} \in C^+(\tilde{g})}\tilde{h} + \theta\sum_{\tilde{h} \in C^+(\theta\tilde{g})}\tilde{h} $ and \[ T^{(-1)}_{\tilde{g}} = \sum_{\tilde{h} \in C^+(\tilde{g})}\rho(\tilde{h}) - \sum_{\tilde{h} \in C^+(\theta\tilde{g})}\rho(\tilde{h}), \] which is non-zero since $\rho(C^+(\tilde{g}))$ and $\rho(C^+(\theta\tilde{g}))$ are sums over linearly independent elements in $\mathbb{C}\tilde{W}_-$. Suppose now that the conjugacy class $C(\tilde{g})$ of $\tilde{W}$ remains a single conjugacy class in $\tilde{W}_{\overline{0}}$. Then $T^\theta_{\tilde{g}} = \sum_{\tilde{w} \in \tilde{W}_{\overline{0}}} \tilde{w}^{-1} \tilde{g} \tilde{w}+\theta \sum_{\tilde{w} \in \tilde{W}_{\overline{1}}} \tilde{w}^{-1} \tilde{g} \tilde{w}$, so that \[ T^\theta_{\tilde{g}} = (1+\theta) \sum_{\tilde{w} \in \tilde{W}_{\overline{0}}} \tilde{w}^{-1} \tilde{g} \tilde{w}. \] Hence $T^{(-1)}_{\tilde{g}} = \frac{1-\theta}{2}T^\theta_{\tilde{g}} = 0$ because $(1+\theta) (1-\theta) = 0$. \end{proof} \subsection{The real subspace of admissible elements} We have now described the $\epsilon$-centre of $\mathbb{C}\tilde{W}_-$. The following two propositions (split into the cases $d$ odd and $d$ even) describe the set $\mathfrak{A}$ of admissible elements (Definition \ref{d:Dirac_C}), that is, the elements that are $\epsilon$-central and self-adjoint. \begin{proposition}\label{l:oddadm} Let $d$ be odd, so that $\epsilon(w) = 1$ for all $w$. Then $\mathfrak{A} = \centre^{\textup{ug}}_{\overline{0}} (\mathbb{R}\tilde{W}_-) + i \centre^{\textup{ug}}_{\overline{1}}(\mathbb{R}\tilde{W}_-)$. \end{proposition} \begin{proof} If $d$ is odd then the set of admissible elements $\mathfrak{A}$ is the intersection of $\centre^{\textup{ug}}(\mathbb{C}\tilde{W}_-)$ with the set of self-adjoint elements in $\mathbb{C}\tilde{W}_-$. Fix a basis $B = \{\prod s_\alpha\otimes \alpha/{|\alpha|}: w \in W, w = \prod s_\alpha \text{ is a fixed word for } w \textup{ with }\alpha \textup{ simple}\}$ of $\rho(\mathbb{C}\tilde W) \cong \mathbb{C}\tilde W_-$. Note that $B$ has size $|W|$. Since $(\alpha/{|\alpha|})^\bullet = -\alpha/{|\alpha|}$ and $B = B_{\overline 0}\cup B_{\overline 1}$ consists of homogeneous elements, we have $B_{\overline{0}}^\bullet = B_{\overline{0}}$ and $B_{\overline{1}}^\bullet = -B_{\overline{1}}$, and hence $(iB_{\overline{1}})^\bullet = iB_{\overline{1}}$. Since $B_{\overline{0}} \cup iB_{\overline{1}}$ is a $\bullet$-invariant basis and the even and odd parts of any admissible element are again admissible, the result follows. \end{proof} The ungraded centre of the group algebra $\mathbb{C}\tilde{W}$ is spanned by the conjugacy class sums. The algebra $\mathbb{C} \tilde{W}_-$ is both a quotient and a subalgebra of $\mathbb{C}\tilde{W}$, and the ungraded centre of $\mathbb{C}\tilde{W}_-$ is exactly $\frac{1-\theta}{2}\centre^{\textup{ug}}(\mathbb{C}\tilde{W})$. \begin{lemma}\label{l:noodd} The space $Z^\epsilon_{\overline{1}}(\mathbb{C}\tilde{W}_-)$ is zero. \end{lemma} \begin{proof} The space $Z^\epsilon_{\overline{1}}(\mathbb{C} \tilde{W}_-)$ is the span of the elements $T^{(-1)}_{\tilde{g}}$ for odd $\tilde{g}$. However, Lemma \ref{l:oddadm1} shows that these elements are always zero. \end{proof} \begin{proposition} Let $d$ be even and let $\mathfrak{A}$ be the set of the admissible elements in $\mathbb{C}\tilde{W}_-$. In this case $\epsilon(w) = -1$ for all odd $w$. Then $\mathfrak{A} = Z^\epsilon_{\overline{0}} (\mathbb{R}\tilde{W}_-) $.
\end{proposition} \begin{proof} The same argument as in the proof of Proposition \ref{l:oddadm} applies here. Note that the $\epsilon$-centre splits into homogeneous parts: $Z^\epsilon(\mathbb{C}\tilde{W}_-) = Z^\epsilon_{\overline{0}}(\mathbb{C} \tilde{W}_-) \oplus Z^\epsilon_{\overline{1}}(\mathbb{C}\tilde{W}_-) $. The proposition follows by intersecting with the real subspace of self-adjoint elements and applying Lemma \ref{l:noodd}, which states that $Z^\epsilon_{\overline{1}}(\mathbb{C}\tilde{W}_-)=0$.\end{proof} \begin{theorem}\label{t:oddadm} Let $d$ be odd. Let $S_{\mathrm{split}}$ be a set of representatives of the conjugacy classes in $W$ which split in $\tilde{W}$. That is, \[S_{\mathrm{split}} = \{\text{Representatives of conjugacy classes}\} \cap\{g \in W: C(g) \text{ splits in } \tilde{W}\}. \] Then the set of admissible elements in $\mathbb{C}\tilde{W}_-$ is equal to the real span of the linearly independent set \[ \{\sqrt{-1}^{|\tilde{g}|}\frac{1-\theta}{2}T_{\tilde{g}}: g \in S_{\mathrm{split}} \}. \] \end{theorem} \begin{proof} By Proposition \ref{l:oddadm}, the admissible elements form the real span of the even part of the ungraded centre plus the imaginary span of its odd part. The claim then follows from the observation that the ungraded centre $\centre^{\textup{ug}}(\mathbb{C}\tilde{W}_-)$ is spanned by $\{\frac{1-\theta}{2}T_{\tilde{g}}: g \in S_{\mathrm{split}}\}$, where $\tilde{g}$ denotes a preimage of $g$. \end{proof} When $d$ is odd, the real dimension of the space of admissible elements is equal to the number of irreducible genuine projective representations of $W$, that is, the representations of $\tilde{W}$ which do not factor through $\mathbb{C} W$. \begin{theorem}\label{t:evenadm} Let $d$ be even. Let $S^\epsilon_{\mathrm{split}}$ be a set of representatives of the conjugacy classes in $\tilde{W}$ which split in $\tilde{W}_{\overline{0}}$. Then the set of admissible elements in $\mathbb{C}\tilde{W}_-$ has real basis \[\{T^{(-1)}_{\tilde{g}}: \tilde{g} \in S^\epsilon_{\mathrm{split}}\}. \] \end{theorem} \begin{proof} This is similar to the proof of Theorem \ref{t:oddadm}. In this setting the admissible elements are the real span of the even $\epsilon$-central elements plus the imaginary span of the odd $\epsilon$-central elements, of which there are none (by Lemma \ref{l:noodd}). The even $\epsilon$-central elements are spanned by \[ \{T^{(-1)}_{\tilde{g}}: C(\tilde{g}) \subset \tilde{W} \text{ splits into two conjugacy classes in } \tilde{W}_{\overline{0}}\}. \] The theorem follows. \end{proof} For $n\in \mathbb{N}$, we now state two theorems describing when conjugacy classes of the symmetric group $S_n$ split when considered in $\tilde{S}_n$ and $\tilde{A}_n$. We then draw Corollary \ref{c:oddsymmadm} from Theorems \ref{t:oddadm} and \ref{t:evenadm}, describing the admissible elements in $\mathbb{C}\tilde{W}_-$ for $W=S_n$ acting on any vector space $V \cong\mathbb{C}^d$. Note that the $W$-module $V$ need not be irreducible or essential. \begin{theorem} \cite[p. 1721]{Sc11} The conjugacy classes of $S_n$ which split into two conjugacy classes of $\tilde{S}_n$ are precisely those with all cycles of odd length, or those that are odd and have cycles of distinct lengths. \end{theorem} \begin{theorem}\cite{Sc11} \cite[Theorem 2.7]{St89} Let $\lambda$ be an even partition of $n$. The $\tilde{S}_n$ conjugacy class $C_\lambda$ (or the classes $C_\lambda^{\pm}$ if already split) splits into two $\tilde{A}_n$ conjugacy classes if and only if $\lambda \in DP_n^+$.
Here $DP_n^+$ is the set of partitions of $n$ with distinct parts which are even. \begin{corollary} \label{l:typeAex} Let $W=S_n$ and let $g$ be an element of $S_n$ whose cycle type is an even partition $\lambda$ with distinct parts. Then $C(\tilde{g})$ splits in $\tilde{A}_n = (\tilde{S}_n)_{\overline{0}}$, and hence, by Lemma \ref{l:splitsplit}, $T^{(-1)}_{\tilde{g}} \neq 0$ and is an admissible element. \end{corollary} \begin{corollary}\label{c:oddsymmadm} Let $d$ be odd and $W = S_n$ acting on $V \cong \mathbb{C}^d$. Then the set of admissible elements in $\mathbb{C}\tilde{W}_-$ is equal to the real span of the set \[\{T_{\tilde{g}}: g \text{ has partition with no even cycles}\}.\] Let $d$ be even and $W = S_n$. Then the set of admissible elements in $\mathbb{C}\tilde{W}_-$ is equal to the real span of the set $\{T^{(-1)}_{\tilde{g}}: g \text{ has partition } \lambda \in DP_n^+\}$. For instance, when $d$ is even and $W = S_4$, the only such partition is $\lambda = (3,1)$. \end{corollary} \subsection{Non-zero \texorpdfstring{$\D_\C$}{}-cohomology} \begin{proposition} Suppose that $(\pi,X)$ is $\bullet$-unitary. Since $\Omega_\mathfrak{osp}$ and $\C$ are self-adjoint, the operators $\pi(\Omega_\mathfrak{osp})$ and $\pi(\rho(\C))$ have real eigenvalues. \end{proposition} \begin{proposition}\label{p:nonzerocoh} Let $(\pi,X)$ be a $\bullet$-unitary module for $O_{t,c}$. Suppose that there exists an admissible element $\C$ such that $\pi(\rho(\C)) \neq 0$. Then, there exists an admissible $\C' \in \mathbb{C} \tilde{W}_-$ such that $H(X,\C') \neq 0$. \end{proposition} \begin{proof} Since $X$ is $\bullet$-unitary, we have $H(X,\C) = \ker(\pi\D_\C) = \ker (\pi\D_\C^2)$. We study the kernel of the operator $(\D_\C -2\rho(\C))\D_\C =\Omega_\mathfrak{osp} +\frac14 + \rho(\C)^2$. The elements $\Omega_\mathfrak{osp}$ and $\rho(\C)$ commute, and hence admit simultaneous eigenvectors. Since $\pi(\rho(\C)) \neq 0$ is self-adjoint, $\pi(\rho(\C))^2$ has a positive real eigenvalue. One can then modify $\D_\C$ to $\D_{\lambda\C}$, for a suitable scalar $\lambda$, to ensure that \[ \Omega_\mathfrak{osp} +\frac14 + \lambda^2\rho(\C)^2 \] has a non-zero kernel. This proves that the operator $(\D_\C -(1+\epsilon(\rho(\C)))\rho(\C))\D_\C$ has non-zero kernel. It is impossible for both $\D_\C$ and $\D_\C-(1+\epsilon(\rho(\C)))\rho(\C) = \D_{-\epsilon(\rho(\C))\C}$ to have zero kernel. Hence there exists an admissible element which gives non-zero $\D_\C$-cohomology. \end{proof} \noindent If $W=S_d$, then $\centre^{\textup{ug}}_{\overline{0}}(\mathbb{R}\tilde{W}_-)$ consists of the symmetric polynomials in the squares of the Jucys--Murphy elements. Furthermore, $\centre^{\textup{ug}}_{\overline{1}}(\mathbb{R}\tilde{W}_-)$ is the projection of the odd central elements in $\mathbb{C}\tilde{W}$, which, since $\mathbb{C}\tilde{W}$ is a group algebra, can be written as conjugacy class sums. It is shown in \cite{SV08} and \cite{Ca19} that these central elements act by a positive integer or half-integer on irreducible modules. This observation, in combination with Proposition \ref{p:nonzerocoh}, proves that if $W=S_d$ then, for every $\bullet$-unitary module $X$, there exists a $\D_\C$ with non-zero $\D_\C$-cohomology. \begin{example} Let $W$ contain $(-1)_V$; let us show that the admissible element $w_0\otimes \Gamma$ acts on $O_{t,c}$-representations with non-zero eigenvalues. We note that $w_0\otimes \Gamma$ is invertible and self-adjoint. Therefore, on any $\bullet$-unitary module $(\pi,X)$ the eigenvalues of $\pi(w_0\otimes \Gamma)$ are non-zero reals. Therefore, if $d$ is even and $(-1)_V \in W$, any $\bullet$-unitary $O_{t,c}$-module $(\pi,X)$ has non-zero $\D_\C$-cohomology for a suitable choice of $\C$.
\end{example} \section*{Acknowledgements} We thank the Department of Mathematics, University of Manchester and both the Department of Applied Mathematics, Computer Science and Statistics, and the Department of Electronics and Information Systems at Ghent University for their hospitality during the research visits throughout the preparation of this manuscript. This research was supported by the Heilbronn Institute for Mathematical Research (K.C.), the special research fund (BOF) from Ghent University [BOF20/PDO/058] (M.D.M.), and by a postdoctoral fellowship, fundamental research, of the Research Foundation – Flanders (FWO) [grant number 12Z9920N] (R.O.). This support is gratefully acknowledged. \bibliographystyle{abbrv}
\section{Acknowledgements} \input{sections/acknowledgements} \bibliographystyle{IEEEtranS} \section{Background} \subsection{Trusted Execution Environments} \label{sec:bg-tee} \Glspl{TEE} isolate \glspl{TA}---programs running within the \gls{TEE}---from software outside the \gls{TEE} as well as from other \glspl{TA}. Software running outside the \gls{TEE} is called the \gls{REE} and includes the \gls{OS}. In addition to \gls{TA} isolation, \Glspl{TEE} also provide \emph{remote attestation} to assure clients that the code and configuration of server-side components is what they expect. \Glspl{TEE} are already provided by several x86, Arm and RISC-V \glspl{CPU}, and are available to clients through cloud service providers such as Amazon, Google, and Microsoft. \Glspl{TEE} contain a root of trust for attestation, typically in the form of a unique attestation key embedded in the hardware at the time of manufacture. The remote attestation protocol first authenticates the hardware by having the server prove that its \gls{TEE} possesses this key, which is certified by the device manufacturer. After authentication, remote attestation is provided by ``measuring'' the system: the code, configuration, and state of the system are checked by the \gls{TEE} to assure the client that they are as expected. Despite the claimed isolation guarantees, \Glspl{TEE} still suffer from certain vulnerabilities. Isolation only prevents malicious processes from naively accessing a \gls{TA}'s data through direct memory access. Remote attestation and code attestation only assure the clients that the system is set up as expected and that the code used to process the data is unmodified. Software bugs, which are present even in attested code, can lead to run-time attacks that circumvent client isolation and give adversaries direct access to sensitive data. Despite the number of protections created to defend against them, memory vulnerabilities and run-time attacks are still pervasive. Sensitive data can also be leaked through side channels (\Cref{sec:side-channels}). Even if the software is bug-free, side channel leakage can occur due to vulnerabilities in the underlying hardware. Modern \glspl{CPU} employ a variety of performance optimizations in the form of speculation and out-of-order execution that can leak microarchitectural state~\cite{Lipp2018meltdown,Kocher2018spectre,schwarzZombieLoadCrossprivilegeboundaryData2019,vanbulck2018foreshadow,ragabRageMachineClear2021,vanbulck2018foreshadow,gotzfriedCacheAttacksIntel2017,brasserSoftwareGrandExposure2017,leeInferringFinegrainedControl2017,chenSgxPectreStealingIntel2019,ryanHardwarebackedHeistExtracting2019,zhangTruSenseInformationLeakage2018}. Some processors have instructions with a data-dependent number of cycles, making it possible for adversaries to infer the instruction input values. This data-dependent behavior of the underlying hardware is largely transparent to the software developers, making it difficult to detect when benign-looking code can cause side-channel leakage. \subsection{Side-channel leakage} \label{sec:side-channels} Access control on data storage elements such as registers and memory can prevent direct access to sensitive data by malicious software on the server. However, sensitive data can also be leaked through side channels. Side channels are observable outputs of the system that are not part of the system's intended outputs. 
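For illustration, the following C sketch (our own, with purely illustrative names) contains two classic sources of such leakage, which the rest of this section discusses:
\begin{lstlisting}[language=C]
#include <stdint.h>

/* secret is sensitive; table is a public 256-entry array. */
uint8_t leaky(uint8_t secret, const uint8_t table[256]) {
    uint8_t r = 0;
    if (secret & 1) {     /* secret-dependent branch: the two paths
                             differ in timing and instruction trace */
        r = 17;
    }
    return r ^ table[secret]; /* secret-dependent memory access:
                                 perturbs shared cache state */
}
\end{lstlisting}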
Prominent examples of \gls{CPU} side channels are execution time, memory access patterns, observable microarchitectural state (such as the state of shared caches, branch predictors and performance counters), voltage and electromagnetic radiation~\cite{paccagnellaLordRingSide2021,kocher99dpa,emAttack1,emAttack2,lippPLATYPUSSoftwarebasedPower2021,kimThermalBleedPracticalThermal2022}. Side-channel leakage can occur when an adversary is able to infer information about the sensitive data by observing the system while the data is processed. For example, if a conditional branch instruction depends on a sensitive value and the execution time of each branch is different, an adversary can infer some information about the sensitive value by monitoring the time it takes to complete the branch. Another example is when a sensitive value is used to index an array in memory; the memory access pattern, which is the sensitive address in this case, can change the observable state of a shared cache, or result in an observable request on the main memory bus. \section{Conclusion} We introduced \gls{BliMe}\xspace, a new approach to outsourced computation that uses hardware extensions to ensure that clients' sensitive data is not leaked from the system, even if an attacker is able to run malware, exploit software vulnerabilities, or use side channel attacks. \gls{BliMe}\xspace does this while maintaining compatibility with existing side-channel-resistant code, and without reducing performance. In designing \gls{BliMe}\xspace, we follow the design pattern of using a separate, discrete hardware security component in conjunction with the main \gls{CPU}, common on both servers~\cite{tpm2lib} and end user devices~\cite{titanM}. By using such a remotely attestable fixed-function \gls{TEE} in combination with taint tracking \gls{ISA} extensions, \gls{BliMe}\xspace can provide functionality similar to that of fully homomorphic encryption, but achieving native-level performance by replacing cryptography with hardware enforcement. \section{Design}% \label{sec:design} \subsection{System overview} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{figures/dataflow.pdf} \caption{System Overview. The client completes a cryptographic handshake with the \gls{TEE}~\circnum{1}, which stores the resulting shared secret key in a protected register for use by the encryption engine~\circnum{2}. The client encrypts its secret data, which the \gls{REE} decrypts using the encryption engine~\circnum{3}. The \gls{REE} can then perform the requested computation on the resulting blinded\xspace data in a verifiably side-channel-resistant manner~\circnum{4}, and encrypt the output using the encryption engine~\circnum{5} for return to the client.} \label{fig:system-overview} \end{figure} \Cref{fig:system-overview} shows an overview of \gls{BliMe}\xspace. The server's \gls{REE}, including the \gls{OS}, runs on top of a \gls{CPU} that contains 1) the \gls{BliMe}\xspace extensions, which enforce a taint-tracking policy on all software running on the \gls{CPU} (\Cref{sec:taint-tracking}), and 2) a \gls{BliMe}\xspace encryption engine, used for data import and export (\Cref{sec:data-import,sec:data-export}). A \gls{TEE} is used to perform remote attestation, assuring the client that the server uses \gls{BliMe}\xspace. The \gls{TEE} is fixed-function and is a separate hardware component that shares few resources with the \gls{CPU}, reducing its exposure to side-channel attacks. 
It contains a key exchange engine responsible for negotiating a session key between itself and the client. The \gls{TEE} can provide this session key securely to the encryption engine without exposing it to the \gls{REE}. \subsection{Adversary model} In our adversary model, the client assumes that the server hardware, including the \gls{BliMe}\xspace extensions, encryption engine, and the \gls{TEE}, is implemented correctly. The client further assumes that the server \gls{OS} is \emph{honest-but-curious}: it performs the functionalities expected of it and does not execute malicious code to launch an attack (honest), but may leak data to which it has access or use it to draw inferences about sensitive client data (curious). The client assumes the adversary has control over all other software on the server and can make it behave as the adversary sees fit, including making inferences based on side-channel information such as memory access patterns and instruction traces. Attacks against the hardware itself are currently out of scope; the client assumes that the attacker cannot use physical means to make the hardware act differently from its specification. We present a discussion on this in \Cref{sec:discussion:other-attacks}. Side-channel attacks that require physical access (e.g., differential power analysis) are also out of scope. \subsection{Protocol}\label{sec:design-protocol} In this section, we outline the steps needed to perform safe outsourced computation using \gls{BliMe}\xspace. \subsubsection{Remote attestation and key agreement}\label{sec:remoteAttest} Before sending any data to the server, the client first performs a handshake procedure (\Cref{fig:system-overview}-\circnum{1}) with the \gls{TEE}, which consists of remote attestation and agreement on a session key. Remote attestation is provided by the \gls{TEE} to assure the client that the assumptions made in the adversary model hold. This is done in two steps. First, the \gls{TEE} proves to the client that it is genuine using the root-of-trust embedded within it at manufacture, as described in \Cref{sec:bg-tee}. Second, the \gls{TEE} verifies the following properties and attests them to the client: \begin{enumerate} \item The server hardware incorporates \gls{BliMe}\xspace. The \gls{TEE} verifies this by reporting that it is attached to a \gls{CPU} that includes the \gls{BliMe}\xspace extensions and encryption engine. \item The server \gls{OS} is honest-but-curious. This can be done by verifying that the OS image is certified by a trusted authority. In \Cref{sec:discussion:trustedOS}, we discuss how this adversary model can be relaxed so that client data is protected even from a malicious \gls{OS}. \end{enumerate} At the end of the remote attestation process, the client and \gls{TEE} agree on a session key that is tied to this specific client. The \gls{TEE} stores this key inside the encryption engine using the dedicated secure channel (\Cref{fig:system-overview}-\circnum{2}). The encryption engine uses this key in two atomic functions that it exposes to the \gls{REE}: data import and data export. \subsubsection{Data import}\label{sec:data-import} Once the key is in the encryption engine, the client locally encrypts its data using the session key, sends the resulting ciphertext to the server, and requests the required computation from the \gls{REE}. The \gls{REE} then calls the encryption engine's data import function on the ciphertext.
This causes the encryption engine to atomically decrypt the ciphertext using the internal session key, and taint the resulting plaintext data by \emph{marking} it as \emph{``blinded\xspace''} (\Cref{fig:system-overview}-\circnum{3}). This is done by setting a \emph{blindedness\xspace} bit that is attached to the data in registers and memory. \vspace{\baselineskip} \begin{tcolorbox} [width=\linewidth, colback=white!95!black] Data import atomically decrypts data using the session key and marks it as \emph{blinded\xspace}. \end{tcolorbox} \subsubsection{Safe computation}\label{sec:safe-comp} At this stage, the \gls{REE} can perform the requested computation on the blinded\xspace plaintext data (\Cref{fig:system-overview}-\circnum{4}). The \gls{BliMe}\xspace CPU extensions apply a taint-tracking policy, limiting the operations that can be done on blinded\xspace data; this prevents the \gls{REE} from directly exfiltrating the data or even leaking it through side channels. Any operations by the \gls{REE} that are forbidden by the taint-tracking policy cause the \gls{REE} to fault. The taint-tracking policy ensures that the final results and any intermediate results derived from the data are also blinded\xspace. More details on the taint-tracking policy can be found in \Cref{sec:taint-tracking}. \subsubsection{Data export}\label{sec:data-export} Once the computation is complete, the \gls{REE} calls the encryption engine's data export function. This causes the encryption engine to atomically encrypt the blinded\xspace results and mark them as non-blinded\xspace (\Cref{fig:system-overview}-\circnum{5}), which is done by unsetting the blindedness\xspace bit. The \gls{REE} can then send the ciphertext back to the client, who can decrypt it and obtain the plaintext results. \vspace{\baselineskip} \begin{tcolorbox} [width=\linewidth, colback=white!95!black] Data export atomically encrypts data using the session key and marks it as \emph{non-blinded\xspace}. \end{tcolorbox} \subsection{Taint-tracking policy}\label{sec:taint-tracking} We use a taint-tracking approach to prevent sensitive data from flowing to observable outputs. First, we define which parts of the system can be tainted. We split the system state into two types: \begin{itemize} \item \emph{\xmakefirstuc{\blindable}} state consists of the values (not addresses) of lines in the cache, values in registers except the program counter, and values in main memory, as well as all busses and queues used to transfer these values. Each of these is extended with an additional bit for blindedness\xspace which, when set, indicates that the corresponding state is blinded\xspace. \item \emph{Visible} state consists of information that may be exposed outside the system, and must therefore never contain sensitive data. It includes all microarchitectural state that does not have a blindedness\xspace bit associated with it, e.g., the program counter, addresses of lines in the caches, and performance counters. We include the program counter because it encapsulates the control flow of the program and we forbid blinded\xspace data from affecting the control flow. \end{itemize} We then define the list of observable outputs we consider in \gls{BliMe}\xspace: visible state, non-blinded\xspace blindable\xspace state, the addresses of memory operations sent to main memory, the execution time of an instruction or set of instructions, and fault signals. Note that once blindable\xspace state becomes blinded\xspace, it is no longer observable. 
We exclude outputs that require physical access to be observed, such as voltage and electromagnetic radiation. The taint-tracking policy can now be defined as follows. Each instruction blinds\xspace (i.e., sets the blindedness\xspace bit of) each output that depends on a blinded\xspace input (i.e., whose associated blindedness\xspace bit is set). If an instruction attempts to affect any observable output except non-blinded\xspace blindable\xspace state in a manner that depends on the value of a blinded\xspace input, a fault is raised, since this can otherwise be used to exfiltrate sensitive data. This effectively means that the program cannot use a blinded\xspace value as the address of a jump, branch or memory access, or use instructions whose completion time or fault status depends on a blinded\xspace value. \subsection{Server OS} \label{sec:hostOS} \gls{BliMe}\xspace as described above relies on an honest-but-curious server \gls{OS} because it depends on the \gls{OS} to correctly manage client keys. On context switch, the \gls{OS} can ask the encryption engine to export the current client key by ``sealing'' it (i.e., encrypting it using a key known only to the encryption engine), and can provide a previously sealed key, corresponding to the incoming process, which the encryption engine decrypts and sets as the new current client key. This way, even though the \gls{OS} never sees any client key in the clear, it is trusted to load the correct sealed key onto the encryption engine. A malicious \gls{OS} can cause sensitive information of one client to be leaked to another client by swapping their sealed client keys. The normal process isolation features of the \gls{OS} ensure that a client cannot access another client's blinded\xspace data directly or any of the client session keys held by the \gls{OS}. Because data is tagged only with a single bit that does not track ownership, we must further ensure that an application cannot transfer blinded\xspace data to other applications. The \gls{OS} must therefore prevent blinded\xspace data passing via system calls or other \gls{IPC} mechanisms. Preventing a client from accessing another client's blinded\xspace data ensures two things: 1) an application serving one client cannot use the encryption engine to encrypt and unblind blinded\xspace data originating from another client, and 2) computation can never use blinded\xspace data belonging to two different clients, since there would be no clear way to determine the ownership of the result. Therefore, processed data is only encrypted and exported with a key corresponding to the client from which the input data was originally imported. \section{Future work}\label{sec:discussion} We identify a number of directions for future work. \subsection{Improving deployability}% \label{sec:compiler} The security guarantees of \gls{BliMe}\xspace are enforced solely by the hardware modifications without requiring any compiler support. We saw that some existing side-channel-resistant code successfully runs on \gls{BliMe}\xspace (\Cref{sec:compEval}). Developers can write new \gls{BliMe}\xspace-compliant code. However, this can be a challenge, particularly for larger programs because \begin{enumerate*} \item manual data-flow tracking can be too complex, and \item \gls{BliMe}\xspace-unaware compilers might inadvertently induce data-flow dependencies that are not evident in the source language.
\end{enumerate*} Consequently, compiler support can increase deployability by (a) \emph{verifying} code compatibility, and where possible (b) automatically \emph{transforming} non-compliant code to an equivalent compliant variant. To verify compliance, one approach would be to use binary analysis. This has a number of drawbacks. Binary lifting is itself error-prone~\cite{kimTesting2017}, and could induce uncertainty. Focusing on the binary representation makes it difficult to generate useful error messages. Instead, we opt for augmenting the compiler so that we can use the \gls{IR} for transformations, provide useful error messages through the front-end, and control the machine-code generation. To transform code, we can employ known rules for safe coding like \emph{cryptocoding}~\cite{aumassonCryptocoding2022}, and use compiler-specific approaches that aid both analysis and verification. Specifically, we can use techniques such as \emph{function cloning} to improve the accuracy of the static data-flow tracking and to avoid instrumentation of data-flows that at run-time are not necessarily blinded\xspace. For instance, a function could be called from multiple places, some where its arguments are never tainted and others where they may be tainted. By cloning the function we can avoid needlessly instrumenting and tainting the non-blinded\xspace data-flow and instead only taint the cloned variant of the function. We are currently exploring both the adaptation of existing software-only approaches such as Constantine~\cite{borrelloConstantine2021}, and the implementation of \gls{BliMe}\xspace-specific analyses and transformations. Because complete program analysis is undecidable~\cite{riceClasses1953}, we cannot guarantee that any approach detects all compliant code. Given memory-safe code, we require a sound analysis that rejects all non-compliant code and accepts only code that will not result in a taint tracking policy violation fault at run-time. \subsection{Tolerating OS compromise}\label{sec:discussion:trustedOS} Our current design assumes an honest-but-curious server \gls{OS} because it relies on the \gls{OS} to correctly manage client keys. \gls{BliMe}\xspace can be extended to enable hardware-assisted client key management, thereby ensuring security even if the \gls{OS} is malicious, as follows. Each client is assigned a unique client ID (e.g., as a keyed hash of the corresponding client key) that will be used to tag memory pages containing the client's blinded\xspace data. Pages can only contain blinded\xspace data from a single client but can freely mix blinded\xspace data and non-blinded\xspace data. The single blindedness\xspace bit used in the current design can be used to differentiate between blinded\xspace and non-blinded\xspace bytes within a single page. If the OS maliciously loads an incorrect sealed key in an attempt to export (unblind\xspace and encrypt) another client’s blinded\xspace data with the chosen client’s key, the encryption engine will fault on detecting the mismatch between the current client ID (derived from the current key) and the tag on the page where the blinded\xspace data resides. Thus a malicious \gls{OS} can only cause denial of service but cannot breach confidentiality of blinded\xspace data. \subsection{Enabling safe local processing} \gls{BliMe}\xspace extensions can be usefully applied on the local machine as well as remotely.
Since an application processing blinded\xspace data cannot infer anything about the data other than its length, it can safely process data belonging to other users or applications. For example, the \gls{OS} can allow an application to read data from a file that is normally inaccessible to the application, with the constraint that any data read will be marked as blinded\xspace. This makes it possible to build useful computational pipelines while strongly adhering to the principle of least privilege. \subsection{Handling secret-dependent faults} \label{sec:secret-dep-faults} Suppose that a violation of the \gls{BliMe}\xspace taint tracking policy leads to a fault that can either crash the program or result in the invocation of an interrupt handler. Either scenario can leak information: for example, a divide-by-zero fault will occur if the divisor is a blinded\xspace data item with value zero; if this fault leads to an interrupt handler being called, the change in control flow will reveal that the blinded\xspace value was zero. To prevent this information leakage, faults depending on blinded\xspace data must be suppressed by \gls{BliMe}\xspace, so that the control flow remains \emph{as if the fault did not occur}. The client can be informed of this fault, and that the computation results are invalid, by setting a bit in some protected storage (e.g., a special register) when the fault occurs, and conveying this bit to the client as part of the returned encrypted results. \subsection{Defending against other attacks} \label{sec:discussion:other-attacks} Rowhammer~\cite{originalRowhammer} is a vulnerability in modern \gls{DRAM} modules that threatens the integrity of data. Due to the high proximity of DRAM cells, toggling a row of cells at a sufficiently high rate can result in bit flips in adjacent rows. Exploits of this phenomenon by a remote adversary have been continuously demonstrated despite the number of defenses created in both academia and industry~\cite{rowhammerRetro}. As our design stands, Rowhammer-based attacks are out of scope. We make the common and reasonable assumption that reading data from any address in memory retrieves the last value that was written to that address; this includes the extra blindedness\xspace bit. Rowhammer is an orthogonal vulnerability requiring orthogonal defenses to ensure memory integrity. Other fault-injection attacks based on \gls{DVFS}, such as CLKscrew~\cite{clkscrew}, V0LTpwn~\cite{voltpwn} and Plundervolt~\cite{plundervolt}, produce similar attack patterns and are also out of scope. Other attacks that are out of scope are ones with physical access to the server. Physical access enables full read-write side-band access to memory through direct connection to the \gls{DRAM} bus. It also facilitates more powerful side-channel attacks, such as those that rely on power~\cite{kocher99dpa}, electromagnetic~\cite{emAttack1,emAttack2} or temperature measurements~\cite{tempAttack}, or fault-injection attacks, such as VoltPillager~\cite{voltpillager}, that can break \gls{TEE} confidentiality and integrity guarantees. Note that an adversary model that includes physical attacks would automatically cover remote fault-injection attacks such as Rowhammer, as physical attacks are a superset of remote fault-injection attacks. We leave this to future work.
\subsection{Applying \gls{BliMe}\xspace to speculating \glspl{CPU}} \textsf{BliMe-Ibex}\xspace is based on the two-stage Ibex softcore; realistic computation platforms are more complex, relying heavily on speculation to achieve high performance. \gls{BliMe}\xspace can be equally applied to such processors, taking care to ensure that the taint-propagation rules are applied also to the speculative state of the processor. In doing so, the processor prevents sensitive values from being leaked even by Spectre-type attacks~\cite{Kocher2018spectre}. This follows from the security analysis in \Cref{sec:securityEval}, as each speculative execution trace complies with the taint tracking policy of \gls{BliMe}\xspace. A more comprehensive model will include the \gls{CPU}'s speculation machinery directly, to ensure that blinded\xspace data cannot leak into microarchitectural state such as that of the branch predictor. \section{Security evaluation} \label{sec:securityEval} \subsection{Protocol}\label{sec:eval-protocol} Computation using \gls{BliMe}\xspace takes the form of the protocol shown in \Cref{fig:protocol}. This is equivalent to the protocol described in \Cref{sec:design-protocol}. The attacker obtains only the encrypted client data and the encryption of an arbitrary computation on the plaintext data, along with leakage from the computation by the \gls{REE}, shown as a cloud in \Cref{fig:protocol}. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/protocol.pdf} \caption{Protocol summary. A key agreement protocol is used to obtain secret keys, shared by the client and TEE. The TEE securely transports these keys to the encryption engine, which atomically decrypts and blinds the client data. Safe computation is then performed on the blinded data, after which the encryption engine atomically encrypts and unblinds the result before it is returned to the client.} \label{fig:protocol} \end{figure} If the key agreement protocol and secure channel protocols are secure against active attackers, then the messages provide the attacker with no information on the session keys, the input data from the client, or the result of the computation. Thus, the attacker gains information on the client data only if the computational leakage (during the Safe Computation phase in \Cref{fig:protocol}) reveals some information. It is therefore sufficient to show that the leakage is independent of the decrypted plaintext, in order to meet requirement \hyperlink{req.SR}{SR}\xspace. \subsection{Safe computation} The decrypted plaintext, shown in \Cref{fig:protocol}, is always marked as blinded\xspace. Therefore, it is sufficient to prove that the safe computation functionality described in Section~\ref{sec:safe-comp} does not leak any information on any state marked as blinded\xspace. We do this by constructing a model \gls{ISA} using the F* programming language~\cite{fstar}, and proving that its visible and non-blinded\xspace states are independent of the blinded\xspace values in memory: that is to say, no changes to the blinded\xspace values containing sensitive data, after a computational step, result in any change to the non-blinded\xspace values or blindedness\xspace bits that can be observed by the attacker. Our approach is as follows: \begin{enumerate} \item \label{safe-comp-step1}Define a low-level model of the server in terms of registers, memory cells, cache line allocations, and arbitrary transitions from one state to another, and a security definition with respect to this model that meets the requirements of \Cref{sec:eval-protocol}.
\item \label{safe-comp-step2}Define a more detailed architectural model that expresses instructions in terms of register and memory reads/writes, and a security definition that implies the low-level security definition from step \ref{safe-comp-step1}. \item Define a minimal instruction architecture that implements standard arithmetic operations with special-case blindedness\xspace-bit propagation rules from \Cref{sec:taint-tracking}, and prove that it satisfies the security definitions from steps \ref{safe-comp-step1} and \ref{safe-comp-step2}. \end{enumerate} \subsubsection{Definitions} We begin by defining equivalence relations on values that may be blinded\xspace. In our model, we define a recursive type \prooflink{Value.html}{\texttt{maybeBlinded}}\footnote{We include links to specific modules and theorems where they are mentioned. The full model is available (anonymized for double-blinded review) at \url{https://blinded-computation.github.io/blime-model/}.} that represents a blindable value: \begin{lstlisting}[language=ML]
type maybeBlinded (#t:Type) =
  | Clear : v:t -> maybeBlinded #t
  | Blinded : v:t -> maybeBlinded #t
\end{lstlisting} Then, a blinded\xspace value $v$ is represented by $\texttt{Blinded}\; v$, and a non-blinded\xspace value $v$ by $\texttt{Clear}\; v$. We say that two such values $a$ and $b$ are equivalent, denoted $a \equiv b$, if they are both \texttt{Blinded}---that is to say, if both are blinded\xspace---or if they are both \texttt{Clear} and have equal values. We define equivalence of lists of \texttt{maybeBlinded} values similarly: two lists $\ell_1$ and $\ell_2$ are equivalent, denoted $\ell_1 \stackrel{\mathrm{list}}{\equiv} \ell_2$, if they have the same length, and each of their values is equivalent. \subsubsection{Low-level system model}\label{sec:fstar-low-level} The system is modelled in \prooflink{Cpu.html\#system-state}{{Cpu.fst}} by a system state type containing the following data: \begin{itemize} \item the program counter (\texttt{pc}), containing a 64-bit unsigned integer pointing into memory, \item an array of register values, each containing a 64-bit unsigned integer and a blindedness\xspace bit, \item an array of memory values, each containing a 64-bit unsigned integer and a blindedness\xspace bit, and \item an array of cache line assignments, each containing a 64-bit unsigned integer representing the address of the corresponding value in the cache. \end{itemize} We then define an equivalence relation $\stackrel{\mathrm{state}}{\equiv}$ (named \prooflink{Cpu.html\#equivalence}{\texttt{equiv\_\0system}} in the model) on the system states $\mathcal{S}$, such that two system states are equivalent if they have equal program counters and cache line assignments, their registers and memory have equal blindedness\xspace bits, and their non-blinded\xspace values are equal. We model the execution of instructions using a single-cycle fetch-execute model (\prooflink{Cpu.html\#execution-model}{\texttt{step}} in the model), with each instruction completing in a single cycle. An instruction $I \in \mathcal{I}$ is loaded from memory at the program counter address; if \texttt{pc} points to an instruction marked as blinded\xspace, then execution jumps to a fault handler at address $0$.
Otherwise, the state of the processor is transformed according to an instruction-dependent execution mapping $X: \mathcal{I} \times \mathcal{S} \rightarrow \mathcal{S}$. We denote a single processor step \[ P_X(s) = \begin{cases} X(s.\textsc{memory}[s.\textsc{pc}], s), &\text{if } s.\textsc{pc} \text{ points to non-blinded\xspace memory,} \\ s \text{ with } s.\textsc{pc} = 0, &\text{otherwise.} \end{cases} \] The security of the execution mapping $X$ is defined such that $X$ is secure if for all states $s_1$ and $s_2$, equivalent input states yield equivalent output states (\texttt{is\_safe} in the model): \begin{align} \forall s_1, s_2 \in \mathcal{S}: s_1 \stackrel{\mathrm{state}}{\equiv} s_2 \Longrightarrow P_X(s_1) \stackrel{\mathrm{state}}{\equiv} P_X(s_2). \label{eqn:system-safety} \end{align} That is, $X$ is secure if, after each step in a computation, the values of blinded\xspace registers and memory locations influence neither the blindedness\xspace of any component of the output state nor the value of any non-blinded\xspace component of the output, meaning that the attacker cannot infer anything about the blinded\xspace state. \subsubsection{Load-store model}\label{sec:fstar-load-store} The low-level system model described in \Cref{sec:fstar-low-level} is quite understandable, but since the execution mapping $X$ has unmediated access to the state, there is no easy way to express the taint propagation rule from \Cref{sec:design}. In \prooflink{InstructionDecoder.html}{InstructionDecoder.fst} we describe a higher-level model that allows us to better express statements about data flows between registers. This model, which we call the load-store model, includes some microarchitectural details. Its execution mapping is defined as \prooflink{InstructionDecoder.html\#execution-model}{\texttt{decoding\_execution\_unit}} in terms of three functions: \begin{itemize} \item \emph{An instruction decoder}, a function that takes as input an instruction word, and returns a decoded instruction containing an opcode, a list of input registers---either normal registers or \texttt{pc}---and a list of output registers. \item \emph{Instruction semantics}, a function that performs the actual computation, taking as input a decoded instruction and a list containing a value and blindedness\xspace bit for each input register, and which returns a list of output values with blindedness\xspace bits, along with a list of memory operations, each of which indicates a load or store between a register and an address in memory. \item \emph{A cache policy}, a function that accepts a set of cache line assignments and a memory operation, and returns a new set of cache line assignments. \end{itemize} The execution mapping then takes the instruction word, decodes it, reads the input operands from the initial system state, performs the computation, increments \texttt{pc}, writes the results to registers, and performs the stores and loads, updating the cache line assignments. The decoded instructions never depend upon blinded\xspace data as, from \Cref{sec:fstar-low-level}, blinded\xspace instructions result in a fault, and the execution mapping is never called. Therefore, the safety of the execution mapping can be demonstrated by analyzing only the instruction semantics, as shown in the load-store model's \prooflink{InstructionDecoder.html\#main-safety-theorem}{main safety theorem}.
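To make these ingredients concrete, the following C fragment is a rough analogue (our own sketch, not part of the F* development) of the blindable values, their equivalence, and the default taint propagation of a two-operand instruction:
\begin{lstlisting}[language=C]
#include <stdbool.h>
#include <stdint.h>

/* Rough C analogue of the model's maybeBlinded values: a payload
   together with a blindedness bit. */
typedef struct { uint64_t value; bool blinded; } word_t;

/* Equivalence: both blinded, or both clear with equal values. */
bool equiv(word_t a, word_t b) {
    return (a.blinded && b.blinded) ||
           (!a.blinded && !b.blinded && a.value == b.value);
}

/* Default taint propagation for a two-operand instruction: the
   output is blinded whenever either input is. */
word_t add_semantics(word_t a, word_t b) {
    word_t r = { a.value + b.value, a.blinded || b.blinded };
    return r;
}
\end{lstlisting}
Safety of such semantics amounts to checking that equivalent inputs produce equivalent outputs (in the sense of \texttt{equiv} above), which we formalize next.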
We define instruction semantics safety similarly to how we defined execution mapping safety in \Cref{eqn:system-safety}: for all instruction words, executing the instruction semantics function with equivalent lists of input operand values yields equivalent output operand lists and memory operation lists\footnote{Memory operations are defined to be equivalent where they are between the same register and the same address in memory.}. We then use this to prove the theorem \prooflink{InstructionDecoder.html\#main-safety-theorem}{\texttt{each\_\0load\0store\_\0exec\0ution\_\0unit\_\0with\_\0red\0acting\_\0equiv\0alent\_\0instr\0uc\0tion\_\0seman\0tics\_\0is\_\0safe}} that an execution mapping defined by any instruction decoder and safe instruction semantics is safe, no matter what cache policy is in use. Next, we will use this model to show the safety of a concrete \gls{ISA}. \subsubsection{Model ISA} In \prooflink{ISA.html}{ISA.fst}, we analyze a model \gls{ISA} with eight instructions: \texttt{STORE}, \texttt{LOAD}, \texttt{BZ} (branch if zero), \texttt{ADD}, \texttt{SUB}, \texttt{MUL}, \texttt{AND}, and \texttt{XOR}. Each instruction accepts two input registers and one output register---the exceptions being \texttt{STORE} and \texttt{LOAD}, which require only two registers, one for the memory address and one for the source or destination register respectively, and \texttt{BZ}, whose output is always written to \texttt{pc}. Each instruction specifies the blindedness\xspace of its outputs, as described in \Cref{sec:taint-tracking}. By default, each instruction marks its output as blinded\xspace if any of its inputs are blinded\xspace. However, there are some special cases that deviate from this treatment in order to better capture the data dependencies of the instructions and handle modifications of visible state: \begin{itemize} \item \texttt{STORE} and \texttt{LOAD} instructions become no-ops if their address input is blinded\xspace. \item \texttt{SUB} and \texttt{XOR} instructions yield an unblinded\xspace value zero if both their inputs are the same register. \item \texttt{MUL} and \texttt{AND} instructions yield an unblinded\xspace value zero if one of their inputs is an unblinded\xspace value zero. \item \texttt{BZ} jumps to a fault handler by setting \texttt{pc} to zero if its comparison input is blinded\xspace. \end{itemize} We then show in the model \gls{ISA}'s main safety theorem \prooflink{ISA.html\#safety}{\texttt{sample\_semantics\_are\_safe}} that these instruction semantics, defined in \texttt{sample\_semantics}, are safe in the sense described in \Cref{sec:fstar-load-store}, and that the architecture with these semantics is therefore safe according to the definition given in \Cref{eqn:system-safety}. This demonstrates that the taint tracking approach proposed in \Cref{sec:taint-tracking} is secure in general, and that the special cases that we have considered do not allow an attacker to violate the dataflow security definition from \Cref{eqn:system-safety}. This means that, so long as the external peripherals with which an outside observer can interact do not expose blinded\xspace data, an observer cannot infer anything about the blinded\xspace data in the system, satisfying objective \hyperlink{req.SR}{SR}\xspace. \section{Performance evaluation} \label{sec:perfEval} As the security policy enforced by \gls{BliMe}\xspace does not require changes to code that is already side-channel resistant, performance does not change at the architectural level.
We therefore focus on measuring the effect of \textsf{BliMe-Ibex}\xspace's safe computation (\Cref{sec:safe-comp}) implementation on clock rate, power consumption, and \gls{FPGA} resource usage. We synthesised the unmodified and modified Ibex cores from Section~\ref{sec:ibex} for the Xilinx \texttt{xc7a100tcsg324-1} \gls{FPGA}. The unmodified core used 2396 \glspl{LUT} out of an available 63400, and 911 registers out of an available 126800. The modified core used 2419 \glspl{LUT} out of an available 63400 (0.96\% increase), and 932 registers out of an available 126800 (2.3\% increase). The Ibex core is configured by default to run at a clock rate of 50~MHz. We measured the effect of our new instrumentation on clock rate by examining the timing slack in the design, which indicates how much faster the slowest path through the design is than required by the clock rate constraint. We found that the original design had a timing slack of 1.232~ns, and our modified design a timing slack of 1.639~ns. Our modified design is therefore slightly faster than the original, indicating that \textsf{BliMe-Ibex}\xspace does not significantly slow the Ibex core; in this case it even results in a slight performance improvement due to a more favorable placement of its logic. We used Vivado's power analysis functionality to estimate the power consumption of each core initialized with a Coremark~\cite{coremark} memory image. The results are shown in \Cref{tab:power}.
\begin{table} \centering \begin{tabular}{r|r|r|r} \multirow[b]{2}{*}{\textbf{Component}} & \multicolumn{3}{c}{\textbf{Power consumption (\si{\milli\watt})}} \\ & Unmodified & Modified & Change (\%) \\ \hline Clocks & $4.80$ & $4.70$ & $-2.0$ \\ Slice logic & $1.52$ & $1.52$ & $+0.1$ \\ Signals & $2.71$ & $2.68$ & $-1.1$ \\ Block RAM & $125.20$ & $133.03$ & $+6.3$ \\ PLL & $121.20$ & $121.20$ & $+0.0$ \\ DSPs & $0.00$ & $0.00$ & $+0.0$ \\ I/O & $0.06$ & $0.06$ & $+0.0$ \\ Static power & $145.20$ & $145.73$ & $+0.0$ \\ \textbf{Total} & $\mathbf{400.70}$ & $\mathbf{408.93}$ & $\mathbf{+2.1}$ \\ \hline \end{tabular} \vspace{1em} \caption{Power consumption by modified and unmodified Ibex cores. The increase in power consumption of approximately $2\,\%$ comes mainly from the additional block \gls{RAM} used to store the blindedness\xspace bits.} \label{tab:power} \end{table}
The power consumption of most parts of the \gls{FPGA} does not change significantly; the overall increase of 2\% comes primarily from the additional block \gls{RAM} used to store the blindedness\xspace bits. Given the limited impact on clock rate, power consumption, and resource usage, we can therefore conclude that the performance requirement \hyperlink{req.PR}{PR}\xspace is satisfied.
\section{Compatibility evaluation}\label{sec:compEval}
For backwards compatibility with existing code (requirement \hyperlink{req.CR}{CR}\xspace), two types of code are important: code that processes exclusively non-blinded\xspace data, and code that is already side-channel resistant and processes blinded\xspace data. The taint tracking policy described in \Cref{sec:taint-tracking} changes the behavior of the processor only where an instruction depends upon a blinded\xspace input value. Therefore, code that exclusively processes non-blinded\xspace data is unaffected by \gls{BliMe}\xspace. Code that processes blinded\xspace data will fault if it attempts to modify an observable output in a way that the processor determines to be dependent on the blinded\xspace data.
To evaluate whether \textsf{BliMe-Ibex}\xspace has met requirement \hyperlink{req.CR}{CR}\xspace, we must therefore determine whether side-channel-resistant code will comply with the \textsf{BliMe-Ibex}\xspace taint tracking policy. \textsf{BliMe-Ibex}\xspace faults when blinded\xspace data attempts to flow to non-blindable\xspace observable outputs (\Cref{sec:taint-tracking}). Existing side-channel-resistant code already prevents the program counter and the addresses of memory operations from being affected by blinded\xspace data, e.g., as in~\cite{bernstein12}. Flows of blinded\xspace data to other visible state, execution time and fault signals (\Cref{sec:taint-tracking}) are prevented by design by the \textsf{BliMe-Ibex}\xspace hardware. Therefore, side-channel-resistant code is compatible with \textsf{BliMe-Ibex}\xspace's taint tracking policy, so long as only sensitive data is blinded\xspace. This final point is the main limitation of \gls{BliMe}\xspace: the \gls{CPU} cannot identify cases where the result of a computation remains blinded\xspace even though it no longer contains any sensitive data. For example, it will not be possible to branch on the result of the decryption/verification of an authenticated encryption, meaning that cryptographic \glspl{API} will need to be modified so that their control flow does not depend on whether the verification was successful. In practice, this means that any computation must always continue as though verification was successful, with any failure being indicated by a flag that is returned to the client (\Cref{sec:secret-dep-faults}). Despite this limitation, we successfully ran stream cipher encryption and decryption with a blinded\xspace key using the TweetNaCl~\cite{tweetNaCl} library, demonstrating the backwards compatibility of \gls{BliMe}\xspace.
\section{Implementation}
\label{sec:implementation}
We implement the \gls{BliMe}\xspace \gls{CPU} taint-tracking extensions first on the Spike RISC-V emulator~\cite{SpikeRISCVISA2022} as a functional proof-of-concept (\Cref{sec:spike}), and then on the Ibex core~\cite{IbexRISCVCore2022} (\Cref{sec:ibex}) for the performance, area and power evaluation presented in \Cref{sec:perfEval}. The evaluation covers the hardware needed for safe computation (\Cref{sec:safe-comp}), including taint tracking (\Cref{sec:taint-tracking}). We do not include the \gls{TEE} in the implementation. Components with similar functionality already exist, such as Google's Titan-M chip~\cite{titanM} or Apple's \Gls{SEP}~\cite{SecureEnclave}. For the encryption engine in \gls{BliMe}\xspace, we do not implement a bulk encryption and decryption block, as this is already present in existing hardware such as the AES block in the Apple \gls{SEP}, but add instructions to blind\xspace and unblind\xspace data in memory to simulate the data import and export functions.
\subsection{Instruction set emulator}
\label{sec:spike}
The Spike emulator is a RISC-V ISA emulator implemented in C++. It can be used to quickly add and test new custom instructions as well as modify the behavior of existing instructions. Since Spike is an architectural emulator, it is not necessary to simulate execution pipelines and other microarchitectural details. The two main additions to Spike are dynamic taint-tracking and the handling of policy violations.
\subsubsection{Taint-tracking}
Taint-tracking is performed on registers and memory. Due to the load-store architecture of RISC-V, these can be handled separately.
Register blindedness\xspace is modified by any instruction that modifies registers, including the load instruction. Memory blindedness\xspace can only be modified by store instructions. \textbf{Registers and ALU operations.} For each register, we maintain a blindedness\xspace bit. Whenever an instruction reads from a blinded\xspace register, we set a `blinded\xspace' flag in the processor core. During the instruction's execution, any subsequent writes to the register file mark the modified registers as blinded\xspace. Once the instruction is complete, the blinded\xspace flag is reset. Implementation of this functionality is facilitated by the read and write functions in Spike's register file class. By default, any instruction that takes blinded\xspace input taints its output registers as described above. This is a conservative approximation, and implementations can make exceptions in order to more accurately model instruction dataflows. We demonstrate this capability with the XOR instruction: if a register is XORed with itself, the result is not blinded\xspace. This is because the result will always be zero, irrespective of the input value. Similar exceptions can be used for other situations in which an instruction takes blinded\xspace inputs but its output does not depend on said blinded\xspace input values. \textbf{Memory.} Within Spike's MMU, we maintain the set of all blinded\xspace physical memory locations with byte-level granularity. Whenever a value is stored into memory from a blinded\xspace register, we add its address to the blinded\xspace set. Whenever a value that is not blinded\xspace is written to an address in the blinded\xspace set, we remove this address from the set. Reading from memory into a register marks the register as blinded\xspace if the address is in the blinded\xspace set, and unsets it otherwise. Since physical (rather than virtual) addresses are used, all checks and modifications to the set are performed after virtual-to-physical address translation. \textbf{New Instructions.} We introduce two new instructions, \inlinec{blnd} and \inlinec{rblnd}, that correspond to the data import and export operations, respectively (\Cref{sec:data-import,sec:data-export}). In our proof-of-concept, they simulate the data import and data export functions by blinding\xspace and unblinding\xspace data without encryption and decryption. To blind\xspace a variable, the \inlinec{blnd} instruction takes its address as input and adds it to the blinded\xspace set. This effectively blinds one byte of the variable; for larger variables such as \inlinec{int} or arrays, we must iterate over all relevant addresses. The \inlinec{blnd} instruction is inserted as inline assembly at the beginning of our test program to blind\xspace sensitive data. Since this inline assembly is run in the process's context, the address provided to it will be a virtual address. We perform virtual-to-physical address translation before adding the address to the blinded\xspace set. The \inlinec{rblnd} instruction unblinds\xspace variables by simply removing their addresses from the blinded\xspace set.
\subsubsection{Handling violations}
Violations can occur in three situations:
\begin{enumerate}
\item Attempting to write blinded\xspace values to the \inlinec{PC} register. Jumps and conditional branches relying on blinded\xspace registers, either as a jump destination address or as part of a conditional check, are forbidden.
\item Attempting to read from or write to memory using a blinded\xspace value as an address.
This occurs when a load or store uses a blinded\xspace register either as the address base or offset.
\item Attempting to write blinded\xspace data to an ``unblindable'' memory location. We maintain a set of ``unblindable'' addresses in Spike's MMU that can correspond, for example, to MMIO addresses. This set is currently unused, but allows a system to specify whether certain memory-mapped peripherals have access to blinded\xspace data.
\end{enumerate}
When any of the above conditions occurs, an illegal instruction fault is thrown. We overload the illegal instruction fault to avoid having to implement handling of a new type of fault in the \gls{OS} kernel.
\subsection{Ibex}\label{sec:ibex}
Ibex is an open-source 32-bit RISC-V core with a two-stage pipeline, written in SystemVerilog. We extended the Ibex core to include byte-level taint-tracking and handling of blinded\xspace policy violations. The extensions are similar to those made for Spike, but since this is an actual hardware design, they include microarchitectural details that were not required for Spike.
\subsubsection{Main Memory \& Load-Store Unit}
The main difference between the Spike and Ibex implementations is how memory blindedness\xspace is tracked. In the Ibex implementation, memory is extended with an extra blindedness\xspace bit for every byte of data. The word width of the data bus between the CPU and main memory is extended from 32 bits to 36 bits to allow for one blindedness\xspace bit per byte. The data bus width was increased to avoid having to perform an additional memory access per word to read/write the blindedness\xspace bits.
\subsubsection{Data Independent Timing}
\label{sec:data-independent-timing}
Ibex provides an optional security feature~\cite{IbexRISCVCore2022-security} that ensures all instructions take a fixed number of cycles independent of the input data. This has three effects on the core's behavior: \hypertarget{dit1}{1)} branch instructions take the same number of cycles whether the branch is taken or not, \hypertarget{dit2}{2)} early completion of multiplication by 1 or 0 is disabled, and \hypertarget{dit3}{3)} early completion of division by 0 is disabled. We enable this feature to take advantage of items \hyperlink{dit2}{2} and \hyperlink{dit3}{3} above, i.e., disabling early completion of multiplication/division. This is needed to prevent blinded\xspace values from affecting the execution time, which is an observable output. Ensuring that branch instructions take a fixed number of cycles (item \hyperlink{dit1}{1}) is not needed for our design. Branches that depend on blinded\xspace inputs always cause a fault before any target calculation or instruction fetching occurs. The core's behavior is therefore not affected by the \emph{value} of the blinded\xspace input, only by the fact that it \emph{is} blinded\xspace. Ibex provides no straightforward way to enable only parts of this feature. An alternative to enabling the entire feature would be to make the modifications manually: ensuring branch instructions are constant-time would no longer be needed, and early completion would only be disabled when blinded\xspace inputs are detected. We leave this optimization for future work.
\subsubsection{Pipeline Stages}
Ibex uses a 2-stage pipeline: Instruction Fetch (IF) and Instruction Decode - Execute (ID/EX). The IF stage required no changes. Jumps and branches that rely on blinded\xspace inputs cause a fault as soon as they are decoded in the ID/EX stage, preventing secret-data-dependent instruction fetching.
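The following Python sketch summarizes the per-instruction policy checks and the default blindedness\xspace propagation described above. It is a simplified illustration: the opcode classes and helper names are hypothetical and do not correspond to identifiers in our Spike or Ibex sources.
\begin{verbatim}
# Sketch of the per-instruction policy checks (hypothetical names).
class IllegalInstruction(Exception):
    # Both implementations reuse the existing illegal-instruction fault.
    pass

def check_and_propagate(op, src_blinded, base_blinded, offset_blinded):
    # op: decoded opcode class; *_blinded: operand blindedness bits.
    if op in ("JUMP", "BRANCH") and any(src_blinded):
        raise IllegalInstruction("blinded target or branch condition")
    if op in ("LOAD", "STORE") and (base_blinded or offset_blinded):
        raise IllegalInstruction("blinded address base or offset")
    # Conservative ALU rule: the result inherits any input blindedness.
    return any(src_blinded)
\end{verbatim}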
Taint-tracking for ALU operations is performed in the ID/EX stage. Whenever any ALU input is blinded\xspace, the ALU output is also marked as blinded\xspace. The ALU module itself is not modified; the calculation of the blindedness\xspace bit is added to the surrounding logic. Checks for policy violations are done in the decoder module, where the instruction opcode is extracted. For jumps and branches, a fault occurs when the target address or any of the registers used in the branch condition is blinded\xspace. For loads and stores, a fault occurs when any of the registers used as the base address or the offset is blinded\xspace. As in the Spike implementation, we use the illegal instruction fault.
\section{Introduction}
Outsourced computation has become ubiquitous. Instead of creating and managing computing resources locally, individuals and organizations are resorting to cloud computing services where the physical hardware is owned and managed by the service provider and shared among many clients. While cost-effective, outsourced computation introduces confidentiality concerns because the clients' sensitive data must be sent to the service provider's servers for processing, where a malicious service provider can directly access it. Furthermore, even if the service provider is trusted, other malicious actors may compromise the sensitive data through run-time attacks or side-channel leakage. One solution to this problem is to use cryptographic techniques such as \gls{FHE}, which allow arbitrary computation on encrypted data. With \gls{FHE}, the server receives and processes only encrypted data. This cryptographically guarantees the confidentiality of the data against all adversaries that gain access to the ciphertext. However, \gls{FHE} comes with very large performance overheads, orders of magnitude worse than processing the plaintext directly~\cite{viand21}. As an example, a single multiplication operation using \gls{FHE} is slower than a normal multiplication instruction by seven orders of magnitude~\cite{viand21}. This has led to a search for non-cryptographic methods that can protect the confidentiality of data sent out for computation; one such approach is to take advantage of the hardware-assisted security mechanisms present in modern \gls{CPU} architectures, which can potentially ensure security while maintaining high performance. An example is \glspl{TEE}, which provide isolation for each client from the server \gls{OS} as well as from other clients. But software bugs in the code running on the server (server code) can lead to run-time attacks. Furthermore, most \gls{TEE} implementations are designed with limited resistance to side-channel attacks~\cite{intelSGXExplained2016,sanctum,demystifyingTrustzone}. Therefore, the need remains for an efficient and effective approach to protect outsourced computation even in the presence of software vulnerabilities and side channels. Our goal is to address this need using hardware-assisted mechanisms without sacrificing high performance. We extend the RISC-V \gls{ISA} with a minimal set of hardware extensions that can preserve data confidentiality even in the presence of software vulnerabilities or side channels. The extensions and accompanying attestation architecture, which together we call \gls{BliMe}\xspace, allow a client to send conventionally encrypted data to a remote server, such that the \gls{CPU} will decrypt and process the data without allowing it or any data derived from it to be exfiltrated from the system.
Results of the computation can then be returned only after encryption with the client's key. We do this by having the \gls{CPU} enforce a taint-tracking policy that prevents client data from being exported from the system. Previous attempts at hardware-enforced taint tracking~\cite{yu19} provide a naive untaint instruction to extract results, making them vulnerable to run-time attacks or speculative execution attacks~\cite{spectreDeclassified}. In contrast, \gls{BliMe}\xspace uses a small attestable, fixed-function \gls{TEE} and encryption engine to facilitate the secure import and export of data between clients and servers, with data never being decrypted without tainting the resulting plaintext. This allows \gls{BliMe}\xspace to provide its security guarantees without having to make any trust assumptions about the code operating on the sensitive data. Consequently, \gls{BliMe}\xspace protects not just against side channels, but even against malware and run-time attacks that allow an attacker to execute arbitrary code on the system. \ifdefined\IncSrcCode \footnotetext[1]{\url{https://drive.google.com/file/d/137bc1d3lGXk5nr772yipUFVMlwRKC37T/view?usp=sharing}} \fi Our contributions are (we will open-source all code\ifdefined\IncSrcCode \footnotemark[1] \fi): \begin{itemize} \item \gls{BliMe}\xspace, a set of taint tracking \gls{ISA} extensions and attestation architecture preventing exfiltration of sensitive data (Section \ref{sec:design}). \item \textsf{BliMe-Ibex}\xspace, an Ibex RISC-V core implementation of the \gls{BliMe}\xspace \gls{ISA} extensions (Section \ref{sec:implementation}). \item A machine-checked proof of the dataflow security of the \gls{BliMe}\xspace \gls{ISA} extensions, as applied to a small concrete \gls{ISA} (Section \ref{sec:securityEval}). \item A performance evaluation of \textsf{BliMe-Ibex}\xspace, showing minimal run-time, power and area overheads (Section \ref{sec:perfEval}). \end{itemize} \section{Problem Description} \subsection{Usage scenario} The scenario we target is one where a client sends data to a remote server for outsourced computation. The client starts a session with the server, sends its data, and invokes some software on the server to perform computation on the data (possibly in combination with the server's data). After the computation is complete, the results are then sent to the client. Data import/export and computation can be repeated multiple times per session. Data exchange between the client and server is secured using authenticated encryption. Multiple clients may connect to the server simultaneously, leading to multiple parallel client sessions. Execution of server code that can leak the client's data, or any byproduct of this data, must be prohibited, even when an attacker can run malware on the server, exploit vulnerabilities in the software processing the client data, or use side channels to extract data. In other words, sensitive client data must not flow to any observable output. \subsection{Goals and objectives} \newcommand{\goal}[2]{% \vspace{\baselineskip}% \par\noindent% \newdimen\protocolstepwidth% \protocolstepwidth=\linewidth% \addtolength\protocolstepwidth{-2\fboxsep}% \fbox{\begin{minipage}[c]{\protocolstepwidth}\textbf{#1}: \emph{#2}\end{minipage}}% \vspace{1.0em}% } In this section we discuss the requirements for systems based on our design. The first requirement relates to security. 
\goal{\hypertarget{req.SR}{SR}---Confidentiality}{ When a client provides sensitive input data to the server, no party other than the client can infer anything about the client's sensitive input data, other than its length. } Malware running on the server may attempt to gain access to the data, or the application processing the sensitive data may itself be malicious, but these applications must not be allowed to reveal sensitive data outside the system. The next requirement relates to performance. \goal{\hypertarget{req.PR}{PR}---Fast execution}{ The design will not significantly reduce the performance of code that is accepted by the \gls{CPU}, compared to running the same code on a similar processor without such protections. } It is important to ensure that any solution does not excessively degrade performance. Elimination of side channels may prevent certain optimizations, resulting in some overhead, but some high-performance security-critical code has already been hand-written in assembly to eliminate side channels, and solutions that significantly reduce its performance may prove unsuitable in applications that make heavy use of such code. The final requirement relates to backwards compatibility. \goal{\hypertarget{req.CR}{CR}---Backwards compatibility}{ Code that does not leak sensitive data, by covert channels or otherwise, will successfully run on the system. } There will be a great volume of code in the system that does not process sensitive data, so it is important from a practical standpoint that existing software can run on the new hardware: if this is not the case, then a new software stack will be needed, greatly limiting the utility of the hardware. Moreover, there is also code that handles sensitive data but that already does so safely, such as side-channel-resistant cryptographic code. It is equally desirable that this secure and well-tested code will continue to run on our hardware.
\section{Related work}
\textbf{Taint tracking.} A large body of work exists on taint tracking, also called \gls{DIFT}~\cite{dynamicIFT2004}. Hu \emph{et al}.~\cite{huHardwareInformationFlow2021} present a survey that includes several hardware-based taint tracking techniques with varying goals and security/performance trade-offs. Taint tracking is available at varying levels of abstraction. Speculative Taint Tracking~\cite{STT} applies taint tracking at the architectural level to the results of speculatively executed instructions to prevent them from being used to leak information. Tiwari \emph{et al}.~\cite{GLIFT} propose a technique that allows taint tracking at the gate level. They use this technique to create a processor that is able to track all information flows, but that has a limited instruction set and suffers from large performance overheads. Taint tracking can also be performed purely in software. Data flow integrity is a form of taint tracking that aims to protect software against non-control data attacks by employing reaching definitions analysis~\cite{dfi}. Pointer tainting~\cite{chen2005a} is another defense against non-control data attacks that taints user input and detects an attack when a tainted value is dereferenced as a pointer. \textbf{Data-oblivious execution.} Preventing side-channel leakage requires covering several observable outputs. In this work, we prevent leakage through memory access patterns as well as through the timing of program execution. Several algorithms have been proposed to obfuscate data-dependent memory access patterns~\cite{ORAM,pathORAM2018,permuteRAM2017,optorama2020}.
However, they all come at a significant cost to performance. Other work has focused on making code constant-time to prevent leakage through execution timing. An extreme case is complete linearization of the entire program using the x86 \texttt{MOV} instruction~\cite{domasVfuscator22015}. A more practical approach is Constantine~\cite{borrelloConstantine2021}, which extends LLVM to compile code into constant-time binaries. It relies on dynamic analysis for taint tracking, which is then fed back into the compiler to transform code that handles tainted data into a constant-time form. However, this dynamic analysis can be imprecise as full coverage of all possible execution paths is not guaranteed, potentially leading to some vulnerable code not being transformed. Lee \emph{et al}.~\cite{dove} propose DOVE to protect sensitive data used in outsourced computation from side channels. DOVE first uses a frontend to transform the client application code to a custom data oblivious intermediate representation called a Data Oblivious Transcript (DOT). The DOT is then sent to a trusted interpreter application (the backend) on the server, which verifies that the DOT is indeed data oblivious and then runs it on the sensitive data. The trusted interpreter must be run within a \gls{TEE}, such as an Intel SGX enclave, as it is part of the \gls{TCB}. Yu \emph{et al}.~\cite{yu19} develop an \gls{OISA} that performs run-time taint tracking of sensitive values and adds a duplicate set of instructions to the RISC-V \gls{ISA}. Each operand of the instructions in the set is defined as either safe or unsafe. Using any tainted values as unsafe operands results in a fault. The hardware guarantees that computation is oblivious to safe operands and, therefore, that any sensitive values used as safe operands are not leaked through side channels. The \gls{OISA} also exposes a small amount of memory called the \gls{OMP} whose access patterns are not exposed. This enables fast memory operations on sensitive values stored in small data structures by avoiding the need to hide the access patterns. The main drawback of the \gls{OISA} is that it relies on the client application code to correctly taint and untaint the sensitive values. This means that code attestation of the code performing the computation is required for security. Furthermore, software vulnerabilities in client application code can allow adversaries to untaint arbitrary sensitive values through the \gls{OISA}'s untainting instruction, which is exposed to any software running on the main CPU. \textbf{Point solutions for side-channel attacks.} The literature contains a variety of side-channel attacks that leak information through the processor's caches~\cite{flushReload,flushFlush,primeProbe,gruss2015,gruss2016}. Defenses against these attacks rely on temporal or spatial isolation between processes; the cache is either flushed on context switches or is partitioned in such a way that each process uses a separate fixed portion of the cache~\cite{partitionedCache,PLCacheRPCache}. However, this results in unnecessary overhead when processing non-sensitive data. Other methods change the cache architecture or replacement policy but also suffer from unnecessary overheads~\cite{zcache,secRandCache}. The cache attacks mentioned above are usually used as building blocks to create covert channels for more sophisticated attacks such as Meltdown and Spectre~\cite{Lipp2018meltdown,Kocher2018spectre}. 
In response, a range of defenses have been proposed to stop these sophisticated attacks~\cite{STT,kaiser}. However, they do not address the main source of information leakage (which is the data-dependent memory access pattern) but rather provide point solutions for specific attacks. For example, Speculative Taint Tracking~\cite{STT} prevents speculatively executed instructions from being used to leak information through cache side channels; however, it does not address cases where information is leaked through non-transient execution.
\section{Introduction}
The incredible success of hydrodynamical approaches in describing the enormous data of heavy ion collision experiments \cite{Busza:2018rrf,Pasechnik:2016wkt}, including the nuclear suppression factor, radial flow and elliptic flow measurements, provides compelling evidence that the produced hot and dense matter thermalizes within about 0.6 fm/c after the initial impact~\cite{Heinz:2004pj,Huovinen:2001cy,Hirano:2002ds}. However, in the seminal work by Baier, Mueller, Schiff and Son in Ref.~\cite{Baier:2000sb}, the thermalization time via scattering processes in the weak-coupling limit has been estimated theoretically to be 2.5 fm/c or above. Recent studies \cite{Berges:2020fwq,Epelbaum:2013ekf,Berges:2013fga} in the weak-coupling limit have improved our understanding of quark-gluon plasma (QGP) equilibration to a great extent. On the other hand, several attempts have also been made to study thermalization in the strong-coupling limit within the AdS/CFT formulation~\cite{Strickland:2013uga,Chesler:2008hg,Chesler:2009cy,Heller:2011ju,Casalderrey-Solana:2011dxg}. The modern formulations of relativistic fluid dynamics suggest that neither local near-equilibrium nor near-isotropy is required in order to have a successful hydrodynamical description of the experimental results \cite{Romatschke:2016hle}. Several efforts have been made over the years in the development of relativistic viscous hydrodynamics \cite{Romatschke:2017ejr}, which systematically incorporates the dissipative effects~\cite{Florkowski:2017olj,Jaiswal:2016hex,Jeon:2015dfa,Kovtun:2012rj}. Viscous hydrodynamics concludes that at time $\sim$ 2 fm/c, the QGP created in ultra relativistic heavy-ion collisions (URHIC) has different longitudinal and transverse pressures~\cite{Strickland:2013uga}. This occurs due to the rapid expansion of the QCD matter along the longitudinal direction (beam direction), which gives rise to a large local rest frame momentum space anisotropy~\cite{Strickland:2013uga,Mandal:2013jkd,Romatschke:2003ms} in the $p_T-p_L$ plane. This anisotropic momentum distribution can cause plasma instabilities in the system which contribute to the thermalization and isotropization process of the QCD plasma~\cite{Arnold:2003rq,Mrowczynski:1988dz,Mrowczynski:1993qm,Mrowczynski:2000ed}. It is found that the exponential growth of the unstable modes plays an important role in the dynamics of the system in the weak-coupling limit~\cite{Randrup:2003cw}. On the hydrodynamic side, the anisotropic hydrodynamics (aHydro) framework has been formulated to efficiently take into account the large momentum space anisotropy of the system \cite{Strickland:2014pga,Alqahtani:2017mhy}. On the other hand, hard-thermal-loop perturbation theory \cite{Ghiglieri:2020dpq,Su:2012iy} has been employed~\cite{Romatschke:2003ms,Nopoush:2017zbu} to systematically study the properties of the anisotropic QCD plasma. Typically, one uses a specific distribution function of light quarks and gluons which is widely known as the `Romatschke-Strickland' (RS) form~\cite{Romatschke:2003ms,Romatschke:2004jh}. There has been a concerted effort to study the effect of the momentum-space anisotropies on the heavy-quark potential~\cite{Nopoush:2017zbu,Dumitru:2007hy,Burnier:2009yu}, bottomonia suppression~\cite{Strickland:2011mw,Strickland:2011aa,Krouppa:2015yoa}, photon and dilepton production rates~\cite{Schenke:2006yp,Bhattacharya:2015ada}, the wake potential~\cite{Mandal:2013jla} and so on.
The generalized RS form of the distribution function \cite{Tinti:2013vba}, which takes into account the azimuthal momentum-space anisotropy, has been recently investigated in Refs.~\cite{Kasmaei:2018yrr,Ghosh:2020sng,Carrington:2021bnk}. On the other hand, the production of strong magnetic fields~\cite{Kharzeev:2007jp,Skokov:2009qp} at the early stages of non-central heavy-ion collisions has triggered enormous research interest in the theoretical, phenomenological and experimental understanding of strongly interacting matter under extreme conditions \cite{Huang:2015oca,Miransky:2015ava}. The time dependence of the produced magnetic field has long remained a subject of debate in the heavy-ion collision community~\cite{McLerran:2013hla,Roy:2017yvg,Huang:2015oca}. At the early stages of the collision, the system is dominated mainly by gluons. Subsequently, a large number of quarks and anti-quarks are produced and the system evolves towards equilibrium. Therefore, the system is believed to be much less conducting at early times. Considering Pb+Pb collisions at $\sqrt{s}=2.76$ TeV, it is found in Ref.~\cite{Roy:2017yvg} that for an insulating medium, a magnetic field of strength $\sim$ 100 $m_\pi^2$ rapidly decays to a very low value within around 0.1 fm/c after the initial impact \cite{Huang:2015oca}. The rapid decrease in the field strength follows a $1/t^3$ behavior. However, the electrical conductivity of the medium significantly influences the time evolution of the electromagnetic fields in the late stage when the system reaches near the equilibrium state. It is shown in Refs.~\cite{Tuchin:2013apa,Tuchin:2013ie} that the electrical conductivity of the medium can resist the decay of the magnetic field, at least to some extent. Intensive research has been performed to study the properties of QCD matter in the presence of such a strong magnetic background, which has resulted in several interesting findings such as the chiral magnetic effect~\cite{Fukushima:2008xe,Kharzeev:2007jp}, magnetic catalysis~\cite{Lee:1997zj}, inverse magnetic catalysis~\cite{Bali:2011qj,Ayala:2014iba}, non-trivial magnetic modifications of chiral symmetry broken/restored phases~\cite{Andersen:2012dz,Avancini:2016fgq}, photon and dilepton production rates~\cite{Wang:2020dsr,Tuchin:2013bda,Bandyopadhyay:2016fyd,Das:2021fma,Ghosh:2018xhh,Hattori:2020htm}, thermodynamic properties~\cite{Bali:2011qj,Rath:2017fdv,Karmakar:2019tdp,Bandyopadhyay:2017cle}, the heavy quark potential~\cite{Singh:2017nfa}, transport coefficients~\cite{Kurian:2018qwb,Kurian:2018dbn} and so on. The production of a strong magnetic field at the early stages of the collision naturally motivates one to investigate magnetic field effects on the anisotropic QGP. In the presence of an external magnetic field (with intensity $B$), one can define a hierarchy of energy scales as $\sqrt{|eB|}\gg T\gg g_sT$, which essentially determines the regime of validity of the strong magnetic field approximation. Here $e$ denotes the electric charge of the proton and $g_s$ is the strong coupling constant. In this regime, the quarks occupy only the lowest Landau level and the dynamics becomes 1+1 dimensional. In this article, we restrict ourselves to the lowest Landau level approximation and investigate the gluon collective modes in the presence of an anisotropic momentum distribution. For this purpose, the one-loop gluon self-energy is obtained in the HTL approximation using the real-time formalism of thermal field theory.
We note here that the general structure of the polarization tensor plays an important role in the determination of the effective propagator and the collective modes. The thermo-magnetic collective modes have been studied recently in Refs.~\cite{Hattori:2017xoo,Karmakar:2018aig}. The direction of the external magnetic field brings in an anisotropy in the system and naturally breaks the spherical symmetry. It also appears among the available four-vectors that have to be taken into account for the construction of the general structure. The situation is similar to the spheroidal momentum space anisotropy. Thus, it is interesting to compare the two scenarios: one is the anisotropy due to the background field and the other is the anisotropy that arises from modeling the non-equilibrium distribution function from the equilibrium distribution by suitable stretching or squeezing. In the present study we systematically address this issue. Throughout the article, we use the following convention: $\eta_{\stretchrel*{\parallel}{\perp}}^{\mu\nu}={\rm diag}(1,0,0,-1)$ and $\eta_{\perp}^{\mu\nu}={\rm diag}(0,-1,-1,0)$ with $\eta^{\mu\nu}=\eta_{\stretchrel*{\parallel}{\perp}}^{\mu\nu}+\eta_{\perp}^{\mu\nu}$, where the Lorentz indices $\{\mu, \nu\}\in\{0,1,2,3\}$. For a generic four-vector $a^\mu$, we define $a_{\stretchrel*{\parallel}{\perp}}^\mu=(a^0,0,0,a^3)=(a_0,0,0,a_z)$ and $a_\perp^\mu=(0,a^1,a^2,0)=(0,a_x,a_y,0)$. The corresponding scalar products are defined as $(a_{\stretchrel*{\parallel}{\perp}}\cdot b_{\stretchrel*{\parallel}{\perp}})=a^0b^0-a^3b^3$ and $(a_\perp\cdot b_\perp)=-a^1b^1-a^2b^2$.
\section{Formalism}
In this section we obtain the one-loop gluon self-energy in the presence of an anisotropic thermo-magnetic medium within the HTL approximation. For this purpose we follow the real-time Schwinger-Keldysh formalism \cite{Dumitru:2009fy,Carrington:1997sq,Carrington:1998jj,Mrowczynski:2000ed,Mrowczynski:2016etf} based on contour Green's functions, which is applicable to non-equilibrium field theories. The basic formalism to obtain the retarded, advanced and the Feynman self-energies is reviewed in \cite{Mrowczynski:2016etf,Nopoush:2017zbu} in a self-contained manner. Here we briefly recall the essential steps to obtain the retarded part of the gluon self-energy in an anisotropic background. In the real-time Keldysh formalism, the Green's functions for the quark field of a given flavour $\psi^i_\alpha$ and the gluon field $A^a_\mu$ can be expressed as
\begin{align} i\[S(x,y)\]^{ij}_{\alpha\beta}&=\left\langle\hat{T}\[\psi^i_\alpha(x)\overline{\psi}^j_\beta(y)\]\right\rangle~,\nonumber\\ i\[D(x,y)\]^{ab}_{\mu\nu}&=\left\langle\hat{T}\[A^a_\mu(x)A^b_\nu(y)\]\right\rangle~, \end{align}
where the spinor indices are represented by the set $\{\alpha,\beta\}\in\{1,2,3,4\}$ and the color indices in the fundamental and adjoint representations of the $SU(N_c)$ group with $N_c=3$ are represented by the sets $\{i,j\}\in\{1,2,3\}$ and $\{a,b\}\in\{1,2\cdots 8\}$ respectively. Here the angular bracket notation $\left\langle\cdots\right\rangle$ denotes the expectation value and the time ordering $\hat{T}$ of two generic fields $\Phi_1$ and $\Phi_2$ has the usual meaning
\begin{align} \hat{T}\[\Phi_1(x)\Phi_2(y)\]&=\Theta(x^0-y^0)\Phi_1(x)\Phi_2(y)\pm\Theta(y^0-x^0)\Phi_2(y)\Phi_1(x)~, \end{align}
where $\Theta$ denotes the Heaviside step function and the upper (lower) sign corresponds to the bosonic (fermionic) nature of the $\Phi$ fields.
At one loop level, the gluon polarization function has three different contributions, arising from the gluon tadpole and loop diagrams, the ghost loop diagram and the quark loop diagram. In the presence of an external magnetic field, the contributions from the gluon and ghost loops remain unmodified, whereas corrections appear in the quark loop contribution. Moreover, in the HTL approximation, the expressions of the photon and gluon self-energy differ only in the definition of the Debye mass. Thus, to find the net contribution of the gluon and ghost loops in the presence of an anisotropic momentum distribution, it is convenient to obtain the photon polarization function first without the external magnetic field, and then replace the QED Debye mass by the corresponding QCD expression for pure glue \cite{Nopoush:2017zbu}. The retarded self-energy so obtained is given by \cite{Kasmaei:2018yrr,Ghosh:2020sng}:
\begin{align} \tilde{\Pi}_{ab}^{\mu\nu}(\omega,{\bm p},\xi)&=\delta_{ab}~\tilde{m}_D^2\int\frac{d\Omega_{\bm v}}{4\pi}v^\mu\frac{v^l+\xi_1({\bm v}\cdot {\bm a_1})a_1^l+\xi_2({\bm v}\cdot {\bm a_2})a_2^l}{(1+\xi_1({\bm v}\cdot {\bm a_1})^2+\xi_2({\bm v}\cdot {\bm a_2})^2)^2}\left.\Big[\eta^{\nu l}-\frac{v^\nu p^l}{\omega-{\bm p}\cdot {\bm v}+i0^+}\Big]\right\vert_{{\scriptscriptstyle l \in\{1,2,3\}}},\label{glughost_part} \end{align}
where the strong coupling constant $g_s$ appears explicitly in the expression of $\tilde{m}_D^2=\frac{g_s^2\Lambda_T^2}{3}N_c$, which corresponds to the QCD Debye mass with $N_f=0$; in the equilibrium limit, the scale $\Lambda_T$ corresponds to the temperature. Together with the scale $\Lambda_T$, the anisotropy tuple $\xi=(\xi_1,\xi_2)$ characterizes the ellipsoidal anisotropic distribution function constructed from the bosonic equilibrium distribution function as \cite{Nopoush:2017zbu,Ghosh:2020sng}
\begin{align} f^{\rm B}_{\mbox{aniso}}({\bm k})\equiv f^{\rm B}_{\mbox{iso}}\Bigg(\frac{\sqrt{{\bm k}^2+\xi_1({\bm k}\cdot{\bm a_1})^2+\xi_2({\bm k}\cdot{\bm a_2})^2}}{\Lambda_T}\Bigg). \end{align}
In this work, the spatial anisotropy vectors ${\bm a_1}$, ${\bm a_2}$ are chosen along the $\hat{x}=(1,0,0)$ and $\hat{z}=(0,0,1)$ directions respectively, whereas the spatial components $v^l$ of the parton four-velocity $v^\mu=(1,{\bm v})$ as well as the external momentum vector ${\bm p}$ are given in spherical polar coordinates with angles $(\theta_k, \phi_k)$ and $(\theta_p, \phi_p)$ respectively. To obtain the quark loop contribution to the retarded self-energy in real time, we recall the required definitions of the four Green's functions based on the propagation along the contour:
\begin{align} i\[S^>(x,y)\]^{ij}_{\alpha\beta}&=\left\langle\psi^i_\alpha(x)\overline{\psi}^j_\beta(y)\right\rangle~,\nonumber\\ i\[S^<(x,y)\]^{ij}_{\alpha\beta}&=-\left\langle\overline{\psi}^j_\beta(y)\psi^i_\alpha(x)\right\rangle~,\nonumber\\ i\[S^{\bar{c}}(x,y)\]^{ij}_{\alpha\beta}&=\left\langle\hat{T}^{\bar{c}}\[\psi^i_\alpha(x)\overline{\psi}^j_\beta(y)\]\right\rangle~,\nonumber\\ i\[S^{\bar{a}}(x,y)\]^{ij}_{\alpha\beta}&=\left\langle\hat{T}^{\bar{a}}\[\psi^i_\alpha(x)\overline{\psi}^j_\beta(y)\]\right\rangle~.
\label{sdef} \end{align}
Here $\hat{T}^{\bar{c}}$ is the same as the usual time-ordering operator $\hat{T}$ defined earlier, whereas the anti-time-ordering operator $\hat{T}^{\bar{a}}$ is defined as
\begin{align} \hat{T}^{\bar{a}}\[\Phi_1(x)\Phi_2(y)\]&=\Theta(y^0-x^0)\Phi_1(x)\Phi_2(y)\pm\Theta(x^0-y^0)\Phi_2(y)\Phi_1(x)~, \end{align}
where the upper (lower) sign corresponds to the bosonic (fermionic) nature of the generic $\Phi$ fields. The Green's function $S^{\bar{c}/\bar{a}}(x,y)$ is the same as the time-ordered propagator $S(x,y)$ with both $x^0$ and $y^0$ chosen on the upper/lower branch of the contour, where the contour runs along the forward/backward time direction. On the other hand, $S^{<}(x,y)$ and $S^{>}(x,y)$ are the same as $S(x,y)$ with $x^0$ on the upper and $y^0$ on the lower branch, and vice versa. To avoid clutter in the notation, let us first consider the photon self-energy in the presence of a magnetic background, which is given by
\begin{align} i\Pi^{\mu\nu}(x,y)&=-e^2 {\rm Tr}\[\gamma^\mu S(x,y)\gamma^\nu S(y,x)\] \end{align}
where $e$ is the magnitude of the electron charge and $S(x,y)$ represents the electron propagator in the presence of the magnetic field. From the definitions given in \eqref{sdef}, one can easily express the polarization tensor as a sum of $\Pi_{\mu\nu}^>$ and $\Pi_{\mu\nu}^<$, where
\begin{align} i\Pi_{\mu\nu}^>(x,y)&=-e^2 {\rm Tr}\[\gamma_\mu S^>(x,y)\gamma_\nu S^<(y,x)\] ~,\nonumber\\ i\Pi_{\mu\nu}^<(x,y)&=-e^2 {\rm Tr}\[\gamma_\mu S^<(x,y)\gamma_\nu S^>(y,x)\] ~. \label{pi_grt_less} \end{align}
Now, the retarded self-energy is defined as
\begin{align} \Pi_{\mu\nu}^R(x,y)&=\theta(x^0-y^0)\big[\Pi_{\mu\nu}^>(x,y)-\Pi_{\mu\nu}^<(x,y)\big]~. \label{retarded} \end{align}
It should be noted here that the fermion propagator in the presence of a background magnetic field possesses a multiplicative phase factor which spoils translational invariance~\cite{Schwinger:1951nm}. However, in the one-loop photon polarization, the phase factors arising from the two fermion propagators cancel each other and only the translationally invariant parts of the propagators contribute. The same argument also applies to the quark loop in the gluon polarization tensor that we are interested in. Thus, from here on, it is useful to decompose the fermion propagator as \cite{Shovkovy:2012zn} $S(x,y)=e^{i\Phi(x_{\perp},y_{\perp})}\overline{S}(x-y)$ and consider only the invariant part $\overline{S}(x-y)$ in the self-energy. In that case, we are free to choose $y=0$ because of translational invariance and obtain the retarded self-energy as
\begin{align} i\Pi^{\mu\nu}_R(x)&=-\frac{e^2}{2} {\rm Tr}\[\gamma^\mu \overline{S}_F(x) \gamma^\nu \overline{S}_A(-x)+\gamma^\mu \overline{S}_R(x) \gamma^\nu \overline{S}_F(-x)\]~. \end{align}
Note that, in the above expression, the $S^>$ and $S^<$ propagators that arise from Eq.\eqref{pi_grt_less} and Eq.\eqref{retarded} have been expressed in terms of the Feynman, advanced and retarded propagators, which are defined respectively as
\begin{align} S_F(x,y)&=S^>(x,y)+S^<(x,y)~,\nonumber\\ S_A(x,y)&=-\theta(y^0-x^0)\[S^>(x,y)-S^<(x,y)\]~,\nonumber\\ S_{R}(x,y)&=\theta(x^0-y^0)\[S^>(x,y)-S^<(x,y)\]~. \end{align}
In momentum space one obtains
\begin{align} i\Pi^{\mu\nu}_R(p)&=-\frac{e^2}{2}\int\frac{d^4k}{(2\pi)^4} {\rm Tr}\[\gamma^\mu \overline{S}_F(k) \gamma^\nu \overline{S}_A(q)+\gamma^\mu \overline{S}_R(k) \gamma^\nu \overline{S}_F(q)\]~,\label{momentum_space_pi} \end{align}
where $q=k-p$.
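For completeness, we note the inverse relations used in this step, which follow directly from the above definitions together with $\theta(x^0-y^0)+\theta(y^0-x^0)=1$:
\begin{align} S^>(x,y)&=\frac{1}{2}\big[S_F(x,y)+S_R(x,y)-S_A(x,y)\big]~,\nonumber\\ S^<(x,y)&=\frac{1}{2}\big[S_F(x,y)-S_R(x,y)+S_A(x,y)\big]~. \end{align}
Substituting these into Eq.\eqref{pi_grt_less} and Eq.\eqref{retarded}, the terms containing $\theta(x^0-y^0)S_A(x,y)$ or $\theta(x^0-y^0)S_R(y,x)$ vanish by the support properties of the retarded and advanced propagators, the $S_RS_A$ cross terms cancel in the difference $\Pi^>-\Pi^<$, and one arrives at the expression for $i\Pi^{\mu\nu}_R(x)$ quoted above.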
In the massless limit, the invariant parts of the propagators in the lowest Landau level approximation are given by \cite{Shovkovy:2012zn}
\begin{align} \overline{S}_R(k)&=k_{\paral}\!\!\!\!\!\!/~\left(1+s_\perp i\gamma^1\gamma^2\right) \Delta_R(k)=k_{\paral}\!\!\!\!\!\!/~\left(1+s_\perp i\gamma^1\gamma^2\right)\frac{\exp\big(-\frac{k^2_\perp}{|e_f B|}\big)}{k_{\paral}^2+i\epsilon~ {\rm sgn}(k^0)}~, \nonumber\\ \overline{S}_A(k)&=k_{\paral}\!\!\!\!\!\!/~\left(1+s_\perp i\gamma^1\gamma^2\right) \Delta_A(k)=k_{\paral}\!\!\!\!\!\!/~\left(1+s_\perp i\gamma^1\gamma^2\right)\frac{\exp\big(-\frac{k^2_\perp}{|e_f B|}\big)}{k_{\paral}^2-i\epsilon~{\rm sgn}(k^0)} ~, \nonumber\\ \overline{S}_F(k)&=k_{\paral}\!\!\!\!\!\!/~\left(1+s_\perp i\gamma^1\gamma^2\right) \Delta_F(k)=k_{\paral}\!\!\!\!\!\!/~\left(1+s_\perp i\gamma^1\gamma^2\right)\Big[(2\pi i)\big[-1+2 f_{\rm F}(k_z)\big]\Big] \delta(k_{\paral}^2) \exp\Big(-\frac{k^2_\perp}{|e_f B|}\Big)~, \end{align}
where $s_\perp={\rm sgn}(e_f B)$, with `${\rm sgn}$' representing the sign function, and the electric charge of the fermion is denoted as $e_f=q_f e$, which is equal to $-e$ for the electron. We also note that the expression of the fermion propagator used here is derived for a constant magnetic field of intensity $B$ along the $\hat{z}$ direction, which is the same as the direction of the anisotropy vector ${\bm a_2}$. It should be noted that in the presence of a background magnetic field, the energy eigenvalue of the free fermion only depends on the longitudinal momentum (say $k_z$) and the Landau level index (say $n$), as these are the conserved quantum numbers independent of the gauge choice. On the other hand, the transverse momentum, which appears in the expression of the propagators, should be considered only as a conjugate variable arising from the Fourier transform of the translationally invariant part, and it does not appear in the energy eigenvalue. Hence, in the lowest Landau level approximation ($n=0$), we construct the non-equilibrium fermion distribution function from the equilibrium Fermi-Dirac distribution function $f_{\rm F}(k_z)$ as
\begin{align} f^{\rm F}_{\rm aniso}(k_z)&\equiv f^{\rm F}_{\rm iso}\Big(\sqrt{k^2_z+\xi_2k^2_z}/\Lambda_T\Big)=f^{\rm F}_{\rm iso}\Big(|k_z|/\lambda_T\Big)~,\label{fermi_aniso} \end{align}
where the newly defined momentum scale $\lambda_T$ is related to $\Lambda_T$ as $\lambda_T=\Lambda_T/\sqrt{1+\xi_2}$. Now, using the definition of the propagators in Eq.\eqref{momentum_space_pi}, the spinor trace can be performed and one obtains
\begin{align} i\Pi^{\mu\nu}_R(p)&=-4e^2\int\frac{d^4k}{(2\pi)^4}\Big[k_{\paral}^\mu q_{\paral}^\nu+k_{\paral}^\nu q_{\paral}^\mu-\eta^{\mu\nu}_{\stretchrel*{\parallel}{\perp}}(k_{\paral}\cdot q_{\paral})\Big]\Big[\Delta_R(k)\Delta_F(q)+\Delta_F(k)\Delta_A(q)\Big]~,\nonumber\\ &=-8e^2\int\frac{d^4k}{(2\pi)^4}\Big[k_{\paral}^\mu q_{\paral}^\nu+k_{\paral}^\nu q_{\paral}^\mu-\eta^{\mu\nu}_{\stretchrel*{\parallel}{\perp}}(k_{\paral}\cdot q_{\paral})\Big]\Delta_F(k)\Delta_A(q)~, \end{align}
where in the last step we have used $\Delta_F(-k)=\Delta_F(k)$ and $\Delta_R(-k)=\Delta_A(k)$. As we are interested in the medium effects, here we only consider the medium-modified part of $\Delta_F(k)=4\pi i f_{\rm F}(k_z)e^{-\frac{k^2_\perp}{|e_f B|}}\delta(k_{\stretchrel*{\parallel}{\perp}}^2)$.
Moreover, with this structure of the propagator, the longitudinal and transverse parts of the integral get separated, and one can easily perform the integral over the transverse momentum (by completing the square in $k_\perp$) as
\begin{align} \int\frac{d^2k_\perp}{(2\pi)^2}\exp\Big(-\frac{k^2_\perp}{|e B|}\Big)\exp\Big(-\frac{q^2_\perp}{|e B|}\Big)&=\frac{|eB|}{8\pi}\exp\Big(-\frac{p^2_\perp}{2|eB|}\Big)~. \end{align}
The polarization function now becomes
\begin{align} \Pi^{\mu\nu}_R(p)&=-4 e^2|eB|\exp\Big(-\frac{p^2_\perp}{2|eB|}\Big)\int\frac{d^2k_{\stretchrel*{\parallel}{\perp}}}{(2\pi)^2}f_{\rm F}(k_z)\Bigg[\frac{2k_{\paral}^\mu k_{\paral}^\nu-(k_{\paral}^\mu p_{\paral}^\nu+k_{\paral}^\nu p_{\paral}^\mu)+\eta^{\mu\nu}_{\stretchrel*{\parallel}{\perp}}(k_{\paral}\cdot p_{\paral})}{p_{\paral}^2-2(k_{\paral}\cdot p_{\paral})-i\epsilon~ {\rm sgn}(k^0-p^0)}\Bigg]\delta(k_{\paral}^2)~. \end{align}
In the HTL approximation we consider the external momentum to be `soft', that is $p\sim e\Lambda_T$, and the internal momentum to be `hard', that is $k\sim\Lambda_T$. With this hierarchy, a Taylor series expansion of the terms inside the square brackets can be performed, which up to second order is given as
\begin{align} &\frac{2k_{\paral}^\mu k_{\paral}^\nu-(k_{\paral}^\mu p_{\paral}^\nu+k_{\paral}^\nu p_{\paral}^\mu)+\eta^{\mu\nu}_{\stretchrel*{\parallel}{\perp}}(k_{\paral}\cdot p_{\paral})}{p_{\paral}^2-2(k_{\paral}\cdot p_{\paral})-i\epsilon~ {\rm sgn}(k^0-p^0)}=\frac{2k_{\paral}^\mu k_{\paral}^\nu-(k_{\paral}^\mu p_{\paral}^\nu+k_{\paral}^\nu p_{\paral}^\mu)+\eta^{\mu\nu}_{\stretchrel*{\parallel}{\perp}}(k_{\paral}\cdot p_{\paral})}{-2(k_{\paral}\cdot p_{\paral})-i\epsilon~ {\rm sgn}(k^0)}\Bigg[1-\frac{p_{\stretchrel*{\parallel}{\perp}}^2}{2(k_{\paral}\cdot p_{\paral})+i\epsilon~ {\rm sgn}(k^0)}\Bigg]^{-1}~,\nonumber\\ &\approx\frac{2k_{\paral}^\mu k_{\paral}^\nu}{-2(k_{\paral}\cdot p_{\paral})-i\epsilon~ {\rm sgn}(k^0)}-\frac{\eta_{\stretchrel*{\parallel}{\perp}}^{\mu\nu}}{2}+\frac{k_{\paral}^\mu p_{\paral}^\nu+k_{\paral}^\nu p_{\paral}^\mu}{2(k_{\paral}\cdot p_{\paral})+i\epsilon~ {\rm sgn}(k^0)}-p_{\paral}^2\frac{2 k_{\paral}^\mu k_{\paral}^\nu }{\big[2(k_{\paral}\cdot p_{\paral})+i\epsilon~ {\rm sgn}(k^0)\big]^2}~. \end{align}
As in the thermal case, the first term in the expansion does not contribute. Integrating over the $k^0$ variable using the delta function property
\begin{align} \delta(k^2_{\stretchrel*{\parallel}{\perp}})&=\frac{\delta(k^0-|k_z|)+\delta(k^0+|k_z|)}{2|k_z|}~, \end{align}
one obtains
\begin{align} \Pi^{\mu\nu}_R(p)&= e^2\frac{|eB|}{\pi}\exp\Big(-\frac{p^2_\perp}{2|eB|}\Big)\int\frac{dk_z}{2\pi}\frac{f_{\rm F}(k_z)}{|k_z|}\left.\Bigg[\eta_{\stretchrel*{\parallel}{\perp}}^{\mu\nu}-\frac{k_{\paral}^\mu p_{\paral}^\nu+k_{\paral}^\nu p_{\paral}^\mu}{(k_{\paral}\cdot p_{\paral})+i\epsilon}+\frac{p_{\paral}^2 k_{\paral}^\mu k_{\paral}^\nu }{\big[(k_{\paral}\cdot p_{\paral})+i\epsilon\big]^2}\Bigg]\right\vert_{k^0=|k_z|}~. \end{align}
The term in the square brackets can be related to a total derivative as
\begin{align} \left.\Bigg[\eta_{\stretchrel*{\parallel}{\perp}}^{\mu\nu}-\frac{k_{\paral}^\mu p_{\paral}^\nu+k_{\paral}^\nu p_{\paral}^\mu}{(k_{\paral}\cdot p_{\paral})+i\epsilon}+\frac{p_{\paral}^2 k_{\paral}^\mu k_{\paral}^\nu }{\big[(k_{\paral}\cdot p_{\paral})+i\epsilon\big]^2}\Bigg]\right\vert_{k^0=|k_z|}&=-|k_z|\frac{\partial}{\partial k_z}\left.\Bigg[p_z\frac{k_{\paral}^\mu k_{\paral}^\nu}{|k_z|(k_{\paral}\cdot p_{\paral}+i \epsilon)}-\frac{k_{\paral}^\mu\eta_{\stretchrel*{\parallel}{\perp}}^{\nu 3}}{|k_z|}\Bigg]\right\vert_{k^0=|k_z|}~.
\end{align}
After performing an integration by parts with the assumption $\lim_{k_z\rightarrow\pm\infty}f_{\rm F}(k_z)=0$, one obtains
\begin{align} \Pi^{\mu\nu}_R(p)&= e^2\frac{|eB|}{\pi}\exp\Big(-\frac{p^2_\perp}{2|eB|}\Big)\int\frac{dk_z}{2\pi}\frac{\partial f_{\rm F}(k_z)}{\partial k_z}\left.\Bigg[p_z\frac{k_{\paral}^\mu k_{\paral}^\nu}{|k_z|(k_{\paral}\cdot p_{\paral}+i \epsilon)}-\frac{k_{\paral}^\mu\eta_{\stretchrel*{\parallel}{\perp}}^{\nu 3}}{|k_z|}\Bigg]\right\vert_{k^0=|k_z|}~. \end{align}
As in the thermal case, the above expression can be further simplified by expressing the magnitude and the angular integrals separately. Considering the anisotropic distribution function given in Eq.~\eqref{fermi_aniso}, one obtains
\begin{align} \Pi^{\mu\nu}_R(p)&=\frac{m^2_{D,e}}{2}\exp\Big(-\frac{p^2_\perp}{2|eB|}\Big)\sum_{{\rm sgn}(k_z)=\pm1}\left.\frac{v_{\stretchrel*{\parallel}{\perp}}^\mu v_{\stretchrel*{\parallel}{\perp}}^l}{1+\xi_2}\Bigg[\eta_{\stretchrel*{\parallel}{\perp}}^{\nu l}-\frac{v_{\paral}^\nu p^l}{(v_{\paral}\cdot p_{\paral}+i \epsilon)}\Bigg]\right\vert_{l=3}~,\label{electron_loop} \end{align}
where the Debye mass is defined as
\begin{align} m^2_{D,e}&=-\frac{e^2}{\pi^2}|e B|\int d|k_z| \frac{\partial f^{\rm iso}_{\rm F}(|k_z|)}{\partial |k_z|}=e^2\frac{|eB|}{2\pi^2}~. \end{align}
One can observe that, as a consequence of the dimensional reduction in the strong field approximation, the solid angle integral with the $4\pi$ angular average in Eq.~\eqref{glughost_part} now reduces to an average over the two possible directions of $k_z$. It should be noticed that, unlike in the thermal case, the self-energy is independent of the momentum scale $\Lambda_T$, and the anisotropy parameter appears only in a multiplicative factor without any directional dependence. However, an implicit dependence on the momentum scale is present due to the running of the coupling constant. Now, incorporating the flavor sum and the color factor, the quark loop contribution to the retarded gluon polarization tensor can be obtained from Eq.~\eqref{electron_loop} as \cite{Fukushima:2015wck}
\begin{align} \bar{\Pi}_{ab}^{\mu\nu}(p)&=\delta_{ab}\sum_{f}g_s^2\frac{|e_f B|}{8\pi^2}\exp\Big(-\frac{p^2_\perp}{2|e_f B|}\Big)\sum_{{\rm sgn}(k_z)=\pm1}\left.\frac{v_{\stretchrel*{\parallel}{\perp}}^\mu v_{\stretchrel*{\parallel}{\perp}}^l}{1+\xi_2}\Bigg[\eta_{\stretchrel*{\parallel}{\perp}}^{\nu l}-\frac{v_{\paral}^\nu p^l}{(v_{\paral}\cdot p_{\paral}+i \epsilon)}\Bigg]\right\vert_{l=3}~.\label{quark_loop} \end{align}
In the static limit ($\omega=0, {\bm p}\rightarrow 0 $), the temporal component $\bar{\Pi}^{00}$ with $\xi_2=0$ becomes \cite{Karmakar:2018aig}
\begin{align} \bar{m}_D^2&=\sum_{f}g_s^2\frac{|e_f B|}{4\pi^2}~, \end{align}
which, together with the magnetic field independent contribution $\tilde{m}_D^2$, defines the Debye screening mass $\hat{m}_D=\sqrt{\tilde{m}_D^2+\bar{m}_D^2}$. Finally, the retarded gluon polarization function is obtained from the individual contributions given in Eq.~\eqref{glughost_part} and Eq.~\eqref{quark_loop} as
\begin{align} \Pi_{ab}^{\mu\nu}(p,eB,\xi,\Lambda_T)&=\tilde{\Pi}_{ab}^{\mu\nu}(p,\xi_1,\xi_2,\Lambda_T)+\bar{\Pi}_{ab}^{\mu\nu}(p, eB,\xi_2,\Lambda_T)~, \label{pi} \end{align}
where the dependence on the external parameters $p$, $eB$, $\xi$ and $\Lambda_T$ has been shown explicitly in each case. The polarization function is symmetric in the Lorentz indices ($\Pi^{\mu\nu}=\Pi^{\nu\mu}$) and satisfies the transversality condition $p_\mu\Pi^{\mu\nu}=0$.
Incorporating these constraint relations, the general structure of the polarization function can be constructed from the available basis tensors; since a symmetric rank-two tensor that is transverse to $p^\mu$ has $10-4=6$ independent components, six basis tensors are required. A suitable choice in this regard is the basis set constructed for the ellipsoidal momentum anisotropy in Ref.~\cite{Ghosh:2020sng}. A list of the required basis tensors is provided in Appendix \ref{list_tensor} for completeness. In that basis, the gluon polarization tensor can be expressed as
\begin{align} \Pi^{\mu\nu}&=\alpha A^{\mu\nu}+\beta B^{\mu\nu}+\gamma C^{\mu\nu}+\delta D^{\mu\nu}+\sigma E^{\mu\nu}+\lambda F^{\mu\nu}~, \end{align}
and the corresponding form factors can be extracted from Eq.~\eqref{pi} through suitable projections. Here we note that all of the projection tensors are symmetric and transverse to the external momentum. Thus, the decomposition of the polarization function trivially satisfies the symmetry constraint as well as the transversality condition. Now, the effective gluon propagator can be obtained from the Dyson--Schwinger equation
\begin{align} \mathcal{D}&=\mathcal{D}_0-\mathcal{D}_0\Pi \mathcal{D}~. \end{align}
Here $\mathcal{D}_0$ is the bare propagator and its inverse, with the gauge fixing parameter $\zeta$, is given by
\begin{align} (\mathcal{D}_0^{-1})^{\mu\nu}&=-p^2\eta^{\mu\nu}-\frac{1-\zeta}{\zeta}p^\mu p^\nu~. \end{align}
From the pole of the effective propagator, one can obtain the gluon collective modes by solving
\begin{eqnarray} p^2-\Omega_{0,\pm}(p)&=&0~.\label{disp} \end{eqnarray}
Any deviation from the light-like dispersion is encoded in the mode functions $\Omega_{0,\pm}$, which are given by \cite{Ghosh:2020sng}
\begin{eqnarray} \Omega_0&=&\frac{1}{3}(\alpha + \beta + \delta)- \frac{1}{3}\frac{\varpi}{\Big(\frac{\chi +\sqrt{4\varpi^3+\chi^2}}{2}\Big)^{\frac{1}{3}}}+\frac{1}{3}\Big(\frac{\chi +\sqrt{4\varpi^3+\chi^2}}{2}\Big)^{\frac{1}{3}},\label{om0}\\ \Omega_{\pm}&=&\frac{1}{3}(\alpha + \beta + \delta)+ \frac{1\pm i\sqrt{3}}{6}\frac{\varpi}{\Big(\frac{\chi +\sqrt{4\varpi^3+\chi^2}}{2}\Big)^{\frac{1}{3}}}-\frac{1\mp i\sqrt{3}}{6}\Big(\frac{\chi +\sqrt{4\varpi^3+\chi^2}}{2}\Big)^{\frac{1}{3}},\label{ompm} \end{eqnarray}
where $\varpi$ and $\chi$ are defined in terms of the form factors as
\begin{eqnarray} \varpi&=&\alpha(\beta-\alpha)+\beta(\delta-\beta)+\delta(\alpha-\delta)-3(\gamma^2+\lambda^2+\sigma^2)~,\\ \chi&=&(2\alpha-\beta-\delta)(2\beta-\delta-\alpha)(2\delta-\alpha-\beta)+54\gamma\lambda\sigma\nonumber\\ &-&9\big[\alpha(2\lambda^2-\sigma^2-\gamma^2)+\beta(2\sigma^2-\gamma^2-\lambda^2) +\delta(2\gamma^2-\lambda^2-\sigma^2)\big]~. \end{eqnarray}
It should be noted here that, among the six form factors, only $\alpha$, $\beta$ and $\gamma$ get modified in the presence of the external magnetic field. However, all of the form factors depend on the external anisotropy parameter $\xi$. Now, since each of the mode functions is a nontrivial combination of all six form factors, it is expected that, in addition to the anisotropy-induced effects, all the gluon collective modes will possess magnetic modifications. We explore such anisotropic gluon collective modes in the following section.
\section{Results}
As mentioned earlier, for a fixed set of external parameters, the gluon collective modes can be obtained by solving Eq.~\eqref{disp}, which only requires knowledge of the momentum dependence of the form factors. Choosing a particular orientation of the reference frame, this momentum dependence can be obtained from the components of the polarization tensor.
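In practice, Eq.~\eqref{disp} has to be solved numerically for each propagation angle. As a minimal illustration, the following Python sketch (with hypothetical variable names, and restricted to the isotropic limit whose mode functions are quoted below in Eqs.~\eqref{disp_iso_1} and \eqref{disp_iso_2}) finds the timelike roots by bracketed root finding, with all quantities measured in units of the relevant Debye mass:
\begin{verbatim}
# Sketch: isotropic HTL dispersion, omega^2 - p^2 = Pi_{L,T}(omega, p),
# with omega and p measured in units of the relevant Debye mass.
import numpy as np
from scipy.optimize import brentq

def Pi_L(w, p):
    # longitudinal mode function, Eq. (disp_iso_1), in units of m_D^2
    L = np.log((w + p) / (w - p))
    return -(w**2 - p**2) / p**2 * (1.0 - w / (2.0 * p) * L)

def Pi_T(w, p):
    # transverse mode function, Eq. (disp_iso_2), in units of m_D^2
    L = np.log((w + p) / (w - p))
    return 0.5 * w**2 / p**2 * (1.0 - (w**2 - p**2) / (2.0 * w * p) * L)

def mode(p, Pi):
    # timelike root of w^2 - p^2 - Pi(w, p) = 0, bracketed just above
    # the light cone; the upper bracket is generous for p up to ~m_D
    f = lambda w: w**2 - p**2 - Pi(w, p)
    return brentq(f, p * (1.0 + 1e-10), np.sqrt(p**2 + 2.0))

for p in (0.1, 0.5, 1.0):
    print(p, mode(p, Pi_L), mode(p, Pi_T))
# both branches approach the plasma frequency m_D/sqrt(3) as p -> 0
\end{verbatim}
The anisotropic modes discussed below are obtained in the same way, with $\Pi_{L,T}$ replaced by the mode functions $\Omega_{0,\pm}$ evaluated from the numerically computed form factors.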
Here we note that, without the background field, $\Lambda_T$ being the only available energy scale of the anisotropic plasma, the tensor components (and consequently the form factors and the mode functions) are proportional to the square of the thermal Debye screening mass given by
\begin{align}
m_D&=\sqrt{\frac{g_s^2 \Lambda_T^2}{3}\Bigg(N_c+\frac{N_f}{2}\Bigg)}~,
\end{align}
and one can do away with the $\Lambda_T$ dependence by simply expressing the dispersion in terms of the scaled variables $\omega/m_D$ and $|{\bm p}|/m_D$. When the external magnetic field is turned on, the gluon and quark loop contributions become proportional to $\tilde{m}_D^2$ and $\bar{m}_D^2$, respectively. However, to compare with the thermal case, here we consider the same $m_D$ for scaling instead of the thermo-magnetic Debye mass $\hat{m}_D$. With $N_c=3$, the ratio of the squared Debye masses arising from the gluon and the quark contributions can be expressed as
\begin{align}
\frac{\bar{m}_D^2}{\tilde{m}_D^2}&=\mathcal{R}^2\sum_f|q_f|~,
\end{align}
where the ratio of the two energy scales is set by $2\pi\mathcal{R}=\sqrt{|eB|}/\Lambda_T$. In the present study, we consider $eB=30 m_\pi^2$ and $\Lambda_T=0.2$ GeV, which gives $2\pi \mathcal{R}\sim 3$. The value of the coupling $g_s$ at the fixed $\Lambda_T$ is determined considering the one-loop running. For this purpose, the $\overline{\rm{MS}}$ renormalization scale is set at 0.176 GeV by fixing the QCD fine structure constant $\alpha_s(1.5~ {\rm GeV},N_f=3)=0.326$ \cite{Bazavov:2012ka,Haque:2014rua}. With this fixed set of external parameters, we now obtain the three stable gluon collective modes characterized by the corresponding mode functions given in Eqs.~\eqref{om0} and \eqref{ompm}. First, we consider the case with one anisotropy direction. This can arise either due to the presence of the external magnetic field or from the expansion of the medium, which results in an anisotropic momentum distribution of the partons. Now, irrespective of the origin of the anisotropy, the gluon polarization tensor can be expressed in terms of four basis tensors, and the pole of the effective propagator gives rise to the same mode functions given by \cite{Karmakar:2018aig,Ghosh:2020sng}
\begin{eqnarray}
\Omega_0&=& \frac{1}{2}\bigg( \alpha + \beta+\sqrt{(\alpha - \beta)^2+4\gamma^2} \bigg), \label{RS1}\\
\Omega_+&=& \frac{1}{2}\bigg( \alpha + \beta-\sqrt{(\alpha - \beta)^2+4\gamma^2} \bigg), \label{RS2}\\
\Omega_-&=& \delta \label{RS3}.
\end{eqnarray}
However, it should be observed that the parameter dependences of the form factors in the two cases are completely different, and it is interesting to compare the two scenarios. In Fig.~\ref{disp_plot_mo1}, we consider the dispersion corresponding to the mode function $\Omega_0$ for two different values of $\theta_p=\{\pi/2,\pi/4\}$, which represents the angle between the anisotropy vector and the external momentum. One can notice that the angular dependence is weak in both cases. Compared to the magnetic field case, the mode corresponding to the spheroidal anisotropy shows a more prominent angular dependence in the low momentum regime. Also, it can be noticed that the plasma frequency in the presence of the external magnetic field is significantly larger compared to the spheroidal anisotropy scenario. The introduction of the magnetic field enhances the plasma frequency of this mode compared to the isotropic case (also shown in the figure), whereas a spheroidal anisotropy decreases it.
\begin{figure}[tbh!]
\includegraphics[width=7 cm, scale=0.7]{Omega0com.pdf}
\caption{The gluon collective mode corresponding to the mode function $\Omega_0$ is shown at fixed momentum scale $\Lambda_T=0.2$ GeV and propagation angles $\theta_p=\pi/2$ and $\pi/4$ for two different cases: (i) with external magnetic field $eB=30 m_\pi^2$ (shown in red) and (ii) with spheroidal anisotropy (shown in blue). The light cone and the isotropic collective modes are also shown for comparison.}
\label{disp_plot_mo1}
\end{figure}
The scenario is quite different in the case of the collective mode corresponding to $\Omega_+$, as shown in Fig.~\ref{disp_plot_mo2}. In the presence of a background magnetic field, one can observe a prominent angular dependence in the dispersion, shown in Fig.~\ref{disp_plot_mo2}(a). In the two limiting cases, when the propagation angle $\theta_p$ is zero and $\pi/2$, the collective mode becomes identical to the transverse ($\Pi_T$) and the longitudinal ($\Pi_L$) mode of the isotropic gluonic medium, respectively \cite{Karmakar:2018aig,Hattori:2017xoo}. It should be noted here that, as $\gamma$ vanishes in the isotropic case and $\beta$ and $\delta=\Pi_T$ become degenerate, one obtains two distinct dispersive modes (also shown in the figure) corresponding to the mode functions \cite{Bellac:2011kqa}
\begin{eqnarray}
\Omega_0=\Pi_L=-\tilde{m}_D^2\frac{\omega^2-p^2}{ p^2}\bigg[ 1-\frac{\omega}{2p}\ln \frac{\omega+p}{\omega-p}\bigg]~, \label{disp_iso_1}
\end{eqnarray}
and
\begin{eqnarray}
\Omega_\pm=\Pi_T=\frac{\tilde{m}_D^2}{2}\frac{\omega^2}{p^2}\bigg[1-\frac{\omega^2-p^2}{2\omega p}\ln \frac{\omega+p}{\omega-p} \bigg]~. \label{disp_iso_2}
\end{eqnarray}
For the intermediate angles (shown for $\theta_p=\pi/12,\pi/6$), the mode lies within the isotropic dispersion curves of the pure gluonic medium. On the other hand, in the case of spheroidal momentum space anisotropy, the angular dependence of the collective mode is quite different from the magnetic field case, as shown in Fig.~\ref{disp_plot_mo2}(b). Here we consider the anisotropy tuple $\xi=(0,10)$. One can notice that the angular dependence is weaker. Moreover, the isotropic dispersions cannot be recovered by simply varying the propagation angle. It should be noted here that in this case the isotropic mode functions are the same as Eqs.~\eqref{disp_iso_1} and \eqref{disp_iso_2}, however with $\tilde{m}_D^2$ replaced by $m_D^2$. Comparing the modes in Fig.~\ref{disp_plot_mo2}(a) and Fig.~\ref{disp_plot_mo2}(b), one can observe that in both cases the plasma frequency decreases compared to the isotropic value with $N_f=3$, and for the external magnetic field, it becomes equal to the plasma frequency of the isotropic pure gluonic medium ($N_f=0$). Interestingly, due to the similar decomposition of the basis tensors, in both cases the mode characterized by $\Omega_-$ becomes identical to the corresponding isotropic transverse mode (see Eq.~\eqref{RS3}, where $\delta$ is proportional to $\tilde{m}_D^2$ and $m_D^2$ for the magnetic and spheroidal anisotropy cases, respectively) and consequently becomes independent of the propagation angle.
\begin{figure}[tbh!]
\includegraphics[width=7 cm, scale=0.7]{Oplx0z0eB30.pdf}
\includegraphics[width=7cm, scale=0.7]{Oplx0z10eB0.pdf}
\caption{Angular variation of the gluon collective mode corresponding to the mode function $\Omega_+$ is shown at fixed momentum scale $\Lambda_T=0.2$ GeV for two different cases: (a) with external magnetic field $eB=30 m_\pi^2$ and (b) with spheroidal anisotropy.
The light cone and the isotropic collective modes (with (a) $N_f=0$ and (b) $N_f=3$) are also shown for comparison.}
\label{disp_plot_mo2}
\end{figure}
\begin{figure}[tbh!]
\includegraphics[width=7 cm, scale=0.7]{x10z0th4ph6eB30.pdf}
\includegraphics[width=7cm, scale=0.7]{x10z5th4ph6eB30.pdf}
\caption{The gluon collective modes with (a) spheroidal and (b) ellipsoidal anisotropy are shown for $\theta_p=\pi/4$ and $\phi_p=\pi/6$ at fixed momentum scale $\Lambda_T=0.2$ GeV and magnetic field strength $30m_\pi^2$. The light cone is also shown for comparison.}
\label{disp_plot}
\end{figure}
Finally, the dispersion relation for the three stable modes in the presence of momentum space anisotropy as well as an external magnetic field is shown in Fig.~\ref{disp_plot}. In the left panel, we consider the momentum anisotropy along $\hat{x}$, which is orthogonal to the magnetic field direction (along $\hat{z}$), and fix the anisotropy tuple at $\xi=(10,0)$, whereas, in the right panel, the dispersion is shown for ellipsoidal momentum anisotropy with two anisotropy directions: one along the magnetic field ({\it{i.e.}} along $\hat{z}$) and the other orthogonal to it ({\it{i.e.}} along $\hat{x}$). In this case the anisotropy tuple is set at $\xi=(10,5)$. It should be noted that in the presence of either a magnetic field or a spheroidal momentum anisotropy (say along $\hat{z}$), the rotational symmetry of the system is broken and the dispersive modes depend on the direction of propagation of the gluons, which is characterized by the polar angle $\theta_p$. However, when the two anisotropy directions are considered together, as long as they are not parallel to each other, the azimuthal symmetry of the system is also broken and, consequently, the collective modes show azimuthal angular dependence. Here we consider a fixed propagation direction, now characterized by $\theta_p=\pi/4$ and $\phi_p=\pi/6$. Unlike the magnetic field case discussed earlier (where the plasma frequencies of $\omega_+$ and $\omega_-$ were degenerate), one can observe from Fig.~\ref{disp_plot}(a) that all the collective modes possess different plasma frequencies. Moreover, an overall decrease in the magnitude is observed compared to the thermo-magnetic modes (shown in Figs.~\ref{disp_plot_mo1} and \ref{disp_plot_mo2}(a)). Once the ellipsoidal anisotropy is considered, the plasma frequencies decrease further for all the modes, as can be seen from Fig.~\ref{disp_plot}(b). This is in fact expected from Eq.~\eqref{quark_loop}, as the anisotropy parameter $\xi_2$ essentially suppresses the quark loop contribution, thereby decreasing the overall magnitude.
\begin{center}
\begin{figure}[tbh!]
\begin{center}
\includegraphics[scale=0.5]{mass_eB30_piby12_xi2.pdf}
\caption{Variation of the squared mass with the polar angle $\theta_p$ is shown for each mode function at fixed values of the external parameters $\phi_p=\pi/12$, $\xi_1=10$, $\Lambda_T=0.2$ GeV and $eB=30m_\pi^2$. The continuous and the dashed curves represent $\xi_2=5$ and $\xi_2=0$, respectively.}
\label{mass}
\end{center}
\end{figure}
\end{center}
Let us now consider the influence of the magnetic field on the unstable modes of the anisotropic medium. As in the cases of spheroidal~\cite{Romatschke:2003ms} and ellipsoidal momentum anisotropy~\cite{Ghosh:2020sng}, in the limit $\omega\rightarrow0$, one can define three mass scales ($m_0$ and $m_\pm$) corresponding to the mode functions $\Omega_0$ and $\Omega_\pm$. A negative value of a given squared mass indicates the existence of an unstable mode.
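Reusing \texttt{mode\_functions} and a hypothetical \texttt{form\_factors} routine from the sketch above, this sign criterion can be checked directly; the $\omega\rightarrow0$ limit is only approximated here by a small finite frequency, and the constant form factors are purely illustrative, not physical values.
\begin{lstlisting}[language=Python]
def mass_squared(form_factors, p, mode, w_small=1e-6):
    # Approximate m^2 = lim_{w -> 0} Omega(w, p) by evaluating the
    # mode function at a small finite frequency (illustrative only).
    return mode_functions(*form_factors(w_small, p))[mode].real

# Purely illustrative constant form factors (not physical values).
form_factors = lambda w, p: (0.5, 0.3, 0.1, -0.2, 0.05, 0.02)

# A negative squared mass flags an unstable branch for this
# propagation direction (here mode=1 corresponds to Omega_+).
unstable = mass_squared(form_factors, p=0.1, mode=1) < 0.0
\end{lstlisting}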
It should be mentioned here that if one considers a two-flavour plasma instead of $N_f=3$, all the qualitative features remain the same; in the following, we study the mass scales and the instability growth rate considering $N_f=2$.
\begin{center}
\begin{figure}[tbh!]
\begin{center}
\includegraphics[width=5.5cm,scale=0.35]{mass_m1_eB_compare.pdf}
\includegraphics[width=5.5cm,scale=0.35]{mass_m2_eB_compare.pdf}
\includegraphics[width=5.5cm,scale=0.35]{mass_m3_eB_compare.pdf}
\caption{Variation of the squared masses with the polar angle $\theta_p$ is shown at $\xi_1=10$ and $\xi_2=5$ with $\phi_p$ as a parameter. The continuous and the dashed curves represent the magnetic field strengths 30$m_\pi^2$ and $0$, respectively.}
\label{mass_compare}
\end{center}
\end{figure}
\end{center}
In Fig.~\ref{mass} we show the variation of the squared mass with the propagation angle of the gluon with respect to the magnetic field direction. For a fixed $\phi_p=\pi/12$, we consider two scenarios: one with $\xi=(10,0)$ and the other with $\xi=(10,5)$. In the former case, as we increase $\theta_p$, $m^2_+$ and $m^2_-$ gradually become negative. However, a positive value is observed for $m^2_0$ throughout the $\theta_p$ range. One should note that, at small $\phi_p$ (as considered here), higher values of $\theta_p$ indicate proximity to the anisotropy axis, and the observed angular dependence of the mass scales is similar to the spheroidal anisotropy case \cite{Romatschke:2003ms}. When the momentum anisotropy along the magnetic field direction is turned on, all the mass scales become nearly independent of $\theta_p$. In this case, a prominent negative value for $m^2_+$ is observed for the entire range of the polar angle. It is interesting to compare this scenario with the ellipsoidal anisotropy results obtained in Ref.~\cite{Ghosh:2020sng}. For this purpose, in Fig.~\ref{mass_compare}, we show the directional dependence of the squared mass scales with and without the external magnetic field. Here we consider $\xi=(10,5)$. One can notice that the angular dependence of the mass scales is similar to the ellipsoidal anisotropy scenario, showing a positive $m^2_0$ throughout the considered range of $\theta_p$ and $\phi_p$ along with instability windows for $m^2_\pm$.
\begin{center}
\begin{figure}[tbh!]
\begin{center}
\includegraphics[width=7cm,scale=0.7]{insta_1.pdf}
\includegraphics[width=7cm,scale=0.7]{insta_2.pdf}
\caption{The growth rate corresponding to the $\Omega_+$ mode is plotted for (a) $\xi=(10,0)$ and (b) $\xi=(10,5)$ at fixed angles $\theta_p=\pi/3$, $\phi_p=\pi/12$. The continuous and the dashed curves correspond to the magnetic field strengths $0$ and 30$m_\pi^2$, respectively.}
\label{inst_mag}
\end{center}
\end{figure}
\end{center}
As already mentioned, negative values of the squared mass indicate the presence of unstable modes whose amplitude grows exponentially with time. The growth rate of such instabilities (that is, the imaginary part of the mode frequency) can be obtained from the pole of the effective propagator. For this purpose, the mode frequency ($p^0=\omega$) in Eq.~\eqref{disp} is replaced by $i\Gamma_{0,\pm}$, and one looks for the solution of $\Gamma$ corresponding to each mode function~\cite{Ghosh:2020sng,Kasmaei:2018yrr}. The numerical solution for $\Gamma_+$ is shown in Fig.~\ref{inst_mag} for a fixed propagation direction $(\theta_p,\phi_p)=(\pi/3,\pi/12)$.
In the left panel, we consider the spheroidal momentum anisotropy with $\xi=(10,0)$, whereas in the right panel, we take $\xi=(10,5)$, characterizing an ellipsoidal momentum space anisotropy. It can be observed that in both cases the amplitude of the growth rate significantly decreases in the presence of the external magnetic field. For the spheroidal and ellipsoidal anisotropy without any magnetic background, there exists a critical value of the momentum beyond which the growth rate becomes negative and the instability ceases to exist. When the external magnetic field is turned on, we observe a significant decrease in the critical momentum, providing a smaller momentum window for the positive growth rate. The situation may be compared to the instabilities in collisional plasma \cite{Schenke:2006xu,Jamal:2017dqs,Kumar:2017bja}, where a critical collision frequency exists beyond which the growth rate becomes negative for any value of the external momentum. In a similar way, one may expect a critical magnetic field intensity beyond which no instabilities occur. Here we recall that in the present study we have considered the field intensity $\sqrt{eB}$ as high as three times the momentum scale $\Lambda_T$ to justify the lowest Landau level approximation. Now, for the anisotropic collisional plasma, a small change in the collision frequency significantly reduces the growth rate \cite{Schenke:2006xu}. However, in the present study we find that, even if one increases the magnetic field to several times the considered value, the amplitude and the critical momentum corresponding to the growth rate hardly decrease. Thus, as far as heavy-ion collisions are concerned, a critical magnetic field intensity is unlikely to be reached in a realistic scenario.
\section{Summary and Conclusion}
In this article, the collective modes of gluons in the presence of momentum space anisotropy along with a constant background magnetic field have been studied using hard-thermal-loop perturbation theory. For this purpose, we have obtained the one-loop gluon self-energy in the real-time Schwinger--Keldysh formalism. The contributions from the gluon and ghost loops remain unaffected by the external magnetic field, whereas the entire modification arises from the quark loop contribution, which has been evaluated in the lowest Landau level approximation.
To extract the Lorentz-invariant form factors from the polarization tensor, we implement the basis decomposition obtained in Ref.~\cite{Ghosh:2020sng}, which was originally constructed to describe ellipsoidal momentum anisotropy. From the pole of the effective gluon propagator, we obtain three stable dispersive modes of the gluon. First, we compare the collective modes for spheroidal anisotropy with those of an isotropic thermal background in an external magnetic field. In both cases, the dispersion is governed by four non-vanishing form factors. Though the mode functions in terms of the form factors are identical in the two cases, the form factors themselves are different. Consequently, significant differences are observed in the angular dependence of the collective modes. When the external magnetic field is considered along with spheroidal or ellipsoidal momentum anisotropy, the azimuthal symmetry of the system is lost. As a result, the collective modes depend on the polar as well as on the azimuthal angles corresponding to the propagation direction. It is observed that, due to the dimensional reduction in the LLL approximation, the parameter $\xi_2$ that characterizes the anisotropy along the magnetic field direction appears in the quark loop only as an overall suppression factor. Thus, the momentum anisotropy along the magnetic field direction essentially counterbalances the magnetic field effects. As the quark loop contribution is suppressed in this case, we observe smaller plasma frequencies for all the collective modes. To investigate the unstable modes, we have studied the angular dependence of the squared mass scales corresponding to each mode function. Depending upon the propagation direction, we have observed negative values of the squared masses corresponding to $\Omega_\pm$, indicating instability in the collective modes. Here we note that no unstable gluon mode exists in an isotropic medium, even in the presence of a background magnetic field. It is the momentum space anisotropy that gives rise to the instability. However, the external magnetic field has a significant influence on the growth rate of the unstable modes. In particular, the amplitude as well as the critical momentum corresponding to the growth rate of the unstable mode is significantly reduced in the presence of a strong magnetic background.
This observation is similar to the instability growth rate in anisotropic collisional plasma \cite{Schenke:2006xu}, where a larger collision frequency suppresses the growth rate and, eventually, no unstable mode exists beyond a critical frequency. However, it has been argued that realistic collision frequencies usually lie below the critical value. Here also we find that, for an anisotropic thermal medium with a realistic magnetic field intensity (as expected in heavy-ion collisions), unstable collective modes do exist in certain propagation directions. The present study has several interesting future directions. First of all, due to the lowest Landau level approximation, only 1+1 dimensional quark dynamics is considered here. Consequently, the momentum anisotropy orthogonal to the magnetic field direction does not affect the quark loop contribution at all. However, a non-trivial influence of such momentum space anisotropy is expected in the weak field limit, where the energy eigenvalues of the quarks have the usual three-momentum dependence. Thus, it is interesting to contrast such a scenario with the strong field case presented here. Also, the fermionic collective modes have recently been studied in the presence of a magnetic field \cite{Das:2017vfh} and in the case of ellipsoidal anisotropy \cite{Kasmaei:2016apv}. Thus, the combined effect of the magnetic field and momentum space anisotropy on the fermionic collective modes deserves further investigation. A similar situation exists in studies of the heavy quark potential, where the effects of the external magnetic field and the momentum anisotropy have been considered individually \cite{Dumitru:2007hy,Nopoush:2017zbu,Singh:2017nfa,Hasan:2020iwa,Ghosh:2022sxi}, and their mutual influence remains to be explored. We intend to pursue such explorations in the future.
\begin{acknowledgments}
A. M. would like to acknowledge fruitful discussions with Ashutosh Dash and Sunil Jaiswal. B. K. acknowledges the HORIZON 2020 European Research Council (ERC) 2016 Consolidator grant, ERC-2016-COG: 725741: QGP TOMOGRAPHY (under contract with ERC). R. G. is funded by the University Grants Commission (UGC). A. M. acknowledges the Department of Science and Technology (DST), Government of India, for funding.
\end{acknowledgments}
\section{Introduction}
Radiology report analysis has long been a laborious and error-prone process \cite{brady2016radiology-error}, which raises the need for accurate analysis tools to alleviate the workloads of radiologists and enhance accurate diagnosis. Though existing natural language processing (NLP) toolkits such as cTAKES \cite{savova2010ctakes}, scispaCy \cite{neumann-etal-2019-scispacy}, MedTagger \cite{liu2013medtagger}, and CLAMP \cite{soysal2017clamp} have been widely used in text mining of clinical narratives in electronic health records (EHRs), none of these tools is specific to the radiology domain.
\begin{table}[]
\vspace{1em}
\centering
\begin{tabular}{llcccc}
\toprule
System & Language & Raw-Text & Locally & Fully & Open\\
 & & Processing & Process & Neural & Source\\
\midrule
MetaMap & Prolog/Java & \cmark & Hybrid & \xmark & \cmark\\
cTakes & Java & \cmark & \cmark & \xmark & \cmark\\
medspaCy & Python & \cmark & \cmark & \xmark & \cmark\\
MedTagger & Java/C & \cmark & \cmark & \xmark & \cmark\\
CLAMP & Java & \cmark & \cmark & Hybrid & \xmark\\
\midrule
RadText\xspace & Python & \cmark & \cmark & Hybrid & \cmark\\
\bottomrule
\end{tabular}
\vspace{.5cm}
\caption{Feature comparisons of RadText\xspace against other widely used NLP toolkits. Fully Neural: full neural network pipeline.}
\label{tab:comparison}
\end{table}
One recognized challenge is the requirement of proper radiology domain knowledge, without which the process of analyzing the structure of radiology text and interpreting the underlying meaning would be highly error-prone. For example, standardized terminology for each concept is important for NLP applications. Existing clinical NLP systems frequently use the UMLS Metathesaurus as the medical lexicon \cite{bodenreider2004unified}. However, few support RadLex, which offers radiology-specific terms such as devices and imaging techniques \cite{langlotz2006radlex}. As a result, ambiguous terms (e.g., acronyms) can be interpreted differently. Another example is negation detection, which is also essential in radiology because diagnostic imaging is often used to rule out a condition. Systems in the clinical domain frequently implement this functionality by combining manually crafted rules with key terms based on syntactic analysis \cite{chapman2013extending, chapman2011documentlevel}. While they usually achieve good results in the general clinical domain, most cannot be directly applied to radiology reports, largely because sentences in radiology reports are usually telegraphic, with missing subjects and verbs. In addition, sentences in radiology reports often contain long, complicated noun phrases. These obstacles pose a challenge to existing parsers that are modeled over well-formed sentences \cite{fan2013syntactic}. Therefore, the performance of negation detection algorithms drops significantly \cite{peng2017negbio} in the case of radiology reports. In such cases, filling in the gaps requires additional rules to handle ill-formed sentences. Another challenge is that different software tools operate on data in various formats. It thus remains challenging to seamlessly interchange data in and between different NLP tools. Such a bottleneck prevents combining these tools into a larger, more powerful, and more capable system in the clinical domain.
To bridge this gap, the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) was proposed to harmonize disparate observational databases of EHRs \cite{voss2015CDM}. The goal is to transform the data contained within those databases into a common format (data model) and representation (terminologies, vocabularies, coding schemes) so that systematic analyses can be conducted in the common format. While OMOP CDM is an excellent schema for storing structured data and provides a \texttt{NOTE\_NLP} table to store final NLP results, it does not support representing the complex, messy data passed between different NLP modules, such as the hierarchical note structure (section, passage, sentence, token). Furthermore, it is almost impossible to store the parse trees of each sentence in the \texttt{NOTE\_NLP} table. However, such text-preprocessing information is frequently reused in NLP algorithms and should be interchangeable and reusable. In addition, OMOP CDM must be realized in a relational database, which most common NLP tools do not support. These limitations constitute the main barrier to the reuse of tools and modules and to the development of text mining pipelines customized for different workflows. One alternative solution is the BioC format \cite{comeau2013bioc}, a simple XML-based format for sharing text data and annotations. Unlike OMOP CDM, BioC emphasizes simplicity, interoperability, and broad use and reuse of data interchange. It is thus suitable for representing, storing, and exchanging NLP results, especially complex intermediate results, in a simple manner. However, as it was initially designed for sharing annotations relevant to biomedical research, BioC cannot be directly used for clinical notes. To overcome this issue, we propose to extend the BioC format with the OMOP CDM schema, called BioC-CDM, to store the results generated in the annotation process of clinical NLP so that they can be easily converted and imported into OMOP CDM.
In this work, we present RadText\xspace, an open-source Python radiology text analysis system. Unlike previous methods, RadText\xspace features a hybrid text analysis pipeline that utilizes high-performance third-party implementations, including machine learning-based and rule-based methods. As shown in Table \ref{tab:comparison}, compared to existing widely used NLP toolkits, RadText\xspace has the following advantages:
\begin{itemize}
\item \textbf{Unified Interface}. RadText\xspace uses the BioC-CDM format as the unified interface throughout the system pipeline. The BioC format simplifies data representation and data exchange and satisfies all the NLP task requirements in RadText\xspace.
\item \textbf{Compatible with OMOP CDM}. RadText\xspace standardizes its outputs into a structured representation compatible with OMOP CDM. This allows for transforming data into a common representation and further enables a systematic analysis of disparate observational data sources.
\item \textbf{Easy to Use}. RadText\xspace provides a user-friendly interface. RadText\xspace sequentially runs de-identification, section segmentation, sentence split, word tokenization, named entity recognition, parsing, and negation detection. The modular design greatly improves flexibility, enabling users to adjust any module according to their specific use case and to re-run each module if needed.
\item \textbf{Raw Text Processing}. RadText\xspace takes raw text as input, which means no text preprocessing (e.g., tokenization, annotation) is needed.
This greatly enhances the usability and generalizability of RadText\xspace.
\item \textbf{Local Machine}. The entire pipeline of RadText\xspace runs locally on CPU machines. No data is uploaded to remote servers, which greatly preserves user data privacy.
\item \textbf{Open Source}. To facilitate and drive future clinical NLP research and applications, RadText\xspace is fully open source. We make the source code, documentation, examples, and the human-annotated test set publicly available.
\end{itemize}
\section{Related Work}
Various NLP toolkits have been introduced to the clinical NLP community \cite{pons2016naturala} and have been successfully applied to information extraction from clinical text. MetaMap \cite{aronson2010metamap} uses a knowledge-intensive approach based on symbolic, NLP, and computational-linguistic techniques to map biomedical text into the Unified Medical Language System (UMLS) Metathesaurus \cite{bodenreider2004umls}. The Apache Clinical Text Analysis and Knowledge Extraction System (cTAKES) focuses on extracting clinical information from electronic health record free text, including processing clinical notes and identifying clinical named entities \cite{savova2010ctakes}. Different from MetaMap and Apache cTAKES, which utilize machine learning methods to map words to medical concepts, MedTagger is built upon a fast string matching algorithm that leverages lexical normalization for indexing \cite{liu2013medtagger}. It thus requires rule design and expert knowledge engineering. Rather than performing information extraction alone, medspaCy \cite{eyre2021medspacy} and Clinical Language Annotation, Modeling and Processing (CLAMP) \cite{soysal2017clamp} are designed to be modular, so that users can choose among modular components for their individual applications. medspaCy features performing clinical NLP and text processing tasks with the popular spaCy \cite{honnibal2020spacy} framework, which provides a robust architecture for building and sharing custom, high-performance NLP pipelines \cite{eyre2021medspacy}. CLAMP also highlights enabling users to quickly build customized NLP pipelines for their clinical NLP tasks. Distinguished from these previous works, RadText\xspace aims to provide a high-performance clinical NLP toolkit in Python that focuses on radiology text analysis. RadText\xspace hence adopts a hybrid radiology text processing pipeline, bringing together a number of third-party analysis tools in the radiology domain, with each tool implementing one or more components of RadText\xspace's pipeline.
\section{System Design and Architecture}
\subsection{BioC-CDM: BioC format compatible with OMOP CDM}\label{bioc}
\begin{table*}[ht]
\centering
\begin{threeparttable}[b]
\begin{tabularx}{.99\textwidth}{lllX}
\toprule
OMOP CDM field & BioC field & BioC class & Description\\
\midrule
note\_nlp\_id & id & annotation & A unique identifier for each term extracted from a note.\\
note\_id & doc & document & A foreign key to the Note table, uniquely identifying the note.\\
section\_concept\_id & section\_concept\_id & passage & A foreign key to the predefined Concept in the Standardized Vocabularies representing the section of the extracted term.\\
snippet & - & - & A small window of text surrounding the term. \\
offset & offset & \begin{tabular}[t]{@{}l@{}}passage\\sentence\\ annotation\end{tabular} & Character offset of the extracted term in the input note.\\
lexical\_variant & text & annotation & Raw text extracted by the NLP tool. \\
note\_nlp\_concept\_id & lemma & annotation & A foreign key to a Concept table, representing the normalized concept of the extracted term.\\
note\_nlp\_source\_concept\_id & source\_concept\_id & annotation & A foreign key to a Concept table that refers to the code in the source vocabulary used by the NLP system.\\
nlp\_system & nlp\_system & collection & Name and version of the NLP system that extracted the term.\\
nlp\_date,nlp\_date\_time & date & collection & The date of the note processing. \\
term\_exists & exists\tnote{1} & annotation & If the patient actually has or had the condition.\\
term\_temporal & temporal & annotation & If a condition is ``present'' or just in the ``past''.\\
term\_modifiers & modifiers & annotation & Describes compactly all the modifiers extracted by the NLP system.\\
\bottomrule
\end{tabularx}
\begin{tablenotes}
\item [1] currently called ``negation''
\end{tablenotes}
\caption{Mapping radiology notes to the OMOP CDM and BioC using RadText\xspace.}
\label{tab:bioc-cdm}
\end{threeparttable}
\end{table*}
We propose BioC-CDM to store the results generated in the annotation process of clinical NLP in the BioC format so that they can be easily converted and imported into OMOP CDM. A BioC-format file is an XML document that serves as the basis of data representation and data exchange, and it satisfies the needs of RadText\xspace's NLP tasks throughout the entire pipeline \cite{comeau2013bioc}. OMOP CDM harmonizes disparate coding systems to a standardized vocabulary with minimal information loss. As a result, adopting BioC-CDM as RadText\xspace's unified interface and using it as the common format for all modular components' output eliminates the barrier to integration and greatly enhances RadText\xspace's interoperability. Table \ref{tab:bioc-cdm} shows the current and our proposed mappings between OMOP CDM and BioC. Section \ref{api-usage-convert} shows how RadText\xspace can be used to implement the mutual conversion between the BioC format and OMOP CDM.
\subsection{Pipeline}
The implementation of RadText\xspace is highly modular (Figure \ref{fig:architecture}). We highlight the details of each module in this section.
\begin{figure}[h]
\centering
\includegraphics[width=0.35\textwidth]{radtext-architecture.pdf}
\caption{Overview of RadText\xspace's NLP pipeline, main components, and implementations.}
\label{fig:architecture}
\end{figure}
\subsubsection{De-Identification}
Radiology reports often contain protected health information (PHI), such as patient and provider names, addresses, and numbers \cite{Norgeot2020ProtectedHI}. Removal of PHI is important; otherwise, radiology reports remain largely unused for research. To address this issue, RadText\xspace uses Philter \cite{Norgeot2020ProtectedHI} for de-identification. It uses both rule-based and statistical approaches to remove identifiers defined in the HIPAA Safe Harbor guidelines \cite{rightsocr2012guidance}. The following code snippet shows an example of RadText\xspace's de-identification output. The mentions of the patient's name, the provider's name, and dates are PHI; each is replaced with a sequence of ``X''s for de-identification purposes.
\begin{testexample}[Output in BioC]
\lstset{language=XML}
\begin{lstlisting}
<infon key="nlp_system">Philter</infon>
<document>
<passage>
<text>Patient's Name: XXXXXXXXXXXXXX Referred by: XXXXXXXXXXX XX Date Taken: XXXXXXXXXX Date of Report: XXXXXXXXXX Clinical statement: Shortness of breath, wheezing, and bilateral lower extremity edema. Technique: AP and lateral chest radiographs. Comparison: XXXXXXXXXXXXX...</text>
<annotation id="A0">
<infon key="source_concept">Date</infon>
<infon key="source_concept_id">C1547350</infon>
<location offset="70" length="10"/>
<text>02/07/2016</text>
</annotation>
<annotation id="A1">
<infon key="source_concept">Date</infon>
<infon key="source_concept_id">C1547350</infon>
<location offset="97" length="10"/>
<text>02/07/2016</text>
</annotation>
<annotation id="A2">
<infon key="source_concept">Date</infon>
<infon key="source_concept_id">C1547350</infon>
<location offset="263" length="13"/>
<text>July 18, 2015</text>
</annotation>
<annotation id="A5">
<infon key="source_concept">Person Name</infon>
<infon key="source_concept_id">C1547383</infon>
<location offset="16" length="14"/>
<text>LATTE, MONICA</text>
</annotation>
<annotation id="A6">
<infon key="source_concept">Person Name</infon>
<infon key="source_concept_id">C1547383</infon>
<location offset="43" length="11"/>
<text>SAVEM, CARL</text>
</annotation>
<annotation id="A7">
<infon key="source_concept">Degree/license/certificate</infon>
<infon key="source_concept_id">C1547754</infon>
<location offset="55" length="2"/>
<text>MD</text>
</annotation>
</passage>
...
</document>
\end{lstlisting}
\end{testexample}
\subsubsection{Section Segmentation}
Although radiology reports are in the form of free text, they are often structured in terms of sections, such as INDICATION, FINDINGS, and IMPRESSION.
Identifying section types and section boundaries can help various successive processing steps to use a subset of sections or assign specific weights to the content of different sections \cite{tepper2012statistical}. For example, effusion and edema are mentioned in the INDICATION section of the sample report below, but we should not identify them as positive because the radiologist ruled them out in the FINDINGS section. Therefore, a named entity recognition tool that does not differentiate between sections will likely make errors.
\begin{testexample}[An example of chest x-ray report]
\small
\begin{alltt}
{INDICATION}: Please evaluate for pneumonia, {effusions, edema}
{FINDINGS}: The lungs are clear without consolidation, {effusion or edema}...
{IMPRESSION}: No acute cardiopulmonary process.
\end{alltt}
\end{testexample}
In a preprocessing step, RadText\xspace splits each report into sections and provides two options: NegBio or medspaCy. Both approaches rely on hand-coded heuristics for section segmentation (boundary detection) and achieve good performance.
\begin{itemize}
\item \textbf{NegBio}. The heuristics in NegBio are based on conventions like the capitalization of headers and the presence of colons and blank lines between headers and text. The set of heuristics was collected from the NIH Chest X-ray dataset \cite{Wang2017ChestXRay8HC} and the MIMIC-CXR dataset \cite{johnson2019mimic}.
\item \textbf{medspaCy}. medspaCy includes an implementation of clinical section detection based on rule-based matching of section titles, with the default rules adapted from SecTag \cite{denny2008sectag} and expanded through practice. The default rules were collected from different resources, such as the Logical Observation Identifiers Names and Codes (LOINC) headers \cite{mcdonald2003loinc} and the Quick Medical Reference (QMR) Findings Hierarchy \cite{miller1989use}, and were further revised based on actual clinical notes from the Vanderbilt EHR.
\end{itemize}
The following code snippet shows an example of the section segmentation output for the sample report above.
\begin{testexample}[Output in BioC]
\begin{lstlisting}[language=XML2]
<infon key="nlp_system">NegBio</infon>
<document>
<passage>
<infon key="section_concept">clinical information section </infon>
<infon key="section_concept_id">RID13166</infon>
<offset>0</offset>
<text>INDICATION:</text>
</passage>
<passage>
<offset>12</offset>
<text>Please evaluate for ... edema</text>
</passage>
<passage>
<infon key="section_concept">observations section</infon>
<infon key="section_concept_id">RID28486</infon>
<offset>60</offset>
<text>FINDINGS:</text>
</passage>
<passage>
<offset>70</offset>
<text>The lungs are clear ... edema</text>
</passage>
...
</document>
\end{lstlisting}
\end{testexample}
\subsubsection{Sentence Split and Word Tokenization}
RadText\xspace tokenizes the input raw text and groups tokens into sentences as part of preprocessing. RadText\xspace offers three options to tokenize and split reports into sentences: NLTK \cite{bird2009NLTK}, spaCy \cite{honnibal2020spacy}, and Stanza \cite{qi2020stanza}.
\begin{itemize}
\item \textbf{NLTK}. The sentence tokenizer in NLTK uses an unsupervised algorithm to build a model for abbreviation words, collocations, and words that start sentences. It then uses that model to find the sentence boundaries \cite{bird2009NLTK}.
\item \textbf{spaCy}. Sentence segmentation is part of spaCy's English pipeline.
It uses a variant of the non-monotonic arc-eager transition system \cite{honnibal-johnson-2015-improved} with the addition of a ``break'' transition for sentence segmentation \cite{honnibal2020spacy}.
\item \textbf{Stanza}. Stanza combines tokenization and sentence segmentation from raw text into a single module in its pipeline. Stanza models it as a tagging task over character sequences, where the model predicts whether a given character is the end of a token, the end of a sentence, or the end of a multi-word token.
\end{itemize}
The following code snippet gives an example of RadText\xspace's sentence split output. The input paragraph is split into three \texttt{Sentence} instances.
\begin{testexample}[Output in BioC]
\lstset{language=XML2}
\begin{lstlisting}
<infon key="nlp_system">NLTK</infon>
<document>
<passage>
<text>PA and lateral radiographs demonstrate clear lungs. Heart size is normal. There is no pneumothorax or pleural effusion.</text>
<sentence>
<offset>0</offset>
<text>PA and lateral ... clear lungs.</text>
</sentence>
<sentence>
<offset>52</offset>
<text>Heart size is normal.</text>
</sentence>
<sentence>
<offset>73</offset>
<text>There is no ... pleural effusion.</text>
</sentence>
</passage>
...
</document>
\end{lstlisting}
\end{testexample}
\subsubsection{Named Entity Recognition}
Named entity recognition (NER) aims to identify words or phrases in text and assign them predefined labels that describe the concepts of interest in a given domain \cite{nadeau2007ner}. To recognize radiology-domain named entities (e.g., thoracic disorders) in each input sentence, RadText\xspace offers two options: a spaCy-enabled rule-based method and MetaMap.
\begin{itemize}
\item \textbf{Rule-based Regular Expression}. Rule-based NER methods use regular expressions that combine information from terminological resources and characteristics of the entities of interest, manually constructed from the report corpus. RadText\xspace adopts spaCy's PhraseMatcher as part of this component. Rules defining concepts specify the text patterns to be matched and additional information about a concept, such as its unique id in the terminology.
\item \textbf{MetaMap}. UMLS is the most comprehensive standard terminology and is typically used as the basis for clinical concept extraction. Enabled by MetaMap, RadText\xspace is able to detect all the concepts in UMLS and map them to Concept Unique Identifiers (CUIs). In general, MetaMap is much more comprehensive than vocabulary-based patterns, but at the same time, it can be noisy and less accurate.
\end{itemize}
The following code snippet shows an example of RadText\xspace's NER output, where ``Pneumonia'' and ``Pneumothorax'' are correctly recognized and their corresponding concept IDs are also identified.
\begin{testexample}[Output in BioC]
\lstset{language=XML}
\begin{lstlisting}
<infon key="nlp_system">MetaMap</infon>
<document>
<passage>
<text>There is no pneumonia or pneumothorax.</text>
<annotation id="a1">
<infon key="source_concept">Pneumonia</infon>
<infon key="source_concept_id">RID5350</infon>
<location offset="12" length="9"/>
<text>pneumonia</text>
</annotation>
<annotation id="a2">
<infon key="source_concept">Pneumothorax</infon>
<infon key="source_concept_id">RID5352</infon>
<location offset="24" length="12"/>
<text>pneumothorax</text>
</annotation>
</passage>
...
</document>
\end{lstlisting}
\end{testexample}
\subsubsection{Parsing}
RadText\xspace utilizes the universal dependency graph (UDG) to describe the grammatical relationships within a sentence in a way that can be understood by non-linguists and effectively used by downstream processing tasks \cite{peng2017negbio}. A UDG is a directed graph that represents all universal dependency information within a sentence. The vertices in a UDG represent information such as the word, lemma, and part-of-speech tag. The edges in a UDG represent the typed dependencies from the governor to its dependent and are labeled with the corresponding dependency type. A UDG effectively represents the syntactic head of each word in a sentence and the dependency relations between words. Figure \ref{fig:ud} shows a UDG example of the sentence ``There is no pleural effusion or pneumothorax'' generated by Stanza \cite{qi2020stanza}. In this example, ``pleural'' is the adjectival modifier of ``effusion'', and ``effusion'' and ``pneumothorax'' are coordinated findings.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{radtext-ud.pdf}
\caption{The obtained dependency graph of ``There is no pleural effusion or pneumothorax'' using Stanza \cite{qi2020stanza}.}
\label{fig:ud}
\end{figure}
To obtain the UDG of a sentence, RadText\xspace provides two options: Stanza or the Bllip parser with the Stanford dependencies converter \cite{charniak-johnson-2005-coarse}.
\begin{itemize}
\item \textbf{Stanza}. Stanza's dependency parsing module builds a tree structure of words from the input sentence, representing the syntactic dependency relations between words. After \emph{tokenization}, \emph{multi-word token (MWT) expansion}, \emph{part-of-speech (POS) and morphological features tagging}, and \emph{lemmatization}, each sentence is parsed directly into the universal dependencies structure \cite{qi2020stanza}.
\item \textbf{Bllip Parser with Stanford dependencies converter}. RadText\xspace first parses each sentence to obtain the parse tree using the Bllip parser, which was trained with the biomedical model \cite{charniak-johnson-2005-coarse, Charniak2010AnyDP}. It then applies the Stanford dependencies converter to the resulting parse tree with the \emph{CCProcessed} and \emph{Universal} options \cite{de-marneffe-etal-2014-universal, marneffe2008dependency} to derive the universal dependencies.
\end{itemize}
The following code snippet shows an example of RadText\xspace's parsing result. In the sample sentence, ``effusion'' and ``pneumothorax'' are assigned the node ids ``T31'' and ``T33'', respectively. Derived from the universal dependency result, there is a conjunction relation between ``T31'' and ``T33''.
\begin{testexample}[Output in BioC]
\lstset{language=XML}
\begin{lstlisting}
<infon key="nlp_system">Bllip Parser</infon>
<document>
<passage>
<sentence>
<infon key="parse tree">(S1 (S (S (NP (EX There)) (VP (VBZ is) (ADVP (RB no)) (NP (NP (JJ pleural) (NN effusion)) (CC or) (NP (NN pneumothorax))))) (. .)))</infon>
<text>There is no pleural effusion or pneumothorax.</text>
...
<annotation id="T31">
<text>effusion</text>
</annotation>
<annotation id="T33">
<text>pneumothorax</text>
</annotation>
...
<relation id="R33">
<infon key="dependency">conj</infon>
<node refid="T33" role="dependant"/>
<node refid="T31" role="governor"/>
</relation>
...
</sentence>
...
</passage>
...
</document>
\end{lstlisting}
\end{testexample}
\subsubsection{Negation Detection}
Negative and uncertain medical findings are frequent in radiology reports \cite{Chapman2001EvaluationON}. Since they may indicate the absence of findings mentioned within the radiology report, identifying them is as important as identifying positive findings. For negation and uncertainty detection, RadText\xspace employs NegBio \cite{peng2017negbio, Wang2017ChestXRay8HC}, which utilizes universal dependencies for pattern definition and subgraph matching for graph traversal search, so that the scope of negation/uncertainty detection is not limited to a fixed word distance \cite{de-marneffe-etal-2014-universal}. The following code snippet shows an example of RadText\xspace's negation detection output. In this sample sentence, ``pneumothorax'' is identified as negative according to NegBio's internal negation rule with ID ``nn180''.
\begin{testexample}[Output in BioC]
\lstset{language=XML}
\begin{lstlisting}
<infon key="nlp_system">NegBio</infon>
<document>
<passage>
<text>There is no pneumonia or pneumothorax.</text>
...
<annotation id="a2">
<infon key="source_concept">Pneumothorax</infon>
<infon key="source_concept_id">RID5352</infon>
<infon key="exists">False</infon>
<infon key="negation">True</infon>
<infon key="negbio_pattern_id">nn180</infon>
<infon key="negbio_pattern_str">{}=f &gt;{} {lemma:/no/}=k0</infon>
<location offset="24" length="12"/>
<text>pneumothorax</text>
</annotation>
</passage>
...
</document>
\end{lstlisting}
\end{testexample}
\section{System Usage}
RadText\xspace is designed to have a user-friendly interface and to allow quick out-of-the-box usage for radiology text analysis. To achieve this, RadText\xspace provides automated pipeline usage and a step-by-step modular design. Users can therefore run RadText\xspace directly through the command line interface or import RadText\xspace as a Python library and use any functionality through RadText\xspace's API.
\subsection{Installation}
The latest RadText\xspace releases are available on PyPI \footnote{\url{https://pypi.org/project/radtext/}}. Using pip, RadText\xspace releases can be downloaded as source packages and binary wheels. It is also generally recommended to install RadText\xspace packages in a virtual environment to avoid modifying the system state:
\begin{testexample}[Installation instructions]
\begin{lstlisting}[language=bash]
$ python -m venv venv
$ source venv/bin/activate
$ pip install -U radtext
$ python -m spacy download en_core_web_sm
$ radtext-download --all
\end{lstlisting}
\end{testexample}
\subsection{Command Line Usage}
The following command runs RadText\xspace's entire pipeline in the sequential order of de-identification, section segmentation, sentence split and word tokenization, NER, parsing, and negation detection. The default section title vocabulary for the section segmentation module and the concept vocabulary for the NER module are designed to be configurable. All intermediate result files will be generated and saved for use and reuse. The automatic pipeline execution enables users to use RadText\xspace as an out-of-the-box toolkit without the need to figure out how each module of RadText\xspace works.
\begin{testexample}[An example of command line usage]
\begin{lstlisting}[language=bash]
$ bash run_pipeline.sh
\end{lstlisting}
\end{testexample}
In addition to running RadText\xspace's pipeline as a whole, users can also choose to run every single module of RadText\xspace through easy-to-use command line commands (see Table \ref{tab:radtext-commands}). This enables users to re-run each modular component to reproduce its result in case of any error, without the need to re-run RadText\xspace's entire pipeline. All intermediate results are saved so that users can easily check the output of each module, which we believe will greatly facilitate error analysis and enhance RadText\xspace's flexibility. The following code snippet shows an example of RadText\xspace's modular command line usage.
\begin{testexample}[An example of modular command line usage]
\begin{lstlisting}[language=bash]
$ [command] [options] -i INPUT -o OUTPUT
$ radtext-deid -i /path/to/input.xml -o /path/to/output.xml
\end{lstlisting}
\end{testexample}
\begin{table}[h]
\centering
\begin{tabular}{ll}
\toprule
Command & Description \\
\midrule
radtext-download & Downloads all models needed.\\
radtext-deid & De-identifies all the reports.\\
radtext-secsplit & Segments sections. \\
radtext-ssplit & Splits sentences and tokenizes words. \\
radtext-ner & Recognizes named entities.\\
radtext-parse & Parses the sentences to obtain the parse tree. \\
radtext-tree2dep & Converts parse trees to universal dependency graphs. \\
radtext-neg & Detects negations.\\
radtext-collect & Collects and merges labels.\\
radtext-csv2bioc & Converts CSV format to BioC format. \\
radtext-cdm2bioc & Converts OMOP CDM format to BioC format. \\
radtext-bioc2cdm & Converts BioC format to OMOP CDM format.\\
\bottomrule
\end{tabular}
\vspace{.5cm}
\caption{Command line commands.}
\label{tab:radtext-commands}
\end{table}

\subsection{Python API Usage}
RadText\xspace can be directly imported as a Python library. Users can access all of RadText\xspace's functionalities through its Python API.

\subsubsection{BioC-CDM Conversion}
\label{api-usage-convert}
RadText\xspace's Python API supports the mutual conversion between the BioC format and OMOP CDM. The following code snippet shows an example of converting the BioC format to CDM and then converting CDM back to the BioC format.

\begin{testexample}[An example of API usage]
\lstset{language=Python}
\begin{lstlisting}
import bioc
from radtext import BioC2CDM, CDM2BioC

# initialize RadText's BioC2CDM converter.
bioc2cdm = BioC2CDM()
with open(filepath) as fp:
    collection = bioc.load(fp)
cdm_df = bioc2cdm(collection)

# initialize RadText's CDM2BioC converter.
cdm2bioc = CDM2BioC()
bioc_collection = cdm2bioc(cdm_df)
\end{lstlisting}
\end{testexample}

\subsubsection{Pipeline Usage}
The following code snippet shows a minimal usage of RadText\xspace's entire pipeline through the Python API, which annotates a sample report and prints out all annotation results.

\begin{testexample}[An example of API usage]
\lstset{language=Python}
\begin{lstlisting}
import bioc
import radtext

# initialize RadText's pipeline.
nlp = radtext.Pipeline()

# load a BioC-format sample report.
with open(filepath) as fp:
    doc = bioc.load(fp)

# run RadText's pipeline on the sample report.
collection = nlp(doc)
print(collection)
\end{lstlisting}
\end{testexample}

After running all modules, RadText\xspace returns a \texttt{Collection} instance that stores the final annotation results. Within a \texttt{Collection} instance, the annotations are stored in either \texttt{Passage} or \texttt{Sentence} classes. The following code snippet shows how we can access the detected disease findings and the corresponding negation status after obtaining the \texttt{Collection} instance.
\begin{testexample}[An example of API usage]
\lstset{language=Python}
\begin{lstlisting}
for doc in collection.documents:
    for passage in doc.passages:
        for annotation in passage.annotations:
            print(annotation.infon['source_concept'],
                  annotation.infon['negation'])
\end{lstlisting}
\end{testexample}

RadText\xspace's Python API also allows partial pipeline execution. Therefore, users can pause after any module of RadText\xspace to access the intermediate NLP results. The following code snippet shows an example of the partial execution of RadText\xspace. By specifying the \texttt{annotators} to be \emph{secsplit} and \emph{ssplit}, RadText\xspace will run section segmentation and sentence split sequentially. Since each module requires the output of the previous module, RadText\xspace's API will automatically run every module until the specified modules have finished. The output \texttt{Collection} instance will have the annotation results of the sentence split.

\begin{testexample}[An example of API usage]
\lstset{language=Python}
\begin{lstlisting}
import bioc
import radtext

# initialize RadText's pipeline, which will perform
# section segmentation and sentence split.
nlp = radtext.Pipeline(annotators=['secsplit', 'ssplit'])

# load a BioC-format sample report.
with open(filepath) as fp:
    doc = bioc.load(fp)

# run RadText's pipeline on the sample report.
collection = nlp(doc)
print(collection)
\end{lstlisting}
\end{testexample}

\section{Evaluation}
\subsection{Dataset}
We evaluated RadText\xspace on the MIMIC-CXR dataset \cite{johnson2019mimic}. MIMIC-CXR is a large publicly available dataset of radiographic studies performed at the Beth Israel Deaconess Medical Center. This dataset contains 227,827 radiology reports in total.

\subsection{Experiments and Results}
We evaluated RadText\xspace's performance on five new disease findings that were not covered by previous works: Calcification of the Aorta, Pneumomediastinum, Pneumoperitoneum, Subcutaneous Emphysema, and Tortuous Aorta.

\begin{table}[!htpb]
\centering
\small
\begin{tabular}{lccc}
\toprule
Disease Finding & Precision & Recall & F-1 \\
\midrule
Calcification of the Aorta & 1.00 & 0.87 & 0.93 \\
Pneumomediastinum & 0.70 & 1.00 & 0.82 \\
Pneumoperitoneum & 0.88 & 1.00 & 0.94 \\
Subcutaneous Emphysema & 0.95 & 0.91 & 0.93 \\
Tortuous Aorta & 1.00 & 0.94 & 0.97 \\
\midrule
Macro Average & 0.91 & 0.94 & 0.92\\
\bottomrule
\end{tabular}
\vspace{0.5cm}
\caption{RadText\xspace performance on five new disease findings.}
\label{tab:f-1scores}
\end{table}

We randomly selected 200 test reports from the MIMIC-CXR dataset and manually annotated the five new disease findings. We evaluated RadText\xspace by comparing its results with the manually-annotated gold standard. Precision, recall, and F1-score were computed based on the number of true positives, false positives, and false negatives (see Table \ref{tab:f-1scores}). The average precision is 0.91, with the highest precision being 1.00 for Calcification of the Aorta and Tortuous Aorta; the average recall is 0.94, with the highest recall being 1.00 for Pneumomediastinum and Pneumoperitoneum; and the average F-1 score is 0.92, with the highest F-1 score being 0.97 for Tortuous Aorta.
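For reference, these scores follow the standard definitions of precision, recall, and F1. The small sketch below, with hypothetical count variables, shows the computation.

\begin{testexample}[Metric computation sketch]
\lstset{language=Python}
\begin{lstlisting}
def precision_recall_f1(tp, fp, fn):
    """Standard metrics from true positive, false positive
    and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g., tp=20, fp=0, fn=3 gives precision 1.00 and recall ~0.87
\end{lstlisting}
\end{testexample}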
All reports in the MIMIC-CXR dataset were analyzed using RadText\xspace (see Table \ref{tab:mimic-radtext-results}). Among the five new disease findings, Calcification of the Aorta is mentioned in 3,380 reports, while Pneumoperitoneum is mentioned in only 1,604 reports. The labels can also be found on the RadText\xspace homepage.

\begin{table}[ht!]
\begin{tabular}{lrrrr}
\toprule
Finding & Positive & Negative & Uncertain & \textbf{Total} \\
\midrule
Calcification of the Aorta & 3,344 & 13 & 23 & 3,380 \\
Pneumomediastinum & 779 & 856 & 131 & 1,766 \\
Pneumoperitoneum & 580 & 938 & 86 & 1,604 \\
Subcutaneous Emphysema & 2,529 & 131 & 31 & 2,691 \\
Tortuous Aorta & 2,681 & 41 & 131 & 2,853 \\
\bottomrule
\end{tabular}
\vspace{.5cm}
\caption{Statistics of five new disease findings in the MIMIC-CXR dataset.}
\label{tab:mimic-radtext-results}
\end{table}

\section{Conclusion and Future Work}
In this work, we presented RadText\xspace, a high-performance Python radiology text analysis system. We highlighted that RadText\xspace features hybrid neural analysis, raw text processing and local processing, bringing better usability and data privacy. RadText\xspace's modular design, user-friendly interface, easy-to-use command line tools and Python APIs give users great flexibility in radiology text analysis tasks. We evaluated RadText\xspace on the MIMIC-CXR dataset, especially on five new disease findings that were not covered by previous work, and the results demonstrated RadText\xspace's superior performance on radiology report analysis. RadText\xspace employs BioC-CDM, which stores the results in the extended BioC format that is compatible with OMOP CDM. RadText\xspace's compatibility with OMOP CDM supports collaborative research across disparate data sources.

In the future, RadText\xspace will be continuously maintained and expanded as new resources become available. For example, the NER module can be improved by incorporating scispaCy, which was developed for processing biomedical, scientific or clinical text \cite{neumann-etal-2019-scispacy}. By making RadText\xspace publicly available, we envision that it can facilitate future research and applications in the healthcare informatics community.

\section*{Acknowledgment}
This work is supported by the National Library of Medicine under Award No. 4R00LM013001 and the NIH Intramural Research Program, National Library of Medicine.

\bibliographystyle{IEEEtran}
\section{Introduction}
Retrieving 3D information from image pairs is a major topic in computer vision and has become even more popular in recent years because of the advances in autonomous driving, robotics and remote sensing. A typical stereo method consists of the following four steps: feature extraction, matching cost calculation, disparity estimation and disparity refinement. In the past, handcrafted feature extraction methods like Census~\cite{feat:census} or dense gradient features~\cite{feat:dense} were used. In recent years, however, many applications have shown that deep learning methods are advantageous~\cite{disp:mc_cnn}\cite{disp:cnn_crf}\cite{disp:ga_net}\cite{disp:psm_net}\cite{disp:gc_net}\cite{disp:efficient_stereo} and can improve matching results in many real-world challenges. Such challenges include, for example, textureless areas like floors, walls or the sky, specular reflections on smooth surfaces, thin structures and clutter. By learning more expressive features for such challenging areas, the number of correctly found matches can be improved.

We follow the work of Zbontar and LeCun~\cite{disp:mc_cnn} by creating a shared-weights siamese network structure for feature extraction. However, instead of using one connection between subsequent layers, we use a densely connected network structure. As described by G. Huang et al.~\cite{densenet}, this structure helps to alleviate the vanishing-gradient problem due to better feature reuse and better feature propagation. This allows us to reduce the number of trainable parameters in comparison to traditional feed-forward CNN networks. Our whole network structure is illustrated in Fig.~\ref{fig:network_arch}. The colored arrows depict the additional connections of the dense network structure between layers.

\begin{figure}[!t]
\centering
\includegraphics[width=9cm]{Netzwerk_new2}
\caption{FC-DCNN network structure. The left and the right image are processed at the same time by individual branches with shared weights. Each branch consists of 5 convolutional layers (Conv1-Conv5) with $k=3x3$ kernels and $m=64$ feature maps. After the last layer a cost volume is created using the cosine similarity. The final prediction is the winner-takes-all estimate along the third dimension of the cost-volume.}
\label{fig:network_arch}
\end{figure}

Matching costs are based on similarity measurements between the extracted features of the image pairs. Over the years many different similarity measurements have been studied and proposed, such as the sum of absolute differences/sum of squared differences (SAD/SSD)~\cite{feat:sadssd}, normalized cross correlation (NCC)~\cite{feat:ncc} or Mutual Information (MI)~\cite{mi}. Previous works have integrated the learning of a matching cost function within their network architecture. The advantage of this is that the whole matching pipeline is fully automated and trainable; however, we decided against it in our implementation, as the corresponding parts of the network would considerably increase the complexity while improving only slightly (about 1--2\% in our experiments) over traditional matching costs like normalized cross correlation~\cite{feat:ncc} or cosine similarity.

Once all the matches are calculated, the most likely candidate for each position is chosen. Even with better and more expressive learned features, the resulting disparity map can often still be subject to strong outliers and noise.
This is why a post-processing or regularization step of the cost volume is important. Traditional methods use regularization techniques such as semi-global matching (SGM)~\cite{reg:sgm} or more-global matching (MGM)~\cite{reg:mgm}. While these regularization techniques are still competitive with regard to accuracy, they are relatively slow and memory-inefficient because most implementations are not optimized for GPU usage. Therefore, we instead use PyTorch implementations of the median filter~\cite{kornia} and the guided filter~\cite{guided_filter} on each slice of the cost volume before taking the maximum for the final prediction. Afterwards we rely on a left-right consistency check to identify inconsistent points and use a watershed foreground-background segmentation in order to decide how to update these values. The input, all intermediate results and the final disparity estimation can be seen in Fig.~\ref{fig:all_results}.\\
In summary our contributions are as follows:
\begin{itemize}
\item We propose a novel fully-convolutional densely connected siamese network structure for feature extraction. We use dense-layer connections and do not use any fully-connected layers or 3D convolutions. Therefore we are able to produce a lightweight network structure.
\item We train and evaluate our network on three challenging datasets, namely Middlebury, KITTI and ETH3D. We discuss the results both qualitatively and quantitatively. We show that our method can compete with state-of-the-art methods.
\item We implement our own post-processing based on filtering, finding inconsistencies via a left-right check and updating the found inconsistent values by using a watershed algorithm on the disparity map with removed inconsistencies. This allows us to be independent of out-of-the-box regularization techniques which might not be optimized for GPU usage.
\end{itemize}
Our method can be seen as a hybrid method: it is faster and more accurate than traditional non-learning methods such as SGM~\cite{reg:sgm}, and it needs fewer GPU resources than fully end-to-end methods such as PSMNet~\cite{psmnet} while still producing comparable accuracies and being well suited for typical applications.

\section{Related Work}
\begin{figure*}[!t]
\centering
\includegraphics[width=17cm]{Header}
\caption{All intermediate results of our method. From left to right: Input RGB stereo pair, winner-takes-all (WTA) output of the network, WTA output with removed inconsistencies, final prediction by filling in the previously removed values.}
\label{fig:all_results}
\end{figure*}
Our work is based on previous work on feature extraction using CNNs, similarity measurements and disparity refinement.

\textbf{Feature extraction} is an important step for any stereo method. While there are many still widely used handcrafted feature extractors such as Census~\cite{feat:census} or dense gradient features~\cite{feat:dense}, many new approaches use CNNs in order to learn more expressive and robust features. A popular model for this task is a siamese CNN structure with shared weights~\cite{disp:mc_cnn}\cite{disp:cnn_crf}\cite{disp:ga_net}\cite{disp:psm_net}\cite{disp:gc_net}\cite{disp:efficient_stereo}. This network structure was popularized by the work of Zbontar and LeCun~\cite{disp:mc_cnn} for the task of disparity estimation. In their work they extract small image patches from the left image and corresponding correct and wrong patches from the right image.
They then formulate the training of the feature extractor as a binary classification task in which they maximize the difference in similarity between the correct and the incorrect pair of patches.

\textbf{Similarity measurements} are widely used in machine learning and are often a vital part of the loss function and training. In stereo vision, similarity measurements are additionally used as the matching cost function in order to find corresponding points between the image pair. Many of the commonly used cost functions are window-based, as single pixel values are often not expressive enough to confidently find the correct match. Such window-based functions include the sum of absolute differences/sum of squared differences (SAD/SSD)~\cite{feat:sadssd}, normalized cross correlation (NCC)~\cite{feat:ncc}, Census or Rank~\cite{feat:census}. H. Hirschmueller and D. Scharstein evaluated all previously mentioned matching costs and more in their paper~\cite{feat:eval_cost}. Window-based matching costs, however, can be time- and resource-inefficient if naively implemented, due to the sliding window problem (though some improvements have been suggested~\cite{fast-rcnn}\cite{locnet}). We argue that due to the nature of deep CNNs, each image point already has implicit knowledge of its immediate neighbourhood encoded in its multi-dimensional feature vector. We therefore use the pixel-wise cosine similarity as our cost function. Many stereo networks do not use handcrafted cost functions but rather learn the cost function together with the feature extractor as part of the network~\cite{disp:mc_cnn}\cite{disp:cnn_crf}; however, most use fully-connected layers (or 1x1 convolution layers) in order to accomplish this and therefore considerably increase their model complexity. While we do not deny that there are advantages of a fully end-to-end trainable model, we decided against learning a cost function in order to keep our network smaller.

\textbf{Disparity refinement} is done in order to create the final disparity prediction. In this step the often still noisy and outlier-prone output is taken and optimized. One of the most popular methods for disparity refinement is semi-global matching (SGM)~\cite{reg:sgm}. In the original paper, Hirschmueller uses Mutual Information~\cite{mi} for the matching cost; this can, however, be substituted by any matching cost. SGM aggregates the matching costs from all 16 cardinal directions for each pixel, approximating a 2D smoothness constraint by combining many 1D line optimization problems. G. Facciolo et al.~\cite{reg:mgm} improve upon this method by using different elements and using more than one cardinal direction for the belief update of one disparity value. They also discuss the artefacts that can be produced by the update scheme of SGM and its variants. F. Tosi et al.~\cite{NLA} use a confidence map in order to detect reliable and unreliable points in a disparity map. After removing the unreliable points from the map, they then update these unreliable points by aggregating the first reliable points along different paths that they call ``anchors''. They weigh each anchor according to a Gaussian similarity function and finally take a weighted median to update the unreliable point. S. Gidaris and N. Komodakis~\cite{reg:drr} use three steps in order to improve the final disparity prediction. First they \textbf{detect} erroneous disparities by taking the initial estimate and performing a consistency check.
Then they \textbf{replace} these inconsistently labeled pixels with a new label which is produced by a convex combination of the initial label field. In the end they \textbf{refine} the disparity map by doing a residual correction in order to get a ``softer'' output with finer structures. In addition to cost aggregation/belief propagation, disparity refinement often includes subpixel refinement~\cite{reg:subpx}, a consistency check~\cite{reg:subpx_and_cons}\cite{reg:cons_check1}\cite{reg:cons_check2} and hole-filling/gap interpolation~\cite{reg:int_gaps}.

Our work differs from prior work on stereo vision in (1) the network structure for feature extraction and (2) the post-processing step. Prior work often relies on deep structures with many trainable weights and out-of-the-box disparity refinements like semi-global matching or conditional random fields. We address these issues by using a dense network structure with a three-step disparity refinement procedure.

\section{Network}
\begin{table}
\centering
\caption{Layer ablation study}
\label{tab:layer_ablation}
\begin{tabular}{|c|c|c|}
\hline
layers & parameters & 2-PE\\
\hline
2-layer & 37k & 49.08\\
\hline
3-layer & 111k & 41.23\\
\hline
4-layer & 222k & 37.32\\
\hline
5-layer & 369k & 35.90 \\
\hline
6-layer & 554k & 35.15\\
\hline
\end{tabular}
\end{table}
In this section, we describe the network architecture of our model. We use fully-convolutional siamese branches with shared weights for our network. This network consists of five convolutional layers with a kernel size of $k = 3x3$ and $m = 64$ feature maps per layer. We took inspiration from DenseNet by G. Huang et al.~\cite{densenet} by connecting the output of each layer to each subsequent layer. While this network was originally built with the task of object detection in mind, we argue that this structure is a good fit for disparity estimation as well. The benefits described in their work, such as strengthened feature propagation and better feature reuse while alleviating the vanishing-gradient problem, lead to a more lightweight feature extractor. However, we adapt the original implementation to fit our needs. The changes to the original structure are as follows:
\begin{itemize}
\item Following Zbontar and LeCun~\cite{disp:mc_cnn} we do not use down-sampling in our network. Therefore we do not use ``Dense Blocks'' and transition layers as described by G. Huang et al.~\cite{densenet}. The transition layer is described as having a $1x1$ convolution as well as down-sampling and batch-normalization. All layers in between these transition layers are called a ``Dense Block''.
\item In the original work of G. Huang et al.~\cite{densenet} the structure is described as being deep and narrow, e.g. having only $12$ filters per layer but many layers. We decided on a shallower and wider network structure with $64$ filters per layer. The number of input connections increases linearly, with $2*m$ input weights in the third layer and up to $4*m$ in the output layer.
\item The original implementation uses ReLU~\cite{act:relu} as the activation function. Related work~\cite{disp:tanh1}\cite{disp:tanh2}\cite{disp:cnn_crf} has shown that using a TanH activation function gives better results than using ReLUs~\cite{act:relu} for feature matching. Therefore we also use TanH as our activation function.
\end{itemize}

\begin{table}[t]
\renewcommand{\arraystretch}{1.3}
\caption{Number of trainable parameters of popular networks}
\label{tab:param}
\centering
\begin{tabular}{|c|c|}
\hline
Method & Param\\
\hline
FC-DCNN (ours) & 0.37M \\
\hline
MC-CNN-ACRT~\cite{disp:mc_cnn} & 0.5M\\
\hline
GC-Net~\cite{disp:gc_net} & 2.9M \\
\hline
PSMNet~\cite{disp:psm_net} & 3.5M \\
\hline
\end{tabular}
\end{table}

The total number of parameters of the feature extractor network is around 370k for 5 layers. To motivate this architecture, we conduct an ablation study by increasing the number of layers from 2 to 6. We train each network overnight and report the number of trainable parameters as well as the end-point error with a threshold of two (2-point error or 2-PE) on the Middlebury dataset. The results of this study can be seen in Tab.~\ref{tab:layer_ablation}. This experiment shows that the network accuracy improves noticeably when increasing the number of layers up to 5 layers. Beyond that point, adding another layer affects the overall accuracy less, while the number of trainable parameters increases drastically. We therefore decided on a 5-layer network structure as illustrated in Fig.~\ref{fig:network_arch}. Tab.~\ref{tab:param} compares the number of parameters of our network (FC-DCNN) to some other popular methods. This illustrates that our network is very lightweight and therefore easily extendable, while still producing comparable results on challenging outdoor and indoor datasets.

\subsection{Post-processing}
In order to produce the final result, some post-processing is necessary. The following steps are done in order to improve the final disparity estimation: first we filter each slice of the cost volume, then we find and remove inconsistent points. Last, we replace these points with new values; this update depends on whether the value is part of the foreground or the background. We argue that post-processing is an important step of the method. Although it is responsible for most of the overall runtime, it further decreases the error by almost half. We motivate this by reporting the 2-point error and runtime on the Middlebury training dataset for each post-processing step. Runtime and accuracy are always reported cumulatively, i.e., for all previous steps plus the step currently described. Without any post-processing the network achieves a 2-point error of $33.3\%$ with an average execution time of $3$ seconds.

\subsubsection{Filtering}
The median filter is known to work well for salt-and-pepper noise~\cite{S_and_P}. This kind of noise is common in flat or textureless regions of disparity maps, even with learned, more expressive features. To get rid of some of this noise we apply a median filter with a $5x5$ kernel to each slice of the cost volume consecutively. To this end, the differentiable and time-efficient implementation of the median filter from the Kornia library is used~\cite{kornia}. Afterwards each slice of the cost volume is filtered by a guided filter with a radius of $r= 8$ and a regularization parameter $\eta = 10$ to produce a final smooth output. We use a fast deep-FCN implementation with pre-trained weights implemented in PyTorch for this task~\cite{guided_filter}. This decreases the error on Middlebury from $33.3\%$ to $22.6\%$ with an average execution time of $12.1$ seconds.
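The following listing is a minimal sketch of the median filtering step, assuming the cost volume is stored as a tensor of shape $(D, H, W)$ with one slice per disparity candidate; the guided filtering stage is omitted, since it depends on the pre-trained network of~\cite{guided_filter}.

\begin{lstlisting}[language=Python]
import torch
import kornia

def median_filter_cost_volume(cost_volume: torch.Tensor) -> torch.Tensor:
    """Apply a 5x5 median filter to every slice of a (D, H, W) cost volume."""
    # Kornia expects (B, C, H, W); treat each disparity slice
    # as a one-channel image in the batch dimension.
    slices = cost_volume.unsqueeze(1)                      # (D, 1, H, W)
    filtered = kornia.filters.median_blur(slices, (5, 5))  # (D, 1, H, W)
    return filtered.squeeze(1)                             # (D, H, W)
\end{lstlisting}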
\subsubsection{LR-consistency check}
The left-right consistency check aims to get rid of inconsistencies between the disparity map calculated for the left image and the disparity map calculated for the right image. These inconsistencies are expected to occur because of self-occlusions, for instance at object boundaries; however, they also occur if the match is predicted wrongly. The check is done by also treating the right image as reference and searching corresponding image points in the left image along the opposite search direction. Let $D^{L}$ be the disparity map obtained by treating the left image as reference and $D^{R}$ the disparity map obtained by treating the right image as reference. Let furthermore $d$ be the value of $D^{L}$ at position $(x,y)$, i.e. $d = D^{L}(x,y)$. Then a value is marked as inconsistent if:
\begin{equation}
|D^{L}(x,y) - D^{R}(x-d,y)| > 1.1.
\end{equation}
This step doubles the execution time, as all of the steps have to be done for the left and for the right image individually (plus the runtime for the consistency check itself). On Middlebury this increases the runtime from $12.1$ seconds to $21.6$ seconds. This step does not improve the accuracy; however, it produces a disparity map with removed inconsistencies. A minimal sketch of this check is shown below.
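The sketch assumes dense $(H, W)$ disparity tensors; out-of-view indices are clamped to the image border, which is an approximation there.

\begin{lstlisting}[language=Python]
import torch

def lr_consistency_mask(disp_left: torch.Tensor,
                        disp_right: torch.Tensor) -> torch.Tensor:
    """Return a boolean (H, W) mask that is True where the check fails."""
    h, w = disp_left.shape
    xs = torch.arange(w).expand(h, w)
    # Position in the right disparity map corresponding to (x, y)
    # in the left disparity map.
    x_right = (xs - disp_left.long()).clamp(0, w - 1)
    d_right = torch.gather(disp_right, 1, x_right)
    return (disp_left - d_right).abs() > 1.1
\end{lstlisting}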
\subsubsection{Update inconsistent points}
Once all inconsistent points in a disparity map have been found, their values should be updated. We argue that inconsistencies occur because of either: 1) self-occlusion of an object, 2) the structure being outside of the field-of-view of the second image, in which case the correct prediction cannot be found from the data alone, or 3) the prediction simply being wrong. We further argue that image points that are part of the background are most likely flat and, because of the epipolar constraint, will therefore most likely have the same depth as points on the same horizontal line. If we find a pixel marked as invalid that is part of the background, we search for the first valid measurement on the same horizontal line that is also part of the background and copy it. If the end of the horizontal line is reached, the direction is reversed and a valid background measurement is searched to the left. However, the same cannot be said about invalid pixels that are part of the foreground, as the foreground contains complex structured objects. If an invalid point is part of the foreground, an averaging approach is taken. Here we search in all eight cardinal directions, starting from the invalid point, until a valid point that is also part of the foreground is found along each scanline. Afterwards all eight values are summed up and averaged to obtain the new disparity value of this point.

In order to obtain the foreground-background segmentation, a simple watershed algorithm is used on the disparity map with removed inconsistencies. This is because we want image points that are outside the field-of-view of the right image to be treated as background pixels, as averaging would not make sense in this case. Furthermore, object boundaries in the disparity image are often not exactly where they are in the RGB image. An example can be seen in Fig.~\ref{fig:segmentation}. This small area was chosen to illustrate that the overlap of the object boundary between the RGB image and the corresponding estimated disparity map is not perfect and therefore the mask obtained from the RGB image (second to last image in Fig.~\ref{fig:segmentation}) would wrongly classify disparities as part of the foreground. Therefore we use the disparity map with deleted inconsistent points as input (second image in Fig.~\ref{fig:segmentation}) to produce the final mask (last image in Fig.~\ref{fig:segmentation}). This means, however, that larger holes within foreground structures will be classified as background. To close such holes in the mask we use a dilation scheme with a $5x5$ filter kernel for two iterations. Afterwards we thin the mask again with an erosion scheme for two iterations with a $5x5$ kernel.

\begin{figure}[!t]
\centering
\includegraphics[width=2cm]{images/mb_mask_rgb.png}
\includegraphics[width=2cm]{images/mb_mask_disp_s.png}
\includegraphics[width=2cm]{images/mb_mask_watershed.png}
\includegraphics[width=2cm]{images/mb_mask_bilat.png}
\caption{From left to right: RGB image of an image detail, disparity map with inconsistencies removed (black), watershed mask on RGB image, watershed mask on disparity map (white = foreground, black = background).}
\label{fig:segmentation}
\end{figure}

On Middlebury this step decreases the 2-point error from $22.6\%$ to $17.9\%$ with an average execution time of $27.6$ seconds. Depending on the dataset, this method is 4 to 5 times faster than MC-CNN-acrt~\cite{disp:mc_cnn}, improving the runtime from $106$ to $27$ seconds on the Middlebury dataset. These time measurements have been taken directly from the corresponding official benchmarks.

\section{Experiments and results}
In this section we discuss all the conducted experiments and their results. It is structured as follows: first we discuss how the training task is defined and subsequently how the training data is prepared for this task. Afterwards we discuss the implementation details. Finally we show the qualitative and quantitative results of our experiments and compare them to other methods.

\subsection{Training the feature extractor}
Disparity estimation can be viewed as a multi-class classification problem. For each position in the reference image there is a (previously fixed) number of candidates corresponding to possible positions in the second image. In the end the most likely candidate is chosen (winner-takes-all) for the final prediction. However, instead of directly predicting the final winner, for instance by calculating the cross-entropy loss over all possible classes, a simpler approach is taken in order to train the feature extracting siamese network. Following Zbontar and LeCun's work~\cite{disp:mc_cnn}, we instead train the network on a binary classification task. For each sample a small grayscale patch is extracted at position $p = (x,y)$ from the reference image. From the second image, two patches are extracted: one positive example $q_{pos}$ at the correct position and one negative example $q_{neg}$ at a wrong position.

\subsection{Preparing the training set}
As suggested by Zbontar and LeCun~\cite{disp:mc_cnn} we choose a patch size of $11x11$ for the randomly cropped patches of our training set. The center position of the left patch $p$ is randomly chosen over the whole image domain, as long as the corresponding ground truth $gt$ position is valid:
\begin{equation}
p = (x, y)
\end{equation}
\begin{equation}
gt(x,y) = valid.
\end{equation}
The positive patch $q_{pos}$ is created by using the correct disparity $d$ of position $(x,y)$:
\begin{equation}
q_{pos} = (x-d, y).
\end{equation}
In the original paper by Zbontar and LeCun~\cite{disp:mc_cnn} a small offset is added to the position of the positive patch because their post-processing worked better with it.
As we use different post-processing steps, we do not use any random offset for the positive patch. For the negative sample a random offset $o_{neg}$ within the range of either $(-6,-2)$ or $(2,6)$ is chosen. In our implementation the probability of being shifted to the left or to the right is 50\% each:
\begin{equation}
q_{neg} = (x-d + o_{neg}, y).
\end{equation}
The reason why the random offset is limited and not chosen from all possible positions is that points far away from the true position are expected to have a low similarity score anyway. Points closer to the positive position can be more ambiguous, and the features should therefore be trained to be more robust for them. In the original paper, Zbontar and LeCun~\cite{disp:mc_cnn} describe that about 20\% of their training set for the Middlebury benchmark consists of samples where the lighting condition or shutter exposure changed between the image pairs. In our experiments we found that having 10\% of training samples with these variations works better, as it seems that otherwise more noise is introduced. We do not use any data set augmentation; however, by using the ``perfect'' and ``imperfect'' rectification with the variations in lighting and exposure for the Middlebury training set, we still end up with around 40 million training samples, although many of them will be similar as we allow a position to be drawn multiple times.

\subsection{Implementation details}
The whole project is implemented using Python 3 and PyTorch 1.2.1~\cite{pytorch}. The loss function is implemented as a hinge loss:
\begin{equation}
loss = max(0, 0.2 + s_{-} - s_{+}),
\end{equation}
where $s_{-}$ is the similarity score between the left patch $p$ and the patch from a wrong position of the right image $q_{neg}$, and $s_{+}$ is the similarity score between $p$ and the patch from the correct position $q_{pos}$. This loss forces the network to train features such that the similarity between correct matches is higher by at least $0.2$. In our implementation we express this loss with a ReLU~\cite{act:relu} for ease of implementation; since $ReLU(x) = max(0, x)$, the two formulations are equivalent:
\begin{equation}
loss = ReLU(0.2 + s_{-} - s_{+}).
\end{equation}
The similarity between patches is calculated using the PyTorch implementation of the cosine similarity. It is defined as:
\begin{equation}
sim(A,B) = \frac{A \cdot B}{\norm{A}\norm{B}},
\end{equation}
where $A$ and $B$ are the two vectors to be compared. We use the Adam optimizer~\cite{adam} with a relatively small learning rate of $\eta = 6 \times 10^{-6}$ for training. We use a batch size of $800$ samples per iteration and trained for roughly 2 days on each dataset using a GeForce RTX 2080.

\subsection{Results}
We compare our results to state-of-the-art methods on three challenging benchmarks. For each benchmark the network was trained for around 2 days. We decided to train KITTI2012 and KITTI2015~\cite{kitti} together; however, better results may be achieved by training and testing on each dataset individually.
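As an illustration, the following listing sketches the loss computation described above; the feature tensors, their shapes and the function names are assumptions for the sketch, not the exact training code.

\begin{lstlisting}[language=Python]
import torch
import torch.nn.functional as F

def hinge_loss(feat_ref, feat_pos, feat_neg, margin=0.2):
    """ReLU formulation of the hinge loss on cosine similarities.
    feat_*: (B, C) feature vectors of the reference, positive
    and negative patches."""
    s_pos = F.cosine_similarity(feat_ref, feat_pos, dim=1)
    s_neg = F.cosine_similarity(feat_ref, feat_neg, dim=1)
    return F.relu(margin + s_neg - s_pos).mean()

# training would then use, e.g.:
# optimizer = torch.optim.Adam(net.parameters(), lr=6e-6)
\end{lstlisting}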
\subsubsection{Middlebury}
\begin{table}
\renewcommand{\arraystretch}{1.3}
\caption{Accuracy comparison on the Middlebury training dataset}
\label{tab:results_mb}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Method & 4-PE & 2-PE & 1-PE & 0.5-PE \\
\hline
FC-DCNN (ours) & 12.3 & \textbf{17.9} & \textbf{34.7} & 65.1 \\
\hline
iResNet~\cite{iresnet} & \textbf{11.1} & 20.3 & 35.1 & 58.7 \\
\hline
SGM (Q)~\cite{reg:sgm} & 12.9 & 21.0 & 37.3 & 64.6 \\
\hline
PSMNet~\cite{psmnet} & 13.1 & 23.0 & 40.2 & 64.9 \\
\hline
SGBM1 (H)~\cite{opencv} & 17.6 & 23.3 & 36.5 & \textbf{57.8} \\
\hline
\end{tabular}
\end{table}
The Middlebury stereo dataset~\cite{mb} consists of challenging indoor scenes with large disparity ranges under different, controlled exposure and lighting conditions. The dataset is provided in full (F), half (H) and quarter (Q) resolution. We chose to submit in half resolution due to hardware constraints. Tab.~\ref{tab:results_mb} shows that our method is already better than popular stereo methods such as SGM~\cite{reg:sgm} on quarter (Q) resolution, as well as SGM~\cite{reg:sgm} on full resolution (F) (not shown in Tab.~\ref{tab:results_mb}) or OpenCV's reimplementation of SGM, called SGBM~\cite{opencv}. It further shows that we are on par with well-known deep-learning methods such as iResNet~\cite{iresnet} or PSMNet~\cite{psmnet}. On average our method took about $13$ seconds per megapixel on the Middlebury data.

\begin{figure}[!t]
\centering
\includegraphics[width=2.8cm]{images/error-fc-dcnn/adiron.png}
\includegraphics[width=2.8cm]{images/error-fc-dcnn/adiron.jpg}
\includegraphics[width=2.8cm]{images/error-fc-dcnn/adiron_error.jpg}
\includegraphics[width=2.8cm]{images/error-fc-dcnn/jade.png}
\includegraphics[width=2.8cm]{images/error-fc-dcnn/jade.jpg}
\includegraphics[width=2.8cm]{images/error-fc-dcnn/jade_error.jpg}
\includegraphics[width=2.8cm]{images/error-fc-dcnn/australiap.png}
\includegraphics[width=2.8cm]{images/error-fc-dcnn/AustraliaP.png}
\includegraphics[width=2.8cm]{images/error-fc-dcnn/AustraliaP_white.png}
\caption{Qualitative results from the Middlebury test and training dataset. From left to right: RGB, final disparity map, 2-point error on full resolution (not available for test data).}
\label{fig:error_mb}
\end{figure}

Fig.~\ref{fig:error_mb} shows qualitative results of our method for three different scenes from the Middlebury dataset. It illustrates that our method performs well even in cases of clutter, like the leaves of the ``Jadeplant'' sample (second example), or in homogeneous areas. An example of a homogeneous area is the background of the samples, shown as blue colors in the disparity maps. The last column shows the 2-point error map obtained from the official submission page. Here darker colors correspond to a higher error.

\subsubsection{KITTI}
The KITTI stereo dataset~\cite{kitti} consists of outdoor street images used for autonomous driving. The ground truth was acquired by a laser scanner, which leads to rather sparse ground-truth disparities. We use the same number of disparities for all pairs, namely $192$ for KITTI2012 and $228$ for KITTI2015.
\begin{table}[!h]
\renewcommand{\arraystretch}{1.3}
\caption{Accuracy comparison on the KITTI 2012 testing dataset}
\label{tab:results_kitti_test}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Method & 5-PE & 4-PE & 3-PE & 2-PE\\
\hline
FC-DCNN (ours) & \textbf{3.71} & \textbf{4.40} & \textbf{5.61} & \textbf{8.81}\\
\hline
OASM-Net~\cite{disp:oasm} & 4.32 & 5.11 & 6.39 & 9.01 \\
\hline
SGBM~\cite{opencv} & 5.03 & 6.03 & 7.64 & 10.60 \\
\hline
ADSM~\cite{disp:adsm} & 6.20 & 7.09 & 8.71 & 13.13 \\
\hline
GF (Census)~\cite{disp:gf_census} & 8.49 & 9.57 & 11.65 & 16.75\\
\hline
\end{tabular}
\end{table}

\begin{table}[!h]
\renewcommand{\arraystretch}{1.3}
\caption{Accuracy comparison on the KITTI 2015 testing stereo dataset}
\label{tab:results_kitti15_test}
\centering
\begin{tabular}{|c|c|}
\hline
Method & 3-PE\\
\hline
FC-DCNN (ours) & \textbf{7.71}\\
\hline
MeshStereo~\cite{disp:mesh_stereo} & 8.38 \\
\hline
OASM-Net~\cite{disp:oasm} & 8.98 \\
\hline
OCV-SGBM~\cite{sgm_mi} & 10.86 \\
\hline
SGM\&FlowFie+~\cite{sgm_fie} & 13.37 \\
\hline
\end{tabular}
\end{table}

Tab.~\ref{tab:results_kitti_test} compares our results on the KITTI 2012 test dataset with other methods. Tab.~\ref{tab:results_kitti15_test} compares our results on the KITTI 2015 test dataset~\cite{kitti} with other methods. Without any bells and whistles our method is better than well-known methods such as OpenCV's~\cite{opencv} implementation of SGM, called SGBM, or an implementation of Census features with guided filtering~\cite{disp:gf_census}, while being on par with other state-of-the-art deep learning methods such as OASM-Net~\cite{disp:oasm}. We believe that further evaluation of the post-processing parameters would improve the results. On average the whole method takes about 7 seconds per image pair on the KITTI dataset~\cite{kitti}.

\begin{figure}[!t]
\centering
\includegraphics[width=3.9cm]{images/kitti/000003_10.jpg}
\includegraphics[width=3.9cm]{images/kitti/000007_10.jpg}
\includegraphics[width=3.9cm]{images/kitti/kitti03.jpg}
\includegraphics[width=3.9cm]{images/kitti/07_10_disp.jpg}
\includegraphics[width=3.9cm]{images/kitti/kitti03_s.jpg}
\includegraphics[width=3.9cm]{images/kitti/07_10_s.jpg}
\vspace{0.1cm}
\includegraphics[width=3.9cm]{images/kitti/kitti03_f.jpg}
\includegraphics[width=3.9cm]{images/kitti/07_10_f.jpg}
\includegraphics[width=3.9cm]{images/kitti/000059_10.jpg}
\includegraphics[width=3.9cm]{images/kitti/000042_10.jpg}
\includegraphics[width=3.9cm]{images/kitti/kitti59.jpg}
\includegraphics[width=3.9cm]{images/kitti/42_10_disp.jpg}
\includegraphics[width=3.9cm]{images/kitti/kitti59_s.jpg}
\includegraphics[width=3.9cm]{images/kitti/42_10_s.jpg}
\includegraphics[width=3.9cm]{images/kitti/kitti59_f.jpg}
\includegraphics[width=3.9cm]{images/kitti/42_10_f.jpg}
\caption{Qualitative results from the KITTI train dataset (left column) and test dataset (right column). From top to bottom in both columns: left RGB image, initial disparity estimation, disparity with inconsistent points removed, final disparity map.}
\label{fig:results_kitti}
\end{figure}

Fig.~\ref{fig:results_kitti} shows the final disparity prediction plus all intermediate outputs of four chosen examples from the KITTI 2015 set~\cite{kitti}. It shows that, while further hyperparameter studies might improve the accuracy, our method yields good results.

\subsubsection{ETH3D}
The ETH3D~\cite{eth3d} dataset for two-view stereo estimation consists of 27 training and 20 test pairs.
The scenes of these pairs vary from tunnels to playgrounds and forest areas. In contrast to the other benchmarks, however, the image pairs have a small baseline and fine structures, which leads to a low number of disparities (the maximum disparity in the training set is 64) with more sub-pixel detail in between the discrete disparity steps. We predict discrete-valued disparity steps, and also prepare our training patches as such. This would suggest that our method is not well suited for this benchmark; however, we show that we get decent results even with the previously stated limitations.

\begin{table}[!h]
\renewcommand{\arraystretch}{1.3}
\caption{Accuracy comparison on the ETH3D test dataset}
\label{tab:results_eth3d_test}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Method & 4-PE & 2-PE & 1-PE & 0.5-PE \\
\hline
FC-DCNN (ours) & 3.38 & \textbf{5.77} & \textbf{10.41} & 24.12 \\
\hline
MeshStereo~\cite{disp:mesh_stereo} & \textbf{2.61} & 5.78 & 11.52 & \textbf{22.27} \\
\hline
LSM & 4.58 & 7.38 & 14.01 & 29.98 \\
\hline
ELAS\_RVC~\cite{disp:elas} & 2.84 & 7.69 & 16.54 & 33.79 \\
\hline
\end{tabular}
\end{table}

Tab.~\ref{tab:results_eth3d_test} compares our method with other methods from the online leaderboard. The LSM method is an anonymous submission, so we cannot credit its authors. One can see that, while there is still room for improvement, especially at the lower end-point error thresholds, our method already performs decently without any modification, with only around 6\% 2-point error. On average our method took about $1.6$ seconds per image pair to produce the final output.

\begin{figure}[!t]
\centering
\includegraphics[width=3.5cm]{images/eth3d/lakeside_1l.jpg}
\includegraphics[width=3.5cm]{images/eth3d/lakeside_1l_disp.jpg}
\includegraphics[width=3.5cm]{images/eth3d/playground_1s.jpg}
\includegraphics[width=3.5cm]{images/eth3d/playground_1s_disp.jpg}
\includegraphics[width=3.5cm]{images/eth3d/playground_2s.jpg}
\includegraphics[width=3.5cm]{images/eth3d/playground_2s_disp.jpg}
\includegraphics[width=3.5cm]{images/eth3d/playground_3l.jpg}
\includegraphics[width=3.5cm]{images/eth3d/playground_3l_disp.jpg}
\caption{Qualitative results from the ETH3D train and test dataset. The left column shows the first image of the stereo pair, the right column the final disparity map.}
\label{fig:results_eth3d_train}
\end{figure}

Fig.~\ref{fig:results_eth3d_train} shows some qualitative results on the ETH3D~\cite{eth3d} test and training set. It shows that while some details within the subpixel range might be missing, the overall structure is predicted very well.

\section{Generalization experiments}
In order to show that the network is not overfitted to a single dataset, we perform a simple generalization test where the weights trained on one dataset are used for inference on the other two datasets. The 2-point error is reported. Missing or invalid ground-truth measurements are not taken into account in this evaluation. This changes the metric for the KITTI2012~\cite{kitti} benchmark, where the official benchmark interpolates the missing ground-truth measurements.
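For clarity, the listing below sketches how such an $n$-point error can be computed while ignoring invalid ground-truth pixels; the variable names are placeholders.

\begin{lstlisting}[language=Python]
import numpy as np

def n_point_error(pred, gt, valid, threshold=2.0):
    """Percentage of valid pixels whose absolute disparity error
    exceeds the given threshold (e.g., 2.0 for the 2-point error)."""
    err = np.abs(pred - gt)
    return 100.0 * np.mean(err[valid] > threshold)
\end{lstlisting}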
\begin{table}
\centering
\caption{Generalization test}
\label{tab:gen_test}
\begin{tabular}{|c|c|c|c|}
\hline
 & Middlebury~\cite{mb} & KITTI2012~\cite{kitti} & ETH3D~\cite{eth3d}\\
 & (trained) & (trained) & (trained) \\
\hline
Middlebury~\cite{mb} & \textbf{17.9} & 19.6 & 20.4 \\
\hline
KITTI2012~\cite{kitti} & 20.30 & \textbf{16.66} & 19.70\\
\hline
ETH3D~\cite{eth3d} & 7.65 & 6.78 & \textbf{5.77}\\
\hline
\end{tabular}
\end{table}

Tab.~\ref{tab:gen_test} shows that while the best performance is achieved by training on the corresponding dataset, the network performance stays stable even when trained on completely different scenes. This shows that our method generalizes well and is usable for many different applications. However, the conducted experiments suggest that the KITTI2012~\cite{kitti} benchmark benefits the most from training, as its accuracies varied the most of all tested datasets.

\section{Conclusion and future work}
We have shown that a dense network structure can be advantageous for the stereo vision task. By using this structure and by not using any fully-connected layers or 3D convolutions, we were able to produce a very lightweight network that still achieves comparable results on difficult outdoor and indoor datasets. Instead of relying on out-of-the-box post-processing solutions that are often not optimized for GPU usage, such as semi-global matching or conditional random fields, we use our own three-step post-processing: first we filter the cost volume, then we detect inconsistencies, which we afterwards update by using a foreground-background segmentation.

In the future we want to conduct an exhaustive network architecture study. To this end, the experiments on the number of layers should be extended to also include the number of feature maps. This may lead to an even better network structure. The foreground-background segmentation may be improved upon by using more advanced techniques. For all of our experiments we keep the parameters for the segmentation of the disparity map fixed, which will not lead to optimal solutions for each individual scene. This might be improved upon in the future by an adaptive method or by using machine learning. The runtime of our updating scheme for inconsistent points can be improved, especially when large portions of the image are inconsistent. This frequently happened on the KITTI benchmark, as often large portions of the image are sky, which cannot be correctly matched and is therefore marked as inconsistent. In the future it might prove beneficial to learn the weights of the guided filter instead of using pre-trained weights.
\section{Introduction}\label{intro}
The factorization problems of multivariate polynomial matrices have attracted much attention over the past decades because of their fundamental importance in multidimensional systems, circuits, signal processing, controls, and other related areas \citep{Bose1982,Bose2003}. Up to now, the factorization problems have been solved for univariate and bivariate polynomial matrices \citep{Guiver1982Polynomial,Morf1977New}. However, there are still many challenging open problems for multivariate (more than two variables) polynomial matrix factorizations due to the lack of a mature polynomial matrix theory.

\cite{Youla1979Notes} studied the basic structure of multidimensional systems theory, and proposed three types of factorizations for multivariate polynomial matrices: zero prime factorization, minor prime factorization and factor prime factorization. The existence problem of zero prime factorizations for multivariate polynomial matrices with full rank was first raised in \citep{Lin1999Notes}, and has been solved in \citep{Pommaret2001Solving,Wang2004On}. In recent years, the factorization problems of multivariate polynomial matrices without full rank have attracted some attention. \cite{Lin2001A} studied a generalization of Serre's conjecture, and pointed out some relationships between the existence of a zero prime factorization for a multivariate polynomial matrix without full rank and that of an arbitrary full rank submatrix of it. \cite{Mingsheng2005On} completely solved the existence problem of minor prime factorizations for multivariate polynomial matrices with full rank, and proposed an effective algorithm. \cite{Guan2019} extended the main result in \citep{Mingsheng2005On} to the case of non-full rank. In order to study the existence problem of factor prime factorizations for multivariate polynomial matrices with full rank, \cite{Mingsheng2007On} proposed the concept of regularity and obtained a necessary and sufficient condition. \cite{Guan2018} gave an algorithm to determine whether a class of multivariate polynomial matrices without full rank has factor prime factorizations.

Although some achievements have been made on the existence of factor prime factorizations for some classes of multivariate polynomial matrices, factor prime factorizations are still open problems in general. Therefore, we focus on factor left prime factorization problems for multivariate polynomial matrices without full row rank in this paper.

The rest of the paper is organized as follows. In section \ref{sec_PP}, we introduce some basic concepts and present the two major problems on factor left prime factorizations. We present in section \ref{sec_MR} a necessary and sufficient condition for the existence of factor left prime factorizations of a class of multivariate polynomial matrices without full row rank. In section \ref{sec_AE}, we construct an algorithm and use two examples to illustrate its effectiveness. We end with some concluding remarks in section \ref{sec_conclusions}.

\section{Preliminaries and Problems}\label{sec_PP}
We denote by $k$ an algebraically closed field, and by ${\bf z}$ the $n$ variables $z_1,\ldots,z_n$, where $n\geq 3$. Let $k[{\bf z}]$ be the polynomial ring, and $k[{\bf z}]^{l\times m}$ be the set of $l\times m$ matrices with entries in $k[{\bf z}]$. Throughout this paper, we assume that $l\leq m$. In addition, we use ``w.r.t.'' to represent ``with respect to''.
For any given polynomial matrix $\mathbf{F}\in k[{\bf z}]^{l\times m}$, let ${\rm rank}({\mathbf{F}})$ and $\mathbf{F}^{\rm T}$ denote the rank and the transpose of $\mathbf{F}$, respectively; if $l = m$, we use ${\rm det}(\mathbf{F})$ to denote the determinant of $\mathbf{F}$; we denote by $\rho({\mathbf{F}})$ the submodule of $k[{\bf z}]^{1\times m}$ generated by the rows of ${\mathbf{F}}$; for each $i$ with $1\leq i \leq {\rm rank}({\mathbf{F}})$, let $d_i({\mathbf{F}})$ be the greatest common divisor of all the $i\times i$ minors of $\mathbf{F}$; let ${\rm Syz}({\mathbf{F}})$ be the syzygy module of $\mathbf{F}$, i.e., ${\rm Syz}({\mathbf{F}}) = \{ \vec{v}\in k[{\bf z}]^{m\times 1} : {\mathbf{F}}\vec{v} = \vec{0} \}$.

\subsection{Basic Notions}
The following three concepts, which were first proposed in \citep{Youla1979Notes}, play an important role in multidimensional systems.

\begin{definition}
Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ be of full row rank.
\begin{enumerate}
\item If all the $l\times l$ minors of $\mathbf{F}$ generate $k[{\bf z}]$, then $\mathbf{F}$ is said to be a zero left prime (ZLP) matrix.
\item If all the $l\times l$ minors of $\mathbf{F}$ are relatively prime, i.e., $d_l(\mathbf{F})$ is a nonzero constant, then $\mathbf{F}$ is said to be a minor left prime (MLP) matrix.
\item If for any polynomial matrix factorization $\mathbf{F} = \mathbf{F}_1\mathbf{F}_2$ in which $\mathbf{F}_1\in k[{\bf z}]^{l\times l}$, $\mathbf{F}_1$ is necessarily a unimodular matrix, i.e., ${\rm det}(\mathbf{F}_1)$ is a nonzero constant, then $\mathbf{F}$ is said to be a factor left prime (FLP) matrix.
\end{enumerate}
\end{definition}

Let $\mathbf{F}\in k[{\bf z}]^{m\times l}$ with $m\geq l$; then ZRP (MRP, FRP) matrices can be defined similarly. Note that ZLP $\Rightarrow$ MLP $\Rightarrow$ FLP. Youla and Gnavi proved that when $n=1$, the three concepts coincide; when $n=2$, ZLP is not equivalent to MLP, but MLP is the same as FLP; when $n\geq 3$, these concepts are pairwise different.

A factorization of a multivariate polynomial matrix is formulated as follows.

\begin{definition}\label{matrix_factorization}
Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ with rank $r$ and let $f$ be a divisor of $d_r({\mathbf{F}})$, where $1\leq r \leq l$. $\mathbf{F}$ is said to admit a factorization w.r.t. $f$ if $\mathbf{F}$ can be factorized as
\begin{equation}\label{gerneral-matirx-factorization}
\mathbf{F} = \mathbf{G}_1\mathbf{F}_1
\end{equation}
such that $\mathbf{F}_1\in k[{\bf z}]^{r\times m}$ and $\mathbf{G}_1\in k[{\bf z}]^{l\times r}$ with $d_r(\mathbf{G}_1) = f$. In particular, Equation (\ref{gerneral-matirx-factorization}) is said to be a ZLP (MLP, FLP) factorization of ${\mathbf{F}}$ w.r.t. $f$ if ${\mathbf{F}}_1$ is a ZLP (MLP, FLP) matrix.
\end{definition}

In order to state the problems and main conclusions of this paper conveniently, we introduce the following concepts and results.

\begin{definition}\label{quotient-define}
Let $\mathcal{K}$ be a submodule of $k[{\bf z}]^{1\times m}$, and $J$ be an ideal of $k[{\bf z}]$. We define $\mathcal{K} : J= \{ \vec{u}\in k[{\bf z}]^{1\times m} : J \vec{u} \subseteq \mathcal{K} \}$, where $J \vec{u}$ is the set $\{f\vec{u} : f\in J\}$.
\end{definition}

Obviously, $\mathcal{K} \subseteq \mathcal{K} : J$. Let $I \subset k[{\bf z}]$ be another ideal; it is easy to show that
\begin{equation}\label{quotient-module-2}
\mathcal{K} : (IJ) = (\mathcal{K} : I) : J.
\end{equation}
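To illustrate Definition \ref{quotient-define}, and the fact that the containment $\mathcal{K} \subseteq \mathcal{K} : J$ can be strict, consider the following small example (added here for illustration; it is not taken from the cited literature). Let $\mathcal{K} \subseteq k[{\bf z}]^{1\times 2}$ be the submodule generated by $(z_1, z_1z_2)$ and let $J = \langle z_1 \rangle$. Since
\[ z_1 \cdot (1, z_2) = (z_1, z_1z_2) \in \mathcal{K}, \]
we have $(1, z_2) \in \mathcal{K} : J$, while $(1, z_2) \notin \mathcal{K}$ because both components of every element of $\mathcal{K}$ are divisible by $z_1$.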
Equation (\ref{quotient-module-2}) is a simple generalization of Proposition 10 in subsection 4, ``Zariski closure and quotients of ideals'', of \citep{Cox2007Ideals}. By convention, we write $\mathcal{K} :\langle f \rangle$ as $\mathcal{K} : f$ for any $f\in k[{\bf z}]$.

\begin{definition}\label{torsion-define}
Let $\mathcal{K}$ be a $k[{\bf z}]$-module. The torsion submodule of $\mathcal{K}$ is defined as ${\rm Torsion}(\mathcal{K}) = \{\vec{u}\in \mathcal{K} : \exists f \in k[{\bf z}] \backslash \{0\} \text{ such that } f\vec{u} = \vec{0} \}$.
\end{definition}

We refer to \citep{Eisenbud2013} for more details about the above two concepts. Let $\mathcal{K}_1,\mathcal{K}_2$ be two $k[{\bf z}]$-modules; we define $\mathcal{K}_1/\mathcal{K}_2 = \{\vec{u}+ \mathcal{K}_2 : \vec{u} \in \mathcal{K}_1\}$. \cite{Liu2015Further} established a relationship between Definition \ref{quotient-define} and Definition \ref{torsion-define}.

\begin{lemma}\label{LW-Torsion}
Let ${\mathbf{F}}\in k[{\bf z}]^{l\times m}$ be of full row rank, $d = d_l({\mathbf{F}})$ and $\mathcal{K} = \rho({\mathbf{F}})$. Then $(\mathcal{K}:d)/\mathcal{K} = {\rm Torsion}(k[{\bf z}]^{1\times m}/\mathcal{K})$.
\end{lemma}

Moreover, Liu and Wang further extended Youla's MLP lemma, which had been used to give another proof of Serre's problem.

\begin{lemma}\label{Serre-LW}
Let ${\mathbf{F}}\in k[{\bf z}]^{l\times m}$ be of full row rank and $d = d_l({\mathbf{F}})$. Then for each $i=1,\ldots,n$, there exists $\mathbf{V}_i\in k[{\bf z}]^{m\times l}$ such that ${\mathbf{F}}\mathbf{V}_i = d\varphi_i \mathbf{I}_{l\times l}$, where $\varphi_i$ is nonzero and independent of $z_i$.
\end{lemma}

\cite{Guan2018} proved the following lemma, which is similar to the above result.

\begin{lemma}\label{minor-Guan}
Let ${\mathbf{G}}\in k[{\bf z}]^{l\times r}$ be of full column rank with $l \geq r$, and $g$ be an arbitrary $r\times r$ minor of ${\mathbf{G}}$. Then there exists ${\mathbf{G}}'\in k[{\bf z}]^{r\times l}$ such that ${\mathbf{G}}'{\mathbf{G}} = g\mathbf{I}_{r\times r}$.
\end{lemma}

In order to study the properties of multivariate polynomial matrices, \cite{Lin1988On} and \cite{Sule1994Feed} introduced the following important concept.

\begin{definition}
Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ with rank $r$, where $1\leq r \leq l$. For any given integer $i$ with $1\leq i \leq r$, let $a_1,\ldots,a_\beta$ denote all the $i\times i$ minors of $\mathbf{F}$, where $\beta = \binom l{i} \cdot \binom m{i}$. Extracting $d_i(\mathbf{F})$ from $a_1,\ldots,a_\beta$ yields
\[a_j = d_i(\mathbf{F})\cdot b_j, ~ j=1,\ldots,\beta.\]
Then, $b_1,\ldots,b_\beta$ are called all the $i\times i$ reduced minors of $\mathbf{F}$.
\end{definition}

\cite{Lin1988On} showed that reduced minors are important invariants of multivariate polynomial matrices.

\begin{lemma}\label{RM_relation}
Let ${\mathbf{F}}_1\in k[{\bf z}]^{r\times t}$ be of full row rank, $b_1, \ldots, b_{\gamma}$ be all the $r\times r$ reduced minors of ${\mathbf{F}}_1$, and ${\mathbf{F}}_2\in k[{\bf z}]^{t\times (t-r)}$ be of full column rank, $\bar{b}_1, \ldots, \bar{b}_{\gamma}$ be all the $(t-r)\times (t-r)$ reduced minors of ${\mathbf{F}}_2$, where $r<t$ and $\gamma = \binom {t}{r}$. If ${\mathbf{F}}_1{\mathbf{F}}_2 = \mathbf{0}_{r\times(t-r)}$, then $\bar{b}_i=\pm b_i$ for $i=1,\ldots,\gamma$, where the signs depend on the indices.
\end{lemma}

Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ with rank $r$, where $1\leq r < l$.
Let $\bar{{\mathbf{F}}}_1,\ldots,\bar{{\mathbf{F}}}_\eta \in k[{\bf z}]^{l\times r}$ be all the full column rank submatrices of ${\mathbf{F}}$, where $1\leq \eta \leq \binom{m}{r}$. According to Lemma \ref{RM_relation}, it follows that $\bar{{\mathbf{F}}}_1,\ldots,\bar{{\mathbf{F}}}_\eta$ have the same $r\times r$ reduced minors (up to sign). Based on this observation, we recall the following concept, which was first proposed in \citep{Lin2001A}. \begin{definition} Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ with rank $r$, and $\bar{{\mathbf{F}}}\in k[{\bf z}]^{l\times r}$ be an arbitrary full column rank submatrix of ${\mathbf{F}}$, where $1\leq r < l$. Let $c_1,\ldots, c_\xi$ be all the $r\times r$ reduced minors of $\bar{{\mathbf{F}}}$, where $\xi = \binom{l}{r}$. Then $c_1,\ldots, c_\xi$ are called all the $r\times r$ {\bf column} reduced minors of ${\mathbf{F}}$. \end{definition} The above concept will play an important role in this paper. Obviously, computing all the $\xi = \binom{l}{r}$ column reduced minors of ${\mathbf{F}}$ is in general much cheaper than computing all the $\binom{l}{r}\cdot\binom{m}{r}$ reduced minors of ${\mathbf{F}}$. \begin{lemma}\label{QS-theorem} Let $\mathbf{U}\in k[{\bf z}]^{l \times m}$ be a ZLP matrix, where $l<m$. Then there exists a ZRP matrix $\mathbf{V}\in k[{\bf z}]^{m \times l}$ such that $\mathbf{U}\mathbf{V} = \mathbf{I}_{l\times l}$. Moreover, ${\rm Syz}(\mathbf{U})$ is a free submodule of $k[{\bf z}]^{m\times 1}$ with rank $m-l$. \end{lemma} The above result is called the Quillen-Suslin theorem: the question of whether every finitely generated projective module over a polynomial ring is free was answered positively and independently by \cite{Quillen1976Projective} and \cite{Suslin1976Projective}. Using the Quillen-Suslin theorem, \cite{Pommaret2001Solving} and \cite{Wang2004On} solved the Lin-Bose conjecture. \begin{lemma}\label{Lin-Bose-conjecture} Let ${\mathbf{F}}\in k[{\bf z}]^{l \times m}$ be of full row rank, where $l<m$. If all the $l\times l$ reduced minors of ${\mathbf{F}}$ generate $k[{\bf z}]$, then ${\mathbf{F}}$ has a ZLP factorization. \end{lemma} Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ be of full row rank, and $f$ be a divisor of $d_l(\mathbf{F})$. In order to study a factorization of $\mathbf{F}$ w.r.t. $f$, \cite{Mingsheng2007On} introduced the concept of regularity: $f$ is said to be regular w.r.t. ${\mathbf{F}}$ if $d_l([f\mathbf{I}_{l\times l} ~ {\mathbf{F}}]) = f$ up to multiplication by a nonzero constant. Wang then obtained the following result. \begin{lemma}\label{W-flp} Let ${\mathbf{F}}\in k[{\bf z}]^{l\times m}$ be of full row rank, and $f$ be regular w.r.t. ${\mathbf{F}}$. Then ${\mathbf{F}}$ has a factorization w.r.t. $f$ if and only if $\rho({\mathbf{F}}):f$ is a free module of rank $l$. \end{lemma} \subsection{Problems} Based on Lemma \ref{W-flp}, Wang proposed a necessary and sufficient condition to verify whether ${\mathbf{F}}$ has an FLP factorization w.r.t. $f$. After that, \cite{Guan2018} considered the case of multivariate polynomial matrices without full row rank. When $f$ satisfies a special property, they obtained a necessary condition for $\mathbf{F}$ to have a factorization w.r.t. $f$, and designed an algorithm to compute all FLP factorizations of ${\mathbf{F}}$ if they exist. In this paper we will further consider the following two problems concerning FLP factorizations.
\begin{problem}\label{main-problem-1} Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ with rank $r$, and $f$ be a divisor of $d_r(\mathbf{F})$, where $1\leq r < l$. Determine whether ${\mathbf{F}}$ has an FLP factorization w.r.t. $f$. \end{problem} \begin{problem}\label{main-problem-2} Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ with rank $r$, where $1\leq r < l$. Construct an algorithm to compute all FLP factorizations of ${\mathbf{F}}$. \end{problem} Youla and Gnavi used an example to show that it is very difficult to judge whether a multivariate polynomial matrix is an FLP matrix. Hence, Problem \ref{main-problem-1} and Problem \ref{main-problem-2} may be very difficult in general. In this paper, we will give partial solutions to the above two problems. \section{Main Results}\label{sec_MR} Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ with rank $r$, and $f$ be a divisor of $d_r({\mathbf{F}})$, where $1\leq r < l$. We use the following lemma to illustrate that all the $r\times r$ column reduced minors of ${\mathbf{F}}$ play an important role in a factorization of ${\mathbf{F}}$ w.r.t. $f$. \begin{lemma}\label{comple-lemma} Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ with rank $r$, $f$ be a divisor of $d_r({\mathbf{F}})$, and $c_1,\ldots, c_\xi$ be all the $r\times r$ column reduced minors of ${\mathbf{F}}$, where $1\leq r < l$. If there exist $\mathbf{G}_1\in k[{\bf z}]^{l\times r}$ and $\mathbf{F}_1\in k[{\bf z}]^{r\times m}$ such that $\mathbf{F} = \mathbf{G}_1\mathbf{F}_1$ with $d_r({\mathbf{G}}_1)=f$, then $I_r({\mathbf{G}}_1) = \langle fc_1,\ldots, fc_\xi \rangle$. \end{lemma} \begin{proof} Since $\mathbf{F}$ is a matrix with rank $r$, there exists a full row rank matrix $\mathbf{A}\in k[{\bf z}]^{(l-r)\times l}$ such that $\mathbf{A}\mathbf{F} = \mathbf{0}_{(l-r)\times m}$. Let $\bar{{\mathbf{F}}} \in k[{\bf z}]^{l\times r}$ be an arbitrary full column rank submatrix of ${\mathbf{F}}$; then $\mathbf{A}\bar{\mathbf{F}} = \mathbf{0}_{(l-r)\times r}$. Based on Lemma \ref{RM_relation}, all the $(l-r)\times(l-r)$ reduced minors of $\mathbf{A}$ are $c_1,\ldots, c_\xi$ (up to sign). It follows from ${\rm rank}({\mathbf{F}}) \leq {\rm min}\{{\rm rank}({\mathbf{G}}_1),{\rm rank}({\mathbf{F}}_1)\}$ that ${\mathbf{G}}_1$ is a full column rank matrix and ${\mathbf{F}}_1$ is a full row rank matrix. Then $\mathbf{A}\mathbf{G}_1\mathbf{F}_1 =\mathbf{0}_{(l-r)\times m}$ implies that $\mathbf{A}\mathbf{G}_1 =\mathbf{0}_{(l-r)\times r}$, since ${\mathbf{F}}_1$ has full row rank. Using Lemma \ref{RM_relation} again, all the $r\times r$ reduced minors of ${\mathbf{G}}_1$ are $c_1,\ldots, c_\xi$ (up to sign). Consequently, $I_r({\mathbf{G}}_1) = \langle fc_1,\ldots, fc_\xi \rangle$ since $d_r({\mathbf{G}}_1)=f$. \qed \end{proof} Now, we give the first main result in this paper. \begin{theorem}\label{LWX-theorem-1} Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ with rank $r$, $f$ be a divisor of $d_r({\mathbf{F}})$ and $c_1,\ldots, c_\xi$ be all the $r\times r$ column reduced minors of ${\mathbf{F}}$, where $1\leq r < l$. Let $d = d_r({\mathbf{F}})$ and $\mathcal{K} = \rho({\mathbf{F}})$. Then the following are equivalent: \begin{enumerate} \item ${\mathbf{F}}$ has a factorization w.r.t. $f$; \item there exists ${\mathbf{F}}_1\in k[{\bf z}]^{r\times m}$ with full row rank such that $d_r({\mathbf{F}}_1)= \frac{d}{f}$ and $\mathcal{K} \subseteq \rho({\mathbf{F}}_1) \subseteq \mathcal{K}:\langle fc_1,\ldots, fc_\xi \rangle$. \end{enumerate} \end{theorem} \begin{proof} $1\rightarrow 2$. Suppose that ${\mathbf{F}}$ has a factorization w.r.t. $f$.
Then there exist ${\mathbf{G}}_1\in k[{\bf z}]^{l\times r}$ and ${\mathbf{F}}_1\in k[{\bf z}]^{r\times m}$ such that ${\mathbf{F}} = {\mathbf{G}}_1{\mathbf{F}}_1$ with $d_r({\mathbf{G}}_1) = f$. Clearly, $\mathcal{K} \subseteq \rho({\mathbf{F}}_1)$. From $d_r({\mathbf{F}}) = d_r({\mathbf{G}}_1)d_r({\mathbf{F}}_1)$ we have $d_r({\mathbf{F}}_1)= \frac{d}{f}$. According to Lemma \ref{comple-lemma}, $I_r({\mathbf{G}}_1) = \langle fc_1,\ldots, fc_\xi \rangle$. Let $g$ be any $r\times r$ minor of ${\mathbf{G}}_1$; then there exists ${\mathbf{G}}'\in k[{\bf z}]^{r\times l}$ such that ${\mathbf{G}}'{\mathbf{G}}_1 = g\mathbf{I}_{r\times r}$ by Lemma \ref{minor-Guan}. Multiplying both sides of $\mathbf{F} = \mathbf{G}_1\mathbf{F}_1$ on the left by ${\mathbf{G}}'$, we get ${\mathbf{G}}'\mathbf{F} = {\mathbf{G}}'\mathbf{G}_1\mathbf{F}_1 = g \mathbf{F}_1$. This implies that $g\cdot \rho({\mathbf{F}}_1) \subseteq \mathcal{K}$. Noting that $g$ is an arbitrary $r\times r$ minor of ${\mathbf{G}}_1$, we obtain $\rho({\mathbf{F}}_1) \subseteq \mathcal{K} : I_r({\mathbf{G}}_1) = \mathcal{K} : \langle fc_1,\ldots, fc_\xi \rangle$. $2\rightarrow 1$. Since $\mathcal{K} \subseteq \rho({\mathbf{F}}_1)$, there exists ${\mathbf{G}}_1\in k[{\bf z}]^{l\times r}$ such that ${\mathbf{F}} = {\mathbf{G}}_1{\mathbf{F}}_1$. It follows from $d_r({\mathbf{F}}) = d_r({\mathbf{G}}_1)d_r({\mathbf{F}}_1)$ that $d_r({\mathbf{G}}_1)= f$. Hence, ${\mathbf{F}}$ has a factorization w.r.t. $f$. \end{proof} Although Theorem \ref{LWX-theorem-1} gives a necessary and sufficient condition for ${\mathbf{F}}$ to have a factorization w.r.t. $f$, it is difficult to find a full row rank matrix ${\mathbf{F}}_1\in k[{\bf z}]^{r\times m}$ that satisfies $d_r({\mathbf{F}}_1)= \frac{d}{f}$ and $\mathcal{K} \subseteq \rho({\mathbf{F}}_1) \subseteq \mathcal{K}:\langle fc_1,\ldots, fc_\xi \rangle$. Next, we will further study the relationship between $\rho({\mathbf{F}})$ and $\rho({\mathbf{F}}_1)$. \begin{theorem}\label{LWX-module} Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ with rank $r$, $f$ be a divisor of $d_r({\mathbf{F}})$ and $c_1,\ldots, c_\xi$ be all the $r\times r$ column reduced minors of ${\mathbf{F}}$, where $1\leq r < l$. Suppose there exist $\mathbf{G}_1\in k[{\bf z}]^{l\times r}$ and $\mathbf{F}_1\in k[{\bf z}]^{r\times m}$ such that $\mathbf{F} = \mathbf{G}_1\mathbf{F}_1$ with $d_r({\mathbf{G}}_1)=f$. Let $d = d_r({\mathbf{F}})$, $\mathcal{K} = \rho({\mathbf{F}})$ and $\mathcal{K}_1 = \rho({\mathbf{F}}_1)$; then the following three modules are equal: \begin{enumerate} \item $(\mathcal{K}_1:\frac{d}{f})/\mathcal{K}_1$; \item $(\mathcal{K}:\langle dc_1,\ldots, dc_\xi \rangle)/\mathcal{K}_1$; \item ${\rm Torsion}(k[{\bf z}]^{1\times m}/\mathcal{K}_1)$. \end{enumerate} \end{theorem} \begin{proof} It follows from ${\rm rank}({\mathbf{F}}) \leq {\rm min}\{{\rm rank}({\mathbf{G}}_1),{\rm rank}({\mathbf{F}}_1)\}$ that ${\mathbf{F}}_1$ is a full row rank matrix. Since $d_r({\mathbf{F}}) = d_r({\mathbf{G}}_1)d_r({\mathbf{F}}_1)$, we have $d_r({\mathbf{F}}_1) = \frac{d}{f}$. It is apparent from Lemma \ref{LW-Torsion} that \begin{equation}\label{LWX-module-equ-1} (\mathcal{K}_1:\frac{d}{f})/\mathcal{K}_1 = {\rm Torsion}(k[{\bf z}]^{1\times m}/\mathcal{K}_1).
\end{equation} If the equation \begin{equation}\label{LWX-module-equ-0} \mathcal{K}_1:\frac{d}{f} =\mathcal{K}:\langle dc_1,\ldots, dc_\xi \rangle \end{equation} holds, then $(\mathcal{K}_1:\frac{d}{f})/\mathcal{K}_1$ and $(\mathcal{K}:\langle dc_1,\ldots, dc_\xi \rangle)/\mathcal{K}_1$ are obviously equal. We first verify $\mathcal{K}_1:\frac{d}{f} \subseteq \mathcal{K}:\langle dc_1,\ldots, dc_\xi \rangle$. Proceeding as in the proof of $1\rightarrow 2$ in Theorem \ref{LWX-theorem-1}, we get \begin{equation}\label{LWX-module-equ-8} \mathcal{K}_1 \subseteq \mathcal{K} : \langle fc_1,\ldots, fc_\xi \rangle. \end{equation} Using Equation (\ref{quotient-module-2}), we can derive \begin{equation}\label{LWX-module-equ-2} \mathcal{K}_1:\frac{d}{f} \subseteq (\mathcal{K}:\langle fc_1,\ldots, fc_\xi \rangle):\frac{d}{f} = \mathcal{K}:\langle dc_1,\ldots, dc_\xi \rangle. \end{equation} Next we show $\mathcal{K}:\langle dc_1,\ldots, dc_\xi \rangle \subseteq \mathcal{K}_1:\frac{d}{f}$. For any vector $\vec{u}\in \mathcal{K}:\langle dc_1,\ldots, dc_\xi \rangle = \bigcap_{j=1}^{\xi}(\mathcal{K}:dc_j)$, there exists $\vec{v}_j\in k[{\bf z}]^{1\times l}$ such that \begin{equation}\label{LWX-module-equ-3} dc_j\vec{u} = \vec{v}_j{\mathbf{F}} = \vec{v}_j{\mathbf{G}}_1{\mathbf{F}}_1, ~j =1,\ldots,\xi. \end{equation} Using Lemma \ref{Serre-LW}, for each $i=1,\ldots,n$, there exists $\mathbf{V}_i\in k[{\bf z}]^{m\times r}$ such that \begin{equation}\label{LWX-module-equ-4} {\mathbf{F}}_1\mathbf{V}_i = \frac{d}{f}\varphi_i \mathbf{I}_{r\times r}, \end{equation} where $\varphi_i$ is nonzero and independent of $z_i$. Combining Equation (\ref{LWX-module-equ-3}) and Equation (\ref{LWX-module-equ-4}), we see that \begin{equation}\label{LWX-module-equ-5} dc_j \vec{u} \mathbf{V}_i = \vec{v}_j\mathbf{G}_1\mathbf{F}_1\mathbf{V}_i = \vec{v}_j\mathbf{G}_1(\frac{d}{f}\varphi_i \mathbf{I}_{r\times r}) = \frac{d}{f}\varphi_i\vec{v}_j\mathbf{G}_1. \end{equation} As ${\rm gcd}(\varphi_1,\ldots,\varphi_n) = 1$ (each $\varphi_i$ is independent of $z_i$, so any common divisor involves no variable and is a nonzero constant), we have $dc_j\mid \frac{d}{f}\vec{v}_j\mathbf{G}_1$. This implies that $\frac{\vec{v}_j\mathbf{G}_1}{fc_j}$ is a polynomial vector. Then, it follows from Equation (\ref{LWX-module-equ-3}) that \begin{equation}\label{LWX-module-equ-6} \frac{d}{f}\vec{u} = \frac{\vec{v}_j\mathbf{G}_1}{fc_j}\mathbf{F}_1, ~j =1,\ldots,\xi. \end{equation} Thus, $\vec{u} \in \mathcal{K}_1 : \frac{d}{f}$, and we infer that $\mathcal{K}:\langle dc_1,\ldots, dc_\xi \rangle \subseteq \mathcal{K}_1:\frac{d}{f}$. Consequently, $(\mathcal{K}_1:\frac{d}{f})/\mathcal{K}_1 =(\mathcal{K}:\langle dc_1,\ldots, dc_\xi \rangle)/\mathcal{K}_1$. \end{proof} In Theorem \ref{LWX-module}, we obtain $\mathcal{K}_1:\frac{d}{f} = \mathcal{K}:\langle dc_1,\ldots, dc_\xi \rangle$. Naturally, we consider under what conditions $\mathcal{K}_1$ and $\mathcal{K}:\langle fc_1,\ldots, fc_\xi \rangle$ are equal. Now, we state the following conclusion. \begin{theorem}\label{LWX-Guan} Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ with rank $r$, $f$ be a divisor of $d_r({\mathbf{F}})$ and $c_1,\ldots, c_\xi$ be all the $r\times r$ column reduced minors of ${\mathbf{F}}$, where $1\leq r < l$. Suppose there exist $\mathbf{G}_1\in k[{\bf z}]^{l\times r}$ and $\mathbf{F}_1\in k[{\bf z}]^{r\times m}$ such that $\mathbf{F} = \mathbf{G}_1\mathbf{F}_1$ with $d_r({\mathbf{G}}_1) = f$. Let $d = d_r({\mathbf{F}})$, $\mathcal{K} = \rho({\mathbf{F}})$ and $\mathcal{K}_1 = \rho({\mathbf{F}}_1)$.
If ${\rm gcd}(f,\frac{d}{f}) = 1$, then $\mathcal{K}_1 = \mathcal{K} :\langle fc_1,\ldots,fc_\xi\rangle$ and $\mathcal{K} :\langle fc_1,\ldots,fc_\xi\rangle$ is a free module of rank $r$. \end{theorem} The above theorem is a generalization of Theorem 3.11 in \citep{Guan2018}. The proof of Theorem \ref{LWX-Guan} is basically the same as that of Theorem 3.11, except that we explicitly give a system of generators of $I_r({\mathbf{G}}_1)$; hence, the proof is omitted here. Evidently, computing $\rho({\mathbf{F}}_1) = \rho({\mathbf{F}}) :\langle fb_1,\ldots,fb_\beta \rangle$ as in Theorem 3.11 requires much more computation than computing $\rho({\mathbf{F}}_1) = \rho({\mathbf{F}}) :\langle fc_1,\ldots,fc_\xi\rangle$ as in Theorem \ref{LWX-Guan}. Suppose ${\rm gcd}(f,\frac{d}{f}) = 1$. Let $\mathcal{K} :\langle fc_1,\ldots,fc_\xi\rangle$ be a free module of rank $r$, and let a free basis of this module form the rows of $\mathbf{F}_1\in k[{\bf z}]^{r\times m}$. Then, $\rho({\mathbf{F}}_1) = \mathcal{K} :\langle fc_1,\ldots,fc_\xi\rangle$. Since $\mathcal{K} \subseteq \rho({\mathbf{F}}_1)$, there exists $\mathbf{G}_1\in k[{\bf z}]^{l\times r}$ such that $\mathbf{F} = \mathbf{G}_1\mathbf{F}_1$ with $d_r({\mathbf{G}}_1) = f'$, where $f'$ is a divisor of $d$. Notice that $f$ and $f'$ may be different: the condition that $\mathcal{K} :\langle fc_1,\ldots,fc_\xi\rangle$ is a free module of rank $r$ is only a necessary condition for the existence of a factorization of ${\mathbf{F}}$ w.r.t. $f$. In order to study the relationship between $f'$ and $f$, we first recall a result from \citep{Liu2015Further}. \begin{lemma}\label{LW-constant} Let ${\mathbf{F}}\in k[{\bf z}]^{l\times m}$ be of full row rank, $d = d_l({\mathbf{F}})$ and $\mathcal{K} = \rho({\mathbf{F}})$. If there exists a divisor $f$ of $d$ such that $\mathcal{K} : f = \mathcal{K}$, then $f$ is a constant. \end{lemma} Now, we can draw the following conclusion. \begin{proposition}\label{LWX-Guan-equivalent} Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ with rank $r$, and $c_1,\ldots, c_\xi$ be all the $r\times r$ column reduced minors of ${\mathbf{F}}$, where $1\leq r < l$. Let $\mathcal{K} = \rho({\mathbf{F}})$, $d = d_r({\mathbf{F}})$ be a square-free polynomial and $f$ be a divisor of $d$. Suppose $\mathcal{K}_1 = \mathcal{K}:\langle fc_1,\ldots, fc_\xi \rangle$ is a free module of rank $r$ and the rows of $\mathbf{F}_1\in k[{\bf z}]^{r\times m}$ form a free basis of $\mathcal{K}_1$. Then there is no proper divisor $f'$ of $f$ such that $\mathbf{F} = \mathbf{G}_1\mathbf{F}_1$, where $\mathbf{G}_1\in k[{\bf z}]^{l\times r}$ with $d_r({\mathbf{G}}_1) = f'$. \end{proposition} \begin{proof} Since $\mathcal{K} \subseteq \mathcal{K}_1$, there exists $\mathbf{G}_1\in k[{\bf z}]^{l\times r}$ such that $\mathbf{F} = \mathbf{G}_1\mathbf{F}_1$ with $d_r({\mathbf{G}}_1) = f'$, where $f'$ is a divisor of $d$. Since $d$ is a square-free polynomial, ${\rm gcd}(f',\frac{d}{f'}) = 1$. According to Theorem \ref{LWX-Guan}, it follows that $\mathcal{K}_1 = \mathcal{K}:\langle f'c_1,\ldots, f'c_\xi \rangle$, i.e., \begin{equation}\label{LWX-Guan-equivalent-equ-1} \mathcal{K}:\langle fc_1,\ldots, fc_\xi \rangle = \mathcal{K}:\langle f'c_1,\ldots, f'c_\xi \rangle. \end{equation} Assume that $f'$ is a proper divisor of $f$. It can easily be seen from Equation (\ref{LWX-Guan-equivalent-equ-1}) that \begin{equation}\label{LWX-Guan-equivalent-equ-2} \mathcal{K}_1 : \frac{f}{f'} = \mathcal{K}_1.
\end{equation} Because $d_r({\mathbf{F}}_1) = \frac{d}{f'}$ and $f' \mid f \mid d$, we have $\frac{f}{f'} \mid d_r({\mathbf{F}}_1)$. Based on Lemma \ref{LW-constant}, $\frac{f}{f'}$ is a constant. This contradicts the fact that $f'$ is a proper divisor of $f$. \end{proof} Before giving a new necessary and sufficient condition for the existence of a factorization of ${\mathbf{F}}$ w.r.t. $f$, we present the following result. \begin{lemma}\label{equivalent-zlp} Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ with rank $r$, and $c_1,\ldots, c_\xi$ be all the $r\times r$ column reduced minors of ${\mathbf{F}}$, where $1\leq r < l$. Then the following are equivalent: \begin{enumerate} \item there exist $\mathbf{U}\in k[{\bf z}]^{l\times r}$ and $\mathbf{F}_1\in k[{\bf z}]^{r\times m}$ such that $\mathbf{F} = \mathbf{U}\mathbf{F}_1$ with $\mathbf{U}$ being a ZRP matrix; \item $\langle c_1,\ldots, c_\xi \rangle =k[{\bf z}]$. \end{enumerate} \end{lemma} \begin{proof} $1\rightarrow 2$. Suppose there exist $\mathbf{U}\in k[{\bf z}]^{l\times r}$ and $\mathbf{F}_1\in k[{\bf z}]^{r\times m}$ such that $\mathbf{F} = \mathbf{U}\mathbf{F}_1$, where $\mathbf{U}$ is a ZRP matrix. By the proof of Lemma \ref{comple-lemma}, $c_1,\ldots, c_\xi$ are all the $r\times r$ reduced minors of $\mathbf{U}$ (up to sign). Then, $\langle c_1,\ldots, c_\xi \rangle =k[{\bf z}]$ since $\mathbf{U}$ is a ZRP matrix. $2\rightarrow 1$. Because ${\rm rank}({\mathbf{F}})=r$, there exists a full row rank matrix $\mathbf{H}\in k[{\bf z}]^{(l-r)\times l}$ such that \begin{equation}\label{equivalent-zlp-equ-1} \mathbf{H}{\mathbf{F}} = \mathbf{0}_{(l-r)\times m}. \end{equation} According to Lemma \ref{RM_relation}, $c_1,\ldots, c_\xi$ are all the $(l-r)\times (l-r)$ reduced minors of $\mathbf{H}$ (up to sign). By hypothesis, $\langle c_1,\ldots, c_\xi \rangle =k[{\bf z}]$, so by Lemma \ref{Lin-Bose-conjecture}, $\mathbf{H}$ has a ZLP factorization \begin{equation}\label{equivalent-zlp-equ-2} \mathbf{H} = \mathbf{G}\mathbf{H}_1, \end{equation} where $\mathbf{G}\in k[{\bf z}]^{(l-r)\times (l-r)}$, and $\mathbf{H}_1\in k[{\bf z}]^{(l-r)\times l}$ is a ZLP matrix. Let $\vec{v}\in {\rm Syz}(\mathbf{H})$; then $\mathbf{H}\vec{v} = \mathbf{G}\mathbf{H}_1\vec{v} = \vec{0}$. Since $\mathbf{G}$ is a full column rank matrix, $\mathbf{H}_1\vec{v} = \vec{0}$, which implies that $\vec{v}\in {\rm Syz}(\mathbf{H}_1)$. Conversely, if $\vec{u}\in {\rm Syz}(\mathbf{H}_1)$, then obviously $\vec{u}\in {\rm Syz}(\mathbf{H})$. It follows that \begin{equation}\label{equivalent-zlp-equ-3} {\rm Syz}(\mathbf{H}) = {\rm Syz}(\mathbf{H}_1). \end{equation} Thus we conclude that ${\rm Syz}(\mathbf{H})$ is a free module of rank $r$ by the Quillen-Suslin theorem. Suppose that the columns of $\mathbf{U}\in k[{\bf z}]^{l\times r}$ form a free basis of ${\rm Syz}(\mathbf{H})$. It follows from $\mathbf{H}\mathbf{U} = \mathbf{0}_{(l-r)\times r}$ and Lemma \ref{RM_relation} that all the $r\times r$ reduced minors of $\mathbf{U}$ generate $k[{\bf z}]$. Using Lemma \ref{Lin-Bose-conjecture} again, there exist $\mathbf{U}_1\in k[{\bf z}]^{l\times r}$ and $\mathbf{G}_1\in k[{\bf z}]^{r\times r}$ such that \begin{equation}\label{equivalent-zlp-equ-4} \mathbf{U} = \mathbf{U}_1\mathbf{G}_1 \end{equation} with $\mathbf{U}_1$ being a ZRP matrix. Since ${\mathbf{G}}_1$ is a full row rank matrix, from $\mathbf{H} \mathbf{U}_1\mathbf{G}_1 = \mathbf{0}_{(l-r)\times r}$ we have \begin{equation}\label{equivalent-zlp-equ-5} \mathbf{H} \mathbf{U}_1= \mathbf{0}_{(l-r)\times r}.
\end{equation} This implies that \begin{equation}\label{equivalent-zlp-equ-6} \rho(\mathbf{U}_1^{\rm T}) \subseteq \rho(\mathbf{U}^{\rm T}). \end{equation} Since $\mathbf{U}_1$ is a ZRP matrix, $d_r(\mathbf{U}_1)$ is a nonzero constant $\delta$; using $d_r(\mathbf{U}) = d_r(\mathbf{U}_1){\rm det}({\mathbf{G}}_1)$, we get $d_r(\mathbf{U}) = \delta\,{\rm det}({\mathbf{G}}_1)$. If ${\rm det}({\mathbf{G}}_1) \in k[{\bf z}] \setminus k$, then Equation (\ref{equivalent-zlp-equ-4}) implies that \begin{equation}\label{equivalent-zlp-equ-7} \rho(\mathbf{U}^{\rm T}) \subsetneq \rho(\mathbf{U}_1^{\rm T}). \end{equation} This contradicts Equation (\ref{equivalent-zlp-equ-6}). Thus, ${\rm det}({\mathbf{G}}_1)$ is a nonzero constant. Consequently, we infer that $\mathbf{U}$ is a ZRP matrix. Equation (\ref{equivalent-zlp-equ-1}) implies that the columns of ${\mathbf{F}}$ belong to ${\rm Syz}(\mathbf{H})$; hence there exists $\mathbf{F}_1\in k[{\bf z}]^{r\times m}$ such that \begin{equation}\label{equivalent-zlp-equ-8} \mathbf{F} = \mathbf{U}\mathbf{F}_1. \end{equation} \end{proof} Now, we give the second main result in this paper. \vspace{4pt} \begin{theorem}\label{LWX-main-flp} Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ with rank $r$, and $c_1,\ldots, c_\xi$ be all the $r\times r$ column reduced minors of ${\mathbf{F}}$, where $1\leq r < l$. Let $\mathcal{K} = \rho({\mathbf{F}})$, $d = d_r({\mathbf{F}})$ and $f$ be a divisor of $d$ with ${\rm gcd}(f,\frac{d}{f}) = 1$. If $\langle c_1,\ldots, c_\xi \rangle =k[{\bf z}]$, then the following are equivalent: \begin{enumerate} \item ${\mathbf{F}}$ has a factorization w.r.t. $f$; \item $\mathcal{K} : f$ is a free module of rank $r$. \end{enumerate} \end{theorem} \begin{proof} $1\rightarrow 2$. Suppose that ${\mathbf{F}}$ has a factorization w.r.t. $f$. Then there exist $\mathbf{G}_1\in k[{\bf z}]^{l\times r}$ and $\mathbf{F}_1\in k[{\bf z}]^{r\times m}$ such that $\mathbf{F} = \mathbf{G}_1\mathbf{F}_1$ with $d_r({\mathbf{G}}_1) = f$. According to Theorem \ref{LWX-Guan}, $\rho({\mathbf{F}}_1) = \mathcal{K} : \langle fc_1,\ldots, fc_\xi \rangle$. It follows from $\langle c_1,\ldots, c_\xi \rangle =k[{\bf z}]$ that $\langle fc_1,\ldots, fc_\xi \rangle = \langle f \rangle$. Then, $\rho({\mathbf{F}}_1) = \mathcal{K} : f$. As $\mathbf{F}_1$ is a full row rank matrix, $\mathcal{K} : f$ is a free module of rank $r$. $2\rightarrow 1$. Since $\langle c_1,\ldots, c_\xi \rangle =k[{\bf z}]$, by Lemma \ref{equivalent-zlp} we obtain \begin{equation}\label{LWX-main-flp-equ-1} \mathbf{F} = \mathbf{U}\mathbf{F}', \end{equation} where $\mathbf{U}\in k[{\bf z}]^{l\times r}$ is a ZRP matrix and $\mathbf{F}'\in k[{\bf z}]^{r\times m}$. Without loss of generality, we assume that $d_r(\mathbf{U}) =1$. Clearly, $\rho({\mathbf{F}}) \subseteq \rho({\mathbf{F}}')$. Based on the Quillen-Suslin theorem, there is a ZLP matrix $\mathbf{V} \in k[{\bf z}]^{r\times l}$ such that $\mathbf{V}\mathbf{U} = \mathbf{I}_{r\times r}$. Then, $\mathbf{F}' = \mathbf{V}\mathbf{F}$. This implies that $\rho({\mathbf{F}}') \subseteq \rho({\mathbf{F}})$. Thus, $\rho({\mathbf{F}}') = \mathcal{K}$, $d_r({\mathbf{F}}') = d_r({\mathbf{F}})$ and $\rho({\mathbf{F}}') : f$ is a free module of rank $r$. Since ${\rm gcd}(f,\frac{d}{f}) = 1$, $f$ is regular w.r.t. ${\mathbf{F}}'$. By Lemma \ref{W-flp}, there exist $\mathbf{G}'\in k[{\bf z}]^{r\times r}$ and $\mathbf{F}_1\in k[{\bf z}]^{r\times m}$ such that \begin{equation}\label{LWX-main-flp-equ-4} {\mathbf{F}}' = {\mathbf{G}}'{\mathbf{F}}_1 \end{equation} with ${\rm det}({\mathbf{G}}') = f$.
By substituting Equation (\ref{LWX-main-flp-equ-4}) into Equation (\ref{LWX-main-flp-equ-1}), we get \begin{equation}\label{LWX-main-flp-equ-5} {\mathbf{F}} = (\mathbf{U}{\mathbf{G}}'){\mathbf{F}}_1. \end{equation} Let ${\mathbf{G}}_1 = \mathbf{U}{\mathbf{G}}'$; then $d_r({\mathbf{G}}_1) = d_r(\mathbf{U}) {\rm det}({\mathbf{G}}') = f$. Thus ${\mathbf{F}}$ has a factorization w.r.t. $f$. \end{proof} \begin{remark} \cite{Mingsheng2007On} proved that $f$ is regular w.r.t. ${\mathbf{F}}'$ if ${\rm gcd}(f,\frac{d}{f}) = 1$. \end{remark} Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ with rank $r$ and $f$ be a divisor of $d_r({\mathbf{F}})$, where $1\leq r < l$. We define the following set: \[ M(f)=\{ h \in k[{\bf z}]: f \mid h \text{ and } h \mid d_r({\mathbf{F}}) \}. \] \vspace{4pt} Now, we give a partial solution to Problem \ref{main-problem-1}. \begin{theorem}\label{LWX-main-flp-2} Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ with rank $r$, and $c_1,\ldots, c_\xi$ be all the $r\times r$ column reduced minors of ${\mathbf{F}}$, where $1\leq r < l$. Let $\mathcal{K} = \rho({\mathbf{F}})$, $d = d_r({\mathbf{F}})$ and $f$ be a divisor of $d$. Suppose every $h\in M(f)$ satisfies ${\rm gcd}(h,\frac{d}{h}) = 1$ and $\langle c_1,\ldots, c_\xi \rangle =k[{\bf z}]$. Then the following are equivalent: \begin{enumerate} \item ${\mathbf{F}}$ has an FLP factorization w.r.t. $f$; \item $\mathcal{K} : f$ is a free module of rank $r$, but $\mathcal{K} : h$ is not a free module of rank $r$ for every $h\in M(f)\setminus \{f \}$. \end{enumerate} \end{theorem} \begin{remark} With the help of Theorem \ref{LWX-main-flp}, the proof of Theorem \ref{LWX-main-flp-2} is similar to that of Theorem 3.2 in \citep{Mingsheng2007On}, and is omitted here. \end{remark} In the above theorem, we need to verify whether a submodule of $k[{\bf z}]^{1\times m}$ is a free module of rank $r$. The traditional method is to calculate the $r$-th Fitting ideal of the submodule; we refer to \citep{Cox2005Using,Eisenbud2013,Greuel2002A} for more details. Next, we will give a simpler verification method. \vspace{8pt} \begin{proposition}\label{free-module-check} Let $\mathbf{F}\in k[{\bf z}]^{l\times m}$ with rank $r$, and $J \subset k[{\bf z}]$ be a nonzero ideal, where $1\leq r < l$. Suppose the rows of $\mathbf{F}_0\in k[{\bf z}]^{s\times m}$ form a system of generators of $\rho({\mathbf{F}}): J$. Then the following are equivalent: \begin{enumerate} \item $\rho({\mathbf{F}}): J$ is a free module of rank $r$; \item all the $r\times r$ column reduced minors of ${\mathbf{F}}_0$ generate $k[{\bf z}]$. \end{enumerate} \end{proposition} \begin{proof} It is evident that $\rho({\mathbf{F}}): J = \rho({\mathbf{F}}_0)$. According to Proposition 3.14 in \citep{Guan2018}, the rank of $\rho({\mathbf{F}}): J$ is $r$. This implies that ${\rm rank}({\mathbf{F}}_0) = r$ and $s\geq r$. $1\rightarrow 2$. Suppose that $\rho({\mathbf{F}}): J$ is a free module of rank $r$. Let the rows of $\mathbf{F}_1\in k[{\bf z}]^{r\times m}$ form a free basis of $\rho({\mathbf{F}}): J$; then $\rho({\mathbf{F}}_1) = \rho({\mathbf{F}}_0)$. On the one hand, $\rho({\mathbf{F}}_0) \subseteq \rho({\mathbf{F}}_1)$ implies that there exists $\mathbf{G}_1\in k[{\bf z}]^{s\times r}$ such that ${\mathbf{F}}_0 = {\mathbf{G}}_1{\mathbf{F}}_1$. On the other hand, it follows from $\rho({\mathbf{F}}_1) \subseteq \rho({\mathbf{F}}_0)$ that there exists $\mathbf{G}_0\in k[{\bf z}]^{r\times s}$ such that ${\mathbf{F}}_1 = {\mathbf{G}}_0{\mathbf{F}}_0$.
Combining the above two equations, we have ${\mathbf{F}}_1 = ({\mathbf{G}}_0{\mathbf{G}}_1){\mathbf{F}}_1$. Because ${\mathbf{F}}_1$ is a full row rank matrix, we obtain ${\mathbf{G}}_0{\mathbf{G}}_1 = \mathbf{I}_{r\times r}$. According to the Binet-Cauchy formula, $1 = {\rm det}({\mathbf{G}}_0{\mathbf{G}}_1)$ is a $k[{\bf z}]$-linear combination of the $r\times r$ minors of ${\mathbf{G}}_1$; hence all the $r\times r$ minors of ${\mathbf{G}}_1$ generate $k[{\bf z}]$. Therefore, ${\mathbf{G}}_1$ is a ZRP matrix. Based on Lemma \ref{equivalent-zlp}, all the $r\times r$ column reduced minors of ${\mathbf{F}}_0$ generate $k[{\bf z}]$. $2\rightarrow 1$. There are two cases. First, $s>r$. Using Lemma \ref{equivalent-zlp}, there exist $\mathbf{F}_1\in k[{\bf z}]^{r\times m}$ and a ZRP matrix $\mathbf{U}\in k[{\bf z}]^{s\times r}$ such that $\mathbf{F}_0 = \mathbf{U}\mathbf{F}_1$. It follows from the proof of $2\rightarrow 1$ in Theorem \ref{LWX-main-flp} that $\rho({\mathbf{F}}_0) = \rho({\mathbf{F}}_1)$. Since ${\mathbf{F}}_1$ is a full row rank matrix, $\rho({\mathbf{F}}): J$ is a free module of rank $r$. Second, $s=r$. In this situation, ${\mathbf{F}}_0$ is a full row rank matrix, which implies that $\rho({\mathbf{F}}):J$ is a free module of rank $r$. Obviously, in this case ${\mathbf{F}}_0$ has only one $r\times r$ column reduced minor, namely the constant $1$, which generates $k[{\bf z}]$. In summary, $\rho({\mathbf{F}}):J$ is a free module of rank $r$. \end{proof} \section{Algorithm and Examples}\label{sec_AE} \subsection{Algorithm} Before solving Problem \ref{main-problem-2}, we analyze the main results obtained in Section \ref{sec_MR}. We first define the following set of polynomial matrices in $k[{\bf z}]^{l\times m}$: \[\mathcal{M}=\{{\mathbf{F}}\in k[{\bf z}]^{l\times m} : d_r({\mathbf{F}}) \text{ is a square-free polynomial}\},\] where $r = {\rm rank}({\mathbf{F}})$. Let $\mathbf{F}\in \mathcal{M}$, $d = d_r({\mathbf{F}})$, $\mathcal{K} = \rho({\mathbf{F}})$, $f$ be an arbitrary divisor of $d$, and $c_1,\ldots, c_\xi$ be all the $r\times r$ column reduced minors of ${\mathbf{F}}$, where $1\leq r < l$. There are two cases. First, $\langle c_1,\ldots, c_\xi \rangle = k[{\bf z}]$. According to Theorem \ref{LWX-main-flp}, ${\mathbf{F}}$ has a factorization w.r.t. $f$ if and only if $\mathcal{K}:f$ is a free module of rank $r$. Since $f$ is an arbitrary divisor of $d$, we can compute all matrix factorizations of ${\mathbf{F}}$. After that, we obtain all FLP factorizations of ${\mathbf{F}}$ by Theorem \ref{LWX-main-flp-2}. Second, $\langle c_1,\ldots, c_\xi \rangle \neq k[{\bf z}]$. In this case Theorem \ref{LWX-Guan} gives only a necessary condition for the existence of a factorization of ${\mathbf{F}}$ w.r.t. $f$. Nevertheless, we can still obtain all factorizations of $\mathbf{F}$; the specific process is as follows. Let $f_1,\ldots,f_s$ be all the different divisors of $d$ and $\mathcal{K}_j = \mathcal{K}: \langle f_jc_1,\ldots, f_jc_\xi \rangle$; we then verify whether $\mathcal{K}_j$ is a free module of rank $r$, where $j=1,\ldots,s$. For each $j$, one of the following three cases holds: \begin{enumerate} \item $\mathcal{K}_j$ is not a free module of rank $r$; then ${\mathbf{F}}$ has no factorization w.r.t. $f_j$; \item $\mathcal{K}_j$ is a free module of rank $r$, and a free basis of $\mathcal{K}_j$ constitutes $\mathbf{F}_j\in k[{\bf z}]^{r\times m}$, \begin{itemize} \item[2.1] if $d_r({\mathbf{F}}_j) = \frac{d}{f_j}$, then ${\mathbf{F}}$ has a factorization w.r.t. $f_j$; \item[2.2] if $d_r({\mathbf{F}}_j) \neq \frac{d}{f_j}$, then ${\mathbf{F}}$ has a factorization w.r.t.
$f_i$, where $f_i\nmid f_j$. \end{itemize} \end{enumerate} Let ${\mathbf{F}} = {\mathbf{G}}_{i_1}{\mathbf{F}}_{i_1} =\cdots = {\mathbf{G}}_{i_t}{\mathbf{F}}_{i_t}$ be all the different factorizations of ${\mathbf{F}}$ and $\mathcal{K}_{i_j} = \rho({\mathbf{F}}_{i_j})$, where ${\mathbf{G}}_{i_j}\in k[{\bf z}]^{l\times r}$, ${\mathbf{F}}_{i_j}\in k[{\bf z}]^{r\times m}$, $j=1,\ldots, t$ and $0\leq t \leq s$ ($t=0$ means that ${\mathbf{F}}$ has no factorization). For each $\mathcal{K}_{i_j}$, if there does not exist $j'$ such that $\mathcal{K}_{i_j} \subsetneq \mathcal{K}_{i_{j'}}$, then ${\mathbf{F}} = {\mathbf{G}}_{i_j}{\mathbf{F}}_{i_j}$ is an FLP factorization of ${\mathbf{F}}$. The reason is as follows. Assume that there exist ${\mathbf{G}}_0\in k[{\bf z}]^{r\times r}$ and ${\mathbf{F}}_0\in k[{\bf z}]^{r\times m}$ such that ${\mathbf{F}}_{i_j} = {\mathbf{G}}_0{\mathbf{F}}_0$. If ${\rm det}({\mathbf{G}}_0) \in k[{\bf z}] \setminus k$, then $\mathcal{K}_{i_j} \subsetneq \rho({\mathbf{F}}_0)$. It can be seen that ${\mathbf{F}} = ({\mathbf{G}}_{i_j}{\mathbf{G}}_0){\mathbf{F}}_0$ is a factorization of ${\mathbf{F}}$ different from ${\mathbf{F}} = {\mathbf{G}}_{i_j}{\mathbf{F}}_{i_j}$. This contradicts the assumption that there exists no $j'$ such that $\mathcal{K}_{i_j} \subsetneq \mathcal{K}_{i_{j'}}$. Hence, ${\rm det}({\mathbf{G}}_0)$ is a nonzero constant and ${\mathbf{F}}_{i_j}$ is an FLP matrix. \vspace{4pt} According to the above analysis, we now give a partial solution to Problem \ref{main-problem-2}. We construct the following algorithm to compute all FLP factorizations for $\mathbf{F}\in \mathcal{M}$. \vspace{6pt} Before proceeding further, let us remark on Algorithm \ref{FLP_Algorithm}. \begin{enumerate} \item[(1)] In step 14 and step 26, we need to compute free bases of free submodules of $k[{\bf z}]^{1\times m}$. \cite{Fabianska2007Applications} designed a Maple package called QUILLENSUSLIN that implements the Quillen-Suslin theorem; the package also contains an algorithm for computing free bases of free submodules. Based on this package, Algorithm \ref{FLP_Algorithm} is implemented in Maple. For interested readers, more examples can be generated by the codes at: \url{http://www.mmrc.iss.ac.cn/~dwang/software.html}. \item[(2)] In step 8 and step 20, we need to compute a system of generators of $\mathcal{K}:J$, where $\mathcal{K}\subset k[{\bf z}]^{1\times m}$ and $J$ is a nonzero ideal. \cite{Mingsheng2005On} proposed an algorithm to compute $\mathcal{K}:J$, and we have implemented this algorithm in Maple. \item[(3)] In step 9 and step 21, if ${\mathbf{F}}_i'$ is a full row rank matrix, then $\rho({\mathbf{F}}_i')$ is a free module of rank $r$ and we do not need to compute a reduced Gr\"{o}bner basis of the ideal generated by all the $r\times r$ column reduced minors of ${\mathbf{F}}_i'$; otherwise, we need to use Proposition \ref{free-module-check} to determine whether $\mathcal{K}:J$ is a free module of rank $r$. \item[(4)] In step 20, $\rho({\mathbf{F}}):(f_i\mathcal{G}) = \rho({\mathbf{F}}):\langle f_ic_1,\ldots,f_ic_\xi \rangle$ since $\mathcal{G}$ is a reduced Gr\"{o}bner basis of $\langle c_1,\ldots,c_\xi\rangle$. This helps reduce the amount of computation. \item[(5)] In step 15 and step 27, we need to compute ${\mathbf{G}}_i\in k[{\bf z}]^{l\times r}$ such that ${\mathbf{F}} = {\mathbf{G}}_i{\mathbf{F}}_i$. \cite{Lu2020On} designed a Maple package called poly-matrix-equation for solving multivariate polynomial matrix Diophantine equations.
We use this package to compute ${\mathbf{G}}_i$. \item[(6)] In step 15, Theorem \ref{LWX-main-flp} guarantees that $d_r({\mathbf{G}}_i) = f_i$. In step 27, we cannot ensure that $d_r({\mathbf{G}}_i) = f_i$: Proposition \ref{LWX-Guan-equivalent} only tells us that there is no proper divisor $f_i'$ of $f_i$ such that $d_r({\mathbf{G}}_i) = f_i'$. Hence, we need to compute $d_r({\mathbf{G}}_i)$ explicitly. \item[(7)] In step 25 and step 29, we can use Gr\"{o}bner bases to verify the inclusion relationship of two submodules of $k[{\bf z}]^{1\times m}$. \item[(8)] In step 17, the element $({\mathbf{F}}_i',f_i)$ is also deleted, since $f_i$ divides itself. Similarly, the element $({\mathbf{F}}_i',f_i)$ in step 29 is also deleted, since $\rho({\mathbf{F}}_i') \subseteq \rho({\mathbf{F}}_i')$. \item[(9)] In fact, we can obtain all factorizations of ${\mathbf{F}}$ by making appropriate modifications to Algorithm \ref{FLP_Algorithm}. \end{enumerate} \vskip 12 pt \begin{algorithm}[H] \DontPrintSemicolon \SetAlgoSkip{} \LinesNumbered \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{$\mathbf{F}\in \mathcal{M}$, the rank $r$ of ${\mathbf{F}}$ and $d_r({\mathbf{F}})$.} \Output{all FLP factorizations of $\mathbf{F}$.} \Begin{ $P: = \emptyset$ and $W: = \emptyset$; compute all different divisors $f_1,\ldots,f_s$ of $d_r({\mathbf{F}})$; compute all the $r\times r$ column reduced minors $c_1,\ldots,c_\xi$ of ${\mathbf{F}}$; compute a reduced Gr\"{o}bner basis $\mathcal{G}$ of $\langle c_1,\ldots,c_\xi\rangle$; \If{$\mathcal{G} = \{1\}$} { \For{$i$ from $1$ to $s$} { compute a system of generators of $\rho({\mathbf{F}}):f_i$, and use all the elements in the system to constitute a matrix ${\mathbf{F}}_i'\in k[{\bf z}]^{s_i\times m}$; \If{the reduced Gr\"{o}bner basis of the ideal generated by all the $r\times r$ column reduced minors of ${\mathbf{F}}_i'$ is $\{1\}$} { $P := P \cup \{({\mathbf{F}}_i',f_i)\}$; } } \While{$P \neq \emptyset$} { select any element $({\mathbf{F}}_i',f_i)$ from $P$; \If{there are no other elements $({\mathbf{F}}_j',f_j)\in P$ such that $f_i \mid f_j$} { compute a free basis of $\rho({\mathbf{F}}_i')$, and use all the elements in the basis to constitute a matrix ${\mathbf{F}}_i\in k[{\bf z}]^{r\times m}$; compute a matrix ${\mathbf{G}}_i\in k[{\bf z}]^{l\times r}$ such that ${\mathbf{F}} = {\mathbf{G}}_i{\mathbf{F}}_i$; $W: = W \cup \{({\mathbf{G}}_i,{\mathbf{F}}_i,f_i)\}$; } delete all elements $({\mathbf{F}}_t',f_t)$ that satisfy $f_t\mid f_i$ from $P$; } } \Else { \For{$i$ from $1$ to $s$} { compute a system of generators of $\rho({\mathbf{F}}):(f_i\mathcal{G})$, and use all the elements in the system to constitute a matrix ${\mathbf{F}}_i'\in k[{\bf z}]^{s_i\times m}$; \If{the reduced Gr\"{o}bner basis of the ideal generated by all the $r\times r$ column reduced minors of ${\mathbf{F}}_i'$ is $\{1\}$} { $P := P \cup \{({\mathbf{F}}_i',f_i)\}$; } } \While{$P \neq \emptyset$} { select any element $({\mathbf{F}}_i',f_i)$ from $P$; \If{there are no other elements $({\mathbf{F}}_j',f_j)\in P$ such that $\rho({\mathbf{F}}_i') \subsetneq \rho({\mathbf{F}}_j')$} { compute a free basis of $\rho({\mathbf{F}}_i')$, and use all the elements in the basis to constitute ${\mathbf{F}}_i\in k[{\bf z}]^{r\times m}$; compute a matrix ${\mathbf{G}}_i\in k[{\bf z}]^{l\times r}$ such that ${\mathbf{F}} = {\mathbf{G}}_i{\mathbf{F}}_i$ with $d_r({\mathbf{G}}_i) = f_i'$; $W: = W \cup \{({\mathbf{G}}_i,{\mathbf{F}}_i,f_i')\}$; } delete all elements $({\mathbf{F}}_t',f_t)$ that satisfy $\rho({\mathbf{F}}_t') \subseteq \rho({\mathbf{F}}_i')$ from $P$;
} } {\bf return} $W$. } \caption{FLP factorization algorithm} \label{FLP_Algorithm} \end{algorithm} \subsection{Examples} We first use the example in \citep{Guan2018} to illustrate the calculation process of Algorithm \ref{FLP_Algorithm}. \begin{example}\label{example-1} {\rm Let \[\mathbf{F} = \begin{bmatrix} z_1z_2-z_2 & 0 & z_3+1 \\ 0 & z_1z_2-z_2 & z_1^2-2z_1+1 \\ z_1^2z_2-z_1z_2 & z_1z_2^2-z_2^2 & z_1^2z_2-2z_1z_2+z_1z_3+z_1+z_2 \end{bmatrix}\] be a multivariate polynomial matrix in $\mathbb{C}[z_1,z_2,z_3]^{3\times 3}$, where $z_1>z_2>z_3$ and $\mathbb{C}$ is the complex field. It is easy to compute that the rank of ${\mathbf{F}}$ is $2$ and that $d_2({\mathbf{F}})=(z_1-1)z_2$. Since $d_2({\mathbf{F}})$ is a square-free polynomial, ${\mathbf{F}}\in \mathcal{M}$. Then, we can use Algorithm \ref{FLP_Algorithm} to compute all FLP factorizations of ${\mathbf{F}}$. The inputs of Algorithm \ref{FLP_Algorithm} are $\mathbf{F}$, $r=2$ and $d_2({\mathbf{F}})=(z_1-1)z_2$. Let $P = \emptyset$ and $W = \emptyset$. All different divisors of $d_2({\mathbf{F}})$ are: $f_1 = 1$, $f_2 = z_1-1$, $f_3 = z_2$ and $f_4 = (z_1-1)z_2$. All the $2\times 2$ column reduced minors of ${\mathbf{F}}$ are: $c_1 = 1$, $c_2 = z_2$ and $c_3 = -z_1$. The reduced Gr\"{o}bner basis of $\langle c_1,c_2,c_3 \rangle$ w.r.t. the degree reverse lexicographic order is $\mathcal{G} = \{1\}$. Now, we use steps 7 to 17 to compute all FLP factorizations of ${\mathbf{F}}$. (1) When $i=1$, we first compute a system of generators of $\rho({\mathbf{F}}):f_1$; the system is $\{[z_1z_2-z_2, ~ 0, ~ z_3+1],~[0, ~ z_1z_2-z_2, ~ z_1^2-2z_1+1]\}$. Let \[\mathbf{F}_1' = \begin{bmatrix} z_1z_2-z_2 & 0 & z_3+1 \\ 0 & z_1z_2-z_2 & z_1^2-2z_1+1 \end{bmatrix}.\] Since $\rho({\mathbf{F}}_1') = \rho({\mathbf{F}}):f_1$ and ${\mathbf{F}}_1'$ is a full row rank matrix, $\rho({\mathbf{F}}):f_1$ is a free module of rank $2$. (2) When $i=2$, a system of generators of $\rho({\mathbf{F}}):f_2$ is $\{[0, ~ z_2, ~ z_1-1],~[z_1z_2-z_2, ~ 0, ~ z_3+1]\}$. Let \[\mathbf{F}_2' = \begin{bmatrix} 0 & z_2 & z_1-1 \\ z_1z_2-z_2 & 0 & z_3+1 \end{bmatrix}.\] Since $\rho({\mathbf{F}}_2') = \rho({\mathbf{F}}):f_2$ and ${\mathbf{F}}_2'$ is a full row rank matrix, $\rho({\mathbf{F}}):f_2$ is a free module of rank $2$. (3) When $i=3$, a system of generators of $\rho({\mathbf{F}}):f_3$ is $\{[z_1z_2-z_2, ~ 0, ~ z_3+1],~[0, ~ z_1z_2-z_2, ~ z_1^2-2z_1+1], ~[z_1^3-3z_1^2+3z_1-1, ~ -z_1z_3-z_1+z_3+1, ~ 0]\}$. Let \[\mathbf{F}_3' = \begin{bmatrix} z_1z_2-z_2 & 0 & z_3+1 \\ 0 & z_1z_2-z_2 & z_1^2-2z_1+1 \\ z_1^3-3z_1^2+3z_1-1 & -z_1z_3-z_1+z_3+1 & 0 \end{bmatrix}.\] All the $2\times 2$ column reduced minors of ${\mathbf{F}}_3'$ are $(z_1-1)^2, -z_2$, $z_3+1$. Since $\langle (z_1-1)^2, -z_2,z_3+1 \rangle \neq \mathbb{C}[z_1,z_2,z_3]$, $\rho({\mathbf{F}}):f_3$ is not a free module of rank $2$. (4) When $i=4$, a system of generators of $\rho({\mathbf{F}}):f_4$ is $\{[0, ~ z_2, ~ z_1-1],~[z_1z_2-z_2, ~ 0, ~ z_3+1], ~[z_1^2-2z_1+1, ~ -z_3-1, ~ 0]\}$. Let \[\mathbf{F}_4' = \begin{bmatrix} 0 & z_2 & z_1-1 \\ z_1z_2-z_2 & 0 & z_3+1 \\ z_1^2-2z_1+1 & -z_3-1 & 0 \end{bmatrix}.\] All the $2\times 2$ column reduced minors of ${\mathbf{F}}_4'$ are $z_1-1, z_2,z_3+1$. Since $\langle z_1-1, z_2,z_3+1 \rangle \neq \mathbb{C}[z_1,z_2,z_3]$, $\rho({\mathbf{F}}):f_4$ is not a free module of rank $2$. Then, $P=\{({\mathbf{F}}_1',f_1),({\mathbf{F}}_2',f_2)\}$. Since $f_2$ is a proper multiple of $f_1$, ${\mathbf{F}}$ has an FLP factorization w.r.t. $f_2$.
Obviously, the rows of ${\mathbf{F}}_2'$ constitute a free basis of $\rho({\mathbf{F}}):f_2$. Setting ${\mathbf{F}}_2 = {\mathbf{F}}_2'$, we compute a polynomial matrix ${\mathbf{G}}_2\in \mathbb{C}[z_1,z_2,z_3]^{3\times 2}$ such that \[{\mathbf{F}} = {\mathbf{G}}_2{\mathbf{F}}_2 = \begin{bmatrix} 0 & 1 \\ z_1-1 & 0 \\ z_1z_2- z_2 & z_1 \end{bmatrix} \begin{bmatrix} 0 & z_2 & z_1-1 \\ z_1z_2-z_2 & 0 & z_3 +1 \end{bmatrix},\] where $d_2({\mathbf{G}}_2) = f_2$ and ${\mathbf{F}}_2$ is an FLP matrix. Then, $W = \{({\mathbf{G}}_2,{\mathbf{F}}_2,f_2)\}$.} \end{example} \begin{remark} Since $\langle c_1,c_2,c_3 \rangle = \langle 1 \rangle$, we can use Theorem \ref{LWX-main-flp-2} to compute all FLP factorizations of ${\mathbf{F}}$. The above calculation process is simpler than that of Example 3.20 in \citep{Guan2018}. This suggests that Algorithm \ref{FLP_Algorithm} is more efficient than the algorithm proposed in \citep{Guan2018}. \end{remark} \begin{example}\label{example-2} {\rm Let \[\mathbf{F} = \begin{bmatrix} z_1z_2^2 & z_1z_3^2 & z_2^2z_3+z_3^3 \\ z_1z_2 & 0 & z_2z_3 \\ 0 & z_1^2z_3 & z_1z_3^2 \end{bmatrix}\] be a multivariate polynomial matrix in $\mathbb{C}[z_1,z_2,z_3]^{3\times 3}$, where $z_1>z_2>z_3$ and $\mathbb{C}$ is the complex field. It is easy to compute that the rank of ${\mathbf{F}}$ is $2$ and that $d_2({\mathbf{F}})=z_1z_2z_3$. Since $d_2({\mathbf{F}})$ is a square-free polynomial, ${\mathbf{F}}\in \mathcal{M}$. Then, we can use Algorithm \ref{FLP_Algorithm} to compute all FLP factorizations of ${\mathbf{F}}$. The inputs of Algorithm \ref{FLP_Algorithm} are $\mathbf{F}$, $r=2$ and $d_2({\mathbf{F}})=z_1z_2z_3$. Let $P = \emptyset$ and $W = \emptyset$. All different divisors of $d_2({\mathbf{F}})$ are: $f_1 = 1$, $f_2 = z_1$, $f_3 = z_2$, $f_4 = z_3$, $f_5 = z_1z_2$, $f_6 = z_1z_3$, $f_7 = z_2z_3$ and $f_8 = z_1z_2z_3$. All the $2\times 2$ column reduced minors of ${\mathbf{F}}$ are: $c_1 = z_1$, $c_2 = z_3$ and $c_3 = z_1z_2$. The reduced Gr\"{o}bner basis of $\langle c_1,c_2,c_3 \rangle$ w.r.t. the degree reverse lexicographic order is $\mathcal{G} = \{z_1,z_3\}$. Now, we use steps 19 to 29 to compute all FLP factorizations of ${\mathbf{F}}$. Let $\mathcal{K}_i = \rho({\mathbf{F}}): \langle f_ic_1,f_ic_2,f_ic_3 \rangle$, where $i=1,\ldots,8$. Since $\mathcal{G}$ is a Gr\"{o}bner basis of $\langle c_1,c_2,c_3 \rangle$, for each $i$ we have $\mathcal{K}_i = \rho({\mathbf{F}}): \langle f_ic_1,f_ic_2 \rangle = (\rho({\mathbf{F}}):f_ic_1) \cap (\rho({\mathbf{F}}):f_ic_2)$. (1) When $i=1$, the systems of generators of $\rho({\mathbf{F}}):z_1$ and $\rho({\mathbf{F}}):z_3$ are $\{[z_1z_2, ~ 0, ~ z_2z_3],~[0, ~ z_1z_3,$ $z_3^2],~[-z_2z_3^2, ~ z_2z_3^2, ~ 0]\}$ and $\{[z_1z_2, ~ 0, ~ z_2z_3],~[0, ~ z_1z_3, ~ z_3^2]$, $[0, ~ z_1^2, ~ z_1z_3]\}$, respectively. Then, a system of generators of $\mathcal{K}_1$ is \[\{[z_1z_2, ~ 0, ~ z_2z_3],~[0, ~ z_1z_3, ~ z_3^2],~[-z_1z_2z_3^2, ~ z_1z_2z_3^2, ~ 0]\}.\] Let \[\mathbf{F}_1' = \begin{bmatrix} z_1z_2 & 0 & z_2z_3 \\ 0 & z_1z_3 & z_3^2 \\ -z_1z_2z_3^2 & z_1z_2z_3^2 & 0 \end{bmatrix}.\] It is easy to compute that all the $2\times 2$ column reduced minors of ${\mathbf{F}}_1'$ are $1, z_2z_3,z_3^2$. Since $\langle 1, z_2z_3,z_3^2 \rangle = \mathbb{C}[z_1,z_2,z_3]$, $\mathcal{K}_1$ is a free module of rank $2$.
(2) When $i=2$, the systems of generators of $\rho({\mathbf{F}}):z_1^2$ and $\rho({\mathbf{F}}):z_1z_3$ are $\{[z_1z_2, ~ 0, ~ z_2z_3],$ $[0, ~ z_1z_3, ~ z_3^2],~[z_2z_3, ~ -z_2z_3, ~ 0]\}$ and $\{[z_1z_2, ~ 0, ~ z_2z_3],~[0, ~ z_1, ~ z_3]$, $[z_2z_3, ~ -z_2z_3, ~ 0]\}$, respectively. Then, a system of generators of $\mathcal{K}_2$ is \[\{[z_1z_2, ~ 0, ~ z_2z_3],~[0, ~ z_1z_3, ~ z_3^2],~[z_2z_3, ~ -z_2z_3, ~ 0]\}.\] Let \[\mathbf{F}_2' = \begin{bmatrix} z_1z_2 & 0 & z_2z_3 \\ 0 & z_1z_3 & z_3^2 \\ z_2z_3 & -z_2z_3 & 0 \end{bmatrix}.\] It is easy to compute that all the $2\times 2$ column reduced minors of ${\mathbf{F}}_2'$ are $z_1,-z_2,-z_3$. Since $\langle z_1,-z_2,-z_3 \rangle \neq \mathbb{C}[z_1,z_2,z_3]$, $\mathcal{K}_2$ is not a free module of rank $2$. (3) Following the same steps as above, the systems of generators of $\mathcal{K}_3, \ldots, \mathcal{K}_8$ are $\{[z_1, ~ 0, ~ z_3],~[0, ~ z_1z_3, ~ z_3^2]\}$, $\{[0, ~ z_1, ~ z_3],~[z_1z_2, ~ 0, ~ z_2z_3]\}$, $\{[-z_3, ~ z_3, ~ 0],~[z_1, ~ 0, ~ z_3]\}$, $\{[0, ~ z_1, ~ z_3],$ $[z_2, ~ -z_2, ~ 0]\}$, $\{[z_1, ~ 0, ~ z_3],~[0, ~ z_1, ~ z_3]\}$ and $\{[0, ~ z_1, ~ z_3],~[-1, ~ 1, ~ 0]\}$, respectively. Let ${\mathbf{F}}_i'\in \mathbb{C}[z_1,z_2,z_3]^{2\times 3}$ be composed of the above system of generators of $\mathcal{K}_i$, where $i =3,\ldots,8$. For each $i$, it is easy to compute that ${\rm rank}({\mathbf{F}}_i')=2$, i.e., ${\mathbf{F}}_i'$ is a full row rank matrix. Then, $\mathcal{K}_i = \rho({\mathbf{F}}_i')$ is a free module of rank $2$. We thus have \[P = \{({\mathbf{F}}_1',f_1),({\mathbf{F}}_3',f_3),\ldots,({\mathbf{F}}_8',f_8)\}.\] (4) Since $\rho({\mathbf{F}}_i') \subsetneq \rho({\mathbf{F}}_8')$ for each $1\leq i \leq 7$ with $i \neq 2$, ${\mathbf{F}}$ has only one FLP factorization. Since \[\mathbf{F}_8' = \begin{bmatrix} 0 & z_1 & z_3 \\ -1 & 1 & 0 \end{bmatrix}\] is a full row rank matrix, the rows of $\mathbf{F}_8'$ constitute a free basis of $\mathcal{K}_8 = \rho({\mathbf{F}}_8')$. Setting $\mathbf{F}_8 = \mathbf{F}_8'$, we compute a polynomial matrix ${\mathbf{G}}_8\in \mathbb{C}[z_1,z_2,z_3]^{3\times 2}$ such that \[{\mathbf{F}} = {\mathbf{G}}_8{\mathbf{F}}_8 = \begin{bmatrix} z_2^2+z_3^2 & -z_1z_2^2 \\ z_2 & -z_1z_2 \\ z_1z_3 & 0 \end{bmatrix} \begin{bmatrix} 0 & z_1 & z_3 \\ -1 & 1 & 0 \end{bmatrix},\] where ${\mathbf{F}}_8$ is an FLP matrix. It is easy to compute that $d_2({\mathbf{G}}_8) = f_8$. Then, $W = \{({\mathbf{G}}_8,{\mathbf{F}}_8,f_8)\}$.} \end{example} \section{Concluding Remarks}\label{sec_conclusions} In this paper we have studied two FLP factorization problems for multivariate polynomial matrices without full row rank. To the best of our knowledge, the general FLP factorization problem is still open. In order to handle some special cases, we have introduced the concept of column reduced minors. We have then proved a theorem which provides a necessary and sufficient condition for a class of multivariate polynomial matrices without full row rank to have FLP factorizations. Moreover, we have given a simple method to verify whether a submodule of $k[{\bf z}]^{1\times m}$ is a free module by using column reduced minors of polynomial matrices. Compared with the traditional method, the new method is more efficient. Based on our results, we have also proposed an algorithm for FLP factorizations and implemented it in the computer algebra system Maple. Two examples have been given to illustrate the effectiveness of the algorithm.
Let ${\mathbf{F}}\in k[{\bf z}]^{l\times m}$. If ${\rm rank}({\mathbf{F}}) = l$, then every full column rank submatrix of ${\mathbf{F}}$ is a square matrix. In this case, ${\mathbf{F}}$ has only one $l\times l$ column reduced minor, namely the constant $1$. Therefore, all the results in this paper are also valid for the case where ${\mathbf{F}}$ is a full row rank matrix. We can define the concept of row reduced minors analogously, and all the results in this paper can be translated into similar results for FRP factorizations of multivariate polynomial matrices without full column rank. We hope the results provided in the paper will motivate further research in the area of factor prime factorizations. \section*{Acknowledgments} This research was supported by the CAS Key Project QYZDJ-SSW-SYS022. \bibliographystyle{elsarticle-harv}
\section{Introduction} \label{sec:intro} Twitter is a valuable data source for many research topics due to the richness of the data it provides and the recorded social interactions. The RecSys Challenge 2020 addresses the prediction of four types of user engagements on Twitter. For privacy reasons the dataset provided in the challenge is an artificial one: it is collected over a one-week span and consists of public engagements along with pseudo negatives randomly sampled from the public follow graph~\cite{organizersrecsys}. The artificial estimation of pseudo negatives, together with the imbalance of the four engagement classes, makes the prediction task difficult for learning methods. We present the results obtained by our solution, based on the Click-Through Rate (CTR), on the metrics provided by the challenge, and we compare them with those obtained with a gradient boosting model. We observe that our solution outperforms the learning method by a margin on the dataset provided. \subsection{Dataset insights} \begin{figure}[H] \includegraphics[scale=0.4]{images/action_distribution.png} \caption{The chart shows the distribution of each action in the training set. Retweet with comment and Reply are very unbalanced, while pseudo negatives represent a large part of the dataset. This chart refers to the official and latest training set released.} \label{fig:action_distribution} \end{figure} The dataset is a collection of public interactions on tweets along with information about their author and the user that generates the engagement. This dataset has an uneven class distribution. As illustrated in Figure~\ref{fig:action_distribution}, class imbalance in the training set makes the classification process difficult on both validation and test sets. This condition is further aggravated by the way pseudo negatives are obtained, as described in~\cite{organizersrecsys}. In that work, the authors explain the difficulties in including pseudo negatives, samples that represent interactions with no engagement. However, the collection of this data hides the reason why a user did not interact with a tweet: a user may have willingly chosen not to interact, or may simply not have seen the tweet at all. This implies that a binary classifier is potentially misled when considering negative class candidates. Additionally, the absence of users' past history prevents the use of user-based and personalized recommendation algorithms. The lack of user historical data is shown in the histogram in Figure \ref{fig:tcpu}, which highlights that the majority of users interact with fewer than three tweets. \begin{figure}[H] \centering \includegraphics[scale=0.3]{images/tcpu.pdf} \caption{The histogram represents the amount of tweets for which a user appears in the challenge dataset as content consumer. The horizontal axis represents how many tweets are paired with a unique user, while the vertical axis shows how many users have that number of interactions (both positive and negative).} \label{fig:tcpu} \end{figure} \subsection{Proposed metrics} The organizers of the RecSys Challenge 2020 proposed two different metrics to evaluate the solutions: \begin{description} \item[PRAUC] (Precision Recall Area Under the Curve) \item[RCE] (Relative Cross Entropy) \end{description} The PRAUC is useful to deal with unbalanced classes like \textit{Retweet with comment} and \textit{Reply}: these classes have numerous rows with null timestamps, meaning that no action is performed.
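For concreteness, the following Python sketch shows how the two metrics can be evaluated locally with scikit-learn. The PRAUC is the area under the precision-recall curve; for the RCE we use our reading of the challenge definition, namely the relative improvement of the model's cross entropy over a naive predictor that always outputs the training CTR of the class. The function names and the \texttt{ctr} argument are our own conventions, not part of the official evaluation code. \begin{verbatim}
# Hedged sketch of the two challenge metrics as we understand them.
# y_true: binary labels; y_pred: predicted probabilities;
# ctr: training click-through rate of the class (our assumption for
# the naive baseline inside RCE).
import numpy as np
from sklearn.metrics import precision_recall_curve, auc, log_loss

def prauc(y_true, y_pred):
    # Area under the precision-recall curve.
    precision, recall, _ = precision_recall_curve(y_true, y_pred)
    return auc(recall, precision)

def rce(y_true, y_pred, ctr):
    # Relative improvement in cross entropy over the CTR constant.
    ce_model = log_loss(y_true, y_pred)
    ce_naive = log_loss(y_true, np.full(len(y_true), ctr))
    return (1.0 - ce_model / ce_naive) * 100.0
\end{verbatim} Note that, under this reading, predicting the class CTR for every sample yields a cross entropy essentially equal to that of the naive predictor, and therefore an RCE close to zero; this is consistent with the near-zero RCE values of the CTR constant reported in the following sections.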
The final ranking is computed in different steps: \begin{itemize} \item Averaging the PRAUC score across the four engagements \item Averaging the RCE score across the four engagements \item Computing the ranking for both metrics \item Summing the two obtained rankings \end{itemize} As we have observed in our experimental results, this way of computing the score favors solutions with a good score on the least competitive metric rankings. \section{Our solution} For the final submission, we propose two solutions, which we assess in terms of the scores observed on the evaluation set. In particular we submit: \textit{a CTR (Click-Through Rate)-based method} and \textit{a gradient boosting model}. This choice is supported by the following reasoning: the \textit{gradient boosting model} performs very well on our local test set, but it worsens significantly on the public leaderboard computed on the evaluation set. On the other hand, the CTR method behaves almost in the same way on both sets. This method uses an optimized constant, which is exactly the CTR value of each class. We report both solutions in detail in the two following sections. \subsection{Click-Through Rate-based method} We estimate which constant value has the best outcome on both PRAUC and RCE to get a better understanding of the evaluation metrics. This investigation yields two different outcomes: \begin{itemize} \item \textbf{PRAUC}: any constant produces the same score. \item \textbf{RCE}: the score depends on the chosen constant and differs across engagement types. \end{itemize} The best result, as suggested by the way this metric is calculated, is given by a Click-Through Rate specific to the type of engagement. The CTR represents the ratio of positive engagements to the number of times a tweet has been viewed. This value is calculated, on the training set, in different steps: \begin{itemize} \item Count the number of positive engagements for each class: an engagement is considered \textit{positive} if the timestamp of the interaction between a user and a tweet is not null. \item Count the total number of rows of the training set: this includes the four types of engagement along with pseudo negatives. \item The CTR is calculated with the following equation depending on the class c: \begin{equation} CTR = \frac{engagement_{c}}{N_{rows}} \label{eq:ctr} \end{equation} \end{itemize} In Equation \ref{eq:ctr}, \textit{c} represents one of the four engagements to predict: Like, Retweet, Reply, Retweet with comment. The CTR numerical values are reported in Table~\ref{tab:ctr-training}. \begin{table}[H] \begin{tabular}{|c|c|c|c|c|} \hline & \textbf{Like} & \textbf{Reply} & \textbf{Retweet} & \textbf{Retweet with comment} \\ \hline \textbf{CTR} & 0.428 & 0.025 & 0.108 & 0.007 \\ \hline \end{tabular} \caption{CTR values calculated on the training set: the class with the highest ratio is Like, which is intuitively the most frequent.} \label{tab:ctr-training} \end{table} The optimal value is found, for both RCE and PRAUC, by comparing the results of different constant values on the training set; the optimum is exactly the CTR of each class. The CTR scores are computed as illustrated in Figure \ref{fig:constant_teaser}: the number of positive engagements per class (timestamp present) divided by the total number of interactions per class (timestamp present or null) gives the related CTR value. The score is an average of the performance obtained over different chunks of the dataset.
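As an illustration, a minimal pandas sketch of the computation of Equation (\ref{eq:ctr}) follows. The column names (e.g. \texttt{reply\_timestamp}) are our own naming convention, since the raw challenge file is shipped without a header. \begin{verbatim}
# Hedged sketch of Equation (2): per-class CTR on the training set.
# Column names are assumptions; adapt them to your loading code.
import pandas as pd

ENGAGEMENTS = ["reply", "retweet", "retweet_with_comment", "like"]

def per_class_ctr(train: pd.DataFrame) -> dict:
    # Denominator: every row, i.e. positives of all classes plus
    # the pseudo negatives.
    n_rows = len(train)
    # An engagement is positive when its timestamp is not null.
    return {c: train[f"{c}_timestamp"].notna().sum() / n_rows
            for c in ENGAGEMENTS}

# The constant-based submission then simply predicts ctr[c] for
# every (tweet, user) pair and every engagement class c.
\end{verbatim}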
RCE and PRAUC for the four different classes of engagement are listed in Table~\ref{tab:constant_tuning}. The second line reports the scores obtained with a random constant, used as a baseline. As shown by the other rows of Table \ref{tab:constant_tuning}, as the constant moves away from the CTR value, the RCE decreases in all four classes. \begin{table*}[!ht] \begin{tabular}{@{}ccccccccc@{}} \toprule & \multicolumn{4}{c}{RCE} & \multicolumn{4}{c}{PRAUC} \\ \midrule & Like & Reply & Retweet & \makecell{Retweet\\ with comment} & Like & Reply & Retweet & \makecell{Retweet\\ with comment} \\ \midrule \textbf{CTR} & -0.01 & -0.002 & -0.003 & -0.001 & 0.72 & 0.51 & 0.554 & 0.503 \\ \textbf{Random} & -46.09 & -739.49 & -189.84 & -2219.446 & 0.43 & 0.03 & 0.109 & 0.007 \\ \textbf{0} & -2091.49 & -642.44 & -994.003 & -483.57 & 0.72 & 0.51 & 0.554 & 0.503 \\ \textbf{0.1} & -54.81 & -35.68 & -0.135 & -181.54 & 0.72 & 0.51 & 0.554 & 0.503 \\ \textbf{0.3} & -5.87 & -217.64 & -30.219 & -741.73 & 0.72 & 0.51 & 0.554 & 0.503 \\ \textbf{0.5} & -1.26449 & -481.89 & -100.908 & -1507.97 & 0.72 & 0.51 & 0.554 & 0.503 \\ \textbf{1} & -2754.47 & -28153.38 & -8817.25 & -79441.8 & 0.72 & 0.51 & 0.554 & 0.503 \\ \bottomrule \end{tabular} \caption{Evaluation of different constants averaged across distinct portions of the training set: the CTR outperforms all the other numbers on RCE, while the PRAUC behaves in the same way for all the numbers provided.} \label{tab:constant_tuning} \end{table*} This constant value was tested on different partitions of the full training set to assess the validity of the approach. Each partition contains 16 million rows, similar in size to the challenge's validation and test sets. This value achieves almost the same performance in terms of RCE and PRAUC throughout the different time spans, as highlighted in Figure~\ref{fig:constant_performance}. \begin{figure}[ht] \begin{subfigure}{.45\linewidth} \centering \includegraphics[width=\linewidth]{images/constant_performance_rce.png} \end{subfigure} \begin{subfigure}{.45\linewidth} \centering \includegraphics[width=\linewidth]{images/constant_performance_prauc.png} \end{subfigure} \caption{The two charts describe the score obtained by the CTR constant over different time spans: the PRAUC always has exactly the same value for each type of engagement, and the RCE is almost constant as well.} \label{fig:constant_performance} \end{figure} \begin{figure} \includegraphics[scale=0.1]{images/constant_model.png} \caption{Overview of the model based on the CTR constant: the first step is to compute the number of actions of each type; then the constant is calculated, with a different value for each class.} \label{fig:constant_teaser} \end{figure} \subsection{Gradient boosting} The gradient boosting approach is implemented using the XGBoost~\cite{xgboost} library and includes four different models, one for each engagement to predict. The input of these models is enriched with 59 features that are reported in the following section. \subsubsection{Feature engineering} 59 additional features were derived from the dataset provided by the challenge organizers, in strict adherence to the terms and conditions of the challenge. These additional features are grouped in different categories to facilitate their understanding: \textbf{Dataset features (12 features)}: given directly by the dataset, they are exploited with little or no adjustments.
Examples of these features are the number of hashtags, the language of the tweet and the number of followers. \textbf{Author features (18 features)}: this group profiles each \textit{tweet author} included in the training set. They are pre-computed features detailing the behaviour of each author during the history documented by the dataset. The most relevant features belonging to this category are: \begin{itemize} \item \textit{Author engagement ratio}: $\frac{N_{eng_c}}{N_{tweet}}$, where $N_{eng_c}$ represents the number of actions of a particular type of engagement \textit{c}, with \textit{c} one among \textit{Like, Retweet, Reply, Retweet with comment}, received by a specific tweet author, while the denominator $N_{tweet}$ refers to the total number of tweets published by that author. In the end, there are four \textit{author engagement ratios}, one per engagement type. \item \textit{Number of received engagements}: expresses the total number of interactions received by the author for each type of engagement. \end{itemize} \textbf{User features (18 features)}: similar to the author features, but computed for the person interacting with the tweet. In this group, we find statistical features such as the \textit{engagement ratio} and the \textit{number of actions of each type}, calculated from the user's point of view. \textbf{Languages spoken (1 feature)}: the main intuition behind this feature is that understanding the language of a tweet plays a key role in whether a user interacts with it. This approach includes the pre-computation of a file containing, for each user id, the number of times that user has interacted with a tweet written in a specific language. The goal of this computation is to identify, for each user $U_{id}$, a list of languages \textit{spoken} by that specific user. In more formal terms: \begin{equation} f(U_{id}, Lang_{id}) = n_{LT} \end{equation} where $Lang_{id}$ is the id of a language and $n_{LT}$ is the number of tweets written in that language engaged by the user. \textbf{Previous actions (4 features)}: another pre-computation is performed to reconstruct the history of previous interactions. This set of features is formalized with the following function: \begin{equation} f(U_{id}, A_{id}, c) = n_{PA} \end{equation} where: \begin{description} \item[$U_{id}$] is the user id \item[$A_{id}$] is the author id \item[$c$] is the class representing the engagement type \item[$n_{PA}$] is the number of previous actions for the triplet $(U_{id}, A_{id}, c)$ \end{description} \textbf{Word search (6 features)}: this class of features is the only one referring to the text of the tweet. We extract some meaningful words from the text tokens and generate a boolean variable indicating whether that specific word is included in the text of the tweet. The words used are related to a \textit{call to action}, a situation in which the tweet author invites followers to perform a specific action with respect to the tweet. The considered words are \textit{share}, \textit{retweet}, \textit{reply}, \textit{comment}. \section{Submission} \subsection{Click-Through Rate-based} This submission was performed using the value of the CTR calculated on the whole training set, as sketched below. The intuition behind this approach was that, if the distribution of positive actions with respect to negative ones does not change too much, we can achieve a score that outperforms several proposed models, including the \textit{gradient boosting model}.
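A minimal sketch of how such a constant submission might be assembled is shown below; the identifier columns \texttt{tweet\_id} and \texttt{user\_id} are hypothetical names and not necessarily those of the official submission format:
\begin{verbatim}
import pandas as pd

# CTR values calculated on the whole training set (see the table above)
CTR = {"like": 0.428, "reply": 0.025,
       "retweet": 0.108, "retweet_with_comment": 0.007}

def constant_submission(test: pd.DataFrame, engagement: str) -> pd.DataFrame:
    """Predict the same CTR constant for every (tweet, user) pair."""
    out = test[["tweet_id", "user_id"]].copy()  # hypothetical id columns
    out["prediction"] = CTR[engagement]
    return out

# like_sub = constant_submission(test_df, "like")
# like_sub.to_csv("like_submission.csv", index=False, header=False)
\end{verbatim}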
\subsection{Gradient Boosting} The final model includes almost sixty different features. The early stopping feature of the XGBoost library was used to avoid overfitting: after each training round the model is evaluated on a validation set using the RCE metric, and if there is no improvement with respect to the last $N$ rounds the training is stopped. The four models were trained on the final release of the dataset with the parameters in Table \ref{tab:xgboostparam}. \begin{table}[H] \begin{tabular}{|l|l|l|l|l|l|} \hline \textbf{Parameter} & \textbf{Value} & \textbf{Parameter} & \textbf{Value} & \textbf{Parameter} & \textbf{Value} \\ \hline eta & 0.09 & tree\_method & gpu\_hist & sampling\_method & gradient\_based \\ \hline subsample & 0.2 & objective & binary:logistic & max\_depth & 5 \\ \hline max\_delta\_step & 5 & epochs & 200 & early\_stopping\_rounds & 10 \\ \hline \end{tabular} \caption{XGBoost parameters used by the four different models during the training phase.} \label{tab:xgboostparam} \end{table} \subsection{Results} The optimized constant-based model achieves better results than those obtained with gradient boosting. The results are reported in Table \ref{tab:leaderboard}. The reason why a constant-based method performs better than a gradient-boosted one lies in the way score and ranking are computed in the RecSys Challenge 2020. As presented by the winning team of the challenge, a computationally heavy, more complex and deeply parallel boosting model is able to exploit the characteristics of every row in the dataset, while our XGBoost model, fine-tuned on a subset of the dataset, does not. In this way, an optimized constant is able to generalize the classification procedure better, avoiding the overfitting that affects the gradient-boosted model, as described both in Figure \ref{fig:constant_performance} and in Table \ref{tab:leaderboard}. \begin{table*}[!ht] \begin{tabular}{@{}cccccccccc@{}} \toprule & & \multicolumn{2}{c}{Retweet} & \multicolumn{2}{c}{Reply} & \multicolumn{2}{c}{Like} & \multicolumn{2}{c}{RwC} \\ \midrule Model & Dataset & PRAUC & RCE & PRAUC & RCE & PRAUC & RCE & PRAUC & RCE \\ \midrule \multirow{3}{*}{CTR-based} & \makecell{Final Leaderboard\\ (Test set)} & 0.5516 & -0.03 & 0.5135 & -0.05 & 0.7131 & 0 & 0.5037 & 0\\ & \makecell{Public Leaderboard\\ (Validation set)} & 0.5516 & -0.0315 & 0.5135 & -0.0476 & 0.7133 & -0.0008 & 0.5037 & -0.0045 \\ & Local test set & 0.554 & -0.003 & 0.51 & -0.002 & 0.72 & -0.01 & 0.503 & -0.001 \\ \midrule \multirow{2}{*}{XGBoost} & \makecell{Public Leaderboard\\ (Validation set)} & 0.41 & 6.68 & 0.10 & -3.23 & 0.66 & -32.01 & 0.04 & -1.67\\ & Local test set & 0.710 & 46.85 & 0.626 & 35.24 & 0.830 & 32.56 & 0.586 & -8.916 \\ \bottomrule \end{tabular} \caption{Results of the two described models in different contexts: while the CTR constant maintains almost the same score, the XGBoost model loses effectiveness as the time difference from the training set period increases.} \label{tab:leaderboard} \end{table*}
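For reference, the per-engagement training procedure summarized in Table \ref{tab:xgboostparam} can be sketched as follows. This is a minimal illustration rather than the exact challenge code; the feature matrices \texttt{X\_train}, \texttt{y\_train}, \texttt{X\_val} and \texttt{y\_val} are assumed to be pre-computed from the features described above:
\begin{verbatim}
import xgboost as xgb

PARAMS = {"eta": 0.09, "max_depth": 5, "max_delta_step": 5,
          "subsample": 0.2, "sampling_method": "gradient_based",
          "tree_method": "gpu_hist", "objective": "binary:logistic"}

def train_engagement_model(X_train, y_train, X_val, y_val):
    """One binary classifier per engagement class.

    Early stopping monitors the validation log-loss, which is
    equivalent to stopping on RCE, since RCE is a rescaled
    cross entropy.
    """
    dtrain = xgb.DMatrix(X_train, label=y_train)
    dval = xgb.DMatrix(X_val, label=y_val)
    return xgb.train(PARAMS, dtrain, num_boost_round=200,
                     evals=[(dval, "val")], early_stopping_rounds=10)
\end{verbatim}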
\section{Final Consideration} The CTR-based method addresses the issues related to the unbalanced classes and pseudo-negatives as described in Section \ref{sec:intro}. However, the literature on recommender systems \cite{quadrana,dacremadeep,recissues,netflixrec,reccase} highlights some issues that are intrinsically related to the problems addressed in this challenge and, therefore, observable in the dataset provided. \textit{Short term trends}: trends that tend to change or disappear quickly due to the rapid evolution of individual and community preferences. \textit{Cold start problem}: when new users enter the system, their preferences cannot be predicted. \textit{Gray sheep}: problematic users that cannot be traced or predictably aligned to any trend, so suggestions based on current trends are not an effective solution. \textit{Real-time analysis}: real-time data have to be collected to perform analysis of unexpected events (e.g., earthquakes, pandemics), together with a continuous update of the model with real-time information. \textit{Context-Awareness, Privacy, and Sparsity}: users' short and long term history and unnoticeable context-related information may not be retrievable~\cite{recissues}. Despite the huge size of the data typically collected, most users are occasional or not inclined to interact, both for privacy concerns and to avoid unwanted exposure of information. These aspects lead to a sparse characterization matrix, thus resulting in less accurate recommendations. \textit{Baseline metrics}: the available evaluation metrics are designed for general-purpose recommender systems and are not always applicable in different domains, especially when evaluating context-aware ones. Metrics used in common machine learning approaches do not always lead to well suited recommendations~\cite{dacremadeep}. The constant preserves its performance across training, validation and test sets, while the XGBoost model gets worse. A model can be ineffective if it is not able to capture time- and event-independent features. This, along with the above issues, could be a probable cause of the considerable variation of the challenge leaderboard from the validation to the final test phase. In fact, as reported\footnote{https://recsys-twitter.com}, the entire dataset was produced in two different weeks. Based on our observations of the results on the validation set, we concluded that an optimized constant performs better. This intuition turned out to be successful in the test phase: indeed, the POLINKS solution ranked sixth at the end of the RecSys Challenge 2020. \begin{acks} This research was supported by FITEC S.r.l., LINKS Foundation, Politecnico di Torino and Istituto Italiano di Tecnologia. Computational resources were provided by HPC@POLITO\footnote{http://www.hpc.polito.it}, a project of Academic Computing within the Department of Control and Computer Engineering at the Politecnico di Torino. \end{acks}
\section{Preliminaries} The class of differentially closed fields of characteristic zero with a generic automorphism is elementary; we denote it {\it DCFA}. Our aim in this paper is to study definable groups in models of {\it DCFA}: in section~\ref{sec:embed} we prove that a definable group in a model of {\it DCFA} embeds in an algebraic group. In section~\ref{sec:stable} we prove that questions about 1-basedness, stability and stable embeddability in {\it DCFA} can be reduced to questions about 1-basedness, stability and stable embeddability in either {\it DCF} or {\it ACFA}. We use this in section~\ref{sec:abelian} to study the model theory of definable abelian groups. We now give a brief summary of what we know about {\it DCFA}. Since we will work in difference, differential and difference-differential fields, we denote the respective languages by $\mathcal{L}_{\sigma}$, $\mathcal{L}_{D}$ and $\mathcal{L}_{\sigma, D}$. In \cite{rbdcfa1} we give an axiomatisation of {\it DCFA} and prove its main properties: a model of {\it DCFA} is of course a differentially closed field (model of {\it DCF}) and an algebraically closed field with a generic automorphism (model of {\it ACFA}). Independence is defined by linear disjointness. This theory is not complete, but its completions are easily described; those completions eliminate imaginaries (moreover, they satisfy the Independence Theorem over algebraically closed sets) and thus are supersimple, and types are ranked by the $SU$-rank. Forking is determined by quantifier-free formulas, thus {\it DCFA} is quantifier-free $\omega$-stable. A basis theorem for (perfect) difference-differential ideals implies that in a model of {\it DCFA} the difference-differential Zariski topology (defined in analogy with the Zariski topology in algebraically closed fields) is Noetherian. Let $(K, \sigma, D)$ be a model of {\it DCFA}; there are two important definable subfields of $K$: the field of constants ${\mathcal C}=\{x \in K: Dx=0\}$ and the fixed field $Fix(\sigma)=\{x \in K: \sigma(x)=x\}$. Given $a \in K$ and $A \subseteq K$, we define the $(\sigma,D)$-transcendence degree of $a$ over $A$ as the transcendence degree of the difference-differential field generated by $A$ and $a$ over $A$. In the cases of {\it DCF} and {\it ACFA}, the finiteness of such a degree is equivalent to the finiteness of the rank of $a$ over $A$. However, this does not hold for {\it DCFA}: in \cite{rbrank1} we give an example of a set whose generic type has infinite $(\sigma,D)$-transcendence degree but $SU$-rank 1. This represents a difficulty in the treatment of definable groups, so we shall try different ways to describe definable groups, starting from properties of groups definable in differential and difference fields. In \cite{rbjets} and \cite{rbarcs01} we proved that Zilber's dichotomy holds for {\it DCFA}: a type of $SU$-rank 1 either has a simple geometry (it is 1-based) or has a strong interaction with (is non-orthogonal to) $Fix(\sigma) \cap {\mathcal C}$. We now introduce some definitions and useful facts about definable groups in supersimple theories. Let $T$ be a supersimple theory, $M$ a saturated model of $T$, let $G$ be a type-definable (definable by an infinite number of formulas) group and let $A \subset M$ be a set of parameters. \begin{defi} Let $p \in S(A)$.
We say that $p$ is a left generic type of $G$ over $A$ if it is realized in $G$ and for every $a \in G$ and $b$ realizing $p$ such that $a \,\raise.2em\hbox{$\,\mathrel|\kern-.9em\lower.35em\hbox{$\smile$}$}_A b $, we have $b \cdot a \,\raise.2em\hbox{$\,\mathrel|\kern-.9em\lower.35em\hbox{$\smile$}$}_A a$. \end{defi} The following result is proved in \cite{simpg}: \begin{fac} \begin{enumerate} \item Let $a,b \in G $. If $tp(a/Ab) $ is a left generic of $G$, then so is $tp(b \cdot a/Ab ) $. \item Let $p \in S(A) $ be realized in $G$, $B=acl(B) \supset A$, and $q \in S(B) $ a non-forking extension of $p$. Then $p$ is a generic of $G$ if and only if $q$ is a generic of $G$. \item Let $tp(a/A) $ be a generic of $G$; then so is $tp(a^{-1}/A)$. \item There exists a generic type of $G$. \item A type is left generic if and only if it is right generic. \end{enumerate} \end{fac} The following fact is proved in \cite{wag}, chapter 5. \begin{fac}\label{GIII5} Let $H$ be a type-definable subgroup of $G$. \begin{enumerate} \item Let $p \in S(A) $; then $ p$ is a generic of $G$ over $A$ if and only if $SU(G)=SU(p) $. \item $SU(G)=SU(H)$ if and only if $[G:H]< \infty $. \item $SU(H)+SU(G/H) \leq SU(G) \leq SU(H) \oplus SU(G/H) . $ \end{enumerate} \end{fac} \section{Every Definable Group Embeds in an Algebraic Group}\label{sec:embed} We introduce $*$-definable groups in stable theories. Suppose that $T$ is a complete theory and $M$ a saturated model of $T$. A $*$-tuple is a tuple $(a_i)_{i \in I}$, where $I$ is an index set of cardinality less than the cardinality of $M$, and $a_i \in M^{eq}$ for all $i \in I$. Let $A \subset M$. A $*$-definable set is a collection of $*$-tuples, indexed by the same index set $I$, which is the set of realizations of a partial type $p(x_i)_{i \in I}$ over $A$. A $*$-definable group is a group with $*$-definable domain and multiplication. The following propositions are proved in \cite{kopi}. Recall that the canonical base of a strong type $p$, denoted $Cb(p)$, is the set that is fixed pointwise by the automorphisms that fix $p$. \begin{prop}\label{GIII1} Let $T$ be a stable theory and $M$ a saturated model of $T$. Let $a,b,c,x,y,z$ be $*$-tuples of $M$ of length strictly less than the cardinality of $M$, such that: \begin{enumerate} \item $acl(M,a,b)=acl(M,a,c) = acl(M,b,c) $ \item $acl(M,a,x) = acl(M,a,y)$ and $Cb(stp(x,y/M,a)) $ is interalgebraic with $a$ over $M $. \item As in 2. with $b,z,y$ in place of $a,x,y $. \item As in 2. with $c,z,x$ in place of $a,x,y $. \item Other than $\{a,b,c \}, \{a,x,y \}, \{b,z,y \},\{c,z,x \} $, any 3-element subset of $\{a,b,c,x,y,z \} $ is independent over $M$. \end{enumerate} Then there is a $*$-definable group $H$ defined over $M$ and $a',b',c' \in H $ generic independent over $M$ such that $a$ is interalgebraic with $a'$ over $M$, $b$ is interalgebraic with $b'$ over $M$ and $c$ is interalgebraic with $c'$ over $M$. \end{prop} \begin{prop}\label{GIII2} Let $T$ be a simple theory and $M$ a saturated model of $T$. Let $G,H$ be type-definable groups, defined over $K \prec M$, and let $a,b,c \in G $ and $a',b',c' \in H $ be such that \begin{enumerate} \item $a,b$ are generic independent over $M$. \item $a \cdot b = c $ and $a' \cdot b' =c' $.
\item $a$ is interalgebraic with $a'$ over $M$, $b$ is interalgebraic with $b'$ over $M$ and $c$ is interalgebraic with $c'$ over $M$. \end{enumerate} Then there is a type-definable over $M$ subgroup $G_1$ of bounded index in $G$, a type-definable over $M$ subgroup $H_1$ of $H$, and a type-definable over $M$ isomorphism $f$ between $G_1/N_1 $ and $H_1 / N_2$, where $N_1 $ and $N_2$ are finite normal subgroups of $G_1 $ and $H_1$ respectively. \end{prop} \begin{rem} If $T$ in \ref{GIII2} is supersimple and $G,H$ are definable, then we can choose $G_1$ definable of finite index in $G$ and $f$ definable. \end{rem} The following result is proved in \cite{Hru}: \begin{prop}\label{GIII3} Let $G$ be a $*$-definable group in a stable structure. Then there is a projective system of definable groups with inverse limit $G'$, and a $*$-definable isomorphism between $G$ and $G'$. \end{prop} In \cite{difgp} the author proved that an ${\mathcal L}_D$-definable (definable in the language of differential fields) group in {\it DCF} is essentially a differential algebraic group and that a definable group in {\it DCF} virtually embeds in an algebraic group. So, to prove that a definable group in {\it DCFA} embeds in an algebraic group, we will show that it embeds in an ${\mathcal L}_D$-definable group. \begin{teo}\label{GIII6} Let $({\mathcal U},\sigma,D)$ be a model of {\it DCFA}, $K \prec {\mathcal U} $ and $G$ a $K$-definable group. Then there is an ${\mathcal L}_D$-definable group $H$, a definable subgroup $G_1$ of $G$ of finite index, and a definable isomorphism between $G_1/N_1$ and $H_1 / N_2$, where $H_1$ is a definable subgroup of $H(\mathcal{U})$, $N_1$ is a finite normal subgroup of $G_1$, and $N_2$ is a finite normal subgroup of $H_1$. \end{teo} {\it Proof:}\\ Let $a,b,y $ be generic independent elements of $G$ over $K$. Let $x = a \cdot y, z = b^{-1} \cdot y, c = a \cdot b $, so $x = c \cdot z $. Let $\bar{a}=(\sigma^j (a): j \in {\mathbb Z}) $, and similarly for $\bar{b}, \bar{c}, \bar{x},\bar{y},\bar{z} $. Then, as the model-theoretic algebraic closure of a set is the differential-field-theoretic algebraic closure of the set closed under $\sigma$, working in {\it DCF}, the tuples $\bar{a},\bar{b},\bar{c},\bar{x},\bar{y},\bar{z} $ satisfy the conditions of \ref{GIII1}. Thus there is a $*$-${\mathcal L}_D$-definable group $H$ over $K$, and generic $K$-independent elements $a^*,b^*, c^* \in H $ such that $\bar{a} $ is interalgebraic with $a^* $ over $K$, $\bar{b} $ is interalgebraic with $b^* $ over $K$, $\bar{c} $ is interalgebraic with $c^* $ over $K$ and $ c^*= a^* \cdot b^* $ (the interalgebraicity, independence and generics in the sense of {\it DCF}). Since {\it DCF} is $\omega$-stable, by \ref{GIII3}, $H$ is the inverse limit of groups $H_i$, $i \in \omega $, where the $H_i$ are ${\mathcal L}_D$-definable groups. Let $\pi_i : H \longrightarrow H_i $ be the $i$-th canonical epimorphism. Let $a_i = \pi_i (a^*) $, $b_i = \pi_i (b^*) $ and $c_i = \pi_i (c^*) $. Then $a^* $ is interalgebraic with $(a_i)_{i \in \omega } $ over $K$, $b^* $ is interalgebraic with $(b_i)_{i \in \omega} $ over $K$ and $c^* $ is interalgebraic with $(c_i)_{i \in \omega} $ over $K$, all interalgebraicities in the sense of {\it DCF}. Since for $i < j$, $a_i \in K(a_j)$, $b_i \in K(b_j)$ and $c_i \in K(c_j)$, there is $i \in \omega$ such that $a$ is interalgebraic with $a_i$ over $K$, $b$ is interalgebraic with $b_i$ over $K$ and $c$ is interalgebraic with $c_i$ over $K$ in the sense of {\it DCFA}.
So we can apply \ref{GIII2} to $a,b,c \in G$ and $a_i,b_i,c_i \in H_i$.\\ $\Box$ \begin{cor} Let $G$ be a definable group. Then there is an algebraic group $H$, a definable subgroup $G_1$ of $G$ of finite index, and a definable isomorphism between $G_1/N_1$ and $H_1 / N_2$, where $H_1$ is a definable subgroup of $H(\mathcal{U})$, $N_1$ is a finite normal subgroup of $G_1$, and $N_2$ is a finite normal subgroup of $H_1$. \end{cor} \section{Stability, Stable Embeddability and 1-basedness}\label{sec:stable} In this section we discuss how to apply results from \cite{salinas} to obtain similar results in models of {\it DCFA}. We also give a criterion for 1-basedness in {\it DCFA}. We begin with general definitions and facts on supersimple theories. $T$ will denote a supersimple theory which eliminates imaginaries. Let $M$ be a saturated model of $T$. Let us recall that two types $p,q$ over $A \subseteq M$ are orthogonal, denoted $p \perp q$, if for every set $B \supseteq A$ and all realisations $a, b$ of $p$ and $q$ respectively, $a \,\raise.2em\hbox{$\,\mathrel|\kern-.9em\lower.35em\hbox{$\smile$}$}_B b$. \begin{defi} \begin{enumerate} \item Let $A \subset M$ and let $S$ be an $(\infty)$-definable set over $A$. We say that $S$ is 1-based if for every tuple $a$ of $S$ and every $B \supseteq A$, $a$ and $B$ are independent over $acl(Aa) \cap acl(B)$. \item A type is 1-based if the set of its realizations is 1-based. \end{enumerate} \end{defi} The following useful result is proved in \cite{wagbase}. \begin{prop}\label{wgr} \begin{enumerate} \item The union of $1$-based sets is $1$-based. \item If $tp(a/A)$ and $tp(b/Aa)$ are $1$-based, so is $tp(a,b/A)$. \end{enumerate} \end{prop} We introduce now stable, stably embedded types (also called fully stable types). \begin{defi} A (partial) type $p$ over a set $A$ is stable, stably embedded if whenever $a$ realizes $p$ and $B\supset A$, then $tp(a/B)$ is definable. Equivalently, let $P$ denote the set of realizations of $p$. Then $p$ is stable, stably embedded if and only if for every definable set $S$ and every $n$ there is a set $S'$, definable with parameters from $P$, such that $S'\cap P^n=S\cap P^n$. \end{defi} The following result is proved in the Appendix of \cite{salinas}: \begin{lem}\label{st0} If $tp(b/A)$ and $tp(a/Ab)$ are stable, stably embedded, so is $tp(a,b/A)$. \end{lem} \begin{rem}\label{sserem} In \cite{salinas}, a certain property of models of {\it ACFA} (called superficial stability) is isolated, which guarantees that certain types over algebraically closed sets are stationary, and therefore definable. It follows from model-theoretic considerations that if for every algebraically closed set $B$ containing $A$, $tp(a/B)$ is stationary, then $tp(a/A)$ is stable, stably embedded. \end{rem} \begin{lem}\label{lemsse} Let $(K,\sigma)$ be a model of {\it ACFA}, $A=acl_\sigma(A)\subset K$ and $a\in K$, where $acl_\sigma$ denotes the model-theoretic algebraic closure in {\it ACFA}. Then $tp(a/A)$ is stationary if and only if $tp(a/A)\perp (\sigma(x)=x)$. \end{lem} {\it Proof}:\\ Indeed, write $SU(a/A)=\omega k+n$, and let $b\in acl_\sigma(Aa)$ be such that $SU(b/A)=n$. Then $tp(b/A)\perp (\sigma(x)=x)$ and, by Theorem 4.11 of [2], $tp(acl_\sigma(Ab)/A)$ is stationary. If $c\in acl_\sigma(Aa)$ satisfies some non-trivial difference equation over $acl_\sigma(Ab)$ then $SU(c/Ab)<\omega$ and therefore $c\in acl_\sigma(Ab)$. Hence, by Theorem 5.3 of [3], $tp(a/acl_\sigma(Ab))$ is stationary, and therefore so is $tp(a/A)$.
For the converse, suppose that $tp(a/A)\not\perp (\sigma(x)=x)$. Then there are independent realizations $a_1,\cdots,a_n$ of $tp(a/A)$, and elements $b_1,\cdots,b_m\in Fix(\sigma)$ such that $(a_1,\cdots,a_n)$ and $(b_1,\cdots,b_m)$ are not independent over $A$. Looking at the field of definition of the algebraic locus of $(b_1,\cdots,b_m)$ over $acl_\sigma(A,a_1,\cdots,a_n)$, there is some $b\in Fix( \sigma)\cap acl_\sigma(A,a_1,\cdots,a_n)$, $b \not\in A$. Then $tp(b/A)$ is not stationary: if $c \in Fix(\sigma)$ is independent from $b$ over $A$, then $tp(b/A)$ has two distinct non-forking extensions to $Ac$, one in which $\sqrt{b+c}\in Fix(\sigma)$, the other in which $\sqrt{b+c}\not\in Fix(\sigma)$. Hence $tp(a_1,\cdots,a_n/A)$ is not stationary, and neither is $tp(a/A)$.\\ $\Box$ It is important to note that stationarity alone does not imply stability: if $a$ is transformally transcendental over $A=acl_\sigma(A)$ (that is, $a$ is not a root of a non-zero $\sigma$-polynomial over $A$), then $tp_{ACFA}(a/A)$ is stationary, but it is not stable. These results can be used to give sufficient conditions for types in {\it DCFA} to be stationary, and stable, stably embedded. \begin{prop}\label{st1} Let $(K,\sigma,D)$ be a model of DCFA, let $A=acl(A)\subset K$, and $a$ a tuple in $K$. \begin{enumerate} \item Assume that $tp_{ACFA}(a,Da,D^2a,\cdots/A)\perp (\sigma(x)=x)$. Then $tp(a/A)$ is stationary.\label{st1_1} \item Assume that for every $n$, every extension of $tp_{ACFA}(D^na/Aa\cdots D^{n-1}a)$ is orthogonal to $(\sigma(x)=x)$. Then $tp(a/A)$ is stable, stably embedded. It is also 1-based. \label{st1_2} \item If $tp(a/A)$ has an extension that is not orthogonal to $(\sigma(x)=x)$, then $tp(a/A)$ is not stable, stably embedded. \end{enumerate} \end{prop} {\it Proof}:\\ 1. As $tp_{ACFA}(a,Da,D^2a,\cdots/A)\perp (\sigma(x)=x)$, \ref{lemsse} implies that $tp_{ACFA}(a,Da,D^2a,\cdots/A)$ is stationary. Since $tp(a/A)$ is determined by $tp_{ACFA}(a,Da,D^2a,\cdots/A)$, $tp(a/A)$ is stationary: let $b,c$ be two realizations of non-forking extensions of $tp(a/A)$ to a set $B=acl(B) \supset A$. As $tp_{ACFA}(a,Da,D^2a,\cdots/A)$ is stationary, we have $tp_{ACFA}(b,Db,D^2b,\cdots/B) =tp_{ACFA}(c,Dc,D^2c,\cdots/B)$. If $\varphi(x)$ is an $\mathcal{L}_{\sigma,D}(B)$-formula satisfied by $b$, then there is an $\mathcal{L}_{\sigma}(B)$-formula $\psi(x_0,\cdots,x_k)$ such that $\varphi(x)$ is equivalent to $\psi(x,Dx,\cdots,D^kx)$; hence $\psi(x_0,\cdots,x_k)$ belongs to $tp_{ACFA}(b,Db,D^2b,\cdots/B) =tp_{ACFA}(c,Dc,D^2c,\cdots/B)$, and $\varphi(x)$ is also satisfied by $c$. This implies that $tp(b/B)=tp(c/B)$, and thus $tp(a/A)$ is stationary.\\ 2. By \ref{lemsse}, for all $n \in \mathbb{N}$ and for all $B \supset A$, $tp_{ACFA}(D^na/Ba\cdots D^{n-1}a)$ is stationary. Thus, by \ref{sserem}, for all $n$, $tp_{ACFA}(D^na/Aa\cdots D^{n-1}a)$ is stable, stably embedded and 1-based. By \ref{st0}, stability and stable embeddability are preserved under such extensions, hence $tp_{ACFA}(a,Da,\cdots/A)$ is stable, stably embedded, and this implies that all its extensions to algebraically closed sets are stationary. As above, we deduce that all extensions of $tp(a/A)$ to algebraically closed sets are stationary, hence $tp(a/A)$ is stable, stably embedded. By \ref{wgr} we also have that $tp_{ACFA}(a,Da,\cdots/A)$ is 1-based. As $tp(a/A)$ is determined by $tp_{ACFA}(a,Da,D^2a,\cdots/A)$, $tp(a/A)$ is 1-based. 3. If $tp(a/A)$ has an extension that is not orthogonal to $(\sigma(x)=x)$, then there is $B=acl(B) \supset A$ such that $tp(a/B) \not \perp (\sigma(x)=x)$.
Then there are independent realizations $a_1,\cdots,a_n$ of $tp(a/B)$, and elements $b_1,\cdots,b_m\in Fix(\sigma)$ such that $(a_1,\cdots,a_n)$ and $(b_1,\cdots,b_m)$ are not independent over $B$. If we look at the field of definition of the algebraic locus of $(b_1,\cdots,b_m)$ over $acl(B,a_1,\cdots,a_n)$, we can find $b\in Fix (\sigma)\cap acl(B,a_1,\cdots,a_n)$, $b \not\in B$. Then $tp(b/B)$ is not stationary: let $c \in Fix(\sigma)$ be independent from $b$ over $B$; then $tp(b/B)$ has two distinct non-forking extensions to $Bc$, one in which $\sqrt{b+c}\in Fix(\sigma)$, the other in which $\sqrt{b+c}\not\in Fix(\sigma)$. Hence $tp(a_1,\cdots,a_n/B)$ is not stationary, and neither is $tp(a/B)$; thus $tp(a/A)$ is not stable, stably embedded.\\ $\Box$ \begin{rem}\label{st2} Let $A,K$ and $a$ be as above. \begin{enumerate} \item If $SU(a/A)=1$, then the stationarity of $tp(a/A)$ implies its stability and stable embeddability. \item There are examples of types of $SU$-rank $1$ which satisfy \ref{st1}(\ref{st1_1}) above but do not satisfy \ref{st1}(\ref{st1_2}). Thus condition \ref{st1}(\ref{st1_2}) is not implied by stationarity. \end{enumerate} \end{rem} \begin{cor}\label{st3} Let $A=acl(A)$, and $a$ a tuple in ${\mathcal C}$. Then $tp(a/A)$ is stable, stably embedded if and only if $tp_{ACFA}(a/A)$ is stable, stably embedded. In this case, it will also be $1$-based. \end{cor} \begin{prop}\label{st4} Let $A=acl(A)\subset K$, and $a$ a tuple in $K$, with $SU(a/A)=1$. If $tp_{ACFA}(a/A)\perp (\sigma(x)=x)$ then $tp(a/A)$ is stable, stably embedded. In particular, if $tp_{ACFA}(a/A)$ is stable, stably embedded, then so is $tp(a/A)$. \end{prop} {\it Proof}:\\ Suppose that $tp(a/A)$ is not stable, stably embedded; then there is $B=acl(B)\supset A$ such that $tp(a/B)$ is not stationary, and therefore $tp_{ACFA}(a,Da,D^2a,\ldots /B)$ is not stationary. By \ref{st1}, $tp_{ACFA}(a,Da,D^2a,\ldots /A) \not\perp (\sigma(x)=x)$. Hence, there is some algebraically closed difference field $L$ containing $A$, which is linearly disjoint from $acl(Aa)$ over $A$, and an element $b\in Fix(\sigma)\cap (Lacl(Aa))^{alg}$, $b \not\in L$. Looking at the coefficients of the minimal polynomial of $b$ over $Lacl(Aa)$, we may assume that $b\in Lacl(Aa)$. Let $M=acl(L)$, and choose $(M',L')$ realizing $tp(M,L/A)$ and independent from $a$ over $A$. Then $qftp_{ACFA}(L'/Aa)=qftp_{ACFA}(L/Aa)$ and there is $b'\in L'acl(Aa)$ such that $\sigma(b')=b'$. Since $SU(a/L')=1$, we get $a\in acl(L'b')=L'(b')_D^{alg}$. This implies that $tp_{ACFA}(a/L')\not\perp (\sigma(x)=x)$, and gives us a contradiction.\\ $\Box$ \begin{rem}\label{st5} As stated, the result of \ref{st4} is false if one only assumes $SU(a/A)<\omega$. The correct formulation in that case is as follows: assume $SU(a/A)<\omega$ and that $acl_{\sigma}(Aa)$ contains a sequence $a_1,\cdots,a_n$ of tuples such that, for all $i\leq n$, working in {\it DCFA}, $SU(a_i/Aa_1,\cdots,a_{i-1})=1$. Under these hypotheses, if $tp_{ACFA}(a/A)$ is stable, stably embedded then so is $tp(a/A)$. \end{rem} The proof of the following lemma is analogous to the last statement in the proof of \ref{st1}(\ref{st1_2}). \begin{lem}\label{dcfdcfa} Let $a$ be a tuple of a model of {\it DCFA}, and $A$ a subset of that model. If $tp_{DCF}(a/A)$ is 1-based then $tp(a/A)$ is 1-based. \end{lem} Lemmas 2 and 3 of \cite{salinas} and \ref{wgr} imply the following criterion for 1-basedness, stability and stable embeddability of groups.
\begin{teo} \label{wgrsec} Let $1 \longrightarrow G_1 \longrightarrow G_2 \longrightarrow G_3 \longrightarrow 1$ be a short exact sequence of definable groups in a simple theory. Then $G_2$ is stable, stably embedded (resp. 1-based) if and only if $G_1$ and $G_3$ are stable, stably embedded (resp. 1-based). \end{teo} \section{Abelian Groups}\label{sec:abelian} In this section, we study abelian groups defined over some subset $K=acl(K)$ of a model $({\mathcal U},\sigma,D)$ of {\it DCFA}. We investigate whether they are $1$-based, and whether they are stable, stably embedded. By 4.3 of \cite{rbrank1}, \ref{GIII6} and \ref{wgrsec}, this study may be reduced to the case when the group $H$ is a quantifier-free definable subgroup of some commutative algebraic group $G$, and $G$ has no proper infinite algebraic subgroup, i.e. $G$ is either $\mathbb G_a$, $\mathbb G_m$, or a simple abelian variety $A$. \\\\ {\bf From now on we suppose that all the groups are quantifier-free definable}.\\ We now study all three cases for $G$.\\\\ {\bf The additive group} \begin{prop}\label{adgr} No infinite definable subgroup of $\mathbb G_a^n(\mathcal{U})$ is $1$-based. \end{prop} {\it Proof}:\\ Let $H<\mathbb G_a^n$ be an infinite definable group. By 4.4 of \cite{rbrank1}, $H$ is quantifier-free definable and contains a definable subgroup $H_0$ which is definably isomorphic to $Fix (\sigma) \cap {\mathcal C}$. Hence $H$ is not $1$-based.\\ $\Box$\\\\ {\bf The multiplicative group}\\ The logarithmic derivative $lD:\mathbb G_m \to \mathbb G_a$, $x\mapsto Dx/x$, is a group epimorphism with $Ker(lD)=\mathbb G_m(\mathcal{C})$ (see \cite{manin}). Given a polynomial $P(T)= \sum_{i=0}^na_iT^i\in {\mathbb Z}[T]$, we denote by $P(\sigma)$ the homomorphism defined by $x \mapsto \sum_{i=0}^na_i\sigma^i(x)$. \begin{prop} Let $H$ be a quantifier-free ${\mathcal L}_{\sigma,D}$-definable subgroup of $\mathbb G_m$. If $lD(H) \neq 0$ then $H$ is not $1$-based. If $lD(H)=0$ then there is a polynomial $P(T)$ such that $H=Ker(P(\sigma))$; in this case, $H$ is $1$-based if and only if $P(T)$ is relatively prime to all cyclotomic polynomials $T^m-1$, $m\in \mathbb{N}$. \end{prop} {\it Proof}: \\ By \ref{adgr}, if $lD(H) \neq 0$ then $H$ is not $1$-based. If $lD(H)=0$ then, as $Ker(lD)=\mathbb G_m(\mathcal{C})$, $H$ is ${\mathcal L}_{\sigma}$-definable in $\mathcal{C}$. Hence there is a polynomial $P(T)=\sum_{i=0}^na_iT^i \in \mathbb{Z}[T]$ such that $H$ is defined by $\prod_{i=0}^n\sigma^i(X^{a_i})=1$. In {\it ACFA}, $H$ is 1-based, stable, stably embedded if and only if $P(T)$ is relatively prime to all cyclotomic polynomials $T^m-1$ for $m \geq 1$ (see \cite{HMM}). By \ref{st1} the same holds for {\it DCFA}.\\ $\Box$\\\\ {\bf Abelian varieties}\\ \begin{defi}\label{ab1} An abelian variety is a connected algebraic group $A$ which is complete, that is, for any variety $V$ the projection $\pi:A \times V \to V$ is a closed map. \end{defi} As a consequence of the definition, an abelian variety is commutative. Let $B$ be an algebraic subgroup of an abelian variety $A$. Then $A/B$ is an abelian variety; if in addition $B$ is connected, then $B$ is an abelian variety. An abelian variety is called simple if it has no infinite proper abelian subvarieties. Let $A$ and $B$ be two abelian varieties and let $f:A \to B$ be a homomorphism. We say that $f$ is an isogeny if $f$ is surjective and $Ker(f)$ is finite. We say that $A$ and $B$ are isogenous if there are isogenies $f:A \to B$ and $g:B \to A$.
\begin{prop}\label{ab5}\textup{({\it ACF}, \cite{langabelian})} There is no nontrivial algebraic homomorphism from a vector group into an abelian variety. \end{prop} Now we mention some properties concerning 1-basedness of abelian varieties in difference and differential fields. Consider a saturated model $({\mathcal U},\sigma)$ of {\it ACFA}. In \cite{HMM}, Hrushovski gives a full description of definable subgroups of $A({\mathcal U})$ when $A$ is a simple abelian variety defined over ${\mathcal U}$. When $A$ is defined over $Fix(\sigma)$, this description is particularly simple, at least up to commensurability. Let $R=End(A)$ (the ring of algebraic endomorphisms of $A$). If $P(T)=\sum_{i=0}^n e_iT^i\in R[T]$, define $Ker(P(\sigma))=\{a\in A({\mathcal U})\mid \sum_{i=0}^n e_i(\sigma^i(a))=0\}$. \begin{prop} \textup{({\it ACFA}, \cite{HMM})}\label{ab2} Let $A$ be a simple abelian variety defined over $\mathcal U$, and let $B$ be a definable subgroup of $A({\mathcal U})$ of finite $SU$-rank. \begin{enumerate} \item If $A$ is not isomorphic to an abelian variety defined over $(Fix(\sigma))^{alg}$, then $B$ is $1$-based and stable, stably embedded. \item Assume that $A$ is defined over $Fix(\sigma)$. Then there is $P(T)\in R[T]$ such that $B\cap Ker(P(\sigma))$ has finite index in $B$ and in $Ker(P(\sigma))$. In this case, $B$ is $1$-based if and only if the polynomial $P(T)$ is relatively prime to all cyclotomic polynomials $T^m-1$, $m\in\mathbb{N}$. If $B$ is $1$-based, then it is also stable, stably embedded. \end{enumerate} \end{prop} We now work in a saturated model $({\mathcal U},D)$ of {\it DCF}. The following is proved in \cite{manin}. \begin{prop}\label{ab6} Let $A$ be an abelian variety. Then there is an ${\mathcal L}_D$-definable (canonical) homomorphism $\mu: A \to \mathbb G_a^n$, for $n=dim(A)$, such that $Ker(\mu)$ has finite Morley rank (a generalization of the notion of algebraic dimension). \end{prop} $Ker(\mu)$ is known as the Manin kernel of $A$; we denote it by $A^{\sharp}$. \begin{prop}\textup{(Properties of the Manin Kernel, see \cite{manin} for the proofs)}\label{propmanin}\\ Let $A$ and $B$ be abelian varieties. Then \begin{enumerate} \item $A^{\sharp}$ is the Kolchin closure of the torsion subgroup $Tor(A)$ of $A$. \item $(A\times B)^{\sharp}=A^{\sharp}\times B^{\sharp}$, and if $B<A$ then $B\cap A^\#=B^\#$. \item A differential isogeny between $A^{\sharp}$ and $B^{\sharp}$ is the restriction of an algebraic isogeny from $A$ to $B$. \end{enumerate} \end{prop} We say that an abelian variety descends to the constants if it is isomorphic to an abelian variety defined over the constants. \begin{prop}\label{ab8}\textup{({\it DCF}, see \cite{manin})} Let $A$ be a simple abelian variety. If $A$ is defined over ${\mathcal C}$, then $A^{\sharp}=A({\mathcal C})$. If $A$ does not descend to the constants, then $A^{\sharp}$ is strongly minimal and 1-based. \end{prop} We now return to {\it DCFA} and fix a saturated model $({\mathcal U},\sigma,D)$ of {\it DCFA} and a simple abelian variety $A$ defined over $K=acl(K) \subset \mathcal{U}$. Let $H$ be an ${\mathcal L}_{\sigma,D}$-definable connected subgroup of $A$ defined over the difference-differential field $K$, and let $\tilde{H}$ be its $(\sigma,D)$-Zariski closure. Since $H$ is $1$-based if and only if $\tilde{H}$ is $1$-based (see 4.3 and 4.4 of \cite{rbrank1}), we can suppose that $H$ is quantifier-free definable and quantifier-free connected. Let $\mu:A \to \mathbb G_a^n$, with $n=dim(A)$, be as in \ref{ab6}.
If $H \not\subset Ker\mu$ then, by \ref{adgr}, $H$ is not $1$-based. Assume now that $H \subset A^{\sharp}$. We first show a very useful lemma. \begin{lem}\label{ab20} Let $H$ be a quantifier-free definable subgroup of $A^{\sharp}$ which is quantifier-free connected. Then $H=H'\cap A^{\sharp}$ for some quantifier-free ${\mathcal L}_\sigma$-definable subgroup $H'$ of $A$. \end{lem} {\it Proof}:\\ Our hypotheses imply that there is an integer $k$ and a differential subgroup $S$ of $A\times A^\sigma\times \cdots \times A^{\sigma^k}$ such that $H=\{a\in A^{\sharp}: (a,\sigma(a),\cdots,\sigma^k(a))\in S\}$. By \ref{propmanin}.2, replacing $S$ by its Zariski closure $\bar S$ we get $H=\{a\in A^{\sharp}: (a,\sigma(a),\cdots,\sigma^k(a))\in \bar S\}$. Thus $H=H'\cap A^{\sharp}$, with $H'=\{a\in A : (a,\sigma(a),\cdots,\sigma^k(a))\in \bar{S}\}$.\\ $\Box$ Let us state an immediate consequence of \ref{ab20}: \begin{cor}\label{ab9} If for all $k \in \mathbb{N}$, $A$ and $A^{\sigma^k}$ are not isogenous, then $SU(A^{\sharp})=1$. \end{cor} \ {\bf Case 1}: $A$ is isomorphic to a simple abelian variety $A'$ defined over $\mathcal{C}$. We can suppose that $A$ is defined over $\mathcal{C}$. Then, by \ref{ab8}, $A^{\sharp}=A(\mathcal{C})$. Hence, by \ref{st1}, $H$ is $1$-based for {\it DCFA} if and only if it is $1$-based for {\it ACFA}; and in that case, by \ref{st3}, it will also be stable, stably embedded. If $H=A(\mathcal{C})$ then we know that $H$ is not 1-based in {\it ACFA}. If $H$ is a proper subgroup of $A(\mathcal{C})$, \ref{ab2} gives a precise description of that case. \\ {\bf Case 2}: $A$ does not descend to $\mathcal{C}$. Then, by \cite{manin}, section 5, $A^{\sharp}$ is strongly minimal and 1-based for {\it DCF}. By \ref{dcfdcfa} it is 1-based for {\it DCFA}. We will now investigate when $H$ is stable, stably embedded. By $1$-basedness and quantifier-free $\omega$-stability, we know that if $X\subset A^{\sharp}$ is quantifier-free definable, then $X$ is a Boolean combination of cosets of quantifier-free definable subgroups of $A^{\sharp}$. Assume first that $H\neq A^{\sharp}$, and let $a$ be a generic of $H$ over $K$. Then $H$ is finite-dimensional, and therefore $SU(H)<\omega$. As $H$ is $1$-based, there is an increasing sequence of subgroups $H_i$ of $H$ with $SU(H_{i+1}/H_i)=1$. By \ref{ab20}, we may assume that $H_i=U_i\cap A^{\sharp}$ for some quantifier-free ${\mathcal L}_\sigma$-definable subgroups $U_i$ of $A$. Note that \ref{ab20} also implies that each quotient $U_{i+1}/U_i$ is $c$-minimal (i.e., all its quantifier-free ${\mathcal L}_\sigma$-definable subgroups are either finite or of finite index). Furthermore, by elimination of imaginaries in {\it ACFA}, $acl_{\sigma}(Ka)$ contains tuples $a_i$ coding the cosets $a+U_i$. Hence $tp(a/K)$ satisfies the conditions of \ref{st5}, and we obtain that if $tp_{ACFA}(a/K)$ is stable, stably embedded then so is $tp(a/K)$. For the other direction, observe that if $tp_{ACFA}(a/K)$ is not stable, stably embedded, then for some $i$ the generic {\it ACFA}-type of $U_{i+1}/U_i$ is non-orthogonal to $\sigma(x)=x$, and there is an ${\mathcal L}_\sigma$-definable morphism $\psi: U_{i+1}/U_i \to B(Fix (\sigma^k))$ with finite kernel, for some $k$ and some abelian variety $B$ (see \cite{HMM}). But, returning to {\it DCFA}, no non-algebraic type realized in $Fix( \sigma^k)$ can be stable, stably embedded, since for instance the formula $\varphi(x,y)= \exists z\, (z^2=x+y \ \land \ \sigma(z)=z)$ is not definable (\ref{st1}.3). This proves the other implication.
\smallskip Thus we have shown: \smallskip If $H$ is finite dimensional, then $tp(a/K)$ is stable, stably embedded if and only if $tp_{ACFA}(a/K)$ is stable, stably embedded. Using \ref{ab20}, \ref{ab2} gives us a full description of that case. In particular, we then have that if $H$ is not stable, stably embedded, then $A$ is isomorphic to an abelian variety defined over $Fix(\sigma^k)$ for some $k$. \smallskip Let us now assume that $H=A^{\sharp}$. Let $a$ be a generic of $H$ over $K$. Then, for every $m$, $tp_{ACFA}(a,\cdots,D^ma/K)$ is the generic type of an algebraic variety $V$, and is therefore stationary (by 2.11 of \cite{salinas}). Thus, using the finite dimensional case, if $A$ is not isomorphic to an abelian variety defined over $(Fix(\sigma))^{alg}$, then $H$ is stable, stably embedded. If $A$ is isomorphic to a variety $B$ defined over $Fix(\sigma^k)$, via an isomorphism $\psi$, then the subgroup $\psi^{-1}(Ker (\sigma^k -1))\cap A^{\sharp}$ is not stable, stably embedded. We summarize the results obtained: \begin{teo} \label{absum} Let $A$ be a simple abelian variety, and let $H$ be a quantifier-free definable subgroup of $A({\mathcal U})$ defined over $K=acl(K)$. If $H\not\subset A^{\sharp}({\mathcal U})$, then $H$ is not 1-based. Assume now that $H\subset A^{\sharp}({\mathcal U})$, and let $a$ be a generic of $H$ over $K$. Then \begin{enumerate} \item If $A$ is defined over the field $\mathcal C$ of constants, then $H$ is $1$-based if and only if it is stable, stably embedded, if and only if $tp_{ACFA}(a/K)$ is hereditarily orthogonal to $(\sigma(x)=x)$. The results in \cite{HMM} yield a complete description of the subgroups $H$ which are not $1$-based. \item If $A$ does not descend to the field $\mathcal C$ of constants, then $H$ is $1$-based. Moreover \begin{enumerate} \item If $A$ is not isomorphic to an abelian variety defined over $Fix(\sigma^k)$ for some $k$, then $H$ is stable, stably embedded. \item Assume that $A$ is defined over $Fix(\sigma)$. Then $H$ is stable, stably embedded if and only if $tp_{ACFA}(a/K)$ is stable, stably embedded. Again, the results in \cite{HMM} give a full description of this case. \end{enumerate} \end{enumerate} \end{teo}
\section{Introduction} Given a (possibly infinite) family of $r$-uniform hypergraphs $\mathcal{F}$, the \emph{Tur\'an number} or \emph{extremal number} of $\mathcal{F}$, denoted by $\mbox{ex}(n,\mathcal{F})$, is the maximum number of edges in an $r$-uniform hypergraph on $n$ vertices which does not contain a copy of any member of $\mathcal{F}$. The study of the extremal numbers of graphs and hypergraphs is one of the central topics in discrete mathematics, which goes back more than a hundred years to the works of Mantel \cite{Ma} in 1907 and Tur\'an \cite{Tu} in 1941. Instances of this problem also appear naturally in discrete geometry, additive number theory, probability, analysis, computer science and coding theory. For a general reference, we refer the reader to the surveys \cite{FS13,K11,MPS, Su}. Although this topic has been studied extensively, there are still many natural families of graphs and hypergraphs whose Tur\'an number is not well understood. In this paper, we make substantial progress on understanding the extremal number of such a family of hypergraphs. One of the most basic results in graph theory says that if $G$ is a graph on $n$ vertices which does not contain a cycle, then $G$ has at most $n-1$ edges, and this bound is best possible. While this simple fact has many different proofs, analogues of this result for uniform hypergraphs turn out to be more challenging. There are different notions of cycles in hypergraphs one can consider: loose cycles, Berge cycles and tight cycles (which all coincide for graphs), but in this paper we focus on tight cycles. If $r\geq 2$ and $\ell\geq r+1$, the \emph{tight cycle of length $\ell$} is the $r$-uniform hypergraph with vertices $x_{1},\dots,x_{\ell}$ and edges $\{x_{i},x_{i+1},\dots,x_{i+r-1}\}$ for $i=1,\dots,\ell$, where all indices are taken modulo~$\ell$. There is a large literature on the extremal number of Berge and loose cycles, see e.g. \cite{BGy08,FJ14,Gy06,GyL12,JM18,KMV15}, but the corresponding questions for tight cycles turn out to be particularly difficult. Let $\mathcal{C}^{(r)}$ denote the family of $r$-uniform tight cycles. Let $S^{(r)}_n$ be the $r$-uniform hypergraph whose edges are those $r$-element subsets of $[n]$ that contain 1. Clearly, $S^{(r)}_n$ contains no tight cycle, and $|E(S^{(r)}_n)|=\binom{n-1}{r-1}$. S\'os, and independently Verstra\"ete (see, e.g., \cite{MPS, V16}), conjectured that $S^{(r)}_n$ is extremal for tight cycles, that is, $\mbox{ex}(n,\mathcal{C}^{(r)})=\binom{n-1}{r-1}$ for sufficiently large $n$. This conjecture was recently disproved by Huang and Ma \cite{HM19}, who showed that for every $r\geq 3$ there exists some constant $1<c=c(r)<2$ such that $\mbox{ex}(n,\mathcal{C}^{(r)})\geq c\binom{n-1}{r-1}$ for every sufficiently large $n$. On the other hand, it is widely believed that $\mbox{ex}(n,\mathcal{C}^{(r)})=O(n^{r-1})$. Nevertheless, no upper bound coming close to this conjecture was known. In the case $r=3$, an unpublished result of Verstra\"ete states that if $C^{(3)}_{24}$ is the 3-uniform tight cycle of length 24, then $\mbox{ex}(n,C^{(3)}_{24})=O(n^{5/2})$. For $r\geq 4$, the best upper bound we were aware of is $\mbox{ex}(n,\mathcal{C}^{(r)})=O(n^{r-2^{-r+1}})$, which comes from the observation that the complete $r$-partite $r$-uniform hypergraph with vertex classes of size 2, denoted by $K^{(r)}_{2, \dots,2}$, contains the tight cycle of length $2r$, and we have $\mbox{ex}(n,K^{(r)}_{2, \dots,2})=O(n^{r-2^{-r+1}})$ by a well known result of Erd\H{o}s \cite{E64}.
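To illustrate the last observation in the smallest case $r=3$: if the vertex classes of $K^{(3)}_{2,2,2}$ are $\{a_1,a_2\}$, $\{b_1,b_2\}$ and $\{c_1,c_2\}$, then the cyclic vertex sequence $a_1,b_1,c_1,a_2,b_2,c_2$ spans a tight cycle of length $6$: each of its six consecutive triples, such as $\{a_1,b_1,c_1\}$ and $\{b_1,c_1,a_2\}$, contains exactly one vertex from each class, and is therefore an edge of $K^{(3)}_{2,2,2}$.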
In case one wants to find a tight cycle of linear size, Allen, B\"ottcher, Cooley, and Mycroft \cite{ABCM17} proved that for every $0<\alpha,\delta<1$ and sufficiently large $n$ (with respect to $r$ and $\delta$), any $r$-uniform hypergraph with $n$ vertices and at least $(\alpha+\delta)\binom{n}{r}$ edges contains a tight cycle of length $\ell$ for any $\ell\leq\alpha n$ with $r \mid \ell$. However, the proof of this result uses the Hypergraph Regularity Lemma, and it is thus not applicable in the setting of sparse hypergraphs with $o(n^r)$ edges. In this paper, we prove the first upper bound for containing a tight cycle which matches the lower bound up to an $n^{o(1)}$ factor. \begin{theorem}\label{thm:mainthm} If $\mathcal{H}$ is an $r$-uniform hypergraph on $n$ vertices which does not contain a tight cycle, then $\mathcal{H}$ has at most $n^{r-1+o(1)}$ edges. \end{theorem} \noindent More precisely, our proof shows that there exists $c=c(r)>0$ such that $\mathcal{H}$ has at most $n^{r-1}e^{c\sqrt{\log n}}$ edges. \section{Preliminaries} Let us start by describing the notation we are going to use, some of which might be slightly unconventional. As usual, $[r]$ denotes the set $\{1,\dots,r\}$, and $S_{r}$ denotes the set of all permutations of $[r]$. If $G$ is a graph and $X\subset V(G)$, the \emph{neighborhood} of $X$ is $N_{G}(X)=N(X)=\{y\in V(G)\setminus X:\exists x\in X, xy\in E(G)\}$. A \emph{tight path} in an $r$-uniform hypergraph $\mathcal{H}$ is a sequence of $\ell\geq r+1$ vertices $x_1,\dots,x_{\ell}$ such that $\{x_{i},\dots,x_{i+r-1}\}\in E(\mathcal{H})$ for $i=1,\dots,\ell-r+1$. Let $\mathcal{H}$ be an $r$-uniform hypergraph on $n$ vertices. By considering a random partition of $V(\mathcal{H})$ into $r$ parts, we can find an $r$-partite subgraph $\mathcal{H}'$ of $\mathcal{H}$ with at least $\frac{r!}{r^r}|E(\mathcal{H})|$ edges. Therefore, it is enough to verify Theorem \ref{thm:mainthm} for $r$-partite $r$-uniform hypergraphs. Now suppose that $\mathcal{H}$ is an $r$-partite $r$-uniform hypergraph with vertex classes $A_1,\dots,A_r$, each of size at most $n$. Instead of working with hypergraphs, we find it more suitable to work with their \emph{line graphs}. Also, instead of viewing edges as $r$-element subsets of the vertices, it is better to work with $r$-tuples of vertices. This motivates the following definition. Say that a graph $G$ is an \emph{$r$-line-graph} if the vertex set of $G$ is a set of $r$-tuples $V\subset A_{1}\times \dots\times A_{r}$, and $x$ and $y$ are joined by an edge in $G$ if and only if $x$ and $y$ differ in exactly one coordinate. A \emph{subgraph} of an $r$-line-graph $G$ always refers to an induced subgraph of $G$, which is an $r$-line-graph as well by definition. Given an $r$-partite $r$-uniform hypergraph $\mathcal{H}$, we can naturally identify it with an $r$-line-graph. A \emph{tight cycle} in an $r$-line-graph refers to a sequence of vertices corresponding to the edges of a tight cycle in the associated hypergraph. Let $G$ be an $r$-line-graph and let $X\subset V(G)\subset A_{1}\times \dots\times A_{r}$. For $i\in [r]$ and $X\subset V(G)$, the \emph{$i$-boundary} of $X$, denoted by $\partial^{(i)}_{G}(X)=\partial^{(i)}(X)$, is the set of vertices $y\in V(G)$ for which $y$ has a neighbor in $X$ which differs from $y$ in the $i$-th coordinate. Also, the \emph{$i$-neighborhood} of $X$ is $N^{(i)}_{G}(X)=N^{(i)}(X)=\partial^{(i)}(X)\setminus X$.
For $i\in [r]$, an \emph{$i$-block} of $G$ is a set of the form $x\cup N^{(i)}(x)$ for some $x\in V(G)$, and a \emph{block} is an $i$-block for some $i\in [r]$. Note that the $i$-block containing $x$ is the set of all vertices which differ from $x$ at most in the $i$-th coordinate. Therefore, a block is a clique in $G$, and the $i$-blocks of $G$ form a partition of $V(G)$ for any $i\in [r]$. Let $p(G)$ denote the number of blocks of $G$, and define the \emph{density} of $G$ as $$\mbox{dens}(G)=\frac{\sum_{B}|B|}{p(G)}=\frac{r|V(G)|}{p(G)},$$ where the sum iterates over all blocks $B$ of $G$. The \emph{$i$-degree} of a vertex $x\in V(G)$ is $d^{(i)}_{G}(x)=d^{(i)}(x)=|N^{(i)}(x)|+1$, where we write $N^{(i)}(x)$ instead of $N^{(i)}(\{x\})$ (so $d^{(i)}(x)$ is the size of the $i$-block containing $x$). With slight abuse of notation, the minimum degree of $G$, denoted by $\delta(G)$, is the minimum of $d^{(i)}(x)$ over all $x\in V(G)$ and $i\in [r]$, which is the minimum size of a block in $G$. \subsection{An overview of the proof} Let $\mathcal{H}$ be an $r$-partite $r$-uniform hypergraph with vertex classes of size at most $N$, and let $G$ be the $r$-line-graph associated with $\mathcal{H}$. Let $n=dN^{r-1}$ be the number of edges of $\mathcal{H}$; then $|V(G)|=n$ and $p(G)\leq rN^{r-1}$. Therefore, $$\mbox{dens}(G)=\frac{rn}{p(G)}\geq \frac{rdN^{r-1}}{rN^{r-1}}=d.$$ Hence, Theorem \ref{thm:mainthm} is an immediate consequence of the following theorem. \begin{theorem}\label{thm:mainthm2} There exists $c=c(r)>0$ such that the following holds. If $G$ is an $r$-line-graph with $n$ vertices that does not contain a tight cycle, then $\mbox{dens}(G)\leq e^{c\sqrt{\log n}}$. \end{theorem} In the rest of the paper, we prove Theorem \ref{thm:mainthm2}. Let us briefly outline our proof strategy. Let $G$ be an $r$-line-graph of density $d$ such that $V(G)\subset A_1\times\dots\times A_r$. First, we show that $G$ contains a subgraph $H$ with minimum degree $\Omega(d)$ (as a reminder, here and everywhere else, minimum degree refers to our new definition of minimum degree) and good expansion properties, namely that every $X\subset V(H)$ of size at most $\frac{|V(H)|}{2}$ satisfies $|N(X)|\geq \lambda|X|$, where $\lambda=\Theta(\frac{1}{\log n})$. Then, we show that $H$ is a robust expander, meaning that even if one removes a few elements of $A_1\cup\dots\cup A_r$ (and thus deletes all the vertices containing a removed coordinate), $H$ still has good expansion properties. This can be found in Section \ref{sect:expansion}. Now fix an arbitrary permutation $\sigma\in S_r$. Let $x=(x_1,\dots,x_r),y=(y_1,\dots,y_r)\in V(G)$ be such that $x_i\neq y_i$ for $i\in [r]$. Say that $y$ is a \emph{$\sigma$-neighbor} of $x$ if the following holds. Let $z_0=x$, and for $i=1,\dots,r$ let $z_i\in A_1\times\dots\times A_r$ be the vector we get from $z_{i-1}$ after changing the $\sigma(i)$-th coordinate to $y_{\sigma(i)}$; note that $z_r=y$. If $z_1,\dots,z_{r-1}$ are all vertices of $G$, then we say that $y$ is a $\sigma$-neighbor of $x$. If $X\subset V(G)$, the $\sigma$-boundary of $X$, denoted by $\partial^{\sigma}_{G}(X)$, is the set of vertices $y$ which are a $\sigma$-neighbor of some $x\in X$. This notion is useful for the following reason. Say that a sequence of vertices $v_1,\dots,v_k$ is a $\sigma$-path if $v_{i+1}$ is a $\sigma$-neighbor of $v_{i}$ for $i=1,\dots,k-1$, and no two vertices among $v_{1},\dots,v_{k}$ share a coordinate. Then a $\sigma$-path corresponds to a tight path in the associated hypergraph.
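For illustration, take $r=3$ and $\sigma=\mathrm{id}$. If $x=(x_1,x_2,x_3)$ and $y=(y_1,y_2,y_3)$, then $z_1=(y_1,x_2,x_3)$ and $z_2=(y_1,y_2,x_3)$, so $y$ is a $\sigma$-neighbor of $x$ precisely when $z_1,z_2\in V(G)$. In that case, the vertices $x,z_1,z_2,y$ of the line graph are exactly the four consecutive edges of the tight path $x_1,x_2,x_3,y_1,y_2,y_3$ in the associated hypergraph.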
Our goal is to show that the expansion of $H$ implies that for any pair of vertices $x,y\in V(H)$ not sharing a coordinate, there is a short $\sigma$-path $P$ starting with $x$ and ending with $y$. But then after removing the coordinates appearing in $P$ (except for the coordinates of $x$ and $y$), the remaining graph $H'$ still has good expansion properties. Therefore, we can find a $\sigma$-path $P'$ starting with $y$ and ending with $x$ in $H'$. But then $P\cup P'$ is a tight cycle in $H$, and we are done. Unfortunately, we are not quite able to show that there is a $\sigma$-path from any $x$ to any $y$, but we can prove that either this is the case, or we can find a small subgraph of $H$ with unusually high density. Then, we conclude the proof by a density increment type argument. The key observation that allows us to find short paths between pairs of vertices is that if $H$ has good expansion properties, then the $\sigma$-boundaries also expand, that is, $|\partial^{\sigma}_{H}(X)|\geq (1+\lambda')|X|$ for some $\lambda'=\Theta(\lambda)$ and every $X\subset V(H)$ with $|X|\leq \frac{1}{2}|V(H)|$. This implies that given $x,y\in V(H)$, we can reach a large proportion of the vertices of $H$ from $x$ by a short $\sigma$-path. Also, if $\tau$ is the reverse of $\sigma$, we can reach a large proportion of the vertices of $H$ by a $\tau$-path starting with $y$. See Section \ref{sect:sigma} for details. Then, it remains to find some $z\in V(G)$ such that $z$ can be reached from $x$ by a $\sigma$-path $P_x$, $z$ can be reached by a $\tau$-path $P_y$ from $y$, and no vertex $u\in P_x\setminus\{z\}$ and $v\in P_y\setminus\{z\}$ share a coordinate. \section{Expansion}\label{sect:expansion} Let us start with a simple, but very useful lemma about finding subgraphs of large minimum degree in $r$-line-graphs. \begin{lemma}\label{lemma:mindeg} If $G$ is an $r$-line-graph of density $d$, then $G$ contains a subgraph $H$ such that $\mbox{dens}(H)\geq d$ and $\delta(H)\geq\frac{d}{r}$. \end{lemma} \begin{proof} Repeat the following operation: if there exist $i\in [r]$ and $x\in V(G)$ such that $d^{(i)}(x)<\frac{d}{r}$, then delete the $i$-block $B$ containing $x$. We show that if the density of $G$ is at least $d$, then this operation increases the density. Indeed, if the resulting graph is $G'$, then $$\mbox{dens}(G')\geq\frac{r|V(G)|-r|B|}{p(G)-1}>\frac{dp(G)-d}{p(G)-1}=d.$$ This implies that after repeating the operation described above a finite number of times, we end up with a nonempty graph $H$ with the desired properties. \end{proof} If $G$ is a graph and $\lambda>0$, we say that $G$ is a \emph{$\lambda$-expander} if for every $X\subset V(G)$ satisfying $|X|\leq \frac{1}{2}|V(G)|$ we have $|N(X)|\geq \lambda |X|$. Note that having expansion for every set of size at most $\frac{1}{2}|V(G)|$ automatically implies the expansion of larger sets as well, as we show in the next claim. \begin{claim}\label{claim:eps_exp} Let $0<\lambda<1$, $\epsilon>0$ and let $G$ be a $\lambda$-expander. If $X\subset V(G)$ is such that $|X|\leq (1-\epsilon)|V(G)|$, then $|N(X)|\geq \frac{\lambda\epsilon}{2}|X|$. \end{claim} \begin{proof} If $|X|\leq \frac{|V(G)|}{2}$, then this follows from the definition. Suppose that $|X|\geq \frac{|V(G)|}{2}$ and that $|N(X)|< \frac{\lambda\epsilon}{2} |X|$. Let $Y=V(G)\setminus (X\cup N(X))$. Then $|Y|\leq \frac{|V(G)|}{2}$, so $|N(Y)|\geq \lambda|Y|$.
But $|Y| = |V(G)|-|X|-|N(X)|\geq |V(G)|(\epsilon-\frac{\epsilon\lambda}{2})$ and $N(Y)\subset N(X)$, so $$|N(X)|\geq |N(Y)|\geq \lambda |V(G)|\left(\epsilon-\frac{\epsilon\lambda}{2}\right)\geq \frac{\epsilon\lambda}{2}|V(G)|,$$ a contradiction. \end{proof} Say that an $r$-line-graph $G$ is a \emph{$(\lambda,d)$-expander} if $G$ is a $\lambda$-expander and $\delta(G)\geq d$. A result of Shapira and Sudakov \cite{SS15} tells us that every graph on $n$ vertices contains a $\lambda$-expander subgraph of roughly the same density, where $\lambda=\Theta(\frac{1}{\log n})$ (in their case, density refers to the usual notion of edge density; also, their notion of expansion is stronger). We use their approach to show that an $r$-line-graph $G$ also contains a $(\lambda,d)$-expander subgraph of roughly the same density, where $d=\Omega(\mbox{dens}(G))$. \begin{lemma}\label{lemma:expander} Let $G$ be an $r$-line-graph on $n$ vertices of density $d$, and let $0<\lambda\leq \frac{1}{2\log_{2} n}$. Then $G$ contains a subgraph $H$ of density at least $d(1-\lambda\log_{2} n)$ such that $H$ is a $(\lambda,\frac{d}{2r})$-expander. \end{lemma} \begin{proof} First, we show that if $G'$ is an $r$-line-graph of density $d'$ that is \emph{not} a $\lambda$-expander, then there exists $U\subset V(G')$, $U\neq V(G')$ such that either \begin{enumerate} \item $|U|\leq \frac{1}{2}|V(G')|$ and $\mbox{dens}(G'[U])\geq d'(1-\lambda)$, or \item $\mbox{dens}(G'[U])\geq d'$. \end{enumerate} Indeed, if $G'$ is not a $\lambda$-expander, then there exists $W\subset V(G')$ such that $|W|\leq \frac{1}{2}|V(G')|$ and $|N(W)|\leq \lambda|W|$. We show that either $U_{1}=W$ satisfies 1., or $U_{2}=V(G')\setminus (N(W)\cup W)$ satisfies 2. Suppose this is not true. Let $p_{j}=p(G'[U_{j}])$ for $j=1,2$. Note that $p(G')\geq p_{1}+p_{2}$. But then we can write \begin{align*} r|V(G')|&=r(|U_1|+|N(U_{1})|+|U_{2}|)\leq r|U_{1}|(1+\lambda)+r|U_{2}|\\ &<d'(1-\lambda)(1+\lambda)p_{1}+d'p_{2}\leq d'(p_{1}+p_{2})\leq r|V(G')|, \end{align*} a contradiction. By applying Lemma \ref{lemma:mindeg}, we can also conclude that there exists $U\subset V(G')$, $U\neq V(G')$ such that either \begin{itemize} \item[1.$^{*}$] $|U|\leq \frac{1}{2}|V(G')|$, $\mbox{dens}(G'[U])\geq d'(1-\lambda)$ and $\delta(G'[U])\geq\frac{d'(1-\lambda)}{r}$, or \item[2.$^{*}$] $\mbox{dens}(G'[U])\geq d'$ and $\delta(G'[U])\geq \frac{d'}{r}$. \end{itemize} Starting with an $r$-line-graph $G$ of density $d$, replace $G$ with one of its subgraphs of minimum degree at least $d/r$ having density at least $d$. If the current $G$ is not a $\lambda$-expander, we can find $U\subset V(G)$ satisfying 1.$^{*}$ or 2.$^{*}$. Replace $G$ with $G[U]$, and repeat the previous step until $G$ is a $\lambda$-expander or its density is less than $d/2$. The process must stop since we are deleting at least one vertex at every step. Let $H$ be the final graph and let $\ell$ be the number of steps of kind 1.$^{*}$ that we made. Then $|V(H)|\leq |V(G)|2^{-\ell}$ and therefore $\ell\leq \log_{2}n$. This implies that $\mbox{dens}(H)\geq d(1-\lambda)^{\ell}\geq d(1-\lambda\log_{2}n)\geq d/2$, $\delta(H)\geq \frac{d(1-\lambda\log_{2}n)}{r}\geq \frac{d}{2r}$ and $H$ is a $\lambda$-expander. \end{proof} From this, we can immediately conclude that we can cover almost every vertex of an $r$-line-graph $G$ with disjoint expander subgraphs. \begin{corollary}\label{lemma:expander_covering} Let $\epsilon>0$. Let $G$ be an $r$-line-graph on $n$ vertices of density at least $d$, and let $\lambda\leq \frac{1}{2\log_2 n}$.
Then $G$ contains vertex disjoint subgraphs $G_{1},\dots,G_{k}$ such that $G_{i}$ is a $(\lambda,\frac{\epsilon d}{2r})$-expander for $i\in [k]$, and $|\bigcup_{i=1}^{k} V(G_{i})|\geq (1-\epsilon)n$. \end{corollary} \begin{proof} We greedily find the subgraphs $G_1,\dots,G_k$ as follows. Suppose that we have already found $G_{1},\dots,G_{j}$, and let $H$ be the subgraph of $G$ induced on $V(G)\setminus \bigcup_{i=1}^{j}V(G_{i})$. If $|V(H)|\leq \epsilon n$, then stop; otherwise define $G_{j+1}$ as follows. The number of blocks of $H$ is at most $p(G)$, so the density of $H$ is at least $\frac{r\epsilon n}{p(G)}\geq \epsilon d$. But then $H$ contains a $(\lambda,\frac{\epsilon d}{2r})$-expander subgraph by Lemma \ref{lemma:expander}; let this subgraph be $G_{j+1}$. \end{proof} Let $G$ be an $r$-line-graph with $V(G)\subset A_1\times \dots\times A_r$. If $u\in A_1\cup \dots\cup A_r$, \emph{the deletion of $u$ from $G$} means that we remove all vertices of $G$ with one coordinate equal to $u$. Next, we show that our notion of $(\lambda,d)$-expansion is robust, which means that after the deletion of a few coordinates of a good expander, the resulting graph is still a good expander. \begin{lemma}\label{lemma:expander_robust} Let $u$ and $\delta$ be positive integers. Let $G$ be an $r$-line-graph on $n$ vertices with $V(G)\subset A_1\times \dots\times A_r$ such that the minimum degree of $G$ is at least $\delta$. Let $H$ be the subgraph of $G$ we get after deleting at most $u$ elements of $A_1\cup \dots\cup A_r$ from $G$. Then $H$ is an $r$-line-graph of minimum degree at least $\delta-u$ on at least $(1-\frac{u}{\delta})n$ vertices. If in addition $G$ is a $\lambda$-expander, $\lambda\leq 1$ and $u\leq \frac{\lambda \delta}{4r}$, then $H$ is a $(\frac{\lambda}{2},\frac{\delta}{2})$-expander. \end{lemma} \begin{proof} If $x\in V(H)$, then at most $u$ neighbors of $x$ are deleted, so it is clear that the minimum degree of $H$ is at least $\delta-u$; note that if $u\leq \frac{\lambda\delta}{4r}$, then $\delta-u\geq\frac{\delta}{2}$. Let $U$ be the set of deleted elements and let $U_i=U\cap A_i$ for $i\in [r]$. For each $i\in[r]$, the number of vertices $(a_1,\dots,a_r)\in V(G)$ such that $a_{i}\in U_i$ is at most $\frac{|U_{i}|}{\delta}n$, as each $i$-block of $G$ contains at least $\delta$ vertices, of which at most $|U_i|$ have their $i$-th coordinate in $U_i$. Therefore, the total number of deleted vertices is at most $\frac{|U_1|+\dots+|U_r|}{\delta}n\leq\frac{u}{\delta}n.$ It remains to show that $H$ is a $\frac{\lambda}{2}$-expander. Let $X\subset V(H)$ such that $|X|\leq \frac{1}{2}|V(H)|$. As $G$ is a $\lambda$-expander, we have $|N_{G}(X)|\geq \lambda |X|$. Let $i\in [r]$, and let $\mathcal{B}$ be the family of $i$-blocks of $G$ having a nonempty intersection with $X$. Then the $i$-blocks of $H$ intersecting $X$ are $V(H)\cap B$ for $B\in \mathcal{B}$. Here, we have $$|V(H)\cap B|\geq |B|-u\geq |B|\left(1-\frac{u}{\delta}\right)\geq |B|\left(1-\frac{\lambda}{4r}\right).$$ Note that $$|N_{G}^{(i)}(X)|+|X|=\sum_{B\in \mathcal{B}}|B|,$$ so $$|N_{H}^{(i)}(X)|+|X|=\sum_{B\in\mathcal{B}}|B\cap V(H)|\geq \left(1-\frac{\lambda}{4r}\right)\sum_{B\in\mathcal{B}}|B|= \left(1-\frac{\lambda}{4r}\right)(|N_G^{(i)}(X)|+|X|).$$ From this, we get \begin{equation}\label{equ:robust} |N_{H}^{(i)}(X)|\geq |N_{G}^{(i)}(X)|-\frac{\lambda}{4r}(|X|+|N_{G}^{(i)}(X)|) . \end{equation} Consider two cases. \begin{itemize} \item[Case 1.] There exists $i\in [r]$ such that $|N_{G}^{(i)}(X)|\geq |X|$.
In this case (\ref{equ:robust}) implies that $$|N_{H}(X)|\geq |N_{H}^{(i)}(X)|\geq \left(1-\frac{\lambda}{2r}\right)|N^{(i)}_{G}(X)|\geq\frac{1}{2}|X|\geq\frac{\lambda}{2}|X|.$$ \item[Case 2.] For every $i\in [r]$, we have $|N_{G}^{(i)}(X)|< |X|$. Then (\ref{equ:robust}) implies that $|N_{H}^{(i)}(X)|\geq |N^{(i)}_G(X)|-\frac{\lambda}{2r}|X|$. But then $$|N_{H}(X)|=\left|\bigcup_{i\in [r]} N^{(i)}_H(X)\right|\geq \left|\bigcup_{i\in [r]} N^{(i)}_G(X)\right|-r\cdot\frac{\lambda}{2r}|X|=|N_G(X)|-\frac{\lambda}{2}|X|\geq \frac{\lambda}{2}|X|.$$ \end{itemize} Therefore, $H$ is a $\frac{\lambda}{2}$-expander. \end{proof} \section{$\sigma$-expansion}\label{sect:sigma} Let $G$ be an $r$-line-graph and let $X\subset V(G)$. Given a permutation $\sigma\in S_r$, the \emph{$\sigma$-boundary} of $X$ is defined as $$\partial_{G}^{\sigma}(X)=\partial^{\sigma}(X)=\partial^{(\sigma(r))}(\dots\partial^{(\sigma(2))}(\partial^{(\sigma(1))}(X))\dots).$$ If $x\in V(G)$ and $y\in \partial^{\sigma}(x)$, say that \emph{$y$ is a $\sigma$-neighbor of $x$ in $G$}. Note that being a $\sigma$-neighbor is not necessarily a symmetric relation. In this section, we show that if $G$ has good expansion properties, then the $\sigma$-boundaries also expand. We prove even more: even if one deletes a few $\sigma$-neighbors of every $x\in V(G)$ arbitrarily, the $\sigma$-boundaries still expand. This motivates the following definition. Suppose that $V(G)\subset A_{1}\times\dots\times A_{r}$. For every $x\in V(G)$, let $F(x)\subset A_1\cup \dots\cup A_r$ be a (possibly empty) set of forbidden coordinates, and let $\mathcal{F}=(F(x))_{x\in V(G)}$. Given $X\subset V(G)$, define $\partial^{\sigma}_G(X,\mathcal{F})=\partial^{\sigma}(X,\mathcal{F})$ as the set of vertices $y$ for which there exists $x\in X$ such that $y$ is a $\sigma$-neighbor of $x$, and no coordinate of $y$ is in $F(x)$. \begin{lemma}\label{lemma:sigma_expansion} Let $\sigma\in S_r$, let $0<\epsilon,\lambda<1$, and let $d$ and $u$ be positive integers such that $100r^2 u\leq \epsilon d\lambda$. Let $G$ be an $r$-line-graph such that $G$ is a $(\lambda,d)$-expander, and let $\mathcal{F}=(F(x))_{x\in V(G)}$ such that $|F(x)|\leq u$ for every $x\in V(G)$. Then for every $X\subset V(G)$ satisfying $|X|\leq (1-\epsilon)|V(G)|$, we have $$|\partial^{\sigma}(X,\mathcal{F})|\geq \left(1+\frac{\epsilon\lambda}{4r}\right)|X|.$$ \end{lemma} \begin{proof} Write $t=\frac{u}{d}$; then $t\leq \frac{\epsilon \lambda}{100r^{2}}$. Let $i\in [r]$ and $X\subset V(G)$. We start the proof with two simple claims. Let $\mathcal{F}'=(F'(x))_{x\in V(G)}$ be some family of sets of forbidden coordinates, and define $\partial^{(i)}(X,\mathcal{F}')$ as the set of vertices $y$ which have an $i$-neighbor $x\in X$ such that the $i$-th coordinate of $y$ is not in $F'(x)$. Suppose that $|F'(x)|\leq u$ for every $x\in V(G)$. \begin{claim}\label{claim:expansion} $$|\partial^{(i)}(X,\mathcal{F}')|\geq (1-4t)|X\cup N^{(i)}(X)|\geq (1-4t)|X|.$$ \end{claim} \begin{proof} It is enough to show the first inequality. Let $\mathcal{B}$ be the set of $i$-blocks in $G$ having a nonempty intersection with $X$. Clearly, $\partial^{(i)}(X,\mathcal{F}')$ is the disjoint union of the sets $\partial^{(i)}(B\cap X,\mathcal{F}')$ for $B\in\mathcal{B}$, and $X\cup N^{(i)}(X)$ is the disjoint union of the sets $B\in \mathcal{B}$. Therefore, it is enough to show that $|\partial^{(i)}(B\cap X,\mathcal{F}')|\geq (1-4t)|B|$.
But if $x\in B\cap X$, then $$|\partial^{(i)}(B\cap X,\mathcal{F}')|\geq |B|-1-|F'(x)|\geq |B|-1-dt\geq |B|(1-4t),$$ where the third inequality holds since $|B|\geq d$ and $dt=u\geq 1$. \end{proof} \begin{claim}\label{claim:nonexpansion} If $|X\cup N^{(i)}(X)|\leq 2|X|$, then $|X\cap \partial^{(i)}(X,\mathcal{F}')|\geq (1-4t)|X|$. \end{claim} \begin{proof} As before, let $\mathcal{B}$ be the set of $i$-blocks in $G$ having a nonempty intersection with $X$. Note that $$ |X\cup N^{(i)}(X)|=\sum_{B\in\mathcal{B}}|B|\geq d|\mathcal{B}|,$$ so we get $|\mathcal{B}|\leq \frac{2|X|}{d}$. Let $B\in \mathcal{B}$ and $x\in B\cap X$. Clearly, $$|\partial^{(i)}(B\cap X,\mathcal{F}')\cap X|\geq |B\cap X|-1-|F'(x)|\geq |B\cap X|-1-dt.$$ Therefore, we have $$|\partial^{(i)}(X,\mathcal{F}')\cap X|\geq \sum_{B\in\mathcal{B}}(|B\cap X|-1-dt)\geq|X|-2dt|\mathcal{B}|\geq |X|(1-4t),$$ where the second inequality holds as $1+dt\leq 2dt$ (recall that $dt=u\geq 1$), and the third holds by $|\mathcal{B}|\leq \frac{2|X|}{d}$. \end{proof} Without loss of generality, suppose that $\sigma$ is the permutation $12\dots r$, and let $X\subset V(G)$ such that $|X|\leq (1-\epsilon) |V(G)|$. Let $X_{0}=X$, $\mathcal{F}_{0}=\mathcal{F}$, and in what follows we define $X_{1},\dots,X_{r}$ and $\mathcal{F}_1,\dots,\mathcal{F}_r$. Suppose that $X_{i},\mathcal{F}_{i}=(F_{i}(x))_{x\in V(G)}$ is already defined for some $i\in \{0,\dots,r-1\}$, then define $X_{i+1},\mathcal{F}_{i+1}$ as follows. Consider two cases. First, suppose that $|N^{(i+1)}(X_{i})|\geq \frac{\epsilon\lambda}{3r}|X_{i}|$, and in this case say that \emph{expansion happened}. Let $X_{i+1}=\partial^{(i+1)}(X_{i},\mathcal{F}_i)$. Then $|X_{i+1}|\geq (1-4t)(1+\frac{\epsilon\lambda}{3r})|X_{i}|$ by Claim \ref{claim:expansion}. On the other hand, if $|N^{(i+1)}(X_{i})|< \frac{\epsilon\lambda}{3r}|X_{i}|$, then let $X_{i+1}=X_{i}\cap \partial^{(i+1)}(X_{i},\mathcal{F}_i)$. Then $|X_{i+1}|\geq (1-4t)|X_i|$ by Claim \ref{claim:nonexpansion}. In each case, for each $y\in X_{i+1}$, let $F_{i+1}(y)=F_{i}(x)$, where $x\in X_i$ is an arbitrary $(i+1)$-neighbor of $y$ for which no coordinate of $y$ is in $F_{i}(x)$, and set $F_{i+1}(y)=\emptyset$ for every $y\in V(G)\setminus X_{i+1}$. Note that $X_{r}\subset \partial^{\sigma}(X,\mathcal{F})$, so it is enough to show that $|X_{r}|\geq (1+\frac{\epsilon\lambda}{4r})|X|$. Observe that if expansion happened even once, then we have $$|X_{r}|\geq (1-4t)^{r}\left(1+\frac{\epsilon\lambda}{3r}\right)|X|\geq (1-4rt) \left(1+\frac{\epsilon\lambda}{3r}\right)|X| \geq \left(1+\frac{\epsilon\lambda}{4r}\right)|X|,$$ where the last inequality holds as $t\leq \frac{\epsilon \lambda}{100r^{2}}$. Therefore, it remains to show that expansion must have happened. Suppose otherwise; then $X_{r}\subset X_{r-1}\subset\dots\subset X_{0}$ and $|X_{r}|\geq (1-4t)^{r}|X|\geq (1-4rt)|X|.$ Observe that for $i\in \{0,\dots,r-1\}$, we have $X_{r}\cup N^{(i+1)}(X_{r})\subset X_{i}\cup N^{(i+1)}(X_{i})$. Therefore, $$|N^{(i+1)}(X_{r})|\leq |X_{i}|+|N^{(i+1)}(X_{i})|-|X_{r}|\leq |X|+\frac{\epsilon\lambda}{3r}|X|-(1-4rt)|X|=\left(4rt+\frac{\epsilon\lambda}{3r}\right)|X|, $$ where the second inequality holds as expansion did not happen. But then $$|N(X_{r})|\leq\sum_{i=1}^{r}|N^{(i)}(X_{r})|\leq \left(4r^2t+\frac{\epsilon\lambda}{3}\right)|X|<\frac{\epsilon\lambda}{2} |X_{r}|,$$ which contradicts Claim \ref{claim:eps_exp}, as $G$ is a $\lambda$-expander and $|X_{r}|\leq (1-\epsilon)|V(G)|$. Therefore, expansion must have happened for some $0 \leq i \leq r-1$, finishing the proof.
\end{proof} Let $G$ be an $r$-line-graph such that $V(G)\subset A_{1}\times \dots\times A_{r}$, and let $\sigma\in S_r$. Say that a sequence $a_{1},\dots,a_{rk}$ of distinct elements of $A_{1}\cup \dots\cup A_{r}$ is a \emph{$\sigma$-path} in $G$ if \begin{enumerate} \item $a_{i}\in A_{\sigma(j)}$, where $j\equiv i \pmod{r}$, \item for $i=1,\dots,rk-r+1$, the vector with coordinates $a_{i},\dots,a_{i+r-1}$ (reordered so that its $j$-th coordinate lies in $A_{j}$) is a vertex of $G$. \end{enumerate} Note that the sequence $a_{1},\dots,a_{rk}$ corresponds to a tight path in the hypergraph identified with $G$. Also, if $x_{1},\dots,x_{k}$ is a sequence of vertices of $G$, say that $x_{1},\dots,x_{k}$ form a \emph{$\sigma$-path} if $a_{1},\dots,a_{rk}$ is a $\sigma$-path, where $a_{ri-r+1},\dots,a_{ri}$ are the coordinates of $x_{i}$ (in the order given by $\sigma$). Also, if $x,y\in V(G)$, say that \emph{$y$ can be reached from $x$ by a $\sigma$-path} if there exists a $\sigma$-path $x_{1},\dots,x_{k}$ such that $x_{1}=x$ and $x_{k}=y$. Note that the statement that $y$ can be reached from $x$ by a $\sigma$-path is equivalent to the statement that $x$ can be reached from $y$ by a $\tau$-path, where $\tau$ is the reverse of $\sigma$ (that is, $\tau(i)=\sigma(r+1-i)$ for $i\in [r]$). The \emph{size} of a $\sigma$-path $x_{1},\dots,x_{k}$ is $k$. \begin{lemma}\label{lemma:paths} Let $\sigma\in S_{r}$, let $\epsilon,\lambda>0$ and let $n,d$ be positive integers such that $500r^4\log n<\epsilon^2\lambda^2 d$. Let $G$ be an $r$-line-graph on $n$ vertices that is a $(\lambda,d)$-expander, and let $x\in V(G)$. Then at least $(1-\epsilon)n$ vertices of $G$ can be reached from $x$ by a $\sigma$-path of size at most $\frac{5r\log n}{\epsilon\lambda}$. \end{lemma} \begin{proof} Suppose that $V(G)\subset A_1\times \dots\times A_r$. Let $X_{1}=\{x\}$ and let $X_{i}$ be the set of vertices that can be reached from $x$ by a $\sigma$-path of size $i$. For $y\in X_{i}$, let $x=x_1,\dots,x_i=y$ be a $\sigma$-path, and let $F(y)\subset A_1\cup \dots\cup A_r$ be the set of all coordinates appearing in $x_1,\dots,x_{i-1}$. For $y\in V(G)\setminus X_{i}$, let $F(y)=\emptyset$, and set $\mathcal{F}=(F(y))_{y\in V(G)}$. Then $\partial^{\sigma}(X_i,\mathcal{F})\subset X_{i+1}$. Note that $|F(y)|<ri$ for every $y\in V(G)$. If $|X_{i}|\leq (1-\epsilon)n$ and $ri\leq \frac{\epsilon\lambda d}{100r^2}$, then we can apply Lemma \ref{lemma:sigma_expansion} to get $$|X_{i+1}|\geq |\partial^{\sigma}(X_i,\mathcal{F})|\geq \left(1+\frac{\epsilon\lambda}{4r}\right)|X_i|.$$ Hence, by induction on $i$, if $i<\frac{\epsilon\lambda d}{100r^3}$, then either $|X_{i}|\geq \left(1+\frac{\epsilon\lambda}{4r}\right)^{i}$ or $|X_j|\geq (1-\epsilon)n$ for some $j\leq i$. Setting $I:=\frac{5r\log n}{\epsilon\lambda}< \frac{\epsilon\lambda d}{100r^3}$, we have $\left(1+\frac{\epsilon\lambda}{4r}\right)^{I}>n$, which implies $|X_{j}|\geq (1-\epsilon)n$ for some $j\leq I$. \end{proof} \noindent \textbf{Remark.} Following the same proof, it is not hard to show the following strengthening of Lemma \ref{lemma:paths}. Let $L$ be a positive integer such that $\frac{5r\log n}{\epsilon\lambda}<L<\frac{\epsilon\lambda d}{100r^3}$. Then at least $(1-\epsilon)n$ vertices of $G$ can be reached from $x$ by a $\sigma$-path of size exactly $L$. \section{Finding tight paths} This section contains the bulk of the proof of Theorem \ref{thm:mainthm2}.
Here, we prove that if $G$ is an $r$-line-graph with good expansion properties and $\sigma\in S_r$, then either there exists a $\sigma$-path from any vertex $x$ to any other vertex $y$, or $G$ contains a small subgraph with unusually high density. Let us give a rough outline of the proof. Suppose that $V(G)\subset A_1\times\dots\times A_r$. We partition each of the sets $A_1,\dots,A_r$ into two parts, which then gives a partition of $V(G)$ into $2^r$ parts. We find a partition such that $x$ and $y$ are in different parts $G_1$ and $G_2$, no vertex in $V(G_1)$ shares a coordinate with a vertex in $V(G_2)$, and the vertices of $V(G)$ are distributed roughly uniformly among the $2^r$ parts. Let $\tau$ be the reverse of $\sigma$, that is, $\tau(i)=\sigma(r+1-i)$ for $i\in [r]$. Then our goal is to find two vertices $z\in V(G_1)$ and $z'\in V(G_2)$ such that $z'$ is a $\sigma$-neighbor of $z$, there is a $\sigma$-path $P$ from $x$ to $z$ in $V(G_1)$, and there is a $\tau$-path $P'$ from $y$ to $z'$ in $V(G_2)$. Indeed, then $P\cup P'$ is a $\sigma$-path from $x$ to $y$. Unfortunately, we are not quite able to achieve this. One of the main difficulties is that while $G$ might have good expansion properties, this might not be true for $G_1$ or $G_2$. Instead, for $i=1,2$, we cover most vertices of $G_i$ with expander subgraphs $G_{i,1},\dots,G_{i,k_i}$ using Corollary \ref{lemma:expander_covering}, and argue that either the number of expanders used is small, or one of the expander subgraphs is small. In the latter case, we have found our small subgraph of $G$ with unusually high density. Hence, suppose that both $k_1$ and $k_2$ are small. In this case, we choose a vertex $x_j \in G_{1,j}$ and a $\sigma$-path $P_{1,j}$ connecting $x$ with $x_j$ in $G$. Since $G$ is an expander, this is possible for most indices $j$. Let $U_1$ be the set of coordinates appearing in the union of the paths $P_{1,j}$. Similarly, for most indices $j$, we choose a vertex $y_j \in G_{2,j}$ and a $\tau$-path $P_{2,j}$ connecting $y$ with $y_j$ such that the vertices of $P_{2,j}$ have no coordinates in $U_1$. Let $U_2$ be the set of coordinates appearing in the union of the paths $P_{2,j}$. Using the paths $P_{i,j}$ and the expansion properties of the $G_{i,j}$, we show that most vertices of $G_1$ can be reached from $x$ by a $\sigma$-path every vertex $z$ of which has no coordinate in $U_2$ and is either in $V(G_1)$ or has all its coordinates in $U_1$. Also, most vertices of $G_2$ can be reached from $y$ by a $\tau$-path every vertex $z'$ of which has no coordinate in $U_1$ and is either in $V(G_2)$ or has all its coordinates in $U_2$. Then, we find two vertices $z\in V(G_1)$ and $z'\in V(G_2)$ such that $z'$ is a $\sigma$-neighbor of $z$, there is a $\sigma$-path $P$ from $x$ to $z$ satisfying the above properties, and there is a $\tau$-path $P'$ from $y$ to $z'$ satisfying the above properties. Then $P\cup P'$ is a $\sigma$-path from $x$ to $y$. In the next claim, we show that if we randomly partition the sets $A_1,\dots,A_r$, then the vertices are well distributed among the $2^r$ parts. \begin{claim}\label{claim:partition} Let $\epsilon>0$. Then there exists $c=c(r,\epsilon)>0$ such that the following holds. Let $G$ be an $r$-line-graph on $n$ vertices such that $V(G)\subset A_1\times\dots \times A_r$ and the minimum degree of $G$ is at least $c\log n$. Also, let $x,y\in V(G)$ such that $x$ and $y$ share no coordinates.
Then for $i\in [r]$, there exists a partition of $A_{i}$ into two sets, $A_{i,1}$ and $A_{i,2}$, with the following three properties: \begin{enumerate} \item $x\in A_{1,1}\times\dots \times A_{r,1}$ and $y\in A_{1,2}\times\dots \times A_{r,2}$. \item Given a vector ${\bf e}=(e_1,\dots,e_r)$, let $A_{{\bf e}}=A_{1,e_1}\times\dots\times A_{r,e_r}$. Then for all ${\bf e} \in \{1,2\}^r$ and every block $B$ of $G$, either $B\cap A_{{\bf e}}=\emptyset$, or $$\frac{1-\epsilon/2r}{2}|B|\leq |B \cap A_{{\bf e}}|\leq \frac{1+\epsilon/2r}{2}|B|.$$ \item For every ${\bf e} \in \{1,2\}^r$, $$\frac{1-\epsilon}{2^r}n\leq |V(G)\cap A_{{\bf e}}|\leq \frac{1+\epsilon}{2^r}n.$$ \end{enumerate} \end{claim} \begin{proof} For $i\in [r]$, partition $A_i$ randomly into two sets $A_{i,1}$ and $A_{i,2}$ such that $x\in A_{1,1}\times \dots\times A_{r,1}$ and $y\in A_{1,2}\times \dots\times A_{r,2}$. More precisely, each element of $A_i\setminus\{x_i,y_i\}$ is in either $A_{i,1}$ or $A_{i,2}$ independently with probability $1/2$. This partition clearly satisfies property 1, and we prove that for large enough $c$, with positive probability, it also satisfies 2 and 3. Let $B$ be an $i$-block and let ${\bf e} \in \{1,2\}^r$. Without loss of generality, we can assume that $i=1$. Then for $i=2,\dots,r$, there exists $a_{i}\in A_{i}$ such that $B\subset A_{1}\times \{a_2\}\times\dots\times \{a_r\}$. Therefore, if $a_{i}\not\in A_{i,e_i}$ for some $i\in \{2,\dots,r\}$, then $B\cap A_{{\bf e}}=\emptyset$. Otherwise, each element of $B'=B\setminus \{x,y\}$ appears in $A_{{\bf e}}$ independently with probability $1/2$. Therefore, choosing $c$ large enough and writing $m=|B'|$, we have by Chernoff's inequality $$\mathbb{P}\left(\left||B'\cap A_{{\bf e}}|-\frac{m}{2}\right|\geq \frac{\epsilon m}{6r}\right)\leq 2e^{-\frac{\epsilon^2 m}{72r^2}} \leq 2n^{-\frac{\epsilon^2c}{72r^2}}<\frac{1}{2^{r+1}r n}.$$ Here, $|B|\geq m\geq |B|-2$ and $\frac{\epsilon m}{12r}\geq 2$, so with probability at least $1-\frac{1}{2^{r+1}r n}$, we also have $\frac{1-\epsilon/(2r)}{2}|B|\leq |B\cap A_{{\bf e}}|\leq \frac{1+\epsilon/(2r)}{2}|B|$. Since the number of blocks $B$ of $G$ is at most $rn$, and there are $2^r$ choices for ${\bf e}$, by the union bound the probability that property 2 holds is at least $\frac{1}{2}$. To complete the proof, we show that property 2 implies 3. For $i\in [r]$, let $A_{i,0}=A_{i}$, and for ${\bf e} \in \{0,1,2\}^{r}$, as before, let $A_{{\bf e}}=A_{1,e_1}\times\dots\times A_{r,e_r}$. We show, by induction, that if ${\bf e} \in \{0,1,2\}^{r}$ is a vector with exactly $s$ nonzero coordinates, then $$\frac{(1-\frac{\epsilon}{2r})^{s}}{2^s}n\leq |V(G)\cap A_{{\bf e}}|\leq \frac{(1+\frac{\epsilon}{2r})^{s}}{2^s}n.$$ When $s=r$, this implies property 3. For $s=0$, the statement is trivially true, so let us suppose that $s\geq 1$, and that the statement holds for $s-1$ instead of $s$. Let ${\bf e} \in \{0,1,2\}^{r}$ be a vector with exactly $s$ nonzero coordinates, and suppose that $e_{i}\neq 0$. Let ${\bf f} \in \{0,1,2\}^r$ be the vector we get after changing $e_{i}$ to 0 in ${\bf e}$. Note that if $B$ is an $i$-block of $G$, then $B$ is either disjoint from or completely contained in $A_{{\bf f}}$. Also, for each $B$ contained in $A_{{\bf f}}$, we can uniquely change the zero coordinates of ${\bf f}$ to be either $1$ or $2$ to obtain a vector ${\bf g} \in \{1,2\}^r$ such that $B\cap A_{{\bf g}}=B\cap A_{{\bf e}}\neq \emptyset$.
Then by property 2, we have $\frac{1-\epsilon/(2r)}{2}|B|\leq |B\cap A_{{\bf e}}|\leq \frac{1+\epsilon/(2r)}{2}|B|.$ As this holds for every $i$-block $B$ contained in $A_{{\bf f}}$, we have $\frac{1-\epsilon/(2r)}{2}|V(G)\cap A_{{\bf f}}|\leq |V(G)\cap A_{{\bf e}}|\leq \frac{1+\epsilon/(2r)}{2}|V(G)\cap A_{{\bf f}}|$. Since, by induction, $\frac{(1-\epsilon/(2r))^{s-1}}{2^{s-1}}n\leq |V(G)\cap A_{{\bf f}}|\leq \frac{(1+\epsilon/(2r))^{s-1}}{2^{s-1}}n$, this implies $\frac{(1-\epsilon/(2r))^{s}}{2^s}n\leq |V(G)\cap A_{{\bf e}}|\leq \frac{(1+\epsilon/(2r))^{s}}{2^s}n$. \end{proof} Now we are ready to prove the main lemma of this section. \begin{lemma}\label{lemma:density} There exist $c_1,c_2,c_3,c_4>0$ depending only on $r$ such that the following holds. Let $\sigma\in S_r$, let $\lambda>0$, $K>1$, and let $n,d$ be positive integers such that $K<\frac{c_1\lambda^2 d}{\log n}$, $\lambda\leq \frac{1}{2\log_2 n}$ and $d\geq \frac{c_2\log n}{\lambda^{2}}$. Let $G$ be an $r$-line-graph with $n$ vertices that is a $(\lambda,d)$-expander, and let $x,y\in V(G)$ such that $x$ and $y$ share no coordinates. Then either $G$ contains a $\sigma$-path of size at most $\frac{c_{3}\log n}{\lambda}$ from $x$ to $y$, or $G$ has a subgraph with at most $\frac{n}{K}$ vertices and minimum degree at least $c_4d$. \end{lemma} \begin{proof} Let $\epsilon=2^{-r-6}$, and without loss of generality, let $\sigma$ be the permutation $12\dots r$. Also, suppose that $n\geq 2^{1/\epsilon}$; otherwise, we can choose $c_1$ small enough to guarantee that $1<K\leq\frac{c_1\lambda^2 d}{\log n}$ cannot be satisfied. This also implies that $\lambda<\epsilon$. Let $c=c(r,\epsilon)$ be the constant given by Claim \ref{claim:partition}. We show that the constants $c_1=\frac{\epsilon^2}{240r^{4}}$, $c_2=\max\{10^5r^5\epsilon^{-3},c\}$, $c_3=40r\epsilon^{-1}$ and $c_4=\frac{\epsilon}{6r}$ suffice. Suppose that $G$ contains no subgraph with at most $\frac{n}{K}$ vertices and minimum degree at least $c_4d$. Let $V(G)\subset A_1\times \dots\times A_r$, and for $i\in [r]$, partition $A_i$ into two sets $A_{i,1}$ and $A_{i,2}$ satisfying Claim \ref{claim:partition}. This can be done since $d\geq \frac{c_2\log n}{\lambda^2}\geq c\log n$. For ${\bf e} \in \{1,2\}^{r}$, recall that $A_{{\bf e}}=A_{1,e_1}\times\dots\times A_{r,e_r}$, and let $G_{{\bf e}}$ be the subgraph of $G$ induced on $A_{{\bf e}} \cap V(G)$. For simplicity, write $G_1$ and $G_2$ instead of $G_{(1,\dots,1)}$ and $G_{(2,\dots,2)}$, respectively. By Claim \ref{claim:partition}, we have the following properties: \begin{enumerate} \item $x\in V(G_1)$ and $y\in V(G_2)$, \item for every ${\bf e} \in\{1,2\}^r$, $\frac{1+\epsilon}{2^r}n\geq |V(G_{{\bf e}})|\geq \frac{1-\epsilon}{2^r}n$, and \item for every block $B$ of $G$ and every ${\bf e} \in\{1,2\}^r$, either $B\cap V(G_{{\bf e}})=\emptyset$, or $$\frac{1+\frac{\epsilon}{2r}}{2}|B|\geq |B\cap V(G_{{\bf e}})|\geq \frac{1-\frac{\epsilon}{2r}}{2}|B|\geq \frac{d}{3}.$$ \end{enumerate} Let $j\in\{1,2\}$. As the density of $G_{j}$ is at least $d/3$, we can apply Corollary \ref{lemma:expander_covering} to find vertex disjoint subgraphs $G_{j,1},\dots,G_{j,k_j}$ of $G_{j}$ such that $G_{j,i}$ is a $(\lambda,\frac{\epsilon d}{6r})$-expander, and $G_{j,1},\dots,G_{j,k_j}$ cover at least $(1-\epsilon)|V(G_{j})|$ vertices of $G_{j}$. By our assumption on the nonexistence of subgraphs with at most $\frac{n}{K}$ vertices and minimum degree at least $c_{4}d=\frac{\epsilon d}{6r}$, the size of each $G_{j,i}$ is at least $\frac{n}{K}$, which implies that $k_j\leq K$.
Let $G'$ be the graph we get after removing the coordinates of $y$ from $G$. As $r\leq \frac{\lambda d}{4r}$, we can apply Lemma \ref{lemma:expander_robust} to conclude that $G'$ is a $(\frac{\lambda}{2},\frac{d}{2})$-expander on at least $(1-\frac{r}{d})|V(G)|\geq (1-\epsilon)|V(G)|$ vertices. Let $X\subset V(G')$ be the set of vertices $z$ such that $z$ can be reached from $x$ by a $\sigma$-path of size at most $L:=\frac{5r\log n}{\epsilon(\lambda/2)}$ in $G'$. Noting that $500r^4\log n\leq \epsilon^{2}(\frac{\lambda}{2})^{2}\frac{d}{2}$ holds by the choice of $c_2$, we can apply Lemma \ref{lemma:paths} to get $|X|\geq (1-\epsilon)|V(G')|\geq (1-2\epsilon)|V(G)|$. For $i=1,\dots,k_1$, if $X\cap V(G_{1,i})$ is nonempty, pick an arbitrary vertex $x_i\in X\cap V(G_{1,i})$, which we call the representative of $G_{1,i}$. Without loss of generality, let $1,\dots,\ell_1$ be the indices $i$ for which $G_{1,i}$ has a representative; then $\sum_{i=\ell_1+1}^{k_1}|V(G_{1,i})|\leq |V(G)|-|X|\leq 2\epsilon|V(G)|$. Therefore, \begin{equation}\label{equ:total_size} \sum_{i=1}^{\ell_1}|V(G_{1,i})|\geq (1-\epsilon)|V(G_1)|-2\epsilon |V(G)|\geq (1-2^{r+2}\epsilon)|V(G_1)|, \end{equation} where the last inequality holds by the bound on $|V(G_1)|$ from property 2. For $i=1,\dots,\ell_1$, let $P_{1,i}\subset V(G)$ be a $\sigma$-path of size at most $L$ from $x$ to $x_i$, and let $P_1=\bigcup_{i=1}^{\ell_1}P_{1,i}$. Also, let $U_1$ be the set of coordinates appearing in the vertices of $P_1$. Then $|P_1|\leq LK$, and $|U_1|\leq rLK\leq \frac{\lambda d}{4r}$, where the last inequality holds by the choice of $c_1$. Let $G''$ be the subgraph of $G$ after the removal of the elements of $U_1$. We can apply Lemma \ref{lemma:expander_robust} to conclude that $G''$ is a $(\frac{\lambda}{2},\frac{d}{2})$-expander on at least $(1-\frac{|U_1|}{d})|V(G)|$ vertices. Here, $(1-\frac{|U_1|}{d})|V(G)|\geq (1-\frac{\lambda}{4r})|V(G)|>(1-\epsilon)|V(G)|$ holds. Let $\tau$ be the reverse of $\sigma$, that is, the permutation $r(r-1)\dots1$. Let $Y$ be the set of vertices in $G''$ which can be reached from $y$ by a $\tau$-path of size at most $L$ in $G''$. Then $|Y|\geq (1-\epsilon)|V(G'')|>(1-2\epsilon)|V(G)|$ by Lemma \ref{lemma:paths}. For $i=1,\dots,k_2$, if $Y\cap V(G_{2,i})$ is nonempty, pick an arbitrary vertex $y_i\in Y\cap V(G_{2,i})$, which we call the representative of $G_{2,i}$. Without loss of generality, let $1,\dots,\ell_2$ be the indices $i$ for which $G_{2,i}$ has a representative; then $\sum_{i=\ell_2+1}^{k_2}|V(G_{2,i})|\leq |V(G)|-|Y|\leq 2\epsilon|V(G)|$. Therefore, $$\sum_{i=1}^{\ell_2}|V(G_{2,i})|\geq (1-\epsilon)|V(G_2)|-2\epsilon |V(G)|\geq (1-2^{r+2}\epsilon)|V(G_2)|.$$ For $i=1,\dots,\ell_2$, let $P_{2,i}\subset V(G)$ be a $\tau$-path of size at most $L$ from $y$ to $y_i$, and let $P_2=\bigcup_{i=1}^{\ell_2}P_{2,i}$. Also, let $U_2$ be the set of coordinates appearing in the vertices of $P_2$. Then $|P_2|\leq LK$ and $|U_2|\leq rLK$. For $j=1,2$ and $i=1,\dots,\ell_j$, let $H_{j,i}$ be the graph we get after removing every element of $U=U_{1}\cup U_{2}$ from $G_{j,i}$, with the exception of the coordinates of $x_i$ in case $j=1$, and with the exception of the coordinates of $y_i$ in case $j=2$.
Here, $|U|\leq 2rLK<\frac{\lambda(\epsilon d/(6r))}{4r}$ holds by the choice of $c_1$, so we can apply Lemma \ref{lemma:expander_robust} to conclude that $H_{j,i}$ is a $(\frac{\lambda}{2},\frac{\epsilon d}{12r})$-expander on at least $$\left(1-\frac{|U|}{\epsilon d/(6r)}\right)|V(G_{j,i})|>\left(1-\frac{\lambda}{4r}\right)|V(G_{j,i})|> (1-\epsilon)|V(G_{j,i})|$$ vertices. Let $X_{i}$ be the set of vertices in $H_{1,i}$ that can be reached from $x_i$ by a $\sigma$-path of size at most $L$ in $H_{1,i}$. Also, let $Y_{i}$ be the set of vertices in $H_{2,i}$ that can be reached from $y_i$ by a $\tau$-path of size at most $L$ in $H_{2,i}$. See Figure \ref{figure} for an illustration. Noting that $500r^4\log n<\epsilon^{2}(\frac{\lambda}{2})^2(\frac{\epsilon d}{12r})$ holds by the choice of $c_2$, we can apply Lemma \ref{lemma:paths} to deduce that $|X_{i}|\geq (1-\epsilon)|V(H_{1,i})|\geq (1-2\epsilon)|V(G_{1,i})|$, and similarly $|Y_{i}|\geq (1-2\epsilon)|V(G_{2,i})|$. \begin{figure} \centering \begin{tikzpicture} \draw (0,0) circle (4) ; \node at (-3.5,3.5) {\huge $G$}; \draw[fill=blue!10!white] (0,-0.5) circle (1); \draw[fill=red!10!white] (2,1.1) circle (1); \draw[fill=blue!10!white] (-1.8,1) circle (1); \draw[fill=red!10!white] (1.8,-1.8) circle (1); \draw[fill=red!10!white] (-1.9,-1.7) circle (1); \draw[fill=blue!10!white] (0,2.2) circle (1); \node[vertex] (x) at (0,3.6) {}; \node at (0,3.8) {\small $x$}; \node[vertex] (y) at (0,-3.5) {};\node at (0,-3.8) {\small $y$}; \node[vertex] (x1) at (2,1.7) {};\node at (2.09,1.9) {\small $x_1$}; \node at (2,-0.15) {\small $H_{1,1}$}; \node[vertex] (x2) at (1.8,-1.2) {};\node at (1.95,-1) {\small $x_2$}; \node at (1.8,-3.05) {\small $H_{1,2}$}; \node[vertex] (x3) at (-1.9,-1.1) {};\node at (-2,-0.9) {\small $x_3$}; \node at (-1.9,-2.95) {\small $H_{1,3}$}; \node[vertex] (y1) at (0,-1.1) {};\node at (-0.08,-1.3) {\small $y_1$}; \node at (-0.1,-1.75) {\small $H_{2,1}$}; \node[vertex] (y2) at (-1.8,0.4) {};\node at (-1.95,0.2) {\small $y_2$}; \node at (-1.8,2.2) {\small $H_{2,2}$}; \node[vertex] (y3) at (0,1.6) {};\node at (-0.1,1.4) {\small $y_3$}; \node at (-0.5,3.35) {\small $H_{2,3}$}; \draw[thick] (x) -- (1.3,2.8) -- (1.4,2.4) -- (x1); \draw[thick] (1.4,2.4) -- (0.8,1.1); \draw[thick] (x2) -- (0.8,1.1); \draw[thick] (x3) -- (-0.443739,0.913236); \draw[thick] (0.8,1.1) -- (-0.443739,0.913236); \draw[thick] (y2) -- (-0.7,-1.6) -- (y); \draw[thick] (y1) -- (0.65,-2.81); \draw[thick] (y3) -- (1.3,-0.3) -- (0.7,-1.55) -- (0.65,-2.81); \draw[thick] (0.65,-2.81) -- (y); \draw (0,1.6) -- (0.0580543,1.96579); \draw (0,1.6) -- (-0.15155,1.8979); \draw (0.0580543,1.96579) -- (0.287352,2.21594); \draw (0.0580543,1.96579) -- (0.0653794,2.34426); \draw (-0.15155,1.8979) -- (-0.187155,2.29996); \draw (-0.15155,1.8979) -- (-0.352203,2.10376); \draw (0.287352,2.21594) -- (0.524554,2.3656); \draw (0.287352,2.21594) -- (0.415834,2.5413); \draw (0.0653794,2.34426) -- (0.251011,2.66589); \draw (0.0653794,2.34426) -- (0.0523207,2.72256); \draw (-0.187155,2.29996) -- (-0.153429,2.70367); \draw (-0.187155,2.29996) -- (-0.338477,2.61177); \draw (-0.352203,2.10376) -- (-0.477858,2.45925); \draw (-0.352203,2.10376) -- (-0.552767,2.26669); \draw (0.524554,2.3656) -- (0.73646,2.49187); \draw (0.524554,2.3656) -- (0.69633,2.62861); \draw (0.415834,2.5413) -- (0.631058,2.7553); \draw (0.415834,2.5413) -- (0.543,2.86735); \draw (0.251011,2.66589) -- (0.435337,2.96072); \draw (0.251011,2.66589) -- (0.311956,3.03204); \draw (0.0523207,2.72256) -- (0.177312,3.07874); \draw
(0.0523207,2.72256) -- (0.0362648,3.09912); \draw (-0.153429,2.70367) -- (-0.106091,3.09246); \draw (-0.153429,2.70367) -- (-0.244617,3.05899); \draw (-0.338477,2.61177) -- (-0.37431,2.99992); \draw (-0.338477,2.61177) -- (-0.490489,2.91738); \draw (-0.477858,2.45925) -- (-0.588958,2.81436); \draw (-0.477858,2.45925) -- (-0.666162,2.69457); \draw (-0.552767,2.26669) -- (-0.719313,2.56234); \draw (-0.552767,2.26669) -- (-0.746493,2.42244); \draw (-1.8,0.4) -- (-1.74195,0.765786); \draw (-1.8,0.4) -- (-1.95155,0.697903); \draw (-1.74195,0.765786) -- (-1.51265,1.01594); \draw (-1.74195,0.765786) -- (-1.73462,1.14426); \draw (-1.95155,0.697903) -- (-1.98716,1.09996); \draw (-1.95155,0.697903) -- (-2.1522,0.903756); \draw (-1.51265,1.01594) -- (-1.27545,1.1656); \draw (-1.51265,1.01594) -- (-1.38417,1.3413); \draw (-1.73462,1.14426) -- (-1.54899,1.46589); \draw (-1.73462,1.14426) -- (-1.74768,1.52256); \draw (-1.98716,1.09996) -- (-1.95343,1.50367); \draw (-1.98716,1.09996) -- (-2.13848,1.41177); \draw (-2.1522,0.903756) -- (-2.27786,1.25925); \draw (-2.1522,0.903756) -- (-2.35277,1.06669); \draw (-1.27545,1.1656) -- (-1.06354,1.29187); \draw (-1.27545,1.1656) -- (-1.10367,1.42861); \draw (-1.38417,1.3413) -- (-1.16894,1.5553); \draw (-1.38417,1.3413) -- (-1.257,1.66735); \draw (-1.54899,1.46589) -- (-1.36466,1.76072); \draw (-1.54899,1.46589) -- (-1.48804,1.83204); \draw (-1.74768,1.52256) -- (-1.62269,1.87874); \draw (-1.74768,1.52256) -- (-1.76374,1.89912); \draw (-1.95343,1.50367) -- (-1.90609,1.89246); \draw (-1.95343,1.50367) -- (-2.04462,1.85899); \draw (-2.13848,1.41177) -- (-2.17431,1.79992); \draw (-2.13848,1.41177) -- (-2.29049,1.71738); \draw (-2.27786,1.25925) -- (-2.38896,1.61436); \draw (-2.27786,1.25925) -- (-2.46616,1.49457); \draw (-2.35277,1.06669) -- (-2.51931,1.36234); \draw (-2.35277,1.06669) -- (-2.54649,1.22244); \draw (0,-1.1) -- (0.0580543,-0.734214); \draw (0,-1.1) -- (-0.15155,-0.802097); \draw (0.0580543,-0.734214) -- (0.287352,-0.484056); \draw (0.0580543,-0.734214) -- (0.0653794,-0.355743); \draw (-0.15155,-0.802097) -- (-0.187155,-0.400042); \draw (-0.15155,-0.802097) -- (-0.352203,-0.596244); \draw (0.287352,-0.484056) -- (0.524554,-0.3344); \draw (0.287352,-0.484056) -- (0.415834,-0.158702); \draw (0.0653794,-0.355743) -- (0.251011,-0.0341117); \draw (0.0653794,-0.355743) -- (0.0523207,0.0225614); \draw (-0.187155,-0.400042) -- (-0.153429,0.00367093); \draw (-0.187155,-0.400042) -- (-0.338477,-0.0882345); \draw (-0.352203,-0.596244) -- (-0.477858,-0.240755); \draw (-0.352203,-0.596244) -- (-0.552767,-0.433312); \draw (0.524554,-0.3344) -- (0.73646,-0.208133); \draw (0.524554,-0.3344) -- (0.69633,-0.0713876); \draw (0.415834,-0.158702) -- (0.631058,0.0552979); \draw (0.415834,-0.158702) -- (0.543,0.16735); \draw (0.251011,-0.0341117) -- (0.435337,0.260722); \draw (0.251011,-0.0341117) -- (0.311956,0.332043); \draw (0.0523207,0.0225614) -- (0.177312,0.378739); \draw (0.0523207,0.0225614) -- (0.0362648,0.399123); \draw (-0.153429,0.00367093) -- (-0.106091,0.392459); \draw (-0.153429,0.00367093) -- (-0.244617,0.358987); \draw (-0.338477,-0.0882345) -- (-0.37431,0.299917); \draw (-0.338477,-0.0882345) -- (-0.490489,0.217381); \draw (-0.477858,-0.240755) -- (-0.588958,0.114359); \draw (-0.477858,-0.240755) -- (-0.666162,-0.00542963); \draw (-0.552767,-0.433312) -- (-0.719313,-0.137659); \draw (-0.552767,-0.433312) -- (-0.746493,-0.277555); \draw (2,1.7) -- (2.05805,1.33421); \draw (2,1.7) -- (1.84845,1.4021); \draw (2.05805,1.33421) -- (2.28735,1.08406); \draw 
(2.05805,1.33421) -- (2.06538,0.955743); \draw (1.84845,1.4021) -- (1.81284,1.00004); \draw (1.84845,1.4021) -- (1.6478,1.19624); \draw (2.28735,1.08406) -- (2.52455,0.9344); \draw (2.28735,1.08406) -- (2.41583,0.758702); \draw (2.06538,0.955743) -- (2.25101,0.634112); \draw (2.06538,0.955743) -- (2.05232,0.577439); \draw (1.81284,1.00004) -- (1.84657,0.596329); \draw (1.81284,1.00004) -- (1.66152,0.688235); \draw (1.6478,1.19624) -- (1.52214,0.840755); \draw (1.6478,1.19624) -- (1.44723,1.03331); \draw (2.52455,0.9344) -- (2.73646,0.808133); \draw (2.52455,0.9344) -- (2.69633,0.671388); \draw (2.41583,0.758702) -- (2.63106,0.544702); \draw (2.41583,0.758702) -- (2.543,0.43265); \draw (2.25101,0.634112) -- (2.43534,0.339278); \draw (2.25101,0.634112) -- (2.31196,0.267957); \draw (2.05232,0.577439) -- (2.17731,0.221261); \draw (2.05232,0.577439) -- (2.03626,0.200877); \draw (1.84657,0.596329) -- (1.89391,0.207541); \draw (1.84657,0.596329) -- (1.75538,0.241013); \draw (1.66152,0.688235) -- (1.62569,0.300083); \draw (1.66152,0.688235) -- (1.50951,0.382619); \draw (1.52214,0.840755) -- (1.41104,0.485641); \draw (1.52214,0.840755) -- (1.33384,0.60543); \draw (1.44723,1.03331) -- (1.28069,0.737659); \draw (1.44723,1.03331) -- (1.25351,0.877555); \draw (1.8,-1.2) -- (1.85805,-1.56579); \draw (1.8,-1.2) -- (1.64845,-1.4979); \draw (1.85805,-1.56579) -- (2.08735,-1.81594); \draw (1.85805,-1.56579) -- (1.86538,-1.94426); \draw (1.64845,-1.4979) -- (1.61284,-1.89996); \draw (1.64845,-1.4979) -- (1.4478,-1.70376); \draw (2.08735,-1.81594) -- (2.32455,-1.9656); \draw (2.08735,-1.81594) -- (2.21583,-2.1413); \draw (1.86538,-1.94426) -- (2.05101,-2.26589); \draw (1.86538,-1.94426) -- (1.85232,-2.32256); \draw (1.61284,-1.89996) -- (1.64657,-2.30367); \draw (1.61284,-1.89996) -- (1.46152,-2.21177); \draw (1.4478,-1.70376) -- (1.32214,-2.05925); \draw (1.4478,-1.70376) -- (1.24723,-1.86669); \draw (2.32455,-1.9656) -- (2.53646,-2.09187); \draw (2.32455,-1.9656) -- (2.49633,-2.22861); \draw (2.21583,-2.1413) -- (2.43106,-2.3553); \draw (2.21583,-2.1413) -- (2.343,-2.46735); \draw (2.05101,-2.26589) -- (2.23534,-2.56072); \draw (2.05101,-2.26589) -- (2.11196,-2.63204); \draw (1.85232,-2.32256) -- (1.97731,-2.67874); \draw (1.85232,-2.32256) -- (1.83626,-2.69912); \draw (1.64657,-2.30367) -- (1.69391,-2.69246); \draw (1.64657,-2.30367) -- (1.55538,-2.65899); \draw (1.46152,-2.21177) -- (1.42569,-2.59992); \draw (1.46152,-2.21177) -- (1.30951,-2.51738); \draw (1.32214,-2.05925) -- (1.21104,-2.41436); \draw (1.32214,-2.05925) -- (1.13384,-2.29457); \draw (1.24723,-1.86669) -- (1.08069,-2.16234); \draw (1.24723,-1.86669) -- (1.05351,-2.02244); \draw (-1.9,-1.1) -- (-1.84195,-1.46579); \draw (-1.9,-1.1) -- (-2.05155,-1.3979); \draw (-1.84195,-1.46579) -- (-1.61265,-1.71594); \draw (-1.84195,-1.46579) -- (-1.83462,-1.84426); \draw (-2.05155,-1.3979) -- (-2.08716,-1.79996); \draw (-2.05155,-1.3979) -- (-2.2522,-1.60376); \draw (-1.61265,-1.71594) -- (-1.37545,-1.8656); \draw (-1.61265,-1.71594) -- (-1.48417,-2.0413); \draw (-1.83462,-1.84426) -- (-1.64899,-2.16589); \draw (-1.83462,-1.84426) -- (-1.84768,-2.22256); \draw (-2.08716,-1.79996) -- (-2.05343,-2.20367); \draw (-2.08716,-1.79996) -- (-2.23848,-2.11177); \draw (-2.2522,-1.60376) -- (-2.37786,-1.95925); \draw (-2.2522,-1.60376) -- (-2.45277,-1.76669); \draw (-1.37545,-1.8656) -- (-1.16354,-1.99187); \draw (-1.37545,-1.8656) -- (-1.20367,-2.12861); \draw (-1.48417,-2.0413) -- (-1.26894,-2.2553); \draw (-1.48417,-2.0413) -- (-1.357,-2.36735); \draw 
(-1.64899,-2.16589) -- (-1.46466,-2.46072); \draw (-1.64899,-2.16589) -- (-1.58804,-2.53204); \draw (-1.84768,-2.22256) -- (-1.72269,-2.57874); \draw (-1.84768,-2.22256) -- (-1.86374,-2.59912); \draw (-2.05343,-2.20367) -- (-2.00609,-2.59246); \draw (-2.05343,-2.20367) -- (-2.14462,-2.55899); \draw (-2.23848,-2.11177) -- (-2.27431,-2.49992); \draw (-2.23848,-2.11177) -- (-2.39049,-2.41738); \draw (-2.37786,-1.95925) -- (-2.48896,-2.31436); \draw (-2.37786,-1.95925) -- (-2.56616,-2.19457); \draw (-2.45277,-1.76669) -- (-2.61931,-2.06234); \draw (-2.45277,-1.76669) -- (-2.64649,-1.92244); \end{tikzpicture} \caption{An illustration of how we build $\sigma$-paths from $x$ and $\tau$-paths from $y$.} \label{figure} \end{figure} Let $X'=\bigcup_{i=1}^{\ell_1}X_{i}$ and $Y'=\bigcup_{i=1}^{\ell_2}Y_{i}$; then $|X'|\geq \sum_{i=1}^{\ell_1}(1-2\epsilon)|V(G_{1,i})|\geq (1-2^{r+3}\epsilon)|V(G_{1})|$ by inequality (\ref{equ:total_size}), and similarly $|Y'|\geq (1-2^{r+3}\epsilon)|V(G_{2})|$. Here, $X'$ and $Y'$ have the following property. Every vertex $z\in X'$ can be reached from $x$ by a $\sigma$-path $P_{z}$ of size at most $2L$ such that every coordinate of every vertex of $P_{z}$ is in the set $(A_{1,1}\cup \dots\cup A_{r,1}\cup U_1)\setminus U_{2}$. Also, every vertex $z'\in Y'$ can be reached from $y$ by a $\tau$-path $P_{z'}$ of size at most $2L$ such that every coordinate of every vertex of $P_{z'}$ is in the set $(A_{1,2}\cup \dots\cup A_{r,2}\cup U_2)\setminus U_{1}$. Therefore, if $z\in X'$ and $z'\in Y'$, then no vertex in $P_z$ shares a coordinate with any vertex in $P_{z'}$. In order to finish the proof, it is enough to find $z=(z_1,\dots,z_r)\in X'$ and $z'=(z_1',\dots,z_r')\in Y'$ such that the vectors $w_{i}=(z_1',\dots,z_i',z_{i+1},\dots,z_r)$, $i=1,\dots,r-1$, are all vertices of $G$. Indeed, then $P_z\cup P_{z'}$ is a $\sigma$-path of size at most $4L=\frac{c_{3}\log n}{\lambda}$ from $x$ to $y$. But this is equivalent to the statement that $\partial^{\sigma}_{G}(X')\cap Y'\neq \emptyset$. \begin{claim} Let $W\subset V(G_1)$. Then $|\partial_{G}^{\sigma}(W)\cap V(G_2)|\geq (1-\epsilon)|W|$. \end{claim} \begin{proof} For $i\in \{0,\dots,r\}$, let ${\bf e}_{i}\in \{1,2\}^r$ be the vector whose first $i$ coordinates are 2, and the last $r-i$ coordinates are 1. Let $W_{0}=W$ and for $i=1,\dots,r$, let $W_i=\partial^{(i)}(W_{i-1})\cap V(G_{{\bf e}_i})$. Then $\partial_{G}^{\sigma}(W)\cap V(G_2)=W_r$. We show that $|W_i|\geq (1-\epsilon/r) |W_{i-1}|$; from this we get $|W_r|\geq (1-\epsilon/r)^r|W|\geq (1-\epsilon)|W|$, finishing the proof. Let $\mathcal{B}$ be the set of $i$-blocks of $G$ having a nonempty intersection with $W_{i-1}$. Let $B\in \mathcal{B}$; then $B\cap V(G_{{\bf e}_i})=B\cap W_{i}$. But $|B\cap V(G_{{\bf e}_i})|\geq \frac{1-\epsilon/(2r)}{2}|B|$ and $|B\cap W_{i-1}|\leq |B\cap V(G_{{\bf e}_{i-1}})|\leq\frac{1+\epsilon/(2r)}{2}|B|$ by property 3, so $|B\cap W_i|\geq \frac{1-\epsilon/(2r)}{1+\epsilon/(2r)}|B\cap W_{i-1}|\geq (1-\epsilon/r)|B\cap W_{i-1}|$. As this is true for every block in $\mathcal{B}$, we get $|W_{i}|\geq (1-\epsilon/r)|W_{i-1}|$. \end{proof} By the previous claim, we have $$|\partial^{\sigma}_{G}(X')\cap V(G_2)|\geq (1-\epsilon)|X'|\geq (1-2^{r+4}\epsilon)|V(G_{1})|>\frac{1}{2}|V(G_2)|,$$ where the third inequality holds by the bounds on $|V(G_1)|$ and $|V(G_2)|$ from property 2. Since also $$|Y'|\geq (1-2^{r+3}\epsilon)|V(G_2)|>\frac{1}{2}|V(G_2)|,$$ we get that $\partial^{\sigma}_{G}(X')\cap Y'\neq \emptyset$, completing the proof.
\end{proof} \section{Finding a tight cycle} The following statement follows easily from Lemma \ref{lemma:density}. \begin{corollary}\label{cor:tight_cycle} There exist $c_1',c_2',c_3'>0$ depending only on $r$ such that the following holds. Let $K>1$ and let $n,d$ be positive integers such that $d\geq c_1'(\log n)^{3}$ and $K\leq c_2'\frac{d}{(\log n)^{3}}$. If $G$ is an $r$-line-graph on $n$ vertices of density at least $d$, then either $G$ contains a tight cycle, or $G$ contains a subgraph with minimum degree at least $c_3'd$ on at most $\frac{n}{K}$ vertices. \end{corollary} \begin{proof} Let $c_1,c_2,c_3,c_4$ be the constants given by Lemma \ref{lemma:density}. We show that $c_1'=\max\{64rc_{2},256r^3c_{3}\}$, $c_2'=\frac{c_1}{32r^2}$ and $c_3'=\frac{c_{4}}{4r}$ suffice. Let $\lambda=\frac{1}{2\log_2 n}$. As $\mbox{dens}(G)\geq d$, $G$ contains a subgraph $H$ that is a $(\lambda,\frac{d}{2r})$-expander, by Lemma \ref{lemma:expander}. Suppose that $H$ contains no subgraph with minimum degree at least $c_3'd$ on at most $\frac{|V(H)|}{K}\leq \frac{n}{K}$ vertices. Let $\sigma\in S_{r}$ be an arbitrary permutation, and let $x,y\in V(H)$ such that $x$ and $y$ share no coordinates. As the parameters $\lambda,\frac{d}{2r},K$ satisfy the desired conditions of Lemma \ref{lemma:density}, there exists a $\sigma$-path $P$ from $x$ to $y$ in $H$ of size at most $\frac{c_{3}\log n}{\lambda}< 4c_{3}(\log n)^2$. Let $U$ be the set of coordinates appearing in $P\setminus \{x,y\}$, and let $H'$ be the subgraph of $H$ we get after removing the elements of $U$. Note that $|U|\leq 16rc_{3}(\log n)^{2}\leq \frac{\lambda(d/(2r))}{4r}$, so we can apply Lemma \ref{lemma:expander_robust} to get that $H'$ is a $(\frac{\lambda}{2},\frac{d}{4r})$-expander. But then, applying Lemma \ref{lemma:density} again, noting that $\frac{\lambda}{2},\frac{d}{4r},K$ also satisfy the desired conditions and that $H'\subset H$ contains no subgraph with minimum degree at least $c_4\frac{d}{4r}=c_3'd$ on at most $\frac{n}{K}$ vertices, we get that $H'$ contains a $\sigma$-path $P'$ from $y$ to $x$. Observe that $P\cup P'$ is a tight cycle, finishing the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:mainthm2}] Let $c_1',c_2',c_3'$ be the constants given by Corollary \ref{cor:tight_cycle}, and let $K=e^{(\log n)^{1/2}}$. Choose $c$ sufficiently large such that the following inequalities hold for every positive integer $n$: $c\geq 2\log(1/c_3')$, $\exp(\frac{c}{2}(\log n)^{1/2})\geq c_{1}'(\log n)^{3}$ and $K(\log n)^3\leq c_2'\exp(\frac{c}{2}(\log n)^{1/2})$. Suppose for contradiction that $G=G_{0}$ does not contain a tight cycle, but $d:=\mbox{dens}(G)> e^{c(\log n)^{1/2}}$. We recursively define graphs $G_{1},G_{2},\dots$ with the following properties: \begin{enumerate} \item for $i\geq 1$, $G_{i}$ is a subgraph of $G_{i-1}$, \item $\mbox{dens}(G_{i})\geq (c_{3}')^{i}d$, \item $|V(G_{i})|\leq \frac{n}{K^{i}}$. \end{enumerate} Clearly, $G_{0}$ satisfies these properties. If $G_{i}$ is already defined satisfying these properties for some $0\leq i\leq \sqrt{\log n}$, define $G_{i+1}$ as follows. We have $$d_{i}:=\mbox{dens}(G_{i})\geq (c_{3}')^{i+1}d\geq \exp\left(c(\log n)^{1/2}-\log\left(\frac{1}{c_{3}'}\right)(\log n)^{1/2}\right)\geq \exp\left(\frac{c}{2}(\log n)^{1/2}\right),$$ which implies $d_{i}\geq c_{1}'(\log n)^{3}\geq c_{1}'(\log |V(G_{i})|)^{3}$ and $K\leq c_2'\frac{d_{i}}{(\log n)^{3}}\leq c_2'\frac{d_{i}}{(\log |V(G_i)|)^{3}}$. Therefore, we can apply Corollary \ref{cor:tight_cycle} to conclude that there exists a subgraph $G_{i+1}$ of $G_{i}$ with at most $\frac{|V(G_{i})|}{K}\leq \frac{n}{K^{i+1}}$ vertices and density at least $(c_{3}')^{i+1}d$.
However, this is a contradiction: for $I=\lfloor \sqrt{\log n}\rfloor$, the graph $G_{I}$ has fewer vertices than its density, which is impossible since the density of an $r$-line-graph is at most the size of its largest block, hence at most its number of vertices. Therefore, $G$ must contain a tight cycle. \end{proof} \section{Concluding remarks} In this paper we proved that an $r$-uniform hypergraph $\mathcal{H}$ on $n$ vertices with $dn^{r-1}$ edges, where $d\geq e^{c\sqrt{\log n}}$, contains a tight cycle of length at most $O((\log n)^{2})$. Our proof can also be used to show that one can find a cycle of any specific length $L$, divisible by $r$, such that $\Omega((\log n)^{2})<L<de^{-O(\sqrt{\log n})}$. To achieve this, one just needs to follow the remark after Lemma \ref{lemma:paths} and its consequences. It is very plausible that our approach also works when $d$ is polylogarithmic in $n$. The only place in our paper that requires a larger degree is Lemma \ref{lemma:density}. It seems that after taking a random partition of the sets $A_1,\dots,A_r$, the graphs $G_1$ and $G_2$ should also have good expansion properties. If this is the case, then one can show that $\mbox{ex}(n,\mathcal{C}^{(r)})=n^{r-1}(\log n)^{O(1)}$. On the other hand, in order to prove $\mbox{ex}(n,\mathcal{C}^{(r)})=O(n^{r-1})$, one seems to need new ideas. Finally, let us mention a related conjecture of Conlon (see \cite{MPS}) about the extremal number of tight cycles of given constant length. Let $C^{(r)}_{\ell}$ denote the $r$-uniform tight cycle of length $\ell$. \begin{conjecture}\label{conj:length} There exists $c=c(r)>0$ such that for every $\ell\geq r+1$ which is divisible by $r$, we have $\mbox{ex}(n, C^{(r)}_{\ell})=O(n^{r-1+\frac{c}{\ell}})$. \end{conjecture} \noindent Note that it is essential that $r$ divides $\ell$; otherwise, a complete $r$-partite $r$-uniform hypergraph shows that the extremal number is $\Omega(n^{r})$. For $r=3$, a solution of the above conjecture gives an improved upper bound on the maximum number of edges in a subgraph of the hypercube $\{0,1\}^n$ that contains no cycle of length $4k+2$ for large $k$. See \cite{C10} for the connection between these two problems. \vspace{0.3cm} \noindent {\bf Acknowledgement.}\, We would like to thank Jacques Verstra\"ete for bringing \cite{C10} to our attention and Stefan Glock for useful discussions.
\section{Introduction} As the demands on the infrastructure continue to increase, research into structural health monitoring (SHM) has grown in importance throughout the world. The widespread application of sophisticated SHM systems in civil infrastructure produces a large volume of data. However, the harsh environmental conditions of civil structures cause the data measured by SHM systems to be affected by multiple anomalies caused by faulty or broken sensors. These anomalies pose a significant barrier to assessing the true structural performance and severely affect the automatic warning system for damage or accidents. The identification and removal of data anomalies due to environmental variations is thus an important preprocessing step in a successful warning system. Several model-based methods have been developed in the past few decades for data anomaly detection in SHM data \cite{thiyagarajan2017predictive,abdelghani2004sensor,wan2018bayesian,wang2019modeling}. In these methods, a number of statistical models are initially constructed to predict the measurements. Using appropriate thresholds, measurements that show significant differences between predicted and measured values are identified and treated as anomalies. Faced with a massive amount of data due to the continuous monitoring of structures, researchers have recently resorted to advanced approaches such as data mining and machine learning techniques for anomaly detection. \cite{bao2019computer} proposed a computer vision and deep learning–based data anomaly detection method in which the raw time series measurements are first transformed into image vectors, which are then fed into Deep Neural Networks (DNN) trained to identify various anomalies in SHM data. \cite{fu2019sensor} used a similarity test based on power spectral density to detect anomalies and then trained an artificial neural network to identify the different types of sensor anomalies. \cite{tang2019convolutional} proposed the use of a Convolutional Neural Network (CNN) for anomaly detection that learns from multiple types of graphical information. The visualizations of the time series measurements in the time and frequency domains are fed to the neural networks, which learn the characteristics of each of the anomalies during training. The trained network is then used to identify and classify various anomalies. \cite{mao2020toward} used Generative Adversarial Networks (GAN) in combination with autoencoders to identify anomalies. The raw time series from the SHM system are first transformed into Gramian Angular Field (GAF) images, which are then used to train the GAN and autoencoders to identify anomalies. This paper contributes to this effort by proposing the use of a relatively new time series representation named “Shapelet Transform” in combination with Random Forest classifiers for anomaly detection in SHM data. The shapelet transform is a unique time series representation technique that is based solely on the shape of the time series. The raw measurements of every sensor anomaly have a unique time series shape. The shapelet transform utilizes this feature to capture these distinct shapes, and the Random Forest classifier uses these shapes to identify and classify the different anomalous data patterns from a large SHM system database.
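To make the proposed pipeline concrete, the following is a minimal sketch (in Python, using scikit-learn) of the final classification stage: a Random Forest trained on shapelet-transformed features. The array sizes, the random placeholder data and the classifier settings are illustrative assumptions only, not the implementation used in this study; the construction of the actual feature matrix is described in section 4.
\begin{verbatim}
# Illustrative sketch only: Random Forest on shapelet-transformed features.
# The sizes (700 series, 50 shapelets, 7 classes) are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row describes one 1-hour envelope by its distances to the
# discovered shapelets (the "local-shape space"); labels 1-7 mark the
# normal pattern and the six anomaly patterns.
X = np.random.rand(700, 50)            # placeholder shapelet-distance matrix
y = np.random.randint(1, 8, size=700)  # placeholder class labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
\end{verbatim}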
In terms of applicability, shapelets have been utilized in a wide variety of domains including motion-capture \cite{ye2009time,ye2011time,lines2012shapelet,hartmann2010gesture}, spectrographs \cite{ye2009time,ye2011time}, tornado prediction \cite{mcgovern2011identifying}, detection of natural hazards \cite{arul2020applications}, and medical and health informatics \cite{ghalwash2013extraction,xing2011extracting,xing2012early}, among others. In the present study, the efficacy of this method is demonstrated by the identification of anomalies in SHM data obtained from a long-span bridge in China. The article is organized as follows. A general overview of the shapelet transform is provided in section 2. A brief description of the SHM data used for this study is given in section 3. The methodology for the proposed anomaly detection process is elaborated in section 4. In this section, the different stages involved in the shapelet transform are explained in great detail along with illustrative examples. The section also explains the step-by-step procedure for the detection of anomalous patterns in SHM data obtained from the long-span bridge. Finally, a comprehensive summary of the anomalies detected using the shapelet transform is provided in sections 5 and 6. \section{Overview of shapelet transform} Consider time series 1 and 2 generated as a result of an event as shown in Fig. 1. Both time series have long stretches of aperiodic waveforms. However, a local shape that differs substantially from the rest of the time series appears for a short duration. These localized shapes are called shapelets. These discriminatory shapes, which are phase independent, serve as a powerful feature for identifying anomalous patterns or classifying events from a large database containing continuous records. \begin{figure}[htbp] \centering \captionsetup{justification=centering} \includegraphics[scale=0.8]{tsshapelet.pdf} \caption{Time series shapelets} \label{fig:fig1} \end{figure} Time series shapelets stem from the desire to reify the innate human capacity to visualize the shape of data and almost instantly identify similarities and differences between patterns. Shapelets help computers perform this complex task by identifying the local or global similarity of shape that can offer an intuitively comprehensible way of understanding continuous time series. The shapelets, once discovered, can then be used to transform data into a local-shape space where each feature is the distance between a shapelet and a time series \cite{lines2012shapelet}. The result of this transform is that the new representation can be fed to any standard machine learning algorithm to identify anomalous patterns. The shapelet transform has five major stages: generation of shapelet candidates, distance calculation between a shapelet and a time series, assessment of the quality of shapelets, discovery of shapelets, and data transformation. Each of these stages will be elaborated in detail in the following sections. \section{Data description} In this paper, an SHM dataset from a long-span cable-stayed bridge in China is used. The main span of the bridge is 1088 m, the two side spans are 300 m each, and the two towers are 306 m high. The structural health monitoring system of the bridge consists of 38 sensors, whose locations are illustrated in Fig. 2. The sensors include accelerometers, anemometers, strain gauges, a global positioning system (GPS), and thermometers.
For the present case, one month (2012-01-01 to 2012-01-31) of acceleration data for all 38 sensors of the SHM system is considered for anomaly detection. The sampling frequency of the accelerometers is 20 Hz. The continuous raw measurements are broken down into 1-hour segments, so that 744 time series measurements are obtained for each sensor over the one-month period, resulting in a total of $744 \times 38 = 28{,}272$ datasets. The characteristics of the normal data and the six classes of anomalies found in the dataset are described in Table 1. Examples of each data pattern are shown in Fig. 3. The normal time series measurement is labelled as 1 and the other six data anomaly patterns are labelled from 2 to 7. From Table 1, it can be seen that nearly 52\% of the data are anomalous. The “trend” pattern is the most common anomaly, constituting 20\% of the dataset, followed by “missing” and “square”, each accounting for around 10\%. On the other hand, the “outlier” pattern accounts for only 1.9\% of the dataset, followed by “drift”, which constitutes 2.4\% of the data.

\begin{figure}[htbp]
\centering
\captionsetup{justification=centering}
\includegraphics[scale=0.9]{bridge.jpg}
\caption{The bridge and the placement of accelerometers on the deck and towers}
\label{fig:fig2}
\end{figure}

\begin{table}[htbp]
\begin{center}
\captionof{table}{Description of anomalous data patterns}
\label{foobar}
\includegraphics[scale=0.9]{tbl.pdf}
\end{center}
\end{table}

\begin{figure}[htbp]
\centering
\captionsetup{justification=centering}
\includegraphics[scale=0.8]{pattern.jpg}
\caption{Examples of each anomaly pattern in the SHM data}
\label{fig:fig3}
\end{figure}

\section{Methodology for anomaly detection}
The methodology for anomaly detection in SHM data involves three major steps, as shown in Fig. 4. In the first step, the raw time series measurements are broken down into 1-hour segments as mentioned before. The peak envelopes of the time series are extracted to easily visualize the shape of the time series.

\begin{figure}[htbp]
\centering
\captionsetup{justification=centering}
\includegraphics[width=\textwidth,scale=0.9]{method.pdf}
\caption{Methodology for anomaly detection in SHM data}
\label{fig:fig4}
\end{figure}

A time series learning set is constructed along with class labels using these envelopes as shapes. Once the learning set is ready, it is transformed into a local-shape space via the five stages of the shapelet transform described in section 2. In the second step, the original time series-based learning set is transformed into a local-shape space where each element is the distance between a shapelet and a time series. In this transformed learning set, the features are the discovered shapelets and the instances are the individual time series envelopes. This set is fed to a Random Forest classifier for training. Once the training is complete, the trained classifier is used, in the third step, to classify normal and anomalous data in new incoming time series from the SHM system.

\subsection{Step 1: Discovery of shapelets and shapelet transform}
\subsubsection{Preprocessing of raw data}
Based on visual inspection of Fig. 3, it is easy to differentiate between the different anomalies. However, it is quite difficult to use the raw time histories for shapelet detection due to the long periods of periodic waveforms present in the vibrations.
This can be overcome by extracting the envelopes of the acceleration time history, which give an overall shape to the vibration time series. The envelopes can then be easily used as input for the discovery of shapelets. Fig. 5 shows the extraction of peak envelopes of the bridge acceleration time history, calculated using a moving window. The peak envelope is used here instead of a root-mean-square (RMS) envelope, as the peak provides better differentiation between anomalous patterns when noisy or spurious signals are present. With the peak envelopes of the anomalies, the classification of anomalous data becomes a much easier task. Considering the computational demand of the algorithm, the envelopes obtained from the raw time series are downsampled to 1 Hz to improve the efficiency of the algorithm. Downsampling the data did not affect the shapes of the envelopes, and hence the reliability of the method remains unchanged.

\begin{figure}[htbp]
\centering
\captionsetup{justification=centering}
\includegraphics[scale=0.8]{env.pdf}
\caption{Extraction of peak envelopes from anomalous patterns}
\label{fig:fig5}
\end{figure}

\subsubsection{Generation of shapelet candidates}
Consider a time series dataset ${TS}$ and let \textit{C} be the set of corresponding class labels for each time series. A time series learning set $\Phi = \left\{ TS,C \right\}$ is first created as a vector of instance input-output pairs ${{\Phi }_{i}}=({{TS}_{i}},{{C}_{i}})$. Every subsequence of every time series in ${\Phi}$ is considered a potential shapelet candidate, and a time series \textit{TS} of length \textit{m} contains $\left( m-l \right)+1$ distinct subsequences of length \textit{l}. If ${{W}_{1}}$ is the set of all candidate shapelets in a time series ${{TS}_{1}}$, grouped by length, then
\begin{equation}
\ {W}_{1}=\left\{ {{w}_{\min }},{{w}_{\min +1}},...,{{w}_{\max }} \right\}\
\end{equation}
where $\min \ge 3$, as this is the minimum meaningful length for a time series, and $\max \le m$.

For the present case, the time series learning set consists of 700 labeled time series envelopes that are extracted from the raw measurements, as shown in Fig. 6. It should be noted that the learning set contains an equal number of samples of each pattern obtained from the SHM data, i.e., the set contains 100 samples of the “normal” pattern, 100 samples of the “missing” pattern, 100 samples of the “minor” pattern, and so on. This is done to achieve a balanced training set and avoid classifier bias during the detection of anomalies. The reason for choosing 100 as the sample number is as follows. The data from 2012-01-01 to 2012-01-16 is used for training the algorithm and the data from the other fifteen days (2012-01-17 to 2012-01-31) is used for testing. In the training dataset, the “outlier” pattern had the lowest count, about 100 datasets. Hence this number has been established as the baseline for the number of samples of each pattern. Thus the time series learning set ${\Phi}$ has a total of 700 datasets, as shown in Fig. 6. Each time series in the training set has 3600 data points, as the sampling frequency is 1 Hz. Consider the first time series in the training set for illustration. As per Eq. (1),
\begin{equation}
\ {W}_{1}=\left\{ {{w}_{3}},{{w}_{4}},...,{{w}_{3599}},{{w}_{3600}} \right\}\
\end{equation}
where ${{w}_{3}}$ (subsequences of 3 data points) corresponds to the shortest shapelet length and ${{w}_{3600}}$ (the entire time series) to the longest. A minimal sketch of this preprocessing and candidate enumeration is given below.
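The following is a minimal NumPy sketch of this preprocessing and candidate enumeration. The function and parameter names (e.g., \texttt{peak\_envelope}, \texttt{window}) are illustrative and not taken from the published implementation, and the moving-window size is an assumed value.

\begin{verbatim}
import numpy as np

def peak_envelope(x, window=20):
    """Peak envelope via a moving-window maximum of the absolute
    amplitude (window given in samples; assumed value)."""
    pad = window // 2
    xp = np.pad(np.abs(x), pad, mode="edge")
    return np.array([xp[i:i + window].max() for i in range(len(x))])

def downsample(env, factor=20):
    """Down-sample a 20 Hz envelope to 1 Hz by keeping every
    factor-th sample."""
    return env[::factor]

def shapelet_candidates(ts, l_min=3, l_max=None):
    """Yield every subsequence of ts with length l_min..l_max; each
    is a shapelet candidate (Eq. (1)). A series of length m yields
    (m - l) + 1 candidates of each length l."""
    l_max = len(ts) if l_max is None else l_max
    for l in range(l_min, l_max + 1):
        for start in range(len(ts) - l + 1):
            yield ts[start:start + l]
\end{verbatim}

In this sketch, each 1-hour record sampled at 20 Hz (72,000 points) is reduced to a 3600-point envelope at 1 Hz, matching the series length used above.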
Thus, the set ${{W}_{1}}$ contains shapelet candidates of 3598 different lengths obtained from the first time series. In a similar way, shapelet candidates are generated from all the time series in the learning set.

\begin{figure}[htbp]
\centering
\captionsetup{justification=centering}
\includegraphics[scale=0.5]{generation.pdf}
\caption{Illustration of the generation of shapelet candidates for each time series in the time series learning set}
\label{fig:fig6}
\end{figure}

\subsubsection{Shapelet distance calculation}
The Euclidean distance is used as the similarity measure for shapelets, and the squared Euclidean distance between a subsequence \textit{X} of length \textit{l} and another subsequence \textit{Y} of the same length is defined as:
\begin{equation}
\ d(X,Y)=\sum\limits_{i=1}^{l}{{{\left( {{x}_{i}}-{{y}_{i}} \right)}^{2}}}\
\end{equation}

\begin{figure}[htbp]
\centering
\captionsetup{justification=centering}
\includegraphics[scale=0.65]{dist.pdf}
\caption{Illustration of the Euclidean distance calculation between a candidate shapelet S1 and the time series in the learning set}
\label{fig:fig7}
\end{figure}

The distance between a potential shapelet candidate and all series in \textit{TS} is computed to create a list of \textit{n} distances called an orderline ${D}_{S}$. An orderline consists of the distance values and the class label corresponding to the time series for which each distance value is calculated. The orderline is then sorted in increasing order of the distance value. In the present study, each time series yields shapelet candidates of 3598 different lengths. Each of these candidates is then compared with the other time series using the Euclidean distance. For illustration purposes, consider a shapelet candidate ${{S}_{1}}$ as shown in Fig. 7. The shapelet candidate moves over every time series and the minimum distance between the candidate and the time series is noted. If the shapelet candidate is generated from a pattern that differs from the time series being compared to, it will lead to a large Euclidean distance. However, if the shapelet is similar to the series being compared to, it will have a small Euclidean distance, as seen for ${{d}_{S1,7}}$ in Fig. 7. Thus, the distance between a shapelet candidate ${{S}_{1}}$ and all the time series in \textit{TS} is given by
\begin{equation}
\ {{D}_{S}}=\left\langle {{d}_{S1,1}},{{d}_{S1,2}},...,{{d}_{S1,n}} \right\rangle \
\end{equation}
Calculating ${D}_{S}$ is a time-consuming task, and hence a number of speed-up techniques have been proposed in the literature to handle the large volume of calculations \cite{ye2009time,mueen2011logical,ye2011time,hills2014classification,rakthanmanon2013fast}.

\subsubsection{Assessment of shapelet quality}
Information Gain (IG) \cite{shannon1949mathematical} is the standard approach to calculate the quality of a shapelet \cite{ye2009time,mueen2011logical,ye2011time}. If a time series dataset \textit{T} can be split into two classes, \textit{1} and \textit{2}, then the entropy of \textit{T} is:
\begin{equation}
\ H(T)=-p(1)\log (p(1))-p(2)\log (p(2))\
\end{equation}
where \textit{p(1)} and \textit{p(2)} are the proportions of objects in classes \textit{1} and \textit{2}, respectively. Every splitting strategy partitions the dataset \textit{T} into two sub-datasets ${T}_{I}$ and ${T}_{II}$, and the Information Gain of a split is the difference between the entropy of the entire dataset and the weighted average of the entropies of the two partitions. A minimal sketch of the distance and quality computations is given below.
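The NumPy sketch below illustrates both computations; the function names are illustrative and not taken from the published implementation. \texttt{min\_dist} implements the sliding minimum distance of Eq. (3) used to build the orderline of Eq. (4), and \texttt{best\_split\_ig} scans a sorted orderline for the split point with the highest information gain (the quantity defined in Eq. (6) below, written here so that it also generalizes to more than two classes).

\begin{verbatim}
import numpy as np

def min_dist(shapelet, series):
    """Minimum squared Euclidean distance (Eq. (3)) between a
    shapelet and all equal-length subsequences of a longer series."""
    l = len(shapelet)
    return min(np.sum((series[i:i + l] - shapelet) ** 2)
               for i in range(len(series) - l + 1))

def entropy(labels):
    """Shannon entropy of a collection of class labels (cf. Eq. (5))."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

def best_split_ig(orderline):
    """Sort an orderline of (distance, label) pairs by distance and
    return the information gain (Eq. (6)) of the best split point."""
    labels = [label for _, label in sorted(orderline)]
    n, h_total, best = len(labels), entropy(labels), 0.0
    for i in range(1, n):  # candidate split between positions i-1, i
        ig = h_total - (i / n) * entropy(labels[:i]) \
                     - ((n - i) / n) * entropy(labels[i:])
        best = max(best, ig)
    return best
\end{verbatim}

The exhaustive scan sketched here is exactly the kind of computation that the speed-up techniques cited above (e.g., early abandoning of distance calculations) are designed to avoid.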
In the present case, the splitting rule is based on the distance from the shapelet candidate ${S}$ to every series in the dataset. The best possible shapelet will generate small distance values when compared to time series of its own class and large distance values for time series from the other class. Thus the best arrangement of the orderline is to have all the distance values corresponding to the class of the shapelet in ${T}_{I}$ and all others in ${T}_{II}$. Thus, the information gain for each split is calculated as:
\begin{equation}
\ IG=H(T)-\left( \frac{|{{T}_{I}}|}{|T|}H({{T}_{I}})+\frac{|{{T}_{II}}|}{|T|}H({{T}_{II}}) \right)\
\end{equation}
where $0\le IG\le 1$.
\vspace{\baselineskip}

\begin{figure}[htbp]
\centering
\captionsetup{justification=centering}
\includegraphics[scale=0.9]{order.pdf}
\caption{One-dimensional representation of the arrangement of time series objects by their distance to the candidate shapelet. The Information Gain is calculated for each possible split point}
\label{fig:fig8}
\end{figure}

The same procedure extends to the 7-class problem considered in the present study. For illustration purposes, consider the shapelet candidate ${S}_{1}$ mentioned in the previous section. ${S}_{1}$ is compared with the 699 other time series in the learning set, and thus 699 distances are obtained. These distance values are sorted in increasing order in the orderline and the information gain is calculated as shown in Fig. 8. The same procedure is applied to all the shapelet candidates that are generated. Candidates that surpass the specified information gain threshold (0.05 in the present case) are retained, and the others are discarded. This ensures that the selected shapelets are meaningful and have discriminatory power. Predetermining the optimal shapelet length is impossible and unnecessary, and doing so can hinder the detection accuracy of the algorithm. It is also very difficult to interpret the variety of shapelet lengths obtained from the algorithm, as these lengths have been chosen from several thousand candidate lengths that were compared with many other time series. However, the shapelet algorithm does provide an option to set the maximum and minimum shapelet lengths to achieve a speedup. This option should be used with care and should only be utilized in cases where only certain shapelet lengths are of interest.

\subsubsection{Discovery of shapelets and shapelet transform}
\begin{figure}[htbp]
\centering
\captionsetup{justification=centering}
\includegraphics[scale=0.7]{sdisc.pdf}
\caption{Shapelets discovered for anomaly detection}
\label{fig:fig9}
\end{figure}

An algorithm combining all of the above-mentioned components of shapelet discovery was developed by \cite{bostrom2017binary,lines2012shapelet,hills2014classification} and is available at \url{www.timeseriesclassification.com}. The same algorithm has been adopted and modified to suit the datasets under consideration in the present study. The input to the algorithm is the time series learning set ${\Phi}$. As mentioned in the previous sections, the default minimum length of the shapelets is set to 3 and the maximum length is equal to the length of each individual time series. The number of shapelets to store (\textit{r}) is set to a default of 10 times the number of time series in the training set.
Moreover, based on the number of classes (\textit{numC}) in the training set, a limit of \textit{r/numC} shapelets is set as the maximum number of shapelets to store per class. The minimum information gain threshold is set to a default value of 0.05, which ensures that poor-quality shapelets below this threshold are removed during the shapelet finding process. Using the provided parameters, the algorithm then makes a single pass through the time series data in ${\Phi}$, taking each subsequence of every time series as a potential shapelet candidate. The generated shapelet candidates are also normalized to make them independent of scale and offset. The distance between each shapelet candidate and the time series in the training dataset is calculated, and the orderline ${D}_{S}$ is formed to assess the quality of the shapelets using Information Gain. Once all the shapelets in a time series have been assessed, the poor-quality shapelets are removed and the rest are added to the shapelet set. After all the time series in the training set have been evaluated this way, the algorithm returns the discovered shapelets.

For the present study, the shapelet algorithm was implemented in Python as a single-core serial job on an Intel Xeon Processor E5-2620 (2.6-GHz CPU); it ran for 1 hour and discovered a total of 68 shapelets. Various shapes were discovered for each of the 7 data patterns. Examples of some of the top shapelets discovered by the algorithm are shown in Fig. 9 along with their information gain. Shapelets corresponding to the “missing” and “trend” patterns have the highest information gain, as these shapes separate the classes easily. They are followed by the shapelet from the “square” pattern, which has multiple distinctive dips that are absent in the other classes. Shapelets from the “normal” and “minor” patterns have similar information gain, as their discriminatory power is lower than that of the other shapelets.

\begin{figure}[htbp]
\centering
\captionsetup{justification=centering}
\includegraphics[scale=0.9]{st.pdf}
\caption{Shapelet Transform containing a matrix of Euclidean distances between the discovered shapelets and the other time series in the learning set}
\label{fig:fig10}
\end{figure}

Once the shapelets are discovered, the next step is to transform the learning set ${\Phi}$ into a local-shape space where each feature is the distance between a shapelet and a time series. Given a time series dataset \textit{TS} containing \textit{n} time series and a set of \textit{r} discovered shapelets, the shapelet transform algorithm calculates the minimum distance between each discovered shapelet and each time series in the dataset. This transformation creates a matrix \textit{G} with \textit{n} rows and \textit{r} columns, as illustrated in Fig. 10, where each element is the minimum Euclidean distance between a shapelet and a time series, with the class values appended to the end of each row. The matrix \textit{G} now serves as a standard instance-attribute dataset that can be used with any supervised or unsupervised machine learning algorithm. In the present study, the shapelet transform constructs a 3500 x 68 matrix where each element corresponds to the minimum Euclidean distance between a shapelet and a time series. A minimal sketch of this transform, together with the classifier training of Step 2 below, is given next.
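The sketch below (plain NumPy with scikit-learn) builds the distance matrix and, anticipating Step 2, trains the classifier on it. It reuses \texttt{min\_dist} from the earlier sketch; the variables \texttt{train\_series}, \texttt{train\_labels}, \texttt{test\_series} and \texttt{shapelets} are placeholders for the learning set, its class labels, the test set and the discovered shapelets described in the text.

\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def shapelet_transform(series_list, shapelets):
    """Build the n x r matrix G of Fig. 10: entry (i, j) is the
    minimum Euclidean distance between series i and shapelet j."""
    return np.array([[min_dist(s, ts) for s in shapelets]
                     for ts in series_list])

# Step 2: train on the transformed learning set (500 trees, see below).
G_train = shapelet_transform(train_series, shapelets)
clf = RandomForestClassifier(n_estimators=500)
clf.fit(G_train, train_labels)

# Step 3: transform incoming data the same way and classify it.
G_test = shapelet_transform(test_series, shapelets)
predicted = clf.predict(G_test)
\end{verbatim}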
\subsection{Step 2: Training of shapelet-based classifier}
The shapelet-based classifier originally developed by \cite{ye2009time} embeds shapelet finding in a decision tree classifier, where shapelets are found at every node. Many researchers have since demonstrated that higher accuracy can be achieved by using shapelets with more complex classifiers or ensembles of classifiers than with decision trees, where overfitting is a major issue \cite{lines2012shapelet,hills2014classification,bagnall2017great,bostrom2017shapelet}. For the present study, a Random Forest \cite{breiman2001random} is used as the classifier for time series classification. The Random Forest algorithm addresses the issues with decision trees by classifying examples using a multitude of decision trees and predicting the class of a sample based on the mean probability estimate across all trees. Thus, a Random Forest classifier with 500 trees is trained on the shapelet-transformed dataset. Hills et al. \cite{hills2014classification} and Bagnall et al. \cite{bagnall2017great} compared the performance of the shapelet transform using several standard classifiers and ensemble classifiers on a variety of datasets from the UCR time series repository. According to their studies, a shapelet-based Random Forest classifier with 500 trees is found to be optimal, and hence the same is adopted in the present study. It is also found that increasing the number of trees beyond 500 does not result in any significant increase in accuracy.

\subsection{Step 3: Anomaly detection}
As mentioned in section 4.1.1, the data from 2012-01-01 to 2012-01-16 is used for training the algorithm and the data from the other fifteen days (2012-01-17 to 2012-01-31) is used for testing. The raw measurements are broken down into 1-hour segments, resulting in a total of 13,679 datasets. The peak envelopes of the time series are extracted and downsampled to 1 Hz. The shapelet transform algorithm is applied to the test set to transform the data into the shape space, where each element is the minimum Euclidean distance between a discovered shapelet and a time series in the test set. Thus a 13,679 x 68 matrix is obtained, where the 13,679 rows are the time series instances and the 68 columns are the shapelet-based features. The trained Random Forest classifier is then tested on this transformed test set.

\section{Results and discussion}
The detection of anomalies using the shapelet-based classifier was implemented as a single-core serial job on an Intel Xeon Processor E5-2620 (2.6-GHz CPU), and the algorithm took 2.5 hours to output the results. The detection results are shown in Table 2. The following definitions help in interpreting the performance metrics of the classifier.

\begin{table}[htbp]
\begin{center}
\captionof{table}{Performance metrics of the shapelet-based Random Forest classifier}
\label{resbar}
\includegraphics[scale=0.9]{tbltworev.pdf}
\end{center}
\end{table}

\subsection{Terminologies used in assessing the performance of the classifier}
\subsubsection{True Negative (TN)}
The actual value is False, and the classifier also predicted False.
\subsubsection{False Positive (FP)}
The actual value is False, and the classifier predicted True.
\subsubsection{False Negative (FN)}
The actual value is True, and the classifier predicted False.
\subsubsection{True Positive (TP)}
The actual value is True, and the classifier also predicted True.
\subsubsection{Accuracy}
Accuracy is the sum of true positives and true negatives divided by the total number of instances.
From the confusion matrix, accuracy is the sum of the elements on the diagonal divided by the total number of predictions made:
\begin{equation}
\ Accuracy=\frac{TP+TN}{TP+TN+FP+FN}\
\end{equation}
\subsubsection{Precision}
Precision is the ratio of the number of correct positive predictions to the total number of positive predictions made. If a class has high precision, then a prediction of this class is most likely to be correct. Precision is given by:
\begin{equation}
\ Precision=\frac{TP}{TP+FP}\
\end{equation}
\subsubsection{Recall}
Recall is the ratio of the number of correct positive predictions to the number of actual positive instances. If a class has high recall, then the classifier correctly identifies most instances of that class. Recall is given by:
\begin{equation}
\ Recall=\frac{TP}{TP+FN}\
\end{equation}
\subsubsection{F1 Score}
The F1 score is the harmonic mean of precision and recall and is a combined measure of the two. The F1 score is high when both precision and recall are high:
\begin{equation}
\ F1\,Score=2\times \frac{Precision\times Recall}{Precision+Recall}\
\end{equation}

\subsection{Discussion of results}
\begin{figure}[htbp]
\centering
\captionsetup{justification=centering}
\includegraphics[scale=0.9]{confusion.pdf}
\caption{Confusion matrix for detected anomalies}
\label{fig:fig11}
\end{figure}

The performance metrics are also shown visually in the form of a confusion matrix in Fig. 11. In the confusion matrix, the diagonal elements are the correctly classified instances, and their corresponding precision is provided underneath within brackets. An overall accuracy of 93\% is obtained using the shapelet-based classifier. The individual accuracies of all the classes are above 95\%, with classes 2 and 5 having an accuracy of about 100\%. In terms of precision and recall, the classes “normal”, “square” and “trend” have high values of over 90\%, with class “missing” reaching a maximum of 100\%. For the classes “outlier” and “drift”, even though the precision is high, the recall is very low. This is because a number of instances in class “normal” were predicted as belonging to class “outlier” due to the presence of significant outliers. Also, some instances in class “normal” were predicted as class “square”, as these time series bear a very close resemblance to a square shape. Similarly, a number of instances in class “trend” were predicted as belonging to class “drift”, as the time series closely resemble class “drift”. Each of these cases is examined in detail and remedial measures are proposed in the following sections. Meanwhile, since the learning set in the present study is constructed as a well-balanced dataset, accuracy, of all the performance metrics, can be used as a useful indicator of the performance of the classifier. Based on the high individual and overall accuracies, the proposed shapelet-based classifier has a very good ability to identify anomalies in SHM data.

\section{Remedial measures for increasing the performance of the classifier}
\subsection{Removal of outliers during pre-processing}
From the confusion matrix, it can be seen that 329 instances in class “normal” are predicted as class “outlier”. On closer inspection, it is found that outliers affect not only the instances in class “outlier” but also the instances in class “normal”. One such example of a class “normal” instance with outliers is shown in the upper left corner of Fig. 12.
This confuses the machine learning algorithm, as it has learned that “outlier” is the only class with large outliers. Hence it is wise to remove all the predominant outliers in the preprocessing step so that class “outlier” becomes a pure class that only contains datasets with significant outliers. This can be done easily using the \texttt{rmoutliers} function in Matlab, which detects and removes predominant outliers according to a user-specified window. It should be noted that this function not only removes outliers from class “normal”, it also removes significant outliers in class “outlier”. From Fig. 12, it can be seen in the first column that, after removal of outliers, class “normal” appears clean. This will increase the accuracy of the classifier, as it will no longer be confused by the presence of outliers in class 1. In the second column, a single outlier in an instance of class “outlier” is removed, which transforms the time series to class “minor”. In the third column, even after the removal of predominant outliers, certain persistent outliers remain, and hence this instance still belongs to class “outlier”. Relabeling datasets in this way after outlier removal will lead to pure classes, which in turn leads to better classifier performance.

\begin{figure}[htbp]
\centering
\captionsetup{justification=centering}
\includegraphics[scale=0.6]{outlier.pdf}
\caption{Removal of outliers during pre-processing}
\label{fig:fig12}
\end{figure}

\subsection{Detrending time series during pre-processing}
It can be seen from the confusion matrix that 263 instances in class “trend” are labelled as class “drift”. On close inspection, it is noted that the instances in class “trend” are essentially instances of class “drift” with a trend superimposed on the time series. Since the algorithm is trained on time series envelopes, the classifier finds many similarities between the two classes. Moreover, class “trend” contains a variety of trends, increasing from left to right and vice versa, which introduces difficulty during the learning process. In Fig. 13, a time series instance in class “trend” is detrended, and it transforms to class “drift”. After thorough inspection, it is found that this is the case for all of the instances in class “trend”. So, if all the time series instances are detrended this way during the pre-processing step, the 7-class problem is transformed into a 6-class problem, which is a considerable advantage in terms of computational efficiency and classifier performance. Detrending can be applied together with the removal of outliers in the pre-processing step, and the datasets need to be relabeled before being fed to the machine learning algorithm. These simple preprocessing steps will drastically increase the performance of the classifier.

\begin{figure}[htbp]
\centering
\captionsetup{justification=centering}
\includegraphics[scale=0.6]{trend.pdf}
\caption{Detrending during pre-processing}
\label{fig:fig13}
\end{figure}

\section{Conclusion}
Anomaly detection is a long-standing problem in the SHM community. In this paper, this fundamental problem is addressed by autonomously identifying anomalous data patterns in one month of acceleration data from an SHM system installed on a long-span bridge in China. This is achieved using a relatively new and efficient time series representation named “Shapelet Transform”, which is combined with a machine learning algorithm (a Random Forest classifier) to identify anomalies in SHM data.
The shapelet transform is a unique time series representation technique that is solely based on the shape of the time series and provides a universal, standard feature for detection based on the distance between a shapelet and a time series. The raw measurements of each type of sensor anomaly have a unique time series shape, and the shapelet transform utilizes this feature to easily capture these distinct shapes. These shapes are used to transform the SHM data into a local-shape space, and a Random Forest classifier is then trained on this transformed dataset to identify and classify the different anomalous data patterns. The data used in the current study contains six different anomalous time series patterns. From the one-month acceleration data, the first sixteen days are used for training the algorithm and the data from the other fifteen days are used for testing. A balanced dataset is created that contains an equal number of samples from all classes of anomalies. The shapelet algorithm discovered 68 shapes from the training dataset. These shapes are used to transform the dataset into a local-shape space, and the transformed dataset is then used to train a Random Forest classifier for anomaly detection. The classifier has an overall accuracy of 93\%, which indicates that the proposed shapelet-based classifier has a very good ability to identify anomalies in SHM data; the individual accuracies of all the classes are also above 95\%. Various pre-processing measures are proposed in this paper to increase the classifier performance even further, and these will be pursued in future studies.

\section*{Data and resources}
The structural health monitoring data of the long-span bridge was obtained from the organizers of the 1st International Project Competition for Structural Health Monitoring (IPC-SHM), 2020 (\url{http://www.schm.org.cn/#/IPC-SHM,2020}). The basic algorithm for shapelet discovery is available at “Anthony Bagnall, Jason Lines, William Vickers, and Eamonn Keogh, The UEA \& UCR Time Series Classification Repository” (\url{www.timeseriesclassification.com}). Additional information related to this paper may be requested from the authors.

\section*{Funding}
This work was supported in part by the Robert M. Moran Professorship and a National Science Foundation Grant (CMMI 1612843).

\bibliographystyle{unsrt}
\section*{Introduction}
The pursuit of methods to robustly and accurately measure animal behavior is at least as old as the scientific study of behavior itself~\cite{klette2008understanding}. Trails of hominid footprints, ``motion'' captured by Pliocene deposits at Laetoli that date to 3.66 million years ago, firmly established that early hominids achieved an upright, bipedal and free-striding gait~\cite{leakey1979pliocene}. Beyond fossilized locomotion, behavior can now be measured in a myriad of ways: from GPS trackers and videography to microphones and tailored electronic sensors~\cite{kays2015terrestrial, brown2013observing, camomilla2018trends}. Videography is perhaps the most general and widely used method, as it allows noninvasive, high-resolution observations of behavior~\cite{johansson1973visual, o2010camera, weinstein2018computer}. Extracting behavioral measures from video poses a challenging computational problem. Recent advances in deep learning have tremendously simplified this process~\cite{wu2020recent, mathis2020deep}, which quickly impacted neuroscience~\cite{mathis2020deep, datta2019computational}.

\medskip

In this primer we review markerless (animal) motion capture with deep learning. In particular, we review the principles of the algorithms, highlight their potential, discuss pitfalls for experimentalists, and compare them to alternative methods (inertial sensors, markers, etc.). Throughout, we also provide glossaries of relevant terms from deep learning and hardware. Furthermore, we discuss how to use these methods, what pitfalls to avoid, and provide perspectives on what we believe will and should happen next.

\medskip

\begin{figure*}[htp]
\centering
\includegraphics[width=\textwidth]{fig/figure1.jpg}
\caption{%
{\bf Schematic overview of markerless motion capture or pose estimation.} The pixel representation of an image (left) or sequence of images (video) is processed and converted into a list of keypoints (right). Semantic information about object identity and keypoint type is associated with the predictions. For instance, the keypoints are structures with a name (e.g., ear), x and y coordinates, as well as a confidence readout of the network (often, but not always, included in pose estimation packages), and are grouped according to individuals (subjects).}
\label{fig:overview}
\end{figure*}

What do we mean by ``markerless motion capture?'' While biological movement can also be captured by dense, or surface models~\cite{mathis2020deep, guler2018densepose, Zuffi20163dMenagerie}, here we will almost exclusively focus on ``keypoint-based pose estimation.'' The motion of humans and many other animals is determined by the geometric structures formed by several pendulum-like motions of the extremities relative to a joint~\cite{johansson1973visual}. Seminal psychophysics studies by Johansson showed that just a few coherently moving keypoints are sufficient to be perceived as human motion~\cite{johansson1973visual}. This empirically highlights why pose estimation is a great summary of such video data. Which keypoints should be extracted, of course, dramatically depends on the model organism and the goal of the study; e.g., many are required for dense, 3D models~\cite{guler2018densepose, sanakoyeu2020transferring, Zuffi20163dMenagerie}, while a single point can suffice for analyzing some behaviors~\cite{mathis2020deep}. One of the great advantages of deep learning based methods is that they are very flexible, and the user can define what should be tracked.
\section*{Principles of deep learning methods for markerless motion capture}
In raw video we acquire a collection of pixels that are static in their location and vary in value over time. For analyzing behavior, this representation is sub-optimal: Instead, we are interested in properties of objects in the images, such as location, scale and orientation. Objects are collections of pixels in the video that move or change in conjunction. By decomposing objects into \emph{keypoints} with semantic meaning---such as body parts in videos of human or animal subjects---a high dimensional video signal can be converted into a collection of time series describing the movement of each keypoint (Figure~\ref{fig:overview}). Compared to raw video, this representation is easy to analyze and semantically meaningful for investigating behavior and addressing the original research question for which the data has been recorded.

\medskip

\begin{figure*}[b]
\centering
\includegraphics[width=\textwidth]{fig/figure2.jpg}
\caption{{\bf Comparison of marker-based (traditional) and markerless tracking approaches.} {\bf (A)} In marker-based tracking, \emph{prior} to performing an experiment, special measures have to be taken regarding hardware and preparation of the subject (images adapted from~\citealp{inayat2020matlab, maceira2019wearable}; IMU stands for inertial measurement unit). {\bf (B)} For markerless pose estimation, raw video is acquired and processed post-hoc: Using labels from human annotators, machine learning models are trained to infer keypoint representations directly from video (on-line inference without markers is also possible~\cite{Kane2020dlclive}). Typically, the architectures underlying pose estimation can be divided into a feature extractor and a decoder: The former maps the image representation into a feature space, the latter infers keypoint locations given this feature representation. In modern deep learning systems, both parts of the system are trained end-to-end.
\label{fig:comparisonmarkerbasedvsmarkerless}
}
\end{figure*}

Motion capture systems aim to infer keypoints from videos: In marker-based systems, this can be achieved by manually enhancing parts of interest (with colors, LEDs, or reflective markers), which greatly simplifies the computer vision challenge, and then using classical computer vision tools to extract these keypoints. Markerless pose estimation algorithms directly map raw video input to these coordinates. The conceptual difference between marker-based and markerless approaches is that the former requires special preparation or equipment, while the latter can even be applied \emph{post-hoc}, but typically requires ground truth annotations of example images (i.e., a training set). Notably, markerless methods allow for extracting additional keypoints at a later stage, something that is not possible with markers (Figure~\ref{fig:comparisonmarkerbasedvsmarkerless}).

\medskip

Fundamentally, a pose estimation algorithm can be viewed as a function that maps frames from a video into the coordinates of body parts. The algorithms are highly flexible with regard to what body parts are tracked. Typically the identities of the body parts (or objects) have semantically defined meaning (e.g., different finger knuckles, the head), and the algorithms can group them accordingly (namely, to assemble an individual) so that the postures of multiple individuals can be extracted simultaneously (Figure~\ref{fig:overview}).
For instance, for an image of one human the algorithm would return a list of pixel coordinates (these can have subpixel resolution) per body part and frame (and sometimes an uncertainty prediction;~\citealp{insafutdinov2016deepercut, kreiss2019pifpaf, mathis2018deeplabcut}). Which body parts the algorithm returns depends on both the application and the training data provided---this is an important aspect with respect to how the algorithms can be customized for applications.

\begin{figure*}[b]
\centering
\includegraphics[width=\textwidth]{fig/figure3.jpg}
\caption{Example augmentation images with labeled body parts in red. {\bf (A)} Two example frames of Alpine choughs (Pyrrhocorax graculus) near Mont Blanc with human-applied labels in red (original). The images to the right illustrate three augmentations (as labeled). {\bf (B)} Two example frames of a trail-tracking mouse (Mus musculus) from~\cite{mathis2018deeplabcut} with four labeled body parts as well as augmented variants. \href{https://colab.research.google.com/github/DeepLabCut/Primer-MotionCapture/blob/master/COLAB_Primer_MotionCapture_Fig3.ipynb}{Open in Google Colaboratory}
}
\label{fig:AUG}
\end{figure*}

\subsection*{Overview of algorithms}
\justify
While many pose estimation algorithms~\cite{moeslund2006survey, POPPE20074} have been proposed, algorithms based on deep learning~\citep{lecun2015dl} are the most powerful, as measured by performance on human pose estimation benchmarks~\cite{ToshevDEEPPOSE,JainMODEEP,insafutdinov2016deepercut, newell2016stacked, cao2018openpose, Xiao2018, cheng2020higherhrnet}. More generally, pose estimation algorithms fall under ``object detection'', a field that has seen tremendous advances with deep learning (aptly reviewed in Wu et al.,~\citealp{wu2020recent}). In brief, pose estimation can often intuitively be understood as a system of an encoder that extracts important (visual) features from the frame, which are then used by the decoder to predict the body parts of interest along with their location in the image frame.

\medskip

In classical algorithms (see~\citealp{moeslund2006survey, POPPE20074, wu2020recent}), handcrafted feature representations are used that extract invariant statistical descriptions from images. These features are then used together with a classifier (decoder) for detecting complex objects like humans~\cite{dalal2005histograms, moeslund2006survey}. Handcrafted feature representations are (loosely) inspired by neurons in the visual pathway and are designed to be robust to changes in illumination and to translations; typical feature representations are the Scale Invariant Feature Transform (SIFT; \citealp{lowe2004distinctive}), the Histogram of Gradients (HOG; \citealp{dalal2005histograms}) or Speeded Up Robust Features (SURF;~\citealp{bay2008speeded}).

\medskip

In more recent approaches, both the encoder and the decoder (alternatively called the backbone and output heads, respectively) are deep neural networks (DNNs) that are directly optimized on the pose estimation task. An optimal strategy for pose estimation is jointly learning representations of the raw image or video data (encoder) and a predictive model for posture (decoder). In practice, this is achieved by concatenating multiple layers of differentiable, non-linear transformations and by training such a model as a whole using the backpropagation algorithm~\cite{lecun2015dl, goodfellow2016deep, wu2020recent}.
In contrast to classical approaches, DNN-based approaches directly optimize the feature representation in a way most suitable for the task at hand (for a glossary of deep learning terms see Box~\ref{box1}).

\medskip

Machine learning systems are composed of a dataset, model, loss function (criterion) and optimization algorithm~\cite{goodfellow2016deep}. The dataset defines the input-output relationships that the model should learn: In pose estimation, a particular pose (output) should be predicted for a particular image (input), see Figures~\ref{fig:overview} \& \ref{fig:comparisonmarkerbasedvsmarkerless}B. The model's parameters (weights) are iteratively updated by the optimizer to minimize the loss function. Thereby the loss function measures the quality of a predicted pose (in comparison to the ground truth data). Choices about these four parts influence the final performance and behavior of the pose estimation system, and we discuss possible design choices in the next sections.

\subsection*{Datasets \& Data Augmentation}
\justify
Two kinds of datasets are relevant for training pose estimation systems: First, one or multiple datasets used for related tasks---such as image recognition---can be used for \emph{pre-training} computer vision models on this task (also known as transfer learning; see Box~\ref{box1}). This dataset is typically considerably larger than the one used for pose estimation. For example, ImageNet~\cite{deng2009imagenet}, sometimes denoted as ImageNet-21K, is a highly influential dataset, and a subset was used for the ImageNet Large Scale Visual Recognition Challenge in 2012 (ILSVRC-2012;~\citealp{russakovsky2015imagenet}) for object recognition. Full ImageNet contains 14.2 million images from 21K classes; the ILSVRC-2012 subset contains 1.2 million images of 1,000 different classes (such as car, chair, etc.;~\citealp{russakovsky2015imagenet}). Groups working towards state-of-the-art performance on this benchmark also helped push the field to build better DNNs and openly share code. This dataset has been extensively used for pre-training networks, which we will discuss in the model and optimization sections below.

\medskip

The second highly relevant dataset is the one curated for the task of interest---Mathis et al.~\cite{mathis2018deeplabcut} empirically demonstrated that the size of this dataset can be comparably small for typical pose estimation cases in the laboratory. Typically, this dataset contains 10--500 images, vs. the standard human pose estimation benchmark datasets, such as MS COCO~\cite{lin2014microsoft} or MPII pose~\cite{andriluka20142d}, which has annotated 40K images (of 26K individuals). This implies that the dataset that is curated is highly influential on the final performance, and great care should be taken to select diverse postures, individuals, and background statistics and to label the data accurately (discussed below in ``pitfalls'').

\medskip

In practice, several factors matter: the performance of a fine-tuned model on the task of interest, the number of images that need to be annotated for fine-tuning the network, and the convergence rate of the optimization algorithm---i.e., how many steps of gradient descent are needed to obtain a certain performance. Using a pre-trained network can help in several regards: \citet{he2018rethinking} show that in the case of large training datasets, pre-training typically aids with convergence rates, but not necessarily the final performance.
Indeed, under the right circumstances (i.e., given enough task-relevant data) and with longer training, randomly initialized models can match the performance of fine-tuned ones for keypoint detection on COCO~\cite{he2018rethinking} and horses~\cite{mathis2019TRANSFER}; however, the resulting networks are less robust~\cite{mathis2019TRANSFER}. Beyond robustness, using a pre-trained model is generally advisable when the amount of labeled data for the target task is small, which is true for many applications in neuroscience, as it leads to shorter training times and better performance with less data~\cite{he2018rethinking, mathis2018deeplabcut, mathis2019TRANSFER, arac2019deepbehavior}. Thus, pre-trained pose estimation algorithms save training time, increase robustness, and require substantially less training data. Indeed, most packages in neuroscience now use pre-trained models~\cite{mathis2018deeplabcut,graving2019fast,arac2019deepbehavior,Bala2020, Liu2020optiflex, mathisimagenet2020}, although some do not~\cite{pereira2019fast,Gnel2019DeepFly3D, Zimmermann2020}, which can give acceptable performance for simplified situations with aligned individuals.

\medskip

More recently, larger datasets like the 3.5 billion image Instagram dataset~\cite{mahajan2018exploring}, JFT, which has 300M images~\cite{hinton2015distilling,xie2020noisy}, and OpenImages~\cite{kuznetsova2018open} became popular, further improving the performance and robustness of the considered models~\cite{xie2020noisy}. Which task is used for pre-training also matters. Corroborating this insight, Li et al. showed that pre-training on a large-scale object detection task can improve performance for tasks that require fine, spatial information like segmentation~\cite{li2019analysis}.

\medskip

Besides large datasets for pre-training, a curated dataset with pose annotations is needed for optimizing the algorithm on the pose estimation task. This process is discussed in more detail below; it typically suffices to label a few (diverse) frames. Data augmentation is the process of expanding the training set by applying specified manipulations (like rotations or changes in image scale). Based on the chosen corruptions, models become more invariant to rotations, scale changes or translations and thus more accurate (with less training data). Augmentation can also help with improving robustness to noise, like JPEG compression artefacts and motion blur (Figure~\ref{fig:AUG}). To note, data augmentation schemes should not affect the semantic information in the image: for instance, if color conveys important information about the identity of an animal, augmentations involving changes in color are not advisable. Likewise, augmentations which change the spatial position of objects or subjects should always be applied to both the input image and the labels (Box~\ref{box2}).

\subsection*{Model architectures}
\justify
Systems for markerless pose estimation are typically composed of a \emph{backbone} network (encoder), which takes the role of the feature extractor, and one or multiple \emph{heads} (decoders). Understanding the model architectures and design choices common in deep learning based pose estimation systems requires a basic understanding of convolutional neural networks. We summarize the key terms in Box~\ref{box1}, and expand on what encoders and decoders are below.
\medskip

Instead of using handcrafted features as in classical systems, deep learning based systems employ ``generic'' encoder architectures which are often based on models for object recognition. In a typical system, the encoder design affects the most important properties of the algorithm, such as its inference speed, training-data requirements and memory demands. For the pose estimation algorithms used in neuroscience so far, the encoders are either stacked hourglass networks~\cite{newell2016stacked}, MobileNetV2s~\cite{sandler2018mobilenetv2}, ResNets~\cite{He_2016_CVPR}, DenseNets~\cite{huang2017densely} or EfficientNets~\cite{tan2019efficientnet}. These encoder networks are typically pre-trained on one or multiple of the larger-scale datasets introduced previously (such as ImageNet), as this has been shown to be an advantage for pose estimation on small, lab-scale sized datasets~\cite{mathis2019TRANSFER, mathis2018deeplabcut, arac2019deepbehavior}. For common architectures this pre-training step does not need to be carried out explicitly: pre-trained weights for popular architectures are already available in common deep learning frameworks.

\medskip

The impact of the encoder on DNN performance is a highly active research area. The encoders are continuously improved with regard to speed and object recognition performance~\cite{huang2017densely, sandler2018mobilenetv2, tan2019efficientnet, wu2020recent, kornblith2019better}. Naturally, due to the importance of the ImageNet benchmark, the accuracy of network architectures continuously increases (on that dataset). For example, we were able to show that this performance increase is not merely reserved for ImageNet, or (importantly) other object recognition tasks~\cite{kornblith2019better}, but in fact that better architectures on ImageNet are also better for pose estimation~\cite{mathisimagenet2020}. However, better ImageNet performance also comes at the cost of decreased inference speed and increased memory demands. DeepLabCut (an open source toolbox for markerless pose estimation popular in neuroscience) thus incorporates backbones ranging from MobileNetV2s (faster) to EfficientNets (best performance on ImageNet; \citealp{mathis2019TRANSFER,mathisimagenet2020}).

\medskip

\input{box1}
\input{box2}

In (standard) convolutional encoders, the high-resolution input images get gradually downsampled while the number of learned features increases. Regression-based approaches, which directly predict keypoint locations from the feature representation, can potentially deal with this downsampled representation. When the learning problem is instead cast as identifying the keypoint locations on a grid of pixels, the output resolution needs to be increased first, often by deconvolutional layers~\cite{insafutdinov2016deepercut, Xiao2018}. We denote this part of the network as the decoder, which takes downsampled features, possibly from multiple layers in the encoder hierarchy, and gradually upsamples them again to arrive at the desired resolution. The first models of this class were Fully Convolutional Networks~\cite{long2015fully}, and later DeepLab~\cite{chen2017deeplab}. Many popular architectures today follow similar principles. Design choices include the use of skip connections between decoder layers, as well as between encoder and decoder layers. Example encoder--decoder setups are illustrated in Figure~\ref{fig:model-architectures}, and a minimal code sketch of such a design follows below.
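As a concrete illustration of such an encoder--decoder design (in the spirit of~\citealp{insafutdinov2016deepercut, Xiao2018}, though not a faithful reimplementation of either), the following PyTorch sketch attaches a small deconvolutional decoder to an ImageNet-pretrained ResNet-50 backbone; the layer sizes and class name are illustrative choices.

\begin{verbatim}
import torch
import torch.nn as nn
import torchvision.models as models

class PoseNet(nn.Module):
    """Encoder-decoder sketch: ResNet-50 features (stride 32) are
    upsampled by three deconvolutions to per-keypoint heatmaps at
    stride 4."""
    def __init__(self, num_keypoints):
        super().__init__()
        # ImageNet weights; newer torchvision uses the `weights` argument.
        backbone = models.resnet50(pretrained=True)
        # Drop the classification head (average pooling + linear layer).
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        layers, in_ch = [], 2048  # ResNet-50 output channels
        for out_ch in (256, 256, num_keypoints):
            layers += [nn.ConvTranspose2d(in_ch, out_ch, 4,
                                          stride=2, padding=1),
                       nn.ReLU(inplace=True)]
            in_ch = out_ch
        self.decoder = nn.Sequential(*layers[:-1])  # no ReLU on heatmaps

    def forward(self, x):
        return self.decoder(self.encoder(x))

heatmaps = PoseNet(num_keypoints=4)(torch.randn(1, 3, 256, 256))
print(heatmaps.shape)  # torch.Size([1, 4, 64, 64]), i.e., stride 4
\end{verbatim}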
The aforementioned building blocks---encoders and decoders---can be used to form a variety of different approaches, which can be trained end-to-end directly on the target task (i.e., pose estimation).

\medskip

Pre-trained models can also be adapted to a particular application. For instance, DeeperCut~\cite{insafutdinov2016deepercut}, which was adapted by the animal pose estimation toolbox DeepLabCut~\cite{mathis2018deeplabcut}, was built with a ResNet~\cite{He_2016_CVPR} backbone network, but adapted the stride via atrous convolutions~\cite{chen2017deeplab} to retain a higher spatial resolution (Box~\ref{box1}). This allowed larger receptive fields for predictions while retaining a relatively high speed (i.e., for video analysis); most importantly, because ResNets can be pre-trained on ImageNet, those initialized weights could be used. Other architectures, like the stacked hourglass networks~\cite{newell2016stacked} used in DeepFly3D~\cite{pavan2019} and DeepPoseKit~\cite{graving2019fast}, retain feature representations at multiple scales and pass those to the decoder (Figure~\ref{fig:model-architectures}A, B).

\begin{figure}[b]
\centering
\includegraphics[width=0.5\textwidth]{fig/figure4.jpg}
\caption{%
{\bf Schematic overview of possible design choices for model architectures and the training process.} {\bf(A)} A simple, but powerful variant~\cite{insafutdinov2016deepercut} is a ResNet-50~\cite{He_2016_CVPR} architecture adapted to replace the final down-sampling operations by atrous convolutions~\cite{chen2017deeplab} to keep a stride of 16, followed by a single deconvolution layer to upsample to output maps with stride 8. It also forms the basis of other architectures (e.g.~\citealp{Xiao2018}). The encoder can also be exchanged for different backbones to improve speed or accuracy (see Box~\ref{box2}). {\bf(B)} Other approaches, like stacked hourglass networks~\cite{newell2016stacked}, are not pre-trained and employ skip connections between encoder and decoder layers to aid the up-sampling process. {\bf(C)} For training the network, the training data comprising input images and target heatmaps is used. The target heatmap is compared with the forward prediction. Thereby, the parameters of the network are optimized to minimize the loss that measures the difference between the predicted heatmap and the target heatmap (ground truth).
}
\label{fig:model-architectures}
\end{figure}

\begin{figure*}[b]
\centering
\includegraphics[width=\textwidth]{fig/figure5.jpg}
\caption{%
{\bf Multi-animal pose estimation approaches.} {\bf A}: Bottom-up approaches detect all the body parts (e.g. elbow and shoulder in the example; via part confidence maps) as well as ``limbs'' (via part affinity fields). These limbs are then used to correctly associate the body parts with individuals (Figure from OpenPose,~\citealp{cao2018openpose}). For both OpenPose and DeepLabCut, the part confidence maps and part affinity fields (PAFs) are predicted by different decoders (aka output heads) from the shared encoder. {\bf B}: Top-down approaches localize individuals with bounding-box detectors and then directly predict the posture within each bounding box. This does not require grouping via part affinity fields, but is subject to errors when bounding boxes are wrongly predicted (see the black bounding box encompassing two players in (c)).
The displayed figures, adapted from Xiao et al.~\cite{Xiao2018}, mitigate this disadvantage by predicting bounding boxes per frame and forward-predicting them across time via visual flow.}
\label{fig:bottom-up_top-down}
\end{figure*}

\subsection*{Loss functions: training architectures on datasets}
\justify
Keypoints (i.e., body parts) are simply coordinates in image space. There are two fundamentally different ways of estimating keypoints (i.e., of defining the loss function). The problem can be treated as a regression problem with the coordinates as targets~\cite{ToshevDEEPPOSE, carreira2016human}. Alternatively, and more popular, the problem can be cast as a classification problem, where the coordinates are mapped onto a grid (e.g., of the same size as the image) and the model predicts a heatmap (scoremap) of location probabilities for each body part (Figure~\ref{fig:model-architectures}C). In contrast to the regression approach~\cite{ToshevDEEPPOSE}, this is fully convolutional, allows modeling of multi-modal distributions, and aids the training process~\cite{tompson2014joint, newell2016stacked, insafutdinov2016deepercut, cao2018openpose}. Moreover, the heatmaps have the advantage that one can naturally predict multiple locations of the ``same'' body part in the same image (i.e., 2 elbows) without mode collapse (Figure~\ref{fig:bottom-up_top-down}A).

\medskip

Loss functions can also reflect additional priors or inductive biases about the data. For instance, DeepLabCut uses location refinement layers (locref) that counteract the downsampling inherent in encoders, by training outputs to predict corrective shifts in image coordinates relative to the downsampled output maps (Figure~\ref{fig:bottom-up_top-down}A). In pose estimation, it is possible to define a \emph{skeleton} or graph connecting keypoints belonging to subjects with the same identity (see below)~\cite{insafutdinov2016deepercut,cao2018openpose}. When estimating keypoints over time, it is also possible to employ temporal information and encourage the model to vary its estimate only smoothly across consecutive frames~\cite{insafutdinov2017cvpr,yao2019monet, xu2020eventcap,zhou2020monocular}. Based on the problem, these priors can be directly encoded and used to regularize the model.

\medskip

How can pose estimation algorithms accommodate multiple individuals? Fundamentally, there are two different approaches: bottom-up and top-down methods (Figure~\ref{fig:bottom-up_top-down}). In top-down methods, individuals are first localized (often with another neural network trained on object localization), then pose estimation is performed per localized individual~\cite{Xiao2018,newell2016stacked,sun2019deep}. In bottom-up methods, all body parts are localized, and networks are also trained to predict connections of body parts within individuals (i.e., limbs). These connections are then used to link candidate body parts to form individuals~\cite{cao2018openpose, insafutdinov2017cvpr,kreiss2019pifpaf,cheng2020higherhrnet}. To note, these techniques can also be used on single individuals for increased performance, but often are not needed and usually imply reduced inference speed.

\subsection*{Optimization}
\justify
For pre-training, stochastic gradient descent (SGD; \citealp{bottou2010large}) with momentum~\cite{sutskever2013importance} is an established method. Different variants of SGD are now common (such as Adam;~\citealp{kingma2014adam}) and are used for fine-tuning the resulting representations. A minimal sketch tying together the heatmap targets and one such optimization step is given below.
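The following sketch (reusing the hypothetical \texttt{PoseNet} from the earlier sketch) renders a Gaussian target heatmap for one labeled keypoint, as in Figure~\ref{fig:model-architectures}C, and performs a single Adam update on the mean squared error between the predicted and target heatmaps. The Gaussian width \texttt{sigma} and the learning rate are illustrative values.

\begin{verbatim}
import torch

def gaussian_heatmap(height, width, cx, cy, sigma=2.0):
    """Target scoremap: a 2D Gaussian centred on the keypoint
    (cx, cy), given in (downsampled) heatmap coordinates."""
    ys, xs = torch.meshgrid(torch.arange(height),
                            torch.arange(width), indexing="ij")
    return torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2)
                     / (2 * sigma ** 2))

model = PoseNet(num_keypoints=1)     # from the earlier sketch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.MSELoss()

image = torch.randn(1, 3, 256, 256)  # stand-in for a labeled frame
target = gaussian_heatmap(64, 64, cx=20, cy=32)[None, None]

loss = criterion(model(image), target)  # predicted vs. target heatmap
optimizer.zero_grad()
loss.backward()
optimizer.step()
\end{verbatim}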
As mentioned above, pose estimation algorithms are typically trained in a multi-stage setup where the backbone is trained first on a large (labeled) dataset of a potentially unrelated task (like image classification). Users can also download these pre-trained weights. Afterwards, the model is fine-tuned on the pose-estimation task. Once trained, the quality of the prediction can be judged in terms of the root mean square error (RMSE), which measures the distance between the ground truth keypoints and the predictions~\cite{mathis2018deeplabcut,pereira2019fast}, or by measuring the percentage of correct keypoints (PCK,~\citealp{andriluka20142d, mathis2019TRANSFER}); i.e., the fraction of detected keypoints that fall within a defined distance of the ground truth. \medskip To properly estimate model performance in an application setting, it is advisable to split the labeled dataset at least into train and test subsets. If systematic deviations can be expected in the application setting (e.g., because the subjects used for training the model differ in appearance from subjects encountered at model deployment~\cite{mathis2019TRANSFER}), this should be reflected when choosing a way to split the data. For instance, if data from multiple individuals is available, distinct individuals should form distinct subsets of the data. On the contrary, strategies like splitting data by selecting every \textit{n}-th frame in a video likely overestimate the true model performance. \medskip The model is then optimized on the training dataset, while performance is monitored on the validation (test) split. If needed, hyperparameters of the model---like parameter settings of the optimizer, or choices about the model architecture---can be adapted based on an additional validation set. \medskip \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{fig/figure6.jpg} \caption{An overview of the workflow for deep learning based pose estimation, which highlights several critical decision points.} \label{fig:workflow} \end{figure*} \medskip \input{table1.tex} All of the aforementioned choices influence the final outcome and performance of the algorithm. While some parts of the training pipeline are well-established and robust---like pre-training a model on ImageNet---choices about the dataset, architecture, augmentation, fine-tuning procedure, etc. will inevitably influence the quality of the pose estimation algorithm (Box~\ref{box2}). See Figure~\ref{fig:AUG} for a qualitative impression of augmentation effects of some of these decisions (see also Figure~\ref{fig:AUG2}). We will discuss this in more detail in the Pitfalls section. \medskip So far, we considered algorithms able to infer 2D keypoints from videos, by training deep neural networks on previously labeled data. Naturally, there is also much work in computer vision and machine learning towards estimating 3D keypoints from 2D labels, or towards directly inferring 3D keypoints. In the interest of space, we had to omit those topics, but we refer the interested reader to~\cite{martinez2017simple, Mehta2017_3D, TomeRA17,chen20173d,yao2019monet} as well as, specifically for neuroscience, to~\cite{yao2019monet, pavan2019, nath2019deeplabcut, Zimmermann2020, karashchuk2020anipose, Bala2020}. \medskip Lastly, it is not well understood how CNNs make decisions, and they often find ``shortcuts''~\cite{geirhos2020shortcut}.
While this active research area is certainly beyond the scope of this primer, from practical experience we know that at least within-domain---i.e., on data that is similar to the training set---DNNs work very well for pose estimation, which is the typical setting relevant for downstream applications in neuroscience. It is worth noting that in order to optimize performance, there is no one-size-fits-all solution. Thus, we hope that by building intuition in users of such systems, we provide the necessary tools to make these decisions with more confidence (Figure~\ref{fig:workflow}). \section*{Scope and applications} Markerless motion capture can excel in complicated scenes, with diverse animals, and with any camera available (monochrome, RGB, depth cameras, etc.). The only real requirement is the ability of the human to reliably label keypoints (manually or via alternative sources). Simply put, you need to be able to see what you want to track. Historically, due to limitations in computer vision algorithms, experimentalists would go to great lengths to simplify the environment, even in the laboratory (i.e., no bedding, white or black walls, high contrast), but this is no longer required with deep learning-based pose estimation. Now, the aesthetics one might want for photographs or videos taken in daily life are the best option. \medskip Indeed, the field has been able to rapidly adopt these tools for neuroscience. Deep learning-based markerless pose estimation applications in the laboratory have already been published for flies~\cite{mathis2018deeplabcut, pereira2019fast, graving2019fast, pavan2019, karashchuk2020anipose,Liu2020optiflex}, rodents~\cite{mathis2018deeplabcut,MathisWarren2018speed, pereira2019fast, graving2019fast, arac2019deepbehavior, pavan2019,Zimmermann2020,Liu2020optiflex}, horses~\cite{mathis2019TRANSFER}, dogs~\cite{yao2019monet}, rhesus macaques~\cite{berger2020wireless, yao2019monet, Bala2020, labuguen2020macaquepose} and marmosets~\cite{ebina2019arm}; the original architectures were developed for humans~\cite{insafutdinov2016deepercut, newell2016stacked, cao2018openpose}. Outside of the laboratory, DeepPoseKit was used for zebras~\cite{graving2019fast} and DeepLabCut for 3D tracking of cheetahs~\cite{nath2019deeplabcut}, for squirrels~\cite{barrett2020manual} and macaques~\cite{labuguen2020macaquepose}, highlighting the great ``in-the-wild'' utility of this new technology~\cite{mathis2020deep}. As outlined in the principles section, and illustrated by these applications, these deep learning architectures are general-purpose and can be broadly applied to any animal and condition. \medskip Recent research highlights the prevalence of action representations across the brain~\cite{kaplan2020brain}, which emphasizes the importance of quantifying behavior even in non-motor tasks. For instance, pose estimation tools have recently been used to elucidate the neural variability across cortex in humans during thousands of spontaneous reach movements~\cite{peterson2020behavioral}. Pupil tracking is of great importance for visual neuroscience. One recent study by Meyer et al. used head-fixed cameras and DeepLabCut to reveal two distinct types of coupling between eye and head movements~\cite{meyer2020two}. In order to accurately correlate neural activity to visual input, tracking the gaze is crucial.
The recent large, open dataset from the Allen Institute includes imaging data of six cortical and two thalamic regions in response to various stimulus classes, as well as pupil tracking with DeepLabCut~\cite{siegle2019survey}. The International Brain Lab has integrated DeepLabCut into their workflow to track multiple bodyparts of decision-making mice, including their pupils~\cite{Harris2020dataIBL}. \medskip Measuring relational interactions is another major direction that has been explored less in the literature so far, but is feasible. Since the feature detectors for pose estimation are of a general nature, one can easily track not only the posture of individuals but also the tools and objects they interact with (e.g. for analyzing golf or tennis). Furthermore, social behaviors and parenting interactions (for example in mice) can now be studied noninvasively. \medskip Due to these general capabilities, such tools have several applications for creating biomarkers by extracting high-fidelity animal traits, for instance in the pain field~\cite{tracey2019composite} and for monitoring motor function in healthy and diseased conditions~\cite{micera2020advanced}. DeepLabCut was also integrated with tools for x-ray analysis~\cite{laurence2020integrating}. For measuring joint center locations in mammals, x-ray is arguably the gold standard. Of course, extracting body part locations from x-ray data also poses its own challenges. A recent paper shared methodology to integrate DeepLabCut with XROMM, a popular analysis suite, to advance the speed and accuracy of x-ray based analysis~\cite{laurence2020integrating}. \section*{How do the (current) packages work?} Here we will focus on packages that have been used in behavioral neuroscience, but the general workflow for pose estimation in computer vision research is highly similar. What has made experimentalist-focused toolboxes different is that they provide essential code to generate and train on one's own datasets. Typically, what is available in computer-vision-focused pose estimation repositories is code to run inference (video analysis) and/or to train an architecture on the specific datasets around which competitions happen (e.g., MS COCO;~\citealp{lin2014microsoft} and MPII pose;~\citealp{andriluka20142d}). While these are two crucial steps, they are not sufficient to develop tailored neural networks for an individual lab or experimentalist. Thus, the ``barrier to entry'' is often quite high to use these tools. It requires knowledge of deep learning frameworks to build appropriate data loaders, data augmentation pipelines, and training regimes. Therefore, in recent years several packages have focused not only on animal pose estimation networks, but on providing users with a full pipeline that allows for (1) labeling a customized dataset (frame selection and labeling tools), (2) generating test/train datasets, (3) data augmentation and loaders, (4) neural architectures, (5) code to evaluate performance, (6) code to run video inference, and (7) post-processing tools for simple readouts of the acquired machine-labeled data. Thus far, around 10 packages have become available in the past 2 years~\cite{mathis2018deeplabcut, pereira2019fast, graving2019fast, pavan2019, arac2019deepbehavior, Zimmermann2020, Bala2020, Liu2020optiflex}. Each has focused on providing slightly different user experiences, modularity, available networks, and balances of the speed/accuracy trade-off for video inference.
Several include their (adapted) implementations of the original DeepLabCut or LEAP networks as well~\cite{graving2019fast, Liu2020optiflex}. But the ones we highlight have the full pipeline delineated above as a principle and are open source, i.e., at minimum inference code is available (see Table~\ref{tab:packages}). The progress gained and challenges they set out to address (and some that remain) are reviewed elsewhere~\cite{mathis2020deep, kordingLimitations2019}. Here, we discuss collective aims of these packages (see also Figure~\ref{fig:workflow}). \medskip Current packages for animal pose estimation have focused primarily on providing tools to train tailored neural networks to user-defined features. Because experimentalists need flexibility and are tracking very different animals and features, the most successful packages (in terms of user base as measured by citations and GitHub engagement) are species agnostic. However, given that they are all based on advances from prior art in human pose estimation, the accuracy of any one package, given the breadth of options that could be deployed (i.e., data augmentation, training schedules, and architectures), will remain largely comparable, if such tools are provided to the user. What will determine performance the most is the input training data provided, and how much capacity the architectures have. \medskip It is notable that using transfer learning has proven to be advantageous for better robustness (i.e., the ability to generalize, see~\citealp{mathis2018deeplabcut, mathis2019TRANSFER, arac2019deepbehavior}), which was first deployed by DeepLabCut (see Table~\ref{tab:packages}). Training on large animal-specific datasets has also recently been made available in DeepLabCut (such as a horse pose dataset with >8,000 annotated images of 30 horses;~\citealp{mathis2019TRANSFER}). This allows the user to bypass the only manual part, namely curating and labeling ground truth data, and these models can directly be used for inference on novel videos. For DeepLabCut, this is an emerging community-driven effort, with external labs already contributing models and data\footnote{\href{http://modelzoo.deeplabcut.org}{modelzoo.deeplabcut.org}}. \medskip \input{box3} In the future, having the ability to skip labeling and training and to run video inference with robust models will lead to more reproducible and scalable research. For example, as we show in other sections of the primer, if the labeling accuracy is not of a high quality, and the data is not diverse enough, then the networks are not able to generalize to so-called ``out-of-domain'' data. If as a community we collectively build stable and robust models that leverage the breadth of behaviors being carried out in laboratories worldwide, we can work towards models that would work in a plug-and-play fashion. We anticipate new datasets and models to become available in the coming months to years. \medskip All packages, just like all applications of deep learning to video, prefer access to GPU computing resources (see Box~\ref{box:hardware}). On GPUs one experiences faster training and inference times, but the code can also be deployed on standard CPUs or laptops. With cloud computing services, such as Google Colaboratory and JupyterLab, many pose estimation packages can simply be deployed on remote GPU resources. This still requires (1) knowledge about these resources, and (2) toolboxes providing so-called ``notebooks'' that can be easily deployed.
But, given that these platforms have utility beyond just pose estimation, they are worthwhile to learn about. \medskip For the non-GPU aspects, only a few packages have provided easy-to-use graphical user interfaces that allow users with no programming experience to use the tool (see Table~\ref{tab:packages}). Lastly, the available packages vary in their access to 3D tools, multi-animal support, and types of architectures available to the user, which is often a concern for speed and accuracy. Additionally, some packages have the limitation of only allowing the same-sized videos for training and inference, while others are more flexible. These are all key considerations when deciding which ecosystem to invest in learning (as every package has taken a different approach to the API). \medskip Perhaps the largest barrier to entry for using deep learning-based pose estimation methods is managing the computing resources (see Box~\ref{box:hardware}, Box~\ref{box:software}). From our experience, installing GPU drivers and the deep learning packages (TensorFlow, PyTorch) that all the packages rely on is the biggest challenge. To this end, in addition to documentation that is ``user-focused'' (i.e., not just an API for programmers), resources like webinars, video tutorials, workshops, Gitter and community forums (like StackOverflow and the Scientific Community Image Forum) have become invaluable for the modern neuroscientist. Here, users can ask questions and get assistance from developers and users alike. We believe this has also been a crucial step for the success of DeepLabCut. \medskip While some packages provide full GUI-based control, at least minimal programming knowledge is ideal for utilizing more advanced features. Thus, better training for the increasingly computational nature of neuroscience will be crucial: making programming skills a requirement of graduate training, building better community resources, and leveraging the fast-moving world of technology to harness those computing and user resources. In animal pose estimation, while there is certainly an attempt to make many of the packages user-friendly, i.e., to onboard users and have a scalable discussion around common problems, we found user forums to be very valuable~\cite{rueden2019scientific}. Specifically, DeepLabCut is a member of the Scientific Community Image Forum\footnote{\href{https://forum.image.sc/}{forum.image.sc}} alongside other packages that are widely used for image analysis in the life sciences such as Fiji~\cite{schindelin2012fiji}, napari, CellProfiler~\cite{McQuin2018CellProfiler3N}, Ilastik~\cite{sommer2011ilastik} and scikit-image~\cite{van2014scikit}. \medskip \input{box4} \section*{Practical considerations for pose estimation (with deep learning)} \justify As this recent field gains traction, it is instructive to regard the operability of deep learning-powered pose estimation in light of well-established, often gold standard, techniques. \subsection*{General considerations and pitfalls} \justify As discussed in {\it Scope and applications} and as evidenced by the strong adoption of the tools, deep learning-based pose estimation works well in standard setups with visible animals. The most striking advantage over traditional motion capture systems is the absence of any need for body instrumentation. Although seemingly obvious, the previous statement hides the belated recognition that marker-based motion capture suffers greatly from the wobble of markers placed on the skin surface.
That behavior, referred to as ``soft tissue artifact'' among movement scientists and attributable to the deformation of tissues underneath the skin such as contracting muscles or fat, is now known to be the major obstacle to obtaining accurate skeletal kinematics~\footnote{Intra-cortical pins and biplane fluoroscopy give direct, uncontaminated access to joint kinematics. The first, however, is invasive (and entails careful surgical procedures; \citealp{ramsey2003methodological}) whereas the second is only operable in very constrained and complex laboratory settings~\cite{list2017moving}. Both are local to a specific joint, and as such do not strictly address the task of pose estimation.} \cite{camomilla2017}. To make matters worse, contaminated marker trajectories may be harmful in clinical contexts, potentially invalidating injury risk assessment (e.g.~\citealp{smale2017}). Although a multitude of numerical approaches exists to tackle this issue, the most common, yet incomplete, solution is multi-body kinematics optimization (or ``inverse kinematics'' in computer graphics and robotics;~\citealp{begon2018}). This procedure uses a kinematic model and searches for the body pose that minimizes in the least-squares sense the distance between the measured marker locations and the virtual ones from the model while satisfying the constraints imposed by the various joints~\cite{lu1999bone}. Its accuracy is, however, decisively determined by the choice of the underlying model and its fidelity to an individual's functional anatomy~\cite{begon2018}. In contrast, motion capture with deep learning elegantly circumvents the problem by learning a geometry-aware representation of the body from the data to associate keypoints to limbs~\cite{cao2018openpose,insafutdinov2016deepercut,mathis2020deep}, which, of course, presupposes that one can avoid the ``soft tissue artifact'' when labeling. \medskip At present, deep learning-powered pose estimation can be poorly suited to evaluate rotation about a bone's longitudinal axis. This is a known problem from early markerless techniques based on visual hull extraction~\cite{ceseracciu2014comparison}. In marker-based settings, the problem has long been addressed by instead tracking clusters of at least three non-aligned markers to fully reconstruct a rigid segment's six degrees of freedom~\cite{spoor1980rigid}. Performing the equivalent feat in a markerless case is difficult, but it is possible by labeling multiple points (for instance on either side of the wrist to get the lower-arm orientation). Still, recent hybrid, state-of-the-art approaches jointly training under both position and orientation supervision augur very well for video-based 3D joint angle computation~\cite{xu2020eventcap,zhou2020monocular}. \medskip With the notable exception of approaches leveraging radio wave signals to predict body poses through walls~\cite{zhao2018through}, deep learning-powered motion capture requires the individuals to be visible; this is impractical for kinematic measurements over wide areas. A powerful alternative is offered by Inertial Measurement Units (IMUs)---low-cost and lightweight devices typically recording linear accelerations, angular velocities and the local magnetic field. Raw inertial data can be used for coarse behavior classification across species~\cite{kays2015terrestrial,chakravarty2019novel}.
They can also be integrated to track displacement with lower power consumption and higher temporal resolution than GPS~\cite{bidder2015step}, thereby providing a compact and portable way to investigate whole body dynamics (e.g.~\citealp{wilson2018biomechanics}) or, indirectly, energetics~\cite{gleiss2011making}. Recent advances in the miniaturization of electronic components now also allow precise quantification of posture in small animals~\cite{pasquet2016wireless}, and open new avenues for kinematic recordings in multiple animals at once at fine motor scales. \medskip \begin{figure*}[h] \centering \includegraphics[width=.97\textwidth]{fig/figure7.jpg} \caption{{\bf Labeling Pitfalls: How corruptions affect performance} {\bf (A)} Illustration of two types of labeling errors. Top is ground truth, middle is missing a label at the tailbase, and bottom is if the labeler swapped the ear identity (left to right, etc.). {\bf (B)} Using a small dataset of 106 frames, how do the corruptions in A affect the percentage of correct keypoints (PCK) as the distance to ground truth increases from 0 pixels (perfect prediction) to 20 pixels (larger error)? The X-axis denotes the distance between the ground truth and the predicted location (RMSE in pixels), whereas the Y-axis is the fraction of frames considered accurate (e.g., $\approx$80\% of frames fall within 9 pixels, even on this small training dataset, for points that are not corrupted, whereas for corrupted points this falls to $\approx$65\%). The fraction of the dataset that is corrupted affects this value. Shown is the effect of missing the tailbase label (top) or swapping the ears in $1, 5, 10$ and $20\%$ of frames (of $106$ labeled training images). Swapping labels has a more notable adverse effect on network performance than missing labels. } \label{fig:corruption} \end{figure*} Nonetheless, IMU-based full body pose reconstruction necessitates multiple sensors over the body parts of interest; commercial solutions require up to 17 of them~\cite{roetenberg2009xsens}. That burden was recently eased by utilizing a statistical body model that incorporates anatomical constraints, together with optimizing poses over multiple frames to enforce coherence between the model orientation and IMU recordings---reducing the system down to six sensors while achieving stunning motion tracking~\cite{von2017sparse}. Yet, two additional difficulties remain. The first arises when fusing inertial data in order to estimate a sensor's orientation (for a comprehensive description of the mathematical formalism and implementation of common fusion algorithms, see~\citealp{sabatini2011estimating}). The process is susceptible to magnetic disturbances that distort sensor readings and, consequently, orientation estimates~\cite{fan2018magnetic}. The second stems from the necessity to align a sensor's local coordinate system to anatomically meaningful axes, a step crucial (among others) to calculating joint angles (e.g.,~\citealp{lebleu2020lower}). The calibration is ordinarily carried out by having the subject perform a set of predefined movements in sequence, whose execution determines the quality of the procedure. Yet, in some pathological populations (let alone in animals), calibration may be challenging to say the least, deteriorating pose reconstruction accuracy~\cite{vargas2016imu}. \medskip A compromise to make the task less arduous is to combine videos and body-worn inertial sensors.
Thanks to their complementary nature, incorporating both cues mitigates the limitations of each individual system; i.e., both modalities reinforce one another in that IMUs help disambiguate occlusions, whereas videos provide disturbance-free spatial information~\cite{gilbert2019fusing}. The idea also applies particularly well to the tracking of multiple individuals---even, advantageously, without the use of appearance features---by exploiting unique movement signatures contained within inertial signals to track identities over time~\cite{henschel2019simultaneous}. \subsection*{Pitfalls of using deep learning-based \\motion capture} \justify Despite being trained on large scale datasets of thousands of individuals, even the best architectures fail to generalize to ``atypical'' postures (with respect to the training set). This is wonderfully illustrated by the errors committed by OpenPose on yoga poses~\cite{huang2019followmeup}. \medskip These domain shifts are major challenges (also illustrated below), and while this is an active area of research with much progress, the easiest way to make sure that the algorithm generalizes well is to label data that is similar to the videos at inference time. However, thanks to the active learning implemented in many packages, users can manually refine the labels on ``outlier'' frames. \medskip Another major caveat of deep learning-powered pose estimation is arguably its intrinsic reliance on high-quality labeled images. This suggests that a labeled dataset that reflects the variability of the behavior should be used. If one---due to the quality of the video---cannot reliably identify body parts in still images (e.g., due to massive motion blur, or uncertainty about body part (left/right leg crossing) or animal identity), then the video quality should be fixed, or sub-optimal results should be expected. \medskip To give readers a concrete idea about label errors, augmentation methods, and active learning, we also provide some simple experiments with shared code and data. Code for reproducing these analyses is available at~\href{https://github.com/DeepLabCut/Primer-MotionCapture}{github.com/DeepLabCut/Primer-MotionCapture}. \medskip To illustrate the importance of error-free labeling, we artificially corrupted labels from the trail-tracking dataset from Mathis et al.~\cite{mathis2018deeplabcut}. The corruptions respectively simulate inattentive labeling (e.g., with left--right bodyparts being occasionally confounded), and missing annotation or uncertainty as to whether to label an occluded bodypart. We corrupted $1, 5, 10$ and $20\%$ of the dataset (N=1,066 images) either by swapping two labels or removing one, and trained on $5\%$ of the data. The effect of missing labels is barely noticeable (Figure~\ref{fig:corruption}B). Swapping labels, on the other hand, causes a substantial drop in performance, with an approximate 10\% loss in the percentage of correct keypoints (PCK) (Figure~\ref{fig:corruption}B). We therefore reason that careful labeling, more so than labeling a very large number of images, is the safest guard against poor ground truth annotations. We believe that explicitly modeling labeling errors, as done in Johnson and Everingham~\cite{johnson2011learning}, will be an active area of research and will be integrated in some packages. \medskip Even if labeled well, augmentation greatly improves results and should be used.
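\medskip As an illustration of such augmentation pipelines, here is a minimal sketch using the imgaug library, similar in spirit to (but not identical with) the rotation and motion-blur configurations evaluated below; the frame and keypoint values are hypothetical placeholders:

\begin{verbatim}
import numpy as np
import imgaug.augmenters as iaa
from imgaug.augmentables.kps import Keypoint, KeypointsOnImage

image = np.zeros((192, 256, 3), dtype=np.uint8)  # stand-in video frame
keypoints = KeypointsOnImage(
    [Keypoint(x=130, y=90),    # e.g., snout
     Keypoint(x=60, y=120)],   # e.g., tailbase
    shape=image.shape)

# Random rotation (up to +/- 180 degrees) and motion blur; geometric
# transformations are applied to the keypoints as well.
augmenter = iaa.Sequential([
    iaa.Affine(rotate=(-180, 180)),
    iaa.MotionBlur(k=7),
])

image_aug, keypoints_aug = augmenter(image=image, keypoints=keypoints)
\end{verbatim}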
Empirically, when training on the example dataset of (highly) correlated frames from one short video of one individual, the loss nicely plateaus and shows comparable train/test errors for three different augmentation methods (Figure~\ref{fig:AUG2}A, B). The three models also give good performance and generalize to a test video of a different mouse. However, closer inspection reveals that the ``scalecrop'' augmentation method, which only performs cropping and scaling during training~\cite{nath2019deeplabcut}, leads to swaps in bodyparts on this new mouse, given the small training set from only one individual (Figure~\ref{fig:AUG2}C, D). The other two methods, which were configured to perform rotations of the training data, could robustly track the posture of the mouse. This discrepancy becomes striking when observing the PCK plots: imgaug and tensorpack outperform scalecrop by a margin of up to $\approx$ 30\% (Figure~\ref{fig:AUG2}E). One simple way to generalize to this additional case is by active learning~\cite{nath2019deeplabcut}, which is also available for some packages. Thereby, one annotates additional frames where performance is poor (outlier frames) and then trains the network from the final configuration, which thus only requires a few thousand iterations. Adding 28 annotated frames from the higher resolution camera, we get good generalization for test frames from both scenarios (Figure~\ref{fig:AUG2}F). Generally, this illustrates how a lack of diversity in the training data leads to worse performance, and how this can be fixed by adding frames with poor performance (active learning). \subsection*{Coping with pitfalls} \justify Fortunately, dealing with the most common pitfalls is relatively straightforward, and mostly demands caution and common sense. Rules of thumb and practical guidelines are given in Box~\ref{box:pitfalls}. Video quality should be envisaged as a trade-off between storage limitations, labeling precision, and training speed; e.g., the lower the resolution of a video, the smaller the occupied disk space and the faster the training speed, but the harder it gets to consistently identify bodyparts. In practice, DeepLabCut was shown to be very robust to downsizing and video compression, with pose reconstruction degrading only after scaling videos down to a third of their original size or compression by a factor of 1000~\cite{MathisWarren2018speed}. \medskip Body parts should be labeled reliably and consistently across frames that preferably capture a variety of behaviors. Note that some packages provide the user with means to automatically extract frames differing in visual content based on unsupervised clustering, which simplifies the selection of relevant images for sparse behaviors. \medskip Utilize symmetries for training with augmentation and try to include image augmentations that are helpful. Use the strongest model (given the speed requirements). Check performance and actively grow the training set if errors are found. \medskip \input{box5} \begin{figure*}[b] \centering \includegraphics[width=.93\textwidth]{fig/figure8.jpg} \caption{{\bf Data Augmentation Improves Performance} Performance of three different augmentation methods on the same dataset of around 100 training images from one short video of one mouse (thus correlated). Scalecrop is configured to only change the scale, and randomly crop images; Imgaug also performs motion blur and rotation ($\pm 180^\circ$) augmentation. Tensorpack performs Gaussian noise and rotation ($\pm 180^\circ$) augmentation.
{\bf (A)} Loss over training iterations has plateaued, and {\bf (B)} test errors in pixels appear comparable for all methods. {\bf (C)} Tail base aligned skeletons across time for a video of a different mouse (displayed as a cross connecting snout to tail and left ear to right ear). Note the swap of the ``T'' in the shaded gray zone (and overlaid on the image to the right in {\bf (D)}). Imgaug and tensorpack, which also included full $180^\circ$ rotations, work perfectly. This example highlights that utilizing the rotational symmetry of the data during training can give excellent performance (without additional labeling). {\bf (E)} Performance of the networks on different mice recorded with the same camera (top) and a different camera ($\approx$ 2.5x magnification; bottom). Networks trained with tensorpack and imgaug augmentation generalize much better, and in particular generalize very well to different mice. The generalization to the other camera is difficult, but also works better for tensorpack and imgaug augmentation. {\bf (F)} Performance of networks on the same data as in (E), but after an active learning step, adding $28$ training frames from the higher resolution camera and training for a few thousand iterations. Afterwards, the network generalizes well to both scenarios. } \label{fig:AUG2} \end{figure*} Pose estimation algorithms can make different types of errors: jitter, inversion (e.g. left/right), swap (e.g. associating a body part with another individual) and miss~\cite{ruggero2017benchmarking}. Depending on the type of error, different causes need to be addressed (i.e., check the data quality for any human-applied mistakes~\cite{mathis2018deeplabcut}, use suitable augmentation methods). In some cases, post-processing filters (such as Kalman filters) can be useful, as can graphical models or other methods that learn the geometry of the bodyparts. We also believe that future work will explicitly model labeling errors during training. \section*{What to do with motion capture data?} Pose estimation with deep learning relieves the user of the painfully slow digitization of keypoints. With markerless tracking, you only need to annotate a much smaller dataset, and the trained network can be applied to new videos. Pose estimation also serves as a springboard to a plethora of other techniques. Indeed, many new tools are specifically being developed to aid users of pose estimation packages to analyze movement and behavioral outputs in a high-throughput manner. Plus, many such packages existed pre-deep learning and can now be leveraged with this new technology as well. While the general topic of what to do with the data is beyond this primer, we will provide a number of pointers. These tools fall into three classes: time series analysis, supervised, and unsupervised learning tools. \medskip A natural step ahead is the quantitative analysis of the keypoint trajectories. The computation of linear and angular displacements, as well as their time derivatives, lays the ground for detailed motor performance evaluation---a great introduction to elementary kinematics can be found in~\cite{Winter2009}, and a thorough description of 151 common metrics is given in~\cite{schwarz2019systematic}. These have a broad range of applications, of which we highlight a system for assessing >30 behaviors in groups of mice in an automated way~\cite{de2019real}, or an investigation of the evolution of gait invariants across animals~\cite{catavitello2018kinematic}.
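\medskip To give a flavor of such analyses, the following is a minimal sketch (with hypothetical function and variable names) of computing velocity, speed, acceleration and a segment orientation from a keypoint trajectory produced by a pose estimation package:

\begin{verbatim}
import numpy as np

def kinematics(xy, fps=30.0):
    # xy: (n_frames, 2) trajectory of one keypoint, in pixels.
    dt = 1.0 / fps
    velocity = np.gradient(xy, dt, axis=0)        # px/s per coordinate
    speed = np.linalg.norm(velocity, axis=1)      # scalar speed per frame
    acceleration = np.gradient(velocity, dt, axis=0)
    return velocity, speed, acceleration

def segment_angle(a, b):
    # Orientation (radians) of the segment from keypoint a to keypoint b,
    # e.g. from tailbase to snout; a and b are (n_frames, 2) arrays.
    d = b - a
    return np.arctan2(d[:, 1], d[:, 0])

xy = np.cumsum(np.random.randn(100, 2), axis=0)   # synthetic trajectory
vel, speed, acc = kinematics(xy)
\end{verbatim}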
Furthermore, kinematic metrics are the basis from which to deconstruct complex whole-body movements into interpretable motor primitives, non-invasively probing neuromuscular control~\cite{longo2019biomechanics}. Unsupervised methods such as clustering~\cite{Pedregosa2011}, MotionMapper~\cite{Berman2014}, MoSeq~\citep{wiltschko2015mapping}, or variational autoencoders~\cite{luxem2020identifying} allow the extraction of common ``kinematic behaviors'' such as turning, running, or rearing. Supervised methods allow the prediction of human-defined labels such as ``attack'' or ``freezing''. For this, general purpose tools such as scikit-learn~\cite{Pedregosa2011} can be ideal, or tailored solutions with integrated GUIs such as JAABA can be used~\citep{kabra2013jaaba}. Sturman et al. have developed an open source package to utilize motion capture outputs together with classifiers to automate human annotations for various behavioral tests (open field, elevated plus maze, forced swim test). They showed that these open source methods outperform commercially available platforms~\cite{sturman2020deep}. \medskip Kinematic analysis, together with simple principles derived from physics, also allows the calculation of the energy required to move about, a methodology relevant to understanding the mechanical determinants of the metabolic cost of locomotion (e.g.~\citealp{saibene2003biomechanical}) or informing the design of bio-inspired robots (e.g.~\citealp{li2017mechanical,nyakatura2019reverse}). \subsection*{Modeling and motion understanding} \justify Looking forward, we also expect that motion capture data will be used to learn task-driven and data-driven models of the sensorimotor as well as the motor pathway. We have recently provided a blueprint combining human movement data, inverse kinematics, biomechanical modeling and deep learning~\cite{sandbrink2020task}. Given the complexity of movement, as well as the highly nonlinear nature of sensorimotor processing~\cite{madhav2020synergy, nyakatura2019reverse}, we believe that such approaches will be fruitful for leveraging motion capture data to gain insight into brain function. \section*{Perspectives} As we have highlighted thus far in this primer, markerless motion capture has reached a mature state in only a few years due to the many advances in machine learning and computer vision. While there are still some challenges left~\cite{mathis2020deep}, this is an active area of research, and advances in training schemes (such as semi-supervised and self-supervised learning) and model architectures will provide further improvements and require even less manual labor. Essentially, now every lab can train appropriate algorithms for their application and turn videos into accurate measurements of posture. If setups are sufficiently standardized, these algorithms already broadly generalize, even across multiple laboratories as in the case of the International Brain Lab~\cite{Harris2020dataIBL}. But how do we get there, and how do we make sure the needs of animal pose estimation for neuroscience applications are met? \subsection*{Recent developments in deep learning} \justify Innovations in the field of object recognition and detection affect all aforementioned parts of the algorithm, as we discussed already in the context of using pre-trained representations. An emerging relevant research direction in machine learning is large scale semi-supervised and self-supervised representation learning (SSL).
In SSL, the problem of pre-training representations is no longer dependent on large labeled datasets, as introduced above. Instead, even larger databases composed of unlabeled examples---often multiple orders of magnitude larger than the counterparts used in supervised learning---can be leveraged. A variety of SSL algorithms are becoming increasingly popular in all areas of machine learning. Recently, representations obtained by large-scale self-supervised pre-training began to approach or even surpass the performance of the best supervised methods. Various SSL methods \cite{oord2018representation, logeswaran2018efficient, wu2018unsupervised, henaff2019data, tian2019contrastive, hjelm2018learning, bachman2019learning, he2019momentum, chen2020simple} made strides in image recognition \cite{chen2020simple}, speech processing \citep{schneider2019wav2vec,baevski2019vq,baevski2020wav2vec,ravanelli2020multi} and NLP~\cite{devlin2019bert,Liu2019roberta}, already starting to outperform models obtained by supervised pre-training on large datasets. Considering that recent SSL models for computer vision continue to be shared openly (e.g.~\citealp{xie2020noisy,chen2020simple}), they can be expected to impact and improve new model development in pose estimation, especially if merely replacing the backbone model is required. On top of that, SSL methods can be leveraged in end-to-end models for estimating keypoints and poses directly from raw, unlabeled video \cite{umer2020self, tung2017self, kocabas2019self}. Approaches based on graph neural networks \cite{scarselli2008graph} can encode priors about the observed structure and model correlations between individual keypoints and across time \cite{cai2019exploiting}. For some applications (like modeling soft tissue or volume), full surface reconstructions are needed, and this area has seen tremendous progress in recent years~\cite{guler2018densepose,sanakoyeu2020transferring, Zuffi2019ICCV}. Such advances can be closely watched and incorporated in neuroscience, but we also believe our field (neuroscience) is ready to innovate in this domain too. \subsection*{Pose estimation specifically for neuroscience} \justify The goals of human pose estimation---which, aside from the purely scientific advances for object detection, range from person localization in videos, self-driving cars and pedestrian safety, to socially aware AI---are related to, but do differ from, the applied goals of animal pose estimation in neuroscience. Here, we want tools that give us the highest precision, with the most rapid feedback options possible, and we want to train on small datasets but have them generalize well. This is a tall order, but so far we have seen that the glass is (arguably more than) half full. How do we meet these goals going forward? While much research is still required, there are essentially two ways forward: datasets and associated benchmarks, and algorithms. \subsection*{Neuroscience needs (more) benchmarks} \justify In order to push the field towards innovations in areas the community finds important, setting up benchmark datasets and tasks will be crucial (i.e., an animal version of ImageNet). The community can work towards sharing and collecting data of relevant tasks and curating it into benchmarks.
This also presents the opportunity of shifting the focus in computer vision research: instead of ``only'' doing human pose estimation, researchers will probably start evaluating on datasets directly relevant to the neuroscience community. Indeed, there has been recent interest in more animal-related work at top machine learning conferences~\cite{khan2020animalweb, sanakoyeu2020transferring}, and providing proper benchmarks for such approaches would be ideal. \medskip For animals, such efforts are developing: Khan et al. recently shared a dataset comprising 22.4K annotated faces from 350 diverse species~\cite{khan2020animalweb} and Labuguen et al. announced a dataset of 13K annotated macaque images~\cite{labuguen2020macaquepose}. We recently released two benchmark datasets that can be evaluated for state-of-the-art performance~\footnote{\href{https://paperswithcode.com}{paperswithcode.com}} on within-domain and out-of-domain data~\footnote{\href{http://horse10.deeplabcut.org}{horse10.deeplabcut.org}}. The motivation is to train on a limited number of individuals and test on held-out animals (the so-called ``out-of-domain'' issue)~\cite{mathis2019TRANSFER, mathisimagenet2020}. We picked horses due to the variation in coat colors (and provide >8K labeled frames). Secondly, to directly study the inherent shift in domain between individuals, we set up a benchmark for common image corruptions, as introduced by Hendrycks et al.~\cite{Hendrycks2019}, which uses the image corruptions library proposed by Michaelis et al.~\cite{michaelis2019dragon}. \medskip Of course, these aforementioned benchmarks are not sufficient to cover all the needs of the community, so we encourage consortium-style efforts to also curate data and provide additional benchmarks. Plus, making robust networks is still a major challenge, even when they are trained with large amounts of data~\cite{beery2018recognition, geirhos2020shortcut}. In order to make this a possibility, it will be important to develop and share common keypoint estimation benchmarks for animals as well as to expand the human ones to applications of interest, such as sports~\cite{huang2019followmeup}. \subsection*{Sharing Pre-trained Models} \justify We believe another major step forward will be sharing pre-trained pose estimation networks. If as a field we were to annotate sufficiently diverse data, we could train more robust networks that broadly generalize. Such success is promised by other large-scale datasets such as MS COCO~\cite{lin2014microsoft} and MPII pose~\cite{andriluka20142d}. In the computer vision community, sharing model weights such that models do not need to be retrained has been critical for progress. For example, the ability to download pre-trained ImageNet weights is invaluable---training ImageNet from scratch on a standard GPU can take more than a week. Now, those weights are downloaded within a few seconds and fine-tuned in packages like DeepLabCut. Even for custom training setups, sharing of code and easy access to cloud computing resources enable smaller labs to train and deploy models without investment in additional lab resources. Pre-training a typical object recognition model on the ILSVRC is now possible on the order of minutes for less than 100 USD \cite{coleman2017dawnbench} thanks to high-end cloud computing, which is also feasible for labs lacking the necessary on-site infrastructure (Box~\ref{box:hardware}).
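\medskip As a minimal sketch of why shared weights matter in practice, consider the following PyTorch-style snippet (the decoder head and keypoint count are purely illustrative, and the exact architectures differ between packages): a pre-trained ResNet-50 encoder is downloaded in seconds, so only a comparatively small decoder has to be trained.

\begin{verbatim}
import torch
import torchvision

# ImageNet-pretrained ResNet-50 (requires a recent torchvision);
# drop the classification head (average pooling and fc layer).
backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
encoder = torch.nn.Sequential(*list(backbone.children())[:-2])

# Illustrative decoder: one deconvolution layer producing one
# scoremap per bodypart (here 4 hypothetical keypoints).
head = torch.nn.ConvTranspose2d(2048, 4, kernel_size=4,
                                stride=2, padding=1)

frame = torch.randn(1, 3, 256, 256)  # stand-in for an input frame
heatmaps = head(encoder(frame))      # shape (1, 4, 16, 16)
\end{verbatim}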
\medskip In neuroscience, we should aim to fine-tune even those models; namely, sharing mouse-specific or primate-specific weights will drive interest and momentum from researchers without access to such data, and further drive innovations. Currently, only DeepLabCut provides model weights (albeit not at the time of the original publication) as part of the recently launched Model Zoo (\href{http://modelzoo.deeplabcut.org/}{modelzoo.deeplabcut.org}). At present, it contains models trained on MPII pose~\cite{insafutdinov2016deepercut}, dog and cat models, as well as contributed models for primate facial recognition, primate full body recognition~\cite{labuguen2020macaquepose} and mouse pupil detection (Figure~\ref{fig:workflow}). Researchers can also contribute in a citizen-science fashion by labeling data on the web (\href{http://contrib.deeplabcut.org}{contrib.deeplabcut.org}) or by submitting models. \medskip Both datasets and models will benefit from common formatting to ease sharing and testing. Candidate formats are HDF5 (also chosen by NeuroData Without Borders~\cite{teeters2015neurodata} and DeepLabCut), TensorFlow data\footnote{% \href{https://www.tensorflow.org/api_docs/python/tf/data}{tensorflow.org/api\_docs/python/tf/data}}, and/or PyTorch data\footnote{% \href{https://pytorch.org/docs/stable/torchvision/datasets.html}{pytorch.org/docs/stable/torchvision/datasets.html}}. Specifically, for models, protocol buffer formats for weights are useful and easy to share~\cite{Kane2020dlclive, lopes2015bonsai} for deployment to other systems. Platforms such as OSF and Zenodo allow banking of weights, and some papers (e.g.~\citealp{barrett2020manual, sturman2020deep}) have also shared their trained models. We envision that having easy-to-use interfaces to such models will be possible in the future. \medskip These pre-trained pose estimation networks hold several promises: they save time and energy (as different labs do not need to annotate and train networks), and they contribute to reproducibility in science. Like many other forms of biological data, such as genome sequences and functional imaging data, behavioral data is notoriously hard to analyze in standardized ways. Lack of agreement can lead to different results, as pointed out by a recent landmark study comparing the results achieved by 70 independent research teams analyzing nine hypotheses in shared imaging data~\cite{botvinik2020variability}. To increase reproducibility in behavioral science, video is a great tool~\cite{gilmore2017video}. Analyzing behavioral data is complex, owing to its unstructured, large-scale nature, which highlights the importance of shared analysis pipelines. Thus, building robust architectures that extract the same behavioral measurements in different laboratories would be a major step forward. \section*{Conclusions} Deep learning based markerless pose estimation has been broadly and rapidly adopted in the past two years. This impact was, in part, fueled by open-source code: by developing and sharing packages in public repositories on GitHub, they could be easily accessed for free and at scale. These packages are built on advances (and code) in computer vision and AI, which has a strong open science culture. Neuroscience also has a strong and growing open science culture~\cite{white2019future}, which greatly impacts the field as evidenced by tools from the Allen Institute, the UCLA Miniscope~\cite{aharoni2019all}, OpenEphys~\cite{siegle2017open}, and Bonsai~\cite{lopes2015bonsai} (just to name a few).
\medskip Moreover, neuroscience and AI have a long history of influencing each other~\cite{hassabis2017neuroscience}, and research in neuroscience will likely contribute to making AI more robust~\cite{SINZ2019, hassabis2017neuroscience}. The analysis of animal motion is a highly interdisciplinary field at the intersection of biomechanics, computer vision, medicine and robotics, with a long tradition~\cite{klette2008understanding}. The recent advances in deep learning have greatly simplified the measurement of animal behavior, which, as we and others believe~\cite{krakauer2017neuroscience}, in turn will greatly advance our understanding of the brain. \begin{flushleft} \textbf{Acknowledgments:} \end{flushleft} We thank Yash Sharma for discussions around future directions in self-supervised learning, and Erin Diel, Maxime Vidal, Claudio Michaelis, and Thomas Biasi for comments on the manuscript. Funding was provided by the Rowland Institute at Harvard University (MWM, AM), the Chan Zuckerberg Initiative (MWM, AM, JL) and the German Federal Ministry of Education and Research (BMBF) through the Tübingen AI Center (StS; FKZ: 01IS18039A). StS thanks the International Max Planck Research School for Intelligent Systems (IMPRS-IS) and acknowledges his membership in the European Laboratory for Learning \& Intelligent Systems (ELLIS) PhD program. The authors declare no conflicts of interest. M.W.M. dedicates this work to Adam E. Max. \section*{References} \input{manuscript.bbl} \end{document}
\section{The geometric Hall algebra} \label{geoHall} \begin{Notation} From now on we will shorten $S_{\ordop{n}}\mathcal{C}$ to $S_{n}$. \end{Notation} \subsection{The associative multiplication} The spaces $S_1$ and $S_2$ of the Waldhausen construction are used to define the multiplication in the Hall algebra. Recall that $S_1$ is the stack of objects of $\mathcal{C}$ and $S_2$ is the stack of short exact sequences. We have the maps $mid:S_2 \to S_1$ and $end:S_2 \to S_1\times S_1$ given by \begin{align*} mid(0\to U\to V\to W\to 0)&=V\\ end(0\to U\to V\to W\to 0)&=(U,W) \end{align*} This gives us a correspondence \[ \stik{1}{ {} \& S_2 \ar{dl}[above, xshift=-0.5em]{end} \ar{dr}{mid} \& {} \\ S_1\times S_1 \& {} \& S_1 } \] Our objective is to explicitly present the associativity data for this multiplication. The usual approach to showing the associativity of the multiplication on the 1-categorical level comes down to considering the associativity square, i.e. the square \[ \stik{1}{ S_1^3 \& \& S_2\times S_1 \ar{ll} \ar{rr} \& \& S_1\times S_1 \\ S_1\times S_2 \ar{u} \ar{d} \& P_2 \ar{dr} \ar{l} \& {} \& P_1 \ar{ul} \ar{r}\& S_2 \ar{u} \ar{d} \\ S_1\times S_1 \& \& S_2 \ar{ll} \ar{rr} \& \& S_1 } \] To show that this square commutes in the 1-category of correspondences, one takes the pullbacks $P_1$ and $P_2$ corresponding to the compositions of the upper and right and of the left and lower sides in the category of correspondences and shows that they are isomorphic. In \cite{KapranovDyckerhoff} this is shown using relations between $S_1,S_2,S_3$, which are an instance of what the authors call the 2-Segal conditions. However, if we want to consider the higher associativity data, we have to adopt a more systematic approach. For instance, in this square we will have an explicitly specified middle term \[ \stik{1}{ S_1^3 \& S_2\times S_1 \ar{l} \ar{r} \& S_1\times S_1 \\ S_1\times S_2 \ar{u} \ar{d} \& X\ar{u} \ar{d} \ar{l} \ar{r} \& S_2 \ar{u} \ar{d} \\ S_1\times S_1 \& S_2 \ar{l} \ar{r} \& S_1 } \] where the appropriate squares are pullbacks. In this case $X=S_3$, and this condition recovers the 2-Segal condition. We will explicitly construct a system of $n$-cubes serving as candidates for higher associativity data. That this data provides higher isomorphisms in the category of correspondences is equivalent to the 2-Segal conditions of \cite{KapranovDyckerhoff} (see \autoref{2Segalpullbacks}). \subsection{\texorpdfstring{$n$}{n}-cubes of correspondences} \label{Corr} We describe what we mean by an $n$-cube in the category of correspondences. For $\mathcal{D}$ a category, we take $\Corr(\mathcal{D})$ to be the system of cubes (formally a \emph{cubical set}) with points being objects of $\mathcal{D}$ and arrows being correspondences (spans) in $\mathcal{D}$ \[ \stik{1}{ A \& E \ar{l} \ar{r} \& B } \] A 2-cube is a diagram of the form \begin{equation} \label{CorrSquare} \stik{1}{ A \& E \ar{l} \ar{r} \& B \\ F \ar{u} \ar{d}\& Z \ar[red,thick]{u} \ar[red,thick]{d}\ar[red,thick]{l} \ar[red,thick]{r} \& G\ar{u} \ar{d} \\ C \& H \ar{l} \ar{r} \& D \\ } \end{equation} We will say that a 2-cube \emph{commutes} if the upper right and lower left squares are Cartesian. This is equivalent to the usual notion in the 1-category of correspondences.
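Concretely, and merely unpacking this definition in the notation of the diagram above: the 2-cube commutes precisely when the natural maps \[ Z \longrightarrow E\times_{B} G \qquad\text{and}\qquad Z \longrightarrow F\times_{C} H \] are equivalences. Since the composite of the correspondences $A \leftarrow E \rightarrow B$ and $B \leftarrow G \rightarrow D$ is $A \leftarrow E\times_{B} G \rightarrow D$, this says exactly that $Z$ simultaneously presents the two composites around the square, just as the pullbacks $P_1$ and $P_2$ did above.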
A 3-cube is a diagram \begin{equation*} \label{CorrCube} \stik{0.7}{ {} \& \& \phantom{XX}A_3 \& \& \phantom{XX}E_3 \ar[xshift=2em,shorten >=-2em]{ll} \ar{rr} \& \& B_3\\ \\ {} \& A_2 \ar{uur} \ar{ddl} \& \& E_2 \ar{uur} \ar{ddl} \ar{ll} \ar{rr} \& \& B_2 \ar{uur} \ar{ddl} \\ {} \& {}\& \phantom{XX}F_3 \ar[xshift=1em]{uuu} \ar[xshift=1em]{dddd}\& {} \& \phantom{XX}Z_3 \ar{rr} \ar[xshift=2em,shorten >=-2em]{ll} \ar[xshift=1em]{uuu} \ar[xshift=1em]{dddd}\& {} \& G_3 \ar{uuu} \ar{dddd}\\ A_1 \& \& E_1 \ar{ll} \ar{rr} \& \& B_1 \& \&\\ \\ {} \& F_2 \ar{uuur} \ar{ddl} \ar{uuuu} \ar{dddd}\& \& Z_2 \ar[red,thick]{uuur} \ar[red,thick]{ddl} \ar[red,thick]{uuuu} \ar[red,thick]{dddd}\ar[red,thick]{ll} \ar[red,thick]{rr} \& \& G_2 \ar{uuur} \ar{ddl} \ar{uuuu} \ar{dddd}\\ {} \& {}\&\phantom{XX}C_3 \& {}\&\phantom{XX}H_3 \ar{rr} \ar[xshift=2em,shorten >=-2em]{ll} \& {}\& D_3\\ F_1 \ar{uuuu} \ar{dddd}\& \& Z_1 \ar{uuuu} \ar{dddd}\ar{ll} \ar{rr} \& \& G_1\ar{uuuu} \ar{dddd} \& \& \\ \\ {} \& C_2 \ar{uuur} \ar{ddl} \& \& H_2 \ar{uuur}\ar{ddl} \ar{ll} \ar{rr} \& \& D_2 \ar{uuur} \ar{ddl} \\ \\ C_1 \& \& H_1 \ar{ll} \ar{rr} \& \& D_1 } \end{equation*} We will say that a cube of correspondences commutes if the cubes in the upper-right-back and lower-left-front corners are \emph{pullback cubes} (see \autoref{PullbackCube}). This coincides with the usual notion of commutativity of a cube in a 2-category in the same way as above for squares. \begin{Definition} \label{commutativecubes} Representing the vertices in an $n$-cube by sequences of 0's and 1's, we can now define the higher commutative cubes by the property that the two $n$-cubes generated by all faces containing the vertex $(1,0,1,0,\ldots)$ or all faces containing the vertex $(0,1,0,1,\ldots)$ are pullbacks. \end{Definition} \subsection{The combinatorial construction} \label{Hcomb} Our construction of the associativity data splits into two parts. We first construct the system of $n$-cubes in the category $\Corr(\sset^{op})$. Next we apply the extension of the Waldhausen construction described in \autoref{Waldhausen}, $S^{ext}$, object-wise to obtain a system of $n$-cubes in $\Corr(\Spaces)$. We obtain a functor $H_{comb}: \mathbb{\Delta}_+ \rightarrow \Corr(\sset^{op})$. By composing with the extended Waldhausen construction we then obtain a functor $H_{geo}: \mathbb{\Delta}_+ \rightarrow \Corr(\Spaces)$. Formally, the most straightforward way is to consider these functors as functors between categories modelled by cubical sets. However, we do not wish to enter into technical details and limit ourselves to the very explicit construction which is the objective of this article. This construction is obviously functorial in any conceivable sense. We start from the combinatorial construction. For every $n$-cube in $\mathbb{\Delta}_+$ we construct a corresponding $n$-cube in $\Corr(\sset^{op})$. \begin{Definition} For $X\in\mathbb{\Delta}_+$ define the \emph{augmentation} of $X$, $\aug{X}$, to be $\Hom_\mathbb{\Delta}(X,0\rightarrow 1)$ considered as an object in $\mathbb{\Delta}^{op}$. \end{Definition} This sends the totally ordered set with $n$ elements in $\mathbb{\Delta}_+$ (which we denote $\ord{n}$) to the totally ordered set with $n+1$ elements in $\mathbb{\Delta}^{op}$ (which we denote $\ordop{n}$). \subsubsection{Construction for objects} \label{HcombObjects} Let $X\in\mathbb{\Delta}_+$.
Each element $x\in X$ determines an embedding of $\aug{\{x\}}$ in $\aug{X}$ as follows: given a map $\{x\} \to \{0\rightarrow 1\}$, we extend it to a map $X \to \{0 \rightarrow 1\}$ by setting it to $0$ on the elements of $X$ lower than $x$ and to $1$ on the elements higher than $x$. Consider the sub-simplicial set of $\aug{X}$ (considered as an object in $\sset^{op}$) generated by these embeddings. We will denote this simplicial set by $H_{comb}(X)$. \begin{Example} The first few values of $H_{comb}$ on the objects of $\mathbb{\Delta}_+$ are as follows: \begin{itemize} \item $H_{comb}(\ord{0})=\ordop{0}$ \item $H_{comb}(\ord{1})=\ordop{1}$ \item $H_{comb}(\ord{2})$ is the horn \includegraphics[scale=0.3]{Figures/2HornComb.pdf} \item $H_{comb}(\ord{3})$ is \includegraphics[scale=0.3]{Figures/3HornComb.pdf} \end{itemize} \end{Example} From the description in \autoref{Waldhausen} it is clear how $S$ extends to these objects. Namely, we have \begin{itemize} \item $\ordop{0}\xmapsto{S^{ext}}S_0(\mathcal{C})=\point$ \item $\ordop{1}\xmapsto{S^{ext}}S_1(\mathcal{C})$ \item $\includegraphics[scale=0.3]{Figures/2HornComb.pdf} \xmapsto{S^{ext}}S_1(\mathcal{C})\times S_1(\mathcal{C})\hookrightarrow S_2(\mathcal{C})$ \item $\includegraphics[scale=0.3]{Figures/3HornComb.pdf}\xmapsto{S^{ext}}S_1(\mathcal{C})\times S_1(\mathcal{C})\times S_1(\mathcal{C}) \hookrightarrow S_3(\mathcal{C})$ \end{itemize} \subsubsection{Construction for arrows} Let $X \xrightarrow{f} Y$ be a map in $\mathbb{\Delta}_+$. We want to associate to it a correspondence in $\sset^{op}$. Let $y\in Y$ and denote by $X_y$ the preimage of $y$ under $f$. Similarly to the above, we get an embedding of $\aug{X_y}$ in $\aug{X}$. Denote the sub-simplicial set generated by these embeddings for all $y$ by $H_{comb}(f)$. Then we have a natural correspondence \[ H_{comb}(X) \rightarrow{} H_{comb}(f) \leftarrow H_{comb}(Y) \] \begin{Example} The multiplication is the image of the map $\ord{2} \to \ord{1}$, and on the level of $H_{comb}$ this map goes to \[ \includegraphics[scale=0.5]{Figures/2MultComb.pdf} \] $S^{ext}$ then sends the middle object to the stack of short exact sequences, and the maps to the restriction to the endpoints or to the middle, respectively. This is the correspondence defining the multiplication in the Hall algebra. \end{Example} \subsubsection{Construction for squares} Given a square in $\mathbb{\Delta}_+$ \[ \stik{1}{ X \ar{r}{f} \ar{d}{g} \& Y \ar{d}{h}\\ Z \ar{r}{k} \& W } \] Similarly to what we did with arrows, we have a map $X\xrightarrow{\alpha}W$ and we then construct the square of correspondences \[ \stik{1}{ H_{comb}(X) \ar{r} \ar{d} \& H_{comb}(f) \ar{d}{}\& H_{comb}(Y) \ar{l} \ar{d} \\ H_{comb}(g) \ar{r} \& H_{comb}(\alpha) \& H_{comb}(h) \ar{l}{} \\ H_{comb}(Z) \ar{u} \ar{r} \& H_{comb}(k) \ar{u} \& H_{comb}(W) \ar{l} \ar{u} } \] where $H_{comb}(\alpha)$ is the sub-simplicial set in $\aug{X}$ generated by the embeddings of $\aug{(h\circ f)^{-1}(w)}$ for all $w\in W$. The construction for general $n$-cubes following this example is straightforward and we omit the tedious general definition. See \autoref{2Segalpullbacks} for more examples. \subsection{The higher associativity cubes} The higher associators in the Hall algebra are given by the images of the following family of commutative cubes in $\mathbb{\Delta}_+$: \begin{Lemma} For any $n\geq 3$ there is a unique commutative $(n-1)$-cube in $\mathbb{\Delta}_+$ that contains all of the surjections $\ord{j}\to\ord{j-1}$ for all $1\leq j\leq n$. \end{Lemma} We call such a cube the \emph{$n$-associativity cube}.
The paths from $\ord{n}$ to $\ord{1}$ on the $n$-associativity cube correspond exactly to all ways of bracketing $n$ letters. \begin{Example} For $n=3$ the $n$-associativity cube is the square \[ \stik{1}{ \ord{3} \ar[two heads]{r} \ar[two heads]{d} \& \ord{2} \ar[two heads]{d} \\ \ord{2} \ar[two heads]{r} \& \ord{1} } \leftrightarrow \left(\alpha: (XY)Z\to X(YZ)\right) \] \end{Example} \begin{Example} For $n=4$ the $n$-associativity cube is the cube \begin{gather*} \stik{1}{ \& \ord{3} \ar[two heads]{rr} \ar[two heads]{dd} \& {} \& \ord{2} \ar[two heads]{dd}\\ \ord{4} \ar[two heads,crossing over]{rr} \ar[two heads]{dd} \ar[two heads]{ur} \& {} \& \ord{3} \ar[two heads]{dd} \ar[two heads]{ur} \& {}\\ {} \& \ord{2} \ar[two heads]{rr} \& {} \& \ord{1}\\ \ord{3} \ar[two heads]{rr} \ar[two heads]{ur} \& {} \& \ord{2} \ar[two heads]{ur} \latearrow{commutative diagrams/crossing over,commutative diagrams/two heads}{2-3}{4-3}{} } \\ \updownarrow\\ \stik{1}{ {} \& (X(YZ))W \ar{r}{\alpha_{{}_{X,Y\cdot Z,W}}}\& X((YZ)W) \ar{dr}{X\cdot \alpha}\\ ((XY)Z)W \ar{ur}{\alpha\cdot W} \ar{dr}{\alpha_{{}_{X\cdot Y,Z,W}}} \& {} \& {} \& X(Y(ZW))\\ {} \& (XY)(ZW) \phantom{X}\ar[shorten <=-1em,shorten >=-1em]{r}{\Id_{XY}\cdot\Id_{ZW}} \& \phantom{X}(XY)(ZW) \ar{ur}{\alpha_{{}_{X,Y,Z\cdot W}}} } \end{gather*} The image of this cube under $H_{comb}$ and the extended Waldhausen construction is a cube of correspondences of stacks, where each face has a 2-morphism specified by $S$. These 2-morphisms compose along the edges and form a diagram whose commutativity is equivalent to the pentagon identity (this diagram is the pentagon diagram, with one of the edges equal to $\Id$). \end{Example} The higher associativity cubes correspond to higher associators. \subsection{Recovering the 2-Segal conditions} This section contains the proof of the following: \label{2Segalpullbacks} \begin{Theorem} \label{Corr0} The 2-Segal conditions satisfied by the original Waldhausen construction are equivalent to the requirement that the functor $H_{geo}:\mathbb{\Delta}_+ \rightarrow \Corr(\Spaces)$ sends the higher associativity cubes to commutative cubes in $\Corr(\Spaces)$. \end{Theorem} Let us recall the 2-Segal condition from \cite[\S 2.3]{KapranovDyckerhoff}: Let $S:\mathbb{\Delta}\to\Spaces$ be a functor, and let $P$ be a polygonal decomposition of an $n$-gon, written as $(P_1,\ldots,P_k)$, e.g. \[ \input{Figures/PolygonalDecomp.tex} \] We define $S_P$ to be $S_{P_1}\times_{S_{P_1\cap P_2}} S_{P_2}\times_{S_{P_2\cap P_3}} S_{P_3}\cdots \times_{S_{P_{k-1}\cap P_k}} S_{P_k}$, where $S_{P_i}$ is $S_{\ordop{\#\{\text{vertices of }P_i\}}}$, and note that by functoriality we have a natural map $S_{\ordop{n}}\to S_P$. \begin{Definition} A functor $S:\mathbb{\Delta}\to\Spaces$ is said to be a 2-Segal stack if for any $n\geq 3$ and any polygonal decomposition $P$ of an $n$-gon the map $S_{\ordop{n}}\to S_P$ is a weak equivalence. \end{Definition} For our purposes we need the following dual version of Proposition 2.3.2 from \cite{KapranovDyckerhoff}: \begin{Proposition} The following are equivalent \begin{enumerate} \item $S$ is a 2-Segal stack. \item For any $n\geq 3$ and any two disjoint subsets $I,J$ of $\ordop{n}$ that do not cover all the vertices of $\ordop{n}$ the following square \[ \stik{1}{ S_{\ordop{n}} \ar{r} \ar{d} \& S_{\ordop{n}\setminus I} \ar{d}\\ S_{\ordop{n}\setminus J} \ar{r} \& S_{\ordop{n}\setminus (I\cup J)} } \] is a pullback.
\item For any $n\geq 3$ and any $0\leq i < j\leq n$ of $\ordop{n}$ the following square \[ \stik{1}{ S_{\ordop{n}} \ar{r} \ar{d} \& S_{\ordop{n}\setminus \{i\}} \ar{d}\\ S_{\ordop{n}\setminus \{j\}} \ar{r} \& S_{\ordop{n}\setminus \{i,j\}} } \] is a pullback. \item For any $n\geq 3$ and any $0\leq i\leq n-2$ of $\ordop{n}$ the following square \[ \stik{1}{ S_{\ordop{n}} \ar{r} \ar{d} \& S_{\ordop{n}\setminus \{i\}} \ar{d}\\ S_{\ordop{n}\setminus \{i+2\}} \ar{r} \& S_{\ordop{n}\setminus \{i,i+2\}} } \] is a pullback. \end{enumerate} \end{Proposition} Denote by $C^n_i$ the conditions in part $(4)$. \begin{proof} Obviously $(2)\Rightarrow (3) \Rightarrow (4)$. $(1)\Rightarrow (2)$ because $(2)$ is equivalent to a special case of $(1)$ for the decomposition corresponding to \[\ordop{n}=I\cup J\cup (\ordop{n}\setminus (I\cup J)).\] It is also easy to see that any polygonal decomposition can be built up from such decompositions, so $(2)\Rightarrow (1)$. $(3)\Rightarrow (2)$ holds similarly, because we can remove the points of $I,J$ one by one. $(4)\Rightarrow (1),(2),(3)$: Consider \[ \stik{1}{ S_{\ordop{n}} \ar{r} \ar{d} \& S_{\ordop{n}\setminus \{i\}} \ar{d} \ar{r} \& S_{\{i+1,i+2,i+3\}} \ar{d}\\ S_{\ordop{n}\setminus \{i+2\}} \ar{r} \& S_{\ordop{n}\setminus \{i,i+2\}} \ar{r} \& S_{\{i+1,i+3\}} } \] The right square is a pullback by induction on $n$, and the left square is a pullback by assumption, so the outer rectangle is a pullback. In terms of polygonal decompositions this means that we can split off any triangle from the polygon, and so we can reach any triangulation. Since any polygonal decomposition can be refined to a triangulation, we are done. \end{proof} \begin{proof}[Proof of \autoref{Corr0}] Let us first consider the case $n=3$. The associativity square is \[ \stik{1}{ \ord{3} \ar{r} \ar{d} \& \ord{2} \ar{d} \\ \ord{2} \ar{r} \& \ord{1} } \] Its image under $H_{comb}$ is \[ \includegraphics[scale=0.8]{Figures/AssocInSset.pdf} \] In order for this square to go to a commutative square of correspondences of stacks (see \autoref{commutativecubes}), $S^{ext}$ should take the squares in the upper right (i.e.\ $(1,0)$) and lower left (i.e.\ $(0,1)$) corners to pullback squares. This is easily seen to be equivalent to the conditions $C^3_1$ and $C^3_0$. Now consider $n=4$. The associativity cube is \begin{equation} \label{PentagonCube} \stik{1}{ \& \ord{3} \ar{rr} \ar{dd} \& {} \& \ord{2} \ar{dd}\\ \ord{4} \ar[crossing over]{rr} \ar{dd} \ar{ur} \& {} \& \ord{3} \ar{dd} \ar{ur} \& {}\\ {} \& \ord{2} \ar{rr} \& {} \& \ord{1}\\ \ord{3} \ar{rr} \ar{ur} \& {} \& \ord{2} \ar{ur} \latearrow{commutative diagrams/crossing over}{2-3}{4-3}{} } \end{equation} Its image under $H_{comb}$ is a cube of correspondences in $\sset^{op}$, that is, a $2\times 2\times 2$ grid of commutative cubes in $\sset^{op}$ such that the outer shell consists of the images of the faces of the cube \autoref{PentagonCube} and the center is the 4-simplex. In order for this cube to be commutative, by \autoref{commutativecubes} the cubes in the upper-right-back (i.e.\ $(1,0,1)$) and lower-left-front (i.e.\ $(0,1,0)$) corners should go to pullback cubes under $S^{ext}$. Let us first consider the upper-right-back cube: \[ \includegraphics[scale=1.2]{Figures/HigherAssocURB.pdf} \] Its top face goes to a product of trivial squares, hence a pullback. Therefore by \autoref{pullbackcubeCor} the cube is a pullback iff the bottom face is a pullback, and this is exactly $C^4_1$.
Now consider the lower-left-front cube: \[ \includegraphics[scale=1.2]{Figures/HigherAssocLLF.pdf} \] Its left and front faces correspond to the conditions $C^3_0,C^3_1$, which follow from the previous case, and so for the cube to be a pullback we need either the right or the back face to be a pullback (in which case the other is as well). These give us the conditions $C^4_2,C^4_0$. We continue by induction, noting that sub-cubes of associator cubes are disjoint unions of lower-dimensional associator cubes. The conditions $C^n_{2k}$ are recovered from the cube in the $(0,1,0,1,\ldots)$ corner being a pullback cube and the conditions $C^n_{2k+1}$ are recovered from the cube in the $(1,0,1,0,\ldots)$ corner being a pullback cube. \end{proof} \section{Introduction} To an abelian category $\mathcal{C}$ (with appropriate restrictions) one can assign an algebra $A$ called the \emph{Hall algebra} of $\mathcal{C}$. The first major application was in \cite{ringel1990hall}, where Ringel showed that the positive half of the quantum group is isomorphic to the Hall algebra of the category $\Rep_{\mathbbm{F}_q}(Q)$ for $Q$ the quiver corresponding to the Dynkin diagram. \begin{Example} Take $Q$ to be the quiver with one point; then $\Rep_{\mathbbm{F}_q}(Q)$ is just $\mathbf{Vect}_{\mathbbm{F}_q}$. The Hall algebra $A$ has basis $\mathbf{n},n\in\mathbbm{N}$ and multiplication given by \[ \mathbf{n}\cdot\mathbf{m}=\#\left(\{0\to\mathbbm{F}_q^n\to\mathbbm{F}_q^{n+m}\to\mathbbm{F}_q^m\to 0\}/\sim\right)\cdot(\mathbf{n+m}) \] Even in this simplest example we see that geometry is evident in the Hall algebra, as the coefficient appearing is obviously the number of points of an algebraic variety (a Grassmannian in this case). \end{Example} Lusztig used this observation in \cite{lusztig1991quivers} to construct a canonical basis for the Hall algebra (and hence for the positive halves of quantum groups) and to prove positivity for the canonical basis. In this article we describe a coherent construction of a system of geometric objects which govern the Hall algebra, and which have inherent in them a higher associative structure. In \cite{KapranovDyckerhoff} the authors consider the connection between Hall algebras and the Waldhausen construction. They introduce the notion of a 2-Segal space and show that the Waldhausen construction for an abelian category $\mathcal{C}$ is a 2-Segal space. This fact in particular implies the associativity of the Hall algebra. Moreover, they indicate the connection between the 2-Segal conditions and the higher associativity constraints. We explore this subject systematically in the present work. In \autoref{Corr0} we state the connection between the 2-Segal conditions and the higher associativity data we construct. In \cite{KapranovDyckerhoff} the authors suggest studying Hall algebras on the level of the category of correspondences. Consider the groupoids (or stacks) of objects of a category, $\Ob$, and of exact sequences, $\Exact$. Then the correspondence \[ \stik{1}{ {} \& \Exact \ar{dl}[above, xshift=-0.5em]{end} \ar{dr}{mid} \& {} \\ \Ob\times \Ob \& {} \& \Ob } \] defines a monoidal structure on $\Ob$. Using a composition of appropriate push- and pull-functors this correspondence can be used to define multiplication in the Hall algebra of (finitely supported) functions on groupoids and its geometric analog constructed using sheaves. We construct a coherent system of higher associativity data for this structure.
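\begin{Remark} For the reader who wants to experiment, the structure constant in the example above is easy to compute: for $\mathcal{C}=\mathbf{Vect}_{\mathbbm{F}_q}$ the coefficient of $\mathbf{n+m}$ in $\mathbf{n}\cdot\mathbf{m}$ is the number of $n$-dimensional subspaces of $\mathbbm{F}_q^{n+m}$, i.e.\ the number of $\mathbbm{F}_q$-points of a Grassmannian, which is a Gaussian binomial coefficient. The following Python sketch (our illustration, not part of the construction) computes it directly. \begin{verbatim}
def gaussian_binomial(n, k, q):
    # number of k-dimensional subspaces of F_q^n
    # (= number of F_q-points of the Grassmannian Gr(k, n))
    num, den = 1, 1
    for i in range(k):
        num *= q**(n - i) - 1
        den *= q**(k - i) - 1
    return num // den

def hall_product(n, m, q):
    # n . m = (number of subspaces) * (n + m) in the Hall
    # algebra of Vect_{F_q}; returns (coefficient, basis element)
    return gaussian_binomial(n + m, n, q), n + m

print(gaussian_binomial(3, 1, 2))  # 7, the number of lines in F_2^3
\end{verbatim} \end{Remark}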
This data is provided by an explicit system of $n$-cubes of correspondences for any $n$. Altogether this construction organizes into a functor $\mathbb{\Delta}_+\rightarrow \Corr(\Spaces)$. In our forthcoming article \cite{ourGeometricHall2} we extend this to a functor from $\mathbb{\Delta}_+\otimes\mathbb{\Delta}_+$ which captures a bi-monoidal structure associated to Hall algebras. Such a construction was also attempted in \cite{Lyubashenko} from a different point of view. Several interesting examples of Hall algebras can then be obtained from this abstract setting using so-called \emph{transfer theories}. The examples treated in \cite{KapranovDyckerhoff} include Ringel's Hall algebra associated to the category of representations of a quiver, Lusztig's geometric Hall algebra, To\"en's derived Hall algebra, Joyce's motivic Hall algebra, and Kontsevich-Soibelman's cohomological Hall algebra. In this article we outline a transfer theory to the category of constructible sheaves. The difference from the examples listed above is that we obtain a monoidal category as opposed to an algebra and thus have to make use of the higher associators mentioned above. We show that the 2-Segal conditions of \cite{KapranovDyckerhoff} are equivalent to our associators going to isomorphisms under this transfer construction. Finally, we suggest an approach to constructing categorified representations in \autoref{Representations}. In what follows we will assume, for ease of presentation, that we work with finitary abelian categories. However, the construction we describe can be straightforwardly generalized to different settings, such as those in the list of examples above. \section{Notations} $\mathbb{\Delta}$ - the category of finite ordered sets. The elements of $\mathbb{\Delta}$ will be denoted by \[\ordop{0}=0, \ordop{1}=0\rightarrow 1, \ordop{2}=0\rightarrow 1\rightarrow 2, \ldots\] $\mathbb{\Delta}_+$ - the augmented category of finite ordered sets. The elements of $\mathbb{\Delta}_+$ will be denoted by \[\ord{0}=\emptyset, \ord{1}=0, \ord{2}=0\rightarrow 1, \ord{3}=0\rightarrow 1\rightarrow 2, \ldots\] \section{Proto-abelian categories} \label{ProtoAbelian} In \cite{Dyckerhoff} the notion of a proto-abelian category is described. We briefly recall the definition here and define what we call an ``extended'' proto-abelian category. \begin{Definition} A \emph{proto-abelian category} is the data of a pointed (i.e.\ possessing a $0$ object) category $\mathcal{C}$ together with two classes of morphisms $E,M$ (``epis'' and ``monos'') which satisfy: \begin{enumerate} \item $\mathcal{C}$ has all pushouts and pullbacks of epis along monos. \item Pushouts or pullbacks of monos (epis) along epis (monos) are monos (epis). \item A square is a pushout of an epi along a mono iff it is a pullback of an epi along a mono. \end{enumerate} \end{Definition} The only thing we need to change in the above definition is the pointedness of $\mathcal{C}$. \begin{Definition} A \emph{pseudo-zero} object of a category $\mathcal{C}$ is an object $Z$ such that we have $\#\Hom(Z,X)\leq 1$ for any $X\in\mathcal{C}$ or $\#\Hom(X,Z)\leq 1$ for any $X\in\mathcal{C}$. \end{Definition} \begin{Definition} An \emph{extended} proto-abelian category is the same as a proto-abelian category, except that instead of being pointed it is required to satisfy that any object has a map to and from some pseudo-zero object. We call the minimal collection of pseudo-zero objects with this property the ``zero subcategory''.
\end{Definition} A morphism of extended proto-abelian categories is a functor $\mathcal{C}\to\mathcal{D}$ preserving epis and monos, their pullbacks, and zero subcategories. \begin{Example} The category $\operatorname{grid}(X)=\Hom(0\to 1,X)$ defined in \autoref{Waldhausen} is an extended proto-abelian category with zero subcategory the constant maps. \end{Example} \begin{Proposition} Let $\mathcal{C}$ be a proto-abelian category and $X\in\mathcal{C}$. Then $\mathcal{C}_{/X}$ has a natural structure of an extended proto-abelian category. \end{Proposition} \begin{proof} Define epis via the forgetful functor, and monos to be maps $(S\xrightarrow{0} X)\to(T\to X)$ where the underlying map $S\to T$ is a mono. The conditions are then easy to verify, and the zero subcategory consists of $(0\to X)$ and $(X\xrightarrow{\Id}X)$. \end{proof} \section{Pullback cubes} \begin{Definition} \label{PullbackCube} A commutative cube is said to be a \emph{pullback} cube if it presents the vertex $X=000\ldots$ as the limit of the rest of the diagram, i.e.\ for any other commutative cube with $\widetilde{X}=000\ldots$ there exists a unique morphism $\widetilde{X}\to X$ making everything commute. \end{Definition} \begin{Lemma} \label{pullbackcubeLemma} Consider a commutative cube in some category \[ \stik{1}{ \& A \ar{rr} \ar{dd} \& {} \& B \ar{dd}\\ X \ar[crossing over]{rr} \ar{dd} \ar{ur} \& {} \& Y \ar{dd} \ar{ur} \& {}\\ {} \& C \ar{rr} \& {} \& D\\ Z \ar{rr} \ar{ur} \& {} \& W \ar{ur} \latearrow{commutative diagrams/crossing over}{2-3}{4-3}{} } \] and suppose that $ABCD$ is a pullback square. Then $XYZW$ is a pullback square if and only if the whole cube is a pullback cube. \end{Lemma} \begin{proof} For transparency let us present a proof in the 1-categorical case; the appropriate generalization is straightforward. Assume $XYZW$ is a pullback square. Consider another commutative cube \[ \stik{1}{ \& A \ar{rr} \ar{dd} \& {} \& B \ar{dd}\\ \widetilde{X} \ar[crossing over]{rr} \ar{dd} \ar{ur} \& {} \& Y \ar{dd} \ar{ur} \& {}\\ {} \& C \ar{rr} \& {} \& D\\ Z \ar{rr} \ar{ur} \& {} \& W \ar{ur} \latearrow{commutative diagrams/crossing over}{2-3}{4-3}{} } \] We want to show that there is a unique map $\widetilde{X}\to X$ that makes everything commute. Since $XYZW$ is a pullback square we have a unique map $\widetilde{X}\to X$ such that $\widetilde{X}Y=XY\circ\widetilde{X}X$ and $\widetilde{X}Z=XZ\circ\widetilde{X}X$. We just need to show that $\widetilde{X}A=XA\circ\widetilde{X}X$. This follows because both sides are maps $\widetilde{X} \to A$ which make the diagram \[ \stik{1}{ \widetilde{X} \ar{dr} \ar{drr} \ar{ddr} \\ {} \& A \ar{d} \ar{r}\& B \ar{d} \\ {} \& C \ar{r} \& D } \] commute, and since we assumed $ABCD$ is a pullback such a map is unique. Assume now that the cube is a pullback and consider a square \[ \stik{1}{ \widetilde{X} \ar{r} \ar{d} \& Y \ar{d} \\ Z \ar{r} \& W } \] We want to show that there is a unique map $\widetilde{X}\to X$ which makes the diagram \[ \stik{1}{ \widetilde{X} \ar{dr} \ar{drr} \ar{ddr} \\ {} \& X \ar{d} \ar{r}\& Y \ar{d} \\ {} \& Z \ar{r} \& W } \] commute. The compositions $YB\circ\widetilde{X}Y$ and $ZC\circ\widetilde{X}Z$ fit in a commutative square \[ \stik{1}{ \widetilde{X} \ar{r} \ar{d} \& B \ar{d} \\ C \ar{r} \& D } \] and so there is a map $\widetilde{X}\to A$ that makes everything commute; since the cube is a pullback this gives us our desired map $\widetilde{X}\to X$. \end{proof} \begin{Corollary} \label{pullbackcubeCor} Let $C$ be a commutative $n$-cube.
Suppose that an $(n-1)$-subcube $C'$ of $C$ is a pullback cube. Then $C$ is a pullback cube iff the cube opposite to $C'$ is a pullback cube. \end{Corollary} \begin{proof} Proven in the same way as \autoref{pullbackcubeLemma}, by induction on $n$. \end{proof} \section{Representations} \label{Representations} In this section we outline an approach to constructing certain representations from our geometric point of view. For simplicity we consider the case $\mathcal{C}=\mathbf{Vect}_{\mathbbm{F}_q}$. Let $V\in\mathcal{C}$, and consider the category $\mathcal{C}_{/V}$. As discussed in \autoref{ProtoAbelian} this is an extended proto-abelian category, and therefore we can use it as input to the Waldhausen construction, with some caveats. For instance, for a simplex $\ordop{n}$ we get the stack of diagrams of the form \[ \stik{1}{ 0 \ar[hook]{r} \& C_{01} \ar[two heads]{d} \ar[hook]{r} \& C_{02}\ar[two heads]{d} \ar[hook]{r} \& \cdots \ar[hook]{r} \& C_{0n}\ar[two heads]{d} \\ {} \& 0 \ar[hook]{r} \& C_{12}\ar[two heads]{d} \ar[hook]{r} \& \cdots \ar[hook]{r} \& C_{1n}\ar[two heads]{d} \\ {} \& {} \& 0 \ar[hook]{r} \& \cdots \ar[hook]{r} \& C_{2n}\ar[two heads]{d}\\ {} \& {} \& {} \& \ddots \& \vdots\ar[two heads]{d} \\ {} \& {} \& {} \& {} \& 0 } \] where in all but the last column the map to $V$ must be the zero map. As a result, this stack has a canonical projection to the Waldhausen construction for $\mathcal{C}$ and $\ordop{n-1}$ (the first $n-1$ columns), and noting that the rest can be generated by the object $C_{0n}$ by forming pushouts, we see that in fact this stack is equivalent to $S_{\ordop{n-1}}^\mathcal{C}\times S_{\ordop{1}}^{\mathcal{C}_{/V}}$. From the above it is obvious that not every map of simplicial sets appearing in the combinatorial Hall algebra construction will be compatible with this Waldhausen construction. However, if we restrict $H_{comb}$ to the subcategory $\mathbb{\Delta_{mod}}$ of $\mathbb{\Delta}$ which has only the maps where the preimage of the top element is always non-empty, then there is no problem. For example, the map $\ord{2}\to\ord{1}$ goes to the correspondence \[ \stik{1}{ {} \& S_{\ordop{2}}^{\mathcal{C}_{/V}} \ar{dl}[above,xshift=-1.5em]{(C_{01},C_{12})} \ar{dr}[above,xshift=0.5em]{C_{02}}\& {} \\ S_{\ordop{1}}^{\mathcal{C}}\times S_{\ordop{1}}^{\mathcal{C}_{/V}} \& {} \& S_{\ordop{1}}^{\mathcal{C}_{/V}} } \] which gives the action. The subcategory $\mathbb{\Delta_{mod}}$ embodies the structure of a module over an algebra in much the same way as $\mathbb{\Delta}$ embodies the structure of an algebra. We will return to this construction in more detail in a future article. \subsection{General construction} Following the ideas of \cite{KapranovDyckerhoff} we would like to study Hall algebras on the level of the category of correspondences (in our case, correspondences of stacks). The first observation is that the underlying structure governing the associativity conditions becomes more transparent if one works in the 2- (or, more precisely, double) category of correspondences $\Corr(\Spaces)$. We extend the Waldhausen construction to (a certain subcategory of) the category of simplicial sets to provide the data of 2-morphisms in this category. We obtain a functor $H_{geo}:\mathbb{\Delta}_+ \rightarrow \Corr(\Spaces)$. We call this object the geometric Hall algebra.
We provide two transfer constructions for $\Corr(\Spaces)$ in \autoref{Transfer}: the one recovering the Ringel-Hall algebra lands in $\mathbf{Vect}$, and the one we use to prove Green's theorem lands in $\Cat$. This gives us two monoidal functors $H:\mathbb{\Delta}_+ \to \mathbf{Vect}$ and $H:\mathbb{\Delta}_+ \to \Cat$. To prove Green's theorem we would like to study the relation between multiplication and comultiplication on the level of $\Corr(\Spaces)$. We do this by extending $H$ so that it gives a bisimplicial system in \autoref{Extension}. \section{Transfer theories} \label{Transfer} In \autoref{geoHall} we describe a system of stacks $H_{geo}$ associated to a category $\mathcal{C}$. In order to recover from it any kind of algebra structure, we need to transfer the higher associative structure from correspondences of stacks to an algebraic setting $\mathcal{A}$, formally a monoidal $\infty$-category with duals. We call such a gadget (following \cite{KapranovDyckerhoff}) a \emph{transfer theory}. A transfer theory $T$ will need to assign to each $n$-cube of commutative correspondences an $n$-cube in our algebraic setting of choice. To begin with, we need an assignment $X\mapsto T(X)$ on objects. Then we need, for any correspondence $X\xleftarrow{f}Z\xrightarrow{g}Y$, a morphism $T(X)\to T(Y)$. Instead we will only require an assignment $T(X)\xleftarrow{T(f)}T(Z)\xrightarrow{T(g)}T(Y)$. Since by assumption $\mathcal{A}$ has duals, we can choose a dual for $T(f)$ to get an actual morphism, but we need not make a consistent choice of duals. For squares, note that correspondences can be considered as $\widetilde{I}$-diagrams for $\widetilde{I}=(0\leftarrow M\rightarrow 1)$. Consider now the 2-category $\widetilde{I}_2$: \begin{equation} \label{CorrSquare2cat} \stik{1}{ (0,0) \& (M,0) \ar{r} \ar{l} \& (1,0) \\ (0,M) \ar[Rightarrow,shorten <=1em,shorten >=1em]{dr}[above,sloped]{\sim}\ar[Rightarrow,shorten <=1em,shorten >=1em]{ur}[above,sloped]{\sim} \ar{u} \ar{d} \& (M,M) \ar{r} \ar{l} \ar{u} \ar{d} \& (1,M)\ar[Rightarrow,shorten <=1em,shorten >=1em]{dl}[above,sloped]{\sim} \ar[Leftarrow,shorten <=1em,shorten >=1em]{ul}[above,sloped]{\sim} \ar{u} \ar{d}\\ (0,1) \& (M,1) \ar{r} \ar{l} \& (1,1) } \end{equation} We want an assignment (compatible with the previous one) from any $\widetilde{I}_2$-diagram in $\Spaces$ to an $\widetilde{I}_2$-diagram in $\mathcal{A}$. Again, if we want to get an actual 2-morphism we need to replace some things with adjoints. Firstly, we need to take the left adjoint of the $(0,1)$ and $(1,0)$ squares, and the double left adjoint of the $(0,0)$ square, to get a diagram of the form \[ \stik{1}{ (0,0) \& (M,0) \ar[Rightarrow,red,shorten <=1em,shorten >=1em]{dl} \ar{r} \ar[leftarrow,red]{l} \& (1,0) \ar[Rightarrow,red,shorten <=1em,shorten >=1em]{dl}\\ (0,M) \ar[leftarrow,red]{u} \ar{d} \& (M,M) \ar[Leftarrow,red,shorten <=1em,shorten >=1em]{dl} \ar{r} \ar[leftarrow,red]{l} \ar[leftarrow,red]{u} \ar{d} \& (1,M)\ar[Rightarrow,shorten <=1em,shorten >=1em]{dl}[above,sloped]{\sim} \ar[leftarrow,red]{u} \ar{d}\\ (0,1) \& (M,1) \ar{r} \ar[leftarrow,red]{l} \& (1,1) } \] In order to compose this we need to be able to invert the lower left morphism, or the other three. In both cases we want to end up with an invertible morphism, so in fact we want to require all of them to be invertible. For the $(0,0)$ square this adds no requirement, since a double adjoint of an invertible morphism is invertible, but for the $(0,1)$ and $(1,0)$ squares this needs to be a requirement on $T$.
Note that, as shown in \autoref{Corr0}, for the associator square the squares in question are pullback squares. Hence it is enough to require $T$ to be a functor from $\Spaces$ to $\mathcal{A}$ which takes pullback squares to squares satisfying the \emph{Beck-Chevalley} condition. (The Beck-Chevalley condition says precisely that the 2-morphism in the square whose sides are replaced by adjoints is invertible.) \begin{Remark} \label{pullbackBC} In \cite{ourSSH} we showed that this is the same as saying that $T$ preserves a generalization of pullback squares. \end{Remark} This can be generalized to higher-dimensional cubes using the notion of pullback cubes from \autoref{PullbackCube}. Using \autoref{Corr0} and \autoref{pullbackBC} we arrive at the following: \begin{Definition} \label{TransferDef} A functor $T:\Spaces\to\mathcal{A}$ is called a \emph{transfer theory} if $T$ preserves generalized pullback cubes. \end{Definition} \subsection{Transfer to \texorpdfstring{$\mathbf{Vect}$}{Vect}} \label{VectTransfer} This construction is based on a functor from the 1-category of correspondences of groupoids to $\mathbf{Vect}$ described in \cite[\S8.2]{KapranovDyckerhoff}. We need to assume that $\mathcal{C}$ is finitary (categories of representations of a simply laced quiver satisfy this assumption). A stack $X$ over $\mathbbm{k}$ is sent to the vector space of finitely supported functions on the set $\pi_0(X(\mathbbm{k}))$ (i.e.\ the isomorphism classes of $X(\mathbbm{k})$), which we denote $\mathcal{F}(X)$. Given a correspondence $X \xleftarrow{s} Y \xrightarrow{p} Z$ we first send it to the correspondence of groupoids $X(\mathbbm{k}) \xleftarrow{s} Y(\mathbbm{k}) \xrightarrow{p} Z(\mathbbm{k})$ and then to the map $\mathcal{F}(X) \to \mathcal{F}(Z)$ given by $p_!s^*$. This assumes some restrictions on $s$ and $p$ (see \cite[\S 2]{Dyckerhoff}) which are always satisfied in the cases we consider. It follows immediately from \cite[Proposition 2.17]{Dyckerhoff} that this assignment is a transfer theory to $\mathbf{Vect}$. Applying this transfer to $H_{geo}$ recovers the usual Hall algebra. \subsection{Transfer to \texorpdfstring{$\dgCat$}{dgCat}} \label{CatTransfer} Since all the stacks appearing in $H_{geo}$ are disjoint unions of global quotients, the assignment $X\mapsto \Sh^{const}(X)$ which sends a stack to the category of constructible sheaves on $X$ makes sense, i.e.\ if $X=\coprod X_i//G_i$ then $\Sh^{const}(X):=\bigoplus \Sh^{const}_{G_i}(X_i)$. The fact that the equivariant sheaves functor satisfies base change is exactly what we need in \autoref{TransferDef} when restricting to the image of $H_{geo}$. It follows from \cite{VaragnoloVasserot} that applying this transfer to $H_{geo}$ recovers the categorification of quantum groups by KLR algebras. \subsection{Transfer to \texorpdfstring{$\LinCat$}{LinCat}} Define a transfer theory by sending a stack $X/\mathbbm{k}$ to the category $\Rep_\mathbbm{C}(X(\mathbbm{k}))$ of finitely supported representations of the groupoid $X(\mathbbm{k})$ in $\mathbf{Vect}_\mathbbm{C}$. In our article \cite{ourGLnBraiding} with Mark Penney we use this transfer in the case $\mathcal{C}=\mathbf{Vect}_{\mathbbm{F}_q}$. This yields a category equivalent to the category $\bigoplus \Rep(GL(n,\mathbbm{F}_q))$ together with the monoidal structure of parabolic induction. Using constructions from \cite{ourGeometricHall2} we use this point of view to recover the braiding constructed in \cite{JoyalStreetGLn}.
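\begin{Remark} In the toy case when the groupoids involved are just finite sets (so that there are no automorphism groups to weight by), the pull-push map $p_!s^*$ of \autoref{VectTransfer} is simply a fibrewise sum. The following Python sketch is our illustration only; the actual transfer on groupoids weights each point by its automorphism group. \begin{verbatim}
from collections import defaultdict

def pull_push(f, s, p, Y):
    # transfer of a correspondence X <-s- Y -p-> Z:
    # (p_! s^* f)(z) = sum of f(s(y)) over the fibre p^{-1}(z);
    # f is a dict on X; s and p are functions on the finite set Y
    g = defaultdict(int)
    for y in Y:
        g[p(y)] += f[s(y)]
    return dict(g)
\end{verbatim} \end{Remark}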
\section{The Waldhausen S-construction} \label{Waldhausen} Let $\mathcal{C}$ be an abelian category. We recall the construction of a simplicial system of spaces associated to $\mathcal{C}$, called the Waldhausen construction. This system of spaces plays a role in defining the multiplication in the Hall algebra of $\mathcal{C}$ and in providing the associator data. Our exposition in this section follows \cite{KapranovDyckerhoff}. Note that the classes of mono- and epimorphisms in $\mathcal{C}$ satisfy the following: \begin{enumerate} \item Any commutative square with monomorphisms as vertical and epimorphisms as horizontal maps is a pullback iff it is a pushout. We will call such squares bicartesian. \item Pullbacks and pushouts of monomorphisms along epimorphisms exist. \end{enumerate} The Waldhausen construction assigns to the category $\mathcal{C}$ a functor $S_{-}\mathcal{C}:\mathbb{\Delta}^{op}\to \Spaces$ as follows: \begin{enumerate} \item Take $X\in\mathbb{\Delta}$. \item Consider $\operatorname{grid}(X):=\Hom(0\to 1,X)$ as a marked category, with the constant maps the marked objects. Note that $\operatorname{grid}(X)$ has two classes of maps, the ``horizontal'' and the ``vertical'', by which we mean maps that are the identity on the 0 or 1 component, respectively. \item Define $S_X\mathcal{C}$ to be the stack of maps from $\operatorname{grid}(X)$ to $\mathcal{C}$ which take the marked objects to $0$, the horizontal maps to monomorphisms and the vertical maps to epimorphisms, and take cartesian squares to cartesian squares. \end{enumerate} \begin{Remark} The last stage can be streamlined by noting that (assuming some restrictions on $X$) $\operatorname{grid}(X)$ has a natural structure of an \emph{extended} proto-abelian category, i.e.\ instead of one zero object it has a ``zero subcategory'', namely the constant maps. Then $S_X\mathcal{C}$ can be considered to be just maps of proto-abelian categories in this wider sense. See \autoref{ProtoAbelian}. \end{Remark} \begin{Example} Take $X=\ordop{1}=0\to 1$, then \[ \operatorname{grid}(X)=\stik{1}{ 00 \ar{r} \& 01 \ar{d} \\ {} \& 11 } \] and so $S_X\mathcal{C}=S_{\ordop{1}}\mathcal{C}$ is the space of objects of $\mathcal{C}$. \end{Example} \begin{Example} Take $X=\ordop{2}=0\to 1\to 2$, then \[ \operatorname{grid}(X)=\stik{1}{ 00 \ar{r} \& 01 \ar{r} \ar{d} \& 02 \ar{d} \\ {} \& 11 \ar{r} \& 12 \ar{d} \\ {} \& {} \& 22 } \] The data of a map $\operatorname{grid}(X)\to\mathcal{C}$ then consists of a square \[ \stik{1}{ C_{01} \ar[hook]{r} \ar[two heads]{d} \& C_{02} \ar[two heads]{d}\\ C_{11}=0 \ar[hook]{r} \& C_{12} } \] which must be cartesian and therefore also cocartesian. This just means that $C_{01}\to C_{02} \to C_{12}$ is an exact sequence. In all, $S_X\mathcal{C}=S_{\ordop{2}}\mathcal{C}$ is the stack of exact sequences in $\mathcal{C}$. \end{Example} It is now easy to guess the general shape of $S_X\mathcal{C}$. Namely, for $X=\ordop{n}$ we get the diagrams of the form \[ \stik{1}{ 0 \ar[hook]{r} \& C_{01} \ar[two heads]{d} \ar[hook]{r} \& C_{02}\ar[two heads]{d} \ar[hook]{r} \& \cdots \ar[hook]{r} \& C_{0n}\ar[two heads]{d} \\ {} \& 0 \ar[hook]{r} \& C_{12}\ar[two heads]{d} \ar[hook]{r} \& \cdots \ar[hook]{r} \& C_{1n}\ar[two heads]{d} \\ {} \& {} \& 0 \ar[hook]{r} \& \cdots \ar[hook]{r} \& C_{2n}\ar[two heads]{d}\\ {} \& {} \& {} \& \ddots \& \vdots\ar[two heads]{d} \\ {} \& {} \& {} \& {} \& 0 } \] where every square is bicartesian.
It is shown in \cite[Lemma 2.4.9]{KapranovDyckerhoff} that the groupoid of diagrams of this shape is equivalent to the groupoid of flags of length $n$, providing the connection to the classical Waldhausen construction. Their argument can be generalized to stacks in a straightforward manner. In the following section we will extend the Waldhausen construction to a functor from a certain subcategory of $\sset$ to $\Spaces$.
\section{Introduction} \label{Intro} Recently, Nisar et al.~\cite{Nisar-Saiful} introduced and studied various properties of the $\mathtt{k}$-Struve function $\mathtt{S}_{\nu,c}^{\mathtt{k}}$ defined by \begin{equation}\label{k-Struve} \mathtt{S}_{\nu,c}^{\mathtt{k}}(x):=\sum_{r=0}^{\infty}\frac{(-c)^r} {\Gamma_{\mathtt{k}}(r\mathtt{k}+\nu+\frac{3\mathtt{k}}{2})\Gamma(r+\frac{3}{2})} \left(\frac{x}{2}\right)^{2r+\frac{\nu}{\mathtt{k}}+1}, \end{equation} where $c,\nu \in \mathbb{C}$ and $\nu>-\frac{3}{2}\mathtt{k}$. The generalized Wright hypergeometric function ${}_{p}\psi _{q}(z)$ is given by the series \begin{equation} {}_{p}\psi _{q}(z)={}_{p}\psi _{q}\left[ \begin{array}{c} (a_{i},\alpha _{i})_{1,p} \\ (b_{j},\beta _{j})_{1,q} \end{array} \bigg|z\right] =\sum_{k=0}^{\infty }\frac{\prod_{i=1}^{p}\Gamma (a_{i}+\alpha _{i}k)}{\prod_{j=1}^{q}\Gamma (b_{j}+\beta _{j}k)}\frac{z^{k}}{k!}, \label{Fox-Wright} \end{equation} where $a_{i},b_{j}\in \mathbb{C}$ and $\alpha _{i},\beta _{j}\in \mathbb{R}$ ($i=1,2,\ldots ,p$; $j=1,2,\ldots ,q$). The asymptotic behavior of this function for large values of the argument $z\in {\mathbb{C}}$ was studied in \cite{CFox}, and under the condition \begin{equation} \sum_{j=1}^{q}\beta _{j}-\sum_{i=1}^{p}\alpha _{i}>-1 \label{eqn-5-Struve} \end{equation} in \cite{Wright-2,Wright-3}. Properties of this generalized Wright function were investigated in \cite{Kilbas} (see also \cite{Kilbas-itsf,Kilbas-frac}). In particular, it was proved in \cite{Kilbas} that ${}_{p}\psi _{q}(z)$, $z\in {\mathbb{C}}$, is an entire function under the condition (\ref{eqn-5-Struve}). In \cite{Nair-1}, Nair introduced a pathway fractional integral operator, which was developed further by Mathai and Haubold \cite{Mathai-Habold-1,Mathai-Habold-2} (see also \cite{Mathai-pathway}). It is defined as follows: let $f\left( x\right) \in L\left( a,b\right)$, $\eta \in \mathbb{C}$, $\Re\left( \eta \right) >0$, $a>0$, and let the pathway parameter satisfy $\alpha <1$ (cf.\ \cite{Praveen-pathway}); then \begin{equation} \left( P_{0+}^{\left( \eta ,\alpha \right) }f\right) \left( x\right) =x^{\eta }\int\limits_{0}^{\left[ \frac{x}{a\left( 1-\alpha \right)}\right] }\left[1- \frac{a\left( 1-\alpha \right) t}{x}\right] ^{\frac{\eta }{1-\alpha }}f\left( t\right) dt. \label{eqn-path-1} \end{equation} For a real scalar $\alpha$, the pathway model for scalar random variables is represented by the following probability density function (p.d.f.): \begin{equation} f\left( x\right) =c\left\vert x\right\vert ^{\gamma -1}\left[ 1-a\left( 1-\alpha \right) \left\vert x\right\vert ^{\delta }\right] ^{\frac{\beta }{1-\alpha }}, \label{eqn-path-2} \end{equation} provided that $-\infty <x<\infty$, $\delta >0$, $\beta \geq 0$, $1-a\left( 1-\alpha \right) \left\vert x\right\vert ^{\delta }>0$ and $\gamma >0$, where $c$ is the normalizing constant and $\alpha$ is called the pathway parameter \cite{Nair-1}. Note that for $\alpha <1$ it is a finite range density with $1-a\left( 1-\alpha \right) \left\vert x\right\vert ^{\delta }>0$, and (\ref{eqn-path-2}) remains in the extended generalized type-1 beta family.
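Both regimes of the pathway parameter are easy to explore numerically. The following Python sketch (our illustration; the parameter values are arbitrary) evaluates the unnormalized density (\ref{eqn-path-2}); for $\alpha>1$ the bracket becomes $1+a(\alpha-1)|x|^{\delta}$, giving the type-2 regime discussed below, while $\alpha=1$ is excluded since the exponent $\beta/(1-\alpha)$ degenerates. \begin{verbatim}
import numpy as np

def pathway_pdf(x, gamma_, delta, beta, a, alpha, c=1.0):
    # unnormalized pathway density (2): alpha < 1 gives finite
    # support (type-1 regime), alpha > 1 gives polynomial tails
    # (type-2 regime); alpha = 1 is excluded
    x = np.asarray(x, dtype=float)
    base = 1.0 - a * (1.0 - alpha) * np.abs(x)**delta
    dens = c * np.abs(x)**(gamma_ - 1.0) \
             * np.maximum(base, 0.0)**(beta / (1.0 - alpha))
    return np.where(base > 0.0, dens, 0.0)

xs = np.linspace(-2.0, 2.0, 9)
print(pathway_pdf(xs, gamma_=2.0, delta=2.0, beta=1.0, a=1.0, alpha=0.5))
\end{verbatim}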
The pathway density in (\ref{eqn-path-2}), for $\alpha < 1$, includes the extended type-1 beta density, the triangular density, the uniform density and many other p.d.f.'s \cite{Praveen-pathway}. For instance, $\alpha >1$ gives \begin{equation} f\left( x\right) =c\left\vert x\right\vert ^{\gamma -1}\left[ 1+a\left( \alpha-1 \right) \left\vert x\right\vert ^{\delta }\right] ^{-\frac{\beta }{\alpha-1 }}, \label{eqn-path-3} \end{equation} provided that $-\infty <x<\infty$, $\delta >0$, $\beta \geq 0$ and $a >0$, which is the extended generalized type-2 beta model for real $x$. It includes the type-2 beta density, the F density, the Student-$t$ density, the Cauchy density and many more. For more details about the pathway integral operator, one can refer to \cite{Praveen-pathway, Purohit}. The purpose of this work is to investigate the composition formula of the integral transform operator due to Nair with the $\mathtt{k}$-Struve function inserted, the result being expressed in terms of the generalized Wright hypergeometric function. \section{Pathway Fractional Integration of the $\mathtt{k}$-Struve function} The results given in this section are based on the preliminary assertion given by the composition formula of the pathway fractional integral (\ref{eqn-path-1}) with a power function. \begin{lemma} (Agarwal~\cite{Praveen-pathway}, Lemma 1) Let $\eta \in \mathbb{C}$, $\Re\left( \eta \right) >0$, $\beta \in \mathbb{C}$ and $\alpha <1$. If $\Re\left( \beta \right) >0$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$, then \begin{equation} \left\{ P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\beta -1}\right] \right\} \left( x\right) =\frac{x^{\eta +\beta }}{\left[ a\left( 1-\alpha \right) \right] ^{\beta }}\frac{\Gamma \left( \beta \right) \Gamma \left( 1+\frac{\eta }{1-\alpha }\right) }{\Gamma \left( 1+\frac{\eta }{1-\alpha }+\beta \right) }. \label{lemma1} \end{equation} \end{lemma} The pathway fractional integration of the $\mathtt{k}$-Struve function is given by the following theorem.
\end{lemma} \begin{theorem}\label{Th1} Let $\eta ,\rho ,\nu, c \in C$ and $\alpha <1$ be such that $\Re\left( \eta \right) >0,\Re\left( \rho \right) >0, \nu>-\frac{3}{2}\mathtt{k}$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1 $ then the following formula hold tru \begin{equation}\label{eqn1-th1} \begin{array}{c} P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\mathtt{S}_{\nu,c}^{\mathtt{k}}(t)\right] \left( x\right) =x^{\eta}\left(\frac{x}{a(1-\alpha)}\right)^{\rho+\frac{\nu}{\mathtt{k}}+1}\frac{\Gamma\left(1+\frac{\eta}{1-\alpha}\right)} {\mathtt{k}^{\frac{\nu}{\mathtt{k}}+\frac{1}{2}}2^{\frac{\nu}{\mathtt{k}}+1}}\\ \times _{2}\Psi _{3}\left[ \begin{array}{ccc} \left( \rho +\frac{\nu}{\mathtt{k}}+1,2\right) , & \left( 1,1\right); & \\ \left( \rho +\frac{\nu}{\mathtt{k}}+\frac{\eta }{1-\alpha }+2,2\right) , & \left(\frac{\nu}{\mathtt{k}}+\frac{3}{2},1\right) , & \left( 3/2,1\right \end{array ;-\frac{cx^{2}}{4\mathtt{k}\left[ a^{2}\left( 1-\alpha \right)^{2} \right]}\right] \end{array} \end{equation} \end{theorem} \begin{proof} Applying the pathway operator defined in \eqref{eqn-path-1} to \eqref{k-Struve}, and changing the order of integration and summation, we ge \begin{align*} \left( P_{0+}^{\left( \eta ,\alpha \right)}\left[ t^{\rho-1}\mathtt{S}_{\nu,c}^{\mathtt{k}}(t)\right] \right) \left( x\right)&=P_{0+}^{\left( \eta ,\alpha \right)}\left[t^{\rho-1}\sum_{r=0}^{\infty}\frac{(-c)^{r}\left(\frac{t}{2}\right)^{2r+\frac{\nu}{\mathtt{k}}+1}}{\Gamma_{\mathtt{k}}\left(r\mathtt{k}+\nu+\frac{3}{2}\mathtt{k}\right)\Gamma\left(r+\frac{3}{2}\right)}\right](x)\\ &=\sum_{r=0}^{\infty}\frac{(-c)^{r}\left(\frac{1}{2}\right)^{2r+\frac{\nu}{\mathtt{k}}+1}} {\Gamma_{\mathtt{k}}\left(r\mathtt{k}+\nu+\frac{3}{2}\mathtt{k}\right)\Gamma\left(r+\frac{3}{2}\right)} P_{0+}^{\left( \eta ,\alpha \right)}\left(t^{\rho+2r+\frac{\nu}{\mathtt{k}}}\right)(x) \end{align*} Using Lemma $(\ref{lemma1})$, we get \begin{align*} &&=\sum_{r=0}^{\infty}\frac{(-c)^{r}\left(\frac{1}{2}\right)^{2r+\frac{\nu}{\mathtt{k}}+1}} {\Gamma_{\mathtt{k}}\left(r\mathtt{k}+\nu+\frac{3}{2}\mathtt{k}\right)\Gamma\left(r+\frac{3}{2}\right)} \frac{x^{\eta+\rho+2r+\frac{\nu}{\mathtt{k}}+1}}{[a(1-\alpha)]^{\rho+2r+\frac{\nu}{\mathtt{k}}+1}}\\ &&\times\frac{\Gamma\left(\rho+2r+\frac{\nu}{\mathtt{k}}+1\right)\Gamma\left(1+\frac{\eta}{1-\alpha}\right)}{\Gamma\left(\frac{\eta}{1-\alpha}+\rho+2r+\frac{\nu}{\mathtt{k}}+2\right)} \end{align*} Now using the relation $\Gamma_{\mathtt{k}}\left(\gamma\right)=\mathtt{k}^{\frac{\gamma}{\mathtt{k}}-1}\Gamma\left(\frac{\gamma}{\mathtt{k}}\right)$, we get \begin{align*} &=\frac{x^{\eta+\rho+\frac{\nu}{\mathtt{k}}+1}\Gamma\left(1+\frac{\eta}{1-\alpha}\right)}{\left[a(1-\alpha)\right]^{\rho+\frac{\nu}{\mathtt{k}}+1}2^{\frac{\nu}{\mathtt{k}}+1}}\\ &\times \sum_{r=0}^{\infty}\frac{(-c)^{r}x^{2r}} {\mathtt{k}^{r+\frac{\nu}{\mathtt{k}}+\frac{1}{2}}\Gamma\left(r+\frac{\nu}{\mathtt{k}}+\frac{3}{2}\right)\Gamma\left(r+\frac{3}{2}\right)4^{r}[a(1-\alpha)]^{2r}}\\ &\times\frac{\Gamma\left(\rho+\frac{\nu}{\mathtt{k}}+1+2r\right)}{\Gamma(\frac{\eta}{1-\alpha}+\rho+2r+\frac{\nu}{\mathtt{k}}+2)}. \end{align*} In view of $(\ref{Fox-Wright})$, we arrived the desired result. 
\begin{corollary} If we take $\mathtt{k}=1$ in Theorem \ref{Th1}, then we get the pathway integral involving the classical Struve function: \begin{equation}\label{eqn1-cor1} \begin{array}{c} P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\mathtt{S}_{\nu,c}^{1}(t)\right] \left( x\right) =x^{\eta}\left(\frac{x}{a(1-\alpha)}\right)^{\rho+\nu+1} \frac{\Gamma\left(1+\frac{\eta}{1-\alpha}\right)}{2^{\nu+1}}\\ \times {}_{2}\Psi _{3}\left[ \begin{array}{ccc} \left( \rho +\nu+1,2\right) , & \left( 1,1\right); & \\ \left( \rho +\nu+\frac{\eta }{1-\alpha }+2,2\right) , & \left(\nu+\frac{3}{2},1\right) , & \left( \frac{3}{2},1\right) \end{array}; -\frac{cx^{2}}{4\left[ a^{2}\left( 1-\alpha \right)^{2} \right]}\right] \end{array} \end{equation} \end{corollary} Now we will give the relation between trigonometric functions and the $\mathtt{k}$-Struve function. By taking $\nu=\mathtt{k}/2$ in (\ref{k-Struve}) (cf.\ (3.10) of \cite{Nisar-Saiful}), we get the relation between the cosine function and $\mathtt{k}$-Struve functions as \begin{equation}\label{cos} 1-\cos\left(\frac{\alpha x}{\sqrt{\mathtt{k}}}\right)= \alpha^{2}\sqrt{\frac{\pi x}{2}}~\mathtt{S}_{\frac{\mathtt{k}}{2}, \alpha^2}^{\mathtt{k}} (x). \end{equation} Similarly, the relation \begin{equation}\label{cosh} \cosh\left(\frac{\alpha x}{\sqrt{\mathtt{k}}}\right)-1= \alpha^{2}\sqrt{\frac{\pi x}{2}}~\mathtt{S}_{\frac{\mathtt{k}}{2}, -\alpha^2}^{\mathtt{k}} (x) \end{equation} can be derived in the same way (cf.\ (3.11) of \cite{Nisar-Saiful}). Also, by taking $\nu=-\frac{\mathtt{k}}{2}$ in (\ref{k-Struve}), we obtain the following: \begin{equation}\label{sin} \sin\left(\frac{\alpha x}{\sqrt{\mathtt{k}}}\right)=\alpha\sqrt{\frac{\pi x}{2\mathtt{k}}}\,\mathtt{S}_{-\frac{\mathtt{k}}{2},\alpha^{2}}^{\mathtt{k}}\left(x\right), \end{equation} \begin{equation}\label{sinh} \sinh\left(\frac{\alpha x}{\sqrt{\mathtt{k}}}\right)=\alpha\sqrt{\frac{\pi x}{2\mathtt{k}}}\,\mathtt{S}_{-\frac{\mathtt{k}}{2},-\alpha^{2}}^{\mathtt{k}}\left(x\right). \end{equation}
\section{Pathway fractional integration of the cosine, hyperbolic cosine, sine and hyperbolic sine functions} \begin{theorem}\label{Th2} Let $\eta ,\rho \in \mathbb{C}$ and $\alpha <1$ be such that $\Re\left( \eta \right) >0$, $\Re\left( \rho \right) >0$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$. Then the following formula holds true: \begin{equation}\label{eqn1-th2} \begin{array}{c} P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\left(1-\cos\left(\frac{\gamma t}{\sqrt{\mathtt{k}}}\right)\right)\right] \left( x\right) =\frac{\sqrt{\pi}\,\gamma^{2}}{\mathtt{k}}\frac{ x^{\eta+\rho+2}}{4[a(1-\alpha)]^{\rho+2}}\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\ \times {}_{2}\Psi _{3}\left[ \begin{array}{ccc} \left( \rho +2,2\right) , & \left( 1,1\right); & \\ \left( \rho +3+\frac{\eta }{1-\alpha },2\right) , & \left(2,1\right) , & \left( \frac{3}{2},1\right) \end{array}; -\frac{\gamma^{2}x^{2}}{4\mathtt{k}\left[ a^{2}\left( 1-\alpha \right)^{2} \right]}\right] \end{array} \end{equation} \end{theorem} \begin{proof} Applying the pathway operator defined in \eqref{eqn-path-1} to \eqref{cos} and changing the order of integration and summation, we get \begin{align*} P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\left(1-\cos\left(\frac{\gamma t}{\sqrt{\mathtt{k}}}\right)\right)\right] &=\left( P_{0+}^{\left( \eta ,\alpha \right)}\left[ t^{\rho-1}\gamma^{2}\sqrt{\frac{\pi t}{2}}\,\mathtt{S}_{\frac{\mathtt{k}}{2},\gamma^{2}}^{\mathtt{k}}(t)\right] \right)\left( x\right)\\ &=\sqrt{\frac{\pi}{2}}\,\gamma^{2}\sum_{r=0}^{\infty}\frac{(-\gamma^{2})^{r}\left(\frac{1}{2}\right)^{2r+\frac{3}{2}}}{\Gamma_{\mathtt{k}}\left(r\mathtt{k}+2\mathtt{k}\right)\Gamma\left(r+\frac{3}{2}\right)}P_{0+}^{\left( \eta ,\alpha \right)}\left[t^{\rho+2r+1}\right](x). \end{align*} Using Lemma \ref{lemma1}, we get \begin{align*} &=\sqrt{\pi}\,\gamma^{2}\sum_{r=0}^{\infty}\frac{(-\gamma^{2})^{r}(\frac{1}{2})^{2r+2}}{\Gamma_{\mathtt{k}}(r\mathtt{k}+2\mathtt{k})\Gamma(r+\frac{3}{2})}\frac{x^{\eta+\rho+2+2r}}{[a(1-\alpha)]^{\rho+2+2r}}\\ &\quad\times \frac{\Gamma(\rho+2r+2)\Gamma(1+\frac{\eta}{1-\alpha})}{\Gamma(1+\frac{\eta}{1-\alpha}+\rho+2+2r)}. \end{align*} Now using the relation $\Gamma_{\mathtt{k}}\left(\gamma\right)=\mathtt{k}^{\frac{\gamma}{\mathtt{k}}-1}\Gamma\left(\frac{\gamma}{\mathtt{k}}\right)$, we get \begin{align*} &=\frac{\sqrt{\pi}\,\gamma^{2}}{\mathtt{k}}\frac{ x^{\eta+\rho+2}}{4[a(1-\alpha)]^{\rho+2}}\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\ &\quad\times\sum_{r=0}^{\infty}\frac{(-\gamma^{2})^{r}(\frac{1}{2})^{2r}\,x^{2r}\,\Gamma\left(\rho+2+2r\right)}{\Gamma(r+2)\Gamma(r+\frac{3}{2})\,\mathtt{k}^{r}[a(1-\alpha)]^{2r}\,\Gamma(1+\frac{\eta}{1-\alpha}+\rho+2+2r)}. \end{align*} In view of (\ref{Fox-Wright}), we arrive at the desired result. \end{proof}
\begin{corollary}\label{Cor2} If we take $\mathtt{k}=1$ in Theorem \ref{Th2}, then we get the pathway integral involving the classical cosine function: let $\eta ,\rho \in \mathbb{C}$ and $\alpha <1$ be such that $\Re\left( \eta \right) >0$, $\Re\left( \rho \right) >0$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$; then \begin{equation}\label{eqn1-Cor2} \begin{array}{c} P_{0+}^{\left( \eta ,\alpha \right)}\left[t^{\rho-1}\left(1-\cos\left(\gamma t\right)\right)\right]\left( x\right) =\sqrt{\pi}\,\gamma^{2}\frac{x^{\eta+\rho+2}}{4[a(1-\alpha)]^{\rho+2}}\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\ \times {}_{2}\Psi _{3}\left[ \begin{array}{ccc} \left( \rho +2,2\right) , & \left( 1,1\right); & \\ \left( \rho +3+\frac{\eta }{1-\alpha },2\right) , & \left(2,1\right) , & \left( \frac{3}{2},1\right) \end{array}; -\frac{\gamma^{2}x^{2}}{4\left[ a^{2}\left( 1-\alpha \right)^{2} \right]}\right] \end{array} \end{equation} \end{corollary} \begin{theorem}\label{Th3} Let $\eta ,\rho \in \mathbb{C}$ and $\alpha <1$ be such that $\Re\left( \eta \right) >0$, $\Re\left( \rho \right) >0$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$. Then the following formula holds true: \begin{equation}\label{eqn1-th3} \begin{array}{c} P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\left(\cosh\left(\frac{\gamma t}{\sqrt{\mathtt{k}}}\right)-1\right)\right] \left( x\right) =\frac{\sqrt{\pi}\,\gamma^{2}}{\mathtt{k}}\frac{ x^{\eta+\rho+2}}{4[a(1-\alpha)]^{\rho+2}}\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\ \times {}_{2}\Psi _{3}\left[ \begin{array}{ccc} \left( \rho +2,2\right) , & \left( 1,1\right); & \\ \left( \rho +3+\frac{\eta }{1-\alpha },2\right) , & \left(2,1\right) , & \left( \frac{3}{2},1\right) \end{array}; \frac{\gamma^{2}x^{2}}{4\mathtt{k}\left[ a^{2}\left( 1-\alpha \right)^{2} \right]}\right] \end{array} \end{equation} \end{theorem} The proof is analogous to that of Theorem \ref{Th2}, using the relation (\ref{cosh}) in place of (\ref{cos}). \begin{corollary}\label{cor3} Setting $\mathtt{k}=1$ in Theorem \ref{Th3}, we get the following: let $\eta ,\rho \in \mathbb{C}$ and $\alpha <1$ be such that $\Re\left( \eta \right) >0$, $\Re\left( \rho \right) >0$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$; then \begin{equation}\label{eqn1-Cor3} \begin{array}{c} P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\left(\cosh\left(\gamma t\right)-1\right)\right] \left( x\right) =\sqrt{\pi}\,\gamma^{2}\frac{ x^{\eta+\rho+2}}{4[a(1-\alpha)]^{\rho+2}}\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\ \times {}_{2}\Psi _{3}\left[ \begin{array}{ccc} \left( \rho +2,2\right) , & \left( 1,1\right); & \\ \left( \rho +3+\frac{\eta }{1-\alpha },2\right) , & \left(2,1\right) , & \left( \frac{3}{2},1\right) \end{array}; \frac{\gamma^{2}x^{2}}{4\left[ a^{2}\left( 1-\alpha \right)^{2} \right]}\right] \end{array} \end{equation} \end{corollary} \begin{theorem}\label{Th4} Let $\eta ,\rho \in \mathbb{C}$ and $\alpha <1$ be such that $\Re\left( \eta \right) >0$, $\Re\left( \rho \right) >0$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$. Then the following formula holds true: \begin{equation}\label{eqn1-th4} \begin{array}{c} P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\sin\left(\frac{\gamma t}{\sqrt{\mathtt{k}}}\right)\right] \left( x\right) =\gamma\sqrt{\frac{\pi}{\mathtt{k}}}\frac{x^{\rho+\eta+1}}{2\left[a(1-\alpha)\right]^{\rho+1}}\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\ \times {}_{1}\Psi _{2}\left[ \begin{array}{cc} \left( \rho +1,2\right) ; & \\ \left( \rho +\frac{\eta }{1-\alpha }+2,2\right) , & \left(\frac{3}{2},1\right) \end{array}; \frac{-\gamma^{2}x^{2}}{4\mathtt{k}\left[ a^{2}\left( 1-\alpha \right)^{2} \right]}\right] \end{array} \end{equation} \end{theorem}
\begin{proof} Applying the pathway operator defined in \eqref{eqn-path-1} to \eqref{sin} and changing the order of integration and summation, we get \begin{align*} P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\sin\left(\frac{\gamma t}{\sqrt{\mathtt{k}}}\right)\right](x) &=\left( P_{0+}^{\left( \eta ,\alpha \right)}\left[ t^{\rho-1}\gamma\sqrt{\frac{\pi t}{2\mathtt{k}}}\,\mathtt{S}_{-\frac{\mathtt{k}}{2},\gamma^{2}}^{\mathtt{k}}(t)\right] \right)\left( x\right)\\ &=\gamma\sqrt{\frac{\pi}{2\mathtt{k}}}\,P_{0+}^{\left( \eta ,\alpha \right)}\left[\sum_{r=0}^{\infty}\frac{(-\gamma^{2})^{r}\left(\frac{1}{2}\right)^{2r+\frac{1}{2}}t^{\rho+2r}}{\Gamma_{\mathtt{k}}\left(r\mathtt{k}+\mathtt{k}\right)\Gamma\left(r+\frac{3}{2}\right)}\right](x)\\ &=\gamma\sqrt{\frac{\pi}{2\mathtt{k}}}\sum_{r=0}^{\infty}\frac{(-\gamma^{2})^{r}\left(\frac{1}{2}\right)^{2r+\frac{1}{2}}}{\Gamma_{\mathtt{k}}\left(r\mathtt{k}+\mathtt{k}\right)\Gamma\left(r+\frac{3}{2}\right)}P_{0+}^{\left( \eta ,\alpha \right)}\left[t^{(\rho+2r+1)-1}\right](x), \end{align*} where the factor $\sqrt{t}$ coming from \eqref{sin} raises the power of $t$ by $\frac{1}{2}$. Using Lemma \ref{lemma1} and the relation $\Gamma_{\mathtt{k}}\left(\gamma\right)=\mathtt{k}^{\frac{\gamma}{\mathtt{k}}-1}\Gamma\left(\frac{\gamma}{\mathtt{k}}\right)$, we get \begin{align*} &=\gamma\sqrt{\frac{\pi}{\mathtt{k}}}\frac{x^{\rho+\eta+1}}{2[a(1-\alpha)]^{\rho+1}}\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\ &\quad\times \sum_{r=0}^{\infty}\frac{(-\gamma^{2})^{r}\left(\frac{1}{2}\right)^{2r}x^{2r}\,\Gamma(\rho+1+2r)}{\Gamma\left(r+1\right)\Gamma\left(r+\frac{3}{2}\right)\Gamma\left(\rho+\frac{\eta}{1-\alpha}+2+2r\right)\mathtt{k}^{r}[a(1-\alpha)]^{2r}}. \end{align*} In view of (\ref{Fox-Wright}), we arrive at the desired result. \end{proof}
\begin{corollary}\label{Cor4} If we take $\mathtt{k}=1$ in Theorem \ref{Th4}, then we get the following: let $\eta ,\rho \in \mathbb{C}$ and $\alpha <1$ be such that $\Re\left( \eta \right) >0$, $\Re\left( \rho \right) >0$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$; then \begin{equation}\label{eqn1-Cor4} \begin{array}{c} P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\sin\left(\gamma t\right)\right] \left( x\right) =\gamma\sqrt{\pi}\frac{x^{\rho+\eta+1}}{2\left[a(1-\alpha)\right]^{\rho+1}}\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\ \times {}_{1}\Psi _{2}\left[ \begin{array}{cc} \left( \rho +1,2\right) ; & \\ \left( \rho +\frac{\eta }{1-\alpha }+2,2\right) , & \left(\frac{3}{2},1\right) \end{array}; \frac{-\gamma^{2}x^{2}}{4\left[ a^{2}\left( 1-\alpha \right)^{2} \right]}\right] \end{array} \end{equation} \end{corollary} \begin{theorem}\label{Th5} Let $\eta ,\rho \in \mathbb{C}$ and $\alpha <1$ be such that $\Re\left( \eta \right) >0$, $\Re\left( \rho \right) >0$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$. Then the following formula holds true: \begin{equation}\label{eqn1-th5} \begin{array}{c} P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\sinh\left(\frac{\gamma t}{\sqrt{\mathtt{k}}}\right)\right] \left( x\right) =\gamma\sqrt{\frac{\pi}{\mathtt{k}}}\frac{x^{\rho+\eta+1}}{2\left[a(1-\alpha)\right]^{\rho+1}}\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\ \times {}_{1}\Psi _{2}\left[ \begin{array}{cc} \left( \rho +1,2\right) ; & \\ \left( \rho +\frac{\eta }{1-\alpha }+2,2\right) , & \left(\frac{3}{2},1\right) \end{array}; \frac{\gamma^{2}x^{2}}{4\mathtt{k}\left[ a^{2}\left( 1-\alpha \right)^{2} \right]}\right] \end{array} \end{equation} \end{theorem} The proof is analogous to that of Theorem \ref{Th4}, using the relation (\ref{sinh}) in place of (\ref{sin}). \begin{corollary}\label{Cor5} If we take $\mathtt{k}=1$ in Theorem \ref{Th5}, then we get the following: let $\eta ,\rho \in \mathbb{C}$ and $\alpha <1$ be such that $\Re\left( \eta \right) >0$, $\Re\left( \rho \right) >0$ and $\Re\left( \frac{\eta }{1-\alpha }\right) >-1$; then \begin{equation}\label{eqn1-Cor5} \begin{array}{c} P_{0+}^{\left( \eta ,\alpha \right) }\left[ t^{\rho-1}\sinh\left(\gamma t\right)\right] \left( x\right) =\gamma\sqrt{\pi}\frac{x^{\rho+\eta+1}}{2\left[a(1-\alpha)\right]^{\rho+1}}\Gamma\left(1+\frac{\eta}{1-\alpha}\right)\\ \times {}_{1}\Psi _{2}\left[ \begin{array}{cc} \left( \rho +1,2\right) ; & \\ \left( \rho +\frac{\eta }{1-\alpha }+2,2\right) , & \left(\frac{3}{2},1\right) \end{array}; \frac{\gamma^{2}x^{2}}{4\left[ a^{2}\left( 1-\alpha \right)^{2} \right]}\right] \end{array} \end{equation} \end{corollary}
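As a quick numerical sanity check of the series (\ref{k-Struve}) and of the relations (\ref{cos}) and (\ref{sin}), one can truncate the series and compare both sides in Python. The sketch below is our illustration (the parameter values are arbitrary); it uses the relation $\Gamma_{\mathtt{k}}(\gamma)=\mathtt{k}^{\gamma/\mathtt{k}-1}\Gamma(\gamma/\mathtt{k})$ employed in the proofs above. \begin{verbatim}
import math

def gamma_k(x, k):
    # k-gamma function: Gamma_k(x) = k**(x/k - 1) * Gamma(x/k)
    return k**(x / k - 1.0) * math.gamma(x / k)

def k_struve(x, nu, c, k, terms=40):
    # truncated series (1) for the k-Struve function
    return sum(
        (-c)**r
        / (gamma_k(r * k + nu + 1.5 * k, k) * math.gamma(r + 1.5))
        * (x / 2.0)**(2 * r + nu / k + 1)
        for r in range(terms))

a, x, k = 0.7, 1.3, 2.0
# relation (sin): both printed values should agree
print(math.sin(a * x / math.sqrt(k)),
      a * math.sqrt(math.pi * x / (2 * k)) * k_struve(x, -k / 2, a**2, k))
# relation (cos): both printed values should agree
print(1.0 - math.cos(a * x / math.sqrt(k)),
      a**2 * math.sqrt(math.pi * x / 2) * k_struve(x, k / 2, a**2, k))
\end{verbatim}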
\section{Efficiency of the result} First of all, we must notice here that, independently of the temperature, the mixing time is at least of order $n \ln(n)$. This is because getting close to the stationary distribution requires that every vertex be updated at least once, and, as $n\to \infty$, of order $n\ln(n)$ Markov chain steps are needed for the probability of updating each vertex at least once to tend to 1. Let us now show that it is impossible to prove fast mixing while ignoring the temperature. In other words, there are a graph and a temperature that force our Markov chain to mix in exponential time. Here we consider configurations on the complete graph. Let $S \subset \Omega$ be a subset of the Markov chain state space $\Omega$, $S^c = \Omega \setminus S$, and for $A,B \subset \Omega$ $$ Q(A,B) = \sum_{a\in A, b \in B}\pi(a)P_{a,b},$$ where $\pi$ is the stationary distribution. We call $\Phi(S)$ the \textit{bottleneck ratio} of the set $S$, where $$\Phi(S) = \frac{Q(S, S^c)}{\pi(S)},\ \pi(S) = \sum_{\sigma \in S}\pi(\sigma)$$ and $\pi(S) < 1/2$. Then there is the following lower bound for the mixing time (see [1] for details): \begin{equation} t_{mix}(\varepsilon) \geq \frac{\frac{1}{2} - \varepsilon}{\Phi(S)}, \end{equation} which holds for all such $S \subset \Omega$. We consider $\varepsilon$ here as a constant less than one half. Hence, to produce an exponential lower bound for $t_{mix}$, all we need is to find an appropriate $S$. Intuitively, $S$ must be a set of states which is hard to escape from. The following example of $S$ is not the optimal one (it is not the narrowest place of the graph), but it will be enough for our purposes. Let us take $S$ to be the set consisting of just one configuration, the one with all values equal to 0; we call this state $\hat{0}$. Then $$\Phi(S) = \frac{Q(S, S^c)}{\pi(S)} = \frac{\pi(\hat{0})\sum \limits_{\sigma \in \Omega \setminus \{\hat{0}\}}P_{\hat{0}, \sigma}}{\pi(\hat{0})} = n\left(\frac{1}{n} \sum \limits_{k = 1}^m e^{-\beta k^2(n-1)} \right ) Z^{-1}(\beta) \leq me^{-\beta(n-1)}Z^{-1}(\beta), $$ and since $Z(\beta) > e^{-\beta \cdot 0} = 1$, \begin{equation} \Phi(S) \leq me^{-\beta(n-1)}. \end{equation} Hence, if, for example, $\beta = 1$ is fixed, then (8) and (9) give us an exponential lower bound for the mixing time. Thus, the complete graph is an example which shows that it may sometimes take a very long time to get close to the stationary distribution. \section{Simulations} \subsection{Monotone perfect Markov Chain Monte Carlo} In this section we compare the theoretical result with actual simulations. Of course, for simulation one can just run the Glauber dynamics and use the bounds on the mixing time from Theorem~1 or Corollary~1 to indicate the simulation stopping time. However, if the matrix $\mathbb{V}$ has some structure, it appears to be possible to construct a monotone perfect Markov Chain Monte Carlo (MCMC) simulation which produces perfect sampling and has a natural stopping rule. Our construction is based on the general recommendations given in \cite{ProppWilson}. Towards this end, under the coupling described by equation (1), we need to show that for any two configurations $\sigma$ and $\tau$ such that $\sigma \preceq \tau$ we have $X_{\sigma}^t(U,w) \preceq X_{\tau}^t(U,w)$, where the order $\preceq$ means that $\sigma(v) \leq \tau(v)$ for all vertices $v \in V$.
Unfortunately, this does not hold for every matrix $\mathbb{V}$, and here, unlike in Theorem~1, we have to impose additional restrictions on $\mathbb{V}$. Let us call a matrix $\mathbb{V}$ \textit{submodular} if for all $i<j$, $k<l$ it holds that $$ \mathbb{V}(i,k)+\mathbb{V}(j,l) \leq \mathbb{V}(i,l)+\mathbb{V}(j,k). $$ For example, the matrix $\mathbb{V}(x,y) = f(x-y)$ is submodular when $f$ is a convex function (in particular, the matrix $\mathbb{V}$ in Theorem~2 is submodular). \begin{Lemma} Let $\sigma \preceq \tau$ and consider the coupling defined by equality (1) for a submodular matrix $\mathbb{V}$. Then $$ X_{\sigma}^t(U,w) \preceq X_{\tau}^t(U,w). $$ \end{Lemma} \begin{proof} Suppose $t = 1$. Since the introduced order is transitive, we can limit consideration to neighboring configurations. So, let $\sigma(u) = \tau(u)$ for all $u \in V\setminus \{v\}$ and $\sigma(v) +1= \tau(v)$. Let some vertex $w$ be chosen for update. If $w \notin \mathcal{N}(v)$ then the neighborhood of $w$ is the same for both configurations and it holds that $X_{\sigma}^1(U,w)(w) = X_{\tau}^1(U,w)(w)$. Now consider $w \in \mathcal{N}(v)$. It will be enough to prove that for all $k \leq m$ the inequality $$ pref_{k}(\sigma, w) \leq pref_{k}(\tau ,w) $$ holds to be sure that $$ X_{\sigma}^1(U,w)(w) = \min(k \mid pref_k(\sigma,w) \geq U) \leq \min(k \mid pref_k(\tau,w) \geq U) = X_{\tau}^1(U,w)(w). $$ Here we use the notation of Lemma~6. $$ pref_{k}(\sigma, w) - pref_{k}(\tau ,w) = \sum_{i = 0}^k p_i(\sigma, w) - \sum_{i = 0}^k p_i(\tau, w) = \sum_{i=0}^k \frac{a_i}{a_0 +\ldots+a_{m}} - \sum_{i=0}^k\frac{b_i}{b_0 +\ldots+ b_{m}} =$$ $$= \frac{ (a_0 +\ldots+a_{k}) \cdot (b_0 +\ldots+ b_{m}) - (a_0 +\ldots+a_{m}) \cdot (b_0 +\ldots+ b_{k}) }{(a_0 +\ldots+a_{m})(b_0 +\ldots+ b_{m})} = \frac{ (a_0 +\ldots+a_{k}) \cdot (b_{k+1} +\ldots+ b_{m}) - (a_{k+1} +\ldots+a_{m}) \cdot (b_0 +\ldots+ b_{k}) }{(a_0 +\ldots+a_{m})(b_0 +\ldots+ b_{m})} $$ $$ = \frac{1}{(a_0+\ldots+a_{m})(b_0+\ldots+b_{m})}\sum_{i\leq k <j\leq m}( a_i b_j - a_j b_i ) \leq 0.$$ The last inequality holds since each summand is at most zero: this is provided by equation (4), the submodularity of the matrix $\mathbb{V}$, and the fact that the summation is performed over $i<j$. By an induction argument the proof immediately extends to arbitrary $t$. \hfill $\Box$ \end{proof} Now we can propose the following algorithm: \begin{algorithm} \caption{Monotone perfect MCMC} \begin{algorithmic} \State $U_t \gets \text{random uniform variables from the segment [0,1]}$ \State $w_t \gets \text{random uniform variables from the set $V$}$ \State $T \gets 1$ \Repeat \State $upper\gets \hat{m}$ \State $lower\gets \hat{1}$ \For{$t = -T \ldots -1$} \State $upper \gets X^1_{upper}(U_t,w_t)$ \State $lower \gets X^1_{lower}(U_t,w_t)$ \EndFor \State $T \gets 2T$ \Until{$upper = lower$} \State \Return $upper,T$ \end{algorithmic} \end{algorithm} Note that the algorithm uses the same random pair $(U_t,w_t)$ at the same $t$; that is why we initialize them only once, during the first call. The required number of steps of this algorithm is upper bounded by $4T_*$, where $T_*$ is the smallest $T$ such that the $upper$ and $lower$ values converge. In this case $T_*$ is a random variable depending on $U_t$ and $w_t$. Having found $T$ such that $T<T_* \leq 2T$, one can use binary search to find the exact value of $T_*$. This calculation has asymptotic complexity of order $T_*\ln T_*$.
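For concreteness, here is a minimal Python sketch of the algorithm above (our illustration; the one-step coupled update $X^1$ from equation (1) is assumed to be given as a function): \begin{verbatim}
import random

def monotone_cftp(update, top, bottom, n):
    # monotone coupling from the past; update(state, u, w)
    # implements one coupled Glauber step X^1_state(U, w);
    # top and bottom are the maximal/minimal configurations
    U, W = [], []           # shared randomness, reused across restarts
    T = 1
    while True:
        while len(U) < T:   # extend the random seeds into the past
            U.append(random.random())
            W.append(random.randrange(n))
        upper, lower = list(top), list(bottom)
        for t in range(T - 1, -1, -1):  # run from time -T up to -1
            upper = update(upper, U[t], W[t])
            lower = update(lower, U[t], W[t])
        if upper == lower:
            return upper, T  # a perfect sample and the bracketing T
        T *= 2
\end{verbatim} The essential point, as in \cite{ProppWilson}, is that the pair $(U_t,w_t)$ attached to a given time is generated once and reused in every restart; the returned $T$ only brackets $T_*$, and a binary search can then recover the exact value, as explained above.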
According to \cite{ProppWilson}, we have $${\sf E}\, T_* \leq 2t_{mix}\cdot(1+\ln n + \ln m).$$ This suggests that the Glauber dynamics and the monotone perfect MCMC are comparable in terms of computational requirements. Of course, the advantage of the monotone perfect MCMC is that it produces samples from the exact stationary distribution. \subsection{Matching} Since we can sample from the \textit{general distribution}, we can try to fit real social networks to our model. Towards this goal, we employ the maximum likelihood method. The log-likelihood function for our model takes the form $$ \log(\pi^*(\sigma)) = \log(e^{-\beta\varepsilon(\sigma)}) - \log\left(\sum_{\tau \in \Omega}e^{-\beta\varepsilon(\tau)}\right) $$ $$ =-\beta \varepsilon(\sigma) - \log\left(\sum_{\tau \in \Omega}e^{-\beta\varepsilon(\tau)}\right). $$ Now let us differentiate the above expression with respect to $\beta$: $$ \frac{d}{d\beta} \log(\pi^*(\sigma)) = - \varepsilon(\sigma) -\frac{1}{\sum \limits_{\tau \in \Omega}e^{-\beta\varepsilon(\tau)}} \sum \limits_{\tau \in \Omega} (-\varepsilon(\tau))e^{-\beta\varepsilon(\tau)}. $$ Equating the derived expression to zero, we obtain $$ \sum_{\tau \in \Omega} \frac{\varepsilon(\tau)e^{-\beta\varepsilon(\tau)}}{\sum_{\tau' \in \Omega}e^{-\beta\varepsilon(\tau')}} = \varepsilon(\sigma), $$ or, equivalently, $$ {\sf E}_\beta[\varepsilon(\tau)] = \varepsilon(\sigma). $$ We can estimate the left-hand side by $$ {\sf E}_\beta[\varepsilon(\tau)] \approx \frac{1}{N} \sum_{k=1}^{N} \varepsilon(\tau_k), $$ where the $\tau_k$ are generated by the perfect MCMC described in the previous section. \subsection{Numerical example with a real network} Let us consider the well-known social network with attributes {\it AddHealth} \cite{AddHealth1}. For our experiments, we take as the attribute the grade (class) of a pupil at school. It is an ordinal attribute taking values between 7 and 12. It seems natural that this network has a cluster structure based on the class attribute, because the probability of friendship between two pupils is higher if their grades are close. For this purpose, as in \cite{ANT16}, we have chosen the $6\times 6$ interaction matrix $\mathbb{V}(x,y) = (x - y)^2$. Since $\mathbb{V}$ is submodular, we can use the monotone perfect MCMC. We have taken the publicly available AddHealth graph \cite{AddHealth2} with $n = 1996$ vertices and maximum degree $\triangle = 36$. In this case Theorem~2 guarantees fast mixing for $\beta < 0.000461895$, or equivalently, for temperatures above $2165$. If we choose $\beta = 0.0002$, Theorem~2 gives the upper bound $27000$ on the mixing time, while the perfect MCMC algorithm takes about $20000$--$25000$ running steps. Moreover, if we choose $\beta$ larger than allowed by Theorem~2, e.g., about $0.04$, the perfect MCMC is still fast enough, finishing after approximately $200000$ steps. Since we have a relation between the expected number of steps of the perfect MCMC and the mixing time, we conclude that, on the one hand, our theorem is in agreement with the experiment and, on the other hand, on this particular graph there is fast mixing for a broader set of parameters. Whether it is possible to obtain a tighter mixing time estimate is an interesting direction for future research. We have also tried to fit the value of $\beta$ for the AddHealth data using a variation of the method of moments (see, e.g., \cite{S01}). Specifically, we tried to fit the simulated mean energy to the energy of the AddHealth data, which is equal to 12328; a sketch of this fitting loop is given below. 
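The following is a minimal sketch of that moment-matching step. It assumes a perfect sampler \texttt{sampler(beta)} (for instance the coupling-from-the-past routine above) and a function \texttt{energy} implementing $\varepsilon(\sigma)$; both names are illustrative. Bisection is justified because $\frac{d}{d\beta}{\sf E}_\beta[\varepsilon] = -\mathrm{Var}_\beta(\varepsilon) \leq 0$, so the mean energy is nonincreasing in $\beta$.

\begin{verbatim}
def mean_energy(beta, sampler, energy, N=100):
    # Monte Carlo estimate of E_beta[eps] from N perfect samples.
    return sum(energy(sampler(beta)) for _ in range(N)) / N

def fit_beta(target, sampler, energy, lo=0.0, hi=1.0, iters=20):
    # Bisection on beta: the mean energy decreases in beta, so keep
    # the half-interval whose estimated energy still brackets the target.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_energy(mid, sampler, energy) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
\end{verbatim}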
The perfect simulation algorithm converges in acceptable time for $\beta$ up to about 0.125, which gives an energy level of around 15000. We consider this a reasonable match. It is interesting that the AddHealth social network is on the boundary of rapid mixing. This might not be a coincidence, as a social network may self-organize to find a balance between sufficiently rapid mixing and division into communities. \section{Introduction} Pairwise Markov random fields, or Markov random fields with nonzero potential functions only for cliques of size two, have a large number of applications in statistical physics, image processing and machine learning. Let us mention just a few very important particular cases and applications. The Ising \cite{Ising25}, Potts \cite{Potts52} and Solid-on-Solid (SOS) \cite{MS91,RS06} models are basic models in statistical physics. Metric Markov random fields and the generalized Potts model are applied very successfully in image processing \cite{BVZ98,BVZ01,Setal08}. Pairwise Markov random fields are also extensively used in the study of classification and labeling problems, see e.g. \cite{BBM04,CDI98,KT02}. Our own motivation to study pairwise Markov random fields comes from the need to model the distribution of attributes in social networks, such as age, gender and interests. The fact that friends or acquaintances in social networks share common characteristics is widely observed in real networks and is referred to as homophily. The property of homophily implies that the more clustered social network members are, the more likely they are to share the same attribute. Nowadays social networks are intensively researched by both sociologists and computer scientists. However, to check hypotheses about social networks or to test an algorithm such as a sampling method, researchers need many social network examples to consider and to test. In \cite{ANT16} a model of a synthetic social network with attributes was proposed to test subsampling chain-referral methods on many network instances with various properties. The synthetic network model of \cite{ANT16} is similar in spirit to the SOS model and represents well the distribution of ordinal attributes such as age. Here we study a much more general model which can be used to describe the distribution of ordinal as well as non-ordinal attributes in social networks. Of course, we hope that the results will also be of interest to researchers from the statistical physics and machine learning communities. Specifically, in the present work we consider a general pairwise Markov random field and provide conditions for rapid mixing of the associated Glauber dynamics. Rapid mixing guarantees that we can quickly generate many configurations of attributes corresponding to a given Gibbs distribution or energy function. In the important particular case of submodular energy functions, we go a step further and construct a perfect simulation which samples quickly and without bias from the target distribution. Our results significantly generalize the corresponding results for the Ising model, see e.g. \cite{LevinMCaMT}. The proof in \cite{LevinMCaMT} relies on the particular size and values of the interaction matrix. Finally, we would like to note that even though our model has some common features with the exponential random graph model (see e.g., \cite{RPKL07}), there are important differences between these two models. 
The exponential random graph model generates the graph, whereas our model assumes that the graph is given and generates a configuration of attributes over the graph. \section{Model} Let a graph $G = (V,E)$, $|V| = n$, be given. In addition, each vertex $v$ has an attribute which takes a value from the finite set $M = \{1,...,m\}$. We denote by $\sigma \in \Omega=M^n$ a configuration, in which each vertex $v \in V$ takes a certain value $\sigma(v) \in M$ of the attribute. In the present work we restrict ourselves to the model with one attribute. Now we introduce a symmetric {\it interaction} matrix $\mathbb{V}$ of size $m\times m$ and say that the energy of a configuration $\sigma$ is given by $$ \varepsilon(\sigma) = \sum_{\{v_1, v_2\} \in E}\mathbb{V}(\sigma(v_1), \sigma(v_2)). $$ Let $|\mathbb{V}|$ denote the maximum absolute value of the elements of $\mathbb{V}$. Next we consider the \textit{Gibbs distribution} with respect to the introduced energy: $$ \pi^*(\sigma) = \frac{e^{-\beta \varepsilon(\sigma)}}{\sum \limits_{\tau \in \Omega}e^{-\beta \varepsilon(\tau)}} = Z^{-1}(\beta)e^{-\beta\varepsilon(\sigma)}, $$ where $\beta = \frac{1}{T}$ is a parameter, the inverse temperature of the system, and $Z(\beta)$ is the normalizing constant or, in statistical physics terminology, the partition function. This distribution describes a {\it pairwise Markov random field} over the graph $G$. We shall also refer to this distribution as the \textit{network attribute distribution}. We would like to sample configurations from the distribution $\pi^*(\sigma)$ to test various algorithms on a series of network realisations. However, the main problem is that the probability space is enormous, and it is impossible to sample from the Gibbs distribution directly without additional techniques. One such technique is the Glauber dynamics, described just below; another is the monotone perfect simulation described in detail in Section~5. Let $\mathcal{N}(v)$ be the set of neighbours of vertex $v$. Then we define the \textit{local energy} $\varepsilon_i(\sigma, v)$ for vertex $v$ and value $i$ in configuration $\sigma$ as follows: $$ \varepsilon_i(\sigma, v) = \sum_{u \in \mathcal{N}(v)}\mathbb{V}(i, \sigma(u)). $$ This formula calculates the energy in the neighbourhood of $v$ provided that the value of the attribute at $v$ is updated to $i$. Then we call the \textit{local distribution} for vertex $v$ in configuration $\sigma$ the probability distribution on the set $\{1,2,\ ...\ ,m\}$ with respect to the local energy: $$ p_i(\sigma, v) = {\mathbb{P}}(\sigma(v) \to i) := \frac{e^{-\beta \varepsilon_i(\sigma, v)}}{\sum\limits_{k \in M}e^{-\beta \varepsilon_k(\sigma,v)}} = Z^{-1}(\sigma,v, \beta)\cdot e^{-\beta \varepsilon_i(\sigma, v)},$$ which is the probability of updating the value at $v$ to $i$. The Glauber dynamics is defined as follows: \begin{enumerate} \item Choose an arbitrary starting distribution $\pi^0$ and then choose values for the vertices according to $\pi^0$; \item Choose a uniformly random vertex $v$; \item Update the value at $v$ according to the local distribution; \item Go to step 2. \end{enumerate} Let us denote by $\mathcal{X} = \{ X_t, t\geq 0 \}$ the Markov chain associated with the Glauber dynamics, with starting distribution $\pi^0$ and transition matrix $P = \{P_{\sigma, \tau}\}_{\sigma, \tau \in \Omega},$ $\ P_{\sigma,\tau} = \mathbb{P}\{X_{t+1} = \tau|X_{t} = \sigma\}$, which is associated with steps 2--3. A minimal code sketch of this dynamics is given below. 
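The following Python sketch of steps 2--3 is illustrative and not part of the model definition; the adjacency-list representation \texttt{adj}, the nested-list representation of $\mathbb{V}$, and the $0$-based storage of attribute values are all assumptions made for the example.

\begin{verbatim}
import math
import random

def local_distribution(sigma, v, adj, V, beta):
    # p_i(sigma, v): normalized Boltzmann weights of the
    # local energies eps_i(sigma, v).
    m = len(V)
    w = [math.exp(-beta * sum(V[i][sigma[u]] for u in adj[v]))
         for i in range(m)]
    Z = sum(w)
    return [x / Z for x in w]

def glauber_steps(sigma, steps, adj, V, beta, rng=random):
    # Steps 2-3 of the dynamics, repeated: pick a uniform vertex
    # and resample its attribute from the local distribution.
    n = len(sigma)
    for _ in range(steps):
        v = rng.randrange(n)
        p = local_distribution(sigma, v, adj, V, beta)
        sigma[v] = rng.choices(range(len(V)), weights=p)[0]
    return sigma
\end{verbatim}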
If steps 2--3 are repeated $t$ times, $\pi^t$ stands for the distribution on the space of configurations at time $t$. Sometimes we shall also use $P_{\sigma}^t(\cdot)$ to denote the probability distribution of $\mathcal{X}$ on $\Omega$ at time $t$, to emphasize that $\mathcal{X}$ starts from a certain configuration $\sigma$. Before we proceed further, let us note that the introduced model contains some well-known particular cases. For example, $$\mathbb{V} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$$ corresponds to the Potts model. If $m=2$, the Potts model becomes the Ising model. If we now take $\mathbb{V}(i,j)=f(|i-j|)$ with some convex function $f(\cdot)$, we obtain the metric Markov random field model extensively used in image processing. In \cite{ANT16}, the Markov random field with quadratic $f(\cdot)$ was used to model social networks with ordinal attributes. The case $\mathbb{V}(i,j)=|i-j|$ corresponds to the SOS model. \section{Main results} We can now formulate the main result of this article, which says that under certain conditions the Glauber dynamics corresponding to a general pairwise Markov random field mixes rapidly. \begin{Theorem} Let $\triangle$ be the maximum degree of the graph $G = (V,E),\ |V| = n$, and $\mathbb{V}$ be the interaction matrix. Let also $\beta$ be the inverse temperature and $M = \{1,2,\ ...\ ,m\}$ be the set of attribute values. If $$ \beta < \frac{1}{4|\mathbb{V}|}\ln \left(1 + \frac{1}{\triangle m}\right), $$ then $$ t_{mix} \leq \left \lceil \frac{n(\ln(n)+ \ln(m-1) + \ln(\frac{1}{\varepsilon}))}{1 - \triangle m(e^{4\beta |\mathbb{V}|} - 1)}\right \rceil. $$ \end{Theorem} We note that, independently of the temperature, the mixing time is at least of order $n \ln(n)$: reaching the stationary distribution by iterating requires that every vertex of the graph be updated at least once, and as $n$ grows to infinity, of order $n\ln(n)$ Markov chain steps are needed for the probability of updating each vertex at least once to tend to 1. More details on various lower bounds can be found in \cite{LevinMCaMT}. Before we proceed to prove the theorem, let us also note that it claims an upper bound of order $n\log n$. The corresponding result for the Ising model has been shown in, e.g., \cite{LevinMCaMT}. The present extension is not straightforward, since the proof in \cite{LevinMCaMT} is based on the particular form of the interaction matrix $\mathbb{V}$. \begin{proof} Let us choose two arbitrary configurations $\sigma$ and $\tau$ at time 0 and let the random vectors $X_{\sigma}^t$ and $X_{\tau}^t$ have distributions $P_{\sigma}^t(\cdot)$ and $P_{\tau}^t(\cdot)$, respectively. Then define $pref_k(\sigma, w)$, $k\leq m$, as the {\it prefix sum} of the probabilities of labeling $w$ with one of the first $k$ attribute values at the next step, namely, $$ pref_k(\sigma, w) = \sum_{i = 1}^k p_i(\sigma, w). $$ Let us consider the following joint distribution of the pair $(X_{\sigma}^t, X_{\tau}^t)$: first we choose uniformly at random a vertex $w$ to update (common for both configurations), and then we choose uniformly at random a value $U$ from $[0,1]$. Then we set the new configurations $X_{\phi}^t(U,w)$, $\phi \in \{\sigma, \tau \}$, at time $t$ by the relation \begin{equation} X_{\phi}^{t}(U,w) (\overline{w}) = \begin{cases} \phi(\overline{w}) &\overline{w} \neq w\\ \min(k \,|\, pref_k(\phi,w) \geq U) &\overline{w} = w \end{cases}, 
\end{equation} where the function $X_{\phi}^1:[0,1]\times V \rightarrow \Omega$ becomes a random vector once $U$ and $w$ are random variables. It is easy to see that the distribution of the pair $(X_{\sigma}^t(U,w), X_{\tau}^t(U,w))$ is a coupling of $P_{\sigma}^t(\cdot)$ and $P_{\tau}^t(\cdot)$. Next, we are going to find an $\alpha > 0$ as in Lemma~4 for two neighbor configurations. Let $\sigma, \tau$ be two neighbor configurations whose unique difference is at vertex $v$, i.e., ${|\sigma(v)-\tau(v)|=1}$. Let also $w$ be a uniformly chosen random vertex. If $w = v$, then $$ \rho(X_{\sigma}^1(U,w), X_{\tau}^1(U,w)) = 0. $$ If $w \notin \mathcal{N}(v) \cup \{v\}$, then $$ \rho(X_{\sigma}^1(U,w), X_{\tau}^1(U,w)) = |\sigma(v) - \tau(v)| = 1. $$ This is because in both cases the local distributions at $w$ are the same for both configurations. And if $w \in \mathcal{N}(v)$, then $$ \rho(X_{\sigma}^1(U,w), X_{\tau}^1(U,w)) = |\sigma(v) - \tau(v)| + |X_{\sigma}^1(U,w)(w) - X_{\tau}^1(U,w)(w)|. $$ Weighting each case by its probability, we can write \begin{equation} {\sf E}\rho(X_{\sigma}^1(U,w), X_{\tau}^1(U,w)) = 1 - \frac{1}{n} + \frac{1}{n}\cdot \sum_{w \in \mathcal{N}(v)}{\sf E}| X_{\sigma}^1(U,w)(w) - X_{\tau}^1(U,w)(w) |. \end{equation} Thus, an upper bound for the sum in (2) is needed. The following lemma, which is the key element of this work, helps to achieve this. \begin{Lemma} For arbitrary $\sigma, \tau \in \Omega$ and for all $w \in V$ the following equality holds \begin{equation} {\sf E}| X_{\sigma}^1(U,w)(w) - X_{\tau}^1(U,w)(w)| = \sum_{i = 1}^m| pref_{i}(\sigma, w) - pref_{i}(\tau ,w) |. \end{equation} \end{Lemma} \begin{proof} The expectation in (3) is taken over the uniform random variable $U$ distributed on $[0,1]$. Let us place on the segment $[0,1]$ precisely $m$ red points that correspond to $pref_i(\sigma, w)$ and $m$ blue points that correspond to $pref_i(\tau, w)$, $1 \leq i \leq m$. Since $pref_m(\sigma, w) = pref_m(\tau, w) = 1$, we obtain $2m - 1$ disjoint (with no common internal points) subsegments with red or blue endpoints (some subsegments may have length 0); they form a set $\{l_k\}_{k=1}^{2m-1}$. We say that a subsegment $l_k$ has value $h_{\sigma,k}$ if $l_k \subset [pref_{h_{\sigma,k} - 1}(\sigma, w), pref_{h_{\sigma,k}}(\sigma, w)]$. Thus, by definition, the mean of $|X_{\sigma}^1(U,w)(w) - X_{\tau}^1(U,w)(w)|$ is $$ {\sf E}|X_{\sigma}^1(U,w)(w) - X_{\tau}^1(U,w)(w)| = \sum_{k = 1}^{2m-1}{\sf length}(l_k)\cdot |h_{\sigma, k} - h_{\tau, k}|. $$ In other words, the length of $l_k$ appears in the expectation as many times as the difference between the values of the attribute chosen for the update in $\sigma$ and in $\tau$. Therefore, we now count how many times the length of each subsegment is added to the right-hand side of the above equality. Towards this goal, fix $k$ for the moment, let $h_{\sigma, k} = a$, $h_{\tau, k} = b$, and assume without loss of generality that $b \geq a$. Then the following series of inequalities holds $$ \begin{cases} pref_a(\sigma,w) \geq pref_a(\tau, w),\\ pref_{a+1}(\sigma,w) \geq pref_{a+1}(\tau, w),\\ ...\\ pref_{b-1}(\sigma,w) \geq pref_{b-1}(\tau, w). \end{cases} $$ Let us identify the terms $|pref_{i}(\sigma, w) - pref_{i}(\tau ,w)|$ in (3) which contain the contribution of the subsegment $l_k$. 
The length of $l_k$ is added for the first time to the right-hand side of (3) for $i=a$: according to the definition of $a$, the minimal $i$ such that the segment $[0, pref_{i}(\sigma, w)]$ contains $l_k$ is $i = a$, while the segment $[0, pref_a(\tau, w)]$ does not contain this subsegment. The second time it is added is for $i = a+1$, and so on; the last time it is added is for $i = b-1$, which follows from the definition of $b$. Hence, $l_k$ is added exactly $b-a$ times. This establishes the equality between the two sums and completes the proof of the lemma. \hfill $\Box$ \end{proof} In fact, this lemma will be used only for neighbor configurations $\sigma, \tau$, as noted before Lemma~6. Recall that Lemma~4 and then Lemma~5 give us an upper bound on the mixing time, but to apply them we need to obtain the corresponding inequalities for neighbour configurations. Therefore, we now give a uniform upper bound for (3). For convenience we introduce $$ S_i = \sum_{u \in \mathcal{N}(w)\setminus\{v\}}\mathbb{V}(i, \sigma(u)) = \sum_{u \in \mathcal{N}(w)\setminus\{v\}}\mathbb{V}(i, \tau(u)),$$ $$ a_i = \exp\left (-\beta\sum_{u\in \mathcal{N}(w)}\mathbb{V}(i, \sigma(u)) \right ) = \exp\left (-\beta (S_i + \mathbb{V}(i, \sigma(v))) \right ),$$ $$ b_i = \exp\left (-\beta\sum_{u\in \mathcal{N}(w)}\mathbb{V}(i, \tau(u)) \right ) = \exp\left (-\beta (S_i + \mathbb{V}(i, \tau(v))) \right ).$$ Thus, $$ \begin{cases} p_i(\sigma, w) = \frac{a_i}{a_1 + \dots + a_{m}} \\ p_i(\tau, w) = \frac{b_i}{b_1 + \dots + b_{m}} \end{cases}. $$ The following inequality will be useful: \begin{equation} \frac{a_i b_k}{a_k b_i} = \exp(-\beta(\mathbb{V}(i, \sigma(v)) + \mathbb{V}(k, \tau(v)) - \mathbb{V}(k, \sigma(v)) - \mathbb{V}(i, \tau(v)))) \leq e^{4\beta|\mathbb{V}|}. \end{equation} Then the upper bound for (3) can be derived as follows: $$\sum_{k = 1}^m| pref_{k}(\sigma, w) - pref_{k}(\tau ,w) | \leq \sum_{k=1}^{m}\sum_{i = 1}^k|p_i(\sigma, w) - p_i(\tau, w)| \leq $$ $$\leq m \sum_{i = 1}^m|p_i(\sigma, w) - p_i(\tau, w)| = m\sum_{i=1}^m\left | \frac{a_i}{a_1 +\dots+a_{m}} - \frac{b_i}{b_1 +\dots+ b_{m}}\right| \leq$$ $$ \leq \frac{m}{(a_1+\dots+a_{m})(b_1+\dots+b_{m})}\sum_{i = 1}^m |a_i(b_1 +\dots+b_m) - b_i(a_1 + \dots+a_m)| \leq $$ $$\leq \frac{m}{(a_1+\dots+a_{m})(b_1+\dots+b_{m})}\sum_{i=1}^m \sum_{j = 1}^m |a_i b_j - a_j b_i| \leq $$ \begin{equation} \leq \frac{m}{(a_1+\dots+a_{m})(b_1+\dots+b_{m})}\sum_{i=1}^m \sum_{j = 1}^m a_j b_i \left |e^{4\beta|\mathbb{V}|} - 1 \right | \leq m\left(e^{4\beta |\mathbb{V}|} - 1\right). \end{equation} Now, collecting together (2), (3) and (5), we obtain \begin{equation} {\sf E}\rho(X_{\sigma}^1, X_{\tau}^1) \leq 1 - \frac{1 - \triangle m (e^{4\beta|\mathbb{V}|} - 1)}{n} \leq \exp\left (-\frac{1 - \triangle m(e^{4\beta |\mathbb{V}|} - 1)}{n} \right ). \end{equation} Note also that the diameter of $\Omega$ is equal to $n(m-1)$; it corresponds to the distance between the configurations $\hat{1} = (1,1,\ ...\ ,1)$ and $\hat{m} = (m,m,\ ...\ ,m)$. Now, invoking Lemma~5 with the $\alpha$ provided by (6), we obtain the upper bound for $t_{mix}(\varepsilon)$ given in the theorem statement. \hfill $\Box$ \end{proof} Having proved the theorem, we can now consider modifications of the interaction matrix $\mathbb{V}$ and their influence on the model. 
It is easy to see from the definition of the Gibbs distribution that if we consider the matrix $c\mathbb{V}$, where each element of $\mathbb{V}$ is multiplied by a factor $c$, we obtain a new probability distribution on the configuration space $\Omega$ which is in fact equal to the Gibbs distribution for the pair $\mathbb{V}$ and $c\cdot\beta$. Moreover, if we add a constant $d$ to all elements of the matrix $\mathbb{V}$, the distribution does not change at all. Since $|\mathbb{V}|$ enters Theorem~1, we can diminish it to some extent. This results in the following refinement. \begin{col} Let $\triangle$ be the maximum degree of the graph $G = (V,E),\ |V| = n$, and $\mathbb{V}$ be the interaction matrix. Let also $\beta$ be the inverse temperature and $M = \{1,2,\ ...\ ,m\}$ be the set of attribute values. Let also $$K = \frac{\max\limits_{x,y}\mathbb{V}(x,y) - \min\limits_{x,y}\mathbb{V}(x,y)}{2}.$$ If $$\beta < \frac{1}{4K}\ln \left(1 + \frac{1}{\triangle m}\right), $$ then $$ t_{mix} \leq \left \lceil \frac{n(\ln(n)+ \ln(m-1) + \ln(\frac{1}{\varepsilon}))}{1 - \triangle m(e^{4\beta K} - 1)}\right \rceil. $$ \end{col} This refinement gives a slightly better bound on the mixing time. However, we prefer to keep both formulations, since the first variant may simply be more convenient notationally in some settings. In the case of a quadratic dependence in $\mathbb{V}$ we obtain an even better upper bound. \begin{Theorem} If $\ \mathbb{V}(x,y) = (x - y)^2$, and $$ \beta < \frac{1}{2(m-1)}\ln\left(1 + \frac{1}{\triangle m}\right), $$ then $$ t_{mix} \leq \left \lceil \frac{n(\ln(n)+ \ln(m-1) + \ln(\frac{1}{\varepsilon}))}{1 - \triangle m(e^{2\beta (m-1)} - 1)}\right \rceil. $$ \end{Theorem} In this particular case $|\mathbb{V}| = (m-1)^2$, and the above result is clearly stronger than the one which can be obtained from Corollary~1. \begin{proof} The only difference in the proof of this theorem with respect to the previous results is in inequality (4). Recall that we use that inequality only for neighbour configurations $\sigma$ and $\tau$, which means that there is a vertex $v$ such that $\sigma$ and $\tau$ agree everywhere except at $v$, and for that vertex $|\sigma(v) - \tau(v)| = 1$. Since $\mathbb{V}(x,y) = (x - y)^2$, we can rewrite the left-hand side of inequality (4) in the following way: $$\frac{a_i b_k}{a_k b_i} = \exp(-\beta((i-\sigma(v))^2 + (k - \tau(v))^2 - (k - \sigma(v))^2 - (i - \tau(v))^2)).$$ Now, without loss of generality, $\sigma(v) + 1 = \tau(v)$, and then \begin{equation} \frac{a_i b_k}{a_k b_i} = \exp(2\beta(k-i)) \leq \exp(2\beta (m-1)). \end{equation} The latter provides the $\alpha$ for Lemma~5 and completes the proof of the theorem. \hfill $\Box$ \end{proof} \noindent \textbf{Remark.} All three results above show fast mixing under a condition on the temperature of the system. In fact, it is impossible to prove fast mixing in the general case independently of the temperature. This has already been shown for the Ising model, and we can generalize that fact and demonstrate that for an arbitrary $m$ and any $m\times m$ matrix $\mathbb{V}$ whose elements are not all equal, there exist a temperature and a graph such that the mixing time is exponential in the graph size. Moreover, we believe that for every $m$ and $\mathbb{V}$ there exists an example of a graph on which mixing is fast independently of the temperature. This is a good question to address in future research. 
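To make the conditions above concrete, here is a small sketch evaluating the threshold and the bound of Theorem~2 numerically; the function names are illustrative, and the graph parameters are those of the AddHealth example from Section~5.

\begin{verbatim}
import math

def beta_threshold(delta, m):
    # Fast-mixing condition of Theorem 2 for V(x, y) = (x - y)^2.
    return math.log(1.0 + 1.0 / (delta * m)) / (2.0 * (m - 1))

def tmix_bound(n, m, delta, beta, eps):
    # Upper bound on t_mix from Theorem 2 (valid below the threshold).
    num = n * (math.log(n) + math.log(m - 1) + math.log(1.0 / eps))
    den = 1.0 - delta * m * (math.exp(2.0 * beta * (m - 1)) - 1.0)
    return math.ceil(num / den)

# AddHealth-sized example: n = 1996, m = 6, maximum degree 36.
print(beta_threshold(36, 6))    # ~0.000461895, as quoted in Section 5
print(tmix_bound(1996, 6, 36, 0.0002, 0.25))
\end{verbatim}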
\section{Preliminaries} Here we collect several well-known results which we use in the sequel. It is well known, see e.g. \cite{Bremaud} and \cite{LevinMCaMT}, that the Markov chain $\mathcal{X}$ corresponding to the Glauber dynamics is reversible with stationary distribution $\pi^*$. \begin{Lemma} The Markov chain $\mathcal{X}$ is time-reversible with the stationary distribution given by $\pi^*(\sigma)=Z^{-1}(\beta) e^{-\beta \varepsilon(\sigma)}$. In other words, $$ \pi^*(\sigma)\cdot P_{\sigma, \tau} = \pi^*(\tau)\cdot P_{\tau, \sigma} $$ for all $\sigma, \tau \in \Omega.$ \end{Lemma} For two distributions $\pi_1, \pi_2$ on the state space $\Omega$ we define the \textit{total variation} distance between them as $$ || \pi_1 - \pi_2 ||_{TV} = \frac{1}{2}\sum_{\sigma \in \Omega}|\pi_1(\sigma) - \pi_2(\sigma)|. $$ Let $\mu$ and $\nu$ be two distributions on the same state space $\Omega$. A pair of random variables $(X_{\mu},X_{\nu})$ forms a \textit{coupling} if it is distributed so that the marginal distribution of $X_{\mu}$ is $\mu$ and the marginal distribution of $X_{\nu}$ is $\nu$. The main motivation for introducing this notion is the following lemma \cite{Bremaud}. \begin{Lemma} Let $\nu$ and $\mu$ be two probability distributions on $\Omega$. Then $$ || \mu - \nu ||_{TV} = \inf \{\ {\mathbb{P}}(X_{\mu} \neq X_{\nu})\ |\ (X_{\mu},X_{\nu})\ is\ a\ coupling\ of\ \mu\ and\ \nu\ \}. $$ \end{Lemma} This lemma is very useful, because a comparison between distributions is reduced to a comparison between random variables. The next lemma shows how the total variation distance from the stationary distribution can be estimated \cite{Bremaud,LevinMCaMT}. \begin{Lemma} Let $\sigma$ and $\tau$ be initial configurations from the state space $\Omega$. Then $$ || \pi^t - \pi^* ||_{TV} \leq \max \limits_{\sigma,\tau \in \Omega} || P_{\sigma}^t(\cdot) - P_{\tau}^t(\cdot) ||_{TV}. $$ \end{Lemma} Now we introduce a metric on the configuration space $\Omega$. Let $\rho(\cdot, \cdot)$ be defined by $$ \rho(\sigma, \tau) = \sum_{v \in V}|\sigma(v) - \tau(v)|.$$ \begin{Lemma} Let $\alpha$ be such that for every two neighbor configurations $\sigma, \tau \\ (\rho(\sigma, \tau) = 1)$ the corresponding random variables $X_{\sigma}^1$ and $X_{\tau}^1$ satisfy the inequality $$ {\sf E}\rho(X_{\sigma}^1, X_{\tau}^1) \leq e^{-\alpha}.$$ Then $$\forall t\in \mathbb{N},\ \forall \sigma, \tau \in \Omega: \quad {\sf E}(\rho(X_{\sigma}^t, X_{\tau}^t)) \leq {\sf diam}(\Omega)\cdot e^{-\alpha t}.$$ \end{Lemma} Lemma~4 shows how the contraction property is carried over from neighbor configurations to the whole space $\Omega$ for an arbitrary time moment. For some $\varepsilon > 0$, the {\it mixing time} is defined as follows: $$ t_{mix}(\varepsilon) = \min(t \in \mathbb{N}\ | \ ||\pi^t - \pi^*||_{TV} < \varepsilon). $$ The next lemma is based on Lemma~4 and provides an upper bound on the mixing time in terms of $\alpha$. \begin{Lemma} Suppose $\alpha > 0$ is such that ${\sf E}(\rho(X_{\sigma}^1, X_{\tau}^1)) \leq e^{-\alpha}$ for all neighbour configurations $\sigma, \tau$. Then $$t_{mix} \leq \left \lceil \frac{1}{\alpha}[\ln({\sf diam}(\Omega)) + \ln(1/\varepsilon)] \right \rceil.$$ \end{Lemma} Lemmas~4 and~5 are borrowed from \cite{LevinMCaMT}. In fact, for the results that follow it would be enough to refer only to Lemma~5, but we mention the intermediate steps to help the reader better understand the proof of our main result.
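As a small illustration of these definitions, the following sketch computes the two distances used above; representing distributions as Python dictionaries is an illustrative choice.

\begin{verbatim}
def total_variation(p, q):
    # ||p - q||_TV = (1/2) * sum of |p(x) - q(x)| over a common support.
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def rho(sigma, tau):
    # The metric on the configuration space used in Lemmas 4 and 5.
    return sum(abs(s - t) for s, t in zip(sigma, tau))
\end{verbatim}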
\section{Introduction} The primary data access for {\em Gaia} \citep{2016arXiv160904172G} and several other past and upcoming large-scale surveys is via Table Access Protocol (TAP) services that allow users to execute SQL-like queries against a large remote database. This model of bringing the computation to the data is enforced by the size of these datasets; client-side transportation, storage and processing of the whole dataset is for most purposes impractical or at least highly inefficient. The remote database engines are typically powerful and can perform fast execution of complex queries. Where the desired result is some kind of source list of limited size, filtered by criteria such as sky position or photometry down to no more than a few thousand or maybe million objects, selection on source criteria works well. But where the requirement is to sample all or a large fraction of the sources in a catalogue in order to obtain statistical information about all or large regions of the sky, the model of retrieving source lists breaks down, since results with very large row counts are disallowed by the service or simply unwieldy to transport to and process at the client. It is however possible to calculate in the database histograms representing statistical aggregations of all or many data rows. By binning into a tessellating grid of sky tiles, queries can produce weighted or unweighted sky density maps representing source density or other statistical quantities by sky position. Such queries can be executed in reasonable amounts of time and provide result sets small enough to be transported to the client for examination and analysis. \section{Tiling Scheme} Various sky tiling schemes exist, including HTM, Q3C, and HEALPix. We favour the NESTED variant of HEALPix \citep{2005ApJ...622..759G} which has a number of advantages for this application, including the facts that tiles have equal area, facilitating density map analysis, and that simple SQL-friendly arithmetic (integer division) can be used to degrade pixel index to a lower resolution. The HEALPix grid at order $N$ defines tiles with indices in the range $[ 0, 12 \times 4^{N} )$. A sky position within tile $i$ at order $N$ falls within tile $i/4^{N-M}$ at a lower order (coarser resolution) $M$. \section{Service Requirements} The following items must be in place for end-users to be able to construct and use customised weighted or unweighted all-sky density maps for catalogues that would be impractical to download: \begin{description} \item[SQL-like access to source catalogue:] Public datasets are increasingly exposed via the Virtual Observatory protocol TAP (Table Access Protocol), allowing remote execution of ADQL (SQL-like) queries. \item[HEALPix column or function:] {\em Either\/} the table must have a column giving the index of the HEALPix tile in which the source position falls, {\em or\/} a User-Defined Function must exist that can calculate tile index for each row (e.g.\ from RA, Dec columns). Most existing TAP services do not currently provide this, but the ARI-Gaia and DaCHS TAP services have introduced such a UDF {\em (this work)\/}: \begin{quote} {\tt ivo\_healpix\_index(order, ra, dec)} \end{quote} An order-12 HEALPix index is also buried in bits 36--63 of the Gaia {\tt source\_id} column and can be extracted by integer division. 
\item[GROUP BY query:] An SQL query of the form \begin{quote} {\tt SELECT {\sl (agg-func)} FROM {\sl (table)} GROUP BY {\sl (healpix-index)}} \end{quote} calculates the sky map, returning one row per populated sky pixel. The aggregate function defines the weighting (e.g.\ {\tt COUNT(*)} gives unweighted source density, {\tt AVG(x)} gives the mean value of column or expression {\tt x}) and a {\tt WHERE} clause can optionally be added to restrict the selection of sources. \item[Query limits:] Limits on query execution time and output size must accommodate execution of these aggregating queries. They typically take very roughly an hour per billion rows, which is long but not unfeasibly so. Million-row outputs are a convenient size for visualisation (HEALPix order 8 has 786\,432 tiles) though finer or coarser resolutions can also be useful. Some TAP services impose limits on execution time or output row count that can preclude these queries. \item[Semantic markup of HEALPix output:] An undocumented convention exists for serialization of HEALPix maps in FITS files, but not for VOTable, which is the standard output format for TAP. Discussion is ongoing in the IVOA about how best to do this. \end{description} \begin{figure} \plotone{P1-31_f1} \caption[figure 1]{ \label{P1-31:2mass} $J-K$ colour for 2MASS point sources, using the query: ``{\tt\footnotesize SELECT ivo\_healpix\_index(9,raj2000,dej2000) AS hpx9, AVG(jmag-kmag) AS j\_k FROM twomass.data WHERE qflg LIKE 'A\_A' AND cflg LIKE '0\_0' AND xflg = '0' GROUP BY hpx9}''. The proposed User-Defined Function is used to calculate HEALPix index from sky position. The upper right half of the image used the {\tt WHERE} clause above, which selects only sources with good J/K photometry, while the lower left includes all sources (no {\tt WHERE} clause). With the custom selection the image is cleaner and the values are lower on average, though not uniformly over the sky. This query took 16/39 minutes to scan 163/471 million rows using the GAVO DC TAP service. Plot by STILTS. } \end{figure} \begin{figure} \plotone{P1-31_f2} \caption[figure 2]{ \label{P1-31:poserr} Mean isotropic positional error of Gaia DR1 sky positions, using the query: ``{\tt\footnotesize SELECT source\_id/2199023255552 AS hpx9, AVG(SQRT(ra\_error*ra\_error+dec\_error*dec\_error)) AS pos\_error FROM gaia.dr1 GROUP BY hpx9}''. The HEALPix index is recovered from the Gaia {\tt source\_id} column using integer division. This query took 70 minutes to scan 1.1 billion rows using the GAVO DC TAP service. Plot by STILTS. } \end{figure} \section{Analysis in TOPCAT and STILTS} Recent releases of the TOPCAT/STILTS table analysis suite \citep{2005ASPC..347...29T} include new features for working with HEALPix maps. Tables with an implicit or explicit HEALPix index column can be visualised interactively or exported to bitmapped or vector graphics files. They can be displayed within TOPCAT's Sky Plot window which offers interactive adjustment of colour maps and grid resolution, pan/zoom navigation, a choice of sky projections and coordinate systems, and the option to overlay multiple plots of different types. Figures \ref{P1-31:2mass} and \ref{P1-31:poserr} show examples plotted by STILTS. There are also new capabilities to generate HEALPix maps on the client side from local source catalogues and a number of HEALPix-related functions added to the expression language. 
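The integer arithmetic used above (degrading a NESTED index by integer division, and recovering the order-12 index from the top bits of the Gaia {\tt source\_id}) is easy to reproduce client-side. The following Python sketch is illustrative rather than part of any standard API; the bit shift assumes the bits 36--63 convention quoted earlier, which is consistent with the {\tt source\_id/2199023255552} divisor used in Figure~\ref{P1-31:poserr}.

\begin{verbatim}
def degrade(pix, order_in, order_out):
    # NESTED HEALPix index at a coarser order: integer-divide
    # by 4**(order_in - order_out).
    return pix // 4 ** (order_in - order_out)

def gaia_healpix(source_id, order=12):
    # Recover the order-12 NESTED index stored in the top bits of a
    # Gaia DR1 source_id, optionally degrading to a coarser order.
    return degrade(source_id >> 35, 12, order)

# The unweighted density-map query pattern from the text, as plain ADQL:
ADQL = ("SELECT ivo_healpix_index(9, raj2000, dej2000) AS hpx9, "
        "COUNT(*) AS n_src FROM twomass.data GROUP BY hpx9")
\end{verbatim}

Such a recipe can be applied to query results before feeding them to visualisation tools.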
Since HEALPix maps are tables, TOPCAT and STILTS can be used to analyse and manipulate them in general, non-visual ways too, for instance calculating statistics and performing joins. \section{Conclusions} An all-sky or wide-field view of quantities aggregated from a large catalogue can sometimes reveal large scale features or trends in astronomical or instrumental behaviour that would be difficult to discern from other data products. Source density maps are the most obvious application, but there are numerous other possibilities. Although some data centers (including the ESA and ARI Gaia archives) offer for download various pre-calculated all-sky maps in graphical or tabular form, it is often useful for end-users to construct their own, for instance applying custom source selections or weighting functions not foreseen by data centers. Two examples are given in the figures. We show that this is feasible using TAP services given certain modest requirements. Although this technique is not novel, the lack of the required features in most existing TAP services indicates that it is not widely practised. To enable more widespread use of this technique, we recommend that TAP services should make available the User-Defined Function {\tt ivo\_healpix\_index(order, ra, dec)}, and should also consider the case of sky map creation when setting query timeout and row output limits. We also encourage the IVOA to standardise the representation of HEALPix tile indices in VOTables. \acknowledgements This project has received funding from the EU FP7-SPACE-2013-1 grant 606740 (GENIUS), the UK's STFC grant ST/M000907/1 (Gaia CU9), and the BMBF grant 05A11VH3 (GAVO). It has made use of data from the ESA mission {\em Gaia} processed by DPAC, and from the UMass/IPAC/CalTech project 2MASS.
\section{Introduction} \label{intro} The dynamics of antikaons interacting with nucleons and nuclei is one of the challenging current problems in strangeness nuclear physics. The $\bar{K}N$ interaction at low energy is strongly attractive and generates the $\Lambda$(1405) resonance (abbreviated as $\Lambda^*$) as a quasi-bound state embedded in the $\pi\Sigma$ continuum below the $\bar{K}N$ threshold. Thus, one expects unusual, interesting phenomena to be observed when an antikaon is injected into or stopped in nuclei. Theoretical interest in $\bar{K}$-nuclear bound states was triggered by the works of Akaishi and Yamazaki (A-Y) looking for $\bar{K}$ bound states in several few-body systems~\cite{akaishi,yamazaki1,dote1,dote2}, which were predicted to be not only deeply bound but also unusually shrunk. In addition to the lightest possible antikaon-nucleus system, $K^{-}pp$, a series of proton-rich $K^{-}$ bound systems was predicted \cite{yamazaki1}; these can be called kaonic nuclear clusters (``KNC''s). The proton and neutron distributions in KNC's were studied extensively using the antisymmetrized molecular dynamics (AMD) method by Dote {\it et al.} \cite{dote1,dote2}. Subsequently, theoretical studies of KNC's, especially of $K^-pp$, were developed using different models and methods to solve the three-body system \cite{shev1,shev2,ikeda1,ikeda2,dote3,dote4,ikeda3}. These calculations have shown essentially that the $K^-pp$ system is bound below the break-up threshold, in agreement with A-Y's original prediction~\cite{yamazaki1}, though some differences between the predictions remain. Very recently, Maeda {\it et al.}~\cite{maeda} have carried out Faddeev and Faddeev-Yakubowsky calculations for the three- and four-body systems $\bar{K}NN$, $\bar{K}NNN$, $\bar{K}\bar{K}N$ and $\bar{K}\bar{K}NN$, with varied elementary potentials, overviewing their binding energies, densities and shapes. It was found and emphasized in refs.~\cite{yamazaki2,yamazaki3} that the essential ingredient of KNC's is the $\Lambda^* = K^-p$. The strong binding force in KNC's originates not only from the direct $\bar{K}N$ interaction, but also from the exchange integral arising from the ``Platz-Wechsel'' (place-exchange) effect, \`{a} la the Heitler-London mechanism~\cite{HL} for hydrogen molecular bonding. This multi-body attraction was named the ``super-strong nuclear force''~\cite{yamazaki2}. Parallel to the theoretical activities, experimental searches for KNC's have been carried out, but so far most of the trials have not been conclusive. The FINUDA group at DAPHNE first reported a $K^-pp$-like peak in the invariant-mass spectrum of $\Lambda-p$ pairs emitted in $K^-$ capture by light targets~\cite{FINUDA}, but this result was poor in statistics, and moreover, its interpretation of the observed spectrum in terms of a single Lorentzian peak without a background component, yielding a binding energy of $B_K = 115 \pm 7$ MeV and a width of $\Gamma=67\pm14$ MeV, was questioned~\cite{Ramos}. In 2007 a theoretical study of the structure of $K^-pp$ and of its formation in the $d (\pi^+, K^+)$ reaction and in the $p+p\rightarrow{K}^{+}+K^{-}pp$ reaction was performed~\cite{yamazaki3}. The former method follows a well-known hypernuclear formation scheme, but the formation probability of $K^-pp$ was calculated to be only about 1\% of the quasi-free background component. 
With such a pessimistic prediction and without a suitable beam line and detection system available, no experimental attempt had been made until the recent J-PARC E27 experiment~\cite{e27}. Concerning the other method, using the $p + p$ reaction, a very exotic formation mechanism was theoretically revealed, in contrast to the conventional pessimistic expectation. In such a high-energy collision a large momentum of around 1.6 GeV/c is transferred to the formed system, and thus the sticking of $K^-$ to the involved nucleus should be extremely small. Contrary to this pessimistic view, the calculated cross section for $K^{-}pp$ was found to be as large as that for the free production of $\Lambda^{*}=\Lambda$(1405). The reason for this surprising, paradoxical consequence is that the formed state $K^-pp$ is a condensed object in which $\Lambda^{*}$ and $p$ are bound with high internal momenta, and such an object can be populated by high-energy short-range $p+p$ collisions. The produced $\Lambda^*$ is in close proximity to the proton participating in the collision. A small working group (M. Maggiora, K. Suzuki, P. Kienle and T. Yamazaki) was formed to examine this surprising hypothesis using the large amount of existing exclusive data on $p+p\rightarrow p+\Lambda+K^{+}$ reactions taken by the DISTO collaboration at Saturne of Saclay. In the conventional view, where the $K^-pp$ is not dense, no such peak formation would take place. Only if the $K^{-}pp$ were unusually dense would a peak comparable to the free emission of $\Lambda^*$ be observed. In 2010 the DISTO group published the discovery of a gigantic peak~\cite{yamazaki4}, using the data at the incident energy of $T_p=$ 2.85 GeV. Its mass was found to be $M_X=2267\pm 2 (stat)\pm 5 (syst)$ MeV/$c^2$, and a binding energy of $B_X=$ 105 MeV and a width of $\Gamma_X=118\pm 8 (stat)\pm 10 (syst)$ MeV were deduced. Recently, another report on the same reaction, but with an incident energy of 2.5 GeV, was published by the same group~\cite{Kienle}. The observed absence of the peak X at $T_p=$ 2.5 GeV was interpreted as being due to the incident proton energy being too low to produce the $\Lambda^*$ doorway. More recently, the HADES group at GSI reported the absence of X at the incident energy of 3.5 GeV~\cite{HADES}. This was interpreted as being due to the incident energy being too high, which makes the collision dynamics sit outside the favourable Dalitz zone of double resonance that is realized at $T_p=$ 2.85 GeV. We believe it to be vitally important to extend the theoretical and experimental search to four-body KNC's. In the present study, we solve the Alt-Grassberger-Sandhas (AGS) equations for $\bar{K}NN$ and $\bar{K}NNN$ with an early phenomenological model of the $\bar{K}N$ interaction, applying our approach based on the coupled-channel AGS equations developed in~\cite{shev2,fix}. This paper is composed as follows. In sect. \ref{formal}, we first give a brief recapitulation of the three-body equations and then present the formulas corresponding to the four-body equations. The inputs for the AGS system of equations are given in sect. \ref{inp}. A discussion of the results can be found in section \ref{result}. Finally, we summarize our conclusions in sect. \ref{conclu}. \section{Formulation of the problem} \label{formal} \subsection{Three-body AGS equations} In the present work, we employ the three- and four-body Faddeev equations in momentum space, using the Alt-Grassberger-Sandhas form~\cite{alt}. 
Three-body Faddeev equations~\cite{shev2} in the AGS form are given by \begin{equation} \mathcal{K}_{ij,I_{i} I_{j}}^{\alpha\beta}=\delta_{\alpha\beta} \mathcal{M}_{ij,I_{i}I_{j}}^{\alpha\beta} +\sum_{k,I_{k};\gamma}\mathcal{M}_{ik,I_i I_k}^{\alpha} \tau_{k,I_k}^{\alpha\gamma} \mathcal{K}_{kj,I_k I_j}^{\gamma\beta}, \label{ags1} \end{equation} where the operator $\mathcal{K}_{ij,I_{i} I_{j}}^{\alpha\beta}$ is the transition amplitude between channels $\alpha$ and $\beta$, the operator $\mathcal{M}_{ij,I_{i}I_{j}}^{\alpha\beta}$ is the corresponding Born term, and $\tau_{i,I_i}^{\alpha\beta}$ is the two-body t-matrix embedded in the three-body system. Here, the Faddeev partition indices $i,j=$ 1, 2, 3 simultaneously denote a spectator particle and an interacting pair, while the particle indices $\alpha,\beta=$ 1, 2, 3 denote the three-body channels. We use these Faddeev equations to solve the $\bar{K}NN-\pi\Sigma{N}$ three-body system. Depending on the two-nucleon spin and isospin, we treat either the $K^{-}pp$ or the $K^{-}d$ system. The calculation scheme, which formally allows an exact solution, is based on the separable approximation of the appropriate integral kernels. The separable approximation of the kernel of the Faddeev integral equation permits one to represent the dynamical equations in terms of particle exchange diagrams~\cite{fix}. The key ingredient of the quasi-particle method~\cite{alt2,nadro} is the separable representation of the off-shell scattering amplitudes for the two- and three-body systems. We also have to introduce the separable representation for the three-body amplitudes and driving terms, which will be necessary to find the pole position of the $\bar{K}NN$ system. For this purpose we apply the Hilbert-Schmidt expansion (HSE) method \begin{equation} \mathcal{M}_{ij,I_i I_j}^{\alpha}(p,p',\epsilon)=-\sum^{N_{r}}_{n=1}\lambda_{n}(\epsilon) u_{n;i,I_i}^{\alpha}(p,\epsilon)u_{n;j,I_j}^{\alpha}(p',\epsilon), \label{ags} \end{equation} where the form factors $u_{n;i,I_i}^{\alpha}(p,\epsilon)$ are taken as the eigenfunctions of the kernel of eq. (\ref{ags1}), with the eigenvalues $\lambda_{n}(\epsilon)$. The separable form of the Faddeev transition amplitudes is given by \begin{equation} \mathcal{K}_{ij,I_i I_j}^{\alpha\beta}(p,p',\epsilon)=\sum^{N}_{n=1}u_{n;i,I_i}^{\alpha} (p,\epsilon)\zeta_{n}(\epsilon)u_{n;j,I_j}^{\beta}(p',\epsilon), \label{ags2} \end{equation} where the functions $\zeta_{n}(\epsilon)$ obey the equation \begin{equation} \zeta_n(\epsilon)=\lambda_n(\epsilon)/(\lambda_n(\epsilon)-1). \label{zet} \end{equation} Then, using the separable approximation for the Faddeev amplitudes and driving terms in (\ref{ags1}), the Faddeev equations take the form \begin{equation} u_{n;i,I_i}^{\alpha}=\frac{1}{\lambda_n}\sum_{k=1}^{3}\sum_{\gamma=1}^{3}\sum_{I_k} \mathcal{M}_{ik,I_i I_k}^{\alpha}\tau_{k,I_k}^{\alpha\gamma}u_{n;k,I_k}^{\gamma}. \label{ags3} \end{equation} The AGS equation (\ref{ags3}) is a Fredholm-type integral equation. To find the resonance energy of the three-body system using these equations, we transform the integral equations into algebraic ones and then search for a complex energy at which the first eigenvalue of the kernel matrix becomes equal to one. 
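As an illustration of this last step, the following is a minimal numerical sketch. It assumes a user-supplied routine \texttt{kernel(E)} returning the discretized kernel matrix at a complex energy $E$; this interface, like the function names, is purely illustrative and not the actual code used for the calculations reported below.

\begin{verbatim}
import numpy as np

def leading_eig(kernel, E):
    # Eigenvalue of largest magnitude of the discretized kernel at energy E.
    w = np.linalg.eigvals(kernel(E))
    return w[np.argmax(np.abs(w))]

def find_pole(kernel, E0, tol=1e-8, h=1e-6, maxit=50):
    # Newton-type iteration on f(E) = lambda_1(E) - 1 with a numerical
    # derivative; E and the returned pole position are complex.
    E = complex(E0)
    for _ in range(maxit):
        f = leading_eig(kernel, E) - 1.0
        if abs(f) < tol:
            break
        df = (leading_eig(kernel, E + h) - leading_eig(kernel, E - h)) / (2 * h)
        E = E - f / df
    return E
\end{verbatim}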
Before we proceed to solve the AGS equations for both $(\bar{K}NN)_{s=0,1}$ systems, the operators involving two identical baryons must be antisymmetrized. The baryon spins do not enter explicitly in the three-body equations because the total spin $s$ remains unchanged in the process. In the $K^{-}d$ case, the spin component is symmetric, so all operators in the isospin basis must be antisymmetric. In the case of $K^{-}pp$ the spin component is antisymmetric; thus, all operators in the isospin basis must be symmetric. \subsection{The four-body $\bar{K}NNN$ equations} In the four-body $\bar{K}NNN$ system there are three identical nucleons; therefore, the four-body equations for the $\bar{K}NNN$ system reduce to three sets of integral equations. As shown in fig. \ref{kh}, the whole dynamics is described in terms of the Faddeev amplitudes, which connect the three channels characterized by the following partitions \begin{equation} \alpha=\{1,2,3\}=\{\bar{K}(NNN),N(\bar{K}NN),(\bar{K}N)(NN)\}. \label{chan} \end{equation} \begin{figure}[htb] \begin{center} \resizebox{0.6\textwidth}{!}{% \includegraphics{diagrams.eps}} \end{center} \caption{The four different rearrangement channels of the $\bar{K}NNN$ four-body system, including the K- and H-type diagrams. Antisymmetrization of the three $N$'s is to be performed within each channel.} \label{kh} \end{figure} We need all possible amplitudes connecting the initial state, consisting of the $3N$ bound state ($\mathrm{^{3}{He}}$) and a free kaon, with all three channels listed in (\ref{chan}) via particle or two-body quasi-particle exchange. The four-body Faddeev amplitudes obey a set of three coupled integral equations, whose structure is represented by the following matrix equation \begin{equation} \begin{pmatrix} {\mathcal{A}}^{11} \\ {\mathcal{A}}^{21} \\ {\mathcal{A}}^{31} \end{pmatrix}= \begin{pmatrix} 0 & \mathcal{R}^{12} & \mathcal{R}^{13} \\ \mathcal{R}^{21} & \mathcal{R}^{22} & \mathcal{R}^{23} \\ \mathcal{R}^{31} & \mathcal{R}^{32} & 0 \end{pmatrix} \begin{pmatrix} \zeta^1 & 0 & 0 \\ 0 & \zeta^2 & 0 \\ 0 & 0 & \zeta^3 \end{pmatrix} \begin{pmatrix} {\mathcal{A}}^{11} \\ {\mathcal{A}}^{21} \\ {\mathcal{A}}^{31} \end{pmatrix}. \label{rotation_matrix} \end{equation} Here, we take into account only the dominant s-wave part of the interaction in the two-body subsystems and thus in the three- and four-particle states. Therefore, in all expressions we drop the index $L=0$. The explicit analytical form of the transition amplitudes between the channel states, taking into account the spin and isospin degrees of freedom, is given by \begin{eqnarray} \mathcal{A}^{\alpha\beta,ss'}_{II',nn'}=\mathcal{R}^{\alpha\beta,ss'}_{II',nn'}+ \sum^{3}_{\gamma=1}\sum_{n''s''I''}\mathcal{R}^{\alpha\gamma,ss''}_{II'',nn''} \zeta^{\gamma}_{n''}\mathcal{A}^{\gamma\beta,s''s'}_{I''I',n''n'}, \nonumber \label{trans1} \end{eqnarray} where the operators $\mathcal{A}^{\alpha\beta,ss'}_{II',nn'}$ are the four-body Faddeev amplitudes, the $\zeta^{\gamma}_{n}$-functions are given by eq. (\ref{zet}), and the operators $\mathcal{R}^{\alpha\beta,ss'}_{II',nn'}$ are driving terms, which describe the effective particle-exchange potential realized by the particle exchanged between the quasi-particles in the channels $\alpha$ and $\beta$; they can be written as \begin{eqnarray} \mathcal{R}^{\alpha\beta,ss'}_{II',nn'}(p,p',E)&=&\frac{\Omega^{ss'}_{II'}}{2} \int^{+1}_{-1}d(\hat{p}\cdotp\hat{p}'){u}^{\alpha,s}_{n,I}(\vec{q},\epsilon_{\alpha}- \frac{p^{2}}{2\mathcal{M}_{\alpha}}) \nonumber \\ &\times& \tau(z)u^{\beta,s'}_{n',I'}(\vec{q'},\epsilon_{\beta}-\frac{p'^{2}}{2\mathcal{M}_{\beta}}). 
\label{trans2} \end{eqnarray} Here, the symbols $\Omega^{ss'}_{II'}$ are the spin and isospin Clebsch-Gordan coefficients, the functions ${u}^{\alpha,s}_{n,I}$ are the form factors generated by the separable representation of the sub-amplitudes appearing in the channels (\ref{chan}), and $z$ is given by $z=E-\frac{p^{2}}{2M_{\beta}}-\frac{p'^{2}}{2M_{\alpha}}-\frac{\vec{p}\cdot\vec{p}'}{m}$. The energy $\epsilon_{\alpha}$ is the subsystem energy in channel $\alpha$. The momenta $\vec{q}(\vec{p},\vec{p}')$ and $\vec{q}'(\vec{p},\vec{p}')$ are given in terms of $\vec{p}$ and $\vec{p'}$ by the relations \begin{equation} \begin{split} & \vec{q}=\vec{p}'+\frac{M_{\alpha}}{m}\vec{p}, \\ & \vec{q}'=\vec{p}+\frac{M_{\beta}}{m}\vec{p}', \end{split} \label{trans3} \end{equation} where $m$ is the mass of the exchanged particle or quasi-particle, and the reduced masses $\mathcal{M}_{\alpha}$ and $M_{\alpha}$ in the channel $\alpha$ of the [3+1] subsystem are defined by \begin{equation} \begin{split} & \mathcal{M}_{\alpha} = m^{\alpha}_{i}(m^{\alpha}_{j}+m^{\alpha}_{k}+m^{\alpha}_{l}) /(m^{\alpha}_{i}+m^{\alpha}_{j}+m^{\alpha}_{k}+m^{\alpha}_{l}), \\ & M_{\alpha} = m^{\alpha}_{j}(m^{\alpha}_{k}+m^{\alpha}_{l})/(m^{\alpha}_{j}+m^{\alpha}_{k}+m^{\alpha}_{l}), \end{split} \label{trans4} \end{equation} while in the case of the [2+2] subsystem they are given by \begin{equation} \begin{split} & \mathcal{M}_{\alpha}=(m^{\alpha}_{i}+m^{\alpha}_{j})(m^{\alpha}_{k}+m^{\alpha}_{l}) /(m^{\alpha}_{i}+m^{\alpha}_{j}+m^{\alpha}_{k}+m^{\alpha}_{l}), \\ & M_{\alpha} = m^{\alpha}_{i}m^{\alpha}_{j}/(m^{\alpha}_{i}+m^{\alpha}_{j}). \end{split} \label{trans44} \end{equation} The meaning of the driving terms $\mathcal{R}^{\alpha\beta,ss'}_{II',nn'}$ is explained schematically by the diagrammatic representation in fig. \ref{diag}. By cyclic permutation of the nucleons, one can obtain various relations between the different driving terms $\mathcal{R}^{\alpha\beta,ss'}_{II',nn'}$. For example, by applying a cyclic permutation within an antisymmetrized $NN$-state, one obtains for the transition $2\rightarrow{3}$ the relation \begin{equation} \mathcal{R}^{23}=\mathcal{R}_{1}^{23}+2\mathcal{R}_{2}^{23}, \end{equation} where the coefficient 2 in the term $\mathcal{R}_{2}^{23}$ comes from the identity of the nucleons. \begin{figure*} \begin{center} \resizebox{0.8\textwidth}{!}{% \includegraphics{z33.eps}} \end{center} \caption{Diagrammatic representation of the potentials $\mathcal{R}^{\alpha\beta}$ in the separable approximation. The blue dashed line corresponds to the $\bar{K}$ and the black solid lines correspond to the nucleons. The symbols $u_{\alpha}$ define the initial and final states of the system.} \label{diag} \end{figure*} Before we proceed to solve the four-body equations, we also need as input the equations describing two independent pairs of interacting particles, $(\bar{K}N)+(NN)$. The corresponding equations read in our case \begin{equation} \begin{split} & \mathcal{Y}^{sI,s'I'}_{\bar{K}N,NN}=\mathcal{W}^{sI,s'I'}_{\bar{K}N,NN}+\mathcal{W}^{sI,s'I'}_{\bar{K}N,NN} \tau^{s'I'}_{NN}\mathcal{Y}^{s'I',s'I'}_{NN,NN}, \\ & \mathcal{Y}^{s'I',s'I'}_{NN,NN}=\mathcal{W}^{s'I',sI}_{NN,\bar{K}N}\tau^{sI}_{\bar{K}N}\mathcal{Y}^{sI,s'I'}_{\bar{K}N,NN}. \end{split} \label{trans5} \end{equation} Here, the operators $\mathcal{Y}^{sI,s'I'}_{i,j}$ are the Faddeev amplitudes describing two independent pairs of interacting particles, and the operators $\mathcal{W}^{sI,s'I'}_{i,j}$ are the effective potentials. 
A graphical representation of the system (\ref{trans5}) is shown in fig. \ref{htype}. Analogously to the treatment in the previous subsection, the separable form of the amplitude can easily be found: \begin{equation} \mathcal{Y}_{i,j}^{sI,s'I'}(p,p',\epsilon)=\sum^{N_{r}}_{n=1}u_{n;i}^{sI}(p,\epsilon)\zeta_{n}(\epsilon)u_{n;j}^{s'I'}(p',\epsilon), \label{trans6} \end{equation} where the functions $u_{n;i}^{sI}$ are the eigenfunctions of the kernel of eq. (\ref{trans5}): \begin{equation} u_{n;i}^{sI}=\frac{1}{\lambda_n}\sum_{j=\bar{K}N,NN} \mathcal{W}^{sI,s'I'}_{i,j}\tau^{s'I'}_{j}u_{n;j}^{s'I'}. \label{trans7} \end{equation} The conversion of the four-body equations to a numerically manageable form is achieved by expanding the two- and three-body Faddeev amplitudes in eqs. (\ref{ags1}) and (\ref{trans5}) into separable series of finite rank $N_{r}$. To construct a separable representation for these subsystem amplitudes, one can use the energy-dependent pole expansion (EDPE)~\cite{sofia} or the Hilbert-Schmidt expansion~\cite{nadro}. The approach adopted in this work is the Hilbert-Schmidt expansion (HSE). The inputs for the driving terms of equation (\ref{trans2}) are the two-body t-matrices, embedded in the four-body Hilbert space, and the form factors defined in eqs. (\ref{ags3}) and (\ref{trans7}). Before we proceed to solve the AGS equations (\ref{trans1}), we should antisymmetrize the basic amplitudes with respect to the exchange of the nucleons, for which we follow mainly the work of~\cite{fix}. \begin{figure}[htb] \begin{center} \resizebox{0.7\textwidth}{!}{% \includegraphics{htype.eps}} \end{center} \caption{Diagrammatic representation of equation (\ref{trans5}) for the transition amplitudes $\mathcal{Y}^{sI,s'I'}_{i,j}$ of the $(\bar{K}N)-(NN)$ system. The symbols $g_{\bar{K}N}$ and $g_{NN}$ are the form factors of the $\bar{K}N$ and $NN$ interactions.} \label{htype} \end{figure} \section{Two-body interactions} \label{inp} All two-body interactions are taken in $s$-wave and in separable form. Thus, in the case of a separable two-body potential we have \begin{equation} V_{\alpha\beta}(p_{\alpha},p_{\beta})=\lambda_{\alpha\beta}g_{\alpha}(p_{\alpha})g_{\beta}(p_{\beta}). \end{equation} Here, $\alpha$ and $\beta$ enumerate the two-body channels and $p_{\alpha}$ is the c.m. momentum in the corresponding channel. The two-body t-matrices that serve as input for the three- and four-body problem are all taken in separable form for a given partial wave: \begin{equation} T_{\alpha\beta}(p_{\alpha},p_{\beta},E)=g_{\alpha}(p_{\alpha})\tau_{\alpha\beta}(E)g_{\beta}(p_{\beta}), \end{equation} where $E$ is the total energy, $\lambda_{\alpha\beta}$ are the coupling strength parameters of the interaction, and the form factors are denoted by $g_{\alpha}(p_{\alpha})$. The $\bar{K}N$ interaction, which is the most important one for the $\bar{K}NN$ and $\bar{K}NNN$ systems, is usually described either by purely phenomenological or by chirally motivated potentials. In our Faddeev calculations, we use two different effective interactions for the coupled-channel $\bar{K}N-\pi\Sigma$ system, having a one- and a two-pole structure of the $\Lambda$(1405) resonance, respectively. The potentials that we use here for the $\bar{K}N$ interaction are given in ref.~\cite{shev4}. 
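To illustrate the structure $T = g\,\tau\,g$, the following is a minimal numerical sketch for a single-channel, rank-one separable potential $V(p,p')=\lambda g(p)g(p')$, for which the Lippmann-Schwinger equation gives $\tau(E) = \left[1/\lambda - \langle g|G_{0}(E)|g\rangle\right]^{-1}$. The Yamaguchi-type form factor, the parameter values and the normalization of the momentum-space measure are illustrative choices and do not reproduce any of the fitted potentials of ref.~\cite{shev4}.

\begin{verbatim}
import numpy as np

def tau(E, lam, g, mu, pmax=200.0, npts=4000):
    # Energy-dependent part of a rank-1 separable t-matrix,
    # T(p, p'; E) = g(p) tau(E) g(p'), with
    # <g|G0(E)|g> = int_0^pmax dp p^2 g(p)^2 / (E - p^2/(2 mu)).
    p = np.linspace(1e-6, pmax, npts)
    dp = p[1] - p[0]
    I = np.sum(p**2 * g(p)**2 / (E - p**2 / (2.0 * mu))) * dp
    return 1.0 / (1.0 / lam - I)

# Yamaguchi-type form factor with a schematic range parameter; a bound
# or quasi-bound state appears as a pole of tau(E), i.e. where
# 1/lam = <g|G0(E)|g> at a (complex) energy E below threshold.
beta = 3.5
g = lambda p: 1.0 / (p**2 + beta**2)
print(tau(-0.5 + 0.0j, -2.0, g, mu=0.5))
\end{verbatim}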
The parameters of the coupled-channel $\bar{K}N-\pi\Sigma$ potential were fitted to reproduce all existing experimental data on the low-energy $\bar{K}N$ system; the fitting was performed using physical masses in the $\bar{K}N$ and $\pi\Sigma$ channels, with the Coulomb interaction included. The $s$-wave $\Sigma{N}$ interaction in the $I=1/2$ isospin state is coupled to the $\Lambda{N}$ channel; therefore, we used an optical potential for the $\Sigma{N}$ interaction in this isospin state and a real potential in the $I=3/2$ channel. The parameters chosen for the $\Sigma{N}$ interaction were those given in ref.~\cite{shev5}. In this calculation, we use the spin-independent version of the $\Sigma{N}$ interaction. For the singlet and triplet $NN$ interactions in our three- and four-body study, we choose a potential of PEST type~\cite{para}, which is a separable representation of the Paris potential. The coupling strength parameter was set to $\lambda=-1$ and the form factors are defined by \begin{equation} g^{NN}_{s,I}(p)=\frac{1}{2\sqrt{\pi}}\sum^{6}_{n=1}\frac{c^{NN}_{n,I}}{p^2+(\beta^{NN}_{n,I})^2}, \end{equation} where the constants $c^{NN}_{n,I}$ and $\beta^{NN}_{n,I}$ are listed in ref.~\cite{para}. The PEST potential is equivalent to the Paris potential for energies up to $E_{lab}\sim50$ MeV. It reproduces the deuteron binding energy $E_{B.E}=2.2249$ MeV, as well as the singlet and triplet $NN$ scattering lengths, $a(^{1}{S}_{0})=17.534$ fm and $a(^{3}{S}_{1})=-5.422$ fm, respectively. The $\mathrm{^{3}{He}}$ binding energy, calculated with the PEST potential, is $9.7$ MeV, while the experimental value is $8.54$ MeV. \section{Results and discussions} \label{result} Because $[\bar{K}NN]_{I=1/2,J^{\pi}=0^{-}}$ is the most important subsystem of the four-body $\bar{K}NNN$ system, we demonstrate in fig.~\ref{conver} how well a finite sum (\ref{ags}) may represent the exact amplitude. Thus, we calculated the ratio of the Schmidt norms, \begin{equation} \varDelta=\frac{\|\vartheta_{N_{r}}\|}{\|\vartheta\|}, \label{ratio} \end{equation} of the operators \begin{equation} \begin{split} & \vartheta=\mathcal{M}_{(\bar{K}N)_{I=0}N-(\bar{K}N)_{I=0}N},\\ & \vartheta_{N_{r}}=\mathcal{M}_{(\bar{K}N)_{I=0}N-(\bar{K}N)_{I=0}N}-\mathcal{M}^{N_{r}}_{(\bar{K}N)_{I=0}N-(\bar{K}N)_{I=0}N}, \end{split} \label{pot} \end{equation} where $\mathcal{M}^{N_{r}}_{(\bar{K}N)_{I=0}N-(\bar{K}N)_{I=0}N}$ is given by the sum (\ref{ags}) containing only the first $N_{r}$ terms. One can see that the rate of convergence is not very fast, but it appears to be sufficient for practical calculations. \begin{figure}[H] \begin{center} \centering \includegraphics[scale=0.4]{kpp.eps} \end{center} \caption{(Color online) The ratio between the Schmidt norms of the kernels $\vartheta$ and $\vartheta_{N_{r}}$ as defined by eqs. (\ref{ratio}) and (\ref{pot}).} \label{conver} \end{figure} As a first three- and four-body calculation, we computed the binding energies and widths of the $K^{-}pp$ and $K^{-}ppn$ quasi-bound states using a one-channel complex $\bar{K}N$ potential~\cite{shev2}. During these calculations, we considered the $\bar{K}N$ potentials with the parameters $\lambda^{I,Complex}_{\bar{K}N, \bar{K}N}$ and $\beta_{I}$, which reproduce $\mathrm{M}_{\Lambda}=$ 1405.1 MeV, $\Gamma_{\Lambda}=$ 50 MeV and the $K^{-}p$ scattering length, for which we used as a guideline the SIDDHARTA measured value: $a^{\mathrm{SIDD}}_{K^{-}p}=(-0.65+i0.81)$ fm~\cite{bazi}.
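Before turning to the results, we note that the convergence measure (\ref{ratio}) has a simple matrix analogue once the kernel is discretized on a quadrature grid: the Schmidt norm becomes the Frobenius norm, and the rank-$N_{r}$ truncation can be read off from a spectral decomposition of the kernel matrix. A minimal sketch (in Python/NumPy, our assumption; the smooth symmetric kernel below is an artificial stand-in for the operator $\vartheta$, and quadrature weights are omitted for brevity):
\begin{verbatim}
# Sketch of the convergence ratio (ratio) for a discretized kernel:
# Delta = ||K - K_Nr|| / ||K||, where K_Nr keeps the first N_r terms
# of the Schmidt expansion. For the symmetric toy kernel below, that
# expansion can be computed via the SVD, and the Schmidt norm is the
# Frobenius norm. K is an artificial stand-in for theta of eq. (pot).
import numpy as np

x = np.linspace(0.0, 1.0, 200)
K = np.exp(-5.0 * np.abs(x[:, None] - x[None, :]))  # toy kernel

U, s, Vt = np.linalg.svd(K)
normK = np.linalg.norm(K)                # Schmidt (Frobenius) norm
for Nr in (2, 4, 6):
    K_Nr = (U[:, :Nr] * s[:Nr]) @ Vt[:Nr, :]
    print(Nr, np.linalg.norm(K - K_Nr) / normK)
\end{verbatim}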
In table \ref{sn1} we present our results for the binding energies of $K^{-}pp$ and $K^{-}ppn$ obtained with these data and $\beta_{I}=3.5$ $\mathrm{fm}^{-1}$, i.e., from a calculation for the one-channel $\bar{K}NN$ system using a one-channel complex $\bar{K}N$ potential. For these data, we found quasi-bound states of $K^{-}pp$ and $K^{-}ppn$ below the respective thresholds. \begin{table}[H] \caption{The binding energies and widths of the quasi-bound state of the $K^{-}pp$ and $K^{-}ppn$ systems for the one-channel complex potential.} \centering \begin{tabular}{ccc} \hline\noalign{\smallskip} $a_{K^{-}p}$ (fm) & $E_{K^{-}pp}$ (MeV) & $E_{K^{-}ppn}$ (MeV) \\ \noalign{\smallskip}\hline\noalign{\smallskip} \, -0.65+i0.81~\cite{bazi} \, & \, -49.4-i43.5 \, & \, -60.2-i42.2 \, \\ \noalign{\smallskip}\hline \end{tabular} \label{sn1} \end{table} In the following we present the results for the binding energies of $\bar{K}N$ and $\bar{K}NN$ in table \ref{sn2} for the one- and two-pole versions of the $\bar{K}N-\pi\Sigma$ interaction. The binding energies for $\bar{K}N$ in table \ref{sn2} differ slightly from those given in the original ref.~\cite{shev4}. The reason is that the present calculations were performed with averaged masses and without the Coulomb interaction, while the fit to the experimental data used physical masses and included the Coulomb interaction. First, we solved eq. (\ref{ags3}) neglecting the $\Sigma{N}$ and $\pi{N}$ interactions, so that only the $\bar{K}N$ and $NN$ t-matrices enter the equations. Accordingly, we constructed the exact optical $\bar{K}N(-\pi\Sigma)$ potential, which is an approximation for the full coupled-channel interaction. The binding energies are calculated with respect to the $\bar{K}NN$ threshold. In the third column of table \ref{sn2} the binding energy and width of the full coupled-channel calculation of the $\bar{K}NN-\pi\Sigma{N}$ system, taking the $\Sigma{N}$ interaction into account, are presented. One can see that the one-channel AGS calculation with the exact optical $\bar{K}N(-\pi\Sigma)$ potential gives a good approximation to the full coupled-channel calculations. This result was expected because the exact optical potential provides exactly the same elastic $\bar{K}N-\bar{K}N$ amplitude as the coupled-channel model of interaction, see ref.~\cite{shev5}. \begin{table}[H] \caption{The sensitivity of the binding energies and widths of the quasi-bound state of the $K^{-}pp$ systems to the $\bar{K}N$, $\Sigma{N}$ interactions. $E^{(0)}$ stands for the calculation without the $\Sigma{N}$ interaction, while $E^{(1)}$ includes it. The real part of the pole $E_{K^{-}pp}$ is measured from the $\bar{K}NN$ threshold.} \centering \begin{tabular}{cccc} \hline\noalign{\smallskip} & $E_{\bar{K}N}$ (MeV) & $E^{(0)}_{\bar{K}NN}$ (MeV) & $E^{(1)}_{\bar{K}NN}$ (MeV) \\ \noalign{\smallskip}\hline\noalign{\smallskip} $V^{SIDD}_{One-pole}$ & 1428.1-i46.6 & -48.7-i34.3 & -52.8-i31.5 \\ $V^{SIDD}_{Two-pole}$ & 1418.1-i56.9 & -45.4-i24.4 & -47.1-i25.0 \\ & 1382.0-i104.2 & & \\ \noalign{\smallskip}\hline \end{tabular} \label{sn2} \end{table} \begin{table*}[t] \caption{The sensitivity of the binding energies and widths of the quasi-bound state of the $K^{-}ppn$ system to the number of terms $N_{r}$ in eqs. (\ref{ags}) and (\ref{trans6}). $E^{SIDD,One-pole}_{K^{-}ppn}$ and $E^{SIDD,Two-pole}_{K^{-}ppn}$ correspond to the one- and two-pole versions of the $\bar{K}N$ interaction, respectively.
The real part of the pole $E_{K^{-}ppn}$ (in MeV) is measured from the $\bar{K}NNN$ threshold.} \centering \begin{tabular}{cccc} \hline\noalign{\smallskip} & \, $N_{r}=2$ \, & \, $N_{r}=4$ \, & \, $N_{r}=6$ \, \\ \noalign{\smallskip}\hline\noalign{\smallskip} $E^{SIDD,One-pole}_{K^{-}ppn}$ \, & \, -69.6-i10.5 \, & \, -69.0-i11.1 \, & \, -68.8-i11.0 \, \\ \noalign{\smallskip} \noalign{\smallskip} $E^{SIDD,Two-pole}_{K^{-}ppn}$ \, & \, -56.7-i8.6 \, & \, -56.2-i8.8 \, & \, -55.9-i8.8 \, \\ \noalign{\smallskip}\hline \end{tabular} \label{sn3} \end{table*} In table \ref{sn3} we present our results for the $K^{-}ppn$ quasi-bound state obtained by keeping a finite number of terms $N_{r}$ in the Hilbert-Schmidt expansion of the amplitudes (\ref{ags2}) and (\ref{trans6}). In this table, the rate of convergence of the $K^{-}ppn$ binding energy is investigated, and one can see that the choice $N_{r}=4$ already provides satisfactory accuracy. In the four-body calculation we have neglected any $\Sigma{N}-\Lambda{N}$ and $\pi{N}$ interactions. The inclusion of these interactions would increase the number of channels in the four-body equations, which would lead to a much more complex formalism. As mentioned in the previous paragraph, the one-channel AGS calculation with the exact optical $\bar{K}N$ potential, giving exactly the same $\bar{K}N-\bar{K}N$ amplitude as the corresponding coupled-channel potential, turns out to be a good approximation. Therefore, one can safely assume that the $\Sigma{N}-\Lambda{N}$ and $\pi{N}$ interactions in the $\pi\Sigma{NN}$ channel cannot change the binding energy of the $\bar{K}NNN-\pi\Sigma{NN}$ system by more than a few MeV. Using the exact optical $\bar{K}N$ potential, our two-channel four-body calculation with the coupled-channel $\bar{K}N-\pi\Sigma$ potential is equivalent to the one-channel four-body calculation. The binding energies and widths of the quasi-bound states of the $K^{-}pp$, $K^{-}ppn$ and $K^{-}ppp$ systems have been calculated and are presented in table \ref{sn4}. We calculated the $K^{-}ppn$ and $K^{-}ppp$ quasi-bound state positions by keeping four terms in the Hilbert-Schmidt expansion of the amplitudes (\ref{ags2}) and (\ref{trans6}). Very recently, several few-body calculations have been performed for $K^{-}ppn$ using the variational method~\cite{gal,roman} and the Faddeev approach~\cite{maeda}. The investigation of the $\bar{K}NNN$ system in ref.~\cite{gal} uses an effective $\bar{K}N$ interaction derived from the chiral low-energy theorem; a quasi-bound state was found with a binding energy of 30 MeV below the $\bar{K}NNN$ threshold and a width of $30$ MeV. A similar calculation was performed with the Faddeev equations by Maeda {\it et al.} using a one-channel real potential~\cite{maeda}; the obtained binding energy for $K^{-}ppn$ was about $69$ MeV below the threshold energy. The obtained binding energies of the $\bar{K}NNN$ quasi-bound state in ref.~\cite{roman} for the A-Y and HW potentials are $\sim$ 65 and $\sim$ 18 MeV and the corresponding widths are $\sim$ 74-80 and $\sim$ 27-31 MeV, respectively. A comparison of our results for $K^{-}ppn$, obtained with the PEST $NN$ interaction and the coupled-channel $\bar{K}N-\pi\Sigma$ interaction, with the Faddeev calculations of ref.~\cite{maeda}, performed with a rank-two $NN$ interaction and a one-channel real $\bar{K}N$ interaction, shows that they are in the same range. However, this is in contrast to the chiral low-energy potential, which is constructed to generate a bound state with a binding energy $\sim$ 30 MeV.
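The quasi-bound state positions quoted in this section are poles $E=-B-i\Gamma/2$ of the amplitude in the complex energy plane; a standard way to locate such a pole is to search for a zero of the Fredholm determinant of the discretized kernel. The following schematic sketch (in Python, our assumption) illustrates the search; the determinant function below is an artificial stand-in with a simple zero placed at the quoted one-pole $K^{-}ppn$ value, not the actual AGS determinant:
\begin{verbatim}
# Schematic pole search: a quasi-bound state is a zero of
# D(z) = det(1 - K(z)) at z = -B - i*Gamma/2 (MeV, from threshold).
# D below is an artificial stand-in for the discretized determinant.
import numpy as np

def D(z, pole=-68.8 - 11.0j):
    return (z - pole) * (1.0 + 0.002j + 0.001 * z)

def find_zero(f, z0, steps=50, h=1e-8):
    """Newton iteration with a finite-difference complex derivative."""
    z = z0
    for _ in range(steps):
        df = (f(z + h) - f(z)) / h
        z = z - f(z) / df
        if abs(f(z)) < 1e-10:
            break
    return z

print(find_zero(D, z0=-50.0 - 20.0j))   # converges to ~ -68.8 - 11.0j
\end{verbatim}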
\begin{table}[H] \caption{Pole positions (in MeV) of the quasi-bound states in the $K^{-}pp$, $K^{-}ppn$ and $K^{-}ppp$ systems. The Faddeev AGS calculations were performed with the phenomenological potentials from ref.~\cite{shev4}. The potentials $V^{SIDD}_{One-pole}$ and $V^{SIDD}_{Two-pole}$ are $\bar{K}N-\pi\Sigma$ potentials, which produce the one- and two-pole structure of the $\Lambda$(1405) resonance, respectively. The binding energies (real part of the pole) are measured from the thresholds.} \centering \begin{tabular}{cccc} \hline\noalign{\smallskip} & $E_{K^{-}pp}$ & $E_{K^{-}ppn}$ & $E_{K^{-}ppp}$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} $V^{SIDD}_{One-pole}$ & -48.7-i34.3 & -68.8-i11.0 & -99.6-i10.5 \\ \noalign{\smallskip} $V^{SIDD}_{Two-pole}$ & -45.4-i24.4 & -55.9-i8.8 & -87.8-i3.5 \\ \noalign{\smallskip}\hline \end{tabular} \label{sn4} \end{table} \section{Conclusion} \label{conclu} Starting from the Faddeev AGS equations, we used different versions of the $\bar{K}N-\pi\Sigma$ potentials, which produce the one- and two-pole structures of the $\Lambda$(1405) resonance, together with separable expressions for the [3+1] and [2+2] subsystems. We employed the HSE method to reduce the problem to a set of single-variable integral equations. We solved the three- and four-body Faddeev equations, searching for $K^{-}pp$, $K^{-}ppn$ and $K^{-}ppp$ quasi-bound states. We studied the dependence of the pole energy on different models of the $\bar{K}N-\pi\Sigma$ interaction. It was shown that a one-channel complex $\bar{K}N$ potential gives much broader three- and four-body quasi-bound states than the exact optical potential. The calculations yielded binding energies $B_{K^{-}pp}\sim$ 45-55, $B_{K^{-}ppn}\sim$ 55-70 and $B_{K^{-}ppp}\sim$ 90-100 MeV for $K^{-}pp$, $K^{-}ppn$ and $K^{-}ppp$, respectively. The obtained widths for these systems are $\Gamma_{K^{-}pp}\sim$ 50-75, $\Gamma_{K^{-}ppn}\sim$ 16-20 and $\Gamma_{K^{-}ppp}\sim$ 7-20 MeV. However, a similar calculation should be performed for the standard energy-dependent $\bar{K}N$ input potential, too. The quasi-bound states resulting from energy-dependent potentials happen to be shallower; this is due to the energy dependence of the interaction. The energy-dependent potential provides a weaker $\bar{K}N$ attraction for lower energies than the energy-independent potential under consideration in this work. A definitive study of the $K^{-}pp$ quasi-bound state could be performed through a fully exclusive formation reaction, such as the in-flight $\mathrm{^{3}He}(K^{-},N)$ reaction, which was performed at J-PARC~\cite{e15}. As a next step, we will develop the four-body Faddeev AGS equations to make a practical calculation of the cross section of the kaon-induced strange-dibaryon production reaction. In the present study, we have calculated the $K^{-}ppn$ and $K^{-}ppp$ quasi-bound state positions using the HSE method to find the separable expressions for the [3+1] and [2+2] subsystems. There is another separable expansion method for the [3+1] and [2+2] subsystems, the energy-dependent pole expansion (EDPE), in which the form factors are energy dependent~\cite{sofia}. To study which of these methods (HSE or EDPE) has a better convergence rate, one can perform a similar calculation using the EDPE method. The authors thank A. Fix for helpful comments and discussions. One of the authors (S. Marri) is thankful to Prof. T. Yamazaki for his fruitful discussions.
The authors gratefully acknowledge the Sheikh Bahaei National High Performance Computing Center (SBNHPCC) for providing computing facilities and time. SBNHPCC is supported by the scientific and technological department of the presidential office and Isfahan University of Technology (IUT). \bigskip
\section{Introduction}\label{S:intro} A curve on a surface $ S $ is said to be \tdfn{regular} if it has a continuous and nonvanishing derivative. Consider two such curves whose curvatures take values in some given interval, starting in the same direction (prescribed by a unit vector tangent to $ S $) and ending in another prescribed direction. It is a natural problem to determine whether one curve can be deformed into the other while keeping end-directions fixed and respecting the curvature bounds. From another viewpoint, one is asking for a characterization of the connected components of the space of all such curves. More ambitiously, what is its homotopy or homeomorphism type? The answer can be unexpectedly interesting, and it is closely linked to the geometry of $ S $. See the discussion below on some related results. In this article this question is investigated when $ S $ is hyperbolic, that is, a (possibly nonorientable) smooth surface endowed with a complete Riemannian metric of constant negative curvature, say, $ -1$. It is well known that any such surface can be expressed as the quotient of the hyperbolic plane $ \Hh^2 $ by a discrete group of isometries. Thus any curve on $ S $ can be lifted to a curve in $ \Hh^2 $ having the same curvature (at least in absolute value, if $ S $ is nonorientable). As will be more carefully explained later, this implies that one can obtain a solution of the problem about spaces of curves on $ S $ if one knows how to solve the corresponding problem in the hyperbolic plane for all pairs of directions. Let $ u,\,v \in UT\Hh^2 $ (the unit tangent bundle of $ \Hh^2 $) be two such directions. Let $ \sr C_{\ka_1 }^{\ka_2 }(u,v) $ be the set of all smooth regular curves on $ \Hh^2 $ whose initial (resp.~terminal) unit tangent vectors equal $ u $ (resp.~$ v $) and whose curvatures take values inside $ (\ka_1,\ka_2) $, furnished with the $ C^\infty $-topology. There are canonical candidates for the connected components of this space, defined as follows. Let \begin{equation*} \pr\colon \wt{UT\Hh^2} \to UT\Hh^2 \end{equation*} denote the universal cover. Fixing a lift $ \te{u} $ of $ u $, one obtains a map $ \sr C_{\ka_1}^{\ka_2}(u,v) \to \pr^{-1}(v) $ by looking at the endpoint of the lift to the universal cover, starting at $ \te{u} $, of curves in $ \sr C_{\ka_1 }^{\ka_2 }(u,v) $. Since $ \pr^{-1}(v) $ is discrete, this yields a decomposition of $ \sr C_{\ka_1 }^{\ka_2 }(u,v) $ into closed-open subspaces. More concretely, let $ \ga \colon [0,1] \to \C $ be a regular plane curve. An \tdfn{argument} $ \theta\colon [0,1] \to \R $ for $ \ga $ is a continuous function such that $ \dot\ga $ always points in the direction of $ e^{i\theta} $; note that there are countably many such functions, which differ by multiples of $ 2\pi $. The \tdfn{total turning} of $ \ga $ is defined as $ \theta(1) - \theta(0)$. Because $ \Hh^2 $ is diffeomorphic to (an open subset of) $ \C $, any regular curve in the former also admits arguments and a total turning. However, these have no geometric meaning since they depend on the choice of diffeomorphism (and subset).
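As a concrete illustration of these notions, the following is a minimal numerical sketch (in Python/NumPy, which is an assumption of ours, not anything used in this article) of how an argument, and hence the total turning, of a sampled regular plane curve can be computed by unwrapping the angles of its discrete tangents:
\begin{verbatim}
# Sketch: total turning of a sampled regular curve gamma: [0,1] -> C.
# np.unwrap chooses a continuous argument theta for the discrete
# tangents, so the result is independent of the branch of arg.
import numpy as np

def total_turning(points):
    """points: complex samples of a regular curve."""
    tangents = np.diff(points)               # secants approximate gamma'
    theta = np.unwrap(np.angle(tangents))    # a continuous argument
    return theta[-1] - theta[0]

t = np.linspace(0.0, 1.0, 1000)
circle_twice = np.exp(2j * np.pi * (2 * t))       # circle traversed twice
print(total_turning(circle_twice) / (2 * np.pi))  # ~ 2.0
\end{verbatim}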
In any case, once such a choice has been made, it gives rise to the decomposition \begin{equation}\label{E:decomp} \sr C_{\ka_1}^{\ka_2}(u,v) = \Du_{\tau} \sr C_{\ka_1}^{\ka_2}(u,v;\tau), \end{equation} where $ \sr C_{\ka_1}^{\ka_2}(u,v; \tau) $ consists of those curves in $ \sr C_{\ka_1 }^{\ka_2 }(u,v) $ which have total turning $ \tau $, and $ \tau $ runs over all \tdfn{valid} total turnings, viz., those for which $ v $ is parallel to $ e^{i\tau}u $ (regarded as vectors in $ \C $). The closed-open subspaces appearing on the right side of \eqref{E:decomp}, which will be referred to as the \tdfn{canonical subspaces} of $ \sr C_{\ka_1}^{\ka_2}(u,v) $, are independent of the diffeomorphism: they are in bijective correspondence with the fiber $ \pr^{-1}(v) $. If $ \ka_1=-\infty $ and $ \ka_2=+\infty $, so that no restriction is imposed on the curvature, then they are in fact precisely the components of $ \sr C_{\ka_1 }^{\ka_2 }(u,v) $, and each of them is contractible. However, for general curvature bounds they may fail to be connected, contractible, or even nonempty. The possibilities depend above all upon the relation of $ (\ka_1,\ka_2) $ to the interval $ [-1,1] $. The main results of the paper are \tref{T:disjoint}, \tref{T:contained} and \tref{T:compact}. The former two together assert the following. \begin{uthm}\label{T:main} Let $ u,\,v \in UT\Hh^2 $ and $ \ka_1 < \ka_2 $. \begin{enumerate} \item [(a)] If $ (\ka_1,\ka_2) \subs [-1,1] $, then at most one of the canonical subspaces of $ \sr C_{\ka_1}^{\ka_2}(u,v) $ is nonempty. This subspace is always contractible. Thus $ \sr C_{\ka_1 }^{\ka_2 }(u,v) $ itself is either empty or contractible. \item [(b)] If $ (\ka_1,\ka_2) $ is disjoint from $ [-1,1] $, then infinitely many of the canonical subspaces are empty, and infinitely many are nonempty. The latter are all contractible. \end{enumerate} \end{uthm} If $ (\ka_1,\ka_2) $ contains $ [-1,1] $, then none of the canonical subspaces is empty. We conjecture that most are contractible, but finitely many of them may have the homotopy type of an $ n $-sphere, for some $ n \in \N $ depending upon the subspace. This conjecture is partly motivated by our knowledge of the homotopy type of the corresponding spaces of curves in the Euclidean plane, as determined in \cite{SalZueh2} and \cite{SalZueh1}. Finally, if $ (\ka_1,\ka_2) $ overlaps $ [-1,1] $, then infinitely many of the canonical subspaces are empty, and infinitely many are nonempty. Nevertheless, as in the previous case, simple examples (not discussed in this article) show that these subspaces may not be connected. We hope to determine the homotopy type in these two last cases in a future paper. To state the third main result \tref{T:compact} referred to above, let $ \sr CS_{\ka_1}^{\ka_2}(u,v) $ denote the space of curves on $ S $ starting (resp.~ending) in the direction of $ u $ (resp.~$ v $) $ \in UTS $ whose curvatures take values in $ (\ka_1,\ka_2) $. \begin{uthm} Let $S$ be any compact connected surface (not necessarily hyperbolic nor orientable). Then $ \sr CS_{\kappa_1}^{\kappa_2}(u,v)\neq \emptyset $ for any choice of $\ka_1<\ka_2$ and $u,\,v\in UTS$. \end{uthm} \subsection*{Related results}A map $ f \colon M^m \to N^n $ is said to be $ k $-th order \tit{free} if the $ k $-th order osculating space of $ f $ at any $ p \in M $, which is generated by all (covariant) partial derivatives of $ f $ at $ p $ of order $ \leq k $, has the maximum dimension, $ d(k,m):= {m+k \choose k} - 1 $, for all $ p \in M $. 
Notice that a map is first-order free if and only if it is an immersion. Although this definition of ``freeness'' uses covariant derivatives for simplicity, it is easy to avoid Riemannian metrics by using the language of jets. On the other hand, this formulation suggests the study of spaces of maps $ f\colon M \to N $ satisfying more general differential inequalities, or partial differential relations. Various versions of Gromov's h-principle (see \cite{Gromov} and \cite{EliMis}) are known which permit one to reduce the questions of existence, density and approximation of holonomic (i.e., true) solutions to a partial differential relation to the corresponding questions about formal (i.e., virtual) solutions; the latter are usually settled by invoking simple facts from homotopy theory.\footnote{The discussion here is not meant to present an exhaustive compilation of the related literature. We apologize to any authors whose work has not been cited.} It is known that if $ n \geq d(k,m) + 1 $, or if $ n = d(k,m) $ but $ M $ is an open manifold, then the h-principle holds for $ k $-th order free maps $ M^m \to N^n $ (see \cite{Gromov}, p.~9). In contrast, if $ n $ equals the critical dimension $ d(k,m) $ but $ M $ is not open, then practically nothing is known regarding such maps. As an example, it is an open problem to decide whether a second-order free map $ \Ss^1 \times \Ss^1 \to \R^5 $ exists, cf.~\cite{EliMis}. Indeed, it can be quite hard to disprove the validity of the h-principle even for the simplest differential relations. In \cite{SalZueh, SalZueh2, SalZueh1} and the current article we study spaces of curves whose curvature is constrained to a given interval $ (\ka_1,\ka_2) $ on an elliptic, flat or hyperbolic surface. Such curves are holonomic solutions of a second-order differential relation on maps from a one-dimensional manifold (an interval, or a circle) to a two-dimensional manifold (the surface), so that we are in the critical dimension. In all of these cases we obtain some results on the homeomorphism type of the space of such curves, and in particular show that they do not abide by the h-principle. Other articles on the same topic, for curves in the Euclidean plane, include \cite{Ayala, AyaRub, Dubins1}. If $ (\ka_1,\ka_2) = (-\infty,+\infty) $, then no condition is imposed on the curve except that it should be an immersion (i.e., regular). This problem was solved by H.~Whitney \cite{Whitney} for closed curves in the plane and by S.~Smale for closed curves on any manifold \cite{Smale}. The case of immersions of higher-dimensional manifolds has also been elucidated, among others by S.~Smale, R.~Lashof and M.~Hirsch in \cite{Hirsch, Hirsch1, Smale3, Smale1, Smale2}. For results concerning the topology of spaces of immersions into space forms of nonpositive curvature having constrained principal curvatures (second fundamental form), see \cite{Zuehlke2}. If $ (\ka_1,\ka_2) = (0,+\infty) $, then we are asking that the curvature of the curve never vanish. This is equivalent to the requirement that it be second-order free. Such curves are also called \tit{nondegenerate} or \tit{locally convex} in the literature. Works on this problem include \cite{KheSha, KheSha1, Little, Saldanha3}; see also \cite{Arnold}. More generally, $ n $-th order free curves on $ n $-dimensional manifolds have been studied by several authors over the years; we mention \cite{AlvSal, Anisov, Feldman, Feldman1, Fenchel, Goulart, Little1, MosSad, SalSha, ShaSha, Wintgen}.
Finally, in another direction, there is a very extensive literature on applications of paths of constrained curvature in engineering and control theory. Perhaps the main problem in this regard is the determination of length-minimizing or other sorts of optimal paths within this and related classes. We refer the reader to \cite{Ayala2, Dubins, Mittenhuber, Mittenhuber1, Monroy, ReeShe} for further information and references. \subsection*{Outline of the sections} After briefly introducing some notation and definitions, \S \ref{S:basic} begins with a discussion of curves of constant curvature $ \ka $ in the hyperbolic plane: circles ($ \abs{\ka} > 1 $), horocycles ($ \abs{\ka} = 1 $) and hypercircles ($ \abs{\ka} < 1 $). Then a transformation is defined which takes a curve and shifts it by a fixed distance along the direction prescribed by its normal unit vectors. Its effect on the regularity and curvature of the original curve is investigated, and applied to reduce the dependence of the topology of $ \sr C_{\ka_1 }^{\ka_2 }(u,v) $ to four real parameters, instead of the eight needed to specify $ \ka_1,\,\ka_2,\,u $ and $ v $. In \S\ref{S:voidness} the voidness of the canonical subspaces is discussed. It is proven that if $ (\ka_1,\ka_2) $ is contained in $ [-1,1] $, then the image of any curve in $ \sr C_{\ka_1 }^{\ka_2 }(u,v) $ is the graph of a function when seen in an appropriate conformal model of the hyperbolic plane, which we call the Mercator model.\footnote{It is very likely that the Mercator model has already appeared under other (possibly standard) names in the literature; however, we do not know any references. Hypercircles are also known as ``hypercycles'' or ``equidistant curves''.} In particular, at most one of its canonical subspaces is nonempty. If $ (\ka_1,\ka_2) $ contains $ [-1,1] $, then none of the canonical subspaces is empty. In the two remaining cases, there is a critical value $ \tau_0 $ for the total turning such that $ \sr C_{\ka_1 }^{\ka_2 }(u,v;\tau) $ is nonempty for all $ \tau \geq \tau_0 $ and empty for all $ \tau < \tau_0 $ (or the reverse, depending on whether $ (\ka_1,\ka_2) $ contains points to the right or to the left of $ [-1,1] $). Section \ref{S:frame} explains how a curve of constrained curvature may also be regarded as a curve in the group of orientation-preserving isometries of $ \Hh^2 $ satisfying certain conditions. This perspective is sometimes useful. In \S\ref{S:disjoint} it is shown that the nonempty canonical subspaces of $ \sr C_{\ka_1 }^{\ka_2 }(u,v) $ are all contractible when $ (\ka_1,\ka_2) $ is disjoint from $ [-1,1] $. The idea is to parametrize all curves in such a subspace by the argument of its unit tangent vector when viewed as a curve in the half-plane model, and to take (Euclidean) convex combinations. The proof intertwines various Euclidean and hyperbolic concepts, and seems to be highly dependent on this particular model. In the Mercator model $ M $, any curve in $ \sr C_{\ka_1 }^{\ka_2 }(u,v) $ may be parametrized as $ x \mapsto (x,y(x)) $ when $ (\ka_1,\ka_2) $ is contained in $ [-1,1] $, so the bounds on its curvature can be translated into two differential inequalities involving $ \dot y $ and $ \ddot y $, but not $ y $ itself, because vertical translations are isometries of $ M $. This allows one to produce a contraction of the space by working with the associated family of functions $ \dot y $, as carried out in \S\ref{S:contained}.
In \S\ref{S:general} we define spaces of curves with constrained curvature on a general surface $ S $, not necessarily hyperbolic, complete nor orientable, and explain how a Riemannian covering of $ S $ induces homeomorphisms between spaces of curves on $ S $ and on the covering space. We also show that if $ S $ is compact then any such space is nonempty; this is also true in the Euclidean plane, but not in the hyperbolic plane, as mentioned previously. The section ends with a brief discussion of spaces of closed curves without basepoint conditions. Several useful constructions, notably one which is essential to the proof of the main result of \S\ref{S:contained}, may create discontinuities of the curvature, and thus lead out of the class of spaces $ \sr CS_{\ka_1 }^{\ka_2 }(u,v) $ defined in \S\ref{S:general}. To circumvent this, the paraphernalia of $ L^2 $ functions is used in \S\ref{S:discontinuous} to define another family of spaces, denoted $ \sr LS_{\ka_1}^{\ka_2}(u,v) $, which are Hilbert manifolds. The curves in these spaces are regular but their curvatures are only defined almost everywhere. The main result of \S\ref{S:discontinuous} states that the natural inclusion $ \sr CS_{\ka_1 }^{\ka_2 }(u,v) \inc \sr LS_{\ka_1 }^{\ka_2 }(u,v) $ is a homotopy equivalence with dense image. Sections \ref{S:general} and \ref{S:discontinuous} are independent of the other ones. A few exercises are included in the article. These are never used in the main text, and their solutions consist either of straightforward computations or routine extensions of arguments presented elsewhere. It is assumed that the reader is familiar with the geometry of the hyperbolic plane as discussed, for instance, in chapter 2 of \cite{Thurston}, the expository article \cite{Cannon}, chapter 7 of \cite{Beardon} or chapters 3--5 of \cite{Ratcliffe}. \section{Basic definitions and results}\label{S:basic} When speaking of the hyperbolic plane with no particular model in mind, we denote it by $ \Hh^2 $. The underlying sets of the (Poincar\'e) disk, half-plane and hyperboloid models are denoted by: \begin{alignat*}{9} D & =\set{z \in \C}{\abs{z}<1}; \\ H & =\set{z \in \C}{\Im(z)>0}; \\ L & =\set{(x_0,x_1,x_2) \in \E^{2,1}} {-x_0^2+x_1^2+x_2^2=-1\text{\ and\ }x_0>0}. \end{alignat*} The circle at infinity is denoted by $ \Ss^1_\infty $ or $ \bd \Hh^2 $, the norm of a vector $ v $ tangent to $ \Hh^2 $ by $ \abs{v} $, and the Riemannian metric by $ \gen{\ ,\ } $. When working with $ D $ or $ H $, both $ \Hh^2 $ and its tangent planes are regarded as subsets of $ \C $. In the disk and half-plane models, we will select the orientation which is induced on the respective underlying sets by the standard orientation of $ \C $. In the hyperboloid model, a basis $ (u,v) $ of a tangent plane is declared positive if $ (u,v,e_0) $ is positively oriented in $ \R^{3} $; equivalently, the Lorentzian vector product $ u\ten v $ points to the exterior region bounded by $ L $ (i.e., the one containing the light-cone). Given a regular curve $ \ga\colon [0,1]\to \Hh^2 $, its \tdfn{unit tangent} is the map \begin{equation*} \ta=\ta_\ga\colon[0,1]\to UT\Hh^2,\quad \ta := \frac{\dot \ga}{\abs{\dot \ga}}. \end{equation*} Let $ J \colon T\Hh^2 \to T\Hh^2 $ denote the bundle map (``multiplication by $ i$'') which associates to $ v \neq 0 $ the unique vector $ Jv $ of the same norm as $ v $ such that $ (v,Jv) $ is orthogonal and positively oriented.
Then the \tdfn{unit normal} to $ \ga $ is given by \begin{equation*} \no=\no_\ga\colon [0,1]\to UT\Hh^2, \quad \no:= J \circ \ta. \end{equation*} Assuming that $ \ga $ has a second derivative, its \tdfn{curvature} is the function \begin{equation}\label{E:curv} \ka=\ka_\ga \colon [0,1] \to \R,\quad \ka:=\frac{1}{\abs{\dot\ga}}\Big\langle\frac{D\ta}{dt},\no\Big\rangle = \frac{1}{\abs{\dot\ga}^2}\Big\langle\frac{D\dot\ga}{dt},\no\Big\rangle; \end{equation} here $ D $ denotes covariant differentiation (along $ \ga $). The hyperboloid model is usually the most convenient one for carrying out computations, since it realizes $ \Hh^2 $ as a submanifold of the vector space $ \E^{2,1} $. For instance, the curvature of a curve $ \ga $ on $ L $ is given by: \begin{equation*}\label{E:curv2} \ka= \frac{\dot \ta \cdot \no}{\norm{\dot \ga}} = \frac{\ddot \ga \cdot \no}{\norm{\dot\ga}^2}, \end{equation*} where $ \cdot $ denotes the bilinear form on $ \E^{2,1} $ and $ \norm{\ }^2 $ is the associated quadratic form. \begin{dfn}[spaces of curves in $ \Hh^2 $]\label{D:spaces} Let $ u,\,v \in UT\Hh^2 $ and $ \ka_1 < \ka_2 \in \R \cup \se{\pm \infty} $. Then $ \sr C_{\ka_1 }^{\ka_2 }(u,v) $ (resp.~$ \bar{\sr C}_{\ka_1 }^{\ka_2 }(u,v) $) denotes the set of $ C^r $ regular curves $ \ga \colon [0,1] \to \Hh^2 $ satisfying: \begin{enumerate} \item [(i)] $ \ta_\ga(0) = u $ and $ \ta_\ga(1) = v $; \item [(ii)] $ \ka_1 < \ka_\ga < \ka_2 $ (resp.~$ \ka_1 \leq \ka_\ga \leq \ka_2 $) throughout $ [0,1] $. \end{enumerate} This set is furnished with the $ C^r $ topology, for some $ r\geq 2 $.\footnote{The precise value of $ r $ is irrelevant, cf.~\lref{L:C^2}.} \end{dfn} \begin{dfn}[osculation]\label{D:osculating} Two curves $ \ga,\,\eta \colon [0,1] \lto{C^2} S $ on a smooth surface $ S $ will be said to \tdfn{osculate} each other at $ t = t_0,\,t_1 \in [0,1] $ if one may reparametrize $ \ga $, keeping $ t_0 $ fixed, so that \begin{equation*} \ga(t_0) = \eta(t_1) \in S,\quad \dot\ga(t_0) = \dot\eta(t_1) \in TS \quad \text{and} \quad \ddot\ga(t_0) = \ddot\eta(t_1) \in T(TS). \end{equation*} \end{dfn} \begin{urmk}\label{R:osculating} Suppose that $ S $ is oriented and furnished with a Riemannian metric. Then $ \ga $ and $ \eta $ osculate each other at $ t=t_0,\,t_1 $ if and only if \begin{equation*} \ga(t_0) = \eta(t_1),\quad \ta_{\ga}(t_0) = \ta_{\eta}(t_1)\quad \text{and} \quad \ka_{\ga}(t_0) = \ka_{\eta}(t_1). \end{equation*} \end{urmk} \begin{dfn}[circle, hypercircle, horocycle, ray]\label{D:circle} A \tdfn{circle} is the locus of all points a fixed distance away from a certain point in $ \Hh^2 $ (called its \tit{center}). A \tdfn{hypercircle} is one component of the locus of all points a fixed distance away from a certain geodesic in $ \Hh^2 $. A \tdfn{horocycle} is a curve which meets all geodesics through a certain point of $ \bd \Hh^2 $ orthogonally. A \tdfn{ray} is a distance-preserving map $\al\colon [0,+\infty)\to \Hh^2 $; such a ray is said to \tdfn{emanate from} $ \al'(0) \in UT\Hh^2 $. \end{dfn} In order to understand the effect of a geometric transformation on a given curve, it is often sufficient to replace the latter by its family of osculating constant-curvature curves. Because the hyperbolic plane is homogeneous and isotropic, one can represent these in a convenient position in one of the models as a means of avoiding calculations. \begin{figure}[ht] \begin{center} \includegraphics[scale=.26]{gauss_bonnet} \caption{Computing the curvature of circles, hypercircles and horocycles through Gauss-Bonnet.
The circle in (i) is represented in the disk model, while the hypercircle in (ii) and horocycles in (iii) are represented in the half-plane model. } \label{F:gauss_bonnet} \end{center} \end{figure} \begin{rmk}[constant-curvature curves]\label{R:constant} Define an \tdfn{Euclidean circle} to be either a line or a circle in $ \C $; its center is taken to be $ \infty $ if it is a line. \begin{enumerate} \item [(a)] Circles, hypercircles and horocycles are the orbits of elliptic, hyperbolic and parabolic one-parameter subgroups of isometries, respectively. In particular, they have constant curvature. \item [(b)] In the models $ D $ and $ H $, a circle, horocycle or hypercircle appears as the intersection with $ \Hh^2 $ of an Euclidean circle which is disjoint, secant or (internally) tangent to $ \bd \Hh^2 $, respectively. \item [(c)] In the hyperboloid model, a circle, hypercircle or horocycle appears as a (planar) ellipse, hyperbola or parabola, respectively. \item [(d)] If all points of a circle lie at a distance $ r > 0 $ from a certain point, then its curvature is given by $\pm\coth r $. Thus, all circles have curvature greater than $ 1 $ in absolute value. \item [(e)] If all points of a hypercircle lie at a distance $ r > 0 $ from a certain geodesic, then its curvature is given by $ \pm \tanh r $. Thus, all hypercircles have curvature less than $ 1 $ in absolute value. \item [(f)] The curvature of a horocycle equals $ \pm 1 $. \item [(g)] Circles, hypercircles, horocycles and their arcs account for all constant-curvature curves. \end{enumerate} Assertions (d), (e) and (f) may be proved by mapping the curve through an isometry to the corresponding curve in Figure \ref{F:gauss_bonnet} and applying Gauss-Bonnet to the shaded region. The remaining assertions are also straightforward consequences of the transitivity of the group of isometries on $ UT\Hh^2 $. \end{rmk} In view of the sign ambiguity in the preceding formulas, it is desirable to redefine circles, hypercircles and horocycles not as subsets of $ \Hh^2 $, but as oriented curves therein. The \tdfn{radius} $ r \in \R $ of a circle or hypercircle is now defined so that its curvature $ \ka $ is given by \begin{equation*} \ka = \coth r \text{\quad or \quad} \ka = \tanh r, \end{equation*} respectively. Then $ \abs{r} $ is the distance from the circle (resp.~hypercircle) to the point (resp.~geodesic) to which it is equidistant. Both expressions for the curvature apply to horocycles if these are regarded as circles/hypercircles of radius $ \pm \infty $, a convention which we shall adopt. Observe that in all cases the sign of $ r $ is the same as that of the curvature. \begin{rmk}[curvature and orientation]\label{R:orientation} Let the circle at infinity be oriented from left to right in $ H $ and counter-clockwise in $ D $. In either of these models (compare \fref{F:constant}): \begin{enumerate} \item [(a)] If a hypercircle meets the circle at infinity at an angle $ \al \in (0,\pi) $, then its curvature equals $ \cos \al $. This follows from a reduction to the hypercircle depicted in Figure \ref{F:gauss_bonnet}\?(ii), with the indicated orientation, by expressing $ r $ in terms of $ \al $. More explicitly, \begin{equation*} r = \int_{\al}^{\frac{\pi}{2}}\frac{1}{\sin t}\,dt = -\log \tan \big( \tfrac{\al}{2} \big) \qquad (\al \in (0,\pi)), \end{equation*} so that $ \tanh r = \cos \al $. \item [(b)] If a circle is oriented (counter-)clockwise, then its curvature is less than $ -1 $ (greater than 1).
This follows immediately from a reduction to Figure \ref{F:gauss_bonnet}\?(i). \item [(c)] Both preceding assertions can be extended to include horocycles. This follows by representing horocycles as circles tangent to the circle at infinity in the disk model. \end{enumerate} \end{rmk} \begin{figure}[ht] \begin{center} \includegraphics[scale=.27]{constant} \caption{Examples of curves of constant curvature in the disk model. Note that the sign of the curvature of a hypercircle need not agree with that of its Euclidean curvature.} \label{F:constant} \end{center} \end{figure} \begin{exr}[families of parallel hypercircles] Let $ p \neq q \in \Ss^1_\infty $ and $ \ka \in (-1,1) $. Then there exist exactly two hypercircles of curvature $ \ka $ through $ p $ and $ q $. Furthermore, in the models $ D $ or $ H $:\footnote{Exercises are not used anywhere else in the text and may be skipped without any loss.} \begin{enumerate} \item [(a)] This pair of hypercircles is related by inversion in $ \Ss^1_\infty $, and their Euclidean centers lie along the Euclidean perpendicular bisector $ l $ of $ \ol{pq} $. \end{enumerate} Let $ o $ and $ o' $ denote the Euclidean centers of $ \Ss^1_\infty $ and the geodesic through the pair $ p,\,q $, respectively. \begin{enumerate} \item [(b)] Any point of $ l $ except for $ o $ and $ o' $ is the Euclidean center of a unique hypercircle of curvature in $ (0,1) $ through $ p $ and $ q $. Describe its orientation in terms of the position of its center. \item [(c)] Using \rref{R:orientation}\?(a), describe how its curvature changes as its center moves along $ l $. \end{enumerate} \end{exr} \begin{dfn}[normal translation]\label{D:translation} Let $ \ga\colon [0,1]\to \Hh^2 $ be a regular curve and $ \rho\in \R $. The \tdfn{normal translation} $ \ga_\rho\colon [0,1]\to \Hh^2 $ of $ \ga $ by $ \rho $ is the curve given by \begin{equation*} \ga_\rho(t)=\exp_{\ga(t)}(\rho\no(t))\qquad (t\in [0,1]), \end{equation*} where $ \no $ is the unit normal to $ \ga $ and $ \exp $ the (Riemannian) exponential map. In the hyperboloid model, \begin{equation}\label{E:translation} \ga_\rho(t)=\cosh\rho \,\ga(t)+\sinh\rho\,\no(t)\qquad (t\in [0,1]). \end{equation} \end{dfn} \begin{urmk}\label{R:} The unit tangent bundle of any smooth surface has a canonical contact structure (cf.~\cite{Thurston}, \S3.7). A curve $ \te{\ga} $ on $ UTS $ is Legendrian (with respect to this structure) if its projection to $ S $ always points in the direction prescribed by $ \te{\ga}$. When $ S $ is oriented, a (local) flow of contact automorphisms $ \phi_{\rho} $ on $ UTS $ can be defined by letting $ \phi_{\rho}(u) $ be the parallel translation of $ u $ along the geodesic perpendicular to $ u $ by a signed distance of $ \rho $ toward the left ($ u \in UTS $, $ \rho \in \R $). If $ S $ is complete, $ \phi_\rho $ is defined on all of $ UTS $ for each $ \rho \in \R $. The normal translation of $ \ga $ as defined in \dref{D:translation} is nothing but the projection to $ S $ of $ \phi_\rho(\ta_\ga) $, in the special case where $ S = \Hh^2 $. As will be seen below, this operation may create or remove cusps. \end{urmk} \begin{rmk}[normal translation of constant-curvature curves in $ \Hh^2 $]\label{R:normal}\ Let $ r,\,\rho\in \R $. \begin{enumerate} \item [(a)] The normal translation by $ \rho $ of a circle of radius $ r $ is a circle of radius $ r-\rho $, equidistant to the same point as the original circle.
\item [(b)] The normal translation by $ \rho $ of a hypercircle of radius $ r $ is a hypercircle of radius $ r-\rho $, equidistant to the same geodesic as the original hypercircle. \item [(c)] A normal translation of a horocycle is another horocycle, meeting orthogonally the same family of geodesics as the original horocycle. \end{enumerate} More concisely, the normal translation of a constant-curvature curve of radius $ r\in \R\cup\se{\pm\infty}$ by $ \rho\in \R $ is a curve of the same type of radius $ r-\rho $. \begin{enumerate} \item [(d)] A normal translation of a hypercircle (resp.~horocycle) meets $ \bd \Hh^2 $ in the same points (resp.~point) as the original hypercircle (resp.~horocycle), when represented in one of the models $ D $ or $ H $. \end{enumerate} Once again, to prove these assertions one can use an isometry to represent the circle (hypercircle, horocycle) as in Figure \ref{F:gauss_bonnet}, where they become trivial. Notice also that as $ \rho $ goes from 0 to $ 2r $, the circle shrinks to a singularity ($ \rho = r $) and then expands back to the original circle, but with reversed orientation ($ \rho = 2r $). When $ \rho = r $ the hypercircle becomes the geodesic, and when $ \rho = 2r $ it becomes the other component of the locus of points at distance $ \abs{r} $ from the geodesic. This behavior is subsumed in the following result. \end{rmk} \begin{lem}[normal translation of general curves]\label{L:normal2} Let $ \ga\colon [0,1]\to \Hh^2 $ be smooth and regular, \begin{equation*} \ka_-=\min_{t\in [0,1]}\ka(t)\quad \text{and}\quad \ka_+=\max_{t\in [0,1]}\ka(t). \end{equation*} Assume that $ \coth \rho \nin [\ka_-,\ka_+] $. Then: \begin{enumerate} \item [(a)] The normal translation $ \ga_{\rho} $ is regular. In particular, $ \ga_\rho $ is regular for all $ \rho $ in some open interval containing 0, and for all $ \rho \in \R $ in case $ [\ka_-,\ka_+]\subs [-1,1] $. \item [(b)] $ \ta_{\ga_\rho} \equiv \ta_{\ga} $ if these are regarded as taking values in $ \E^{2,1} \sups L $. \end{enumerate} Given $ t\in [0,1] $, there exists a unique constant-curvature curve which osculates $ \ga $ at $ \ga(t) $. The \tdfn{radius of curvature} $ r_\ga(t) $ of $ \ga $ at $ \ga(t) $ is defined as the radius of this osculating curve. \begin{enumerate} \item [(c)] The radii of curvature of $ \ga_\rho $ and $ \ga $ are related by $ r_{\ga_\rho} = r_\ga - \rho $. \item [(d)] The curvature of $ \ga_\rho $ is given by: \begin{equation*} \ka_{\ga_\rho}(t)=\begin{cases} \frac{1-\ka_\ga(t)\coth\rho}{\ka_\ga(t) - \coth \rho} & \text{if\quad $ \vert{\ka_\ga(t)}\vert>1 $;} \\ \frac{\ka_\ga(t) - \tanh \rho}{1-\ka_\ga(t)\tanh\rho} & \text{if\quad $ \vert{\ka_\ga(t)}\vert<1 $;} \\ \ka_\ga(t) & \text{if\quad $ \vert{\ka_\ga(t)}\vert=1 $. } \end{cases} \end{equation*} \item [(e)] $ (\ga_\rho)_{-\rho}=\ga $. \end{enumerate} \end{lem} \begin{proof} For (a) and (b), use \eqref{E:translation}. It is clear from the definition of ``osculation'' that if $ \eta $ osculates $ \ga $ at $ \ga(t)$, then $ \eta_\rho $ osculates $ \ga_\rho $ at $ \ga_\rho(t) $. Thus part (c) is a consequence of \eref{R:normal}. Part (d) follows from the addition formulas for $ \coth $ and $ \tanh $, and part (e) is obvious. \end{proof} The topology of $ \sr C_{\ka_1}^{\ka_2}(u,v) $ depends in principle upon eight real parameters: three for each of $ u,\,v\in UT\Hh^2 $, and two for the curvature bounds. This number can be halved by a suitable use of normal translations and isometries.
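Formula \eqref{E:translation} and \lref{L:normal2}\?(c) lend themselves to a quick numerical check in the hyperboloid model. The sketch below (in Python/NumPy, which is our assumption; the values of $ r $ and $ \rho $ are arbitrary samples) translates a circle of radius $ r $ about $ e_0 $ and confirms that the curvature of the result is $ \coth(r-\rho) $, using the determinant expression for the curvature on $ L $ (see \S\ref{S:frame}):
\begin{verbatim}
# Numerical check: translating a circle of radius r by rho gives a
# circle of radius r - rho, hence curvature coth(r - rho).
import numpy as np

S = np.diag([-1.0, 1.0, 1.0])          # Lorentzian signature (-,+,+)

def mink(u, v):
    return u @ S @ v                    # Lorentzian inner product

def lorentz_cross(u, v):
    return S @ np.cross(u, v)           # u (x) v = S(u x v)

r, rho = 2.0, 0.7                       # arbitrary sample values

def gamma(t):
    """Circle of radius r about e_0 = (1,0,0) on L; curvature coth r."""
    return np.array([np.cosh(r),
                     np.sinh(r) * np.cos(t),
                     np.sinh(r) * np.sin(t)])

def gamma_rho(t):
    """Normal translation (E:translation): cosh(rho) gamma + sinh(rho) n."""
    tau = np.array([0.0, -np.sin(t), np.cos(t)])  # unit tangent of gamma
    n = lorentz_cross(gamma(t), tau)              # n = J tau = gamma (x) tau
    return np.cosh(rho) * gamma(t) + np.sinh(rho) * n

def curvature(c, t, h=1e-4):
    """kappa = det(c, c', c'') / ||c'||^3 with Lorentzian norm."""
    c1 = (c(t + h) - c(t - h)) / (2 * h)
    c2 = (c(t + h) - 2 * c(t) + c(t - h)) / h**2
    return np.linalg.det(np.column_stack([c(t), c1, c2])) / mink(c1, c1)**1.5

print(curvature(gamma, 0.3), 1 / np.tanh(r))            # both ~ coth(r)
print(curvature(gamma_rho, 0.3), 1 / np.tanh(r - rho))  # both ~ coth(r-rho)
\end{verbatim}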
In the sequel, two intervals are said to \tdfn{overlap} if they intersect but neither is contained in the other one. \begin{prp}[parameter reduction]\label{P:reduction} Let $ (\ka_1,\ka_2) \neq (-1,1) $ and $ u,\,v,\,\bar{u} \in UT\Hh^2 $ be given. Then there exist $ \bar{v}\in UT\Hh^2 $ and $ \ka_0 $ such that $ \sr C_{\ka_1}^{\ka_2}(u,v) $ is canonically homeomorphic to a space of the type listed in Table \ref{Ta:reduction}. \end{prp} \begin{table}[h!] \begin{center} \begin{tabular}{ c c c }\hline Case & $ \sr C_{\ka_1}^{\ka_2}(u,v) $ homeomorphic to & Range of $ \ka_0 $ \rule[-8pt]{0pt}{22pt} \\ \hline $ (\ka_1,\ka_2) $ contained in $ [-1,1] $ & $ \sr C_{0}^{\ka_0}(\bar{u},\bar{v}) $ & $ (0,1] $ \rule{0pt}{14pt} \\ $ (\ka_1,\ka_2) $ disjoint from $ [-1,1] $ & $ \sr C_{\ka_0}^{+\infty}(\bar{u},\bar{v}) $ & $ [1,+\infty) $ \rule{0pt}{14pt} \\ $ (\ka_1,\ka_2) $ overlaps $ [-1,1] $ & $ \sr C_{\ka_0}^{+\infty}(\bar{u},\bar{v}) $ & $ [-1,1) $ \rule{0pt}{14pt} \\ $ (\ka_1,\ka_2) $ contains $ [-1,1] $ & $ \sr C_{-\ka_0}^{+\ka_0}(\bar{u},\bar{v}) $ & $ (1,+\infty] $ \rule{0pt}{14pt} \\ \end{tabular}\vspace{10pt} \caption{ Reduction of the number of parameters controlling the topology of $ \sr C_{\ka_1}^{\ka_2}(u,v) $, for $ (\ka_1,\ka_2) \neq (-1,1) $. } \label{Ta:reduction} \end{center} \end{table} The special role played by the interval $ [-1,1] $ stems from the fact that any normal translation of a horocycle is a horocycle. The four classes listed in Table \ref{Ta:reduction} are genuinely different in terms of their topological properties, as discussed in the introduction. In the only case not covered by \pref{P:reduction}, namely $ (\ka_1,\ka_2)=(-1,1) $, given $ u,\,v,\,\bar{u} $, one can find $ \bar v $ such that $ \sr C_{-1}^{+1}(u,v) $ is homeomorphic to $ \sr C_{-1}^{+1}(\bar u,\bar v) $; for this it is sufficient to compose all curves in the former with an isometry taking $ u $ to $ \bar{u} $. \begin{proof} The argument is similar for all four classes, so we consider in detail only the case where $ (\ka_1,\ka_2) $ overlaps $ [-1,1] $. Firstly, notice that composition with an orientation-reversing isometry switches the sign of the curvature of a curve. Therefore, by applying a reflection in some geodesic if necessary, it can be assumed that $ -1 \leq \ka_1 < 1 < \ka_2 $ (instead of $ \ka_1 < -1 < \ka_2 \leq 1 $). Write $ \ka_2 = \coth \rho_2 $, $ \ka_1 = \tanh \rho_1 $ and $ \ka_0 = \tanh(\rho_1-\rho_2) $. Then normal translation by $ \rho_2 $ (i.e., the map $ \ga \mapsto \ga_{\rho_2} $) provides a homeomorphism \begin{equation*} \sr C_{\ka_1}^{\ka_2}(u,v)\to \sr C_{\ka_0}^{+\infty}(u',v'), \end{equation*} where $ u' $ is obtained by parallel translating $ u $ by a distance $ \rho_2 $ along the ray emanating from $ Ju $, and similarly for $ v' $. The inverse of this map is simply normal translation by $ -\rho_2 $. All details requiring verification here were already dealt with in \lref{L:normal2}. Finally, post-composition of curves with the unique orientation-preserving hyperbolic isometry taking $ u' $ to $ \bar{u} $ yields a homeomorphism \begin{equation*} \sr C_{\ka_0}^{+\infty}(u',v') \to \sr C_{\ka_0}^{+\infty}(\bar{u},\bar{v}). \end{equation*} In the remaining cases, let $ \rho_i $ denote the radius of a curve of constant curvature $ \ka_i $ ($ i=1,2 $). For the first class in the table, apply a reflection if necessary to ensure that $ \ka_1 > -1 $, then use normal translation by $ \rho_1 $.
For the second class, reduce to the case where $ \ka_1 \geq 1 $ and apply normal translation by $ \rho_2 $. For the fourth class, apply a normal translation by $ \frac{1}{2}(\rho_1+\rho_2) $. In all cases, use an orientation-preserving isometry to adjust the initial unit tangent vector to $ \bar u $. \end{proof} \begin{urmk} The homeomorphism constructed in the proof of \pref{P:reduction} operates on the curves in a given space by a composition of a normal translation and an isometry. It is ``canonical'' in the sense that these two transformations, as well as the values of $ \ka_0 $ and $ \bar v $, are uniquely determined by $ \ka_1,\,\ka_2,\,u,\,v $ and $ \bar u $. However, this does not preclude the existence of some more complicated homeomorphism between spaces in the second column of Table \ref{Ta:reduction}. \end{urmk} \begin{exr}[extension of \pref{P:reduction}]\label{X:reduction} Let $ (\ka_1,\ka_2) \neq (-1,1) $ and $ u,\,v,\,\bar u \in UT\Hh^2 $ be given. Use the argument above to prove: \begin{enumerate} \item [(a)] If $ (\ka_1,\ka_2) $ contains $ [-1,1] $, then there exist $ \ka_0 \in [-\infty,-1) $ and $ \bar v \in UT\Hh^2 $ such that \begin{equation*} \sr C_{\ka_1 }^{\ka_2 }(u,v) \home \sr C_{\ka_0}^{+\infty}(\bar{u},\bar{v}). \end{equation*} \item [(b)] If $ -1<\ka_1 < \ka_2 < 1 $, then there exist $ \ka_0 \in (0,1) $ and $ \bar v \in UT\Hh^2 $ such that \begin{equation*} \sr C_{\ka_1}^{\ka_2}(u,v) \home \sr C_{-\ka_0}^{+\ka_0}(\bar u,\bar v). \end{equation*} Note that the hypothesis here is more restrictive than in the first case of the table. \end{enumerate} \end{exr} \section{Voidness of the canonical subspaces}\label{S:voidness} The purpose of this section is to discuss which of the canonical subspaces of $ \sr C_{\ka_1}^{\ka_2}(u,v) $ are empty. In the sequel $ \sr C_{\ka_1 }^{\ka_2 }(u,\cdot) $ (resp.~$ \bar{\sr C}_{\ka_1 }^{\ka_2 }(u,\cdot) $) denotes the space obtained from \dref{D:spaces} by omitting the restriction that $ \ta_{\ga}(1) = v $. \begin{lem}[attainable endpoints]\label{L:attainable} If at least one $ \abs{\ka_i} > 1 $, then $ \sr C_{\ka_1}^{\ka_2}(u,v) \neq \emptyset $ for all $ u,v\in UT\Hh^2 $. \end{lem} \begin{proof} By symmetry, we may assume that $ \ka_2>1 $. Further, using normal translation by $ \rho_2=\arccoth \ka_2 $, we reduce to the case where $ \ka_2=+\infty $. Let $ q \in \Hh^2 $ and $ x \in UT\Hh^2_q $ be arbitrary. Consider the map \begin{equation*} F_x \colon \sr C_{\ka_1 }^{+\infty}(x,\cdot) \to UT\Hh^2,\quad F_x(\ga) = \ta_{\ga}(1). \end{equation*} It is not hard to see that this map is open; cf.~\lref{L:submersion}. We begin by proving that $ \Im(F_x) \sups UT\Hh^2_q $. Since sufficiently tight circles are allowed, $ x \in \Im(F_x) $. Let $ I\subs UT\Hh^2_q \cap \Im(F_x) $ be an open interval about $ x $. For each $ y\in I $, let $ R_y $ denote the elliptic isometry fixing $ q $ and taking $ x $ to $ y $. Let \begin{equation*} \phantom{\quad (y,\,z \in UT\Hh^2_q)} \ga_y \in \sr C_{\ka_1 }^{\ka_2 }(x,y) \quad \text{and} \quad \ga_z \in \sr C_{\ka_1 }^{\ka_2 }(x,z) \quad (y,\,z \in UT\Hh^2_q). \end{equation*} Then the concatenation \begin{equation*} \ga_y\ast \big( R_y\circ \ga_z \big) \end{equation*} starts at $ x $ and ends at $ dR_y(z) $. This curve may not have a second derivative at the point of concatenation, but it may be approximated by a smooth curve in $ \sr C_{\ka_1 }^{\ka_2 }(x,R_yz) $. This shows that $ \Im(F_x) $ contains the interval $ \set{R_yz}{y \in I} $ about $ z $ whenever it contains $ z $.
In turn, this implies that $ \Im(F_x) $ includes $ UT\Hh^2_q $. Therefore, if $ \Im(F_u) $ contains any tangent vector $ x $ based at $ q $, it must also contain $ UT\Hh^2_q \subs \Im(F_x) $, by transitivity. Now let $ r:=\arccoth \ka_1 $ in case $ \ka_1>1 $ and $ r:=+\infty $ otherwise. If $ \Im(F_u) $ contains tangent vectors at $ q $, then it also contains tangent vectors at $ q' $ for any $ q'\in B(q;2r) $, since $ q' $ can be reached from $ q $ by the half-circle centered at the midpoint of the geodesic segment $ qq' $ of radius $ \frac{1}{2}d(q,q') <r$. We conclude that $ \Im(F_u)=UT\Hh^2 $, that is, $ \sr C_{\ka_1 }^{\ka_2 }(u,v) \neq \emptyset $ for all $ v \in UT\Hh^2 $. \end{proof} The converse of \lref{L:attainable} is an immediate consequence of the following result; cf.~also \cref{C:closed}. \begin{lem}[unattainable endpoints]\label{L:unattainable} Let $ (\ka_1,\ka_2)\subs[-1,1] $ and $ u\in UT\Hh^2 $. The hypercircles (or horocycles) of curvature $ \ka_1 $ and $ \ka_2 $ tangent to $ u $ divide $ \Hh^2 $ into four open regions. Any curve in $ \sr C_{\ka_1}^{\ka_2}(u,\cdot) $ is confined to one of these regions $ R $. Conversely, a point in $ R $ can be reached by a curve of constant curvature in $ \sr C_{\ka_1}^{\ka_2}(u,\cdot) $. \end{lem} \begin{proof} The hypercircle of curvature $ \frac{1}{2}(\ka_1+\ka_2) $ tangent to $ u $ meets $ \bd \Hh^2 $ at two points; let $ R $ denote the open region whose closure contains the one point in $ \bd \Hh^2 $ towards which $ u $ is pointing. Let $ \ga \in \sr C_{\ka_1 }^{\ka_2 }(u,\cdot) $. Comparison of the curvatures of $ \ga $ and those of the hypercircles bounding $ R $ shows that $ \ga(t) \in R $ for sufficiently small $ t>0 $. Suppose for a contradiction that $ \ga $ reaches $ \bd R $ for the first time at $ t=T \in (0,1] $ at some point of the hypercircle $ E $ of curvature $ \ka_2 = \tanh r $. (In case $ \ka_2=1 $, $ E $ is a horocycle and $ r=+\infty $.) The function \begin{equation*} \de\colon [0,T] \to \R,\quad \de(t)=d(\ga(t),E) \end{equation*} vanishes at $ 0 $ and $ T $, hence it must attain its global maximum in $ [0,T] $ at some $ \tau \in (0,T) $, say $ \de(\tau) = \rho $. By \rref{R:normal}, the normal translation of $ E $ by $ -\rho $ is a hypercircle (resp.~horocycle) of curvature $ \tanh (r+\rho) $, which must be tangent to $ \ga $ at $ \ga(\tau) $ because $ \dot \de(\tau) = 0 $. Since $ \de $ has a local maximum at $ \tau $, comparison of curvatures yields that $ \ka_{\ga}(\tau)\geq \tanh(r+\rho)>\ka_2 $, which is impossible. The last assertion of the lemma holds because constant-curvature curves in $ \sr C_{\ka_1}^{\ka_2}(u,\cdot) $ foliate $ R $. \end{proof} \begin{urmk}\label{R:Rbar} With the notation of \lref{L:unattainable}, the image of any curve in $ \bar{\sr C}_{\ka_1}^{\ka_2}(u,\cdot) $ is contained in $ \bar R $ (see \dref{D:spaces} for the definition of $ \bar {\sr C} $). To prove this, let $ \ga $ be a curve in this space. If $ \ka_\ga \equiv \ka_1 $ or $ \ka_\ga\equiv \ka_2 $, then the image of $ \ga $ is entirely contained in $ \bd R $. Otherwise, let $ t_0 < 1 $ be the infimum of all $ t \in [0,1] $ such that $ \ka_{\ga}(t) \in (\ka_1,\ka_2) $, and apply the argument of \lref{L:unattainable} to $ \ga|_{[t_0,1]} $. \end{urmk} \begin{crl}\label{C:closed} The spaces $ \sr C_{\ka_1 }^{\ka_2 }(\cdot,\cdot) $ and $ \bar{\sr C}_{\ka_1}^{\ka_2}(\cdot,\cdot) $ contain closed curves if and only if at least one $ \abs{\ka_i}>1 $.\qed \end{crl} \begin{crl}\label{C:nontrivial} Let $ S $ be a hyperbolic surface.
Then any closed curve whose curvature is bounded by 1 in absolute value is homotopically non-trivial (that is, it cannot represent the unit element of $ \pi_1S $). \end{crl} \begin{proof} Indeed, the previous corollary shows that the lift of such a curve to $ \Hh^2 $ cannot be closed. \end{proof} It will be convenient to introduce another model for $ \Hh^2 $, which bears some similarity to Mercator world maps. \begin{dfn}[Mercator model]\label{D:Mercator} The underlying set of the \tdfn{Mercator model} $ M $ is $ (0,\pi) \times \R $. Its metric is defined by declaring the correspondence \begin{equation*} M\to H,\quad (x,y)\mapsto e^{y+ix} \qquad \big(x\in (0,\pi),~y\in \R\big) \end{equation*} to be an isometry. Because this map is the composition of the complex exponential with a reflection in the line $ y=x $, $ M $ is conformal. \end{dfn} \begin{rmk}[geometry of $ M $]\label{R:compM} It is straightforward to verify that in the Mercator model $ M $: \begin{enumerate} \item [(a)] Vertical lines $ y\mapsto (x,y) $, or \tit{parallels}, represent hypercircles of curvature $ \cos x $, corresponding in $ H $ to rays having $ 0 \in H $ for their initial point (see \xref{R:orientation}\?(a)). Horizontal segments, or \tit{meridians}, are geodesics corresponding in $ H $ to Euclidean half-circles centered at 0. \item [(b)] Vertical translations are hyperbolic isometries. The Riemannian metric $ g $ is given by \begin{equation*} g_{(x,y)} = \frac{dx^2+dy^2}{\sin^2 x} \qquad ((x,y)\in M). \end{equation*} \item [(c)] The Christoffel symbols are given by \begin{equation*} \Ga_{ij}^{k}(x,y) = \begin{cases} 0 & \text{ if $ i+j+k $ is odd;} \\ (-1)^{1+ij}\cot x & \text{ if $ i+j+k $ is even.} \end{cases} \end{equation*} \end{enumerate} \end{rmk} \begin{lem}\label{L:graph} Let $ (\ka_1,\ka_2) \subs [-1,1] $ and $ u $ be the vector $ 1 \in \Ss^1 $ based at $ \big(\frac{\pi}{2},0\big) \in M $. Then the image of any curve in $ \sr C_{\ka_1 }^{\ka_2 }(u,\cdot) $ or $ \bar{\sr C}_{\ka_1}^{\ka_2}(u,\cdot) $ is the graph of a function of $ x $ when represented in $ M $. Conversely, if $ (\ka_1,\ka_2) \not \subs [-1,1] $, then there exist curves in these spaces which are not graphs. \end{lem} \begin{proof} Suppose that $ \ga \in \bar{\sr C}_{\ka_1}^{\ka_2}(u,\cdot) $ is not the graph of a function of $ x $ when represented in $ M $, i.e., it is tangent to a parallel $ P_1 $ at some time, say, $ t=1$. Let $ P_0 $ be the parallel $ x=\frac{\pi}{2} $, which is orthogonal to $ \ga $ at $ t=0 $ by hypothesis; note that $ P_0 $ is a geodesic. Let $ L $ be the meridian through $ \ga(1) $, which is a geodesic orthogonal to both $ P_0 $ and $ P_1 $. Let $ \ga_1 $ be the concatenation of $ \ga $ and its reflection in $ L $ (with reversed orientation). Let $ \ga_2 $ be the concatenation of $ \ga_1 $ and its reflection in $ P_0 $ (again with reversed orientation). Then $ \ga_2 $ is a closed curve, hence $ (\ka_1,\ka_2) \not\subs [-1,1] $ by \cref{C:closed}. Conversely, if $ (\ka_1,\ka_2) \not\subs [-1,1] $, then $ \sr C_{\ka_1}^{\ka_2}(u,\cdot) $ contains circles, which are closed and hence not graphs. \end{proof} \begin{prp}[attainable turnings]\label{P:unattainable} Consider the decomposition \eqref{E:decomp} of $ \sr C_{\ka_1 }^{\ka_2 }(u,v) $ into canonical subspaces. \begin{enumerate} \item [(a)] If $ (\ka_1,\ka_2) $ contains $ [-1,1] $, then all of its canonical subspaces are nonempty. \item [(b)] If $ (\ka_1,\ka_2) $ is contained in $ [-1,1] $, then at most one canonical subspace is nonempty.
\item [(c)] If $ (\ka_1,\ka_2) $ partially overlaps $ [-1,1] $ (i.e., neither interval contains the other) or is disjoint from it, then infinitely many of the canonical subspaces are nonempty, and infinitely many are empty.
\end{enumerate}
\end{prp}
\begin{proof} We split the proof into parts.
\begin{enumerate}
\item [(a)] By \lref{L:attainable}, $ \sr C_{\ka_1 }^{\ka_2 }(u,v) \neq \emptyset $. Since $ (\ka_1,\ka_2) \sups [-1,1] $, we may concatenate a curve in this space with circles of positive or negative curvature, traversed multiple times, to attain any desired total turning.
\item [(b)] By \pref{P:reduction}, we may assume that $ u $ is the vector in the statement of \lref{L:graph}. Then the assertion becomes an immediate consequence of the latter.
\item [(c)] By \lref{L:attainable}, $ \sr C_{\ka_1}^{\ka_2}(u,v) \neq \emptyset $. Because we may concatenate any curve in $ \sr C_{\ka_1 }^{\ka_2 }(u,v) $ with a circle (of positive curvature if $ \ka_1 > -1 $ and of negative curvature if $ \ka_2 < 1 $) traversed multiple times, $ \sr C_{\ka_1 }^{\ka_2 }(u,v;\tau) \neq \emptyset $ for infinitely many values of $ \tau $. The remaining assertion, that $ \sr C_{\ka_1 }^{\ka_2 }(u,v;\tau) = \emptyset $ for infinitely many values of $ \tau $, is a consequence of \lref{L:turnings}\?(c) below.\qedhere
\end{enumerate}
\end{proof}
\begin{dfn}[$ \al_{\pm} $]\label{D:arcs} Given a regular curve $ \ga \colon [0,1] \to \Hh^2 $, define maps
\begin{equation*}
\al_{\pm} \colon [0,1] \to \Ss^1_\infty
\end{equation*}
by letting $ \al_{\pm}(t) $ be the point where the geodesic ray emanating from $ \pm \no(t) $ meets $ \Ss^1_\infty $. \end{dfn}
\begin{lem}\label{L:turnings} Let $ u,\,v\in UT\Hh^2 $ be fixed.
\begin{enumerate}
\item [(a)] Two curves $ \ga,\,\bar\ga \in \sr C_{-\infty}^{+\infty}(u,v) $ lie in the same component of this space if and only if the associated maps $ \al_{+},\,\bar\al_{+} \colon [0,1] \to \Ss^1_\infty $ defined in \dref{D:arcs} have the same total turning.
\item [(b)] If $ \ka_\ga > -1 $ everywhere, then $ \al_- $ is monotone. Similarly, if $ \ka_\ga<+1 $, then $ \al_{+} $ is monotone.
\item [(c)] If $ \ka_1 \geq -1 $, then there exists $ \tau_0 $ such that $ \sr C_{\ka_1 }^{\ka_2 }(u,v;\tau) $ is empty for all $ \tau <\tau_0 $.
\end{enumerate}
\end{lem}
\begin{proof} It is clear that if $ \ga,\,\bar\ga $ lie in the same component, then $ \al_{+} \iso \bar\al_{+} $ and $ \al_{-} \iso \bar\al_{-} $. Conversely, if $ \ga,\,\bar\ga $ do not lie in the same component, then $ \bar\ga $ must be homotopic (through regular curves) to the concatenation of $ \ga $ with a circle traversed $ n $ times, for some $ n\neq 0 $. This yields a homotopy between $ \bar\al_+ $ and $ \al_+ $ concatenated with a map of degree $ n $. For part (b), it suffices to approximate $ \ga $ by its osculating constant-curvature curve at each point. If such a curve is a circle, place its center at the origin in the disk model. If it is a hypercircle, regard it as a Euclidean ray in $ H $. Finally, (c) is a corollary of (a) and (b). \end{proof}
\begin{exr}\label{X:arcs} Let $ \ga \colon [0,1] \to \Hh^2 $ be a regular curve and $ A_{\pm}(\ga) \subs \Ss^1_\infty $ denote the images of $ \al_{\pm} \colon [0,1] \to \Ss^1_\infty $.
\begin{enumerate}
\item [(a)] Each of $ A_{\pm}(\ga) $ is a closed arc, which may be a singleton or all of $ \Ss^1_\infty $.
\item [(b)] Regard $ \ga $ as a curve in $ H $, and let $ \infty $ be the unique point of $ \Ss^1_\infty $ not on the real line in this model.
Then $ \infty $ lies in the complement of $ A_{+}(\ga) \cup A_{-}(\ga) $ if and only if $ \ta_\ga $ is never horizontal.
\item [(c)] Let $ \eta $ be a horocycle of curvature $ 1 $, tangent to $ \Ss^1_\infty $ at $ z $. Then $ A_+(\eta) = \se{z} $ while $ A_-(\eta) = \Ss^1_\infty \ssm \se{z} $. What happens if the curvature of $ \eta $ is $ -1 $? (\tit{Hint}: reduce to the case where $ z = \infty $.)
\end{enumerate}
\end{exr}
\section{Frame and logarithmic derivative}\label{S:frame}
The group $ \Iso_+(\Hh^2) $ of all orientation-preserving isometries of the hyperbolic plane acts simply transitively on $ UT\Hh^2 $. Therefore, an element $ g$ of this group is uniquely determined by where it maps a fixed unit tangent vector $ u_0 $. This yields a correspondence between the two sets, viz., $ g\dar gu_0 $. The \tdfn{frame}
\begin{equation}\label{E:frame}
\Phi=\Phi_\ga\colon [0,1]\to \Iso_+(\Hh^2)
\end{equation}
of a regular curve $ \ga\colon [0,1]\to \Hh^2 $ is the image of $ \ta_\ga $ under this correspondence, and the \tdfn{logarithmic derivative} $ \La=\La_\ga\colon [0,1]\to \textrm{L}(\Iso(\Hh^2)) $ is its infinitesimal version. More precisely, $ \La $ is the translation of $ \dot\Phi $ to the Lie algebra $ \textrm{L}(\Iso(\Hh^2)) $, defined by
\begin{equation}\label{E:La}
\dot\Phi(t) = TL^{\Phi(t)} (\La(t)) \quad (t\in [0,1]).
\end{equation}
Here $ TL^{\Phi(t)} $ denotes the derivative (at the identity) of left multiplication by $ \Phi(t) $. Although $ \Phi $ depends on the choice of $ u_0 $, $ \La $ does not. To be explicit, let $ S=\diag(-1,1,1) $,
\begin{equation*}
\Oo_{2,1} = \set{ Q \in \GL_3(\R)}{Q^tSQ=S}
\end{equation*}
be the group of isometries of $ \E^{2,1} $ and
\begin{equation*}
\SO^+_{2,1} = \set{Q\in \Oo_{2,1} }{\det(Q)=1\text{ and }Q(L)=L}
\end{equation*}
be the connected component of the identity, which is isomorphic to $ \Iso_+(\Hh^2) $. The corresponding Lie algebra is
\begin{equation*}
\aso_{2,1} = \set{X\in \agl_3(\R)}{X^tS+SX=0}.
\end{equation*}
In the hyperboloid model $ L $, the identification between $ UT\Hh^2 $ and $ \Iso_+(\Hh^2) $ takes the canonical form
\begin{equation}\label{E:corresp}
UTL_p\ni u \dar \begin{pmatrix} | & | & | \\ p & u & p \ten u \\ | & | & | \end{pmatrix} \in \SO^+_{2,1},
\end{equation}
where $ \ten $ denotes the Lorentzian vector product in $ \E^{2,1} $. (Recall that $ u \ten v = S(u \times v)$, where $ \times $ denotes the usual vector product of vectors in $ \R^3 $.) Note that this correspondence is also of the form $ gu_0 \dar g $ described above ($ g \in \SO^+_{2,1} $), for $ u_0 = e_1 = (0,1,0) \in UTL $.
\begin{exr}[computations in $ L $]\label{X:loid} Let $ \ga\colon [0,1]\to L $ be smooth and regular.
\begin{enumerate}
\item [(a)] Denoting differentiation with respect to the given parameter (resp.~arc-length) by $ \dot{} $ (resp.~$ ' $):
\begin{equation*}
\ta' = \ka\no+\ga,\quad \no'=-\ka\ta\quad \text{and} \quad \ka = \ta'\cdot \no = \frac{1}{\norm{\dot\ga}}\dot\ta\cdot \no = \frac{1}{\norm{\dot\ga}^2}\ddot\ga\cdot\no = \frac{1}{\norm{\dot\ga}^3}\det(\ga,\dot \ga,\ddot \ga).
\end{equation*}
In these formulas $ \ga,\,\ta,\,\no $ are viewed as taking values in $ \E^{2,1} $, $ \cdot $ is the Lorentzian inner product and $ \norm{\ }^2 $ the corresponding quadratic form.
\item [(b)] The frame $ \Phi\colon [0,1]\to \SO^+_{2,1} $ and logarithmic derivative $ \La\colon [0,1]\to \aso_{2,1} $ of $ \ga $ are given by
\begin{equation*}
\Phi=\begin{pmatrix} | & | & | \\ \ga & \ta & \no \\ | & | & | \end{pmatrix} \text{\quad and\quad} \La = \norm{\dot\ga}\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & -\ka \\ 0 & \ka & 0 \end{pmatrix}.
\end{equation*}
\end{enumerate}
\end{exr}
Similar formulas for the curvature in the disk and half-plane models are much more cumbersome. However, the frame and logarithmic derivative do admit comparatively simple expressions. For concreteness, we choose the correspondence between $ \PSL_2(\R) = \Iso_+(H) $ and $ UTH $ to be $ M \dar dM_i(1) $, where the latter denotes the image under the complex derivative $ dM $ of the tangent vector $ 1 \in \C $ based at $ i\in H $. Similarly, we choose the correspondence between $ \Iso_+(D) $ and $ UTD $ to be $ M \dar dM_0(\frac{1}{2}) $. Recall that $ \Iso_+(D) $ consists of those M\"obius transformations of the form
\begin{equation*}
\qquad z\mapsto \frac{az+b}{\bar bz+\bar a},\quad \abs{a}^2-\abs{b}^2>0 \quad (a,\,b \in \C).
\end{equation*}
\begin{rmk}[$ \Phi $ and $\La $ in the models $ D $ and $ H $]\label{R:phila} Denote elements of projective groups as matrices in square brackets, the absolute value of a complex number by $ \abs{\ } $ and, exceptionally, the norm of a vector tangent to $ \Hh^2 $ by $ \norm{\ } $.
\begin{enumerate}
\item [(a)] Let $ \ga\colon [0,1] \to H $ be a smooth regular curve. Then $ \La \colon [0,1]\to \asl_2({\R}) $ is given by
\begin{equation*}
\La=\frac{1}{2}\norm{\dot\ga} \begin{pmatrix} 0 & 1+\ka \\ 1-\ka & 0 \end{pmatrix}
\end{equation*}
and $ \Phi \colon [0,1]\to \PSL_2(\R) $ is given by
\begin{equation*}
\Phi= \begin{bmatrix} \Im(\ga \bar z) & \Re(\ga \bar z)\\ \Im(\bar z) & \Re(\bar z) \end{bmatrix},\quad \text{where $ \frac{\ta}{\abs{\ta}} = z^2\in \Ss^1 $.}
\end{equation*}
\item [(b)] Let $ \ga\colon [0,1] \to D $ be a smooth regular curve. Then $ \La\colon [0,1] \to \asl_2(\C) $ is given by
\begin{equation*}
\La=\frac{1}{2}\norm{\dot\ga}\begin{pmatrix} i\ka & 1 \\ 1 & -i\ka \end{pmatrix}
\end{equation*}
and $ \Phi\colon [0,1] \to \Iso_+(D)\subs\PSL_2(\C) $ is given by
\begin{equation*}
\Phi = \begin{bmatrix} \phantom{\ga}z & \ga \bar z\\ \bar\ga z & \phantom{\ga}\bar z \end{bmatrix},\quad \text{where $ \frac{\ta}{\abs{\ta}}=z^2\in \Ss^1 $.}
\end{equation*}
\end{enumerate}
To establish the first formula, it suffices by \eqref{E:La} to consider the case where the parameter is arc-length and $ \Phi(s_0) = I\in \PSL_2(\R) $. Without trying to find an expression for $ \Phi(s) $ itself, write
\begin{equation*}
M^s:=\Phi(s)\in \PSL_2(\R),\quad M^s(z)=\frac{a(s)z+b(s)}{c(s)z+d(s)}, \quad M^{s}(i)=\ga(s)\in H, \quad dM^{s}_i=\ta(s)\in \C.
\end{equation*}
(Here $ dM^s $ denotes the complex derivative of $ M^s $ as a map $ \C\to \C $.) Differentiate with respect to $ s $ at $ s_0 $ and express $ \ta'(s_0) $ in terms of $ \frac{D\ta}{ds}(s_0) $ to determine $ a',b',c',d' $ at $ s_0 $. The derivation of the formula for $ \La $ in (b) is analogous, and the expressions for $ \Phi $ are obtained by straightforward computations. \end{rmk}
A one-parameter group of hyperbolic isometries provides a foliation of $ \Hh^2 $ by its orbits, which are hypercircles by \rref{R:constant}\?(a). A regular curve $ \ga \colon [0,1]\to \Hh^2$ \tdfn{admits hyperbolic grafting} if there exist some such group $ G $ and $ t_0,t_1\in [0,1] $ such that $ \ta(t_0) $ and $ -\ta(t_1) $ are tangent to orbits of $ G $.
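\begin{urmk}
As a quick consistency check of the expressions in \rref{R:phila}\?(a), consider the unit-speed geodesic $ \ga(s)=e^s i $ in $ H $, for which $ \ka\equiv 0 $ and $ \norm{\dot\ga}\equiv 1 $. Here $ \frac{\ta}{\abs{\ta}}=i=z^2 $ with $ z=e^{i\pi/4} $, and the formula for $ \Phi $ yields, after rescaling to determinant $ 1 $ (which is immaterial in $ \PSL_2(\R) $),
\begin{equation*}
\Phi(s)=\frac{1}{\sqrt{2}}\begin{bmatrix} e^{s/2} & e^{s/2} \\ -e^{-s/2} & e^{-s/2} \end{bmatrix}.
\end{equation*}
One checks directly that $ \Phi(s)(i)=e^s i=\ga(s) $, that $ d\Phi(s)_i(1)=e^s i=\ta(s) $ and that
\begin{equation*}
\Phi(s)^{-1}\dot\Phi(s)=\frac{1}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
\end{equation*}
in agreement with the stated formula for $ \La $ when $ \ka=0 $ and the parameter is arc-length.
\end{urmk}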
\begin{rmk}[hyperbolic grafting]\label{R:grafting} A regular curve $ \ga\colon [0,1]\to H $ admits hyperbolic grafting if and only if there exist $ t_0,t_1\in [0,1] $ such that $ \Phi(t_1)\Phi(t_0)^{-1} $ has the form
\begin{equation*}
\begin{bmatrix} r(1-\sin2\theta) & -r\cos2\theta \\ \cos2\theta & -(1-\sin2\theta) \end{bmatrix}= \begin{bmatrix} r(\cos\theta-\sin\theta) & -r(\cos\theta+\sin\theta) \\ \cos\theta+\sin\theta & \sin\theta-\cos\theta \end{bmatrix}\in \PSL_2(\R)
\end{equation*}
for some $ r>0 $ and $ \theta\in (0,\frac{\pi}{2})$. For the proof, note that any two one-parameter groups of hyperbolic isometries are conjugate, hence $ G $ may be taken as the group of positive homotheties centered at $ 0 $. By homogeneity, it may also be assumed that $ \ta(t_0) = i\in \C $ based at $ i\in H $. Then $ -\ta(t_1) = \Im(z)\,z/\abs{z} $ if it is based at $ z \in H $ and tangent to an orbit of $ G $. The M\"obius transformations $ M\in \PSL_2(\R) $ satisfying $ dM_i(i)=-\Im(z)\,z/\abs{z} $ ($ z=re^{2i\theta}\in H $) admit the stated description.
\end{rmk}
\section{The case where $ (\ka_1,\ka_2) $ is disjoint from $ [-1,1] $} \label{S:disjoint}
\begin{lem}\label{L:convex} In the disk and half-plane models, a curve whose (hyperbolic) curvature is everywhere greater than $ 1 $ is locally convex from the Euclidean viewpoint, i.e., its total turning is strictly increasing. \end{lem}
\begin{proof} It suffices to prove this for curves of constant curvature greater than 1, because a general curve satisfying the hypothesis is osculated by curves of this type. In $ D $ or $ H $, such curves are represented as Euclidean circles traversed in the counterclockwise direction, hence they are locally convex. \end{proof}
\begin{thm}\label{T:disjoint} If $ (\ka_1,\ka_2) $ is disjoint from $ [-1,1] $, then each canonical subspace of $ \sr C_{\ka_1}^{\ka_2}(u,v) $ is either empty or contractible. \end{thm}
\begin{proof} We will work in the half-plane model $ H $ throughout the proof. Points and tangent vectors will thus be regarded as elements of $ \C $. By \pref{P:reduction}, it may be assumed that $ \ka_1 \geq 1 $ and that $ u $ is parallel to $ 1 \in \Ss^1 $. Let $ \tau \in \R $ be a fixed valid total turning for curves in $ \sr C_{\ka_1 }^{\ka_2 }(u,v) $, meaning that $ e^{i\tau} $ is parallel to $ v \in \C $. Given $ \ga \in \sr C_{\ka_1 }^{\ka_2 }(u,v;\tau) $, let $ \theta_{\ga} \colon [0,1] \to [0,\tau] $ be the unique continuous function such that $ \theta_\ga(0) = 0 $ and
\begin{equation*}
\frac{\ta_{\ga}(t)}{\vert \ta_\ga(t)\vert} = e^{i\theta_{\ga}(t)} \text{ for all $ t\in [0,1] $,}
\end{equation*}
where $ \abs{\ } $ denotes the usual absolute value of complex numbers. By \lref{L:convex}, $ \theta_\ga $ is a diffeomorphism of $ [0,1] $ onto $ [0,\tau] $. Thus it may be used as a parameter for $ \ga $. Now fix $ \ga_0 \in \sr C_{\ka_1 }^{\ka_2 }(u,v;\tau) $ and set $ \ga_1:=\ga $, both viewed as curves in $ H $ and parametrized by the argument $ \theta \in [0,\tau] $, as above. Define
\begin{equation*}
\ga_s(\theta) = (1-s)\ga_0(\theta) + s\ga_1(\theta) \in \C \quad (s \in [0,1],~\theta \in [0,\tau]).
\end{equation*}
Then each $ \ga_s \colon [0,\tau] \to H $ is a smooth curve satisfying $ \ta_{\ga_s}(0) = u $ and $ \ta_{\ga_s}(1) = v $. Moreover, it has the correct total turning $ \tau $, since $ \ta_{\ga_s}(\theta) $ is always parallel to $ e^{i\theta} $.
By \rref{R:constant}\?(g) and the definition of ``osculation'', the constant-curvature curves osculating $ \ga $ from the Euclidean and hyperbolic viewpoints at any $ \theta $ agree. Since $ \ka_1 \geq 1$, they are both equal to a certain circle completely contained in $ H $, traversed counterclockwise; compare \rref{R:orientation}\?(b). Therefore the osculating constant-curvature curve to $ \ga_s $ at $ \theta $ is another circle $ C_s $, the corresponding convex combination of the osculating circles $ C_0 $ to $ \ga_0 $ and $ C_1 $ to $ \ga_1 $ at $ \theta $ (see \fref{F:circles}). Let $ y_s $ denote the $ y $-coordinate of the Euclidean center of $ C_s $ and $ r_s $ its Euclidean radius ($ s \in [0,1] $). More explicitly,
\begin{equation*}
y_s = (1-s)y_0+sy_1 \quad \text{and} \quad r_s = (1-s)r_0 + sr_1.
\end{equation*}
The \tsl{hyperbolic} diameter $ d_s $ of $ C_s $ is given by
\begin{equation*}
d_s = \log \bigg( \frac{y_s+r_s}{y_s-r_s} \bigg).
\end{equation*}
Note that $ y_s>r_s $, as this is true for $ s=0,\,1 $. Because
\begin{equation*}
\phantom{\quad (i=0,1)}2\arccoth(\ka_2) < d_i < 2\arccoth(\ka_1) \quad (i=0,1)
\end{equation*}
by hypothesis, a trivial computation shows that $ d_s $ satisfies the same inequalities for all $ s\in [0,1] $: indeed, the ratio $ (y_s+r_s)/(y_s-r_s) $ is a fractional-linear, hence monotone, function of $ s $, so $ d_s $ lies between $ d_0 $ and $ d_1 $. Hence the curvature of $ \ga_s $ does indeed take values inside $ (\ka_1,\ka_2) $, and $ (\ga,s) \mapsto \ga_s $ defines a contraction of $ \sr C_{\ka_1 }^{\ka_2 }(u,v;\tau) $. \end{proof}
\begin{figure}[ht] \begin{center} \includegraphics[scale=.20]{circles} \caption{Proof of \tref{T:disjoint}} \label{F:circles} \end{center} \end{figure}
\begin{crl}\label{C:disjoint} If $ (\ka_1,\ka_2) $ is disjoint from $ [-1,1] $, then $ \sr C_{\ka_1}^{\ka_2}(u,v) $ is homeomorphic to the disjoint union of countably many copies of the separable Hilbert space. \end{crl}
\begin{proof} This is an immediate consequence of \pref{P:unattainable}\?(c), \tref{T:disjoint} and \lref{L:Hilbert}\?(b). \end{proof}
\begin{urmk}\label{R:disk} It is interesting to note that the proof of \tref{T:disjoint} does not work in the disk model. To understand what could go wrong, consider the situation where $ C_0 $ and $ C_1 $ have the same radius and the midpoint of the segment joining their centers is the origin of $ D $ (all concepts here being Euclidean). Then the hyperbolic radius of $ C_{\frac{1}{2}} $ can be arbitrarily small compared to that of $ C_0 $ and $ C_1 $. \end{urmk}
\begin{urmk}\label{R:disjoint} The argument in the proof of \tref{T:disjoint} goes through without modifications to show that each component of $ \bar{\sr C}_{\ka_1 }^{\ka_2 }(u,v) $ is contractible if $ [\ka_1,\ka_2] $ is disjoint from $ [-1,1] $. \end{urmk}
\section{The case where $ (\ka_1,\ka_2) \subs [-1,1] $}\label{S:contained}
In this section we will work with the spaces $ \sr L_{\ka_1 }^{\ka_2 }(u,v) $ which are introduced in \S\ref{S:discontinuous}. These are larger than $ \sr C_{\ka_1 }^{\ka_2 }(u,v) $ in that they include regular piecewise $ C^2 $ curves. Our proof of \tref{T:contained} uses such curves and is therefore more natural in the former class. We recommend that the reader ignore the technical details for the moment and postpone a careful reading of \S\ref{S:discontinuous}. There is also a way to carry out the proof below in the setting of $ \sr C_{\ka_1 }^{\ka_2 }(u,v) $: Whenever a discontinuity of the curvature arises, take an approximation by a smooth curve (which needs to be constructed); this path is more elementary but also cumbersome.
The reader will notice that similar discussions appear in \cite{Saldanha3}, \cite{SalZueh} and \cite{SalZueh1}. Recall the Mercator model $ M $ defined in \dref{D:Mercator}. Suppose that $ \ga $ is a smooth regular curve in $ M $ which can be written in the form
\begin{equation}\label{E:gammax}
\ga\colon [a,b] \to M,\quad \ga(x) = (x,y(x))
\end{equation}
for some $ [a,b] \subs (0,\pi) $; in other words, the image of $ \ga $ is the graph of a function $ y(x) $. A straightforward computation using \rref{R:compM} shows that the curvature of $ \ga $ is then given by
\begin{equation}\label{E:curvatureM}
\ka_\ga = \frac{1}{\sqrt{1+\dot y^2}}\bigg( \frac{\ddot y \sin x}{1+\dot y^2} - \dot y \cos x \bigg).
\end{equation}
More important than this expression itself is the observation that it does not involve $ y $, only its derivatives (because vertical translations are isometries of $ M$). This can be exploited to express geometric properties of $ \ga $ solely in terms of the function $ f = \dot y $, and in particular to prove the following.\footnote{A similar construction was used in \S3 of \cite{SalZueh1}.}
\begin{thm}\label{T:contained} If $ (\ka_1,\ka_2) $ is contained in $ [-1,1] $, then $ \sr C_{\ka_1 }^{\ka_2 }(u,v) $ is either empty or contractible, hence homeomorphic to the separable Hilbert space. \end{thm}
\begin{proof} By \pref{P:reduction}, we can assume that $ u $ is represented in $ M $ as the vector $ 1 \in \Ss^1 $ based at $ (\frac{\pi}{2},0) $. By \lref{L:C^2}, it suffices to prove that the Banach manifold $ \sr L_{\ka_1 }^{\ka_2 }(u,v) $ is either empty or weakly contractible. Let
\begin{equation*}
\Ss^k \to \sr L_{\ka_1}^{\ka_2}(u,v) ,\quad p\mapsto \ga_p
\end{equation*}
be a continuous map. By \lref{L:smoothie}, it can be assumed that each $ \ga_p $ is smooth. In particular, it is possible to choose $ \bar\ka_2<\ka_2 $ and $ \bar\ka_1>\ka_1 $ such that the curvatures of all the $ \ga_p $ take values inside $ (\bar \ka_1,\bar\ka_2) $. By \lref{L:graph}, any such curve $ \ga $ may be parametrized as in \eqref{E:gammax}. The function $ f = \dot y $ satisfies the following conditions, whose meaning will be explained below:
\begin{enumerate}
\item [(i)] $ f(a)= \al $ and $ f(b)= \be $.
\item [(ii)] $ \int_a^b f(t)\,dt = A_0 $.
\item [(iii)] $ \psi_{\bar \ka_1}(x,f(x)) \leq \dot f(x) \leq \psi_{\bar\ka_2}(x,f(x)) $ for a.e.~$ x \in [a,b] $, where $ \psi_\ka \colon [a,b] \times \R \to \R $ is given by:
\begin{equation*}
\psi_{\ka}(x,z) = \frac{1+z^2}{\sin x} \big(z\cos x + \ka \sqrt{1+z^2}\big)\qquad (\ka \in [\bar \ka_1,\bar \ka_2],~ x\in [a,b],~z\in \R).
\end{equation*}
\end{enumerate}
In the present case, $ a = \frac{\pi}{2} $ is the $ x $-coordinate of $ \ga(a) = \big(\frac{\pi}{2},0\big) $ and $ b $ is the $ x $-coordinate of $ \ga(b) \in M \subs \C $. The real numbers $ \al =0 $ and $ \be $ in (i) are the slopes of $ u $ and $ v $ regarded as vectors in $ \C $. Condition (ii) prescribes the $ y $-coordinate of $ \ga(b) $. Finally, the inequalities in (iii) express the fact that the curvature of $ \ga $ takes values in $ [\bar\ka_1,\bar\ka_2] $; compare \eqref{E:curvatureM}. Conversely, suppose that an absolutely continuous function $ f\colon [a,b] \to \R $ satisfies (i)--(iii). If we set
\begin{equation*}
y(x):=\int_a^xf(t)\,dt
\end{equation*}
and define $ \ga $ through \eqref{E:gammax}, then $ \ga \in \sr L_{\ka_1}^{\ka_2}(u,v) $.
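As an elementary illustration of these conditions (a sanity check only), take $ f\equiv 0 $: the corresponding curve is the meridian through $ \big(\frac{\pi}{2},0\big) $, which is a geodesic, and indeed $ \psi_{\ka}(x,0)=\ka/\sin x $, so that the inequalities in (iii) hold for $ f\equiv 0 $ precisely when $ \bar\ka_1\leq 0\leq \bar\ka_2 $ (recall that $ \sin x>0 $ on $ (0,\pi) $), as expected of a curve of vanishing curvature.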
Thus one can produce a contraction of the original family of curves by constructing a homotopy $ (s,f) \mapsto f_s $ of the corresponding family of functions $ f=f_1 $, subject to the stated conditions throughout, with $ f_0 $ independent of $ f $. This is what we shall now do. For each $ \ka \in [\bar \ka_1,\bar \ka_2] $, let $ g_\ka $ be the solution of the initial value problem
\begin{equation}\label{E:gka}
\dot g(x) = \psi_{\ka}(x,g(x)), \quad g(a) = \al,
\end{equation}
where $ \al $ and $ \psi_\ka $ are as in conditions (i) and (iii). Geometrically, the graph of $ g $ is an arc of the hypercircle of curvature $ \ka $ with initial unit tangent vector $ u $, represented in $ M $. In particular, $ g_\ka $ is defined over all of $ [a,b] $ by \lref{L:graph}. Similarly, for each $ \ka \in [\bar \ka_1,\bar \ka_2] $, let $ h_{\ka} $ be the solution of the initial value problem
\begin{equation*}
\dot h(x) = \psi_{\ka}(x,h(x)), \quad h(b) = \be.
\end{equation*}
Geometrically, the graph of $ h_\ka $ is an arc of the hypercircle of curvature $ \ka $ whose final unit tangent vector is $ v $. Although $ h_{\ka} $ is smooth, it is possible that it is not defined over all of $ [a,b] $; if its maximal domain of definition is $ (a',b] $ for some $ a' > a $, then we extend $ h_\ka $ to a function $ [a,b] \to \R \cup \se{\pm \infty} $ by setting it equal to $ \lim_{x \to a'{+}} h_{\ka}(x) $ over $ [a,a'] $. Using this geometric interpretation, it follows from \lref{L:unattainable} that
\begin{equation*}
g_{\bar\ka_1},\,h_{\bar\ka_1} \leq f \leq g_{\bar\ka_2},\,h_{\bar\ka_2}.
\end{equation*}
Moreover, from the same result one deduces that
\begin{equation}\label{E:increasing}
g_{\ka}(x) < g_{\ka'}(x) \ \ \text{for all\ \ $ x > a $\ \ if\ \ $ \ka < \ka' $}.
\end{equation}
For each $ \la,\,\mu \in [\bar \ka_1,\bar \ka_2]$, define $ f_{(\la,\mu)} \colon [a,b]\to \R$ by
\begin{equation}\label{E:median}
f_{(\la,\mu)}(x)=\midd \big(h_{\bar \ka_1}(x) \,,\, g_{\la}(x) \,,\, f(x) \,,\, g_{\mu}(x) \,,\, h_{\bar\ka_2}(x) \big).
\end{equation}
Notice first that this function does not take on infinite values, since $ f,\,g_\la,\,g_\mu $ are real functions. Similarly, since three of the functions above (namely, $ f,\,g_{\la},\,g_{\mu} $) take the value $ \al $ at $ a $, and three of them (namely, $ f,\,h_{\bar\ka_1},\,h_{\bar\ka_2} $) take the value $ \be $ at $ b $, $ f_{(\la,\mu)} $ automatically satisfies condition (i). It is easy to verify that it is Lipschitz and satisfies (iii) as well (see \lref{L:median} below). It remains to choose $ (\la,\mu) $ appropriately to ensure that it satisfies condition (ii). Let
\begin{alignat*}{9}
\ka_+ = \min &\set{\ka \in [\bar \ka_1,\bar\ka_2]}{f(x) \leq g_{\ka}(x) \text{ for all $ x \in [a,b] $}}, \\
\ka_- = \max &\set{\ka \in [\bar \ka_1,\bar\ka_2]}{g_{\ka}(x) \leq f(x) \text{ for all $ x \in [a,b] $}}, \\
\De=&\set{(\la,\mu)\in [\ka_-,\ka_+]^2}{\la \leq \mu} \text{\quad (cf.~\fref{F:delta})}.
\end{alignat*}
\begin{figure}[ht] \begin{center} \includegraphics[scale=.43]{delta} \caption{ A diagram illustrating the triangle $ \De $ in the $ (\la,\mu) $-plane. The dashed segments are the intersections of $ \De $ with the lines $ \set{(\la,\mu) \in \R^2}{\mu-\la = s(\ka_+-\ka_-)} $ $ (s\in [0,1]) $. } \label{F:delta} \end{center} \end{figure}
Define $A \colon \De\to \R$ to be the area under the graph of $f_{(\la,\mu)}$:
\begin{equation*}
A(\la,\mu)=\int_a^bf_{(\la,\mu)}(x)\?dx.
\end{equation*}
Continuous dependence of the solutions of \eqref{E:gka} on $ \ka $ implies that $A$ is a continuous function of $ (\la,\mu) $. Moreover, from
\begin{equation*}
h_{\bar \ka_1},\,g_{\ka_-} \leq f \leq g_{\ka_+},\,h_{\bar \ka_2},
\end{equation*}
one deduces that
\begin{equation*}
f_{(\ka_-,\mu)} \leq f \leq f_{(\la,\ka_+)}
\end{equation*}
for all $ \la,\,\mu \in [\ka_-,\ka_+] $. Consequently, because the integral of $ f $ equals $ A_0 $, for each $s\in [0,1]$ the set
\begin{equation*}
\set{(\la,\mu)\in \De}{\mu-\la=s(\ka_+-\ka_-)\text{\ and\ } A(\la,\mu) = A_0}
\end{equation*}
is nonempty (see \fref{F:delta}). In fact, it consists of a unique point $ (\la(s),\mu(s)) $. To establish this, it suffices to show that $ A(\la,\mu) $ is a strictly increasing function of both $ \la $ and $ \mu $. Now if
\begin{equation*}
\ka_- \leq \la < \la' \leq \mu \leq \ka_+,
\end{equation*}
then the set of all $ x \in [a,b] $ for which $ g_{\la}(x) < f(x) < g_{\la'}(x) $ is nonempty, by \eqref{E:increasing} and the choice of $ \ka_{\pm} $. Therefore $ f_{(\la,\mu)} < f_{(\la',\mu)} $ holds over a set of positive measure, while the nonstrict inequality holds everywhere by \eqref{E:increasing}. This proves strict monotonicity with respect to $ \la $; the argument for $ \mu $ is analogous. Continuity of $ A $ implies continuity of the curve $ s\mapsto (\la(s),\mu(s)) \in \De $ (which is depicted in bold in \fref{F:delta}). The functions
\begin{equation*}
f_s\colon [a,b]\to \R,\quad f_s=f_{(\la(s),\mu(s))}
\end{equation*}
satisfy all of conditions (i)--(iii) by construction, and they depend continuously on $ f $ and $ s $. Let $ \ka_0 = \la(0) = \mu(0) $; then
\begin{equation*}
f_0 = \midd \big( h_{\bar\ka_1} \,,\, g_{\ka_0} \,,\, h_{\bar\ka_2} \big).
\end{equation*}
By \eqref{E:increasing}, there is at most one value of $ \ka \in [\bar \ka_1,\bar \ka_2] $ for which the integral of $ \midd ( h_{\bar\ka_1} \,,\, g_{\ka} \,,\, h_{\bar\ka_2} ) $ equals $ A_0 $. This implies that $ \ka_0 $, and hence $ f_0 $, is independent of $ f $. Therefore $ (f,s) \mapsto f_s $ is indeed a contraction. \end{proof}
The following fact was used without proof above.
\begin{lem}\label{L:median} The function $ f_{(\la,\mu)} $ of \eqref{E:median} is Lipschitz and satisfies (iii). \end{lem}
\begin{proof} More generally, let $ \phi $ be the median of $ \phi_1,\dots,\phi_{2n+1} \colon [a,b] \to \R$. If each $ \phi_k $ is $ c $-Lipschitz, then $ \phi $ is $ c $-Lipschitz. Hence $ \phi $ is absolutely continuous and its derivative exists a.e. Furthermore, if $ \phi $ and each $ \phi_k $ are differentiable at $ x $, then $ \phi'(x) = \phi_k'(x) $ for some $ k $ such that $ \phi(x) = \phi_k(x) $. In particular, if each $ \phi_k $ satisfies the inequalities in (iii) (with $ \phi_k $ in place of $ f $), then so does $ \phi $. The function $ f_{(\la,\mu)} $ does not immediately conform to this situation because $ h_{\bar \ka_i} $ $ (i=1,2) $ may take on infinite values. This can be circumvented by subdividing $ [a,b] $ into at most three subintervals (where none, one or both of $ h_{\bar \ka_i} $ are infinite) and applying the preceding remarks. \end{proof}
\section{Spaces of curves with constrained curvature of class $ C^r $} \label{S:general}
In this section we consider spaces of curves with constrained curvature on an arbitrary surface, not necessarily hyperbolic or orientable. We study their behavior under covering maps and show that they are always nonempty if $ S $ is compact; this should be contrasted with \lref{L:attainable}.
A \tdfn{surface} is a smooth Riemannian 2-manifold. Given a regular curve $ \ga \colon [0,1] \to S $, its \tdfn{unit tangent} $\ta=\ta_\ga$ is the lift of $ \ga $ to the unit tangent bundle $UTS$ of $S$:
\begin{equation*}
\ta\colon [0,1]\to UTS, \quad \ta(t)=\frac{\dot\ga(t)}{\abs{\dot\ga(t)}}.
\end{equation*}
Now let an orientation of $ TS_{\ga(0)} $ be fixed. The \tdfn{unit normal} to $\ga$ is the map $\no=\no_\ga\colon [0,1]\to UTS$ determined by the requirement that $ (\ta(t),\no(t)) $ is an orthonormal basis of $ TS_{\ga(t)} $ whose parallel translation to $ \ga(0) $ (along the inverse of $ \ga $) is positively oriented, for each $ t \in [0,1] $. Assuming $\ga$ is twice differentiable, its \tdfn{curvature} $\ka=\ka_\ga$ is given by
\begin{equation}\label{E:curvature}
\ka:=\frac{1}{\abs{\dot\ga}}\Big\langle\frac{D\ta}{dt},\no\Big\rangle =\frac{1}{\abs{\dot\ga}^2}\Big\langle\frac{D\dot\ga}{dt},\no\Big\rangle.
\end{equation}
Here $D$ denotes covariant differentiation (along $\ga$).
\begin{urmk}\label{R:alternative} If $ S $ is nonorientable, it is more common to define the (unsigned) curvature of $ \ga \colon [0,1] \to S $ by
\begin{equation*}
\ka = \frac{1}{\abs{\dot \ga}}\abs{\frac{D\ta}{dt}}.
\end{equation*}
If $ S $ is orientable, the usual definition coincides with \eqref{E:curvature}, but $ \no $ is defined by the condition that $ (\ta(t),\no(t)) $ be positively oriented with respect to a specified orientation of $ S $, rather than the parallel translation of an orientation of $ TS_{\ga(0)} $. These two definitions are equivalent, since an orientation of $ TS_{\ga(0)} $ determines an orientation of $ S $ if the latter is orientable. The definition that we have chosen has the advantage of allowing arbitrary bounds for the curvature of a curve on a nonorientable surface, and in particular the concise formulation of \lref{L:covering} below. \end{urmk}
A geometric interpretation for the curvature is the following: Let $ v\colon [0,1]\to UTS $ be any smooth parallel vector field along $ \ga $, and let $ \theta\colon [0,1]\to \R $ be a function measuring the oriented angle from $ v(t) $ to $ \ta(t) $. Then a trivial computation shows that $ \dot\theta = \ka \abs{\dot\ga} $. In particular, the \tdfn{total curvature}
\begin{equation*}
\int_0^1\ka(t)\abs{\dot\ga(t)}\,dt
\end{equation*}
of $ \ga $ equals $ \theta(1)-\theta(0) $. In all that follows, the curvature bounds $ \ka_1<\ka_2 $ are allowed to take values in $ \R\cup\se{\pm\infty} $, $ S $ denotes a surface and $ u,\,v\in UTS $. Moreover, it is assumed that an orientation of $ TS_p $, where $ p $ is the basepoint of $ u $, has been fixed.
\begin{dfn}[spaces of $ C^r $ curves]\label{D:Curve} Define $\sr CS_{\ka_1}^{\ka_2}(u,v)^r$ to be the set, endowed with the $C^r$ topology (for some $ r \geq 2 $), of all $C^r$ regular curves $\ga\colon [0,1]\to S$ such that:
\begin{enumerate}
\item [(i)] $\ta_\ga(0)=u$ and $\ta_\ga(1)=v$;
\item [(ii)] $\ka_1<\ka_\ga(t)<\ka_2$ for each $t\in [0,1]$.
\end{enumerate}
\end{dfn}
\begin{urmk} Of course, whether $ S $ is orientable or not, the topological properties (or even the question of whether it is empty) of $ \sr CS_{\ka_1 }^{\ka_2 }(u,v) $ may be sensitive to the choice of orientation for $ TS_p $. More precisely, if $ \bar S $ denotes the same surface $ S $ with the opposite orientation of $ TS_p $, then $ \sr C\bar S_{\ka_1 }^{\ka_2 }(u,v) = \sr CS_{-\ka_2}^{-\ka_1}(u,v) $. \end{urmk}
It will follow from (\ref{L:C^2}) that $r$ is irrelevant in the sense that different values yield spaces which are homeomorphic.
Because of this, $ \sr CS_{\ka_1}^{\ka_2}(u,v)^r $ is denoted simply by $ \sr CS_{\ka_1}^{\ka_2}(u,v) $ in this section.
\begin{lem}\label{L:contractible} Define $ \sr CS_{\ka_1}^{\ka_2}(u,\cdot) $ as in \dref{D:Curve}, except that no condition is imposed on $ \ta_\ga(1) $, and similarly for $ \sr CS_{\ka_1}^{\ka_2}(\cdot,v) $ and $ \sr CS_{\ka_1}^{\ka_2}(\cdot,\cdot) $. Then:
\begin{enumerate}
\item [(a)] $ \sr CS_{\ka_1}^{\ka_2}(u,\cdot) $ and $ \sr CS_{\ka_1}^{\ka_2}(\cdot,v) $ are contractible.
\item [(b)] $ \sr CS_{\ka_1}^{\ka_2}(\cdot,\cdot) $ is homotopy equivalent to $ UTS $.
\end{enumerate}
\end{lem}
\begin{proof} By \lref{L:Hilbert}\?(b), to prove that $ \sr CS_{\ka_1 }^{\ka_2 }(u,\cdot) $ is contractible, it is actually sufficient to show that it is \tsl{weakly} contractible. Let
\begin{equation*}
K \to \sr CS_{\ka_1}^{\ka_2}(u,\cdot),\quad p \mapsto \ga^p,
\end{equation*}
be a continuous map, where $ K $ is a compact space. By a preliminary homotopy, each $ \ga^p $ may be reparametrized with constant speed. Let $ \la(p) = \text{length}(\ga^p) $ and $ 0 < \la < \inf_{p \in K}\la(p) $. The curves can be shrunk through the homotopy
\begin{equation*}
(s,p)\mapsto \ga_s^p,\quad \ga^p_s(t):=\ga^p\Big(\la(p)^{-1}\big[(1-s)\la+s\la(p)\big]\?t\Big) \quad (s,\,t\in [0,1],~p \in K)
\end{equation*}
so that all $ \ga_0^p $ have length $ \la $, which can be chosen smaller than the injectivity radius of $ S $ at the basepoint of $ u $. Then each $ \ga_0^p $ is determined solely by its curvature, and conversely any function $ \ka \colon [0,1] \to (\ka_1,\ka_2) $ of class $ C^{r-2} $ determines a unique curve of constant speed $ \la $ in $ S $ having $ u $ for its initial unit tangent vector. But the set of all such functions is convex. Reversal of orientation of curves yields a homeomorphism between $ \sr CS_{-\ka_2 }^{-\ka_1 }(-v,\cdot) $ and $ \sr CS_{\ka_1 }^{\ka_2 }(\cdot,v) $, hence a space of the latter type is also contractible. For (b), consider the map $ f\colon UTS \to \sr CS_{\ka_1 }^{\ka_2 }(\cdot,\cdot) $ which associates to $ u $ the unique curve of constant curvature $ \frac{1}{2}(\ka_1+\ka_2) $ having $ u $ for its initial unit tangent and length equal to half the injectivity radius of $ S $ at the basepoint of $ u $, parametrized with constant speed. Using the argument of the first paragraph, one deduces that $ f $ is a weak homotopy inverse of
\begin{equation*}
g \colon \sr CS_{\ka_1 }^{\ka_2 }(\cdot,\cdot) \to UTS, \quad g(\ga) = \ta_\ga(0).
\end{equation*}
Therefore $ \sr CS_{\ka_1 }^{\ka_2 }(\cdot,\cdot) $ is homotopy equivalent to $ UTS $, by \lref{L:Hilbert}\?(b). \end{proof}
\begin{lem}\label{L:covering} Let $ q\colon \te{S} \to S $ be a Riemannian covering (a covering map which is also a local isometry), $ u,\,v\in UTS $ and $ \te{u} \in dq^{-1}(u) $ a fixed lift of $ u $, based at $ \te{p} \in \te{S} $. Suppose that $ dq \colon \big(T\te{S}_{\te{p}},\te{u}\big) \to \big(TS_p,u\big) $ preserves the chosen orientation of these tangent planes. Then $ \te{\ga} \mapsto q\circ \te{\ga} $ yields a homeomorphism
\begin{equation*}
\bdu_{\te{v}\in dq^{-1}(v)}\sr C\te{S}_{\ka_1}^{\ka_2}(\te{u},\te{v}) \home \sr CS_{\ka_1}^{\ka_2}(u,v).
\end{equation*}
\end{lem}
\begin{proof} Let $ \ga \in \sr CS_{\ka_1 }^{\ka_2 }(u,v) $ and $ \te{\ga} \colon [0,1] \to \te{S} $ be its lift to $ \te{S} $ starting at $ \te{p} $. Since $ dq $ is an isometry,
\begin{equation*}
dq(\ta_{\te{\ga}}) = \ta_\ga \quad \text{and} \quad dq(\no_{\te{\ga}}) = \pm \no_\ga.
\end{equation*}
Moreover, $ dq(\no_{\te{\ga}}(0)) = \no_{\ga}(0) $ by the hypothesis regarding orientations, hence $ dq(\no_{\te{\ga}}) = \no_\ga $ by continuity. Now by \eqref{E:curvature}, $ \ka_{\te{\ga}} = \ka_\ga $. Therefore $ \te{\ga} \in \sr C\te{S}_{\ka_1 }^{\ka_2 }(\te{u},\te{v}) $ for some lift $ \te{v} $ of $ v $. Conversely, if $ \te{\ga} \in \sr C\te{S}_{\ka_1 }^{\ka_2 }(\te{u},\te{v}) $, then $ \ga = q\circ \te{\ga} \in \sr CS_{\ka_1 }^{\ka_2 }(u,v) $ for the same reasons. Since projection and lift (starting at $ \te{p} $) are inverse operations, the asserted homeomorphism holds. \end{proof}
This result is especially useful when $ S $ is a space form (e.g., a hyperbolic surface); for then $ S $ is the quotient by a discrete group of isometries of the model simply-connected space of the same curvature, which is much more familiar. The lemma also furnishes a reduction to the orientable case by taking $ \te{S} $ to be the two-sheeted orientation covering of $ S $.
\begin{exr}\label{X:symmetric} Suppose that $ (\ka_1,\ka_2) $ is symmetric about 0. Then the conclusion of \lref{L:covering} holds regardless of whether $ dq $ preserves orientation at $ \te{p} $. (\tit{Hint:} See the remark following \dref{D:Curve}.) \end{exr}
\begin{urmk} If closed or half-open intervals were used instead in \dref{D:Curve}, then substantial differences would only arise in marginal cases. For instance, in the situation of \lref{L:unattainable}, if $ v $ is tangent to $ \bd R $, then $ \sr C_{\ka_1 }^{\ka_2 }(u,v) $ is empty, while $ \bar{\sr C}_{\ka_1 }^{\ka_2 }(u,v) $ may not be. The original definition is more convenient to work with since the resulting spaces are Banach manifolds; compare Example 1.1 in \cite{SalZueh1}. \end{urmk}
\begin{thm}\label{T:compact} Let $S$ be a compact connected surface. Then $ \sr CS_{\kappa_1}^{\kappa_2}(u,v)\neq \emptyset $ for any choice of $\ka_1<\ka_2$ and $u,\,v\in UTS$. \end{thm}
\begin{proof} By passing to the orientation covering if necessary, it may be assumed that $ S $ is oriented. Let $\ka_1<\ka_2$ be fixed. For $u,\, v\in UTS$, write $u\prec v$ if $\sr CS_{\kappa_1}^{\kappa_2}(u,v)\neq \emptyset$. Notice that $\prec$ is transitive: Given curves in $ \sr CS_{\ka_1 }^{\ka_2 }(u,v) $ and $ \sr CS_{\ka_1 }^{\ka_2 }(v,w) $, their concatenation starts at $ u $ and ends at $ w $; its curvature may fail to exist at the point of concatenation, but this can be fixed by taking a smooth approximation. Let
\begin{equation*}
F_u=\set{v\in UTS}{u \prec v}, \quad E_u=\set{v\in UTS}{u \prec v \textrm{ and } v \prec u}.
\end{equation*}
It is clear from the definition that $F_u\neq \emptyset$ for all $u$, and that $F_u$ and $E_u$ are open subsets of $ UTS $ (cf.~\lref{L:submersion}). Moreover, the family $(F_u)_{u \in UTS}$ covers $UTS$. Indeed, given $v$, we can find $u$ such that $\sr CS_{-\ka_2}^{-\ka_1}(-v,-u)\neq \emptyset$. Reversing the orientation of a curve in the latter set, we establish that $\sr CS_{\kappa_1}^{\kappa_2}(u,v) \neq \emptyset$, that is, $v\in F_u$. Since $UTS$ is compact, it can be covered by finitely many of the $F_u$. Let $UTS=F_{u_1}\cup \dots \cup F_{u_m}$ be a minimal cover. We claim that $m=1$. Assume that $m > 1$. If $u_i \prec u_j$ then $F_{u_i} \sups F_{u_j}$, and therefore by minimality $i = j$. Since $u_i \in UTS$ and $u_i \notin F_{u_j}$ for $j \ne i$, we deduce that $u_i \in F_{u_i}$ for each $i$. The open sets $E_{u_i}$ are thus nonempty and disjoint.
On the other hand, every $F_{u_i}$ must intersect some $F_{u_j}$ with $j\neq i$, as $UTS$ is connected. Choose $i\neq j$ such that $F_{u_i} \cap F_{u_j}\neq \emptyset$. It is easy to see that $F_{u_i} \cap F_{u_j}$ must be disjoint from $E_{u_i}$. Thus, if $V$ is the interior of $F_{u_i} \ssm E_{u_i}$, then $V\neq \emptyset$. We will obtain a contradiction from this. By definition, there exist $u_\ast \in E_{u_i}$, $v_\ast \in V$ and $\gamma_\ast \in \sr CS_{\kappa_1}^{\kappa_2}(u_\ast,v_\ast)$. Let $\ga_\ast\colon [0,L]\to S$ be parametrized by arc-length and $\ka_\ast\colon [0,L]\to (\ka_1,\ka_2)$ denote its curvature. The tangent bundle $ TS $ has a natural volume form coming from the Riemannian structure of $S$. A theorem of Liouville states that the geodesic flow preserves volume in $ TS $. Since $S$ is oriented, given $\theta\in \R$ we may define a volume-preserving bundle automorphism on $ UTS $ by $w\mapsto \cos\theta w+\sin\theta J(w)$ (where $ J $ is ``multiplication by $ i $''). Let $Y_0$ and $Z$ be the vector fields on $ UTS $ corresponding to the geodesic flow and to counterclockwise rotation, respectively. Then for any $\kappa\in \R$, the vector field $Y_\kappa = Y_0 + \kappa Z$ defines a volume-preserving flow on $UTS$; the projections of its orbits on $S$ are curves parametrized by arc-length of constant curvature $\kappa$. By definition, the open set $V$ is forward-invariant under each of these flows for $\ka\in (\ka_1,\ka_2)$. Define a map $G\colon UTS\to UTS$ as follows: Given $u\in UTS$, $G(u) = \ta_\eta(L)$, where $\eta\colon [0,L]\to S$ is the unique curve in $\sr CS_{\kappa_1}^{\kappa_2}(u,\cdot)$, parametrized by arc-length, whose curvature is $\ka_\ast$. Then $G$ must be volume-preserving and $G(V)\subs V$, but there exists a neighborhood $N$ of $u_\ast$ contained in $E_{u_i}$ which is taken by $G$ to a neighborhood of $v_\ast$ contained in $V$. Since $N$ is disjoint from $V$, the map $G$ would take $N\cup V$ into $V$, even though $N\cup V$ has strictly greater volume than $V$; this contradicts the fact that $G$ preserves volume. We conclude that $m=1$, so that $ UTS = F_{u_1} $. Furthermore, $F_{u_1}\ssm E_{u_1}$ must have empty interior by the preceding argument. Hence $E_{u_1}$ is a dense open set in $UTS$. Let $u,\,v\in UTS$ be given. Since $F_u$ is open, there exists $ v_1\in F_u \cap E_{u_1} $. Then $ u\prec v_1 $. Since $v_1\prec u_1$ and $u_1\prec v$, we deduce that $u\prec v$; in other words, $\sr CS_{\kappa_1}^{\kappa_2}(u,v)\neq \emptyset$. \end{proof}
\subsection*{Spaces of closed curves without basepoints}
Just as in algebraic topology one considers homotopies with and without basepoint conditions, one can also study the space of all smooth closed curves on $ S $ with curvature in an interval $ (\ka_1,\ka_2) $ but no restrictions on the initial and final unit tangents. Let this space be denoted by $ \sr CS_{\ka_1}^{\ka_2} $. In some regards this class may seem more fundamental than its basepointed version, the class of spaces $ \sr CS_{\ka_1 }^{\ka_2 }(u,v) $ considered thus far. However, in the Hirsch-Smale theory of immersions, such basepoint conditions arise naturally. For instance, thm.~C of \cite{Smale} states that $ \sr CS_{-\infty}^{+\infty}(u,u) $ is (weakly) homotopy equivalent to the loop space $ \Om(UTS)(u) $ consisting of all loops in $ UTS $ based at $ u $. Moreover, even if one is interested only in closed curves, it is often helpful to study $ \sr CS_{\ka_1 }^{\ka_2 } $ by lifting its elements to the universal cover of $ S $, and these lifts need not be closed. A further point is provided by the following result.
Recall that $ UTS $ is diffeomorphic to $ \R^2 \times \Ss^1 $ if $ S = \R^2 $ or $ S = \Hh^2 $, and to $ \SO_3 \home \RP^3 $ if $ S=\Ss^2 $.
\begin{lem}\label{L:closed} Let $ S $ be a simply-connected complete surface of constant curvature and $ u \in UTS $ be arbitrary. Then $ \sr CS_{\ka_1 }^{\ka_2 } $ is homeomorphic to $ UTS \times \sr CS_{\ka_1 }^{\ka_2 }(u,u) $. \end{lem}
\begin{proof} The group of orientation-preserving isometries of such a surface acts simply transitively on $ UTS $. Given $ v \in UTS $, let $ g_v $ denote its unique element mapping $ v $ to $ u $. Define
\begin{equation*}
f \colon \sr CS_{\ka_1 }^{\ka_2 } \to UTS \times \sr CS_{\ka_1 }^{\ka_2 }(u,u),\quad f(\ga) = (\ta(0) , g_{\ta(0)} \circ \ga ).
\end{equation*}
This is clearly continuous, and so is its inverse, which is given by $ (v,\eta) \mapsto g_v^{-1} \circ \eta $. \end{proof}
Regard the elements of $ \sr CS_{\ka_1 }^{\ka_2 } $ as maps $ \Ss^1 \to S $. There is a natural projection $ \sr CS_{\ka_1 }^{\ka_2 } \to UTS $ taking a curve $ \ga\colon \Ss^1 \to S $ to its unit tangent at $ 1 \in \Ss^1 $. This is a fiber bundle if $ \ka_1=-\infty $ and $ \ka_2=+\infty $ since $ S $ is locally diffeomorphic to $ \R^2 $ and the group of diffeomorphisms of the latter acts transitively on $ UT\R^2 $. It may be a fibration in certain other special cases, as in the situation of \lref{L:closed}, but in general this cannot be guaranteed. For instance, if $ S=T^2 $ is a flat torus, then the homotopy type of the fibers $ \sr CS_{-1}^{+1}(u,u) $ of the map $ \sr CS_{-1}^{+1} \to UTS $ is not locally constant; this follows from an example in the introduction of \cite{SalZueh2}. We believe that little is known about the topology of $ \sr CS_{\ka_1}^{\ka_2} $ beyond what is implied by \lref{L:closed}.
\section{Spaces of curves with discontinuous curvature}\label{S:discontinuous}
Suppose that $\ga\colon [0,1]\to S$ is a smooth regular curve and, as always, $ TS_{\ga(0)} $ has been oriented. Let $\sig\colon [0,1]\to \R^+$ denote its speed $ \abs{\dot\ga} $ and $\ka$ its curvature. Then $\ga$ and $\ta=\ta_\ga\colon [0,1]\to UTS$ satisfy:
\begin{equation}\label{E:de}
\begin{cases} \dot\ga=\sig\ta \\ \frac{D\ta}{dt}=\sig\ka\? \no \end{cases} \quad\text{and}\quad \ta(0)=u\in UTS.
\end{equation}
Thus, $\ga$ is uniquely determined by $ u $ and the functions $\sig,\,\ka$. One can define a new class of spaces by relaxing the conditions that $\sig$ and $\ka$ be smooth. Let $h\colon (0,+\infty)\to \R$ be the diffeomorphism
\begin{equation*}\label{E:thereason}
h(t)=t-t^{-1}.
\end{equation*}
For each pair $\ka_1<\ka_2\in \R$, let $h_{\ka_1,\,\ka_2} \colon (\ka_1,\ka_2)\to \R$ be the diffeomorphism
\begin{equation*}
h_{\ka_1,\,\ka_2}(t)=(\ka_1-t)^{-1}+(\ka_2-t)^{-1}
\end{equation*}
and, similarly, set
\begin{alignat*}{10}
&h_{-\infty,+\infty}\colon \R\to \R,\qquad & &t\mapsto t \\
&h_{-\infty,\ka_2}\colon (-\infty,\ka_2)\to \R,\qquad & &t\mapsto t+(\ka_2-t)^{-1} \\
&h_{\ka_1,+\infty}\colon (\ka_1,+\infty)\to \R, \qquad & &t\mapsto t+(\ka_1-t)^{-1}.
\end{alignat*}
Notice that all of these functions are monotone increasing, hence so are their inverses. Moreover, if $\hat{\ka}\in L^2[0,1]$, then $\ka=h_{\ka_1,\ka_2}^{-1}\circ \hat\ka\in L^2[0,1]$ as well. This is obvious if $(\ka_1,\ka_2)$ is bounded, and if one of $\ka_1,\ka_2$ is infinite, then it is a consequence of the fact that $h_{\ka_1,\ka_2}^{-1}(t)$ diverges linearly to $\pm \infty$ with respect to $t$. In what follows, $\L$ denotes the separable Hilbert space $L^2[0,1]\times L^2[0,1]$.
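\begin{urmk}
As an elementary check on these changes of variable, note that $ h $ has the explicit inverse $ h^{-1}(s)=\frac{1}{2}\big(s+\sqrt{s^2+4}\,\big) $, which diverges linearly as $ s\to+\infty $, while
\begin{equation*}
h_{\ka_1,\,\ka_2}'(t)=(\ka_1-t)^{-2}+(\ka_2-t)^{-2}>0 \qquad (t\in (\ka_1,\ka_2)),
\end{equation*}
so that $ h_{\ka_1,\,\ka_2} $ is indeed increasing; it maps $ (\ka_1,\ka_2) $ onto $ \R $ because its first term diverges to $ -\infty $ as $ t\to\ka_1{+} $ and its second term to $ +\infty $ as $ t\to\ka_2{-} $.
\end{urmk}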
\begin{dfn}[admissible curve] A curve $\ga\colon [0,1]\to S$ is $(\ka_1,\ka_2)$-\tdfn{admissible} if there exists $(\hat \sig,\hat \ka)\in \L$ such that $\ga$ satisfies \eqref{E:de} with
\begin{equation}\label{E:Sobolev}
\sig=h^{-1}\circ \hat{\sig}\text{\quad and\quad } \ka=\?h^{-1}_{\ka_1,\,\ka_2}\circ \hat{\ka}.
\end{equation}
When it is not important to keep track of the bounds $\ka_1,\ka_2$, we will simply say that $\ga$ is \tdfn{admissible}. \end{dfn}
The system \eqref{E:de} has a unique solution for any $(\hat \sig,\hat \ka)\in \L$ and $u\in UTS$. To see this, we use coordinate charts for $TS$ derived from charts for $S$ and apply thm.~C.3 on p.~386 of \cite{Younes} to the resulting differential equation, noticing that $S$ is smooth and $\sig,\, \ka\in L^2[0,1]\subs L^1[0,1]$. Furthermore, if we assume that $ S $ is complete, then the solution is defined over all of $[0,1]$. The resulting maps $\ga\colon [0,1]\to S$ and $\ta\colon [0,1]\to TS$ are absolutely continuous (see p.~385 of \cite{Younes}), and so is $ \no $. Using that $\gen{\ta,\no}\equiv 0$ and differentiating, we obtain, in addition to \eqref{E:de}, that
\begin{equation*}
\frac{D\no}{dt}=-\sig \ka\?\ta \text{\quad and \quad} \abs{\ta(t)}=\abs{\no(t)}=\abs{u}=1\ \ \text{for all $t\in [0,1]$.}
\end{equation*}
Therefore, $\sig=\abs{\dot\ga}$, $\ta_\ga=\ta$ and $\no_\ga=\no$. It is thus natural to call $ \sig $ and $\ka$ the \tdfn{speed} and \tdfn{curvature} of $\ga$, even though $ \sig,\,\ka\in L^2[0,1] $.
\begin{urmk} Although $\dot \ga=\sig\ta$ is, in general, defined only almost everywhere on $[0,1]$, if we reparametrize $\ga$ by arc-length then it becomes a regular curve, because $\ga'=\ta$ is continuous. It is helpful to regard admissible curves simply as regular curves whose curvatures are defined a.e. \end{urmk}
\begin{dfn}\label{D:loose} For $u\in UTS$, let $\sr LS_{\ka_1}^{\ka_2}(u,\cdot)$ be the set of all $(\ka_1,\ka_2)$-admissible curves $\ga\colon [0,1]\to S$ with $\ta_\ga(0)=u$. \end{dfn}
If $ S $ is complete, then this set is identified with $\L$ via the correspondence $\ga\leftrightarrow (\hat \sig,\hat \ka)$, thus furnishing $\sr LS_{\ka_1}^{\ka_2}(u,\cdot)$ with a trivial Hilbert manifold structure. If $ S $ is not complete, then $ \sr LS_{\ka_1}^{\ka_2}(u,\cdot) $ is some mysterious open subset of $ \L $. However, we still have the following.
\begin{lem}\label{L:mysterious} For all $ u \in UTS $, $ \sr LS_{\ka_1}^{\ka_2}(u,\cdot) $ is homeomorphic to $ \L $. \end{lem}
\begin{proof} The proof is almost identical to that of \lref{L:contractible}\?(a). Given a family of curves indexed by a compact space, first reparametrize all curves by constant speed and shrink them to a common length $ \la $ smaller than the injectivity radius of $ S $ at the basepoint of $ u $. Now each curve is completely determined by its curvature, and conversely any $ L^2 $-function $ \hat \ka \colon [0,1] \to \R $ determines a unique curve of constant speed $ \la $ having $ u $ for its initial unit tangent vector, via \eqref{E:Sobolev}. But the set of all such functions is convex. Thus $ \sr LS_{\ka_1}^{\ka_2}(u,\cdot) $ is weakly contractible, hence homeomorphic to $ \L $ by \lref{L:Hilbert}\?(a). \end{proof}
\begin{lem}\label{L:submersion} Let $ S $ be a surface and define $F\colon \sr LS_{\ka_1}^{\ka_2}(u,\cdot)\to UTS$ by $\ga\mapsto \ta_\ga(1)$. Then $F$ is a submersion, and consequently an open map. \end{lem}
\begin{proof} The proof when $ S = \R^2 $ is given in \cite{SalZueh1}, Lemma 1.5.
The proof in the general case follows by considering Riemannian normal coordinates, which are flat at least to second order, in a neighborhood of the basepoint of $ \ta_\ga(1) $. \end{proof}
\begin{dfn}[spaces of admissible curves]\label{D:Lurvespace} Define $\sr LS_{\ka_1}^{\ka_2}(u,v)$ to be the subspace of $\sr LS_{\ka_1}^{\ka_2}(u,\cdot)$ consisting of all $ \ga $ such that $\ta_\ga(1)=v$. \end{dfn}
It follows from (\ref{L:submersion}) that $\sr LS_{\ka_1}^{\ka_2}(u,v)$ is a closed submanifold of codimension 3 in $\sr LS_{\ka_1}^{\ka_2}(u,\cdot)\home \L$, provided it is not empty. We have already seen in \cref{C:closed} that such spaces may indeed be empty, but this cannot occur when $ S $ is compact; cf.~\tref{T:compact} and \lref{L:dense}.
\subsection*{Relations between spaces of curves}
If $(\ka_1,\ka_2)\subs (\bar \ka_1,\bar \ka_2)$ and $\ga\in \sr LS_{\ka_1}^{\ka_2}(u,v)$, then we can also consider $\ga$ as a curve in $\sr LS_{\bar\ka_1}^{\bar\ka_2}(u,v)$. However, the topology of the former space is strictly finer (i.e., has more open sets) than the topology induced by the resulting inclusion.
\begin{lem}\label{L:LtoLinclusion} Let $(\ka_1,\ka_2)\subs (\bar \ka_1,\bar \ka_2)$, $S$ be a surface and $u\in UTS$. Then
\begin{equation}\label{E:pair}
(\hat \sig,\hat \ka)\mapsto \big(\hat \sig, h_{\bar \ka_1,\bar\ka_2}\circ h_{\ka_1,\ka_2}^{-1}\circ \hat \ka\big)
\end{equation}
defines a continuous injection $j\colon \sr LS_{\ka_1}^{\ka_2}(u,\cdot)\to \sr LS_{\bar\ka_1}^{\bar\ka_2}(u,\cdot)$. The actual curves on $S$ corresponding to these pairs are the same, but $j$ is not a topological embedding unless $\bar \ka_1=\ka_1$ and $\bar \ka_2=\ka_2$. \end{lem}
\begin{proof} It may be assumed that $ S $ is oriented. The curve $\ga$ corresponding to $(\hat \sig,\hat \ka)\in \sr LS_{\ka_1}^{\ka_2}(u,v)$ is obtained as the solution of \eqref{E:de} with
\begin{equation*}
\sig=h^{-1}\circ \hat \sig \text{\quad and\quad }\ka=h^{-1}_{\ka_1,\,\ka_2}\circ \hat\ka.
\end{equation*}
The curve $\eta$ corresponding to the right side of \eqref{E:pair} in $\sr LS_{\bar\ka_1}^{\bar\ka_2}(u,\cdot)$ is the solution of \eqref{E:de} with
\begin{equation*}
\sig=h^{-1}\circ \hat \sig \text{\quad and\quad }\ka=h^{-1}_{\bar\ka_1,\,\bar\ka_2}\circ\big(h_{\bar \ka_1,\bar\ka_2}\circ h_{\ka_1,\ka_2}^{-1}\circ \hat \ka\big)=h^{-1}_{\ka_1,\,\ka_2}\circ \hat\ka.
\end{equation*}
By uniqueness of solutions, $\ga=\eta$. In particular, $j$ is injective. Set $g=h_{\bar \ka_1,\bar\ka_2}\circ h_{\ka_1,\ka_2}^{-1}$. Observe that
\begin{equation*}
\lim_{t\to+ \infty}g'(t)= \begin{cases} 1 & \text{ if $\bar\ka_2=\ka_2$;} \\ 0 & \text{ otherwise;} \end{cases} \qquad \lim_{t\to- \infty}g'(t)= \begin{cases} 1 & \text{ if $\bar\ka_1=\ka_1$;} \\ 0 & \text{ otherwise.} \end{cases}
\end{equation*}
Hence, $\abs{g'}$ is bounded over $\R$. Consequently, there exists $C>0$ such that
\begin{equation*}
\norm{g\circ f_1-g\circ f_2}_2\leq C\norm{f_1-f_2}_2\ \ \text{for any $f_1,\,f_2\in L^2[0,1]$.}
\end{equation*}
We conclude that $j\colon (\hat\sig,\hat\ka)\mapsto (\hat\sig,g\circ \hat\ka)$ is continuous. Suppose now that $(\ka_1,\ka_2)\psubs (\bar\ka_1,\bar\ka_2)$. No generality is lost in assuming that $\ka_2<\bar\ka_2$. Let
\begin{equation*}
m=g(0) \text{\ \ and \ \ } M=g(+\infty)=h_{\bar\ka_1,\bar\ka_2}(\ka_2).
\end{equation*}
Define a sequence of $L^2$ functions $\hat \ka_n\colon [0,1]\to \R$ by:
\begin{equation*}
\quad \hat\ka_n(t)= \begin{cases} n & \text{ if $t\in \big[\frac{1}{2}-\frac{1}{2n}\, ,\,\frac{1}{2}+\frac{1}{2n}\big]$;} \\ 0 & \text{ otherwise. } \end{cases}\qquad (n\in \N^+,~t\in [0,1]).
\end{equation*}
Since $g$ is the composite of increasing functions, $g(t)<g(+\infty)=M$ for any $t\in \R$. Therefore,
\begin{equation*}
\abs{g\circ \hat\ka_n(t)-m} \begin{cases} \leq M-m & \text{ if\, $t\in \big[\frac{1}{2}-\frac{1}{2n}\,,\, \frac{1}{2}+\frac{1}{2n}\big]$;} \\ = 0 & \text{ otherwise}. \end{cases}
\end{equation*}
Hence, $\norm{\hat {\ka}_n}_2=\sqrt{n}\to +\infty$ as $n$ increases, while $\norm{g\circ \hat \ka_n-m}_2\leq n^{-\frac{1}{2}}(M-m) \to 0$. We conclude that $ j $ is not a topological embedding. This argument may be modified to prove that $\sr LS_{\ka_1}^{\ka_2}(u,v) \inc \sr LS_{\bar\ka_1}^{\bar\ka_2}(u,v)$ is likewise not an embedding for any $v$. \end{proof}
\begin{lem}\label{L:CtoLinclusion} Let $(\ka_1,\ka_2)\subs (\bar \ka_1,\bar \ka_2)$ and $u,\,v\in UTS$. Then
\begin{equation}\label{E:injection}
j\colon \sr CS_{\kappa_1}^{\kappa_2}(u,v)^r\to \sr LS_{\bar\kappa_1}^{\bar\kappa_2}(u,v),\quad \ga\mapsto (h\circ \abs{\dot \ga} \,,\, h_{\bar\ka_1,\bar\ka_2}\circ \ka_\ga)
\end{equation}
is a continuous injection, but not an embedding, for all $r\geq 2$. Moreover, the actual curve on $S$ corresponding to $ j(\ga) $ is $\ga$ itself. \end{lem}
\begin{proof} The proof is very similar to that of (\ref{L:LtoLinclusion}). \end{proof}
The following lemmas contain all the results on infinite-dimensional manifolds that we shall need.\footnote{\lref{L:Hilbert} and a weaker version of \lref{L:net} have already appeared in \cite{SalZueh}.}
\begin{lem}\label{L:Hilbert} Let $\sr M,\,\sr N$ be (infinite-dimensional) Banach manifolds. Then:
\begin{enumerate}
\item [(a)] If $\sr M,\,\sr N$ are weakly homotopy equivalent, then they are in fact homeomorphic (diffeomorphic if $\sr M,\,\sr N$ are Hilbert manifolds).
\item [(b)] If the Banach manifold $ \sr M $ and the finite-dimensional manifold $ M $ are weakly homotopy equivalent, then $ \sr M $ is homeomorphic to $ M \times \L $; in particular, $ \sr M $ and $ M $ are homotopy equivalent.
\item [(c)] Let $E$ and $F$ be separable Banach spaces. Suppose $i\colon F\to E$ is a bounded, injective linear map with dense image and $\sr M\subs E$ is a smooth closed submanifold of finite codimension. Then $\sr N=i^{-1}(\sr M)$ is a smooth closed submanifold of\, $F$ and $i\colon \sr N\to \sr M$ is a homotopy equivalence.
\end{enumerate}
\end{lem}
\begin{proof} Part (a) follows from thm.~15 in \cite{Palais} and cor.~3 in \cite{Henderson}. For part (b), apply (a) to $ \sr M $ and $ \sr N = M \times \L $. Part (c) is thm.~2 in \cite{BurSalTom}. \end{proof}
\begin{lem}\label{L:net} Let $\L$ be a separable Hilbert space, $D\subs \L$ a dense vector subspace, $L\subs \L$ a submanifold of finite codimension and $U$ an open subset of $L$. Then the set inclusion $D\cap U\to U$ is a weak homotopy equivalence. \end{lem}
\begin{proof} We shall prove the lemma when $L=h^{-1}(0)$ for some submersion $h\colon V\to \R^n$, where $V$ is an open subset of $\L$. This is sufficient for our purposes and the general assertion can be deduced from this by using a partition of unity subordinate to a suitable cover of $L$. Let $T$ be a tubular neighborhood of $U$ in $V$ such that $T\cap L=U$. Let $K$ be a compact simplicial complex and $f\colon K\to U$ a continuous map.
We shall obtain a continuous $H\colon [0,2]\times K\to U$ such that $H(0,a)=f(a)$ for every $a\in K$ and $H(\se{2}\times K)\subs D\cap U$. Let $e_j$ denote the $j$-th vector in the canonical basis for $\R^n$, $e_0=-\sum_{j=1}^ne_j$ and let $\De\subs \R^n$ denote the $n$-simplex $[e_0,\dots,e_n]$. Let $[x_0,x_1,\dots,x_n]\subs T$ be an $n$-simplex and $\vphi\colon \De\to [x_0,x_1,\dots,x_n]$ be given by
\begin{equation*}
\vphi\bigg(\sum_{j=0}^n s_je_j\bigg)=\sum_{j=0}^n s_j x_j, \text{\ \ where\ \ $\sum_{j=0}^n s_j=1$\ \ and\ \ $s_j\geq 0$\ \ for all\ \ $j=0,\dots,n$}.
\end{equation*}
We shall say that $[x_0,x_1,\dots,x_n]$ is \tdfn{neat} if $h\circ \vphi\colon \De\to \R^n$ is an embedding and $0\in (h\circ \vphi)(\Int \De)$. Given $p\in T$, let $dh_p$ denote the derivative of $h$ at $p$ and $N_p=\ker(dh_p)$. Define $w_j\colon T\to \L$ by:
\begin{equation}\label{E:w_j}
w_j(p)=\big(dh_p|_{N_p^\perp}\big)^{-1}(e_j)\quad (p\in T,~j=0,\dots,n).
\end{equation}
Notice that $h\big(p+\sum_j\la_j w_j(p)\big)=h(p)+ \sum_j\la_j e_j+o(\abs{\la})$ (for $\la=(\la_0,\dots,\la_n)$ and $p\in T$). Hence, using compactness of $K$, we can find $r,\eps>0$ such that:
\begin{enumerate}
\item [(i)] For any $p\in f(K)$, $[p+rw_0(p),\dots,p+rw_n(p)]\subs T$ and it is neat;
\item [(ii)] If $p\in f(K)$ and $\abs{q_j-(p+rw_j(p))}<\eps$ for each $j$, then $[q_0,\dots,q_n]\subs T$ and it is neat.
\end{enumerate}
Let $a_i$ ($i=1,\dots,m$) be the vertices of the triangulation of $K$. Set $v_i=f(a_i)$ and
\begin{equation*}
v_{ij}=v_i+rw_j(v_i)\quad (i=1,\dots,m,~j=0,\dots,n).
\end{equation*}
For each such $i,j$, choose $\te{v}_{ij}\in D\cap T$ with $\abs{\te{v}_{ij}-v_{ij}}<\frac{\eps}{2}$. Let
\begin{alignat}{9}
&v_{ij}(s)=(2-s)v_{ij}+(s-1)\te{v}_{ij},\ \text{ so that } \notag \\
\label{E:simest} &\abs{v_{ij}(s)-v_{ij}}<\frac{\eps}{2}\ \ (s\in [1,2],~i=1,\dots,m,~j=0,\dots,n).
\end{alignat}
For any $i,i'\in \se{1,\dots,m}$ and $j=0,\dots,n$, we have
\begin{equation*}
\abs{v_{ij}-v_{i'j}}\leq \abs{f(a_{i})-f(a_{i'})}+r\abs{w_j\circ f(a_{i})-w_j\circ f(a_{i'})}.
\end{equation*}
Since $f$ and the $w_j$ are continuous functions, we can suppose that the triangulation of $K$ is so fine that $\abs{v_{ij}-v_{i'j}}<\frac{\eps}{2}$ for each $j=0,\dots,n$ whenever there exists a simplex having $a_{i}$, $a_{i'}$ as two of its vertices. Let $a\in K$ lie in some $d$-simplex of this triangulation, say, $a=\sum_{i=1}^{d+1}t_ia_i$ (where each $t_i>0$ and $\sum_i t_i=1$). Set
\begin{equation*}
z_{j}(s)=\sum_{i=1}^{d+1}t_iv_{ij}(s)\quad (s\in [1,2],~j=0,\dots,n).
\end{equation*}
Then $[z_0(s),\dots,z_n(s)]$ is a neat simplex because condition (ii) is satisfied (with $p=v_1$):
\begin{equation*}
\bigg\vert \sum_{i=1}^{d+1}t_iv_{ij}(s)-v_{1j}\bigg\vert\leq \sum_{i=1}^{d+1}t_i \big( \abs{v_{ij}(s)-v_{ij}} + \abs{v_{ij}-v_{1j}} \big) < \eps,
\end{equation*}
the strict inequality coming from \eqref{E:simest} and our hypothesis on the triangulation. Define $H(s,a)$ as the unique element of $h^{-1}(0)\cap [z_0(s),\dots,z_n(s)]$ ($s\in [1,2]$). Observe that for any $a\in K$, $H(s,a)\in U=h^{-1}(0)\cap T$ ($s\in [1,2]$) and $H(2,a)\in D\cap U$, as it is a convex combination of the $\te{v}_{ij}\in D$. By reducing $r,\eps>0$ (and refining the triangulation of $K$) if necessary, we can ensure that
\begin{equation*}
(1-s)f(a)+sH(1,a)\in T\quad \text{for all $s\in [0,1]$ and $a\in K$.}
\end{equation*}
Let $\pr\colon T\to U$ be the associated retraction.
Complete the definition of $H$ by setting: \begin{equation*} H(s,a)=\pr \big((1-s)f(a)+sH(1,a)\big)\quad \text{($s\in [0,1]$,~$a\in K$).} \end{equation*} The existence of $H$ shows that $f$ is homotopic within $U$ to a map whose image is contained in $D\cap U$. Taking $K=\Ss^k$, we conclude that the set inclusion $D\cap U\to U$ induces surjective maps $\pi_k(D\cap U)\to \pi_k(U)$ for all $k\in \N$. We now establish that the inclusion $D\cap U\to U$ induces injections on all homotopy groups. Let $k\in \N$, $G\colon \mathbb{D}^{k+1}\to U$ be continuous and suppose that the image of $g=G|_{\Ss^k}$ is contained in $D\cap U$. Let $G_0\colon \mathbb{D}^{k+1}\to D\cap U$ be a close approximation to $G$; the existence of $G_0$ was proved above. Let $\eps\in (0,1)$ and define \begin{equation*} G_1\colon \mathbb{D}^{k+1}\to D\cap T\text{\ \ by\ \ }G_1(a)= \begin{cases} (1-s)g\big(\tfrac{a}{\abs{a}}\big)+sG_0\big(\tfrac{a}{\abs{a}}\big) & \text{\ \ if\ \ $\abs{a}=(1-s\eps)$,~$s\in [0,1]$} \\ G_0\big(\tfrac{a}{1-\eps}\big) & \text{\ \ if\ \ $\abs{a}\leq 1-\eps$ } \end{cases} \end{equation*} Notice that we can make $G_1$ as close as desired to $G$ by a suitable choice of $G_0$ and $\eps$. Let $w_j$ be as in \eqref{E:w_j}. We claim that there exist continuous functions $\te{w}_j\colon \mathbb{D}^{k+1}\to D$ $(j=0,\dots,n)$ such that: \begin{enumerate} \item [(iii)] $\sum_{j=0}^n \te{w}_j(a)=0$ for all $a\in \mathbb{D}^{k+1}$; \item [(iv)] For any $a\in \mathbb{D}^{k+1}$, $[G_1(a)+\te{w}_0(a),\dots,G_1(a)+\te{w}_n(a)]\subs D\cap T$ and it is neat. \end{enumerate} To prove this, invoke condition (ii) above (with $\mathbb{D}^{k+1}$ in place of $K$ and $G$ in place of $f$) together with denseness of $D$ to find constant $\te{w}_j$ on open sets which cover $\mathbb{D}^{k+1}$, and use a partition of unity. By (iv), for each $a\in \mathbb{D}^{k+1}$ there exist unique $t_0(a),\dots,t_n(a)\in [0,1]$ such that $\sum_{i}t_i(a)=1$ and \begin{equation*} G_2(a)=G_1(a)+t_0(a)\te{w}_0(a)+\dots+t_n(a)\te{w}_n(a)\in h^{-1}(0). \end{equation*} We obtain thus a continuous map $G_2\colon \mathbb{D}^{k+1}\to D\cap U$. Since ${G_1}|_{\Ss^k}=g$ and $h\circ g=0$, we conclude from (iii) and uniqueness of the $t_i$ that $G_2|_{\Ss^k}=g$. Therefore, $G_2$ is a nullhomotopy of $g$ in $D\cap U$. \end{proof} \begin{crl}\label{L:dense} The subset of all smooth curves in $\sr LS_{\ka_1}^{\ka_2}(u,v)$ is dense in the latter. \end{crl} \begin{proof} Take $\L=L^2[0,1]\times L^2[0,1]$, $D=C^\infty[0,1]\times C^\infty[0,1]$ and $U$ an open subset of $L=\sr LS_{\kappa_1}^{\kappa_2}(u,v)$. Then it is a trivial consequence of (\ref{L:net}) that $D\cap U\neq \emptyset$ if $U\neq \emptyset$. \end{proof} \begin{crl}[smooth approximation]\label{L:smoothie} Let $\sr U\subs \sr LS_{\ka_1}^{\ka_2}(u,v)$ be open, $K$ be a compact simplicial complex and $f\colon K\to \sr U$ a continuous map. Then there exists a continuous $g\colon K\to \sr U$ such that: \begin{enumerate} \item [(i)] $f\iso g$ within $\sr U$. \item [(ii)] $g(a)$ is a smooth curve for all $a\in K$. \item [(iii)] All derivatives of $g(a)$ with respect to $t$ depend continuously on $a\in K$. \end{enumerate} Thus, the map $j\colon \sr CS_{\ka_1}^{\ka_2}(u,v)\to \sr LS_{\ka_1}^{\ka_2}(u,v)$ in \eqref{E:injection} induces surjections $\pi_k(j^{-1}(\sr U))\to \pi_k(\sr U)$ for all $k\in \N$.
\end{crl} \begin{proof} Parts (i) and (ii) are exactly what was established in the first part of the proof of (\ref{L:net}), in the special case where $\L=L^2[0,1]\times L^2[0,1]$, $D=C^\infty[0,1]\times C^\infty[0,1]$, $L=\sr LS_{\kappa_1}^{\kappa_2}(u,v)$ and $U=\sr U$. The image of the function $g=H_2\colon K\to \sr U$ constructed there is contained in a finite-dimensional vector subspace of $D$, viz., the one generated by all $\te{v}_{ij}$, so (iii) also holds. \end{proof} \begin{lem}[$ \sr C\home \sr L $]\label{L:C^2} Let $ S $ be complete. Then the inclusion $i\colon \sr CS_{\ka_1}^{\ka_2}(u,v)^r\to \sr LS_{\ka_1}^{\ka_2}(u,v)$ is a homotopy equivalence for any $r\geq 2$. Consequently, $\sr CS_{\kappa_1}^{\kappa_2}(u,v)^r$ is homeomorphic to $\sr LS_{\kappa_1}^{\kappa_2}(u,v)$. \end{lem} \begin{proof} Let $\L=L^2[0,1]\times L^2[0,1]$, let $F=C^{r-1}[0,1]\times C^{r-2}[0,1]$ (where $C^k[0,1]$ denotes the set of all $C^k$ functions $[0,1]\to \R$, with the $C^k$ norm) and let $i\colon F\to \L$ be set inclusion. Setting $\sr M=\sr LS_{\ka_1}^{\ka_2}(u,v)$, we conclude from (\ref{L:Hilbert}\?(c)) that $i\colon \sr N=i^{-1}(\sr M)\inc \sr M$ is a homotopy equivalence. We claim that $\sr N$ is homeomorphic to $\sr CS_{\ka_1}^{\ka_2}(u,v)^r$, where the homeomorphism is obtained by associating a pair $(\hat \sig,\hat \ka)\in \sr N$ to the curve $\ga$ obtained by solving \eqref{E:de}, with $\sig$ and $\ka$ as in \eqref{E:Sobolev}. Suppose first that $\ga\in \sr CS_{\ka_1}^{\ka_2}(u,v)^r$. Then $\abs{\dot\ga}$ (resp.~$\ka$) is a function $[0,1]\to \R$ of class $C^{r-1}$ (resp.~$C^{r-2}$). Hence, so are $\hat{\sig}=h\circ \abs{\dot\ga}$ and $\hat \ka=h_{\ka_1}^{\ka_2}\circ \ka$, since $h$ and $h_{\ka_1}^{\ka_2}$ are smooth. Moreover, if $\ga,\,\eta\in \sr CS_{\ka_1}^{\ka_2}(u,v)^r$ are close in the $C^r$ topology, then $\hat \ka_\ga$ is $C^{r-2}$-close to $\hat\ka_\eta$ and $\hat \sig_\ga$ is $C^{r-1}$-close to $\hat \sig_\eta$. Conversely, if $(\hat{\sig},\hat{\ka})\in \sr N$, then $\sig=h^{-1}\circ \hat \sig$ is of class $C^{r-1}$ and $\ka=(h_{\ka_1}^{\ka_2})^{-1}\circ \hat \ka$ of class $C^{r-2}$. Since all functions on the right side of \eqref{E:de} are of class (at least) $C^{r-2}$, the solution $\ta=\ta_\ga$ to this initial value problem is of class $C^{r-1}$. Moreover, $\dot\ga=\sig\ta$, hence the velocity vector of $\ga$ is seen to be of class $C^{r-1}$. We conclude that $\ga$ is a curve of class $C^r$. Further, continuous dependence on the parameters of a differential equation shows that the correspondence $(\hat \sig,\hat \ka)\mapsto \ta_\ga$ is continuous. Since $\ga$ is obtained by integrating $\sig\ta_\ga$, we deduce that the map $(\hat\sig,\hat\ka)\mapsto \ga$ is likewise continuous. The last assertion of the lemma follows from (\ref{L:Hilbert}\?(a)). \end{proof} \section*{Acknowledgements} The first author is partially supported by grants from \ltsc{capes}, \ltsc{cnpq} and \ltsc{faperj}. The second author gratefully acknowledges C.~Gorodski, \ltsc{ime-usp} and \ltsc{unb} for hosting him as a post-doctoral fellow, and \ltsc{fapesp} (grant 14/22556-3) and \ltsc{capes} for the financial support. The authors thank the anonymous referee for her/his helpful comments. \vfill\eject
\section{Introduction} Decomposition is the observation that some local quantum field theories are equivalent to disjoint unions of other local quantum field theories, essentially a counterexample to old lore linking locality and cluster decomposition. It was first\footnote{ For purposes of historical language translation, before the term `one-form symmetry' was coined, theories with one-form symmetries were sometimes called `gerby' theories, in reference to the fact that a gerbe is a fiber bundle whose fibers are higher groups. } observed in \cite{Hellerman:2006zs} in two-dimensional gauge theories and orbifolds with trivially-acting subgroups (nonminimally-charged matter) \cite{Pantev:2005rh,Pantev:2005wj,Pantev:2005zs}, and since then has been developed in many other references, see e.g.~\cite{Pantev:2022kpl,Robbins:2020msp,Robbins:2021ibx,Robbins:2021lry,Robbins:2021xce,Yu:2021zmu,Tanizaki:2019rbk,Cherman:2020cvw,Cherman:2021nox,Sharpe:2014tca,Eager:2020rra,Anderson:2013sia,Sharpe:2019ddn,Komargodski:2020mxz,ajt1,ajt2,ajt3,t1,gt1,xt1,Caldararu:2010ljp,Hellerman:2010fv,Nguyen:2021yld,Nguyen:2021naa,Honda:2021ovk,Huang:2021zvu} and e.g.~\cite{Sharpe:2006vd,Sharpe:2010zz,Sharpe:2010iv,Sharpe:2019yag,Sharpe:2022ene} for reviews. Decomposition is not limited to two dimensions, and indeed four-dimensional versions of decomposition have been described in \cite{Tanizaki:2019rbk,Cherman:2020cvw}. The common thread linking these different examples involves what is now called a higher-form symmetry: a quantum field theory in $d$ spacetime dimensions decomposes if it has a global $(d-1)$-form symmetry (possibly realized noninvertibly) \cite{Tanizaki:2019rbk,Cherman:2020cvw}. In this paper, following up \cite{Pantev:2022kpl}, we turn to decomposition in three-dimensional Chern-Simons theories with gauged noneffectively-acting one-form symmetries. Briefly, we find that \begin{equation} \label{eq:main} \left[ \mbox{Chern-Simons}(H) / BA \right] \: = \: \coprod_{\theta \in \hat{K}} \mbox{Chern-Simons}(G)_{\theta}, \end{equation} where $G = H / (A/K)$, $K \subset A$ defines the trivially-acting subgroup, and $\theta$ indicates a discrete theta angle coupling to an appropriate characteristic class of $G$ bundles. On the boundary, this reduces to decomposition in noneffectively-acting orbifolds of two-dimensional WZW models. A key role is played by the fact that the bulk discrete theta angles (coupling to bundle characteristic classes) become discrete torsion on the boundary, a result we explain in detail. The fact that the bulk decomposition correctly implies a known decomposition of the two-dimensional boundary theory provides a strong consistency check on our proposal. In two dimensions, decomposition has had a variety of applications, for example in giving nonperturbative constructions of geometries in phases of some gauged linear sigma models (GLSMs) \cite{Caldararu:2010ljp,Hori:2011pd,Addington:2012zv,Sharpe:2012ji,Halverson:2013eua,Ballard:2013fxa,Sharpe:2013bwa,Hori:2013gga,Hori:2016txh,Wong:2017cqs,Kapustka:2017jyt,Parsian:2018fhm,Chen:2018qww,Guo:2021aqj}, in Gromov-Witten theory \cite{ajt1,ajt2,ajt3,t1,gt1,xt1}, in computing elliptic genera to check claims about IR limits of pure supersymmetric gauge theories \cite{Eager:2020rra}, and recently in understanding Wang-Wen-Witten anomaly resolution \cite{Wang:2017loc,Robbins:2021ibx,Robbins:2021lry,Robbins:2021xce}.
Chern-Simons theories are the starting point for many physics questions, and so we anticipate that the results of this paper should have a variety of applications. For example, as is well-known, three-dimensional AdS gravity can be understood as a Chern-Simons theory \cite{Witten:1988hc}, making Chern-Simons theories a natural playground for addressing questions in three-dimensional gravity, an approach used in e.g.~\cite{Benini:2022hzx} to address Marolf-Maxfield factorization questions \cite{Marolf:2020xie}. We anticipate that this work may have analogous uses. Similarly, one of the original applications of two-dimensional decomposition was to understand phases of certain gauged linear sigma models, where decomposition was used locally (ala Born-Oppenheimer) to understand IR limits of certain theories as nonperturbatively-realized branched covers of spaces \cite{Caldararu:2010ljp}. We expect that similar ideas could be used to understand the IR limits of certain Chern-Simons-matter theories. We begin in section~\ref{sect:review-wzw} with a review of decomposition in two-dimensional WZW orbifolds, which not only serves as a review of decomposition, but also describes the decomposition pertinent to boundaries in the three-dimensional Chern-Simons theories we discuss. In section~\ref{sect:decomp} we describe the primary proposal of this paper, namely decomposition in Chern-Simons theories with gauged one-form symmetry groups, which takes the form~(\ref{eq:main}). All Chern-Simons theories are assumed to have levels such that the theories exist on the three manifolds over which they are defined. We describe how this bulk decomposition maps to boundary WZW models, and reproduces standard results on decomposition in two-dimensional noneffective orbifolds, which serves as a strong consistency test of our claims. We also observe that in all these examples, the boundary discrete theta angles (choices of discrete torsion in boundary WZW models) are trivial, which is often reflected in the bulk discrete theta angles. In section~\ref{sect:spectra} we discuss the spectra of these theories. We begin with an explanation and review of monopole operators, local operators (analogues of twist fields in two-dimensional orbifolds) which can be used to construct projection operators. We then discuss line operators. When gauging ordinary one-form symmetries, the standard technology of anyon condensation can be used to describe the line operators. However, to describe noneffectively-acting one-form symmetries (in which a subgroup acts trivially), as relevant for this paper, requires a minor extension, which we propose and utilize. In section~\ref{sect:exs} we walk through the details of bulk and boundary decomposition, spectrum computations, and consistency tests such as level-rank duality in a variety of concrete examples. Finally in section~\ref{sect:boundary-g-g} we briefly discuss the related case of boundary $G/G$ models. These two-dimensional theories decompose, and we briefly discuss their corresponding bulk theories. In appendix~\ref{app:lineops} we summarize some results on line operators that are used in the main text. In appendix~\ref{app:crossed-module} we give a brief overview of crossed modules, to make this paper self-contained, as they are used in the description of three-dimensional decomposition. In appendix~\ref{sect:gauging-bk} we describe gauging effectively-acting one-form symmetries without appealing to line operators. 
\section{Warm-up: Decomposition in WZW orbifolds} \label{sect:review-wzw} As a warm-up exercise, let us briefly review decomposition in two dimensions, and apply it towards orbifolds of WZW models. Consider an orbifold $[X/\Gamma]$ where a central subgroup $K \subset \Gamma$ acts trivially on $X$. As has been discussed previously (see e.g.~\cite{Hellerman:2006zs}), for an ordinary (orientation-preserving) orbifold, \begin{equation} {\rm QFT}\left( [X/\Gamma] \right) \: = \: \coprod_{\theta \in \hat{K}} {\rm QFT}\left( [X/G]_{\theta(\omega)} \right), \end{equation} where $\theta(\omega)$ is a choice of discrete torsion, given as the image of the extension class $[\omega] \in H^2(G,K)$ corresponding to \begin{equation} 1 \: \longrightarrow \: K \: \longrightarrow \: \Gamma \: \longrightarrow \: G \: \longrightarrow \: 1 \end{equation} under the map $\theta: K \rightarrow U(1)$, yielding $\theta(\omega) \in H^2(G,U(1))$. Consider a $\Gamma$ orbifold of a WZW model for a group $H$, with $K \subset \Gamma$ acting trivially, and $G = \Gamma/K$ a subgroup of the center of $H$, acting freely on $H$. Then, as a special case of the decomposition above, we have that \begin{equation} \label{eq:wzw:decomp:1} \left[ {\rm WZW}(H) / \Gamma \right] \: = \: \coprod_{\theta \in \hat{K}} {\rm WZW}( H/G )_{\theta(\omega)}, \end{equation} with both sides at the same level. That said, (ordinary) discrete torsion vanishes for cyclic groups, so the only occasion on which $\theta(\omega)$ can be nontrivial will be if $H = {\rm Spin}(4n)$ and $\Gamma/K = {\mathbb Z}_2 \times {\mathbb Z}_2$. (We will discuss that case in section~\ref{sect:ex:cs-spin-4n}.) For example, consider a ${\mathbb Z}_4$ orbifold of an $SU(2)$ WZW model, where a ${\mathbb Z}_2 \subset {\mathbb Z}_4$ acts trivially, and the ${\mathbb Z}_2$ coset is the freely-acting center of $SU(2)$. For an ordinary (orientation-preserving) orbifold, since there is no discrete torsion in a ${\mathbb Z}_2$ orbifold, we have that \begin{equation} \left[ {\rm WZW}(SU(2)) / {\mathbb Z}_4 \right] \: = \: \coprod_{2} {\rm WZW}(SO(3)) \end{equation} (with all WZW models at the same level). Although we will not utilize orientifolds in this paper, in principle one can also consider orientation-reversing orbifolds (orientifolds) of WZW models, see e.g.~\cite{Gawedzki:2007uz,Brunner:2001fs,Pradisi:1995qy,Pradisi:1995pp,Schreiber:2005mi,Bachas:2001id}. See \cite{Sharpe:2009hr} and references therein for discussions of discrete torsion in orientifolds. So far we have discussed discrete torsion weighting different universes. In principle, WZW models can also be weighted by analogues of discrete theta angles. Although these are better known in the case of gauge theories\footnote{ Discrete theta angles in gauge theories in unrelated contexts have a long history, see e.g.~\cite{Hori:1994uf}, \cite[section 6]{Hori:1994nc}, \cite[section 4]{Hori:2011pd} for two-dimensional examples and \cite{Gaiotto:2010be,Aharony:2013hda,Ang:2019txy} for four-dimensional examples. }, the point is that if a group manifold $G$ has a torsion characteristic class, some $w \in H^2(G,F)$ for some coefficient module $F$, then there exists a discrete theta angle $\theta \in \hat{F}$ that weights maps into $G$, via a term in the action of the form \begin{equation} \int_{\Sigma} \langle \theta, \phi^* w \rangle, \end{equation} where $\Sigma$ is the worldsheet and $\phi: \Sigma \rightarrow G$ any map in the path integral.
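As a simple worked instance of this weighting (included purely as an illustration of the formula above, not as a statement drawn from the cited references): take $G = SO(3)$, which is topologically ${\mathbb R}{\mathbb P}^3$, and $F = {\mathbb Z}_2$, with $w$ the generator of $H^2(SO(3),{\mathbb Z}_2) \cong {\mathbb Z}_2$. The nontrivial character $\theta \in \hat{F}$ then weights each map $\phi: \Sigma \rightarrow SO(3)$ in the path integral by the sign \begin{equation} \exp\left( i \pi \int_{\Sigma} \phi^* w \right) \: = \: (-1)^{\langle \phi^* w , \, [\Sigma] \rangle} \: \in \: \{ \pm 1 \}, \end{equation} so that the two characters of $F$ define two distinct weightings of the same set of maps.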
If $G = \tilde{G}/Z$ for some finite group $Z$, these discrete theta angles can also, for appropriate $w$, correspond to choices of discrete torsion in a $Z$ orbifold of a WZW model on $\tilde{G}$. In section~\ref{sect:boundary-wzw} we shall see that the choices of discrete theta angle above that arise in the WZW orbifolds appearing on boundaries of decompositions of one-form-gauged Chern-Simons theories are the same as choices of discrete torsion. \section{Decomposition in noneffective one-form symmetry gaugings} \label{sect:decomp} In general terms, one expects a decomposition in a $d$-dimensional quantum field theory whenever it has a global $(d-1)$-form symmetry \cite{Tanizaki:2019rbk,Cherman:2020cvw}. A typical example of a decomposition in two dimensions involves gauging a non-effective group action: a group action in which a subgroup acts trivially on the theory being gauged, in the sense that its generator commutes with the operators of that theory: $[J,{\cal O}] = 0$. Gauging a trivially-acting group results in a global one-form symmetry, which is responsible for a decomposition. In principle, an analogous phenomenon exists in three dimensions, involving the gauging of `trivially-acting' one-form symmetries. Here, for a one-form action to be trivial means that it commutes with the line operators in the theory, as we shall elaborate below. In this section, after a short overview of the notion of non-effective one-form symmetries, we make a precise prediction for decomposition. \subsection{Non-effective one-form symmetry group actions} \label{sect:trivacting} We define a `trivially-acting' one-form symmetry in terms of the fusion algebra of the corresponding lines, and a `non-effective' one-form symmetry is one in which a subset of the lines acts trivially. First, let us recall some basics of gauging one-form symmetries, which in three dimensions we will describe by the fusion algebra of line operators (see e.g.~\cite[section 3.1]{Roumpedakis:2022aik}, \cite{js,sp}, \cite[section II]{Barkeshli:2014cna} and references therein for a detailed discussion), with gauging as in e.g.~\cite{Moore:1989yh}. Anomalies in such a gauging are discussed in e.g.~\cite[section 2.3]{Benini:2022hzx}, \cite[section 2.1]{Hsin:2018vcg}, \cite{Moore:1988qv,bk,Kitaev:2005hzj}. In order for a one-form symmetry to be gaugeable, its 't Hooft anomaly must vanish, which requires that the lines be mutually transparent, meaning that they have trivial mutual braiding. In particular, a one-form symmetry necessarily has abelian lines, for which the braiding is completely characterized by their spins (see e.g.~\cite[equ'n (2.28)]{Benini:2022hzx}, \cite[section 2]{Hsin:2018vcg}), schematically \begin{equation} \raisebox{-25pt}{ \begin{picture}(30,50) \ArrowLine(15,0)(15,50) \Text(18,45)[l]{$b$} \ArrowArc(15,25)(10,110,270) \ArrowArc(15,25)(10,-90,70) \Text(30,25)[l]{$a$} \end{picture} } \: \: = \: B(a,b) \: \raisebox{-25pt}{ \begin{picture}(10,50) \ArrowLine(5,0)(5,50) \Text(8,25)[l]{$b$} \end{picture} } \end{equation} where \begin{equation} B(a,b) \: = \: \exp\left( 2\pi i\left( h(a \times b) - h(a) - h(b) \right) \right), \end{equation} where $a$, $b$ denote lines, and $h(a) \mod 1$ is the spin of the line $a$. Note that if the spins are integers, then $B=1$ and there is no obstruction. Conversely, if $B = 1$, then spins are integers or half integers. We take\footnote{ We are using ``$B$'' to mean several different things in this section. We use $BK$ to denote a one-form symmetry, a standard notation in mathematics, going back decades.
(In physics, the notation $K^{[1]}$ is sometimes used instead.) Later we will use $BG$ to denote a classifying space. In this section, we also use $B(a,b)$ to denote line monodromies. } a `trivially-acting $BK$' to be described by a set of lines $\{ g \}$ such that all other lines $b$ both \begin{enumerate} \item have trivial monodromy under $g$, meaning $B(g,b) = 1$, and also are \item invariant under fusion with $g$, $g \times b = b$, \end{enumerate} for all $g$. (In effect, there are two conditions in three dimensions, whereas invariance in two dimensions really boils down to a single constraint of the form $[J, {\cal O}] = 0$.) To be clear, this notion can be somewhat counterintuitive. Consider for example $SU(2)$ Chern-Simons theory. This theory has a $B {\mathbb Z}_2$ one-form symmetry defined by the center of $SU(2)$. However, although the classical action is invariant under the center, the Wilson lines are not invariant, as the $B {\mathbb Z}_2$ action multiplies Wilson lines by phases (corresponding to the $n$-ality of the corresponding representation with respect to the center). In particular, the $B {\mathbb Z}_2$ action on $SU(2)$ Chern-Simons theory defined by the center of $SU(2)$ is not trivial. \subsection{Basic decomposition prediction} In \cite{Pantev:2022kpl}, it was argued that in a quotient by a 2-group $\Gamma$ of the form \begin{equation} 1 \: \longrightarrow \: BK \: \longrightarrow \: \Gamma \: \longrightarrow \: G \: \longrightarrow \: 1, \end{equation} where the $BK$ acts trivially, the path integral sums over both $K$ gerbes and a subset of $G$ bundles, specifically $G$ bundles satisfying a constraint. In general, if one has a group $H$ and an abelian group $A$ with a map $d: A \rightarrow H$ whose image is in the center of $H$, then the crossed module\footnote{ See appendix~\ref{app:crossed-module} for an introduction to crossed modules, or alternatively \cite[appendix A]{Lee:2021crt}, \cite[section 2]{Bhardwaj:2021wif}. } $\Gamma_{\cdot} = \{A \rightarrow H\}$ defines a 2-group we shall label $\Gamma$. So long as we are interested in flat bundles, we can apply the same analysis as \cite{Pantev:2022kpl}, and argue that $\Gamma$ bundles on a three-manifold $M$ map to $G = H/{\rm im}\, A$ bundles satisfying a condition. This 2-group fits into an exact sequence \begin{equation} \label{eq:a-to-h} 1 \: \longrightarrow \: K \: \longrightarrow \: A \: \stackrel{d}{\longrightarrow} \: H \: \longrightarrow \: G \: \longrightarrow \: 1, \end{equation} where $K = {\rm Ker}\, d$. (Physically, $d$ just encodes the $A$ action, by projecting it to a subgroup of the center of $H$.) This exact sequence defines an element \begin{equation} \omega \: \in \: H^3_{\rm group}(G,K) \: = \: H^3_{\rm sing}(BG,K), \end{equation} which we will give explicitly in~(\ref{eq:defn-omega}), and the condition that $G$ bundles must satisfy to be in the image of $\Gamma$ bundles is that \begin{equation} \label{eq:constr} \phi^* \omega \: = \: 0, \end{equation} for $\phi: M \rightarrow B G$ the map defining the induced $G$ bundle on $M$, for the same reasons discussed in \cite{Pantev:2022kpl}. Next, we describe the element $\omega$ corresponding to the extension~(\ref{eq:a-to-h}), appearing in the constraint~(\ref{eq:constr}) above. Let $Z = {\rm im}\, d \subset Z(H)$, the center of $H$, and $w_G$ the $Z$-valued degree-two characteristic class for $G$ corresponding to a generator of $H^2_{\rm sing}(BG,Z)$. (For example, for $G = SO(n)$, $w_G$ is the second Stiefel-Whitney class $w_2$.)
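As a concrete instance of this data, one can keep in mind the case that will be discussed in detail in section~\ref{sect:ex:su2-bz4}: $H = SU(2)$ and $A = {\mathbb Z}_4$, with $d$ projecting ${\mathbb Z}_4$ onto the ${\mathbb Z}_2$ center, so that the exact sequence~(\ref{eq:a-to-h}) reads \begin{equation} 1 \: \longrightarrow \: {\mathbb Z}_2 \: \longrightarrow \: {\mathbb Z}_4 \: \stackrel{d}{\longrightarrow} \: SU(2) \: \longrightarrow \: SO(3) \: \longrightarrow \: 1, \end{equation} with $K = {\mathbb Z}_2$, $Z = {\rm im}\, d = {\mathbb Z}_2$, $G = SO(3)$, and $w_G = w_2$.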
Let $\alpha \in H^2_{\rm group}(Z,K)$ be the class of the extension \begin{equation} \label{eq:ext-alpha} 1 \: \longrightarrow \: K \: \longrightarrow \: A \: \longrightarrow \: Z \: \longrightarrow \: 1, \end{equation} and let \begin{equation} \beta_{\alpha}: \: H^2_{\rm sing}(BG,Z) \: \longrightarrow \: H^3_{\rm sing}(BG,K) \end{equation} be the Bockstein homomorphism in the long exact sequence associated to the extension~(\ref{eq:ext-alpha}). Then \begin{equation} \label{eq:defn-omega} \omega \: = \: \beta_{\alpha}( w_G ) \: \in \: H^3_{\rm sing}(BG,K). \end{equation} When discussing boundary WZW models, it will be useful to describe $\omega$ differently. To that end, we use the fact that \begin{equation} H^n_{\rm sing}(BG,Z) \: = \: {\rm Map}\left( BG, K(Z,n) \right), \end{equation} to write $w_G$ and $\alpha$ as maps \begin{equation} w_G: \: BG \: \longrightarrow \: K(Z,2), \: \: \: \alpha: \: BZ ( = K(Z,1)) \: \longrightarrow \: K(K,2). \end{equation} Since Eilenberg-MacLane spaces are in the stable category, where $B$ exists as a functor, we can define \begin{equation} B \alpha: \: K(Z,2) \: \longrightarrow \: K(K,3), \end{equation} hence \begin{equation} B\alpha \circ w_G: \: BG \: \longrightarrow \: K(K,3), \end{equation} and so defines an element of $H^3_{\rm sing}(BG,K)$. Furthermore, $B \alpha$ is just the Bockstein homomorphism $\beta_{\alpha}$, hence \begin{equation} \label{eq:balpha-bock} \omega \: = \: B \alpha \circ w_G \: = \: \beta_{\alpha}(w_G), \end{equation} and so we recover the description of $\omega$ above. So far, we have argued that on general principles, our $\Gamma$ gauge theory should be described by a $G$ gauge theory such that the $G$ bundles satisfy the constraint~(\ref{eq:constr}). Just as in \cite{Hellerman:2006zs,Pantev:2022kpl}, such a restriction on instantons can be implemented by a sum over universes. The constraint~(\ref{eq:constr}), namely $\phi^* \omega = 0$, is implemented by summing over $G$ Chern-Simons theories with discrete theta angles coupling to $\omega$, formally \begin{equation} \label{eq:decomp-predict} \left[ \mbox{Chern-Simons}(H) / B A \right] \: = \: \coprod_{\theta \in \hat{K} } \mbox{Chern-Simons}(G)_{\theta}, \end{equation} where $\theta$ is the three-dimensional discrete theta angle coupling to $\phi^* \omega$, for levels and underlying three-manifolds for which these theories are defined\footnote{ As has been noted in e.g.~\cite{Moore:1989yh}, \cite[appendix C]{Seiberg:2016rsg}, \cite[appendix A]{Seiberg:2016gmd}, \cite{Belov:2005ze,Freed:1992vw,freed2}, not every Chern-Simons theory with every level is well-defined on every three-manifold. The basic issue is that Chern-Simons actions are not precisely gauge-invariant, but under gauge transformations shift by an amount proportional to $2\pi$. Depending upon the gauge group and the three-manifold, the proportionality factor may or may not be integral. If $k$ times that proportionality factor is integral, then the exponential of the action is gauge-invariant, and the theory is well-defined; if that product is not integral, then the path integral is not gauge-invariant and so not defined. Even if it is defined, it may depend upon subtle choices.
For example, \cite[appendix A]{Seiberg:2016gmd} argues that the (ordinary, bosonic) $U(1)_1$ Chern-Simons theory is well-defined only on spin three-manifolds, and furthermore that the choices of values of the action, the Chern-Simons invariants in the sense of \cite{Borel:1999bx,deBoer:2001wca}, are in one-to-one correspondence with the spin structures. More generally, gauging one-form symmetries can create issues of this form, precisely because one twists gauge fields by gerbes, which results in `twisted' bundles and connections not present in the original theory, with fractional instanton numbers. }. This is our prediction for decomposition in three-dimensional Chern-Simons theories. The $G$ Chern-Simons theory is defined to be the $B({\rm im}\,A)$ gauging of the $H$ Chern-Simons theory, at the same level as the $H$ Chern-Simons theory. This is important to distinguish because sometimes gauging one-form symmetries can shift levels. For example, \cite[section C.1]{Seiberg:2016rsg} argues that, schematically, $U(1)_{4m}/B{\mathbb Z}_2 = U(1)_m$, and not $U(1)_{4m}$, despite the fact that as groups, $U(1) / {\mathbb Z}_2 = U(1)$. The reader should note that the decomposition statement above correctly reproduces ordinary one-form gaugings. Consider the case that $K = 1$, so that the map $d: A \rightarrow H$ is one-to-one into the center of $H$. Then, decomposition~(\ref{eq:decomp-predict}) correctly predicts that \begin{equation} \left[ \mbox{Chern-Simons}(H) / BA \right] \: = \: \mbox{Chern-Simons}(G), \end{equation} which is a standard result (see e.g.~\cite{Moore:1989yh}). Decomposition becomes interesting in cases in which $K \neq 1$. In section~\ref{sect:exs} we will check this statement in several examples, outlining how it both reproduces known results and explains new cases. \subsection{Boundary WZW models} \label{sect:boundary-wzw} Let us now turn to Chern-Simons theories on manifolds with boundary, and the corresponding theories on the boundaries. We will see that the bulk Chern-Simons decomposition of the previous section correctly predicts a decomposition of boundary WZW models, which matches existing results on decomposition in two-dimensional orbifolds. This matching involves a rather interesting relation between characteristic classes of bundles on three-manifolds and choices of discrete torsion in two-dimensional orbifolds. In particular, the fact that the three-dimensional decomposition correctly reproduces two-dimensional decomposition on the boundary is an important consistency test of our proposal. Briefly, as has been discussed elsewhere (see e.g.~\cite{Elitzur:1989nr,Bos:1989kn,Axelrod:1989xt,Coussaert:1995zp,ncat}, \cite[section 4.2]{Dijkgraaf:1989pz}, \cite[section 5.2]{Gawedzki:1999bq}, and in related contexts \cite{Fiorenza:2012ec,frs}), on a three-manifold with boundary, a bulk Chern-Simons theory for gauge group $G$ naturally couples to a (chiral) WZW model for the group $G$ on the boundary.
If the Chern-Simons theory has level $k$, then (see e.g.~\cite[section 4.2]{Dijkgraaf:1989pz}) the boundary WZW model has level $\tau(k)$, where \begin{equation} \tau: \: H^n_{\rm sing}(BG,F) \: \longrightarrow \: H^{n-1}_{\rm sing}(G,F) \end{equation} is the loop space map\footnote{ This is the natural map \begin{eqnarray} \lefteqn{ H^n_{\rm sing}(BG,F) \: = \: {\rm Map}\left( BG, K(F,n) \right) } \nonumber \\ & \hspace*{0.25in} \longrightarrow & {\rm Map}\left( \Omega( BG ), \Omega( K(F,n) ) \right) \: = \: {\rm Map}\left( G, K(F,n-1) \right) \: = \: H^{n-1}_{\rm sing}(G,F). \end{eqnarray} which sends any $f \in {\rm Map}(BG, K(F,n))$ to $\Omega(f)$. For later use, to construct explicit maps, one needs concrete choices of e.g.~$X \mapsto \Omega B X$, for which we refer the reader to e.g.~\cite{segal1,dunn1,may1,maythomas}. As such choices do not alter cohomology classes, we will not discuss them explicitly in this paper. } for any abelian group $F$, and we take Chern-Simons levels\footnote{ As before, levels are assumed to be such that the theory exists. } $k \in H^4_{\rm sing}(BG,{\mathbb Z})$, and WZW levels $\tau(k) \in H^3_{\rm sing}(G,{\mathbb Z})$. Similarly, if the Chern-Simons theory has a discrete theta angle coupling to some characteristic class defined by an element $\omega \in H^3(BG,F)$, then the boundary WZW model couples\footnote{ We would like to thank Y.~Tachikawa for a discussion of discrete theta angles in this context. } to a discrete theta angle defined by $\tau(\omega) \in H^2(G,F)$. Such discrete theta angles in two-dimensional WZW models are reviewed in section~\ref{sect:review-wzw}. Given the standard bulk Chern-Simons / boundary WZW model relationship reviewed above, the three-dimensional decomposition prediction~(\ref{eq:decomp-predict}) implies that in the associated boundary RCFT, an $A$ orbifold of a WZW model for $H$ is equivalent to a disjoint union of WZW models for $G$, \begin{equation} \label{eq:wzw:decomp:boundary} [ {\rm WZW}(H) / A ] \: = \: \coprod_{\theta \in \hat{K}} {\rm WZW}(G)_{\theta}, \end{equation} with levels and discrete theta angles related to those of the bulk theory by the map $\tau$. We will see later in this section that although the WZW discrete theta angles $\theta$ are derived from characteristic classes in the Chern-Simons theory, they nevertheless correspond to choices of discrete torsion in the boundary orbifolds. As a consistency check, let us show that $\tau$ commutes with gauging $BA$, so that the levels on the left and right-hand sides of~(\ref{eq:wzw:decomp:boundary}) match, just as they did\footnote{ Modulo subtleties discussed there in special cases, such as those arising from the fact that $U(1)/{\mathbb Z}_k = U(1)$ as a group, but the corresponding Chern-Simons theories have different levels. } in the bulk prediction~(\ref{eq:decomp-predict}). First, for $G$ any topological group, there is a natural homotopy equivalence between the loop space $\Omega(BG)$ and $G$ (meaning that $BG$ is a delooping of $G$). Also, for any abelian group $F$, the Eilenberg-MacLane space $K(F,n-1)$ is homotopy equivalent to the loop space $\Omega( K(F,n) )$.
Since \begin{equation} H^n_{\rm sing}(BG,F) \: = \: {\rm Map}(BG, K(F,n)) \end{equation} and since $\Omega$ is a functor, for any continuous homomorphism $f: G_1 \rightarrow G_2$ between topological groups $G_1$, $G_2$, there is a continuous map $Bf: B G_1 \rightarrow B G_2$ and natural maps \begin{eqnarray} {\rm Map}\left( BG_2, K(F,n) \right) & \longrightarrow & {\rm Map}\left( BG_1, K(F,n) \right), \\ a & \mapsto & a \circ Bf \end{eqnarray} and \begin{eqnarray} {\rm Map}\left(G_2, K(F,n-1) \right) & \longrightarrow & {\rm Map}\left(G_1, K(F,n-1) \right), \\ b & \mapsto & b \circ f. \end{eqnarray} Combining these maps, one finds that for any Lie group $G$ with $K$ a subgroup of the center, the following diagram commutes: \begin{equation} \xymatrix{ H^3_{\rm sing}(B (G/K), F) \ar[r] \ar[d] & H^2_{\rm sing}( G/K, F) \ar[d] \\ H^3_{\rm sing}(BG, F) \ar[r] & H^2_{\rm sing}(G,F). } \end{equation} This tells us that the levels appearing on either side of the boundary WZW relation~(\ref{eq:wzw:decomp:boundary}) match, as expected, consistent with the prediction~(\ref{eq:decomp-predict}) of the bulk Chern-Simons theory. Now, we will argue that the WZW model discrete theta angles, arising as $\tau$ of characteristic classes in the Chern-Simons theory, are the same as choices of discrete torsion in the boundary theory. This will be important in understanding how the three-dimensional Chern-Simons decomposition compares to two-dimensional decompositions as reviewed in section~\ref{sect:review-wzw}. For simplicity, we will assume that $H$ is the universal covering, so that $Z = \pi_1(G)$. (Similar results exist in more general cases.) To that end, since $\tau$ is the loop space functor, we can write \begin{equation} \tau\left( \beta_{\alpha}(w_G) \right) \: = \: \tau( B \alpha \circ w_G ) \: = \: \Omega(B \alpha \circ w_G) \: = \: \Omega(B \alpha) \circ \Omega(w_G). \end{equation} Now, $\Omega(B \alpha) = \alpha$, and \begin{equation} \Omega(w_G) \: \in \: {\rm Map}( \Omega(BG), \Omega( K(Z,2) ) ) \: = \: {\rm Map}(G, K(Z,1) ), \end{equation} so $\Omega(w_G)$ is a map $G \rightarrow BZ$. Now, we claim that $\Omega(w_G)$ is also the cell attachment map $p$ of the Postnikov tower, $p: G \rightarrow B \pi_1(G) = BZ$, where $Z = \pi_1(G)$. To make this clear, recall that the Postnikov tower map is the classifying map for the universal cover. In other words, if $\tilde{G}$ is the universal covering group of $G$, then $p^* EZ = \tilde{G}$. Now, on the other hand, $B \tilde{G} \rightarrow BG$ is a principal $K(Z,1)$ bundle on $BG$, which corresponds to a map $BG \rightarrow B( K(Z,1) ) = K(Z,2)$, which is $w_G$. Applying the loop space functor gives the map $\Omega(w_G): G \rightarrow K(Z,1) = BZ$, which is then more or less tautologically $p$. In particular, we see that \begin{equation} \tau( B \alpha \circ w_G) \: = \: \alpha \circ \Omega(w_G) \: = \: \alpha \circ p. \end{equation} The expression above relates the Chern-Simons discrete theta angles (coupling to bundle characteristic classes) to discrete torsion on the boundary. We can see this as follows. If $\phi: \Sigma \rightarrow G$ is any map from the worldsheet $\Sigma$ into the target $G$, then $p \circ \phi: \Sigma \rightarrow BZ$ defines a $Z$-twisted sector over $\Sigma$.
In particular, the discrete theta angle phase \begin{equation} \langle \theta, \phi^* (\alpha \circ p) \rangle, \end{equation} for $\theta: K \rightarrow U(1)$ any character of $K$, corresponds to discrete torsion in the $Z$-twisted sector defined by $p \circ \phi$, specifically discrete torsion given by $\theta(\alpha) \in H^2_{\rm group}(Z, U(1))$, for $\alpha \in H^2_{\rm group}(Z,K)$. Thus, we see that $\tau$ relates discrete theta angles coupling to bundle characteristic classes on three-manifolds, to discrete torsion in two-dimensional orbifolds on boundaries. In passing, this phenomenon that three-dimensional bulk discrete theta angles become discrete torsion in boundary two-dimensional orbifolds is also visible in the case that the bulk theory is a finite 2-group orbifold, see \cite[section 3.2]{Pantev:2022kpl}. Now, let us compare the decomposition~(\ref{eq:wzw:decomp:boundary}) in boundary WZW models, implied by bulk Chern-Simons decomposition, to standard results \cite{Hellerman:2006zs} on decomposition in two-dimensional orbifolds, as reviewed earlier in section~\ref{sect:review-wzw}. Certainly the form of the boundary decomposition~(\ref{eq:wzw:decomp:boundary}) is identical to that arising in two-dimensional orbifolds with trivially-acting central subgroups, possibly modulo the form of the discrete theta angles. We have just argued that the discrete theta angles arising on the boundary correspond to choices of discrete torsion, and in fact, the discrete torsion phases arising in the boundary case match those in the ordinary two-dimensional case. We can relate these two pictures of boundary discrete theta angles as follows. Recall $\alpha \in H^2_{\rm group}(Z,K)$ is the class of the extension \begin{equation} 1 \: \longrightarrow \: K \: \longrightarrow \: A \: \longrightarrow \: Z \: \longrightarrow \: 1. \end{equation} In two-dimensional decomposition in $A$ orbifolds with trivially-acting central subgroups $K$, the discrete torsion phase factors on a universe associated with $\theta \in \hat{K}$ are precisely the image of $\alpha$ under $\theta$: \begin{eqnarray} H^2_{\rm group}(Z,K) & \longrightarrow & H^2_{\rm group}(Z,U(1)), \\ \alpha & \mapsto & \theta \circ \alpha. \end{eqnarray} These are the same as the discrete torsion phases arising in the boundary WZW decomposition~(\ref{eq:wzw:decomp:boundary}), as we have just discussed, and we will confirm explicitly in examples in section~\ref{sect:exs} that the decomposition above in the boundary theory precisely coincides with the decomposition of WZW orbifolds given in~(\ref{eq:wzw:decomp:1}). This matching is an important consistency test of our proposal. \subsection{Nontriviality of discrete theta angles} \label{sect:nontriv} In the boundary WZW models appearing in these decompositions, the discrete torsion on each universe is trivial. For most single group factors, this is because the center is usually a cyclic group, and cyclic group orbifolds have no discrete torsion. The exceptions are the groups Spin$(4n)$, which have center ${\mathbb Z}_2 \times {\mathbb Z}_2$. That finite group admits discrete torsion; however, to generate the discrete torsion in a decomposition of a string orbifold, the orbifold group must be nonabelian, which cannot arise on the boundary of the three-dimensional theories considered here, as we will discuss in greater detail in section~\ref{sect:ex:cs-spin-4n}.
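For reference, the group cohomology computations underlying these statements are the standard facts \begin{equation} H^2_{\rm group}\left( {\mathbb Z}_n, U(1) \right) \: = \: 0, \: \: \: H^2_{\rm group}\left( {\mathbb Z}_n \times {\mathbb Z}_m, U(1) \right) \: = \: {\mathbb Z}_{\gcd(n,m)}, \end{equation} so that in particular $H^2_{\rm group}({\mathbb Z}_2 \times {\mathbb Z}_2, U(1)) = {\mathbb Z}_2$, which is why the ${\mathbb Z}_2 \times {\mathbb Z}_2$ centers above are the only potential sources of nontrivial discrete torsion.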
In at least some examples, not only are the boundary discrete theta angles (discrete torsions) trivial, but the bulk discrete theta angles are also trivial. For example\footnote{ We would like to thank Y.~Tachikawa for making this observation. }, in bulk theories, for cases in which $K = {\mathbb Z}_2$, $Z = {\mathbb Z}_2$, and $A = {\mathbb Z}_4$, so that the extension $\alpha$ is \begin{equation} 1 \: \longrightarrow \: {\mathbb Z}_2 \: \longrightarrow \: {\mathbb Z}_4 \: \longrightarrow \: {\mathbb Z}_2 \: \longrightarrow \: 1, \end{equation} the bulk discrete theta angle couples to the Bockstein $\beta_{\alpha}$ of a distinguished element $w_G \in H^2(M_3,{\mathbb Z}_2)$. Now, for this $\alpha$, \begin{equation} \beta_{\alpha}(w_G) \: = \: {\rm Sq}^1(w_G), \end{equation} and as we will argue in section~\ref{sect:ex:su2-bz4}, \begin{equation} {\rm Sq}^1(w_G) \: = \: w_1(TM_3) \cup w_G, \end{equation} hence it can only be nonzero on nonorientable spaces. However, we only define Chern-Simons theories on oriented three-manifolds, so for all cases we consider, these bulk discrete theta angles vanish. Similarly, if the three-manifold is $T^3$, the pertinent Bockstein homomorphism will vanish, and one cannot get a nonzero bulk discrete theta angle. Briefly, for any short exact sequence \begin{equation} 1 \: \longrightarrow \: K \: \longrightarrow \: A \: \longrightarrow \: Z \: \longrightarrow \: 1, \end{equation} for $K, A, Z$ abelian, the induced map \begin{equation} H^2( T^3, A ) \: \longrightarrow \: H^2(T^3, Z) \end{equation} is surjective (since each of those cohomology groups is just Hom from a free abelian group into the coefficients), which implies that in the long exact sequence \begin{equation} H^2(T^3, K) \: \longrightarrow \: H^2(T^3, A) \: \longrightarrow \: H^2(T^3, Z) \: \stackrel{\beta}{\longrightarrow} \: H^3(T^3, K), \end{equation} the Bockstein $\beta = 0$, and so the bulk discrete theta angles are trivial in corresponding cases. For another example, consider Lens spaces. From \cite[example 3E.2]{hatcher}, for the Bockstein associated to the short exact sequence \begin{equation} 1 \: \longrightarrow \: {\mathbb Z}_m \: \longrightarrow \: {\mathbb Z}_{m^2} \: \longrightarrow \: {\mathbb Z}_m \: \longrightarrow \: 1, \end{equation} the associated Bockstein maps generators of $H^1(L, {\mathbb Z}_m)$ to generators of $H^2(L,{\mathbb Z}_m)$, for $L$ a Lens space, but $\beta^2 = 0$, hence the associated Bockstein map \begin{equation} \beta: \: H^2(L, {\mathbb Z}_m) \: \longrightarrow \: H^3(L, {\mathbb Z}_m) \end{equation} necessarily vanishes, and so the bulk discrete theta angles are trivial in corresponding cases. More generally, whether the bulk discrete theta angles are always trivial is a reflection of the map $\tau: H^3_{\rm sing}(BG,K) \rightarrow H^2_{\rm sing}(G,K)$. For example, if $\tau$ is injective, then triviality of the boundary discrete theta angles implies triviality of the bulk discrete theta angles. We leave general questions about the injectivity of $\tau$ for future work. In passing, note that in the bulk, orientability plays a key role. At least abstractly, it is tempting to speculate about more general cases involving e.g.~orientifolds of boundary WZW models, as might arise if the three-manifold is, for example, a solid Klein bottle (a three-manifold whose boundary is the two-dimensional Klein bottle). On such a nonorientable space, at least sometimes the discrete theta angles would be nontrivial.
Furthermore, in orientifolds, discrete torsion is counted by $H^2_{\rm group} (Z,U(1))$ with a nontrivial action on the coefficients (see e.g.~\cite{Sharpe:2009hr,Bachas:2001id,Gawedzki:2007uz,Brunner:2001fs}), so that for example $H^2_{\rm group}({\mathbb Z}_2,U(1))$ can be nonzero, which again would result in boundary WZW models with nonzero discrete theta angle contributions. \section{Spectra} \label{sect:spectra} In this section we briefly describe the spectra of monopole operators and line operators in a theory with a gauged trivially-acting one-form symmetry, and argue that the results are consistent with decomposition~(\ref{eq:decomp-predict}). \subsection{Monopole operators} In two-dimensional theories, when one gauges a non-effectively-acting group, one gets twist fields and Gukov-Witten operators corresponding to conjugacy classes in the trivially-acting subgroup. In three-dimensional theories, instead of twist fields, one has monopole operators (see e.g.~\cite{Borokhov:2002ib,Borokhov:2002cg}), which play the same role. In this section we will outline their properties. In two dimensions, twist fields generate branch cuts, which in the language of topological defect lines are real codimension-one walls that implement the gauging of the zero-form symmetry. In three dimensions, when gauging a one-form symmetry, from thinking about topological defect lines one sees that the theory has codimension-two lines, which end in monopole operators, in the same way that in two dimensions the orbifold branch cuts terminate in twist fields. We can think of the monopole operators in three dimensions as local disorder operators: on a sphere surrounding the monopole operator associated to a $BG$ symmetry, one has a nontrivial $G$ gerbe, corresponding to an element of $H^2(S^2,G)$ (for $G$ assumed finite), just as on a circle surrounding a twist field in two dimensions one has a nontrivial bundle. In two dimensions, the twist fields associated to trivially-acting gauged zero-form symmetries are local dimension-zero operators, which can be used to form projectors onto the universes of decomposition. In three dimensions, the monopole operators associated to trivially-acting gauged one-form symmetries are closely analogous, and can again be used to form projection operators, in exactly the same fashion. In \cite[section 4.1.4]{Pantev:2022kpl}, projection operators are explicitly constructed from monopole operators in three-dimensional theories, and we encourage the reader to consult that reference for further details. \subsection{Line operator spectrum} \label{sect:lineops} Given a `gaugeable' one-form symmetry, described by a subset of the lines in the theory, there is a standard procedure for computing the spectrum of lines in the gauged theory, given as follows (see e.g.~\cite[section 2]{Moore:1989yh}, \cite[section 2.5]{Cordova:2017vab}, \cite{Benini:2022hzx}). For $B {\mathbb Z}_n$, let $g$ denote a line generating the others, and then: \begin{itemize} \item Exclude from the spectrum all lines $a$ which have monodromy\footnote{ In terms of the S-matrix, $B(a,b) = S_{ab}/S_{0b}$, see e.g.~\cite[equ'n (40)]{Barkeshli:2014cna}.
} $B(g,a) \neq 1$ under a generator $g$, where the $g$ action on a line $b$ is determined by the process \begin{equation} \raisebox{-25pt}{ \begin{picture}(30,50) \ArrowLine(15,0)(15,50) \Text(18,45)[l]{$b$} \ArrowArc(15,25)(10,110,270) \ArrowArc(15,25)(10,-90,70) \Text(30,25)[l]{$g$} \end{picture} } \: \: = \: B(g,b) \: \raisebox{-25pt}{ \begin{picture}(10,50) \ArrowLine(5,0)(5,50) \Text(8,25)[l]{$b$} \end{picture} } \end{equation} \item Identify any two lines $b$, $g \times b$ that differ by fusion with $g$, and finally \item Lines $b$ that are invariant under fusion (meaning $b = g \times b$) become $n$ lines in the spectrum of the gauged theory. \end{itemize} This is closely analogous to two-dimensional orbifolds, in which one omits non-invariant operators, and fixed points lead to twist fields. This is a special case of a more general procedure, sometimes known as anyon condensation, which is also applicable to noninvertible symmetries, unlike the basic algorithm above. See e.g.~\cite[section 4.1]{Benini:2022hzx}, \cite{Kong:2013aya,Fuchs:2002cm,Cordova:2017vab,Yu:2021zmu,Gaiotto:2019xmp} for further details. Now, in our case, the (noneffectively-acting) one-form symmetry we wish to gauge is not described by a set of lines within the original theory. Sometimes, in some special cases, we can describe it by adding lines to the theory, such as in the case that the entire gauged one-form symmetry acts trivially. In general, however, that procedure is not well-defined. Consider for example the case of $SU(2)_4$ Chern-Simons theory, whose spectrum of lines $\{ (0), (1), (2), (3), (4) \}$ is as described in appendix~\ref{app:lineops}. Let us consider gauging a $B {\mathbb Z}_4$. Now, $SU(2)_4$ has a $B {\mathbb Z}_2$ symmetry, corresponding to the lines (0), (1), so one could imagine extending it to $B {\mathbb Z}_4$ by replacing $\{ (0), (1) \}$ with $\{ (0), \ell_1, \ell_2, \ell_3 \}$ which obey \begin{equation} \ell_i \times \ell_j \: = \: \ell_{i + j \mod 4}, \end{equation} with $\ell_0 = (0)$. In order for the image of ${\mathbb Z}_2 \hookrightarrow {\mathbb Z}_4$ to act trivially, we require \begin{equation} \ell_2 \times (2,3,4) \: = \: (2,3,4), \end{equation} and for this to descend to the ordinary $SU(2)_4$, we also require \begin{equation} \ell_1 \times (2,3,4) \: = \: \ell_3 \times (2,3,4), \end{equation} which must match $(1) \times (2,3,4)$ in the $SU(2)_4$ fusion algebra given in appendix~\ref{app:lineops}: \begin{equation} \ell_{1,3} \times (2) \: = \: (3), \: \: \: \ell_{1,3} \times (3) \: = \: (2), \: \: \: \ell_{1,3} \times (4) \: = \: (4). \end{equation} The new lines $\ell_{i}$ are then defined to have trivial monodromy with all other lines \begin{equation} B(\ell_i, x) \: = \: 1 \end{equation} for all lines $x$. This much is uniquely specified by the statement of the extension. Now, we have not completely specified the extension of $SU(2)_4$; for example, the product $(2) \times (3)$ in $SU(2)_4$ contains a (1), so we would still need to decide whether to replace (1) with $\ell_1$ or $\ell_3$. However, in making such choices, we find an internal contradiction with the structure we have already described, namely a failure of associativity. For example, in $SU(2)_4$, as described in appendix~\ref{app:lineops}, \begin{equation} (2) \times (3) \: = \: (1) + (4), \: \: \: (3) \times (3) \: = \: (0) + (4). \end{equation} We could replace the (1) above with either $\ell_1$ or $\ell_3$. Suppose we take \begin{equation} (2) \times (3) \: = \: \ell_1 + (4).
\end{equation} Then, \begin{equation} \ell_1 \times ( (2) \times (3) ) \: = \: \ell_1 \times (\ell_1 + (4) ) \: = \: \ell_2 + (4), \end{equation} \begin{equation} ( \ell_1 \times (2) ) \times (3) \: = \: (3) \times (3) \: = \: (0) + (4). \end{equation} However, $\ell_2 \neq (0)$, so we see that \begin{equation} \ell_1 \times ( (2) \times (3) ) \: \neq \: ( \ell_1 \times (2) ) \times (3), \end{equation} and so associativity is broken. We encounter a similar problem if we choose \begin{equation} (2) \times (3) \: = \: \ell_3 + (4) \end{equation} instead and consider fusion with $\ell_3$. Put simply, we cannot enlarge the $B {\mathbb Z}_2$ inside $SU(2)_4$ to a noneffective $B {\mathbb Z}_4$ without breaking associativity. With this in mind, we outline here\footnote{ Although we are only interested in isomorphism classes of objects, presumably the full categorical description is in terms of module categories, or as a minor variation on the group actions described in \cite[section III.B]{Barkeshli:2014cna}. As we are only interested here in counting (isomorphism classes of) objects, and this is merely a minor variation on existing methods, we will be very schematic. } a minor extension of the prescription of \cite[section 2]{Moore:1989yh}, \cite[section 2.5]{Cordova:2017vab}, \cite{Benini:2022hzx} for counting line operators in three-dimensional theories with gauged one-form symmetries. (As we will not be using noninvertible symmetries, we will not attempt to describe the analogous construction for condensation algebra objects here.) Our approach is motivated by the action of a group $G$ on a set $M$: we distinguish $G$ and $M$; $G$ is not a subset of $M$ in general, though we can still define an action of $G$ on $M$ that enables us to make sense of the quotient $M/G$. Let $G$ be a finite abelian group, so that $BG$ is a group of one-form symmetries, and associate lines to elements of $G$. Consider a set of simple lines ${\cal C}$ (objects in a braided tensor category, which we will gauge). An action of $BG$ on ${\cal C}$ is then described by giving, for each $g \in G$ and line $L \in {\cal C}$, \begin{itemize} \item a monodromy $B(g,L)$, such that \begin{equation} B(g_1, L) \, B(g_2, L) \: = \: B(g_1 g_2, L), \end{equation} and \item a fusion $g \times L \in {\mathbb Z}[{\cal C}]$, meaning $g \times L = \sum_c N^c_{gL} c$ for $c \in {\cal C}$ and $N^c_{gL} \in {\mathbb Z}$, with the property that \begin{equation} g_1 \times \left( g_2 \times L \right) \: = \: (g_1 g_2) \times L. \end{equation} \end{itemize} We say that a line in $BG$ corresponding to $g \in G$ acts trivially if, for all $L \in {\cal C}$, \begin{equation} B(g,L) \: = \: 1 \: \: \: \mbox{ and } \: \: \: g \times L \: = \: L, \end{equation} and then to say $BG$ acts noneffectively, as in section~\ref{sect:trivacting}, means that for some $g \neq 1$ in $G$, the line corresponding to $g$ acts trivially. Now, given an action of $BG$ on ${\cal C}$, we propose to construct the lines of the quotient ${\cal C}/BG$ as follows, by close analogy with \cite[section 2]{Moore:1989yh}, \cite[section 2.5]{Cordova:2017vab}, \cite{Benini:2022hzx}. \begin{itemize} \item Exclude any $L$ such that for some $g \in G$, $B(g,L) \neq 1$, \item Identify $L \sim g \times L$ for each $g \in G$, \item For each $g \in G$ such that $g \times L = L$, we get a copy of $L$ in ${\cal C}/BG$.
\end{itemize} It is straightforward to check that in the special case that the lines in $BG$ are a subset of those in ${\cal C}$, this reduces to the prescription reviewed earlier and in \cite[section 2]{Moore:1989yh}, \cite[section 2.5]{Cordova:2017vab}, \cite{Benini:2022hzx}. As another special case, note that if all of $BG$ acts trivially, then in the quotient ${\cal C}/BG$, \begin{itemize} \item No lines in ${\cal C}$ are excluded, since $B(g,L) = 1$ for all $L$, \item No lines are identified, since $g \times L = L$ for all $L$, so fusion does not relate different lines, \item Since $g \times L = L$ for each $g \in G$ and each $L \in {\cal C}$, we get $|G|$ copies of the lines in ${\cal C}$. \end{itemize} This is consistent with the expectations of decomposition in this case: if we gauge a $BG$ that acts completely trivially on a theory, in the sense above, one expects to get $|G|$ copies of the theory. We will apply this computation in specific examples in Chern-Simons theories later in this paper, but for the moment, we give two toy examples, to illustrate the idea. First, consider $B {\mathbb Z}_2 / B {\mathbb Z}_2$. Let the lines of ${\cal C} = B {\mathbb Z}_2$ be generated over ${\mathbb Z}$ by $\{ (0), (1) \}$, where \begin{equation} (0) \times (0) \: = \: (0), \: \: \: (0) \times (1) \: = \: (1), \: \: \: (1) \times (1) \: = \: (0), \end{equation} and $B {\mathbb Z}_2$ acts as \begin{equation} g \times (0) \: = \: (1), \: \: \: g \times (1) \: = \: (0), \end{equation} and we take all monodromies $B = 1$. Then, applying the procedure above, to get the lines of ${\cal C}/B {\mathbb Z}_2$, \begin{itemize} \item Since $B(g,L) = 1$ for all $L \in {\cal C}$, no lines are excluded, \item Since $g \times (0) = (1)$, $(0) \sim (1)$, \item No lines are invariant. \end{itemize} Hence the quotient is generated by a single line, as one would expect. Next, consider $B {\mathbb Z}_2 / B {\mathbb Z}_4$, where the $B {\mathbb Z}_4$ acts noneffectively. Let the lines of ${\cal C} = B {\mathbb Z}_2$ be as above, and the generator $g$ of $B {\mathbb Z}_4$ acts as \begin{equation} g \times (0) \: = \: (1), \: \: \: g \times (1) \: = \: (0). \end{equation} As before, we take all monodromies $B = 1$. Applying the procedure above, \begin{itemize} \item Since $B(g,L) = 1$ for all $L \in {\cal C}$, no lines are excluded. \item Since $g \times (0) = (1)$, $(0) \sim (1)$, \item Since $g^2 \times (0) = (0)$ and $g^2 \times (1) = (1)$, $(0) \sim (1)$ appears twice in the quotient. \end{itemize} Thus, the quotient ${\cal C} / B {\mathbb Z}_4$ is generated over ${\mathbb Z}$ by two lines, as expected since a ${\mathbb Z}_2 \subset {\mathbb Z}_4$ describes trivially-acting lines. We should also briefly observe that the theories we are describing, which decompose, have the property that they violate the axiom of remote detectability in a topological order, see e.g.~\cite{Levin:2013gaa,Kong:2014qka,Johnson-Freyd:2020usu}. This axiom says that there are no invisible lines in the bulk theory (technically, that the category of lines has trivial center). Violation of remote detectability signals multiple vacua and therefore a decomposition, much as violations of cluster decomposition do in other contexts \cite{Hellerman:2006zs}. \subsection{Bulk-boundary map} Let us now consider the bulk-boundary map between lines in the three-dimensional bulk and on the two-dimensional boundary. Let ${\cal C}$ be the category of lines which act trivially in the bulk.
Suppose we have a line in ${\cal C}$ which ends on the boundary, defining an object in the two-dimensional vertex operator algebra $V$. We can describe this bulk-boundary relation by a functor \begin{equation} F: \: {\cal C} \: \longrightarrow \: {\rm Rep}(V), \end{equation} (for Rep$(V)$ the category of representations of $V$) which takes a line to the vector space of ways the line can end on the boundary, giving point operators. As observed in \cite[section 3.5]{Yu:2020twi}, a one-form symmetry that acts trivially in the bulk might act nontrivially on the boundary, and the theory can still decompose, much as with Chan-Paton factors and D-branes in two-dimensional theories. Broadly speaking, the different line operators in the three-dimensional bulk end on the various two-dimensional sectors of the boundary theory. A three-dimensional theory may have surface operators which are not totally determined by the line operators. In the case where the three-dimensional theory has a single local vacuum, all the surfaces can be built as condensations, i.e.~networks of lines. However, when there are multiple vacua, as in the cases we are interested in, this fails to be true. The surfaces which are not built as a network of lines will end on a line on the boundary. These lines define the `action' of a trivially-acting zero-form symmetry. In two dimensions, if one gauges a trivially-acting zero-form symmetry, then one obtains an emergent global one-form symmetry (and hence a decomposition). From the decomposition conjecture~(\ref{eq:decomp-predict}), the different universes and hence the different ground states are labeled by elements of the Pontryagin dual of the one-form symmetry group. On the other hand, the surfaces in the bulk which enact a two-form symmetry come from gauging a trivially-acting one-form symmetry. So while the lines that the surface ends on have trivial action on the boundary, the surface itself is not necessarily trivial in the bulk. This is summarized in the following diagram, where $F$ is the functor that maps objects to the boundary: \begin{equation} \xymatrix{ \mbox{\underline{Bulk symmetry}} & \mbox{\underline{Boundary symmetry}} \\ \mbox{trivially-acting one-form} \ar[r]^-{F} \ar[d]^{\rm gauge} & \mbox{trivially-acting zero-form} \ar[d]^{\rm gauge} \\ \mbox{global two-form} \ar[r]^-{F} & \mbox{global one-form} } \end{equation} \section{Examples} \label{sect:exs} In the next several subsections we will walk through examples of the decomposition proposed in section~\ref{sect:decomp}. Where possible, we will apply level-rank duality to perform self-consistency tests. In all cases, we will compare to the decomposition of the boundary WZW model. In particular, as reviewed in section~\ref{sect:review-wzw}, decomposition is reasonably well-understood in two-dimensional theories, and so we get solid consistency tests by checking that the boundary WZW decomposition implied by the bulk Chern-Simons decomposition matches existing two-dimensional results. In each case, we will assume that levels are chosen so that the theories are well-defined, but will not list those conditions explicitly. \subsection{Chern-Simons$(SU(2))/B {\mathbb Z}_2$, $K=1$} \label{sect:ex:so3} In this section, we will reproduce a well-known result as a special case of the decomposition prediction~(\ref{eq:decomp-predict}). Specifically, we consider gauging the $B {\mathbb Z}_2$ central one-form symmetry in $SU(2)$ Chern-Simons theory.
Here, this $B {\mathbb Z}_2$ is not trivially-acting, and so no decomposition is expected. In particular, this gauging is known (see e.g.~\cite{Moore:1989yh}) to be equivalent to the $SO(3)$ Chern-Simons theory at the same level. At the level of the path integral for the gauge theory, this is discussed in appendix~\ref{sect:gauging-bk}. We can understand this as a special case of the decomposition prediction~(\ref{eq:decomp-predict}). In the language of that statement, we identify $A = {\mathbb Z}_2$, $H = SU(2)$, and $d: A \rightarrow H$ is the inclusion map of the center, ${\mathbb Z}_2 \hookrightarrow SU(2)$. Then, the kernel of $d$ vanishes, so $K = 1$, and $G = H/A = SO(3)$. This corresponds to the exact sequence \begin{equation} 1 \: \longrightarrow \: 1 \: \longrightarrow \: {\mathbb Z}_2 \: \stackrel{d}{\longrightarrow} \: SU(2) \: \longrightarrow \: SO(3) \: \longrightarrow \: 1. \end{equation} Furthermore, in this case, since $K = 1$, the extension class $[\omega] \in H^3(G,K)$ is trivial, $\omega = 1$, so $\phi^* \omega = 1$ and there is no discrete theta angle. Putting this together, we see that the decomposition prediction~(\ref{eq:decomp-predict}) in this case is \begin{equation} \left[ \mbox{Chern-Simons}(SU(2)) / B {\mathbb Z}_2 \right] \: = \: \mbox{Chern-Simons}(SO(3)), \end{equation} which reproduces known results. Let us also compute the line operator spectrum in this example. This is a standard computation, but we will quickly outline it using the tools of section~\ref{sect:lineops}, with an eye towards later, more obscure, versions. There are five line operators in $SU(2)_4$ Chern-Simons, as listed in appendix~\ref{app:lineops}, which we denote \begin{equation} (0), \: (1), \: (2), \: (3), \: (4). \end{equation} We gauge a $B {\mathbb Z}_2$, with lines $\{ \ell_0, \ell_1 \}$, where \begin{equation} \ell_i \times \ell_j \: = \: \ell_{i + j \mod 2}, \end{equation} and which act on the $SU(2)_4$ lines as \begin{equation} \ell_0 \times L \: = \: L, \: \: \: \ell_1 \times L \: = \: (1) \times L, \end{equation} and with \begin{equation} B(\ell_0, L) \: = \: +1, \: \: \: B(\ell_1, 0) \: = \: B(\ell_1, 1) \: = \: B(\ell_1,4) \: = \: +1, \: \: \: B(\ell_1, 2) \: = \: B(\ell_1, 3) \: = \: -1. \end{equation} (Clearly, we can identify the action of this $B {\mathbb Z}_2$ with the action of the lines $(0)$, $(1)$ in $SU(2)_4$.) It is straightforward to check that this gives a well-defined action in the sense of section~\ref{sect:lineops}. Applying the procedure there, to get the lines of $SU(2)_4 / B {\mathbb Z}_2$, \begin{itemize} \item the lines (2), (3) are not invariant under monodromies and so should be excluded, \item from $(1) \times (1) = (0)$, the lines $(0)$ and $(1)$ should be identified in the quotient, and \item from $(1) \times (4) = (4)$, the line $(4)$ is duplicated, \end{itemize} so that the $SU(2)_4/B {\mathbb Z}_2$ spectrum consists of the vacuum line and two copies of $(4)$, which is the standard result for $SO(3)_4$. Now, let us turn to the boundary theory. On the boundary, this reduces to the statement \begin{equation} \left[ {\rm WZW}(SU(2)) / {\mathbb Z}_2 \right] \: = \: {\rm WZW}(SO(3)), \end{equation} which is standard.
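Since the counting rules of section~\ref{sect:lineops} are purely combinatorial, they are straightforward to automate, at least in examples (such as all of those in this paper) in which each gauge line fuses simple lines to simple lines. The following short Python sketch is our own illustration, not drawn from any of the references; it encodes the $SU(2)_4$ monodromy and fusion data quoted above, and implements the three steps of section~\ref{sect:lineops} directly.
\begin{verbatim}
# Lines of SU(2)_4, labeled as in the appendix.
lines = [0, 1, 2, 3, 4]

# The gauged B Z_2 = {ell_0, ell_1}, with ell_1 acting by fusion with (1).
# fuse[g][L] is the simple line ell_g x L.
fuse = {0: {L: L for L in lines},
        1: {0: 1, 1: 0, 2: 3, 3: 2, 4: 4}}

# Monodromies B(ell_g, L), as quoted above.
mono = {0: {L: +1 for L in lines},
        1: {0: +1, 1: +1, 2: -1, 3: -1, 4: +1}}

G = [0, 1]

# Step 1: exclude lines with nontrivial monodromy under some gauge line.
kept = [L for L in lines if all(mono[g][L] == +1 for g in G)]

# Step 2: identify L ~ ell_g x L, grouping surviving lines into orbits.
orbits = sorted({tuple(sorted({fuse[g][L] for g in G})) for L in kept})

# Step 3: each ell_g fixing a line contributes one copy in the quotient.
for orbit in orbits:
    copies = sum(1 for g in G if fuse[g][orbit[0]] == orbit[0])
    print("orbit", list(orbit), "-> copies:", copies)
\end{verbatim}
Running this prints one copy of the orbit $\{(0), (1)\}$ and two copies of $(4)$, reproducing the $SO(3)_4$ spectrum found above. The same skeleton, with different data, covers the later examples in this section.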
\subsection{Chern-Simons$(SU(2)) \times [{\rm point}/B {\mathbb Z}_2]$, $K = {\mathbb Z}_2$} Now, let us apply the decomposition prediction~(\ref{eq:decomp-predict}) to a different case, namely one in which we gauge a trivially-acting $B {\mathbb Z}_2$ `acting' on an $SU(2)$ Chern-Simons theory, uncoupled from the central one-form symmetry of the $SU(2)$ theory. This is perhaps the cleanest example of a $B {\mathbb Z}_2$ gauging that acts trivially: we gauge a $B {\mathbb Z}_2$ in bulk that does nothing at all to the $SU(2)$. Here, in the notation of~(\ref{eq:decomp-predict}), we take $H = SU(2)$ and $A = {\mathbb Z}_2$; however, the map $d: A \rightarrow H$ maps all of ${\mathbb Z}_2$ to $1$. In this case, the kernel of $d$, $K$, is all of ${\mathbb Z}_2$, and $G = H = SU(2)$. The decomposition prediction for this case is that \begin{equation} \left[ \mbox{Chern-Simons}(SU(2)) / B {\mathbb Z}_2 \right] \: = \: \coprod_{\theta \in \hat{K}} \mbox{Chern-Simons}(SU(2)), \end{equation} two copies of the $SU(2)$ Chern-Simons theory. Furthermore, in this case there are no nontrivial discrete theta angles, hence the decomposition prediction can be written more simply as \begin{equation} \left[ \mbox{Chern-Simons}(SU(2)) / B {\mathbb Z}_2 \right] \: = \: \coprod_2 \mbox{Chern-Simons}(SU(2)). \end{equation} Let us briefly consider the spectrum of line operators, following the procedure discussed in section~\ref{sect:lineops}. We describe the trivially-acting $B {\mathbb Z}_2$ in terms of two lines $\{\ell_0, \ell_1\}$, where \begin{equation} \ell_i \times \ell_j \: = \: \ell_{i+j \mod 2}, \end{equation} and with an action on the lines of $SU(2)_4$ given by \begin{equation} B(\ell_i, L) \: = \: +1, \: \: \: \ell_i \times L \: = \: L. \end{equation} It is straightforward to check that this gives a well-defined action in the sense of section~\ref{sect:lineops}. Next, we compute the spectrum of $SU(2)_4 / B {\mathbb Z}_2$, for this trivially-acting $B {\mathbb Z}_2$. From the rules in section~\ref{sect:lineops}, \begin{itemize} \item None of the original lines of the $SU(2)$ Chern-Simons theory are omitted, as they all have trivial monodromy under the generator $\ell_1$, \item Since $\ell_1 \times \ell_1 = \ell_0$, we see that in the gauged theory, $\ell_1$ and $\ell_0$ are identified with one another, \item Since all of the original lines are invariant under fusion ($\ell_1 \times L = L$), they are all duplicated. \end{itemize} As a result, the line operator spectrum of the gauged theory is two copies of the line operator spectrum of the original $SU(2)$ Chern-Simons theory, consistent with decomposition (see also the sketch at the end of this subsection). This result could also be obtained by adding one new line $a$ to the lines of $SU(2)_4$, which interacts trivially with all other lines, and then condensing $\{ (0), a \}$ in the ordinary fashion, though as we discussed in section~\ref{sect:lineops}, it will not always be possible to do that. Next, we turn to the boundary theory. In the boundary WZW model, bulk decomposition becomes the statement that \begin{equation} [ {\rm WZW}(SU(2)) / {\mathbb Z}_2 ] \: = \: \coprod_2 {\rm WZW}(SU(2)). \end{equation} In the ${\mathbb Z}_2$ orbifold on the left, the ${\mathbb Z}_2$ acts trivially on the $SU(2)$ WZW model, for which case ordinary two-dimensional decomposition predicts exactly the statement above, that the completely-trivially-acting ${\mathbb Z}_2$ orbifold of a WZW model is just two copies of the same WZW model.
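As promised above, here is the analogous sketch for this completely-trivially-acting $B {\mathbb Z}_2$ (again our own illustration, in the same style as the sketch in section~\ref{sect:ex:so3}):
\begin{verbatim}
lines = [0, 1, 2, 3, 4]      # lines of SU(2)_4
G = [0, 1]                   # the trivially-acting B Z_2

fuse = {g: {L: L for L in lines} for g in G}    # ell_g x L = L
mono = {g: {L: +1 for L in lines} for g in G}   # B(ell_g, L) = +1

kept = [L for L in lines if all(mono[g][L] == +1 for g in G)]
orbits = sorted({tuple(sorted({fuse[g][L] for g in G})) for L in kept})
for orbit in orbits:
    copies = sum(1 for g in G if fuse[g][orbit[0]] == orbit[0])
    print("orbit", list(orbit), "-> copies:", copies)
\end{verbatim}
Every line survives as its own orbit, with two copies each, i.e.~two copies of the $SU(2)_4$ spectrum, as found above.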
Thus, the boundary theory matches results from two-dimensional decomposition, as expected. \subsection{Chern-Simons$(SU(2)) / B {\mathbb Z}_{4}$, $K = {\mathbb Z}_2$} \label{sect:ex:su2-bz4} Consider an $SU(2)$ Chern-Simons theory in three dimensions, and gauge a $B {\mathbb Z}_4$ that acts via projecting to a $B{\mathbb Z}_2$ which acts as the center symmetry. In this case, there is a trivially-acting $B {\mathbb Z}_2$, so in broad brushstrokes one expects two copies of a $B {\mathbb Z}_2$-gauged $SU(2)$ Chern-Simons theory. Let us walk through the decomposition prediction~(\ref{eq:decomp-predict}) in this case. Here, we have $H = SU(2)$ and $A = {\mathbb Z}_4$, with the map $d: A \rightarrow SU(2)$ mapping the ${\mathbb Z}_4$ onto the center ${\mathbb Z}_2$ of $SU(2)$. Thus, the map $d$ surjects onto the center, but is not injective: its kernel is $K = {\mathbb Z}_2$. Similarly, \begin{equation} G \: = \: H / {\rm im}\, d \: = \: SU(2)/ {\mathbb Z}_2 \: = \: SO(3). \end{equation} Putting this together, we see in this case that the decomposition prediction~(\ref{eq:decomp-predict}) is \begin{equation} \label{eq:decomp:su2:z4} \left[ \mbox{Chern-Simons}(SU(2)) / B {\mathbb Z}_4 \right] \: = \: \mbox{Chern-Simons}(SO(3))_+ \: \coprod \: \mbox{Chern-Simons}(SO(3))_-, \end{equation} where the $\pm$ denote the two values of the discrete theta angle coupling to the characteristic class $\beta_{\alpha}( w_G )$, where $w_G = w_{SO(3)}$, for $\alpha$ the class of the extension \begin{equation} \label{eq:z2-bock} 1 \: \longrightarrow \: {\mathbb Z}_2 \: \longrightarrow \: {\mathbb Z}_4 \: \longrightarrow \: {\mathbb Z}_2 \: \longrightarrow \: 1, \end{equation} and where here, $w_{SO(3)} = w_2$, the second Stiefel-Whitney class. Next, we will argue\footnote{ E.S. would like to thank Y.~Tachikawa for observing the pertinent properties of $w_3$.} that the characteristic class $\beta_{\alpha}(w_2)$ is the third Stiefel-Whitney class $w_3$. From the Wu formula \cite[prob. 8-A]{ms} for Steenrod squares, which map ${\rm Sq}^k: H^{\bullet}(X,{\mathbb Z}_2) \rightarrow H^{\bullet+k}(X,{\mathbb Z}_2)$, $k \geq 0$: \begin{equation} {\rm Sq}^k( w_m(\xi) ) \: = \: \sum_{t=0}^k \left( \begin{array}{c} k-m \\ t \end{array} \right) w_{k-t}(\xi) \cup w_{m+t}(\xi) \: = \: \sum_{t=0}^k \left( \begin{array}{c} m-k+t-1 \\ t \end{array} \right) w_{k-t}(\xi) \cup w_{m+t}(\xi), \end{equation} where each $w_j = w_j(\xi)$ for a real vector bundle $\xi$, and in the equality, we have used the fact that \begin{eqnarray} \left( \begin{array}{c} k-m \\ t \end{array} \right) & = & \frac{ (k-m)(k-m-1) \cdots (k-m-t+1) }{ t! }, \\ & = & (\pm) \frac{ (m-k)(m-k+1)\cdots(m-k+t-1)}{ t! } \: \equiv \: \left( \begin{array}{c} m-k+t-1 \\ t \end{array} \right) \mod 2. \nonumber \end{eqnarray} (See e.g.~\cite{se1} for this and related observations.) As a result, for any real vector bundle, \begin{equation} {\rm Sq}^1(w_2) \: = \: w_1 \cup w_2 \: + \: w_0 \cup w_3 \: = \: w_1 \cup w_2 \: + \: w_3, \end{equation} so if $w_1 = 0$, as is the case for $SO(3)$ bundles, then $w_3 = {\rm Sq}^1(w_2)$. (In principle, this is one explanation of why all $SO(3)$ bundles can be constructed by twisting $SU(2)$ bundles by ${\mathbb Z}_2$ gerbes: the gerbe characteristic class determines not only the second Stiefel-Whitney class $w_2$ of the $SO(3)$ bundles, but also $w_3$ via Sq$^1$, as above.)
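As a quick arithmetic check on the mod 2 rewriting of binomial coefficients used above, the following short Python snippet (a minimal sketch of our own) verifies the congruence over a range of integer arguments, using the generalized binomial coefficient $a(a-1)\cdots(a-t+1)/t!$ for possibly negative $a$:
\begin{verbatim}
from math import prod, factorial

def gen_binom(a, t):
    # generalized binomial coefficient a(a-1)...(a-t+1)/t!, defined for
    # any integer a (possibly negative) and t >= 0; always an integer
    return prod(range(a - t + 1, a + 1)) // factorial(t)

# check C(k-m, t) = C(m-k+t-1, t) mod 2, as used in the Wu formula above
for k in range(8):
    for m in range(8):
        for t in range(8):
            assert gen_binom(k - m, t) % 2 == gen_binom(m - k + t - 1, t) % 2
print("congruence verified for all sampled (k, m, t)")
\end{verbatim}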
Furthermore, the action of Sq$^1$ is the Bockstein homomorphism $\beta$ associated to the extension \begin{equation} \label{eq:z2z4} 1 \: \longrightarrow \: {\mathbb Z}_2 \: \longrightarrow \: {\mathbb Z}_4 \: \longrightarrow \: {\mathbb Z}_2 \: \longrightarrow \: 1 \end{equation} (see e.g.~\cite[section 4.L]{hatcher}), meaning \begin{equation} {\rm Sq}^1(x) \: = \: \beta(x) \end{equation} for any $x$. The extension~(\ref{eq:z2z4}) above coincides with $\alpha$ in the present case, so we see that in this example, the discrete theta angle couples to \begin{equation} \beta_{\alpha}(w_2) \: = \: {\rm Sq}^1(w_2), \end{equation} using~(\ref{eq:balpha-bock}). We also see that in this example, this class can be described even more simply as $w_3$, the third Stiefel-Whitney class, as $w_3 = {\rm Sq}^1(w_2)$. Now, on a three-manifold $M$, we can write Sq$^1(x)$ for any $x$ in terms of the Wu class $\nu_1 \in H^1(M,{\mathbb Z}_2)$ as \cite[chapter 11]{ms} \begin{equation} {\rm Sq}^1(x) \: = \: \nu_1 \cup x. \end{equation} Furthermore, \cite[theorem 11.14]{ms} \begin{equation} \nu_1 \: = \: w_1(TM), \end{equation} so assembling these pieces, we have that \begin{equation} w_3(\xi) \: = \: {\rm Sq}^1(w_2(\xi)) \: = \: w_1(TM) \cup w_2(\xi). \end{equation} As a result, the third Stiefel-Whitney class $w_3$ will only be nontrivial on a nonorientable three-manifold $M$. However, Chern-Simons theories are not defined on nonorientable spaces. In section~\ref{sect:ex:u1-bzkn}, we will use level-rank duality to perform a self-consistency check of decomposition in this case. Now, let us check this prediction by computing the line spectrum in this gauged Chern-Simons theory. First, following section~\ref{sect:lineops}, we define a $B {\mathbb Z}_4$ by lines $\{ \ell_0, \ell_1, \ell_2, \ell_3 \}$ such that \begin{equation} \ell_i \times \ell_j \: = \: \ell_{i+j \mod 4}, \end{equation} and which act on the lines of $SU(2)_4$ (described in appendix~\ref{app:lineops}) as follows: \begin{equation} B(\ell_{0,2}, L) \: = \: +1, \: \: \: B(\ell_{1,3}, 0) \: = \: B(\ell_{1,3}, 1) \: = \: B(\ell_{1,3}, 4) \: = \: +1, \: \: \: B(\ell_{1,3}, 2) \: = \: B(\ell_{1,3}, 3) \: = \: -1, \end{equation} \begin{equation} \ell_0 \times L \: = \: \ell_2 \times L \: = \: L, \: \: \: \ell_1 \times L \: = \: \ell_3 \times L \: = \: (1) \times L. \end{equation} It is straightforward to check that this action of $B {\mathbb Z}_4$ on the lines of $SU(2)_4$ is well-defined in the sense of section~\ref{sect:lineops}. As $\ell_2$ acts trivially, this is also a non-effective action, in the sense of section~\ref{sect:trivacting}. Next, we follow the procedure outlined in section~\ref{sect:lineops} to get the lines of $SU(2)_4 / B {\mathbb Z}_4$: \begin{itemize} \item Lines (2), (3) have $B(\ell_{1,3}, L) \neq +1$, and so are omitted. \item Since $\ell_{1,3} \times (1) = (0)$, we identify the lines $(0) \sim (1)$. \item Since $\ell_i \times (4) = (4)$ for all $i$, we get four copies of (4) in the spectrum of $SU(2)_4 / B {\mathbb Z}_4$, and since $\ell_{0,2} \times (1) = (1)$, $\ell_{0,2} \times (0) = (0)$, we get two copies of $(0) \sim (1)$. \end{itemize} Thus, we see that we get two copies of the lines of $SO(3)_4$, consistent with expectations from decomposition. Before going on, let us compute the lines in one more example, specifically $SU(2)_4 / B {\mathbb Z}_{2p}$, where the ${\mathbb Z}_{2p}$ projects to the ${\mathbb Z}_2$ center of $SU(2)_4$, with kernel ${\mathbb Z}_p$.
The lines of $B {\mathbb Z}_{2p}$ are $\{\ell_0, \cdots, \ell_{2p-1}\}$, where \begin{equation} \ell_i \times \ell_j \: = \: \ell_{i + j \mod 2p}, \end{equation} and their action on $SU(2)_4$ is given by \begin{equation} B(\ell_{\rm even}, L) \: = \: +1, \: \: \: B(\ell_{\rm odd}, 0) \: = \: +1 \: = \: B(\ell_{\rm odd}, 1) \: = \: B(\ell_{\rm odd}, 4), \end{equation} \begin{equation} B(\ell_{\rm odd}, 2) \: = \: -1 \: = \: B(\ell_{\rm odd}, 3), \end{equation} \begin{equation} \ell_{\rm even} \times L \: = \: L, \: \: \: \ell_{\rm odd} \times L \: = \: (1) \times L. \end{equation} As before, it is straightforward to check that this action of $B {\mathbb Z}_{2p}$ is well-defined in the sense of section~\ref{sect:lineops}, and since $\{\ell_{\rm even}\}$ act trivially, it is a non-effective action, in the sense of section~\ref{sect:trivacting}. Next, we follow the procedure outlined in section~\ref{sect:lineops} to get the lines of $SU(2)_4 / B {\mathbb Z}_{2p}$: \begin{itemize} \item Lines (2), (3) have $B(\ell_{\rm odd}, L) \neq +1$, and so are omitted. \item Since $\ell_{\rm odd} \times (1) = (0)$, we identify the lines $(0) \sim (1)$. \item Since $\ell_i \times (4) = (4)$ for all $i$, we get $2p$ copies of (4), and since $\ell_{\rm even} \times (1) = (1)$, $\ell_{\rm even} \times (0) = (0)$, we get $p$ copies of $(0) \sim (1)$. \end{itemize} Altogether, we find $p$ copies of the lines of $SO(3)_4$, consistent with expectations from decomposition, since $B {\mathbb Z}_p$ acts trivially. Before going on, let us briefly discuss the boundary theory. The Chern-Simons decomposition~(\ref{eq:decomp:su2:z4}) becomes a decomposition of WZW models, formally \begin{equation} \left[ {\rm WZW}(SU(2)) / {\mathbb Z}_4 \right] \: = \: {\rm WZW}(SO(3))_+ \: \coprod \: {\rm WZW}(SO(3))_-. \end{equation} Here the ${\mathbb Z}_2$ discrete theta angle couples to the image of the element of $H^3(B SO(3), {\mathbb Z}_2)$ (corresponding to third Stiefel-Whitney classes) in $H^2( SO(3), {\mathbb Z}_2) = {\mathbb Z}_2$. However, the generator of this group is Sq$^1(a)$, where $a$ generates $H^1(SO(3), {\mathbb Z}_2)$ and for reasons discussed previously, Sq$^1(a) = w_1(TM) \cup a$, hence is nonzero only if the two-dimensional space is nonorientable. We will consider various generalizations of this example, returning to it for special levels to utilize level-rank duality consistency checks in section~\ref{sect:ex:u1-bzkn}. \subsection{Chern-Simons$(SU(n)) / B {\mathbb Z}_{np}$, $K = {\mathbb Z}_{p}$} Next, we will consider gauging the action of $B {\mathbb Z}_{np}$ on $SU(n)$ Chern-Simons, where the ${\mathbb Z}_{np}$ acts by projecting to the center ${\mathbb Z}_n$ of $SU(n)$, and study the discrete theta angles for special values of $n$ and $p$ beyond those discussed already. In terms of the decomposition prediction~(\ref{eq:decomp-predict}), we take $A = {\mathbb Z}_{np}$, $H = SU(n)$, and $d: A \rightarrow H$ acts by projecting to $Z = {\mathbb Z}_n \subset Z(H)$. Then, the kernel $K = {\mathbb Z}_{p}$, $G = SU(n)/{\mathbb Z}_n$, and we have the four-term exact sequence \begin{equation} 1 \: \longrightarrow \: {\mathbb Z}_{p} \: \longrightarrow \: {\mathbb Z}_{np} \: \longrightarrow \: SU(n) \: \longrightarrow \: SU(n)/{\mathbb Z}_{n} \: \longrightarrow \: 1.
\end{equation} In general terms, decomposition~(\ref{eq:decomp-predict}) then predicts that \begin{equation} \left[ \mbox{Chern-Simons}(SU(n)) / BA \right] \: = \: \coprod_{\theta \in \hat{K}} \mbox{Chern-Simons}(SU(n)/{\mathbb Z}_n)_{\theta(\omega)}, \end{equation} where the $\theta(\omega)$ are discrete theta angles coupling to the characteristic class defined by $\beta_{\alpha}( w_{SU(n)/{\mathbb Z}_n})$, where $w_{ SU(n)/{\mathbb Z}_n} \in H^2_{\rm sing}(B SU(n)/{\mathbb Z}_n, {\mathbb Z}_n)$ is a generalization of the second Stiefel-Whitney class to $n \geq 2$, and $\beta_{\alpha}$ is the Bockstein map in the long exact sequence associated to the extension \begin{equation} 1 \: \longrightarrow \: K ( = {\mathbb Z}_p ) \: \longrightarrow \: A (= {\mathbb Z}_{np}) \: \longrightarrow \: Z ( = {\mathbb Z}_n ) \: \longrightarrow \: 1, \end{equation} with extension class $\alpha \in H^2_{\rm group}( Z, K)$. We will evaluate this expression in some special cases, in which the expression for the discrete theta angles simplifies. We will use \cite{bb}, which provides the cohomology of the group $SU(n)/{\mathbb Z}_n$, which (modulo a degree shift) is essentially the same as that of its classifying space. (See also \cite{xgu,duan,kac,notbohm,km,borel}.) First, consider the case that $p$ is a prime number that does not divide $n$. Then, from \cite[section 7]{bb}, \begin{equation} H^{\bullet}_{\rm sing}(B SU(n)/{\mathbb Z}_n, {\mathbb Z}_p) \: = \: H^{\bullet}_{\rm sing}(B SU(n), {\mathbb Z}_p), \end{equation} and so there is no ${\mathbb Z}_p$-valued characteristic class in degree three, hence no discrete theta angle. In this case, the decomposition above can be written more simply as \begin{equation} \left[ \mbox{Chern-Simons}(SU(n)) / BA \right] \: = \: \coprod_{p} \mbox{Chern-Simons}(SU(n)/{\mathbb Z}_n). \end{equation} Next, suppose that $p=2$, and $n=2m$ for $m$ odd. From \cite[cor. 4.2]{bb}, the group $H^3_{\rm sing}( B SU(n)/{\mathbb Z}_n, {\mathbb Z}_2) \neq 0$, and so for $w_{ SU(n)/{\mathbb Z}_n } \in H^2_{\rm sing}(B SU(n)/{\mathbb Z}_n, {\mathbb Z}_n)$, we get a discrete theta angle coupling to $\beta_{\alpha}(w_{SU(n)/{\mathbb Z}_n})$, the image of $w_{ SU(n)/{\mathbb Z}_n}$ under the Bockstein map associated to the extension \begin{equation} 1 \: \longrightarrow \: {\mathbb Z}_p \: \longrightarrow \: {\mathbb Z}_{pn} \: \longrightarrow \: {\mathbb Z}_n \: \longrightarrow \: 1, \end{equation} with extension class $\alpha \in H^2_{\rm group}( {\mathbb Z}_n, {\mathbb Z}_p)$. Since $p=2$, we can write $\beta_{\alpha}(w_{SU(n)/{\mathbb Z}_n}) = {\rm Sq}^1(w_{SU(n)/{\mathbb Z}_n})$, as before, and also just as before, it is only nonzero on nonorientable spaces, as we saw for the case of $SU(2)$ and $SO(3)$ theories in section~\ref{sect:ex:su2-bz4}. Now, let us consider the corresponding boundary WZW model. The bulk decomposition above predicts \begin{equation} \left[ {\rm WZW}( SU(n) ) / {\mathbb Z}_{n p} \right] \: = \: \coprod_{\theta \in \hat{K}} {\rm WZW}( SU(n)/{\mathbb Z}_n )_{\theta(\omega)}. \end{equation} Now, from ordinary two-dimensional decomposition, since there is no discrete torsion in a ${\mathbb Z}_n$ orbifold, \begin{equation} \left[ {\rm WZW}( SU(n) ) / {\mathbb Z}_{n p} \right] \: = \: \coprod_{p} {\rm WZW}( SU(n) / {\mathbb Z}_n ). \end{equation} This is certainly consistent with the special cases computed above, in which the bulk discrete theta angle vanishes.
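As a concrete illustration of the first special case above, take $n = 2$ and $p = 3$: since $3$ is prime and does not divide $2$, there is no discrete theta angle, and the prediction reads \begin{equation} \left[ \mbox{Chern-Simons}(SU(2)) / B {\mathbb Z}_6 \right] \: = \: \coprod_3 \mbox{Chern-Simons}(SO(3)). \end{equation}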
\subsection{Chern-Simons$({\rm Spin}(n)) / B {\mathbb Z}_{2p}$, $K = {\mathbb Z}_p$} \label{sect:cs:spin-n} Next, we consider a simple generalization of the example above, in which we gauge a $B {\mathbb Z}_{2p}$ action on Spin$(n)$ Chern-Simons, where the $B {\mathbb Z}_{2p}$ acts by first projecting to a $B {\mathbb Z}_2$ which acts through (a subgroup of) the center. We begin by discussing the case that the ${\mathbb Z}_2$ is such that Spin$(n)/{\mathbb Z}_2 = SO(n)$. In the case that $n$ is divisible by four, there is a second choice of ${\mathbb Z}_2$ subgroup, for which the quotient Spin$(n)/{\mathbb Z}_2 \neq SO(n)$. We will discuss the second case at the end of this section. In terms of the decomposition prediction~(\ref{eq:decomp-predict}), we take $A = {\mathbb Z}_{2p}$, $H = {\rm Spin}(n)$, and $d: A \rightarrow H$ is the map that projects ${\mathbb Z}_{2p}$ onto the ${\mathbb Z}_2$ in the center of Spin$(n)$ such that Spin$(n)/{\mathbb Z}_2 = SO(n)$. Then, the kernel of $d$ is $K = {\mathbb Z}_p$, $G = H/{\rm im}\, d = SO(n)$, and we have the exact sequence \begin{equation} 1 \: \longrightarrow \: {\mathbb Z}_p \: \longrightarrow \: {\mathbb Z}_{2p} \: \longrightarrow \: {\rm Spin}(n) \: \longrightarrow \: SO(n) \: \longrightarrow \: 1. \end{equation} This extension is nontrivial, and defines a discrete theta angle coupling to $\beta_{\alpha}(w_{SO(n)})$, with $w_{SO(n)} = w_2$, the second Stiefel-Whitney class, as before, and the Bockstein homomorphism $\beta_{\alpha}$ is associated to the extension \begin{equation} 1 \: \longrightarrow \: {\mathbb Z}_p \: \longrightarrow \: {\mathbb Z}_{2p} \: \longrightarrow \: {\mathbb Z}_2 \: \longrightarrow \: 1 \end{equation} of extension class $\alpha \in H^2_{\rm group}({\mathbb Z}_2,{\mathbb Z}_p)$. Decomposition then predicts~(\ref{eq:decomp-predict}) \begin{equation} \label{eq:spinn-z2p} \left[ \mbox{Chern-Simons}({\rm Spin}(n)) / B {\mathbb Z}_{2p} \right] \: = \: \coprod_{\theta \in \hat{\mathbb Z}_p} \mbox{Chern-Simons}(SO(n))_{\theta}, \end{equation} where the $\theta$ denotes the discrete theta angle coupling. In the case that $p=2$, for the same reasons as discussed in section~\ref{sect:ex:su2-bz4}, we can identify $\beta_{\alpha}(w_2)$ with $w_3$, the third Stiefel-Whitney class. However, by the same reasoning as described in subsection~\ref{sect:ex:su2-bz4}, the third Stiefel-Whitney class will only be nontrivial on nonorientable three-manifolds. Therefore, on orientable three-manifolds, for $p=2$, the statement of decomposition reduces to \begin{equation} \left[ \mbox{Chern-Simons}({\rm Spin}(n)) / B {\mathbb Z}_4 \right] \: = \: \coprod_2 \mbox{Chern-Simons}(SO(n)). \end{equation} Next, let us briefly compare to the boundary WZW model. On the boundary, from the decomposition~(\ref{eq:spinn-z2p}), we have \begin{equation} \left[ {\rm WZW}( {\rm Spin}(n) ) / {\mathbb Z}_{2p} \right] \: = \: \coprod_{\theta \in \hat{\mathbb Z}_p} {\rm WZW}( SO(n) )_{\theta}. \end{equation} For the case $p=2$, for the same reasons as noted in section~\ref{sect:ex:su2-bz4}, on oriented spaces the discrete theta angles are trivial, as the characteristic class they couple to vanishes. As a result, on oriented spaces, for $p=2$ we can equivalently write \begin{equation} \left[ {\rm WZW}( {\rm Spin}(n) ) / {\mathbb Z}_4 \right] \: = \: \coprod_2 {\rm WZW}( SO(n) ). \end{equation} This is consistent with the prediction of decomposition in two dimensions in this case.
As reviewed in section~\ref{sect:review-wzw}, essentially because there is no discrete torsion in a ${\mathbb Z}_2$ orbifold, in a two-dimensional WZW orbifold by ${\mathbb Z}_{2p}$ with trivially-acting ${\mathbb Z}_p$, we have \begin{equation} \left[ {\rm WZW}( {\rm Spin}(n) ) / {\mathbb Z}_{2p} \right] \: = \: \coprod_p {\rm WZW}( SO(n) ). \end{equation} For $p=2$ this is certainly consistent with the bulk description. So far we have discussed the case that the ${\mathbb Z}_{2p}$ maps to ${\mathbb Z}_2 \subset {\rm Spin}(n)$ such that \begin{equation} {\rm Spin}(n)/{\mathbb Z}_2 \: = \: SO(n). \end{equation} In the case that $n$ is divisible by four, there is another choice of ${\mathbb Z}_2$ subgroup of the center of Spin$(n)$, which leads to a quotient \begin{equation} {\rm Spin}(n)/{\mathbb Z}_2 \: \neq \: SO(n), \end{equation} which for example projects out the vector representation. (See e.g.~\cite{Witten:1997bs} for a discussion in a different context.) This second quotient group is sometimes denoted Semi-spin$(n)$, abbreviated $Ss(n)$ (see e.g.~\cite[section 11]{borel}). Relevant material on the cohomology of $Ss(n)$ can be found in e.g.~\cite[section 9]{bb}. \subsection{Chern-Simons$({\rm Spin}(4n+2)) / B {\mathbb Z}_{4p}$, $K = {\mathbb Z}_{p}$} Let us consider the case of a Chern-Simons theory with gauge group Spin$(4n+2)$ and a gauged $B {\mathbb Z}_{4p}$, where the ${\mathbb Z}_{4p}$ maps to the center (${\mathbb Z}_4$) of Spin$(4n+2)$, with kernel $K = {\mathbb Z}_p$. In terms of the decomposition prediction~(\ref{eq:decomp-predict}), we take $A = {\mathbb Z}_{4p}$, $H = {\rm Spin}(4n+2)$, and $d: A \rightarrow H$ projects ${\mathbb Z}_{4p}$ onto the central ${\mathbb Z}_4 \subset {\rm Spin}(4n+2)$. The kernel of $d$ is $K = {\mathbb Z}_p$, $G = H / {\rm im}\, d = SO(4n+2)/{\mathbb Z}_2$, and we have the exact sequence \begin{equation} 1 \: \longrightarrow \: {\mathbb Z}_p \: \longrightarrow \: {\mathbb Z}_{4p} \: \longrightarrow \: {\rm Spin}(4n+2) \: \longrightarrow \: SO(4n+2)/{\mathbb Z}_2 \: \longrightarrow \: 1. \end{equation} Decomposition then predicts~(\ref{eq:decomp-predict}) \begin{equation} \label{eq:spin4np2:bz4p} \left[ \mbox{Chern-Simons}( {\rm Spin}(4n+2) ) / B {\mathbb Z}_{4p} \right] \: = \: \coprod_{\theta \in \hat{\mathbb Z}_p} \mbox{Chern-Simons}( SO(4n+2)/ {\mathbb Z}_2 )_{\theta(\omega)}, \end{equation} where the discrete theta angle couples to a characteristic class $\beta_{\alpha}(w_{{\rm Spin}(4n+2)/{\mathbb Z}_4})$ for $\beta_{\alpha}$ the Bockstein map associated to the short exact sequence \begin{equation} 1 \: \longrightarrow \: {\mathbb Z}_p \: \longrightarrow \: {\mathbb Z}_{4p} \: \longrightarrow \: {\mathbb Z}_4 \: \longrightarrow \: 1 \end{equation} of extension class $\alpha \in H^2_{\rm group}( {\mathbb Z}_4, {\mathbb Z}_p)$. Consider for example the case $p=2$. From \cite[lemma 8.1]{bb}, $SO(4n+2)/{\mathbb Z}_2$ has one characteristic class in $H^3( B SO(4n+2)/{\mathbb Z}_2, {\mathbb Z}_2 )$, related to $w_3$ of a covering $SO(4n+2)$ bundle. In the boundary WZW model, the decomposition~(\ref{eq:spin4np2:bz4p}) predicts \begin{equation} \left[ {\rm WZW}({\rm Spin}(4n+2)) / {\mathbb Z}_{4p} \right] \: = \: \coprod_{\theta \in \hat{\mathbb Z}_p} {\rm WZW}( SO(4n+2)/{\mathbb Z}_2)_{\theta}.
\end{equation} Ordinary two-dimensional decomposition predicts in this case that \begin{equation} \left[ {\rm WZW}({\rm Spin}(4n+2)) / {\mathbb Z}_{4p} \right] \: = \: \coprod_p {\rm WZW}( SO(4n+2)/{\mathbb Z}_2), \end{equation} essentially because there is no discrete torsion in a ${\mathbb Z}_4$ orbifold. \subsection{Chern-Simons$({\rm Spin}(4n)) / B ({\mathbb Z}_2 \times {\mathbb Z}_{2p})$, $K = {\mathbb Z}_p$} \label{sect:ex:cs-spin-4n} Next, we consider the case of a $B ({\mathbb Z}_2 \times {\mathbb Z}_{2p})$ action on a Spin$(4n)$ Chern-Simons theory. Here, Spin$(4n)$ has center ${\mathbb Z}_2 \times {\mathbb Z}_2$, and the ${\mathbb Z}_2 \times {\mathbb Z}_{2p}$ acts by first mapping to the center. In terms of the decomposition prediction~(\ref{eq:decomp-predict}), we take $A = {\mathbb Z}_2 \times {\mathbb Z}_{2p}$, $H = {\rm Spin}(4n)$, $d: A \rightarrow H$ maps $A$ onto the center, $K = {\rm Ker}\, d = {\mathbb Z}_p$, hence we predict \begin{equation} \label{eq:spin4n:z2-z2p} \left[ \mbox{Chern-Simons}( {\rm Spin}(4n) ) / B ( {\mathbb Z}_2 \times {\mathbb Z}_{2p}) \right] \: = \: \coprod_{\theta \in \hat{\mathbb Z}_p} \mbox{Chern-Simons}( SO(4n) / {\mathbb Z}_2 )_{\theta}, \end{equation} where the discrete theta angle couples to $\beta_{\alpha}(w_{{\rm Spin}(4n)/{\mathbb Z}_2 \times {\mathbb Z}_2})$, for $\beta_{\alpha}$ the Bockstein map associated to the short exact sequence \begin{equation} 1 \: \longrightarrow \: {\mathbb Z}_p \: \longrightarrow \: {\mathbb Z}_2 \times {\mathbb Z}_{2p} \: \longrightarrow \: {\mathbb Z}_2 \times {\mathbb Z}_2 \: \longrightarrow \: 1 \end{equation} of extension class $\alpha \in H^2_{\rm group}( {\mathbb Z}_2 \times {\mathbb Z}_2, {\mathbb Z}_p)$. Consider for example $p=2$. From \cite[lemma 8.1]{bb}, $SO(4n)/{\mathbb Z}_2$ has one characteristic class in $H^3( B SO(4n)/{\mathbb Z}_2, {\mathbb Z}_2)$, related to $w_3$ of a covering $SO(4n)$ bundle. Now, let us consider this in the boundary WZW model. The bulk decomposition~(\ref{eq:spin4n:z2-z2p}) predicts that \begin{equation} \label{eq:wzw:spin4n-z2z2p} \left[ {\rm WZW}( {\rm Spin}(4n) ) / ( {\mathbb Z}_2 \times {\mathbb Z}_{2p} ) \right] \: = \: \coprod_{\theta \in \hat{\mathbb Z}_p} {\rm WZW}( SO(4n) / {\mathbb Z}_2 )_{\theta}, \end{equation} where, as discussed in section~\ref{sect:boundary-wzw}, the boundary discrete theta angles $\theta$ correspond to choices of discrete torsion, here in a $G = {\mathbb Z}_2 \times {\mathbb Z}_2$ orbifold. We can understand those boundary discrete theta angles more precisely by comparing to the predictions of two-dimensional decomposition. We have a $\Gamma = {\mathbb Z}_2 \times {\mathbb Z}_{2p}$ orbifold, with trivially-acting $K = {\mathbb Z}_p$, and $G = \Gamma/K = {\mathbb Z}_2 \times {\mathbb Z}_2$. In principle, a $G$ orbifold can contain discrete torsion, since $H^2({\mathbb Z}_2 \times {\mathbb Z}_2, U(1)) = {\mathbb Z}_2$, so we should compute to see if we get nontrivial discrete torsion in any of the universes. Any such discrete torsion is the image of the extension class in $H^2(G,K)$ corresponding to \begin{equation} 1 \: \longrightarrow \: K \: \longrightarrow \: \Gamma \: \longrightarrow \: G \: \longrightarrow \: 1 \end{equation} under the map $K \rightarrow U(1)$ defined by the representation of $K$ corresponding to that universe, and the extension class is nontrivial; nevertheless, as discussed in \cite[section 6.1]{Robbins:2020msp}, its image in $H^2(G,U(1))$ is trivial for every irreducible representation of $K$.
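One can make the last statement concrete with a short computation. Choosing the obvious set-theoretic section of $\Gamma = {\mathbb Z}_2 \times {\mathbb Z}_{2p} \rightarrow G = {\mathbb Z}_2 \times {\mathbb Z}_2$, the resulting $K$-valued 2-cocycle is symmetric, so the commutator pairing that detects the class in $H^2(G, U(1)) = {\mathbb Z}_2$ is trivial for every character of $K$. The following minimal Python sketch (our own illustration, with the cocycle computed from that choice of section) verifies this numerically:
\begin{verbatim}
import cmath

p = 2   # kernel K = Z_p; the check works identically for any p >= 2

# G = Z_2 x Z_2, with elements (a, b), a, b in {0, 1}
G = [(a, b) for a in range(2) for b in range(2)]

def omega(g1, g2):
    # K-valued 2-cocycle of 1 -> Z_p -> Z_2 x Z_2p -> Z_2 x Z_2 -> 1,
    # from the section (a, b) |-> (a, b): the only carry occurs in the
    # second factor, when b1 = b2 = 1
    return 1 if g1[1] == 1 and g2[1] == 1 else 0

for r in range(p):   # irreducible characters chi_r of K = Z_p
    def chi(c, r=r):
        return cmath.exp(2 * cmath.pi * 1j * r * c / p)
    # the commutator pairing is identically 1 exactly when the image of
    # the extension class in H^2(G, U(1)) -- the discrete torsion -- is trivial
    assert all(abs(chi(omega(g, h)) / chi(omega(h, g)) - 1) < 1e-12
               for g in G for h in G)
print("discrete torsion trivial for every character of K")
\end{verbatim}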
As a result, two-dimensional decomposition predicts \begin{equation} \left[ {\rm WZW}( {\rm Spin}(4n) ) / ( {\mathbb Z}_2 \times {\mathbb Z}_{2p} ) \right] \: = \: \coprod_p {\rm WZW}( SO(4n) / {\mathbb Z}_2 ). \end{equation} In particular, the boundary discrete theta angles vanish. In passing, we should observe that this is a nontrivial constraint. The two choices of discrete torsion in the WZW model for Spin$(4n)/{\mathbb Z}_2 \times {\mathbb Z}_2$ correspond to two distinct quantum theories, each of which can be described as the WZW model for $SO(4n)$, see e.g.~\cite{Felder:1988sd,Gawedzki:2003pm,Gaberdiel:1995yk,Runkel:2008gr,Gawedzki:2009jj}. Furthermore, in two dimensions, certainly there exist examples in which both choices of discrete torsion appear. For example, only slightly generalizing results in \cite{Hellerman:2006zs}, \begin{eqnarray} \left[ {\rm WZW}( {\rm Spin}(4n) ) / D_4 \right] & = & {\rm WZW}( SO(4n) / {\mathbb Z}_2 )_+ \: \coprod \: {\rm WZW}( SO(4n) / {\mathbb Z}_2 )_-, \\ \left[ {\rm WZW}( {\rm Spin}(4n) ) / {\mathbb H} \right] & = & {\rm WZW}( SO(4n) / {\mathbb Z}_2 )_+ \: \coprod \: {\rm WZW}( SO(4n) / {\mathbb Z}_2 )_-, \end{eqnarray} where in both $D_4$ and ${\mathbb H}$ the ${\mathbb Z}_2$ center is taken to act trivially, and the $\pm$ indicate the two choices of discrete torsion. However, because both the dihedral group $D_4$ and the eight-element quaternion group ${\mathbb H}$ are nonabelian, there is no Chern-Simons version of the decompositions above. That is fortuitous, as of the two $SO(4n)/{\mathbb Z}_2$ WZW models, the one with nonzero discrete torsion also does not have a Chern-Simons dual \cite{Gawedzki:2009jj,waldorf-priv}. More generally, in order to get a two-dimensional decomposition of $[$WZW$({\rm Spin}(4n))/\Gamma]$ to copies of WZW$(SO(4n)/{\mathbb Z}_2)$ with nontrivial discrete torsion, it is straightforward to check that $\Gamma$ must be nonabelian, and so does not admit a Chern-Simons description. \subsection{Chern-Simons$(Sp(n))/B{\mathbb Z}_{2p}$, $K = {\mathbb Z}_p$} Next, consider the case of a Chern-Simons theory with gauge group $Sp(n)$ and a gauged $B {\mathbb Z}_{2p}$, where the ${\mathbb Z}_{2p}$ maps to the center (${\mathbb Z}_2$) of $Sp(n)$. In terms of the decomposition prediction~(\ref{eq:decomp-predict}), we take $A = {\mathbb Z}_{2p}$, $H = Sp(n)$, and $d: A \rightarrow H$ projects ${\mathbb Z}_{2p}$ onto the central ${\mathbb Z}_2 \subset Sp(n)$, with $K = {\rm Ker}\, d = {\mathbb Z}_p$. Decomposition then predicts~(\ref{eq:decomp-predict}) \begin{equation} \label{eq:spn:z2p} \left[ \mbox{Chern-Simons}( Sp(n) ) / B {\mathbb Z}_{2p} \right] \: = \: \coprod_{\theta \in \hat{\mathbb Z}_p} \mbox{Chern-Simons}(Sp(n) / {\mathbb Z}_2)_{\theta}, \end{equation} where the discrete theta angle couples to a characteristic class $\beta_{\alpha}(w_{Sp(n)/{\mathbb Z}_2})$ for $\beta_{\alpha}$ the Bockstein map associated to the short exact sequence \begin{equation} 1 \: \longrightarrow \: {\mathbb Z}_p \: \longrightarrow \: {\mathbb Z}_{2p} \: \longrightarrow \: {\mathbb Z}_2 \: \longrightarrow \: 1 \end{equation} of extension class $\alpha \in H^2_{\rm group}({\mathbb Z}_2,{\mathbb Z}_p)$. See e.g.~\cite[section 8]{bb} for results on pertinent characteristic classes. In the boundary WZW model, the bulk decomposition~(\ref{eq:spn:z2p}) predicts \begin{equation} \left[ {\rm WZW}(Sp(n)) / {\mathbb Z}_{2p} \right] \: = \: \coprod_{\theta \in \hat{\mathbb Z}_p} {\rm WZW}( Sp(n)/{\mathbb Z}_2 )_{\theta}.
\end{equation} Because there is no discrete torsion in a ${\mathbb Z}_2$ orbifold, two-dimensional decomposition predicts in this case that \begin{equation} \left[ {\rm WZW}(Sp(n)) / {\mathbb Z}_{2p} \right] \: = \: \coprod_p {\rm WZW}( Sp(n)/{\mathbb Z}_2 ). \end{equation} \subsection{Chern-Simons$(U(1))_k / B {\mathbb Z}_{\ell p}$, $K = {\mathbb Z}_p$} \label{sect:ex:u1-bzkn} Consider a $U(1)_k$ Chern-Simons theory in three dimensions. This theory has a global $B {\mathbb Z}_k$ symmetry which can be gauged (see e.g.~\cite{Kreuzer:1993tf,Fuchs:1996dd}, \cite[appendix C]{Hsin:2018vcg}). It has slightly different properties depending upon whether $k$ is even or odd (see e.g.~\cite[section 2.2]{Benini:2022hzx}): \begin{itemize} \item When $k$ is even, this theory has $k$ line operators, labelled by elements of ${\mathbb Z}_k$. If $k$ is 0 mod 8, then the $B {\mathbb Z}_k$ one-form symmetry generator has integer spin. If $k$ is 2 mod 8, then the one-form symmetry generator has spin 1/4, and if $k$ is 4 mod 8, then the one-form symmetry generator has spin 1/2. \item When $k$ is odd, the theory has $2k$ lines labelled by elements of ${\mathbb Z}_{2k}$ and is moreover a spin TQFT. The line with the label $k$ is the transparent fermion. \end{itemize} Now, consider gauging a $B {\mathbb Z}_{\ell p}$, where $\ell$ divides $k$ and the ${\mathbb Z}_{\ell p}$ projects to ${\mathbb Z}_{\ell} \subset {\mathbb Z}_k$, for that $B {\mathbb Z}_k$ above, with kernel $B {\mathbb Z}_p$. Let us apply the decomposition prediction~(\ref{eq:decomp-predict}) to this case. In the language of~(\ref{eq:decomp-predict}), $A = {\mathbb Z}_{\ell p}$ and $H = U(1)$. Here, the map $d: A \rightarrow H$ is given by projecting $A = {\mathbb Z}_{\ell p}$ to a ${\mathbb Z}_{\ell} \subset {\mathbb Z}_k \subset U(1)$, and so it has kernel $K = {\mathbb Z}_p$. Furthermore, \begin{equation} G \: = \: H/{\rm im}\, d \: = \: U(1) / {\mathbb Z}_{\ell} \: = \: U(1). \end{equation} In this case, $BU(1) = {\mathbb C}{\mathbb P}^{\infty}$ has no odd degree cohomology, so there cannot be any discrete theta angles. Thus, the decomposition prediction~(\ref{eq:decomp-predict}) for this case is that \begin{equation} \label{eq:ex:u1} \left[ \mbox{Chern-Simons}(U(1)_k) / B {\mathbb Z}_{\ell p} \right] \: = \: \coprod_{p} \left[ \mbox{Chern-Simons}(U(1)_k)/B {\mathbb Z}_{\ell} \right], \end{equation} a sum of $p$ theories (consistent with a trivially-acting $B {\mathbb Z}_p$) with no discrete theta angles. In particular, note that the right-hand side is a sum of $U(1)_k/B {\mathbb Z}_{\ell}$ Chern-Simons theories, which is not necessarily the same as a union of $U(1)_k$ Chern-Simons theories. Although as groups $U(1)/{\mathbb Z}_k = U(1)$, gauging a Chern-Simons theory by a one-form symmetry is a bit different. For example, $U(1)_{4m}/B {\mathbb Z}_2 = U(1)_m$, from \cite[section C.1]{Seiberg:2016rsg}. (On the boundary, one has a $U(1)$ WZW model, meaning a sigma model on $S^1$, with radius determined by the level. Gauging the $B {\mathbb Z}_k$ in bulk becomes gauging a ${\mathbb Z}_k$ rotation in the boundary theory, which changes the radius and hence the level.) We can use level-rank duality to perform a consistency test. Begin with the decomposition described in section~\ref{sect:ex:su2-bz4} at level 1, namely, \begin{equation} \left[ \mbox{Chern-Simons}(SU(2)_1) / B {\mathbb Z}_4 \right] \: = \: \mbox{Chern-Simons}(SO(3)_1)_+ \: \coprod \: \mbox{Chern-Simons}(SO(3)_1)_-.
\end{equation} Here we have kept track of the discrete theta angle; however, since we only consider Chern-Simons theories on orientable manifolds, no discrete theta angle is visible, and the prediction of section~\ref{sect:ex:su2-bz4} in this case is more simply \begin{equation} \left[ \mbox{Chern-Simons}(SU(2)_1) / B {\mathbb Z}_4 \right] \: = \: \coprod_2 \mbox{Chern-Simons}(SO(3)_1). \end{equation} From level-rank duality, we know \cite[sections 3.1, 3.2]{Hsin:2016blu} \begin{equation} U(1)_2 \: = \: U(1)_{-2} \: \leftrightarrow \: SU(2)_1, \end{equation} so we have that \begin{equation} \left[ \mbox{Chern-Simons}(U(1)_2) / B {\mathbb Z}_2 \right] \: = \: \left[ SU(2)_1 / B {\mathbb Z}_2 \right] \: = \: \mbox{Chern-Simons}(SO(3)_1). \end{equation} Thus, we see from level-rank duality that our decomposition in section~\ref{sect:ex:su2-bz4} implies \begin{equation} \left[ \mbox{Chern-Simons}(U(1)_2) / B{\mathbb Z}_4 \right] \: = \: \coprod_2 \left[ \mbox{Chern-Simons}( U(1)_2) / B {\mathbb Z}_2 \right], \end{equation} which is a special case of the result~(\ref{eq:ex:u1}), confirming in this case that the decomposition prediction~(\ref{eq:decomp-predict}) is giving results compatible with this example of level-rank duality. Next, we compute the spectrum of line operators in $U(1)_8 / B {\mathbb Z}_{2p}$, using the methods of section~\ref{sect:lineops}, where in the gauging, the ${\mathbb Z}_{2p}$ projects to ${\mathbb Z}_2$ with trivially-acting ${\mathbb Z}_p$. We describe the ${\mathbb Z}_{2p}$ by a set of lines $\{ \ell_i \}$, $i \in \{0, \cdots, 2p-1\}$, where \begin{equation} \ell_i \times \ell_j \: = \: \ell_{i+j \mod 2p}. \end{equation} $U(1)_8$ has eight lines, labelled \begin{equation} (0), \: (1), \: (2), \: (3), \: (4), \: (5), \: (6), \: (7) \end{equation} whose properties are listed in appendix~\ref{app:lineops}, and for which $\{(0), (4)\}$ encode a $B {\mathbb Z}_2$. The action of $B {\mathbb Z}_{2p}$ on the lines of $U(1)_8$ is given as follows: \begin{equation} B(\ell_{\rm even}, L) \: = \: +1, \: \: \: B(\ell_{\rm odd}, L) \: = \: B(4,L), \end{equation} \begin{equation} \ell_{\rm even} \times L \: = \: L, \: \: \: \ell_{\rm odd} \times L \: = \: (4) \times L, \end{equation} using the monodromies and fusion algebra described in appendix~\ref{app:lineops}. It is straightforward to check that this gives a well-defined action in the sense of section~\ref{sect:lineops}. Next, we compute the spectrum of lines in $U(1)_8 / B {\mathbb Z}_{2p}$, following the procedure of section~\ref{sect:lineops}. \begin{itemize} \item The lines (1), (3), (5), (7) have $B(\ell_{\rm odd}, L) = -1 \neq +1$, and so are excluded. \item $\ell_1 \times (0) = (4)$, $\ell_1 \times (2) = (6)$, so we identify $(0) \sim (4)$, $(2) \sim (6)$. \item $\ell_{\rm even} \times L = L$, so we get $p$ copies of $(0) \sim (4)$ and $(2) \sim (6)$. \end{itemize} Thus, the resulting spectrum is $p$ copies of $\{ (0) \sim (4), (2) \sim (6) \}$, which is the same as $p$ copies of the line operator spectrum of $U(1)_8 / B {\mathbb Z}_2$, as expected from decomposition, since there is a trivially-acting ${\mathbb Z}_p$ (see also the sketch below). Next, let us compare to boundary WZW models. A (boundary) WZW model for the group $U(1)$ is the same as a $c=1$ free scalar, of radius determined by the level. (See e.g.~\cite[appendix C.1]{Seiberg:2016rsg} for discussions of the RCFTs arising at particular values of the level.)
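Before completing the boundary comparison, we pause to automate the bulk counting just performed, in the style of the earlier sketches. The following minimal Python illustration (our own, encoding the fusion data quoted above, with $B((4),L) = (-1)^L$ consistent with the exclusions noted above) reproduces the spectrum for arbitrary $p$:
\begin{verbatim}
p = 3                        # any p >= 1; Z_2p has trivially-acting Z_p
lines = list(range(8))       # lines of U(1)_8
G = list(range(2 * p))       # gauge lines ell_0, ..., ell_{2p-1}

# even gauge lines act trivially; odd gauge lines fuse with (4)
fuse = {g: {L: (L + 4 * (g % 2)) % 8 for L in lines} for g in G}
# B(ell_even, L) = +1 and B(ell_odd, L) = B((4), L) = (-1)^L
mono = {g: {L: (-1) ** (L * (g % 2)) for L in lines} for g in G}

kept = [L for L in lines if all(mono[g][L] == +1 for g in G)]
orbits = sorted({tuple(sorted({fuse[g][L] for g in G})) for L in kept})
for orbit in orbits:
    copies = sum(1 for g in G if fuse[g][orbit[0]] == orbit[0])
    print("orbit", list(orbit), "-> copies:", copies)
\end{verbatim}
For any $p$ this prints $p$ copies each of the orbits $\{(0), (4)\}$ and $\{(2), (6)\}$, as found above.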
Gauging the bulk one-form symmetry corresponds to orbifolding the boundary $c=1$ theory, which just changes the radius of the target-space circle. In a two-dimensional sigma model with target $S^1$, if we orbifold by a ${\mathbb Z}_{kp}$ where ${\mathbb Z}_p \subset {\mathbb Z}_{kp}$ acts trivially, then from two-dimensional decomposition, the resulting theory is equivalent to $p$ copies of the effectively-acting ${\mathbb Z}_k$ orbifold, precisely matching~(\ref{eq:ex:u1}), as expected. \subsection{Exceptional groups} So far we have discussed quotients of Chern-Simons theories for the gauge groups $SU(n)$, Spin$(n)$, and $Sp(n)$. We can also consider cases with exceptional gauge groups. Although $G_2$, $F_4$, and $E_8$ have no center, the group $E_6$ has center ${\mathbb Z}_3$, and $E_7$ has center ${\mathbb Z}_2$ (see e.g.~\cite[appendix A]{Distler:2007av}). For example, applying decomposition~(\ref{eq:decomp-predict}), for a ${\mathbb Z}_{3p}$ that acts on $E_6$ by projecting to the ${\mathbb Z}_3$ center with kernel ${\mathbb Z}_p$, \begin{equation} \left[ \mbox{Chern-Simons}(E_6)/B {\mathbb Z}_{3p} \right] \: = \: \coprod_{\theta \in \hat{\mathbb Z}_p} \mbox{Chern-Simons}( E_6/{\mathbb Z}_3 )_{\theta}, \end{equation} where the discrete theta angle couples to $\beta_{\alpha}(w_{E_6/{\mathbb Z}_3})$, for $\beta_{\alpha}$ the Bockstein map associated to the short exact sequence \begin{equation} 1 \: \longrightarrow \: {\mathbb Z}_p \: \longrightarrow \: {\mathbb Z}_{3p} \: \longrightarrow \: {\mathbb Z}_{3} \: \longrightarrow \: 1 \end{equation} of extension class $\alpha \in H^2_{\rm group}( {\mathbb Z}_3, {\mathbb Z}_p )$. Similarly, from decomposition~(\ref{eq:decomp-predict}), for a ${\mathbb Z}_{2p}$ that acts on $E_7$ by projecting to the ${\mathbb Z}_2$ center with kernel ${\mathbb Z}_p$, \begin{equation} \left[ \mbox{Chern-Simons}(E_7)/B {\mathbb Z}_{2p} \right] \: = \: \coprod_{\theta \in \hat{\mathbb Z}_p} \mbox{Chern-Simons}( E_7/{\mathbb Z}_2 )_{\theta}, \end{equation} where the discrete theta angle couples to $\beta_{\alpha}(w_{E_7/{\mathbb Z}_2})$, for $\beta_{\alpha}$ the Bockstein map associated to the short exact sequence \begin{equation} 1 \: \longrightarrow \: {\mathbb Z}_p \: \longrightarrow \: {\mathbb Z}_{2p} \: \longrightarrow \: {\mathbb Z}_{2} \: \longrightarrow \: 1 \end{equation} of extension class $\alpha \in H^2_{\rm group}( {\mathbb Z}_2, {\mathbb Z}_p )$. In both cases, in the boundary WZW model, this reduces to two-dimensional decomposition of a WZW orbifold, with the discrete theta angles becoming choices of discrete torsion. In both cases, as the orbifolds involve cyclic groups, discrete torsion is trivial, so the boundary decomposition yields just a disjoint union of copies of the same WZW orbifold. \subsection{Chern-Simons$(H_1 \times H_2) / BA$} For completeness, let us also briefly discuss decomposition in gauged Chern-Simons theories whose gauge groups are a product of Lie groups. Specifically, consider gauging a $BA$ action, for $A$ finite and abelian, on a Chern-Simons theory for $H_1 \times H_2$ (at various levels, such that the gauge theory is well-defined on the given three-manifold).
Bulk decomposition takes the same form as~(\ref{eq:decomp-predict}): \begin{equation} \left[ {\rm Chern-Simons}(H_1 \times H_2) / BA \right] \: = \: \coprod_{\theta \in \hat{K}} {\rm Chern-Simons}(G)_{\theta}, \end{equation} where \begin{equation} 1 \: \longrightarrow \: K \: \longrightarrow \: A \: \stackrel{d}{\longrightarrow} \: H_1 \times H_2 \: \longrightarrow \: G \: \longrightarrow \: 1, \end{equation} and the discrete theta angle couples to $\beta_{\alpha}(w_G)$, for $\beta_{\alpha}$ the Bockstein homomorphism associated to \begin{equation} 1 \: \longrightarrow \: K \: \longrightarrow \: A \: \longrightarrow \: Z \: \longrightarrow \: 1, \end{equation} classified by $\alpha \in H^2_{\rm group}(Z, K)$, where $Z$ is a subgroup of the product of the centers of $H_{1,2}$, given by the image of $d$. On the boundary, as before, this reduces to decomposition in the two-dimensional theory, here \begin{equation} \left[ {\rm WZW}(H_1 \times H_2) / A \right] \: = \: \coprod_{\theta \in \hat{K}} {\rm WZW}(G)_{\theta}, \end{equation} where the discrete theta angles $\theta$ now correspond to choices of discrete torsion in a \begin{equation} [{\rm WZW}(H_1 \times H_2) / Z] \end{equation} orbifold. Essentially because $A$ is abelian, for ultimately the same reasons as in section~\ref{sect:ex:cs-spin-4n}, the discrete torsion is trivial on each universe. \subsection{Finite 2-group orbifolds} So far we have focused on Chern-Simons theories in three dimensions, but the same ideas apply to the finite 2-group orbifolds discussed in \cite{Pantev:2022kpl}. There, orbifolds by 2-groups $\Gamma$ were described, where $\Gamma$ is an extension \begin{equation} 1 \: \longrightarrow \: BK \: \longrightarrow \: \Gamma \: \longrightarrow \: G \: \longrightarrow \: 1, \end{equation} where $G$, $K$ are both finite and $K$ is abelian, determined by $[\omega] \in H^3_{\rm group}(G,K)$. Now, $\Gamma$ can also be described by a crossed module $\{d: A \rightarrow H\}$, corresponding to a four-term exact sequence of ordinary groups \begin{equation} 1 \: \longrightarrow \: K \: \longrightarrow \: A \: \stackrel{d}{\longrightarrow} \: H \: \longrightarrow \: G \: \longrightarrow \: 1, \end{equation} also determined (up to equivalences) by $[\omega] \in H^3_{\rm group}(G,K)$ (see e.g.~\cite[section IV.9]{hs} for related observations). In this language, we can write the 2-group orbifold $[X/\Gamma]$ in terms of the crossed module as \begin{equation} [X/\Gamma] \: = \: \left[ [X/H] / BA \right], \end{equation} at least for a presentation in which $A$ is abelian. For this slightly different physical realization in terms of finite groups, the statement of decomposition~(\ref{eq:decomp-predict}) is modified, but only slightly: \begin{equation} [X/\Gamma] \: = \: \left[ [X/H] / BA \right] \: = \: \oplus_{\theta \in \hat{K}} [X/G]_{\omega(\theta)}, \end{equation} where the discrete torsion (formerly discrete theta angle) $\omega(\theta)$ is defined by $\phi^* \omega$. In this sense, the decomposition described in this paper is simply a variation on the 2-group orbifold decomposition described in \cite{Pantev:2022kpl}. The fact that bulk discrete theta angles (here, $C$-field analogues of discrete torsion) become (ordinary) discrete torsion in the boundary theory was also observed in \cite[section 3.2]{Pantev:2022kpl}. In passing, we should also observe that decomposition in finite 2-group orbifolds can take a qualitatively different form.
For example, \cite[section 4.4]{Pantev:2022kpl} described an orbifold by a 2-group extension \begin{equation} \label{eq:2gp-ext} 1 \: \longrightarrow \: B {\mathbb Z}_2 \: \longrightarrow \: \Gamma \: \longrightarrow \: ( {\mathbb Z}_2 )^3 \: \longrightarrow \: 1. \end{equation} In this case, $[X/\Gamma]$ is equivalent to a pair of copies of $[X/ ({\mathbb Z}_2)^3]$ orbifolds, each with a different $C$ field discrete torsion in $H^3_{\rm group}( ({\mathbb Z}_2)^3, U(1))$, which is nontrivial even on $T^3$. One could imagine an analogous theory here, such as a quotient of $SU(2)^3$ Chern-Simons by $BA$ (for $A$ a finite abelian group, with $K = {\mathbb Z}_2$ kernel, say) that leads to a disjoint union of $SO(3)^3$ Chern-Simons theories. Here, however, in the case of Chern-Simons theories, no analogue of $C$ field discrete torsion is present for $T^3$, partly because (as noted in section~\ref{sect:nontriv}) the pertinent Bockstein homomorphism vanishes. Part of the difference between these two theories is that in the Chern-Simons case, the pertinent exact sequence of finite groups has the form \begin{equation} \label{eq:su23-se} 1 \: \longrightarrow \: {\mathbb Z}_2 \: \longrightarrow \: A \: \longrightarrow \: ( {\mathbb Z}_2 )^3 \: \longrightarrow \: 1, \end{equation} whereas the analogous sequence in~\cite{Pantev:2022kpl}, namely~(\ref{eq:2gp-ext}), can be alternately encoded as a four-term sequence \begin{equation} 1 \: \longrightarrow \: {\mathbb Z}_2 \: \longrightarrow \: P' \: \longrightarrow \: Q' \: \longrightarrow \: ( {\mathbb Z}_2 )^3 \: \longrightarrow \: 1, \end{equation} which realizes an element of $H^3_{\rm group}( ({\mathbb Z}_2)^3, {\mathbb Z}_2)$. By contrast, the short exact sequence~(\ref{eq:su23-se}) realizes an element of $H^2_{\rm group}( ({\mathbb Z}_2)^3, {\mathbb Z}_2)$, cohomology of different degree; the crossed module construction realizes a 2-group, but involves different groups. \section{Boundary $G/G$ models} \label{sect:boundary-g-g} For completeness, in this section we include a different example of a decomposition. Consider gauged WZW models $G/H$ at level $k$, on the boundary of a three-dimensional theory. Because the $H$ action being gauged is an adjoint action \cite{Witten:1991mk}, if the center $Z(H)$ of $H$ is nontrivial, it acts trivially, and in two dimensions, the resulting gauged WZW model decomposes into universes indexed by irreducible representations of $Z(H)$. Now, let us compare to the bulk theory. From \cite[section 3]{Moore:1989yh}, for the gauged WZW model $G/H$ at level $k$, the bulk three-dimensional theory is a $(G \times H)/Z$ gauge theory, with $Z$ the common center of $G$ and $H$, with action \begin{equation} \label{eq:bulk-gauged-wzw} k \ell S_{\rm CS}(G) \: - \: k S_{\rm CS}(H), \end{equation} where $\ell$ is the index of the embedding $H \hookrightarrow G$. Consider the special case of the two-dimensional $G/G$ model, on the boundary of a three-dimensional theory. The $G/G$ model decomposes into universes indexed by the integrable representations. (In principle this is because it is a unitary topological field theory \cite{Durhuus:1993cq,Moore:2006dw}; the specific relation to decomposition is via noninvertible symmetries, as discussed in \cite{Komargodski:2020mxz,Huang:2021zvu}.) From the discussion above, the bulk dual to the boundary $G/G$ model appears to have an identically-zero action~(\ref{eq:bulk-gauged-wzw}). Since the boundary theory is a topological field theory, this would be trivially consistent.
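Concretely, for $H = G$ the embedding index is $\ell = 1$, so the action~(\ref{eq:bulk-gauged-wzw}) reduces to \begin{equation} k \, S_{\rm CS}(G) \: - \: k \, S_{\rm CS}(G) \: = \: 0. \end{equation}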
For more general boundary $G/H$ gauged WZW models, the bulk action~(\ref{eq:bulk-gauged-wzw}) does not vanish identically. Decomposition of the boundary suggests that the bulk may also decompose, in which case the bulk theory should admit a global two-form symmetry. We leave elucidating that symmetry for future work. \section{Conclusions} In this paper, we have discussed decomposition in three-dimensional Chern-Simons theories with gauged noneffectively-acting one-form symmetries. In the bulk, the different universes of the decomposition have discrete theta angles coupling to bundle characteristic classes, specifically, images under Bockstein maps of canonical degree-two characteristic classes. On the boundary, those map to choices of discrete torsion, and the bulk decomposition becomes a standard orbifold decomposition, involving WZW models, which serves as a strong consistency test. There are many directions in which this work could be taken. One example would be to consider decomposition in gauged Chern-Simons theories in which the original theory has a discrete theta angle, analogous to decomposition in two-dimensional orbifolds with discrete torsion \cite{Robbins:2020msp}. Another example would be to consider decomposition in Chern-Simons-matter theories, rather than pure Chern-Simons. Similarly, it would be interesting to consider decomposition in holomorphic Chern-Simons \cite{rtthesis}, or deformations of Chern-Simons theories, as arise when studying disk instanton corrections in string compactifications. It would also be interesting to understand dimensional reduction of decomposition to two dimensions. The dimensional reduction of pure Chern-Simons is the two-dimensional $G/G$ model (which as a unitary TFT already admits a decomposition \cite{Durhuus:1993cq,Moore:2006dw,Komargodski:2020mxz,Huang:2021zvu}), and the $BK$ symmetry in three dimensions should become a $K \times BK$ symmetry in the two-dimensional theory. In condensed matter physics, there exists a realization of Chern-Simons theories known as the Levin-Wen model \cite{Levin:2004mi}, and it would be interesting to consider this story in that setting. In a different direction, Chern-Simons theories can also arise on boundaries of four-dimensional theories, and it would be interesting to study decomposition in that context, perhaps relating it to the decomposition arising after instanton restriction in \cite{Tanizaki:2019rbk}. There, the instanton restriction resulted in a disjoint union of four-dimensional Yang-Mills theories with theta angle terms of the form \begin{equation} \frac{1}{8 \pi^2} \frac{2 \pi m}{k} \int {\rm Tr}\,F\wedge F, \end{equation} for $m \in \{0, 1, \cdots, k-1\}$, which implements the restriction on instantons. On a boundary, that would become a disjoint union of theories, whose actions have Chern-Simons terms of the form \begin{equation} \frac{1}{8 \pi^2} \frac{2 \pi m}{k} \int \omega_{\rm CS}, \end{equation} clearly related to the disjoint unions of Chern-Simons theories we discuss in this paper. We leave such considerations for future work. \section{Acknowledgements} We would like to thank C.~Closset, S.~Datta, J.~Distler, D.~Berwick~Evans, T.~Gomez, S.~Gukov, L.~Lin, D.~Robbins, I.~Runkel, U.~Schreiber, J.~Stasheff, Y.~Tachikawa, T.~Vandermeulen, and K.~Waldorf for useful discussions. We would further like to thank M.~Yu for initial collaboration and many discussions. T.P.
was partially supported by NSF/BSF grant DMS-2200914, NSF grant DMS-1901876, and Simons Collaboration grant number 347070. E.S. was partially supported by NSF grant PHY-2014086.
\section{INTRODUCTION} Recently, active matter has emerged as a vital area of research and attracted much attention in various fields of science \cite{bechinger2016active, Ramaswamy2017active, pietzonka2021oddity, magistris2015intro}. Active matter is a special class of nonequilibrium systems which is intrinsically driven far away from equilibrium. The particles in such a system are capable of propelling themselves through the environment. They consume energy from the environment by means of their internal mechanisms and generate a spontaneous flow in the system \cite{magistris2015intro,dombrowski2004self}. These particles are termed active or self-propelling particles. Examples of active matter include motile biological microorganisms like bacteria or unicellular protozoa \cite{berg1972chemo, machemer1972ciliary,corbyn2021stochastic}, artificially synthesized microswimmers like Janus particles \cite{walther2013janus, howse2007self}, microrobots, hexbugs \cite{scholz2018rotating}, etc. There exist standard models, such as the active Brownian particle (ABP) model, to treat the dynamics of such particles at both the single-particle and the collective level \cite{hagen2009NonGaussian,hagen2011brownian, cates2013when, kanaya2020steady, lowen2020inertial}. Recently, one of the simplest nontrivial models, known as the active Ornstein-Uhlenbeck particle (AOUP) model \cite{lehle2018analyzing, bonilla2019active, martin2021statistical}, has been proposed for modeling the overdamped dynamics of such self-propelled particles. In the ABP model, both the translational and rotational diffusion of the particles are taken into account, while in the AOUP model, the velocity of the particle follows the Ornstein-Uhlenbeck process. The AOUP model has been explored in detail in the literature as it makes exact analytical calculations possible~\cite{szamel2014self,sandford2017pressure,marini2017pressure,Das2018confined,wittman2018effective,caprini2018linear,caprini2020time}. Both these models are successful in explaining many important features of active matter, such as accumulation near boundaries~\cite{marini2015towards,Gompper2020roadmap}, motility-induced phase separation (MIPS)~\cite{cates2015motility}, and so on. Unfortunately, inertia, which is an important property of physical systems, was not initially considered in these models. For macroscopic or massive self-propelling particles moving in a gaseous or low-viscosity medium, inertial effects become prominent, and this poses new challenges in the theoretical modeling of such systems. Typically, millimeter-sized particles moving in a low-viscosity medium are strongly influenced by inertia. Macroscopic swimmers \cite{gazzola2014scaling,saadat2017rules,gazzola2015gait} and flying insects \cite{sane2003aerodynamics} are apt examples where inertia plays an important role in the dynamics, both at the single-particle and the collective level \cite{lowen2020inertial}. Hence, inertia needs to be introduced into both the AOUP and ABP models. Indeed, in some recent works, the introduction of inertia in these models could describe well the dynamics of active particles~\cite{caprini2021inertial, caprini2021spatial}. 
It is also reported that fine-tuning of inertia results in some qualitative modifications of the fundamental properties of active systems, such as the inertial delay between the orientation dynamics and the translational velocity of active particles \cite{scholz2018inertial}, the development of different dynamical states \cite{dauchot2019dynamics}, motility-induced phase separation \cite{mandal2019motility}, etc. The stochastic dynamics of a charged particle in the presence of a magnetic field is an interesting problem with potential applications in plasma physics, astrophysics, electromagnetic theory, etc. \cite{Singh1996stochastic, jayannavar1981orbital, saha2008nonequilibrium, aquino2009fluctuation, harko2016electro, lin2020seperation, jin2021collective}. According to the Bohr-van Leeuwen (BvL) theorem \cite{nielsen1972niels, van1921problemes, dattagupta1997landau, Sahoo2007charged}, there is no orbital magnetism for a classical system of charged particles in equilibrium. However, when an inertial system exhibits activity in the dynamics, it does not follow the well-known fluctuation-dissipation theorem \cite{kubo1966fluct} and is driven out of equilibrium. As a result, a nonzero orbital magnetism appears in the presence of a magnetic field, and the system passes through a magnetic phase transition depending on the complex interplay of the activity time and the other time scales involved in the dynamics \cite{kumar2012classical, muhsin2021orbital}. When a time-dependent magnetic field is applied to charged Brownian swimmers, it can either enhance or reduce the effective diffusion of the swimmers \cite{sandoval2016magnetic}. On the other hand, when a charged active Brownian particle is subjected to a space-dependent magnetic field, the field induces inhomogeneity and flux in the system \cite{vuijk2020lorentz}. Similarly, under stochastic resetting, an active system in the presence of a magnetic field yields some exotic steady-state behaviour \cite{abdoli2021stochastic}. Motivated by these recent findings, herein we explore the transport properties of a charged and inertial active Ornstein-Uhlenbeck particle in a viscous medium and under a static magnetic field. In particular, we show that inertia is necessary for the magnetic field to influence the dynamics. The Brownian dynamics of an inertial charged particle in a magnetic field driven by an exponentially correlated noise and by a colored Gaussian thermal noise is already discussed in Refs.~\cite{Karmeshu1974brownian, paraan2008brownian, lisy2013brownian, baura2013study, lisy2014effect} and Ref.~\cite{das2017fokker}, respectively. In these models, the dynamics is always mapped to its thermal equilibrium limit, where the generalized fluctuation-dissipation relation (GFDR) is satisfied. In our work, we treat the model as the dynamics of an active particle, which differs from the previously discussed models in the sense that the active fluctuations are athermal and hence cannot always be mapped to an equilibrium limit. However, in the equilibrium limit of our model, where the fluctuation-dissipation relation (FDR) is satisfied, and in the vanishing limit of the noise correlation time, some of our findings, especially the steady-state diffusion, show behaviour similar to that reported in Refs.~\cite{Karmeshu1974brownian,paraan2008brownian} for a free particle and in Refs.~\cite{lisy2013brownian,lisy2014effect} for a confined harmonic particle, respectively. We have organized the paper in the following way. In Sec. 
II, we present our model, the methodology adopted, and an introduction to the dynamical parameters of interest. The results and discussion are presented in Sec. III, followed by a summary in Sec. IV.

\section{MODEL AND METHOD} We consider a charged active Ornstein-Uhlenbeck particle of mass $m$ self-propelling in a two-dimensional (2D) plane. The particle is confined by a harmonic potential $U(x,y) = \frac{1}{2} k (x^2 + y^2)$ with $k$ as the harmonic constant. A magnetic field ${\bf B} = B {\bf \hat{z}}$ is applied perpendicular to the plane of motion of the particle, where ${\bf \hat{z}}$ is the unit vector along the Z-direction. The dynamics of the particle is given by the Langevin equation of motion \cite{arsha2021velocity, muhsin2021orbital, Sahoo2007charged} \begin{equation} m\ddot {\bf r}(t)=-\gamma {\bf v}(t) + \frac{|q|}{c}[{\bf v(t)}\times {\bf B}]-k{\bf{r}}(t)+\sqrt{2D}{\bf \xi}(t), \label{eq:model-vector} \end{equation} where ${\bf \ddot{r}}=\dot{\bf v}$ is the acceleration of the particle and $m\ddot{\bf r}$ is the inertial force in the dynamics. The first term on the right-hand side of Eq.~\eqref{eq:model-vector} is the viscous force on the particle due to its interaction with the surrounding medium, with $\gamma$ being the viscous coefficient of the medium. The second term represents the Lorentz force caused by the presence of the magnetic field \cite{maxwell1873treatise} and the third term is the force exerted by the harmonic confinement. ${\bf \xi}(t)$ is the noise term, which follows the Ornstein-Uhlenbeck process \begin{equation} t_c \dot{{\bf \xi}}(t) = -{\bf \xi}(t) + \eta(t), \label{eq:noise-model} \end{equation} with $\eta(t)$ being delta-correlated white noise. $D$ is the strength of the Ornstein-Uhlenbeck noise \cite{sevilla2019generalized, woillez2020nonlocal, Das2018confined}. Further, $\bf{\xi}(t)$ satisfies the following properties \begin{equation} \langle \xi_\alpha(t) \rangle = 0, \qquad \langle \xi_\alpha(t) \xi_\beta(t^\prime) \rangle = \frac{\delta_{\alpha\beta}}{2t_c}e^{\frac{-|t - t^\prime|}{t_c}}. \label{eq:noise-stat} \end{equation} Here, $t_{c}$ is the noise correlation time or persistence time of the dynamics and $(\alpha, \beta) \in (X, Y)$. A finite correlation of the noise for a time $t_{c}$ represents the persistence of activity up to $t=t_{c}$, which decays exponentially with $t_{c}$. Hence, a finite, nonzero $t_{c}$ quantifies the activity of the system. In the $t_{c} \rightarrow 0$ limit, the active fluctuation becomes thermal and the system becomes passive in nature. In the present work, we consider $D=\gamma k_{B} T$ (fluctuation-dissipation relation or FDR) to have the typical thermal equilibrium limit of the dynamics at temperature $T$ \cite{fodor2016far,mandal2017entropy}. However, for a nonzero $t_{c}$, the dynamics is in nonequilibrium, with an effective temperature which is different from the actual temperature of the system \cite{tailleur2009sedimentation}. In the long time limit, one can define this effective temperature in terms of the self-propulsion speed of the active particle and can relate it to the strength of the noise, $D$ \cite{fily2012athermal}. 
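Before turning to the analytical solution, the dynamics~\eqref{eq:model-vector} together with the noise process~\eqref{eq:noise-model} can be integrated numerically. The following is a minimal sketch using a simple Euler--Maruyama discretization (the simulations reported in the next section use Heun's method); all parameter values here are purely illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

m, gamma, k = 1.0, 1.0, 1.0      # mass, viscous coefficient, trap strength
omega_c = 1.0                    # cyclotron frequency |q|B/(m c)
D, t_c = 1.0, 1.0                # noise strength and correlation time
dt, n_steps = 1e-3, 100_000

r = np.zeros(2)                              # position (x, y)
v = np.array([1.0, 1.0])                     # initial velocity v_0
xi = rng.normal(0.0, np.sqrt(0.5 / t_c), 2)  # stationary start for xi

traj = np.empty((n_steps, 2))
for n in range(n_steps):
    # Lorentz term for B = B z-hat: (|q|/c)(v x B)/m = omega_c * (v_y, -v_x)
    lorentz = omega_c * np.array([v[1], -v[0]])
    a = -(gamma / m) * v + lorentz - (k / m) * r + np.sqrt(2.0 * D) / m * xi
    r = r + v * dt
    v = v + a * dt
    # Ornstein-Uhlenbeck update of the active noise
    xi = xi - (xi / t_c) * dt + (np.sqrt(dt) / t_c) * rng.normal(size=2)
    traj[n] = r

# Averaging traj over many independent realizations yields the mean
# displacement and mean square displacement discussed below.
\end{verbatim}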
By defining $\Gamma = \frac{\gamma}{m},\ \omega_c = \frac{|q|B}{mc}, \ \text{and}\ \omega_0 = \sqrt{\frac{k}{m}}$ and introducing a complex variable $z(t) = x(t) + j\ y(t)$, Eq.~\eqref{eq:model-vector} can be rewritten in terms of $z(t)$ as \begin{equation} \ddot{z}(t) + \Gamma \dot{z}(t) - j \omega_c \dot{z}(t) + \omega_0^2 z(t) = \epsilon(t), \label{eq:model-complex} \end{equation} where $j =\sqrt{-1}$ and $\epsilon(t)=\frac{\sqrt{2 D}}{m}\left[ \xi_x(t) + j\ \xi_y(t)\right]$. By performing the Laplace transform of the complex variables $z(t)$ and $\dot{z}(t)$ $\left[ \mathcal{L}\{z\}(s) = \int\limits_0^\infty e^{-s t} z(t)\, dt\ \text{and} \ \mathcal{L}\{\dot{z}\}(s) = \int\limits_0^\infty e^{-s t} \dot{z}(t)\, dt \right]$, with initial conditions $z(0) = z_0$ and $\dot{z}(0) = v_0$, respectively, and using the partial fraction method, the solution of the dynamics [Eq. \eqref{eq:model-complex}] can be obtained as \begin{equation} z(t) = \sum_{i=1}^2 b_i z_0 e^{s_i t} + \sum_{i=1}^2 a_i v_0 e^{s_i t} + \sum_{i=1}^2 a_i \int\limits_0^t e^{s_i (t - t^\prime)} \epsilon(t^\prime) dt^\prime. \label{eq:model-solution} \end{equation} Here $s_i$'s are given by \begin{equation*} s_{1,2} = \dfrac{-\Omega \pm \sqrt{\Omega^2 - 4\omega_0^2}}{2}, \label{eq:solution-si} \end{equation*} with $\Omega = \Gamma - j \omega_c$. The coefficients $a_i$'s and $b_i$'s are given by \begin{equation*} a_{1,2} = \pm \frac{1}{\sqrt{\Omega^2 - 4\omega_0^2}}\ \text{and} \ b_{1,2} = \pm \frac{\Omega \pm \sqrt{\Omega^2 - 4\omega_0^2}}{2\sqrt{\Omega^2 - 4\omega_0^2}}, \label{eq:solution-ai-bi} \end{equation*} respectively. In order to analyze the transport behaviour of such a system, we focus mainly on the mean displacement, steady-state correlations, and mean square displacement. The mean displacement (MD), $\langle R(t) \rangle$, can be calculated from the relation \begin{align} \langle R(t) \rangle = & \langle z(t) - z(0) \rangle \nonumber \\ = &\langle x(t) - x(0) \rangle + j \langle y(t) - y(0) \rangle \nonumber \\ = & a_1 \bigl[ e^{s_1 t} \left(s_1 z_0+v_0+\Omega z_0\right) \nonumber \\ &-e^{s_2 t} \left(s_2 z_0+v_0+\Omega z_0\right) \bigr] - z_0. \label{eq:solution-zavg} \end{align} The steady-state position correlation [$C_r(t)$] and velocity correlation [$C_v(t)$] can be defined as \begin{align} C_r(t) & = \lim_{t^\prime \rightarrow \infty} \langle {\bf r}(t^\prime) \cdot {\bf r}(t^\prime + t) \rangle \nonumber \\ & = \lim_{t^\prime \rightarrow \infty} Re\left\{\langle z(t^\prime) z^*(t^\prime + t) \rangle \right\} \label{eq:def-cxt} \end{align} and \begin{align} C_v(t) & = \lim_{t^\prime \rightarrow \infty} \langle {\bf v}(t^\prime) \cdot {\bf v}(t^\prime + t) \rangle \nonumber \\ & = \lim_{t^\prime \rightarrow \infty} Re\left\{\langle \dot{z}(t^\prime) \dot{z}^*(t^\prime + t) \rangle \right\}. \label{eq:def-cvt} \end{align} In Eqs.~\eqref{eq:def-cxt} and \eqref{eq:def-cvt}, `$*$' denotes the complex conjugate and $Re\{\}$ represents the real part. Similarly, the mean square displacement (MSD), $\langle R^2(t) \rangle$, is given by the relation \begin{align} \langle R^2(t) \rangle & = \langle [{\bf r}(t) - {\bf r_0}]^2 \rangle \nonumber \\ & = \langle |z(t) - z_0|^2 \rangle. \label{eq:def-msd} \end{align} \begin{figure}[!ht] \includegraphics[scale=0.57]{Fig1.eps} \caption{Simulated trajectories of a free particle ($\omega_{0}=0$) in (a) and (b) and in the presence of harmonic confinement ($\omega_{0}=1.0$) in (c) and (d). The color map shows the strength of the harmonic confinement. 
Low magnetic field ($\omega_{c}=0.01$) is considered for (a) and (c) while high magnetic field ($\omega_{c}=5.0$) is taken for (b) and (d). The other common parameters are $m=1$, $\gamma=1.0$, and $t_{c}=1.0$.} \label{fig:traj} \end{figure} The simulation of the dynamics [Eq.~\eqref{eq:model-vector}] is carried out using Heun's method \cite{gard1988intro} and the Fox algorithm \cite{fox1988fast}. A time step of $10^{-3}$ sec is chosen for each run of the simulation. For each realization, the simulation is run up to $10^5$ sec. The averages are taken over $10^5$ realizations after ignoring the initial transients (up to $10^3$ sec) in order for the system to reach the steady state. The detailed simulation results along with the analytical calculations are discussed in the following section.

\section{RESULTS AND DISCUSSION} \begin{figure}[!ht] \centering \includegraphics[scale=0.42]{Fig2.eps} \caption{The 2D parametric plot of MD [Eq.~\eqref{eq:md-expansion}] is shown in (a), (c), and (d) for a free particle ($\omega_{0}=0.0$) and in (b) for a confined harmonic particle ($\omega_{0}=1.0$). The color map indicates the evolution of MD with time in both (a) and (b). For a free particle, MD attains a non-zero stationary value in the long time limit or at the steady state. The color map in (c) and (d) represents the evolution of this stationary MD with $\omega_{c}$ and $\gamma$, respectively. The other common parameters in (c) are: $z_0 = 0 + 0j,\ t_c = 1.0,\ v_0 = 1 + j,\ m = 1$, and $\gamma = 1$. Similarly, the other common parameters in (d) are: $z_0 = 0 + 0j,\ t_c = 1.0,\ v_0 = 1 + j,\ m = 1$, and $\omega_{c} = 1$.} \label{fig:md} \end{figure} In Fig.~\ref{fig:traj}, we have shown the simulated trajectories of the dynamics [Eq.~\eqref{eq:model-vector}] for a free particle [Fig.~\ref{fig:traj}(a) and (b)] as well as for a harmonically confined particle [Fig.~\ref{fig:traj}(c) and (d)]. The results presented in Figs.~\ref{fig:traj}(a) and (c) are for a low-strength magnetic field ($\omega_c=0.01$) whereas those in Figs.~\ref{fig:traj}(b) and (d) are for a high-strength magnetic field ($\omega_c=5.0$). It is observed that in the absence of harmonic confinement ($\omega_{0}=0)$, the particle is free, and the influence of a strong magnetic field confines it to a very small region. In this case, the directional movement of the self-propelling particle is suppressed and the particle behaves as if it is trapped in the presence of a strong magnetic field [see Fig.~\ref{fig:traj}(b)]. On the other hand, when the particle is confined in a harmonic trap, it cannot come out of the trap, and under the influence of the magnetic field, it precesses around the field before coming back to the mean position in the long time limit. When the strength of the magnetic field is very large, the particle precesses around the field for a longer time as well as travels a larger distance [see Fig.~\ref{fig:traj}(d)]. We have exactly calculated the mean displacement $\langle R(t) \rangle$ in the transient regime by expanding Eq.~\eqref{eq:solution-zavg} in powers of $t$ as \begin{equation} \langle R(t) \rangle = v_0t - \frac{1}{2} \left( v_0\Omega + z_0\omega_0^2 \right)t^2 + \mathcal{O}(t^3). \label{eq:md-expansion} \end{equation} The parametric plot of MD [$\langle y(t) \rangle$ vs $\langle x(t) \rangle$] is shown in Fig.~\ref{fig:md} when the particle is set free as well as when the particle is confined in a harmonic trap. 
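The closed-form result~\eqref{eq:solution-zavg} is straightforward to evaluate numerically; the following minimal sketch (with illustrative parameter values) covers both the trapped and the free case.
\begin{verbatim}
import numpy as np

Gamma, omega_c = 1.0, 1.0
Omega = Gamma - 1j * omega_c
z0, v0 = 0.0 + 0.0j, 1.0 + 1.0j
t = np.linspace(0.0, 40.0, 4001)

for omega_0 in (1.0, 0.0):                 # trapped and free particle
    disc = np.sqrt(Omega**2 - 4.0 * omega_0**2 + 0j)
    s1, s2 = (-Omega + disc) / 2.0, (-Omega - disc) / 2.0
    a1 = 1.0 / disc
    R = a1 * (np.exp(s1 * t) * (s1 * z0 + v0 + Omega * z0)
              - np.exp(s2 * t) * (s2 * z0 + v0 + Omega * z0)) - z0
    # <x(t)> = Re R and <y(t)> = Im R give the parametric MD curve
    print(omega_0, R[-1])

print("free-particle limit v0/Omega =", v0 / Omega)
\end{verbatim}
The trapped particle spirals back to the origin, while the free particle approaches $v_0/\Omega$, in line with the discussion that follows.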
The time-asymptotic limit of the MD is zero for a harmonically confined particle [$\lim\limits_{t\rightarrow \infty} \langle R(t) \rangle =0$], irrespective of the strength of the magnetic field. That is why, in the long time limit, the particle reaches the center of the harmonic trap ($z = 0$), which is nothing but the initial position ($z=0$) of the particle, as depicted in Fig.~\ref{fig:md}(b). However, this is not the case for a free particle [Fig.~\ref{fig:md}(a)]. For a free particle, it is found that $\lim\limits_{t\rightarrow \infty} \langle R(t) \rangle^{(f)} = \dfrac{v_0}{\Omega}$, and hence it depends on the magnetic field as well as on the viscosity of the medium. In the absence of the magnetic field, i.e., in the $\omega_c \rightarrow 0$ limit, $\lim\limits_{t\rightarrow \infty}\langle R(t) \rangle^{(f)} = \dfrac{v_0}{\Gamma}$, which reflects that the MD depends on the inertia of the particle. This is indeed consistent with the results reported in Ref.~\cite{nguyen_active_2022} for the steady-state MD. Figures~\ref{fig:md}(c) and (d) depict the 2D plots of the variation of the steady-state MD with $\omega_c$ and $\gamma$, respectively. It starts from the value $\dfrac{v_0}{\Gamma}$ for $\omega_c = 0$ and approaches zero for a strong magnetic field [Fig.~\ref{fig:md}(c)]. Similarly, it starts from the value $\frac{jv_{0}}{\omega_{c}}$ for $\gamma=0$ and approaches zero for larger values of $\gamma$ [Fig.~\ref{fig:md}(d)]. This clearly indicates that the magnetic field has a strong influence on the MD only in the presence of inertia in the dynamics. In the absence of the magnetic field, the MD increases with inertia and approaches zero in the large-$\gamma$ limit. It is also noteworthy that $\langle R(t) \rangle$ does not depend on the activity time or persistence time of the dynamics. This is because the AOUP noise has zero mean [Eq.~\eqref{eq:noise-stat}], so it does not contribute to the average displacement. In the short-time regime ($t\rightarrow 0$ limit), the MD varies linearly with time and depends only on the initial velocity of the particle. \begin{figure}[!ht] \includegraphics[scale=0.31]{Fig3.eps} \caption{Normalized $C_{r}(t)$ [Eq.~\eqref{eq:solution-cxt}] as a function of $t$ is shown in (a) and (b) and normalized $C_{v}(t)$ [Eq.~\eqref{eq:solution-cvt}] as a function of $t$ is shown in (c) and (d), respectively, for a confined harmonic particle ($\omega_0=1$), obtained from the analytical calculations as well as from the simulation for different values of $\omega_c$. We have taken $\gamma = 1.0$ in (a) and (c) and $\gamma = 10.0$ in (b) and (d). The other common parameters are $t_c = 1$ and $m = 1$.} \label{fig:cxt_cvt_wc} \end{figure} \begin{figure}[!hb] \includegraphics[scale=0.31]{Fig4.eps} \caption{Normalized $C_{r}(t)$ [Eq.~\eqref{eq:solution-cxt}] as a function of $t$ obtained from both analytical calculations and simulation for different values of $\omega_{0}$ and $t_c$ is shown in (a) and (b), respectively. Normalized $C_{v}(t)$ [Eq.~\eqref{eq:solution-cvt}] as a function of $t$ obtained from both analytical calculations and simulation for different values of $\omega_{0}$ and $t_c$ is shown in (c) and (d), respectively. We have taken $t_{c}=1$ in (a) and (c) and $\omega_0=1.0$ in (b) and (d). The other common parameters are $\omega_c = 1$, $m = 1$, and $\gamma = 1$.} \label{fig:cxt_cvt_w0_tc} \end{figure} Next, we pay attention to the steady-state behaviour of the position correlation $C_r(t)$ and velocity correlation $C_v(t)$. 
Substituting the solution $z(t)$ from Eq.~\eqref{eq:model-solution} and the noise properties from Eq.~\eqref{eq:noise-stat} in Eq.~\eqref{eq:def-cxt}, $C_r(t)$ can be calculated as \begin{equation} \begin{split} C_r(t) = Re\Bigg\{ & \sum_{i=1}^2 \sum_{j=1}^2 \frac{2a_i a_j^* D}{m^2} \Bigg[ \frac{t_c e^{-t/t_c}}{(t_cs_i - 1) (t_cs_j^* + 1)} \\ & - \frac{2 e^{s_j^* t}}{(s_i + s_j^*)(1 - t_c^2 s_j^{*2})} \Bigg] \Bigg\}. \end{split} \label{eq:solution-cxt} \end{equation} Similarly, substituting the solution $z(t)$ from Eq.~\eqref{eq:model-solution} and the noise properties from Eq.~\eqref{eq:noise-stat} in Eq.~\eqref{eq:def-cvt}, $C_v(t)$ can be calculated as \begin{equation} \begin{split} C_v(t) = Re\Biggl\{ & \sum_{i=1}^2 \sum_{j=1}^2 \frac{2c_i c_j^* D}{m^2} \Biggl[ \frac{t_c e^{-t/t_c}}{(t_cs_i - 1) (t_cs_j^* + 1)} \\ & - \frac{2 e^{s_j^* t}}{(s_i + s_j^*)(1 - t_c^2 s_j^{*2})} \Biggr] \Biggr\}, \end{split} \label{eq:solution-cvt} \end{equation} where \begin{equation} c_{1,2} = \frac{-\Omega \pm \sqrt{\Omega^2 - 4\omega_0^2}}{\sqrt{\Omega^2 - 4\omega_0^2}}. \label{eq:solution-ci} \end{equation} For a confined harmonic particle, the normalized $C_r(t)$ and $C_v(t)$ are plotted as a function of $t$ in Fig.~\ref{fig:cxt_cvt_wc} for different values of $\omega_{c}$. The results presented in Figs.~\ref{fig:cxt_cvt_wc}(a) and (b) for $C_r(t)$ are for the inertial ($\gamma=1$) and overdamped ($\gamma=10$) regimes, respectively. Similarly, the results presented in Figs.~\ref{fig:cxt_cvt_wc}(c) and (d) for $C_v(t)$ are for the inertial ($\gamma=1$) and overdamped ($\gamma=10$) regimes, respectively. The obtained analytical results are in good agreement with the simulation. It is observed that with increasing magnetic field strength ($\omega_{c}$), the correlation in position persists for a longer time before decaying to zero, whereas the velocity correlation decays faster with $\omega_{c}$, as expected. Most importantly, in the overdamped regime ($\gamma=10$), where the inertial effects are negligible, the magnetic field does not influence the correlation behaviour of either position or velocity. The dependence of the steady-state correlations on the harmonic confinement ($\omega_{0}$) and correlation time ($t_{c}$) is shown in Fig.~\ref{fig:cxt_cvt_w0_tc}. Both $C_{r}(t)$ and $C_v(t)$ decay faster with increase in $\omega_{0}$, whereas with increase in $t_{c}$, both quantities persist for a longer time before decaying to zero. Using Eq.~\eqref{eq:model-solution} in Eq.~\eqref{eq:def-msd}, the MSD of the harmonically confined particle $\langle R^2(t)\rangle$ can be calculated exactly as \begin{widetext} \begin{equation} \begin{split} \langle R^2(t) \rangle = & \Biggl| a_1 \left(-\frac{z_0}{a_1}+\left(e^{s_1 t}-e^{s_2 t}\right) \left(v_0+\Omega z_0\right)+s_1 z_0 e^{s_1 t}-s_2 z_0 e^{s_2 t}\right) \Biggr|^2 \\ & +\sum_i\sum_j \frac{2a_i a_j^* D}{m^2} \Biggl[ \frac{t_c e^{t \left(s^*_j-\frac{1}{t_c}\right)}}{\left(t_c s_i+1\right) \left(1-t_c s^*_j\right)}+\frac{t_c e^{t \left(s_i-\frac{1}{t_c}\right)}}{\left(1-t_c s_i\right) \left(t_c s^*_j+1\right)} \\ & +\frac{t_c \left(s_i+s^*_j\right)-2}{\left(s_i+s^*_j\right) \left(t_c s_i-1\right) \left(t_c s^*_j-1\right)}+\frac{e^{t \left(s_i+s^*_j\right)} \left(t_c \left(s_i+s^*_j\right)+2\right)}{\left(s_i+s^*_j\right) \left(t_c s_i+1\right) \left(t_c s^*_j+1\right)} \Biggr]. 
\end{split} \label{eq:solution-msd} \end{equation} \end{widetext} With the help of a Taylor series expansion, Eq.~\eqref{eq:solution-msd} can be expanded in powers of $t$; dropping the higher powers of $t$, $\langle R^2(t) \rangle$ can be obtained as \begin{equation} \begin{split} \langle & R^2(t) \rangle = |v_0|^2t^2 - \frac{2\Gamma|v_0|^2 + (v_0^*z_0 + v_0z_0^*)\omega_0^2}{2}t^3 \\ & + \frac{1}{12}\Biggl(\frac{6 D}{m^2 t_c} + 2\omega_0^2\, Re\left[(5\Gamma-j\omega_c)v_0z_0^*\right] \\ & + 3|z_0|^2\omega_0^4 + |v_0|^2(7\Gamma^2 -4\omega_0^2-\omega_c^2) \Biggr) t^4 +\mathcal{O}\left(t^5\right). \end{split} \label{eq:msd-shorttime} \end{equation} We have plotted the MSD as a function of $t$ for a free particle as well as for a confined harmonic particle in Figs.~\ref{fig:msd_wc}(a) and (b), respectively, for different values of $\omega_{c}$. From the exact calculation of the MSD, it is confirmed that in the $t \rightarrow 0$ limit, $\langle R^2(t) \rangle$ is proportional to $t^{2}$; hence the dynamics is ballistic in nature. The initial ballistic regime ($\propto t^2$) depends solely on the initial velocity ($v_{0}$) of the particle. When $v_0 = 0$, the initial regime of the MSD is proportional to $t^4$. The dependence of the MSD on $\omega_c$ appears starting from the fourth power of $t$. Since there is harmonic confinement, the particle cannot escape to infinity. Hence, in the long time regime, the MSD attains a constant or saturated value $\langle R^2 \rangle_{st}$ [see Fig.~\ref{fig:msd_wc}(b)], which is given by the expression \begin{equation} \begin{split} \langle R^2 \rangle_{st} = & |z_0|^2 + \frac{2 D \left(t_c^2 \left(\omega _c^2+\Gamma ^2+\omega _0^2\right)\right)}{\Gamma m^2 \omega _0^2 \left(\left(t_c \left(\omega _0^2 t_c+\Gamma \right)+1\right)^2+t_c^2 \omega _c^2\right)} \\ & +\frac{2D\left(\Gamma \omega _0^2 t_c^3+2 \Gamma t_c+1\right)}{\Gamma m^2 \omega _0^2 \left(\left(t_c \left(\omega _0^2 t_c+\Gamma \right)+1\right)^2+t_c^2 \omega _c^2\right)}. \end{split} \label{eq:msd_st} \end{equation} This saturated value of the MSD depends on $\omega_c$. In the $\omega_c \rightarrow 0$ limit, $\langle R^2 \rangle_{st}$ is obtained as \begin{equation} \lim_{\omega_c \rightarrow 0} \langle R^2 \rangle_{st} = |z_0|^2 + \frac{2D(1 + t_c\Gamma)}{m^2 \omega_0^2 [\Gamma + t_c\Gamma(\Gamma + t_c \omega_0^2)]}, \end{equation} which is the same as that reported in Ref.~\cite{nguyen_active_2022} in the absence of a magnetic field. In the presence of a very strong magnetic field, i.e., in the $\omega_c\rightarrow\infty$ limit, the stationary MSD is simply $|z_0|^2 + \frac{2 D}{m^2 \Gamma \omega_0^2}$, which is independent of $t_c$. The same value of $\langle R^2 \rangle_{st}$ is obtained when we take the white noise limit, i.e., the limit $t_c \rightarrow 0$. This confirms that the particle behaves like a passive particle in the presence of a high magnetic field. In the thermal equilibrium limit of our model, the MSD shows behaviour similar to that reported in Ref.~\cite{lisy2013brownian}. It is also observed that $\langle R^2 \rangle_{st}$ is an increasing function of $\omega_c$, and hence the magnetic field enhances the overall displacement for a confined harmonic particle. This is very well reflected in Fig.~\ref{fig:msd_wc}(b). 
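The limits quoted above can also be verified symbolically from Eq.~\eqref{eq:msd_st}; the following is a minimal sketch (dropping the constant $|z_0|^2$ term for brevity).
\begin{verbatim}
import sympy as sp

D, m, G, w0, wc, tc = sp.symbols('D m Gamma omega_0 omega_c t_c',
                                 positive=True)
den = G * m**2 * w0**2 * ((tc * (w0**2 * tc + G) + 1)**2 + tc**2 * wc**2)
R2_st = (2*D*tc**2*(wc**2 + G**2 + w0**2)
         + 2*D*(G*w0**2*tc**3 + 2*G*tc + 1)) / den

print(sp.simplify(sp.limit(R2_st, wc, sp.oo)))  # 2*D/(Gamma*m**2*omega_0**2)
print(sp.simplify(sp.limit(R2_st, tc, 0)))      # 2*D/(Gamma*m**2*omega_0**2)
\end{verbatim}
Both limits indeed coincide, confirming the passive-like behaviour in a strong field.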
The MSD for a free particle $\langle R^2 (t) \rangle^{(f)}$ can be calculated by substituting $\omega_0=0$ and simplifying Eq.~\eqref{eq:solution-msd} as \begin{widetext} \begin{equation} \begin{split} \langle R^2 (t)\rangle^{(f)} = & \frac{2 |v_0|^2 e^{-\Gamma t} \left(\cosh (\Gamma t)-\cos (\omega_c t )\right)}{\Gamma ^2+\omega_c^2} + \frac{2D}{m^2(\Gamma^2 + \omega_c^2)}\Biggl[ \\ & \frac{e^{-\Gamma t} \left(2 \cos (\omega_c t) \left(t_c \omega_c^2 (\Gamma t_c+1)+\Gamma (\Gamma t_c-2) (\Gamma t_c-1)\right)-2 \omega_c \sin (\omega_c t) \left(t_c^2 \left(\Gamma ^2+\omega_c^2\right)-4 \Gamma t_c+2\right)\right)}{\left(\Gamma ^2+\omega_c^2\right) \left(t_c^2 \omega_c^2+(\Gamma t_c-1)^2\right)} \\ & + \frac{2 t_c^2 e^{-t \left(\Gamma +\frac{1}{t_c}\right)} \left(\omega_c \sin (\omega_c t) \left(t_c^2 \left(\Gamma ^2+\omega_c^2\right)-2 \Gamma t_c-1\right)+\cos (\omega_c t) \left(-\Gamma \left(\Gamma ^2 t_c^2-1\right)-t_c \omega_c^2 (\Gamma t_c+2)\right)\right)}{\left(t_c^2 \omega_c^2+(\Gamma t_c-1)^2\right) \left(t_c^2 \omega_c^2+(\Gamma t_c+1)^2\right)} \\ & -\frac{4 \Gamma }{\Gamma ^2+\omega_c^2}+\frac{2 t_c e^{-\frac{t}{t_c}} (\Gamma t_c-1)}{t_c^2 \omega_c^2+(\Gamma t_c-1)^2}+\frac{e^{-2 \Gamma t} (\Gamma t_c-1)}{\Gamma \left(t_c^2 \omega_c^2+(\Gamma t_c-1)^2\right)}+\frac{(\Gamma t_c+1) (2 \Gamma t_c+1)}{\Gamma \left(t_c^2 \omega_c^2+(\Gamma t_c+1)^2\right)} + 2t - 2t_c + 2t_c e^{\frac{-t}{t_c}} \Biggr]. \end{split} \label{eq:solution-msd-free} \end{equation} \end{widetext} Expanding Eq.~\eqref{eq:solution-msd-free} in the powers of $t$, we get \begin{equation} \begin{split} \langle R^2 &(t) \rangle^{(f)} = |v_0|^2 t^2 - \Gamma |v_0|^2 t^3 \\ & + \left(\frac{D}{2 m^2 t_c}+\frac{7 \Gamma ^2 |v_0|^2}{12} - \frac{|v_0|^2 \omega_c^2}{12}\right) t^4 +\mathcal{O}\left(t^5\right). \end{split} \label{eq:msd-shorttime-free} \end{equation} From this equation, it is confirmed that $\langle R^2 (t) \rangle^{(f)}$ depends on magnetic field and in the absence of magnetic field ($\omega_{c} \rightarrow 0$ limit), the result for $\langle R^2 (t) \rangle^{(f)}$ is consistent with that reported for a free particle in Ref.~\cite{nguyen_active_2022}. In the time asymptotic limit ($t \rightarrow \infty$), the MSD in Eq.~\eqref{eq:solution-msd-free} reduces to \begin{equation} \begin{split} \langle R^2 \rangle_{st}^{(f)} &= \frac{2D}{m^2(\Gamma^2 + \omega_c^2)}\Biggl( -\frac{4 \Gamma }{\Gamma ^2+\omega_c^2}+2 t -2 t_c \\ & +\frac{(\Gamma t_c+1) (2 \Gamma t_c+1)}{\Gamma \left(t_c^2 \omega_c^2+(\Gamma t_c+1)^2\right)} \Biggr). \end{split} \label{eq:solution-msd-free-longtime} \end{equation} \begin{figure}[!hb] \includegraphics[scale=0.52]{Fig5.eps} \caption{MSD as a function of $t$ for different $\omega_c$ values (a) for a free particle [Eq.~\eqref{eq:solution-msd-free}] and (b) for a confined harmonic particle [Eq.~\eqref{eq:solution-msd}]. The other common parameters are $t_{c}=1$, $\gamma = 1$, and $m = 1$.} \label{fig:msd_wc} \end{figure} \begin{figure*}[!ht] \includegraphics[scale=0.52]{Fig6.eps} \caption{For a harmonically confined particle ($\omega_{0}=1.0$), MSD as a function of $t$ [Eq.~\eqref{eq:solution-msd}] (a) for different $\Gamma$ values fixing $t_c=1.0$ and (b) for different $t_{c}$ values fixing $\Gamma=1.0$. For a free particle, MSD as a function of $t$ [Eq.~\eqref{eq:solution-msd-free}] (c) for different $\Gamma$ values fixing $t_c=1$ and (d) for different $t_c$ values fixing $\Gamma=1.0$. 
The common parameters are $\omega_c = 1.0$ and $m=1.0$.} \label{fig:msd_G_tc} \end{figure*} Thus, the steady-state MSD for a free particle depends on $\omega_{c}$ and approaches zero in the $\omega_{c} \rightarrow \infty$ limit. This indicates that the presence of the magnetic field suppresses the overall displacement of a free particle, in contrast to that of a harmonically confined particle. These results are summarized in Fig.~\ref{fig:msd_wc}, where it can be seen that the initial ballistic regimes are similar for both the free and the confined harmonic particle. However, in the long time regime, the MSD is linearly proportional to $t$ for a free particle (diffusive in nature) but approaches a stationary value for a confined harmonic particle (non-diffusive in nature). The steady-state MSD for a free particle gets suppressed with the magnetic field, whereas it gets enhanced for a confined harmonic particle. Other than these, we observe oscillations in the intermediate time regimes for both the free and the harmonically confined particle, which could be due to the influence of the magnetic field. It is also to be noted that for large $t_c$ (keeping $t \gg t_c$), $\langle R^2 \rangle_{st}^{(f)}$ can be obtained as \begin{equation} \lim_{t_c\rightarrow\infty} \langle R^2 \rangle_{st}^{(f)} = \frac{2D}{m^2(\Gamma^2 + \omega_c^2)}\Biggl( -\frac{2 \Gamma }{\Gamma ^2+\omega_c^2}+2 t \Biggr), \label{eq:solution-msd-free-hightc} \end{equation} which is independent of $t_c$. The MSD as a function of $t$ is plotted for different $\Gamma$ and $t_{c}$ values in Figs.~\ref{fig:msd_G_tc}(a) and (b) for a confined harmonic particle and in Figs.~\ref{fig:msd_G_tc}(c) and (d) for a free particle, respectively. It can be seen that for a free particle in the time-asymptotic limit, the MSD is independent of $t_c$, while for a confined harmonic particle, $t_c$ suppresses the MSD. However, for both the free and the harmonically confined particle, the MSD gets suppressed with $\Gamma$. Since the MSD for a free particle in the time-asymptotic limit is proportional to $t$, the steady-state diffusion coefficient for a free particle $\mathcal{D}_f$ can be calculated as \begin{equation} \mathcal{D}_f = \lim_{t\rightarrow\infty} \frac{\langle R^2 (t)\rangle^{(f)}}{2t}= \frac{2D}{\gamma^2 + m^2\omega_c^2}. \end{equation} Substituting $\omega_{c}=\frac{qB}{mc}$ in the above equation, $\mathcal{D}_f$ can be simplified as \begin{equation} \mathcal{D}_f = \frac{2D c^{2}}{\gamma^2 c^{2} + q^2 B^2}. \end{equation} Hence, $\mathcal{D}_f$ is independent of the mass of the particle but depends on the magnetic field. It approaches zero when the particle is subjected to a strong magnetic field. In the equilibrium limit ($D=\gamma k_{B} T$), the diffusive behaviour is found to be similar to that reported in Ref.~\cite{paraan2008brownian}, and in the absence of the magnetic field, the expression for $\mathcal{D}_{f}$ is the same as that reported in Ref.~\cite{nguyen_active_2022}.

\section{SUMMARY} In this work, we have studied the motion of a charged inertial active Ornstein-Uhlenbeck particle in the presence of a magnetic field. One of the important observations is that the magnetic field has a strong influence on the dynamical behaviour of the particle only in the presence of inertia in the dynamics. The particle (if free) on average covers a finite distance, with the mean displacement settling at a constant value in the long time limit. This constant value is found to depend on the magnetic field and gets reduced with increasing field strength. 
On the contrary, if the particle is confined in a harmonic trap, it always comes back to the mean position of the trap, irrespective of the magnetic field. For a highly viscous medium, where the inertial influence is negligible, the dynamical behaviour of the particle is not affected by the magnetic field. Furthermore, the initial time regime of the mean square displacement is found to be similar, showing ballistic behaviour for both the free and the harmonically confined particle. On the other hand, the time-asymptotic regime is diffusive for a free particle and non-diffusive for a harmonically confined particle. The ballistic regime for both the free and the confined harmonic particle gets reduced with increasing magnetic field strength. Surprisingly, for a harmonically confined particle, the steady-state mean square displacement in the presence of a very strong magnetic field is the same as that for a passive particle. When the strength of the magnetic field is very high, the steady-state mean square displacement becomes independent of the field as well as of the noise correlation time or persistence time of the dynamics, ensuring that the particle behaves like a passive particle. To understand this feature, it is further necessary to explore the relaxation behaviour of the dynamics and quantify the degree of irreversibility in terms of entropy production and nonequilibrium temperature \cite{mandal2017entropy,fodor2016far}. Similarly, for a free particle in the time-asymptotic limit, the MSD becomes independent of activity despite the persistence of activity for a longer time. We believe that the results of our model are amenable to experimental verification and can be applied to implement magnetic control of a charged active suspension by fine-tuning the strength of the external magnetic field. It would be further interesting to explore the relaxation behaviour of the dynamics by introducing elasticity in the viscous solution \cite{igor2012visco, sevilla2019generalized}. Moreover, the inertial AOUP particle under the action of a magnetic field can be extended to more complex situations, as in Refs.~\cite{vuijk2020lorentz, abdoli2021stochastic}.

\section{Acknowledgement} M.S. acknowledges the start-up grant from the UGC Faculty Recharge Program, Govt. of India, for financial support.
\section{Introduction} Graphs are a highly informative, flexible, and natural way of representing data in various real-world domains. Graph neural networks (GNNs) \cite{kipf2017semi,hamilton2017inductive,velickovic2018graph} have emerged as the standard tool to analyze graph data, which is non-Euclidean and irregular in nature. GNNs have achieved state-of-the-art results in various graph analytical tasks, ranging from applications in biology and healthcare \cite{dong2022mucomid, Ahmedt2021graphmedical} to recommending friends in a social network \cite{fan2019graph}. GNNs' success can be attributed to their ability to extract powerful latent features via complex aggregation of neighborhood information \cite{funke2021zorro, ying2019:gnnexplainer}. However, these models are inherently black-box and complex, making it extremely difficult to understand the underlying reasoning for their predictions. With the growing adoption of these models in various sensitive domains, efforts have been made to explain their decisions in terms of feature as well as neighborhood attributions. Model explanations can offer insights into the internal decision-making process of the model, which builds the trust of the users. Moreover, owing to current regulations \cite{officialgrpr2016} and guidelines for building trustworthy AI systems, several proposals advocate for deploying (automated) model explanations \cite{goodman2017european, selbst2018meaningful}. Nevertheless, releasing additional information such as explanations can have adverse effects on the privacy of the training data. While the risk to privacy due to model explanations exists for machine learning models in general \cite{shokri2021privacy}, it can have more severe implications for graph neural networks. For instance, several works \cite{olatunji2021membership,du2018towards} have established the increased vulnerability of GNNs to privacy attacks due to the additional encoding of the graph structure in the model itself. We initiate the \emph{first investigation} of the effect of releasing feature explanations for graph neural networks on \emph{the leakage of private information} in the training data. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{motivating_scenario_rec_attack.pdf} \caption{The importance scores for the features as provided by an explanation can be exploited to infer the graph structure. Here, we provide an example of a binary explanation where a score of $1$ indicates that the corresponding feature is part of the explanation.} \label{fig:motivating-example} \end{figure} To analyze the information leakage due to explanations, we take the perspective of an adversary whose goal is to infer the hidden connections among the training nodes. Consider a setting where the user has access to node features and labels but not to the graph structure among the nodes. For example, node features could be part of the public profiles of various individuals. The graph structure among the nodes could be some kind of social network which is private. Now, let's say that the user wants to obtain a GNN trained on her data while providing the node features and labels to a central authority that has access to the private graph structure. Here we ask the question: \emph{how much information do the feature-based explanations leak about the graph structure used to train the GNN model?} We quantify this information leakage via several graph reconstruction attacks. A visual illustration is shown in Figure \ref{fig:motivating-example}. 
Specifically, our threat model consists of two main settings: first, where the adversary has access only to feature explanations, and second, where the adversary has additional information on node features/labels. Note that we only focus on feature-based explanations for GNNs in this work. For other explanation types such as node or edge explanations, as the returned nodes and edges belong to the node's neighborhood, it becomes trivial to reconstruct the graph structure.

\subsection{Our Contributions and Findings} Ours is the first work to analyze the risks of releasing feature explanations in GNNs for the privacy of the relationships/connections among the training nodes. We quantify the information leakage via explanations by the success rate of several graph reconstruction attacks. Our attacks range from a simple explanation-similarity-based attack to more complex attacks exploiting graph structure learning techniques. Besides, we provide a thorough analysis of the information leakage via feature-based explanations produced by three classes of GNN post-hoc explanation methods, including \emph{gradient-based}, \emph{perturbation-based} and \emph{surrogate model-based}. To analyze the differences in the robustness of the explanation methods to our privacy attacks, we investigate the explanation utility in terms of \emph{faithfulness} and \emph{sparsity}. We find that the gradient-based methods are the most susceptible to graph reconstruction attacks even though the corresponding explanations are the least faithful to the model; in other words, they have low utility. This is an important finding, as the corresponding explanations could release a large amount of private information without offering any utility to the user. The perturbation-based approach \textsc{Zorro}\xspace and its variant show the highest explanation utility as well as a high success rate of the attack, pointing to the expected trade-off between privacy and explanation utility. We perform our study over three types of datasets with varying properties. For instance, our first dataset has a large number of binary features, but the feature space is very sparse. Our second dataset has fewer but denser features. Our final dataset has a very small number of features. We find that the information leakage varies with the explanation technique as well as with the feature size of the dataset. The dataset with the smallest feature size (8) is the most difficult to attack. All baseline attacks which rely on knowledge of only features and labels perform no better than a random guess. In such a case, explanation-based attacks provide an improvement, though not a very large one, in inferring private graph structure information. Finally, we develop a perturbation-based defense for releasing feature-based explanations. Our defense employs a randomized response mechanism to perturb the individual explanation bits. We show that our defense reduces the attack to a random guess with a small drop in the explanation utility. Our anonymized code is available at \url{xxxxxxxxx}.

\section{Preliminaries} \subsection{Graph Neural Networks} Graph neural networks (GNNs) \cite{kipf2017semi,hamilton2017inductive,velickovic2018graph} are a special family of deep learning models designed to perform inference on graph-structured data. Variants of GNNs, such as the graph convolutional network (GCN), compute the representation of a node by utilizing its feature representation and that of the neighboring nodes. 
That is, GNNs compute node representations by recursive aggregation and transformation of the feature representations of neighboring nodes. Let $G = (V, E)$ denote a graph where $V$ is the node-set, and $E$ represents the edges or links among the nodes. Furthermore, let $\boldsymbol{x}_{i}^{(\ell)}$ be the feature representation of node $i$ at layer $\ell$, ${\mathcal{N}}_{i}$ denote the set of its 1-hop neighbors, and $\bm{\theta}$ a learnable weight matrix. Formally, the $\ell$-th layer of a graph convolutional operation can be described as \begin{align} \boldsymbol{z}_{i}^{(\ell)}=&\operatorname{AGGREGATION}^{(\ell)}\left(\left\{\boldsymbol{x}_{i}^{(\ell-1)},\left\{\boldsymbol{x}_{j}^{(\ell-1)} \mid j \in{\mathcal{N}}_{i}\right\}\right\}\right) \\ \boldsymbol{x}_{i}^{(\ell)}= &\operatorname{TRANSFORMATION} ^{(\ell)}\left(\boldsymbol{z}_{i}^{(\ell)}\right) \end{align} Then a softmax layer is applied to the node representations at the last layer (say $L$) for the final prediction of the node classes $\mathcal{C}$ \begin{align}\label{eq:pred} \boldsymbol{y}\leftarrow \operatorname{argmax}(\operatorname{softmax}(\boldsymbol{z}_{i}^{(L)}\bm{\theta})). \end{align} GNNs have been shown to possess increased vulnerability to privacy attacks due to the additional encoding of the graph structure in the model \cite{olatunji2021membership,du2018towards}. We further investigate the privacy risks of releasing post-hoc explanations for GNN models.

\subsection{Explaining Graph Neural Networks} GNNs are deep learning models which are inherently black-box or non-interpretable. This black-box behavior becomes more critical when applying them in sensitive domains like medicine, crime, and finance. Consequently, recent works have proposed post-hoc explainability techniques to explain the decisions of an already trained model. In this work, we are concerned with the task of node classification, where the model is trained to predict node labels in a graph. For such a task, an instance is a single node. An explanation for a node usually consists of a subset of the most important features as well as a subset of its neighboring nodes/edges responsible for the model's prediction. Depending on the explanation method, the importance is usually quantified either as a continuous score (also referred to as a soft mask) or a binary score (also called a hard mask). In this work, we consider three popular classes of explanation methods: \emph{gradient-based} \cite{selvaraju2017grad,baldassarre2019explainability,pope2019explainability}, \emph{perturbation-based} \cite{ying2019:gnnexplainer, funke2021zorro}, and \emph{surrogate} \cite{huang2020graphlime} methods. \mpara{Gradient-based methods.} These approaches usually employ the gradients of the target model's prediction with respect to the input features as importance scores for the node features. We use two gradient-based methods in our study, namely \textbf{\textsc{Grad}\xspace} \cite{imageclasssimonyan2013deep} and Grad-Input (\textbf{\textsc{Grad-I}\xspace}) \cite{sundararajan2017:integratedgrad}. For a given graph $G$ and trained GNN model $f(\boldsymbol{X},G,\theta)$ (where $\theta$ is the set of parameters and $\boldsymbol{X}$ is the feature matrix), \textsc{Grad}\xspace generates an explanation $\mathcal{E}_X$ by assigning continuous-valued importance scores to the features. For node $i$ with feature vector $\boldsymbol{x}_{i}$, the score is calculated as $\frac{\partial f}{\partial \boldsymbol{x}_{i}}$. 
\textsc{Grad-I}\xspace transforms the \textsc{Grad}\xspace explanation by an element-wise multiplication with the input features ($\boldsymbol{x}_{i}\odot\frac{\partial f}{\partial \boldsymbol{x}_{i}}$). \mpara{Perturbation-based methods.} Perturbation-based methods obtain soft or hard masks over the features/nodes/edges as explanations by monitoring the change in prediction with respect to different input perturbations. We use two methods from this class: \textbf{\textsc{Zorro}\xspace} \cite{funke2021zorro} and \textbf{\textsc{GNNExp}\xspace} \cite{ying2019:gnnexplainer}. \textsc{Zorro}\xspace learns discrete masks over input nodes and node features as explanations using a greedy algorithm. It optimizes a fidelity-based objective that measures how well the new predictions match the original predictions of the model by fixing the selected nodes/features and replacing the others with random noise values. This returns hard masks for the explanations. \textsc{GNNExp}\xspace learns soft masks over edges and features by minimizing the cross-entropy loss between the predictions on the original graph and the predictions on the newly obtained (masked) graph. We also utilize the \textsc{Zorro}\xspace variant that provides soft explanation masks, called \textbf{\textsc{Zorro-S}\xspace}. \textsc{Zorro-S}\xspace relaxes the argmax in \textsc{Zorro}\xspace's objective with a softmax, such that the masks are retrievable with standard gradient-based optimization. Together with the regularization terms of \textsc{GNNExp}\xspace, \textsc{Zorro-S}\xspace learns sparse soft masks. \mpara{Surrogate methods.} Surrogate methods fit a simple and interpretable model to a sampled local dataset corresponding to the query node. For example, the sampled dataset can be generated from the neighbors of the given query node. The explanations from the surrogate model are then used to explain the original predictions. From this class, we use GraphLime \cite{huang2020graphlime}, which we denote as \textbf{\textsc{GLime}\xspace}. As an interpretable model, it uses the global feature selection method HSIC-Lasso~\cite{yamada2014high}. The sampled dataset consists of the node and its neighborhood. The set of the most important features returned by HSIC-Lasso is used as an explanation for the GNN model. \textit{Remark:} Please note that in this work, we assume that only feature-based explanations are released to the user. In the presence of node/edge explanations, an adversary can trivially reconstruct large parts of the neighborhood, as the returned nodes/edges are part of the node's original neighborhood. \subsubsection{Measuring explanation quality.} We measure the quality of an explanation by its ability to approximate the model's behavior, which is referred to as \textbf{\textit{faithfulness}}. As the groundtruth for explanations is not available, we use the \textit{RDT-Fidelity} proposed by \cite{funke2021zorro} to measure faithfulness. The corresponding fidelity score measures how well the original and new predictions match by fixing the selected nodes/features and replacing the others with random noise values. 
Formally, the \textit{RDT-Fidelity} of explanation $\mathcal{E}_X$ corresponding to explanation mask $M(\mathcal{E}_X)$ with respect to the GNN $f$ and the noise distribution $\mathcal{N}$ is given by \begin{equation} \label{eq:fidelity} \mathcal{F}(\mathcal{E}_X) = \mathbb{E}_{Y_{\mathcal{E}_X}|Z\sim \mathcal{N}} \left[\mathbbm{1}_{f\left(X\right)=f(Y_{\mathcal{E}_X})}\right], \end{equation} where the perturbed input is given by \begin{equation} Y_{\mathcal{E}_X} = X\odot M(\mathcal{E}_X) + Z\odot(\mathbbm{1} - M(\mathcal{E}_X)), Z\sim \mathcal{N}, \label{eq:noisy_features_matrix} \end{equation} where $\odot$ denotes an element-wise multiplication, and $\mathbbm{1}$ is a matrix of ones of the corresponding size. \mpara{Sparsity.} We further note that, by definition, the complete input is faithful to the model. Therefore, we additionally measure the sparsity of the explanation. A meaningful explanation should be sparse and should only contain a small subset of features most predictive of the model decision. We use the entropy-based sparsity definition from \cite{funke2021zorro}, as it is applicable to both soft and hard explanation masks. Let $p$ be the normalized distribution of the explanation (feature) mask. Then the sparsity of an explanation is given by the entropy $H(p)$ over the mask distribution, $$H(p)= -\sum_{f\in M} p(f) \log p(f).$$ Note that the entropy here is bounded by $\log(|M|)$, where $|M|$ corresponds to the size of the feature set. The lower the entropy, the sparser the explanation. We will utilize these two explanation-quality metrics to argue about the differences in the attack performance (a minimal computational sketch of both metrics is given below).

\section{Threat Model and Attack Methodology} \subsection{Motivation} We consider the setting in which the features and the graph (adjacency matrix) are held by different data holders in practice. Specifically, the trusted central server has access to the graph structure and trains a GNN model using the node features and labels provided by the user. The user can further query the trained GNN model by providing node features. As the features are already known to the user, revealing \textit{feature explanations} for the prediction might be considered a safe way to increase the trust of the user in the model. We investigate such a scenario and uncover the increased privacy risks of releasing \emph{feature explanations even if the original node features/labels are already known to the adversary}. As an example of such a scenario, consider a GNN model trained on a SWIFT (Society for Worldwide Interbank Financial Telecommunication) \cite{swiftweb2022} financial network spanning several financial institutions to detect malicious users. For the sake of transparency, each financial institution in the network can query a GNN model trained with all the SWIFT data to easily identify malicious transactions: it inputs the message/account number (\textit{node features}) and the model returns a \textit{label} indicating whether the transaction is malicious or not (node classification task). In addition, the central server returns a feature explanation for the model's prediction. Assume that there exists an attacker (an insider of a financial institution with malicious intentions) who has some messages (node features). She is interested in knowing if there was a transaction between two customer accounts (target nodes). Such leakage would put customers at risk of having their financial worth estimated or their relationships publicized. 
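Before turning to the threat model, the following is a minimal computational sketch of the two explanation-quality metrics referenced above. The ``model'' $f$, the input $x$, and the explanation mask are hypothetical placeholders, and the fidelity is estimated by Monte-Carlo sampling over the noise distribution.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sparsity(mask):
    """Entropy H(p) of the normalized mask distribution; lower = sparser."""
    p = mask / mask.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def rdt_fidelity(f, x, mask, n_samples=100):
    """Fraction of noise draws for which the prediction on the
    masked-and-perturbed input matches the original prediction."""
    base = f(x)
    hits = 0
    for _ in range(n_samples):
        z = rng.normal(size=x.shape)       # draw from noise distribution N
        y = x * mask + z * (1.0 - mask)    # perturbed input, as defined above
        hits += int(f(y) == base)
    return hits / n_samples

# Toy usage: a classifier that only looks at feature 0, and a mask that
# keeps exactly that feature -> fidelity 1.0, sparsity 0.0 (fully sparse).
f = lambda v: int(v[0] > 0)
x = np.array([2.0, -1.0, 0.5, 0.0])
mask = np.array([1.0, 0.0, 0.0, 0.0])
print(rdt_fidelity(f, x, mask), sparsity(mask))
\end{verbatim}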
\subsection{Threat Model} We consider two scenarios: \textbf{first}, in which the adversary has access to node features and/or labels and obtains additional access to feature explanations, and \textbf{second}, in which the adversary has access only to feature explanations. The goal of the adversary is to infer the private connections of the graph used to train the GNN model. Corresponding to the above settings, we categorize our attacks as \emph{explanation augmentation} (when explanations are augmented with other information to launch the attack) and \emph{explanation-only} attacks. Through our five proposed attacks, we investigate the privacy leakage from the explanations generated by six different explanation methods. Our simple similarity-based explanation-only attack already allows us to quantify the additional information that the feature-based explanation encodes about the graph structure. Our explanation augmentation attacks are based on the graph structure learning paradigm, which allows us to effectively integrate additional known information into the learning of the private graph structure. Besides, our explanation augmentation attacks also result in a successfully trained GNN model without knowledge of the true graph structure, offering an additional advantage to the adversary.

\subsection{Attack Methodologies} \label{sec:attackmeth} Here, we provide a detailed description of our two types of attacks. We commence with the \textit{explanation-only attack}, in which we utilize only the provided explanation to launch the attack, followed by the \textit{explanation augmentation} attacks, in which more information such as node labels and/or features is exploited in addition to the explanation. The taxonomy of our attacks based on the attacker's knowledge is presented in Table \ref{tab:attack-settings}. \mpara{Explanation-only Attack.} \begin{figure}[h!] \centering \includegraphics[width=0.3\textwidth]{explain_sim.pdf} \caption{\ensuremath{\textsc{ExplainSim}}\xspace attack. Each node is assigned a feature explanation vector where blue (1) and red (0) indicate whether the feature is part of the explanation or not. The attacker then assigns an edge by computing the pairwise similarity, represented as $d(node_i, node_j)$, between the nodes' explanation vectors. We show the representation of \textsc{Zorro}\xspace's explanation for easier visualization.} \label{fig:explainsim} \end{figure} This is an unsupervised attack in which the attacker only has access to the explanations and does not have access to the features or the labels. The attacker measures the distance between each pair of explanation vectors and assigns an edge between them if their distance is small. The intuition is that the model might assign similar labels to connected nodes, which leads to similar explanations. We experimented with various distance metrics, but cosine similarity performs best across all datasets. We refer to this similarity-based attack as \textbf{\ensuremath{\textsc{ExplainSim}}\xspace}. This attack is illustrated in Figure \ref{fig:explainsim}; a computational sketch is given at the end of this section. \mpara{Explanation Augmentation Attack.} \begin{figure*}[h!] \centering \includegraphics[width=0.8\textwidth]{explanationattack.pdf} \caption{% \label{fig:exhoney} % Overview of \ensuremath{\textsc{GSEF}}\xspace. The generator takes node features and explanations as input and outputs a (possibly non-normalized) adjacency matrix, which is then normalized by the adjacency normalizer. 
\mpara{Explanation Augmentation Attack.} \begin{figure*}[h!] \centering \includegraphics[width=0.8\textwidth]{explanationattack.pdf} \caption{% \label{fig:exhoney} % Overview of \ensuremath{\textsc{GSEF}}\xspace. The generator takes node features and explanations as input and outputs an adjacency matrix, which is then normalized by the adjacency normalizer. The normalized adjacency matrix is used both in predicting the class labels and in reconstructing the node features (explanations) by the denoising autoencoders. The final reconstructed adjacency is the one that minimizes the reconstruction error on each of the node features and explanations, and the loss of the class label prediction. } \end{figure*} Towards explanation augmentation attacks, we leverage the graph structure learning paradigm of \cite{fatemi2021slaps}. In particular, we employ two \textit{generator modules} for generating graph edges corresponding to features and explanations, respectively. The generators are trained using feature/explanation reconstruction-based losses as well as the node classification loss. We commence by describing the common architecture of the attack model, followed by its concrete usage in the four attack variations in Section \ref{sec:augmentattacks}. \paragraph{\textbf{Generators}} The two generators take the node features/explanations as input and output two adjacency matrices. We employ the full parameterization (FP) approach to model the generators, similar to those used in \cite{franceschi2019learning, fatemi2021slaps}. In other words, each element of the reconstructed adjacency matrix $\tilde{\ensuremath{\bm{A}}}$ is treated as a separate learnable parameter, and the adjacency matrix is parameterized by a Bernoulli distribution. The output adjacency matrix function is given as $\tilde{\ensuremath{\bm{A}}}=\func{G}_{FP}(\ensuremath{\bm{X}}; \bm{\theta}_\func{G})=\bm{\theta}_\func{G}$, where $\bm{\theta}_\func{G}\in\mathbb{R}^{n\times n}$ and $\func{G}_{FP}(\cdot;\cdot)$ denotes the generator function. To obtain a symmetric adjacency matrix with non-negative elements, we perform the transformation \begin{equation*} \ensuremath{\bm{A}} = \ensuremath{\bm{D}}^{-\frac{1}{2}}\Big(\frac{\func{P}_{[0,1]}(\tilde{\ensuremath{\bm{A}}})+\func{P}_{[0,1]}(\tilde{\ensuremath{\bm{A}}})^T}{2}\Big)\ensuremath{\bm{D}}^{-\frac{1}{2}}, \end{equation*} where $\func{P}_{[0,1]}$ is the projection onto $[0,1]$ defined by \begin{equation*} \func{P}_{[0,1]}(x)=\left\{ \begin{array}{rcl} 0 & & x < 0,\\ 1 & & x > 1,\\ x & & \text{otherwise}. \end{array} \right. \label{eqn:projection} \end{equation*} The final adjacency matrix is computed by adding the matrices corresponding to the two generators; any element greater than one is clipped to $1$. The final graph is then generated by sampling from the learnt Bernoulli distributions.
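A minimal sketch of one FP generator with the symmetrization and normalization steps above is shown below; the initialization of $\bm{\theta}_\func{G}$ and the handling of zero-degree nodes are assumptions.
\begin{verbatim}
import torch

class FPGenerator(torch.nn.Module):
    """Fully parameterized (FP) generator: one learnable parameter per
    potential edge; the input only determines the matrix size."""
    def __init__(self, num_nodes):
        super().__init__()
        self.theta = torch.nn.Parameter(torch.rand(num_nodes, num_nodes))

    def forward(self):
        a = self.theta.clamp(0.0, 1.0)       # projection P_[0,1]
        a = (a + a.t()) / 2.0                # symmetrize
        deg = a.sum(dim=1).clamp(min=1e-12)  # avoid division by zero
        d = deg.pow(-0.5)
        return d.unsqueeze(1) * a * d.unsqueeze(0)  # D^{-1/2} A D^{-1/2}
\end{verbatim}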
\paragraph{ \textbf{Training with self-supervision and node classification loss}} The parameters of the generators/adjacency matrices are trained using a supervised loss as well as feature reconstruction losses. For supervised training, the graph sampled from the generator is fed to a graph convolution network (GCN) to predict node labels. Note that the GCN used here is not the target model. We use the cross-entropy loss ($CE$) between the predicted labels ($\tilde{Y}$) and the ground-truth labels ($Y$), $\loss{L}_C = CE(Y, \tilde{Y})$. For self-supervision, we employ \textit{denoising graph autoencoders}, which aim at reconstructing node features and explanations given noisy features/explanations and the learnt graph structure as input. We represent our self-supervised task models as $\func{M_{DAE}}$ and $\func{M_{DAE_{\mathcal{E}_X}}}$ for the node features and explanations, respectively. They take $\tilde{\ensuremath{\bm{X}}}$ ($\tilde{\mathcal{E}}_X$), the noisy node features (explanations), as input and produce a denoised version of the features (explanations) with the same dimension. The noise is added at random indices, represented by \emph{idx}, of the node features (explanations). Let $\ensuremath{\bm{X}}_{idx}$ ($\mathcal{E}_{X_{idx}}$) denote the true values at those indices; we minimize the following objectives for the node features and explanations, respectively: \begin{equation}\label{eq:loss-dae} \loss{L}_{DAE}= \mathcal{L}(\ensuremath{\bm{X}}_{idx}, \func{\func{M_{DAE}}}(\tilde{\ensuremath{\bm{X}}}, \ensuremath{\bm{A}}_X; \bm{\theta}_{\func{M_{DAE}}})_{idx}) \end{equation} \begin{equation}\label{eq:loss-daex} \loss{L}_{DAE_{\mathcal{E}_X}}= \mathcal{L}(\mathcal{E}_{X_{idx}}, \func{\func{M_{DAE_{\mathcal{E}_X}}}}(\tilde{\mathcal{E}}_X, \ensuremath{\bm{A}}_{\mathcal{E}_X}; \bm{\theta}_{\func{M_{DAE_{\mathcal{E}_X}}}})_{idx}) \end{equation} where $\ensuremath{\bm{A}}_X$ and $\ensuremath{\bm{A}}_{\mathcal{E}_X}$ are the generated adjacency matrices corresponding to the node features and explanations, respectively, and $\mathcal{L}$ is either the binary cross-entropy loss or the mean squared error, depending on the dataset. The final training loss for private graph extraction is then given by $$\loss{L}= \loss{L}_{DAE} + \loss{L}_{DAE_{\mathcal{E}_X}} + \loss{L}_C.$$ \textbf{Choice of noise and $\mathcal{L}$.} For binary datasets and hard explanation masks (\textsc{Zorro}\xspace), we use the binary cross-entropy loss as $\mathcal{L}$ and randomly flip $r=20$ percent of the indices whose values are $1$ to $0$. For datasets and explanations with continuous values, we add independent Gaussian noise to $r=20$ percent of the indices of each of the features or explanations; the loss in this case is the mean squared error. We refer to the above attack framework as \textbf{G}raph \textbf{S}tealing with \textbf{E}xplanations and \textbf{F}eatures (\textbf{\ensuremath{\textsc{GSEF}}\xspace}); its schematic diagram is given in Figure \ref{fig:exhoney}.
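A minimal sketch of one evaluation of the \ensuremath{\textsc{GSEF}}\xspace objective is shown below; the module interfaces (generators, denoising autoencoders, GCN) and the corruption routine \texttt{noise\_fn} are assumed, and the continuous-valued case of Equations \ref{eq:loss-dae} and \ref{eq:loss-daex} with a mean-squared-error loss is used for illustration.
\begin{verbatim}
import torch.nn.functional as F

def gsef_loss(x, expl, y, train_mask, gen_x, gen_e,
              dae_x, dae_e, gcn, noise_fn):
    """L = L_DAE + L_DAE_E + L_C for one training step."""
    a_x, a_e = gen_x(), gen_e()      # adjacency from each generator
    a = (a_x + a_e).clamp(max=1.0)   # combine; trim entries to 1

    x_noisy, idx_x = noise_fn(x)     # corrupt random indices
    e_noisy, idx_e = noise_fn(expl)
    loss_dae_x = F.mse_loss(dae_x(x_noisy, a_x)[idx_x], x[idx_x])
    loss_dae_e = F.mse_loss(dae_e(e_noisy, a_e)[idx_e], expl[idx_e])

    logits = gcn(x, a)               # classification on the learnt graph
    loss_c = F.cross_entropy(logits[train_mask], y[train_mask])
    return loss_dae_x + loss_dae_e + loss_c
\end{verbatim}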
Besides \ensuremath{\textsc{GSEF}}\xspace, we consider three attack variations that each employ a single generator module, as described below. \subsubsection{Attack variations} \label{sec:augmentattacks} \begin{table}[h!] \caption{Attack taxonomy based on the attacker's knowledge of node features ($\ensuremath{\bm{X}}$), labels ($\ensuremath{\bm{Y}}$), and feature explanations ($\mathcal{E}_X$).} \label{tab:attack-settings} \begin{tabular}{llll} \toprule \textsc{Attack} & {$\ensuremath{\bm{X}}$} & {$\ensuremath{\bm{Y}}$} & $\mathcal{E}_X$ \\ \midrule \ensuremath{\textsc{ExplainSim}}\xspace & \xmark & \xmark & \cmark \\ \ensuremath{\textsc{GSEF}}\xspace & \cmark & \cmark & \cmark \\ \ensuremath{\textsc{GSEF-concat}}\xspace & \cmark & \cmark & \cmark \\ \ensuremath{\textsc{GSEF-mult}}\xspace & \cmark & \cmark & \cmark \\ \ensuremath{\textsc{GSE}}\xspace & \xmark & \cmark & \cmark \\ \bottomrule \end{tabular} \end{table} The three variations are: (i) \textbf{\ensuremath{\textsc{GSEF-concat}}\xspace}, in which we concatenate the node features and explanations and feed the concatenated input to a single generator module; (ii) \textbf{\ensuremath{\textsc{GSEF-mult}}\xspace}, in which we perform element-wise multiplication between the features and the explanations and feed the result into a single graph generator module. This is equivalent to assigning importance weights to the node features, which emphasizes the essential characteristics of the nodes. Similar to \ensuremath{\textsc{GSEF-concat}}\xspace, we reconstruct the adjacency matrix using one generator of Figure \ref{fig:exhoney}; and (iii) \textbf{\ensuremath{\textsc{GSE}}\xspace}, in which the attacker only has access to the explanations and labels. Here, we also employ only one generator, with the explanations as input. \section{Experiments} In this section, we present experimental results that show the effectiveness of explanation-based attacks. Specifically, our experiments are designed to answer the following research questions: \begin{RQ} How does the knowledge of feature explanations influence the reconstruction of the private graph structure? \label{rq:attack-performance} \end{RQ} \begin{RQ} \label{rq:exp-vulnerability} What are the differences between explanation methods with respect to privacy leakage? \end{RQ} \begin{RQ} What is the additional advantage of the adversary (for example, in terms of the utility of the inferred information on a downstream task) in explanation augmentation attacks? \label{rq:reconstruct-edges} \end{RQ} \begin{RQ} How much does the lack of knowledge about ground-truth node labels affect attack performance? \label{rq:targetmodelaccess} \end{RQ} \subsection{Experimental Settings} \subsubsection{Attack baselines without explanations} \paragraph{\textbf{\textsc{FeatureSim}\xspace}} In this unsupervised attack, the attacker computes the pairwise similarity between the actual node features to reconstruct the graph. Specifically, an edge exists between two nodes if the distance between their feature representations is low. We use cosine similarity because it performs better than other distance metrics. \paragraph{\textbf{\textsc{Lsa}\xspace} \cite{he2021stealing}} \textsc{Lsa}\xspace (link stealing attack) is a black-box attack that assumes the attacker has access to a dataset drawn from a distribution similar to that of the target data (shadow dataset). Additionally, \textsc{Lsa}\xspace knows the architecture of the target model and can train a corresponding shadow model that replicates the behavior of the target model. The goal of the attack is to infer sensitive links between nodes of interest. We compare our results with their proposed attack-2, in which the attacker has access to the node features and labels. It trains a separate MLP model (reference model) using the available target attributes and their corresponding labels to obtain posteriors. Then, \textsc{Lsa}\xspace computes the pairwise distance between the posteriors obtained from the target model and those of the reference model for the nodes of interest. We use cosine similarity as the distance metric. \paragraph{\textbf{\textsc{GraphMI}\xspace} \cite{zhanggraphmi}} \textsc{GraphMI}\xspace is a white-box attack in which the attacker has access to the parameters of the target model, the node features, all the node labels, and other auxiliary information such as the edge density. The goal of the attacker is to reconstruct the sensitive links or connections between nodes. The attack model uses the cross-entropy loss between the true labels and the output posterior distribution of the target model, along with feature smoothness and adjacency sparsity constraints, to train a fully parameterized adjacency matrix.
The graph is then reconstructed using the graph autoencoder module, in which the encoder is replaced by the learnt parameters of the target model and the decoder is a logistic function. \paragraph{\textbf{\textsc{Slaps}\xspace} \cite{fatemi2021slaps}} Since our attack model is built on top of the graph structure learning framework of \textsc{Slaps}\xspace, we also perform an experiment using vanilla \textsc{Slaps}\xspace. Given node features and labels, the goal of \textsc{Slaps}\xspace is to reconstruct the graph that works best for the node classification task. \subsubsection{Target GNN model.} We employ a 2-layer graph convolution network (GCN) \cite{kipf2017semi} as our target GNN model, trained for 200 epochs with a learning rate of 0.001. \subsubsection{Evaluation metrics} Following existing works \cite{zhanggraphmi, he2021stealing}, we use the area under the receiver operating characteristic curve (AUC) and the average precision (AP) to evaluate our attacks. For all experiments (including baselines), we randomly sample an equal number of pairs of connected and unconnected nodes from the original graph and the predicted graph, and we measure both the AUC and the AP on these randomly selected node pairs. All our experiments were conducted for $10$ different instantiations using the PyTorch Geometric library \cite{fey2019fast} on an 11GB GeForce GTX 1080 Ti GPU, and we report the mean values across all runs.
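A minimal sketch of this evaluation protocol is given below, assuming the true adjacency and the attacker's edge scores are dense matrices.
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate_attack(adj_true, scores, num_pairs, seed=0):
    """Sample an equal number of connected and unconnected node pairs
    and evaluate the attacker's edge scores with AUC and AP."""
    rng = np.random.default_rng(seed)
    iu, ju = np.triu_indices(adj_true.shape[0], k=1)  # each pair once
    labels = adj_true[iu, ju] > 0
    pos = rng.choice(np.flatnonzero(labels), num_pairs, replace=False)
    neg = rng.choice(np.flatnonzero(~labels), num_pairs, replace=False)
    idx = np.concatenate([pos, neg])
    y, s = labels[idx].astype(int), scores[iu[idx], ju[idx]]
    return roc_auc_score(y, s), average_precision_score(y, s)
\end{verbatim}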
\subsubsection{Datasets} We use three commonly used datasets chosen for their varying graph properties, such as feature dimensions and structural properties. The task on all datasets is node classification. We present the dataset statistics in Table \ref{tab:data-stat}. \paragraph{\textbf{\textsc{Cora}\xspace}} The \textsc{Cora}\xspace dataset~\cite{sen2008collective} is a citation dataset where each research article is a node and there is an edge between two articles if one cites the other. Each node has a label indicating the article category. The features of each node are represented by a 0/1-valued word vector indicating the presence or absence of words from the abstract of the article. \paragraph{\textbf{\textsc{CoraML}\xspace}} In the \textsc{CoraML}\xspace dataset~\cite{sen2008collective}, each node is a research article and its abstract is available as raw text. In contrast to the above dataset, the raw text of the abstract is transformed into a dense feature representation. We preprocess the text by removing stop words, web-page links, and special characters. Then, we generate a Word2Vec~\cite{mikolov2013efficient} embedding of each word. Finally, we generate the feature vector by averaging the embeddings of all words in the abstract. \paragraph{\textbf{\textsc{Bitcoin}\xspace}} The \textsc{Bitcoin}\xspace-Alpha dataset \cite{kumar2016edge} is a signed network of trading accounts. Each account is represented as a node, and a weighted edge between two accounts represents the trust between them. The maximum weight is +10, indicating total trust, and the lowest is -10, indicating total distrust. Each node is assigned a label indicating whether the account is trustworthy or untrustworthy. The feature vector of each node is based on the ratings given by other users, such as the average positive or negative rating. We follow the procedure in~\cite{vu2020pgm} for generating the feature vectors. \begin{table} \caption{Dataset statistics. $|V|$ and $|E|$ denote the number of nodes and edges, respectively; $\mathcal{C}$, $\mathbf{X}_d$, and \textbf{deg} denote the number of classes, the size of the feature dimension, and the average degree of the corresponding graph dataset.} \label{tab:data-stat} \begin{tabular}{lccc} \toprule & \textbf{\textsc{Cora}\xspace} & \textbf{\textsc{CoraML}\xspace} & \textbf{\textsc{Bitcoin}\xspace} \\\midrule $|V|$ & 2708 & 2995 & 3783 \\ $|E|$ & 5429 & 4113 & 14124 \\ $\mathbf{X}_d$ & 1433 & 300 & 8 \\ $\mathcal{C}$ & 7 & 7 & 2 \\ \textbf{deg} & 3.9 & 2.75 & 7.5 \\\bottomrule \end{tabular} \end{table} \subsection{Result Analysis} \begin{table*}[h!!] \caption{Attack performance and baselines. The best performing attack(s) on each explanation method is(are) highlighted in bold, and the second best attack(s) is(are) underlined.} \centering \begin{tabular}{clp{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}} \toprule \multicolumn{1}{l}{$Exp$} & \multirow{1}{*}{\textbf{Method}}& \multicolumn{2}{c}{\textbf{\textsc{Cora}\xspace}} & \multicolumn{2}{c}{\textbf{\textsc{CoraML}\xspace}} & \multicolumn{2}{c}{\textbf{\textsc{Bitcoin}\xspace}}\\ \cmidrule(r){3-4} \cmidrule(r){5-6} \cmidrule(r){7-8} & & AUC & AP & AUC & AP & AUC & AP \\ \midrule \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Baseline}}} & \textsc{FeatureSim}\xspace & 0.796 & 0.822 & 0.736 & 0.776 & 0.536 & 0.476\\ & \textsc{Lsa}\xspace \cite{he2021stealing} & 0.794 & 0.829 & 0.728 & 0.759 & 0.530 & 0.500\\ & \textsc{GraphMI}\xspace \cite{zhanggraphmi} & 0.859 & 0.834 & 0.815 & 0.810 & 0.583 & 0.515\\ & \textsc{Slaps}\xspace \cite{fatemi2021slaps} & 0.716 & 0.757 & 0.682 & 0.738 & 0.590 & 0.557\\ \midrule \parbox[t]{2mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textsc{Grad}\xspace}}} & \ensuremath{\textsc{GSEF-concat}}\xspace & 0.694 & 0.733 & 0.685 & 0.749 & 0.447 & 0.476\\ & \ensuremath{\textsc{GSEF-mult}}\xspace & 0.692 & 0.749 & 0.683 & 0.762 & 0.266 & 0.381\\ & \ensuremath{\textsc{GSEF}}\xspace & \underline{0.947} & \underline{0.955} & \bf{0.902} & \underline{0.832} & \bf{0.700} & \bf{0.715}\\ & \ensuremath{\textsc{GSE}}\xspace & 0.870 & 0.893 & 0.689 & 0.761 & 0.254 & 0.376\\ & \ensuremath{\textsc{ExplainSim}}\xspace & \bf{0.983} & \bf{0.980} & 0.900 & \bf{0.904} & \underline{0.694} & \underline{0.656}\\ \midrule \parbox[t]{2mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textsc{Grad-I}\xspace}}} & \ensuremath{\textsc{GSEF-concat}}\xspace & 0.700 & 0.755 & 0.703 & 0.753 & 0.522 & 0.526\\ & \ensuremath{\textsc{GSEF-mult}}\xspace & 0.665 & 0.702 & 0.710 & 0.743 & 0.228 & 0.363\\ & \ensuremath{\textsc{GSEF}}\xspace & \underline{0.914} & \underline{0.917} & \underline{0.802} & \textbf{0.842} & \textbf{0.710} & \textbf{0.725}\\ & \ensuremath{\textsc{GSE}}\xspace & 0.872 & 0.900 & 0.725 & 0.790 & 0.256 & 0.377\\ & \ensuremath{\textsc{ExplainSim}}\xspace & \bf 0.983 & \bf 0.978 & \bf 0.908 & \bf 0.911 & 0.690 & 0.651\\ \midrule \parbox[t]{2mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textsc{Zorro}\xspace}}} & \ensuremath{\textsc{GSEF-concat}}\xspace & 0.823 & 0.860 & 0.735 & 0.786 & \underline{0.575} & 0.529\\ & \ensuremath{\textsc{GSEF-mult}}\xspace & 0.723 & 0.756 & 0.681 & 0.697 & 0.399 & 0.449\\ & \ensuremath{\textsc{GSEF}}\xspace & \bf 0.884 & \bf 0.880 & 0.776 & 0.820 & 0.537 & \underline{0.527}\\
& \ensuremath{\textsc{GSE}}\xspace & 0.779 & 0.810 & 0.722 & 0.777 & \bf 0.596 & \bf 0.561\\ & \ensuremath{\textsc{ExplainSim}}\xspace & \underline{0.871} & \underline{0.873} & \bf 0.806 & \bf 0.829 & 0.427 & 0.485\\ \midrule \parbox[t]{2mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textsc{Zorro-S}\xspace}}} & \ensuremath{\textsc{GSEF-concat}}\xspace & 0.881 & 0.913 & 0.751 & 0.804 & \bf 0.602 & \bf 0.586\\ & \ensuremath{\textsc{GSEF-mult}}\xspace & 0.752 & 0.784 & 0.710 & 0.727 & 0.536 & 0.524\\ & \ensuremath{\textsc{GSEF}}\xspace & \bf{0.921} & \bf{0.918} & \bf{0.797} & \bf{0.801} & \underline{0.595} & \underline{0.572}\\ & \ensuremath{\textsc{GSE}}\xspace & 0.891 & 0.916 & 0.774 & 0.818 & 0.560 & 0.561\\ & \ensuremath{\textsc{ExplainSim}}\xspace & \underline{0.912} & \underline{0.932} & 0.732 & 0.804 & 0.480 & 0.489\\ \midrule \parbox[t]{2mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textsc{GLime}\xspace}}} & \ensuremath{\textsc{GSEF-concat}}\xspace & \underline{0.634} & \underline{0.685} & \underline{0.627} & \underline{0.664} & \underline{0.536} & \underline{0.538}\\ & \ensuremath{\textsc{GSEF-mult}}\xspace & 0.517 & 0.529 & 0.563 & 0.570 & 0.238 & 0.362\\ & \ensuremath{\textsc{GSEF}}\xspace & \bf{0.769} & \bf{0.800} & \bf{0.681} & \bf{0.740} & \bf{0.548} & \bf{0.542}\\ & \ensuremath{\textsc{GSE}}\xspace & 0.559 & 0.588 & 0.503 & 0.565 & 0.262 & 0.371\\ & \ensuremath{\textsc{ExplainSim}}\xspace & 0.513 & 0.535 & 0.522 & 0.515 & 0.502 & 0.498\\ \midrule \parbox[t]{2mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{\textsc{GNNExp}\xspace}}} & \ensuremath{\textsc{GSEF-concat}}\xspace & 0.600 & 0.639 & 0.649 & 0.677 & 0.418 & 0.459\\ & \ensuremath{\textsc{GSEF-mult}}\xspace & \underline{0.703} & \underline{0.750} & \underline{0.661} & \underline{0.720} & 0.391 & 0.451\\ & \ensuremath{\textsc{GSEF}}\xspace & \bf{0.790} & \bf{0.808} & \bf{0.700} & \bf{0.732} & \bf{0.605} & \bf{0.573}\\ & \ensuremath{\textsc{GSE}}\xspace & 0.514 & 0.540 & 0.461 & 0.494 & 0.322 & 0.406\\ & \ensuremath{\textsc{ExplainSim}}\xspace & 0.517 & 0.513 & 0.498 & 0.499 & \underline{0.539} & \underline{0.523}\\ \bottomrule \end{tabular} \quad \begin{tabular}{p{7.5cm}} \begin{tcolorbox} \underline{\textsc{\Large Summary of attack comparisons}} \begin{itemize}[leftmargin=*] \setlength\itemsep{1em} \large \item The amount of information about the graph structure contained in the explanation alone can be quantified using the \ensuremath{\textsc{ExplainSim}}\xspace attack. \item The explanation-only (\ensuremath{\textsc{ExplainSim}}\xspace) and explanation augmentation (\ensuremath{\textsc{GSEF}}\xspace) attacks for all explanation methods other than \textsc{GLime}\xspace and \textsc{GNNExp}\xspace outperform all baseline methods. \item Among the baseline approaches, the white-box attack \textsc{GraphMI}\xspace performs best, followed by \textsc{FeatureSim}\xspace. \item The relatively good performance of \textsc{FeatureSim}\xspace points to a high correlation of node features with node connections in all datasets other than \textsc{Bitcoin}\xspace. \item The information leakage for \textsc{Bitcoin}\xspace is limited by its small feature size. \item For \textsc{GLime}\xspace and \textsc{GNNExp}\xspace, we observe that the explanation contains little information about the graph structure. The reason behind this is revealed by the fidelity-sparsity analysis of the obtained explanations.
\end{itemize} \end{tcolorbox} \end{tabular} \label{tab:main-result} \end{table*} \subsubsection{\textbf{\large Analyzing the information leakage by explanations (RQ~\ref{rq:attack-performance})}} \label{sec:attack-performance} The detailed results of the different attacks are provided in Table \ref{tab:main-result}. Our results show that the explanation-only (\ensuremath{\textsc{ExplainSim}}\xspace) and explanation augmentation (\ensuremath{\textsc{GSEF}}\xspace) attacks for all explanation methods other than \textsc{GLime}\xspace and \textsc{GNNExp}\xspace outperform all baseline methods and by far reveal the most information about the private graph structure. We attribute the superior performance of \ensuremath{\textsc{GSEF}}\xspace to the multi-task learning paradigm, which aims to reconstruct both the features and the explanations. Our results also support our assumption that a graph structure that is good for predicting the node labels is also good for predicting the node features and explanations. \begin{figure*}[h!] \centering \subfigure[\textsc{Cora}\xspace]{\label{fig:cora-attrsim-exp-only}\includegraphics[width=0.3\linewidth]{attribexponly_cora.pdf}} \subfigure[\textsc{CoraML}\xspace]{\label{fig:coraml-attrsim-exp-only}\includegraphics[width=0.3\linewidth]{attribexponly_coraml.pdf}} \subfigure[\textsc{Bitcoin}\xspace]{\label{fig:bitcoin-attrsim-exp-only}\includegraphics[width=0.3\linewidth]{attribexponly_bitcoin.pdf}} \caption{Performance of the explanation-only attack (\ensuremath{\textsc{ExplainSim}}\xspace) on the different datasets. The adopted baseline is \textsc{FeatureSim}\xspace, which computes the pairwise similarities using the true node features.} \label{fig:attrsim-ex} \end{figure*} In the following, we provide an in-depth analysis of the results. \begin{itemize}[leftmargin=*] \item[$\circ$] \textbf{Baseline methods.} Among the baseline methods, \textsc{GraphMI}\xspace is the best performing attack. This is not very surprising, as \textsc{GraphMI}\xspace has white-box access to the target GNN in addition to access to the node features and labels. In contrast, the \textsc{FeatureSim}\xspace attack, which only uses node features, shows competitive performance. This highlights the fact that the features alone are very informative of the graph structure of the studied datasets (except \textsc{Bitcoin}\xspace, on which all baseline attacks almost fail, with AUC scores close to 0.5). \item[$\circ$] \mpara{Comparison of the privacy leakage via explanations with that of features.} Figure \ref{fig:attrsim-ex} compares the performance of the two similarity-based attacks using explanations (\ensuremath{\textsc{ExplainSim}}\xspace) and features (\textsc{FeatureSim}\xspace), respectively. Note that these attacks do not use any information other than explanations and features, respectively, which allows us to compare their information content. We note that, except for \textsc{GLime}\xspace and \textsc{GNNExp}\xspace, \ensuremath{\textsc{ExplainSim}}\xspace outperforms \textsc{FeatureSim}\xspace on all datasets other than \textsc{Bitcoin}\xspace. Moreover, on \textsc{Bitcoin}\xspace, both of these attacks fail (with AUC close to 0.5) except for the gradient-based explanation methods. \item[$\circ$] \mpara{Explanation augmentation with features and labels.} Next, we compare the explanation augmentation attack \ensuremath{\textsc{GSEF}}\xspace with the vanilla graph structure learning approach \textsc{Slaps}\xspace, which only uses node features and labels.
\ensuremath{\textsc{GSEF}}\xspace outperforms \textsc{Slaps}\xspace (Figure \ref{fig:ex-honey}), which points to the added utility of using explanations to reconstruct the graph structure. \item[$\circ$] \mpara{Explanation augmentation attack variants.} In Figure \ref{fig:ex-honey-ex}, we compare the performance of \ensuremath{\textsc{GSE}}\xspace, which has no access to the true features, with \textsc{Slaps}\xspace, which utilizes the true features. We observe that on most datasets \ensuremath{\textsc{GSE}}\xspace significantly outperforms \textsc{Slaps}\xspace. This emphasizes that explanations encode the feature information, albeit weighted by importance. Comparing the performance of \ensuremath{\textsc{GSEF-concat}}\xspace and \ensuremath{\textsc{GSEF-mult}}\xspace with \ensuremath{\textsc{GSEF}}\xspace on all datasets shows that independently extracting an adjacency matrix from the features and the explanations and then combining the two adjacency matrices is better than combining the features and explanations at the input stage. \item[$\circ$] \mpara{Attack on \textsc{Bitcoin}\xspace.} We observe that all baseline attacks fail on \textsc{Bitcoin}\xspace. We attribute this to the very small feature size (=8) and the small number of labels (=2). The attacks are not able to exploit the little information exposed by such a small set of features/labels. Explanation-based attacks, especially in the case of gradient-based explanations, are more successful than the baselines, but less so than on the other datasets. The reason can again be attributed to the small feature size: even though the explanations provide additional information, the number of revealed bits in the explanation equals the feature dimension, which is quite small compared to the other datasets. \end{itemize} \subsubsection {\textbf{\large Differences in privacy leakage (RQ \ref{rq:exp-vulnerability}).}} All explanation methods leak significant information via the reconstruction attacks, except for \textsc{GLime}\xspace and \textsc{GNNExp}\xspace. We observe that for \textsc{GLime}\xspace and \textsc{GNNExp}\xspace, the explanation-based attacks do not perform better than the baselines, which do not utilize any explanation. Moreover, gradient-based methods are most vulnerable to privacy leakage. To understand the reasons behind these observations, we investigate the explanation quality. We measure the goodness of an explanation by its ability to approximate the model's behavior, which is also referred to as \textit{faithfulness}. As ground truth for explanations is not available, we use the RDT-Fidelity proposed by \cite{funke2021zorro} to measure faithfulness. The results are shown in Table \ref{tab:rdt-fidelity-sparsity}. We further note that, by definition, the complete input is faithful to the model. Therefore, in addition, we measure the sparsity of the explanation. A meaningful explanation should be sparse and should only contain a small subset of features most predictive of the model decision. We use the entropy-based sparsity definition from \cite{funke2021zorro}, as it is applicable to both soft and hard explanation masks. The results are shown in Table \ref{tab:rdt-fidelity-sparsity}. We analyze the trade-offs between privacy and explanation goodness in the following. \begin{itemize} \item \textbf{First}, we observe that \textsc{GNNExp}\xspace has the lowest sparsity (the higher the entropy, the more uniform the explanation mask distribution, i.e., the higher the explanation density). In other words, almost all features are marked as equally important.
Hence, it is not surprising that it shows high fidelity. This is also the main reason why \ensuremath{\textsc{ExplainSim}}\xspace fails: there is no distinguishing power contained in the explanations. \item \textbf{Second}, we observe that the gradient-based explanations (\textsc{Grad}\xspace and \textsc{Grad-I}\xspace) contain the most information about the graph structure even though they have low fidelity, i.e., they do not reflect the model's decision process. It appears that these two methods provide the most similar explanations for connected nodes. This is the worst case, in which the explanations are not useful yet leak the maximum private information about the graph structure. \item \textbf{Third}, \textsc{GLime}\xspace has the highest sparsity and the lowest fidelity. \textsc{GLime}\xspace runs the HSIC-Lasso feature selection method over a local dataset created using the node and its neighborhood. HSIC-Lasso is known to output a very small set of most predictive features \cite{dong2020featureselection} when used for global feature selection. But for the current setting of instance-wise feature selection, i.e., finding the most predictive features for the decision on an instance/node, \textsc{GLime}\xspace's explanation turns out to be too sparse: it is neither faithful to the model nor contains any predictive information about the neighborhood. \item \textbf{Finally}, the explanations of \textsc{Zorro}\xspace and \textsc{Zorro-S}\xspace show the highest fidelity and intermediate sparsity, pointing to their high quality. The \ensuremath{\textsc{GSEF}}\xspace attack also obtains high AUC scores for two datasets, pointing to the expected increase in privacy risk with an increase in explanation utility. \end{itemize} \begin{table}[h!!] \caption{RDT-Fidelity and sparsity (entropy) of different explanation methods. For fidelity, the higher the better; for sparsity, the lower the better.} \label{tab:rdt-fidelity-sparsity} \begin{tabular}{lp{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}} \toprule $Exp$ & \multicolumn{2}{c}{\textsc{Cora}\xspace} & \multicolumn{2}{c}{\textsc{CoraML}\xspace} & \multicolumn{2}{c}{\textsc{Bitcoin}\xspace} \\ \cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7} & \small{Fidelity} & \small{Sparsity} & \small{Fidelity} & \small{Sparsity} & \small{Fidelity} & \small{Sparsity} \\ \midrule \textbf{\textsc{Grad}\xspace} & 0.23 & 3.99 & 0.22 & 5.24 & 0.83 & 0.64 \\ \textbf{\textsc{Grad-I}\xspace} & 0.19 & 3.99 & 0.20 & 5.30 & 0.82 & 0.64 \\ \textbf{\textsc{Zorro}\xspace} & 0.89 & 1.83 & 0.96 & 3.33 & 0.99 & 0.37 \\ \textbf{\textsc{Zorro-S}\xspace} & 0.98 & 2.49 & 0.84 & 2.75 & 0.95 & 0.96 \\ \textbf{\textsc{GLime}\xspace} & 0.19 & 0.88 & 0.20 & 0.98 & 0.82 & 0.13 \\ \textbf{\textsc{GNNExp}\xspace} & 0.74 & 7.27 & 0.55 & 5.70 & 0.90 & 2.05 \\ \bottomrule \end{tabular} \end{table} \begin{figure*} \centering \subfigure[\textsc{Cora}\xspace]{\label{fig:cora-exp-only}\includegraphics[width=0.28\linewidth]{explanation-only_cora.pdf}} \subfigure[\textsc{CoraML}\xspace]{\label{fig:coraml-exp-only}\includegraphics[width=0.28\linewidth]{explanation-only_coraml.pdf}} \subfigure[\textsc{Bitcoin}\xspace]{\label{fig:bitcoin-exp-only}\includegraphics[width=0.28\linewidth]{explanation-only_bitcoin.pdf}} \caption{Average AUC and AP of the \ensuremath{\textsc{GSE}}\xspace attack on the different datasets.
The adopted baseline is \textsc{Slaps}\xspace, which uses the true node features.} \label{fig:ex-honey-ex} \end{figure*} \begin{figure*} \centering \subfigure[\textsc{Cora}\xspace]{\label{fig:cora-gsef-slaps}\includegraphics[width=0.28\linewidth]{gsef_slaps_cora.pdf}} \subfigure[\textsc{CoraML}\xspace]{\label{fig:coraml-gsef-slaps}\includegraphics[width=0.28\linewidth]{gsef_slaps_coraml.pdf}} \subfigure[\textsc{Bitcoin}\xspace]{\label{fig:bitcoin-gsef-slaps}\includegraphics[width=0.28\linewidth]{gsef_slaps_bitcoin.pdf}} \caption{Average AUC and AP of the \ensuremath{\textsc{GSEF}}\xspace attack on the different datasets. The adopted baseline is \textsc{Slaps}\xspace, which uses the true node features.} \label{fig:ex-honey} \end{figure*} \subsubsection{\textbf{\large Adversary's advantage in terms of the trained GNN model (RQ~\ref{rq:reconstruct-edges})}} \label{sec:reconstruct-edges} Here, we formalize a quantitative advantage measure that captures the privacy risk posed by the different attacks. The attacker is at an advantage if she can train a well-performing model (on a downstream task) using the reconstructed graph. As the attack models based on graph structure learning implicitly train a GNN on the reconstructed graph, we quantify the attacker's advantage by the performance on the downstream task of node classification. \begin{hypothesis} If the explanations and the reconstructed graph perform well on a downstream task with high confidence, then the reconstructed adjacency is a valid representation of the graph structure. Hence, the attacker has an advantage quantified by Equation \ref{eq:attacker_advantage}. \end{hypothesis} We define the attacker's advantage as \begin{gather} \label{eq:attacker_advantage} Advantage = \mathcal{R}(f(\mathcal{E}_X;Adj_{rec};\theta_W), y), \end{gather} where $f$ is a 2-layer GCN model parameterized by $\theta_W$, $\mathcal{E}_X$ is the explanation matrix, $Adj_{rec}$ is the graph reconstructed by the attacker, $y$ is the ground-truth label, and $\mathcal{R}$ is an advantage measure that compares the predictions of $f$ with the ground-truth labels. We use accuracy as the choice of $\mathcal{R}$. We compare the results with those of \textsc{Slaps}\xspace, which uses the actual features and ground-truth labels for learning the graph structure, and with the original performance (denoted by $Max$), in which the model is trained with the true features, labels, and graph structure. We analyze the attacker's advantage corresponding to the four attacks \ensuremath{\textsc{GSEF-concat}}\xspace, \ensuremath{\textsc{GSEF-mult}}\xspace, \ensuremath{\textsc{GSEF}}\xspace, and \ensuremath{\textsc{GSE}}\xspace. The intuition is that if the attacker's advantage is not better than \textsc{Slaps}\xspace, then the best advantage the attacker can achieve is the same as having the actual features and performing graph structure learning. Also, if the attacker's advantage is greater than or equal to $Max$, then the attacker has an advantage equivalent to possessing the actual features and graph. An example use case of the attacker's advantage is shown in Figure \ref{fig:advantage_intuition}: if one model is trained with, say, Jane's full data (true features and graph) and another is trained only with her explanations (no graph or true features), both models will make the same prediction about Jane.
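A minimal sketch of how the advantage is computed is shown below; the already-trained GCN, the evaluation split, and the use of a dense reconstructed adjacency are assumptions.
\begin{verbatim}
import torch

@torch.no_grad()
def attacker_advantage(model, expl, adj_rec, y, test_mask):
    """Advantage: accuracy of a GCN f(E_X; Adj_rec; theta_W), trained
    on the reconstructed graph, against the ground-truth labels."""
    pred = model(expl, adj_rec).argmax(dim=-1)
    return (pred[test_mask] == y[test_mask]).float().mean().item()
\end{verbatim}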
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{advantage_intuition.pdf} \caption{An example use case of the attacker's advantage.} \label{fig:advantage_intuition} \end{figure} The detailed results for the attacker's advantage are plotted in Figure~\ref{fig:advantage}. We observe that on \textsc{Cora}\xspace, the attacker obtains the highest advantage for the \textsc{Grad}\xspace, \textsc{Grad-I}\xspace, \textsc{Zorro}\xspace, and \textsc{Zorro-S}\xspace explanations. On \textsc{CoraML}\xspace, the highest advantage is obtained with the \textsc{Grad}\xspace and \textsc{Zorro-S}\xspace explanations. The attacker's advantage is usually positively correlated with the success rate of the corresponding attacks for both \textsc{Cora}\xspace and \textsc{CoraML}\xspace. On \textsc{Bitcoin}\xspace, the attacker's advantage is usually high for all explanation methods. This is surprising, as the attack success rate is relatively lower than on the other datasets. This might imply that the reconstructed graph has the same semantics as the true graph, if not the exact structure. \begin{figure*} \centering \subfigure[\textsc{Cora}\xspace]{\label{fig:cora-reconstructed-acc}\includegraphics[width=1\linewidth]{reconstructed_accuracycora.pdf}} \subfigure[\textsc{CoraML}\xspace]{\label{fig:coraml-reconstructed-acc}\includegraphics[width=1\linewidth]{reconstructed_accuracycoraml.pdf}} \subfigure[\textsc{Bitcoin}\xspace]{\label{fig:bitcoin-reconstructed-acc}\includegraphics[width=1\linewidth]{reconstructed_accuracybitcoin.pdf}} \caption{Accuracy of the reconstructed graph on a downstream node classification task for all models on the different datasets. The blue line is the original accuracy using the true features and edges, while the yellow line is the \textsc{Slaps}\xspace accuracy.} \label{fig:advantage} \end{figure*} \subsubsection{\textbf{\large Lack of ground-truth labels (RQ~\ref{rq:targetmodelaccess})}} \label{sec:targetmodelaccess} We relax the assumption that the attacker has access to ground-truth labels. Instead, she has black-box access to the target model. This is plausible given the popularity of machine learning as a service (MLaaS), where a user can input a query and obtain the predictions as output. Therefore, the ``ground-truth'' label is the one obtained from the target model. As representative explanations, we show the performance of \textsc{Grad}\xspace and \textsc{Zorro}\xspace on all datasets. As shown in Table \ref{tab:groundtruth-vs-blackbox-attack}, for the \textsc{Grad}\xspace explanation, we observe a 3\% gain in attack performance for \ensuremath{\textsc{GSEF-concat}}\xspace when the attacker has access to the target model on the \textsc{Cora}\xspace and \textsc{CoraML}\xspace datasets. On the \textsc{Bitcoin}\xspace dataset, we observe a decrease of 2\% in AUC and an increase of 6\% in AP. The corresponding performance on \textsc{Zorro}\xspace follows the same trend, with no significant change on \textsc{CoraML}\xspace and a 2\% decrease in performance on \textsc{Cora}\xspace. The performance of the \ensuremath{\textsc{GSEF-mult}}\xspace and \ensuremath{\textsc{GSEF}}\xspace attacks decreases across all datasets, with \textsc{Bitcoin}\xspace having the worst performance reduction of up to 29\% on \textsc{Grad}\xspace. However, on \textsc{Zorro}\xspace, there is a 2\% gain on \textsc{Cora}\xspace, a 4\% decrease on \textsc{CoraML}\xspace, and up to a 36\% decrease on \textsc{Bitcoin}\xspace.
We observe a performance drop for \ensuremath{\textsc{GSEF}}\xspace and \ensuremath{\textsc{GSE}}\xspace on all datasets across both explanations, except for \ensuremath{\textsc{GSE}}\xspace on \textsc{Grad}\xspace, which gains up to 5\% in performance on the \textsc{Cora}\xspace and \textsc{CoraML}\xspace datasets. It is important to note that the \textsc{Bitcoin}\xspace dataset has the lowest performance across all attacks even when the true ground-truth labels are used. Therefore, the large disparity in performance when the labels are generated from the trained black-box model is not surprising. \textbf{Summary.} For the \textsc{Cora}\xspace and \textsc{CoraML}\xspace datasets, for the \ensuremath{\textsc{GSEF-concat}}\xspace and \ensuremath{\textsc{GSE}}\xspace attacks, black-box access to the target model performs better than access to the ground-truth labels. For the \ensuremath{\textsc{GSEF-mult}}\xspace and \ensuremath{\textsc{GSEF}}\xspace attacks, it is better to have access to the ground-truth labels to achieve the best attack success rate. For the \textsc{Bitcoin}\xspace dataset, the ground-truth labels perform better than black-box access to the target model for all attacks. \begin{table*}[h!!] \caption{Performance comparison when relaxing the assumption that ground-truth labels (\textbf{$Y$}) are available. Here, the attacker has black-box access to the target model ($\mathcal{M}$). We perform the experiment with the \textsc{Grad}\xspace and \textsc{Zorro}\xspace explanation methods on all datasets. $\Delta$ is the percentage difference; a negative value implies that the ground-truth labels are preferred over black-box access.} \centering \begin{tabular}{p{2cm}lp{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering} |p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}} \toprule \multicolumn{1}{l}{\textbf{Dataset}} & \multirow{1}{*}{\textbf{Method}}& \multicolumn{2}{c}{\textbf{Y$_\textsc{Grad}\xspace$}} & \multicolumn{2}{c}{\textbf{$\mathcal{M_\textsc{Grad}\xspace}$}} & \multicolumn{2}{c}{\textbf{$\Delta_\textsc{Grad}\xspace$}}& \multicolumn{2}{c}{\textbf{Y$_\textsc{Zorro}\xspace$}} & \multicolumn{2}{c}{\textbf{$\mathcal{M_\textsc{Zorro}\xspace}$}} & \multicolumn{2}{c}{\textbf{$\Delta_\textsc{Zorro}\xspace$}}\\ \cmidrule(r){3-4} \cmidrule(r){5-6} \cmidrule(r){7-8} \cmidrule(r){9-10} \cmidrule(r){11-12} \cmidrule(r){13-14} & & AUC & AP & AUC & AP & AUC & AP & AUC & AP & AUC & AP & AUC & AP \\ \midrule \parbox[t]{2mm}{\multirow{5}{*}{{\textsc{Cora}\xspace }}} & \ensuremath{\textsc{GSEF-concat}}\xspace & 0.694 & 0.733 & 0.717 & 0.744 & 3.3 & 1.5 & 0.823 & 0.860 & 0.779 & 0.810 & -2.8 & -5.7\\ & \ensuremath{\textsc{GSEF-mult}}\xspace & 0.692 & 0.749 & 0.671 & 0.705 & -3.1 & -5.8 & 0.723 & 0.756 & 0.740 & 0.772 & 2.4 & 2.0\\ & \ensuremath{\textsc{GSEF}}\xspace & 0.947 & 0.955 & 0.926 & 0.935 & -2.2 & -2.1 & 0.884 & 0.880 & 0.871 & 0.881 & -1.5 & 0.1\\ & \ensuremath{\textsc{GSE}}\xspace & 0.870 & 0.893 & 0.891 & 0.923 & 2.4 & 3.3 & 0.779 & 0.810 & 0.814 & 0.849 & 4.5 & 4.9 \\ \midrule \parbox[t]{2mm}{\multirow{5}{*}{\shortstack[l]{\textsc{CoraML}\xspace }}} & \ensuremath{\textsc{GSEF-concat}}\xspace & 0.685 & 0.749 & 0.707 & 0.780 & 3.2 & 4.1 & 0.735 & 0.786 & 0.738 & 0.792 & 0.4 & 0.8\\ & \ensuremath{\textsc{GSEF-mult}}\xspace & 0.683 & 0.762 & 0.666 & 0.723 & -2.5 & -5.1 & 0.681 & 0.697 & 0.653 & 0.692 & -4.1 & -0.7\\ & \ensuremath{\textsc{GSEF}}\xspace & 0.902 & 0.832 & 0.808 & 0.852 & -10.4 & 2.4 & 0.776 & 0.820 & 0.751 & 0.796 & -3.2 & -2.9\\
& \ensuremath{\textsc{GSE}}\xspace & 0.689 & 0.761 & 0.725 & 0.788 & 5.3 & 3.6 & 0.722 & 0.777 & 0.713 & 0.759 & -1.2 & -2.3\\ \midrule \parbox[t]{2mm}{\multirow{5}{*}{\shortstack[l]{\textsc{Bitcoin}\xspace }}} & \ensuremath{\textsc{GSEF-concat}}\xspace & 0.447 & 0.476 & 0.435 & 0.504 & -2.7 & 6.0 & 0.575 & 0.529 & 0.523 & 0.517 & -9.0 & -2.3\\ & \ensuremath{\textsc{GSEF-mult}}\xspace & 0.266 & 0.381 & 0.208 & 0.365 & -21.7 & -4.1 & 0.399 & 0.449 & 0.255 & 0.369 & -36.1 & -17.8 \\ & \ensuremath{\textsc{GSEF}}\xspace & 0.700 & 0.715 & 0.497 & 0.540 & -29.1 & -24.5 & 0.537 & 0.527 & 0.312 & 0.412 & -41.8 & -21.7\\ & \ensuremath{\textsc{GSE}}\xspace & 0.254 & 0.376 & 0.200 & 0.352 & -21.3 & -6.4 & 0.596 & 0.561 & 0.491 & 0.503 & -17.6 & -10.3\\ \bottomrule \end{tabular} \label{tab:groundtruth-vs-blackbox-attack} \end{table*} \section{Defense} \mpara{Explanation perturbation.} To limit the information leakage from the explanation, we perturb each explanation bit using a randomized response mechanism \cite{kairouz2016discrete, wang2016using}. Specifically, for a 0/1 feature (explanation) mask as in \textsc{Zorro}\xspace, we flip each bit of the explanation with a probability that depends on the privacy budget $\epsilon$ as follows: \begin{equation}\label{eq:randomized-response} Pr(\mathbf{\mathcal{E}}_{x_i}^\prime =1) = \begin{cases} \frac{e^\epsilon}{e^\epsilon + 1}, \quad \text{if}~\mathbf{\mathcal{E}}_{x_i}=1, \\ \frac{1}{e^\epsilon + 1}, \quad \text{if}~\mathbf{\mathcal{E}}_{x_i}=0, \end{cases} \end{equation} where $\mathbf{\mathcal{E}}_{x_i}$ and $\mathbf{\mathcal{E}}_{x_i}^\prime$ are the true and perturbed $i^{th}$ bits of explanation $\mathbf{\mathcal{E}}_{x}$, respectively. Note that our defense mechanism satisfies $d\epsilon$-local differential privacy. \begin{lemma} \label{lemma:randres} For an explanation with $d$ dimensions, the explanation perturbation defense mechanism in Equation \ref{eq:randomized-response} satisfies $d\epsilon$-local differential privacy. \end{lemma} \begin{proof} For explanations corresponding to two graph datasets $D$ and $D^\prime$ differing in a single edge, the ratio of the probabilities of obtaining a certain explanation can be bounded as follows: $$\frac{Pr[\mathcal{E}_X(D)= S]}{Pr[\mathcal{E}_X(D^\prime) = S]} =\prod_{i=1}^d \frac{Pr[\mathbf{\mathcal{E}}_{x_i}(D)= S_i]}{Pr[\mathbf{\mathcal{E}}_{x_i}(D^\prime) = S_i]} \le \prod_{i=1}^d {{e^{\epsilon}\over e^{\epsilon}+1} \over {1\over e^{\epsilon} +1}} =e^{d\epsilon}.$$ \end{proof}
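A minimal sketch of this perturbation is given below; since, under Equation \ref{eq:randomized-response}, each bit is retained with the same probability regardless of its value, the mechanism reduces to keeping each bit with probability $e^\epsilon/(e^\epsilon+1)$ and flipping it otherwise.
\begin{verbatim}
import numpy as np

def perturb_explanation(mask, eps, seed=None):
    """Randomized response on a 0/1 explanation mask: keep each bit
    with probability e^eps/(e^eps+1), flip it otherwise."""
    rng = np.random.default_rng(seed)
    keep_prob = np.exp(eps) / (np.exp(eps) + 1.0)
    keep = rng.random(mask.shape) < keep_prob
    return np.where(keep, mask, 1 - mask)
\end{verbatim}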
\mpara{Defense evaluation.} We evaluate our defense mechanism on the \ensuremath{\textsc{ExplainSim}}\xspace attack, as it best quantifies the information leakage due to the explanations alone; all other attacks assume the availability of additional information such as features and labels. We use two datasets: \textsc{Cora}\xspace and \textsc{CoraML}\xspace. As evaluation metrics, we use the AUC score and AP to compute the attack success rate after the defense. In addition, we measure the utility of the perturbed explanation in terms of fidelity, sparsity, and the percentage of 1-bits that are retained from the original explanation (intersection). \mpara{Defense results.} As shown in Figure \ref{fig:zorro-defense-cora}, the explanation perturbation based on the randomized response mechanism clearly defends against the attack. For instance, at a very high privacy level of $\epsilon = 0.0001$, which gives $d\epsilon= 0.14$, the attack performance drastically drops to 0.56 in AUC and 0.59 in AP, a decrease of about 36\% relative to the non-private released explanation. As expected, the attack performance decreases significantly with an increase in the amount of noise (as $\epsilon$ decreases). In Table \ref{tab:defense-fidelity-sparsity-cora}, we analyze the change in explanation utility due to our perturbation mechanism. We observe that on \textsc{Cora}\xspace, at the lowest privacy-loss level, there is a drop of $5.61\%$ in fidelity, while the attack is already reduced to a random guess. The entropy of the mask distribution increases; in other words, the explanation sparsity decreases. For \textsc{Zorro}\xspace, this implies that more bits are set to 1 than in the true explanation mask. Even though this decreases explanation utility to some extent, we point out that $74.68\%$ of the true explanation is still retained. Moreover, the sparsity is still lower than that achieved by \textsc{GNNExp}\xspace explanations even without any perturbation. While quantitatively the change in explanation sparsity seems acceptable, more application-dependent qualitative studies would be required to evaluate the change in utility of explanations. Nevertheless, we provide a promising first defense for future development and possible improvements. We obtain similar results for \textsc{CoraML}\xspace, which are provided in Appendix \ref{sec:defense-coraml}. \mpara{Defense variant for soft explanation masks.} Note that Equation \ref{eq:randomized-response} is only applicable to explanations that return binary values. For explanations with continuous values, we can adapt the defense as follows: keep the original value $\mathbf{\mathcal{E}_x}_i$ when the flipped coin lands heads; when it lands tails, replace $\mathbf{\mathcal{E}_x}_i$ with ${\mathcal{E}_x}^\prime_i$, where ${\mathcal{E}_x}^\prime_i$ is a random number drawn from the standard normal distribution ($\mathbf{\mathcal{E}_x}^\prime_i \sim \mathcal{N}(0,\,1)$).
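A minimal sketch of this variant is given below; the coin bias is assumed to be the same $e^\epsilon/(e^\epsilon+1)$ keep-probability as in the binary mechanism.
\begin{verbatim}
import numpy as np

def perturb_soft_explanation(expl, eps, seed=None):
    """Randomized-response variant for continuous masks: keep each
    entry on heads; on tails, replace it with a draw from N(0, 1)."""
    rng = np.random.default_rng(seed)
    keep_prob = np.exp(eps) / (np.exp(eps) + 1.0)  # assumed coin bias
    keep = rng.random(expl.shape) < keep_prob
    return np.where(keep, expl, rng.standard_normal(expl.shape))
\end{verbatim}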
\begin{table}[] \caption{Fidelity, sparsity, and the percentage of 1-bits in the true explanation that are retained in the perturbed explanation (intersection) after the defense, for different $\epsilon$ on the \textsc{Cora}\xspace dataset with \textsc{Zorro}\xspace explanations. $\infty$ implies no privacy.} \label{tab:defense-fidelity-sparsity-cora} \begin{tabular}{lccc} \toprule \textbf{$\epsilon$} & \textbf{Fidelity} & \textbf{Sparsity} & \textbf{Intersection}\\ \midrule \textbf{0.0001} & 0.84 & 5.91 & 74.68 \\ \textbf{0.001} & 0.84 & 5.91 & 74.70 \\ \textbf{0.01} & 0.84 & 5.89 & 75.03 \\ \textbf{0.1} & 0.84 & 5.80 & 75.10 \\ \textbf{0.2} & 0.83 & 5.71 & 75.60 \\ \textbf{0.4} & 0.82 & 5.49 & 76.45 \\ \textbf{0.6} & 0.81 & 5.25 & 77.16 \\ \textbf{0.8} & 0.81 & 5.00 & 78.66 \\ \textbf{1} & 0.81 & 4.73 & 80.10 \\ $\infty$ & 0.89 & 1.83 & 100\\ \bottomrule \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{zorro_defense_rr_cora.pdf} \caption{Privacy budget and the corresponding attack performance of \ensuremath{\textsc{ExplainSim}}\xspace for \textsc{Zorro}\xspace explanations on the \textsc{Cora}\xspace dataset. $\infty$ implies that no perturbation is performed.} \label{fig:zorro-defense-cora} \end{figure} \section{Related Works} \paragraph{\textbf{Private graph extraction attacks}} Given black-box access to a GNN model trained on a target dataset and the adversary's background knowledge, \citet{he2021stealing} proposed link stealing attacks to infer whether there is a link between a given pair of nodes in the target dataset. Their attacks are specific to the adversary's background knowledge, which ranges from simply exploiting node feature similarities to a shadow-model-based attack. \citet{wu2021linkteller} proposed an edge re-identification attack for vertically partitioned graph learning. Their attack setting differs from ours and is applicable in scenarios where the high-dimensional features and high-order adjacency information are usually heterogeneous and held by different data holders. \textsc{GraphMI}\xspace \cite{zhanggraphmi} aims at reconstructing the adjacency matrix of the target graph given white-box access to the trained model, the node features, and the labels. \paragraph{\textbf{Other inference attacks and defenses on GNNs}} Several other attacks, such as membership inference \cite{olatunji2021membership, duddu2020quantifying} and model extraction attacks \cite{wu2020model}, have been proposed to quantify privacy leakage in GNNs. In a membership inference attack, the goal of the attacker is to infer whether a node was part of the data used to train the GNN model via black-box access to the trained model. In a model extraction attack, the attacker aims to steal the trained model's parameters and hyperparameters to duplicate or mimic the functionality of the target model via the predictions returned from querying the model \cite{wu2020model}. Recently, several defenses against these attacks have been proposed, which are mainly based on differential privacy. \citet{olatunji2021releasing} proposed a method for releasing GNN models by combining a knowledge-distillation framework with two noise mechanisms, random subsampling and noisy labeling. Their centralized approach trains a student model using a public graph and private labels obtained from teacher models trained exclusively for each query node (personalized teacher models). Their method, by design, defends against membership inference and model extraction attacks, since only the student model (which has limited and perturbed information) is released. \citet{sajadmanesh2020locally} proposed a locally differentially private GNN model in a distributed setting where the nodes and labels are private and the graph structure is known to the central server. Their approach perturbs both the node features and the labels to ensure a differential privacy guarantee. However, none of these attacks and defenses are applicable to explanations. \paragraph{\textbf{Membership inference attacks and explanations}} On Euclidean data such as images, \citet{shokri2021privacy} analyzed the privacy risks of feature-based model explanations using membership inference attacks, which quantify the extent to which model predictions and their explanations leak information about the presence of a data point in the training set of a model. We emphasize that the goal of \citet{shokri2021privacy} differs from ours in that we focus on reconstructing the entire graph structure from feature-based explanations.
Also, their investigation is limited to non-graph data and the corresponding target and explanation models. \paragraph{\textbf{Adversarial attacks and GNNs.}} Another line of research focuses on the vulnerability of GNNs to adversarial attacks \cite{wu2019adversarial, zhang2020backdoor, zugner2018adversarial, zugner2019adversarial, dai2018adversarial, wang2019attacking}. The goal of the attacker is to fool the GNN model into making a wrong prediction by manipulating the node features or the structural information of nodes. A recent work \cite{fan2021jointly} used explanation methods such as GNNExplainer to detect adversarial perturbations on graphs, thus acting as a tool for inspecting adversarial attacks on GNN models. The authors further proposed an adversarial attack framework (GEAttack) that exploits the vulnerabilities of explanation methods and of the GNN model, allowing the attacker to simultaneously fool the GNN model and misguide the inspection by the explanation method. Our work differs significantly in that, first, we aim to reconstruct the graph from the explanations and, second, we quantify the privacy leakage of explanations of GNN models. \section{Conclusion} We initiate the first investigation into the privacy risks of releasing post-hoc explanations of graph neural networks. Concretely, we quantify the information leakage of explanations via our proposed \emph{five} graph reconstruction attacks, in which the goal of the attacker is to reconstruct the private graph structure used to train a GNN model. Our results show that even when the explanations alone are available, without any additional auxiliary information, the attacker can reconstruct the graph structure with an AUC score of more than $90\%$. Our explanation-based attacks outperform all baseline methods, pointing to the additional privacy risk of releasing explanations. We propose a perturbation-based defense mechanism that reduces the attack to a random guess. The defense leads to a slight decrease in fidelity; at the lowest privacy loss, the perturbed explanation still contains around 75\% of the true explanation. While quantitatively the change in explanation sparsity seems acceptable, more application-dependent qualitative studies would be required to evaluate the change in utility of explanations. We emphasize that we strongly believe in the transparency of graph machine learning and acknowledge the need for explaining trained models. At the same time, our work points out the associated privacy risks, which cannot be ignored. We believe that our work will encourage future research on finding solutions to balance the complex trade-off between privacy and transparency. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} In the last few decades, new experimental techniques \cite{tech-dowling, golter, accanto17rapid, perreault17quantum, rossi} have been developed which enable the study of particles and phenomena at length scales where quantum effects play a dominant role. In these studies, the quantum systems considered interact with their ambient environment with varying degrees of isolation. Most systems exhibit significant variation in their behaviour as a result of weak or strong interaction with the environment. This has resulted in a renewed focus on the study of quantum systems which are open to the environment \cite{breuer02}. When the interaction between the system and the environment is weak, one can microscopically derive its evolution \cite{davies74, breuer02} through a series of approximations (Born-Markov and secular) in the form of the celebrated Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) master equation \cite{GKS, lindblad76, breuer02}, \begin{align} \label{GKSL} \frac{d\rho}{dt}=-i[H_S,\rho] &+\sum_{k=1}^n\gamma_k\Big(A_k\rho A_k^{\dagger} -\frac{1}{2}\{A_k^{\dagger}A_k,\rho\}\Big), \end{align} where the $A_k$'s are the jump operators, $H_S$ is the system Hamiltonian, and the rates satisfy $\gamma_k\ge 0$ for all $k$. Such classes of master equations are called semi-group master equations, as the evolution maps resulting from them form a semi-group. However, for a vast majority of dynamics, the interaction is not weak and the approximations used to derive master equations in GKSL form are not valid. Consequently, such general closed-form master equations do not exist when the interaction is not weak. Moreover, when we have time-dependent and positive rates, i.e., $\gamma_k(t)\ge 0$ for $k=1,\ldots,n$ in Eq. (\ref{GKSL}), we call the corresponding evolutions completely positive divisible or CP-divisible \cite{rivas14quantum,b-review,NM4,NM3,PhysRevAChakraborty}. CP-divisible evolutions are usually called Markovian. The theory of open quantum systems provides a solid foundation for the emergent field of quantum thermodynamics \cite{kosloff13quantum, binder18book, vinjanampathy16quantum}. The dynamical framework of quantum mechanics allows one to address finite-time thermodynamic processes. Specifically, in the weak coupling limit, the microscopically derived Markovian master equation (also known as the Davies construction \cite{davies74}) in the GKSL form gives a consistent and universal description of the basic thermodynamic laws \cite{alicki79the, kosloff13quantum, alicki18introduction}. Originally, the Davies construction was engineered for time-independent system Hamiltonians. Later on, it was generalized to time-dependent scenarios \cite{kosloffannual, Davies1978, Albash_2012, kamleitner, yamaguchi}. Beyond the weak coupling approximation, where non-Markovianity inevitably enters the picture, it is not straightforward to establish a consistent framework of thermodynamics, largely due to the unavailability of a unique closed-form master equation, as mentioned before. Consequently, a number of approaches \cite{Esposito_2010, kato_strong, stratsberg-collision,llobet-strong,prb-strong, strasberg-collisional-strong, rivas-strong, Bergmann2021, Miller2018, Nazir2018, Kato2018} have been proposed to deal with strong interaction without compromising thermodynamic consistency.
One of the major applications of quantum thermodynamics is the study of quantum thermal machines \cite{sun-engine, kosloffannual, GELBWASERKLIMOVSKY2015329, alicki18introduction}, which are typically restricted to the weak coupling scenario. New experimental techniques \cite{strong-exp1, strong-exp2, strong-exp3, strong-exp4, strong-exp5, strong-exp6, strong-exp7} to access the strongly coupled regime, together with recent theoretical progress, have now opened the avenue to consider the performance of thermal machines beyond the weak coupling scenario \cite{Gelbwaser-Klimovsky-strong, Strasberg-strong, kosloff-strong, eisert-strong, Mu_2017,secular-3, nazir-strong, nazir-strong2, archak-strong,McConnell_2022}. In general, it has been observed that strong coupling reduces the performance of a thermal machine \cite{Strasberg-strong, nazir-strong, nazir-strong2, segal-strong, hasegawa-strong}. On the other hand, several studies \cite{arpan, Zhang_2014, abiuso-non, serra-non} have shown that non-Markovian effects can actually be beneficial for enhancing the performance, even in the regime of weak coupling \cite{Strasberg-strong, arpan}. Although there are some objections \cite{thomas-non, Wiedmann_2020, shirai-non} to this non-Markovian boost, based on the neglect of the coupling and decoupling costs, a genuine non-Markovian advantage has recently been reported \cite{polish-non} that takes these shortcomings into account. Evidently, it is an intriguing task to investigate the interplay between strong interaction and non-Markovianity \cite{segal-connection} with respect to thermodynamic tasks, and it still remains a largely unexplored area. With this goal, here we consider a model of a quantum Otto cycle, where the working medium qubit is connected to another single qubit (working as a bath) with arbitrary interaction strength. Following Ref. \cite{Prathik_Cherian_2019}, we devise a two-qubit unitary evolution such that the exact reduced dynamics of the working medium is governed by a semi-group master equation, i.e. one in GKSL form with constant coefficients, representing pumping and damping of a single qubit system. There are several advantages to choosing this model. Firstly, we go beyond the weak coupling approximation and yet obtain the exact dynamics in GKSL form. Secondly, by tweaking the interaction Hamiltonian, we can make the dynamics non-Markovian. This gives us a way to study strong coupling and non-Markovianity at the same time. Finally, we have control over the thermalization process taking place in contact with a finite bath. We work out analytical expressions for the efficiency (coefficient of performance) and power (cooling power) of the Otto engine (refrigerator), employing the thermodynamic framework suited to strong coupling. We find that the transition from the Markovian to the non-Markovian scenario gives better performance even in the regime of strong interaction. This paper is organized as follows. In Sec. \ref{chapter-2}, we give a short introduction to the Otto cycle in the conventional weak coupling approximation. In Sec. \ref{forma}, we discuss the strong coupling formalism used in this paper. Next we describe our model of qubit dynamics in Sec. \ref{dyn-des}. The implementation of the Otto cycle is described in Sec. \ref{otto-cyc-des}. In Sec. \ref{markov-nonmarkov}, we discuss the thermodynamic implications of Markovian and non-Markovian dynamics. Finally, in Sec. \ref{conclu}, we conclude.
\section{Weakly coupled Otto cycle} \label{chapter-2} We present a brief discussion of the conventional Otto cycle, where the working medium (WM) with Hamiltonian $H_{\rm S}$ is weakly connected to two thermal baths, one at a time, with temperatures $T_h$ and $T_c$ ($T_h>T_c$) respectively. The setup is described by the total Hamiltonian, \begin{equation} H(t)=H_{\rm S}(t)+H_{ B_h}+H_{ B_c}+H_{\rm SB}(t), \end{equation} where $H_{\rm B_h}$, $H_{\rm B_c}$ are the self Hamiltonians of the hot and cold bath, respectively, and $H_{\rm SB}(t)=H_{\rm SB}^h(t)+H_{\rm SB}^c(t)$ denotes the interaction Hamiltonian. The cycle consists of four strokes, as described below. A schematic diagram of the cycle is given in Fig. \ref{cycle}(a). \begin{figure*} \centering \includegraphics[width=130mm]{Fig_otto.pdf} \caption{Schematic of the Otto cycle for (a) weak and (b) strong coupling.} \label{cycle} \end{figure*} For simplicity we take $\hbar=k_{\rm B}=1$. We consider that the time dependence of the WM Hamiltonian is controlled through an external parameter $\omega(t)$ and write the system Hamiltonian as $H_{\rm S}(\omega(t))$. We also denote by $H_{S,\alpha}$ the WM Hamiltonian at each point of the schematic of Fig. \ref{cycle}(a), with $\alpha=\{A,B,C,D\}$. \\\\ \textit{\textbf{First stroke}}: Initially (point A in the schematic diagram of Fig. \ref{cycle}(a)), the WM is prepared in the state $\rho_S^A$ with Hamiltonian $H_{S,A}=H_{\rm S}(\omega=\omega_A)\equiv H_{\rm S}(\omega_A)$, in equilibrium with the cold bath. The baths are assumed to always be in equilibrium states with respect to their respective Hamiltonians and temperatures. Therefore, the initial joint system-bath state can be written as \begin{equation} \rho_{\rm tot}^A=\rho_S^A\otimes\rho_B^c=\frac{e^{-\beta_c H_{\rm S}(\omega_A)}}{{\rm Tr}[e^{-\beta_c H_{\rm S}(\omega_A)}]}\otimes \frac{e^{-\beta_c H_{\rm B_c}}}{{\rm Tr}[e^{-\beta_c H_{\rm B_c}}]}. \end{equation} The first stroke is unitary: the WM is decoupled from the baths, and the WM Hamiltonian $H_{\rm S}(\omega(t))$ is changed from $H_{S,A}=H_{\rm S}(\omega_A)$ at point $A$ to $H_{S,B}=H_{\rm S}(\omega_B)$ at point $B$ in a time duration $\tau_{u1}$. The final state of the WM after the first unitary stroke is \begin{equation} \rho_S^B=U_1\rho_S^A U^{\dagger}_1, \end{equation} where $U_1=\mathcal{T}\exp\left[{-i\int_A^B H_{\rm S}(\omega(t))dt}\right]$ is the corresponding unitary operator. \\\\ \textit{\textbf{Second stroke}}: In this stroke, from point $B$ to $C$, the WM is connected to the hot bath at inverse temperature $\beta_h$ for a time interval $\tau_h$, while the WM Hamiltonian is kept fixed at $H_{\rm S}(\omega_B)$ throughout the process. The evolution of the WM is governed by the Markovian master equation in GKSL form, derived microscopically under the weak coupling, Born-Markov, and secular approximations \cite{breuer02}, \begin{equation} \dot{\rho}_S(t)=-i[H_{\rm S}(\omega_B),\rho_S(t)]+\mathcal{D}_h[\rho_S(t)], \end{equation} where $\mathcal{D}_h$ is the dissipative superoperator. After a sufficiently long time $\tau_h\gg\tau_B$ (the bath correlation time), the WM is equilibrated with the bath in the state $\rho_S^C= {e^{-\beta_h H_{\rm S}(\omega_B)}}/{{\rm Tr}[e^{-\beta_h H_{\rm S}(\omega_B)}]}$.
Due to the weak coupling approximation, the joint system-bath state always has the product form $\rho_{\rm tot}(t)=\rho_S(t)\otimes \rho_B^i$ ($i=h,c$).\\\\ \textit{\textbf{Third stroke}}: Similar to the first stroke, this is the second unitary stroke, where the Hamiltonian is changed back from $H_{S,C}=H_{\rm S}(\omega_B)$ to $H_{S,D}=H_{\rm S}(\omega_A)$ in a time interval $\tau_{u2}$. The final state of the working medium after the second unitary stroke is \begin{equation} \rho_S^D=U_2\rho_S^C U^{\dagger}_2, \end{equation} where $U_2=\mathcal{T}\exp\left[{-i\int_C^D H_{\rm S}(\omega(t))dt}\right]$ is the corresponding unitary operator.\\\\ \textit{\textbf{Fourth stroke}}: This is the second thermalization stroke, where the WM is connected to the cold bath at inverse temperature $\beta_c$, keeping the Hamiltonian fixed at $H_{\rm S}(\omega_A)$. If the stroke duration $\tau_c$ is sufficiently long ($\tau_c\gg\tau_B$), the WM returns to the initial thermal state $\rho_S^A$, completing the cycle. The total cycle time is given by $\tau=\tau_{u1}+\tau_h+\tau_{u2}+\tau_c$. In the regime of weak interaction, heat and work are well defined and given, respectively, by \cite{alicki79the,vinjanampathy16quantum} \begin{equation} \mathcal{Q}=\int{\rm Tr}[\dot{\rho}_S(t)H_{\rm S}(t)]dt,~~\mathcal{W}=\int{\rm Tr}[{\rho}_S(t)\dot{H}_{\rm S}(t)]dt. \end{equation} Now, we consider a specific model where the Hamiltonian of the WM is given by \begin{equation} H_{\rm S}(t)=\omega(t)\sigma_z. \end{equation} As mentioned before, $\omega(t)$ is the external parameter, which is changed from $\omega_A=\omega_c$ to $\omega_B=\omega_h$ in the first unitary stroke and back to $\omega_c$ in the final unitary stroke. The two thermal baths are always in their usual equilibrium states with inverse temperatures $\beta_h$ and $\beta_c$ ($\beta_h<\beta_c$), respectively. We calculate the heat and work exchanged in each stroke for this model. Note that in the unitary strokes no heat is exchanged, and in the thermalization strokes no work is done, as the Hamiltonian is kept fixed. Defining the average energy of the WM at the $\alpha$-th ($\alpha=A,B,C,D$) point as $E_{\alpha}={\rm Tr}[\rho_S^\alpha H_{S,\alpha}]$, we get the following expressions for the work and heat in the different strokes: \begin{align} \label{expr-stroke1} &\mathcal{W}^0_{AB} = \braket{E_B}-\braket{E_A} =\,(\omega_c-\omega_h)\tanh \beta_c \omega_c\\ \label{expr-stroke2} &\mathcal{Q}^0_h = \braket{E_C}-\braket{E_B}= \,\omega_h(\tanh \beta_c \omega_c- \tanh \beta_h \omega_h) \\ \label{expr-stroke3} &\mathcal{W}^0_{CD} = \braket{E_D}-\braket{E_C}= \,(\omega_h-\omega_c)\tanh \beta_h \omega_h\\ \label{expr-stroke4} &\mathcal{Q}^0_c = \braket{E_A}-\braket{E_D} =\,\omega_c(\tanh \beta_h \omega_h-\tanh \beta_c \omega_c) \end{align} It is evident from the above expressions that $\mathcal{W}^0_{AB}+\mathcal{W}^0_{CD}=-(\mathcal{Q}_h^0+\mathcal{Q}_c^0)$, which is nothing but energy conservation, i.e. the first law of thermodynamics. When $1<\omega_h/\omega_c<\beta_c/\beta_h$, the cycle works as a heat engine, and we get the following expression for the power $\mathcal{P}_0$, \begin{align} &\mathcal{P}_0=-\frac{\mathcal{W}}{\tau}=-\frac{\mathcal{W}^0_{AB}+\mathcal{W}^0_{CD}}{\tau}=\frac{\mathcal{Q}^0_h+\mathcal{Q}^0_c}{\tau}, \end{align} and the efficiency $\eta_0$, \begin{align} \eta_0 =-\frac{\mathcal{W}}{\mathcal{Q}^0_h}=-\frac{\mathcal{W}^0_{AB}+\mathcal{W}^0_{CD}}{\mathcal{Q}^0_h}=1-\frac{\omega_c}{\omega_h}. \end{align}
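These closed-form expressions are easily cross-checked numerically. The sketch below (sample parameters assumed, chosen in the engine regime $1<\omega_h/\omega_c<\beta_c/\beta_h$) evaluates the stroke energies $E_\alpha={\rm Tr}[\rho_S^\alpha H_{S,\alpha}]$ directly from the Gibbs populations of $H_{\rm S}=\omega\sigma_z$ and verifies Eqs. (\ref{expr-stroke1})--(\ref{expr-stroke4}), the first law, and the Carnot bound:
\begin{verbatim}
import numpy as np

# Sketch with assumed sample parameters (engine regime:
# 1 < w_h/w_c < beta_c/beta_h). Stroke energies E = Tr[rho H] are
# evaluated from Gibbs populations of H_S = omega*sigma_z and compared
# with the closed-form tanh expressions.
beta_h, beta_c, w_c, w_h = 0.2, 1.0, 1.0, 3.0

def E(beta, w_state, w_ham):
    # thermal populations of the levels +/- w_state, then Tr[rho * w_ham*sigma_z]
    p = np.exp(-beta * w_state * np.array([1.0, -1.0]))
    p /= p.sum()
    return w_ham * (p[0] - p[1])

W_AB = E(beta_c, w_c, w_h) - E(beta_c, w_c, w_c)
Q_h  = E(beta_h, w_h, w_h) - E(beta_c, w_c, w_h)
W_CD = E(beta_h, w_h, w_c) - E(beta_h, w_h, w_h)
Q_c  = E(beta_c, w_c, w_c) - E(beta_h, w_h, w_c)

assert np.isclose(W_AB, (w_c - w_h) * np.tanh(beta_c * w_c))
assert np.isclose(W_AB + W_CD, -(Q_h + Q_c))        # first law over the cycle
eta = -(W_AB + W_CD) / Q_h
assert np.isclose(eta, 1 - w_c / w_h) and eta <= 1 - beta_h / beta_c
print(eta)                                          # 0.666..., below Carnot 0.8
\end{verbatim}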
Similarly, in the refrigerator regime, that is, when $\omega_h/\omega_c>\beta_c/\beta_h$, the cooling rate $\kappa_0$ is given by \begin{align} \kappa_0=\frac{\mathcal{Q}^0_c}{\tau}, \end{align} and the coefficient of performance $K_0$ by \begin{align} {K}_0=\frac{\mathcal{Q}^0_c}{\mathcal{W}^0_{AB}+\mathcal{W}^0_{CD}}=\frac{\omega_c}{\omega_h-\omega_c}. \end{align} Here, we have used the sign convention that an energy flow (heat, work) is positive (negative) if it enters (leaves) the WM. Hence, a heat engine (refrigerator) is characterized by $\mathcal{Q}_h>0$ ($<0$), $\mathcal{Q}_c<0$ ($>0$), and $\mathcal{W}<0$ ($>0$). The second law of thermodynamics gives us the bound on the efficiency (coefficient of performance) of the engine (refrigerator). It states that the total entropy production is never negative. Now, for each separate thermalization stroke one has \cite{callen1985thermodynamics,vinjanampathy16quantum} \begin{equation} \Delta S_{\rm tot}=\Delta S-\beta \Delta Q\geq 0, \end{equation} where $\Delta S$ is the change in the von Neumann entropy \cite{nielson} of the system in the thermodynamic process and $\Delta Q$ is the heat entering the system from a bath at inverse temperature $\beta$. In our model of the Otto cycle, one can check that $\rho_S^B=\rho_S^A$ and $\rho_S^C=\rho_S^D$. Hence, the changes in the von Neumann entropy of the system in the two thermalization strokes cancel each other, and the second law takes the form \begin{equation} \beta_h \mathcal{Q}_h^0+\beta_c \mathcal{Q}_c^0\leq 0, \end{equation} since, of course, $\Delta S_{\rm tot}$ remains zero in the unitary processes. The validity of the above inequality can easily be seen from the expressions in Eq. (\ref{expr-stroke1}) to Eq. (\ref{expr-stroke4}), employing the fact that $\tanh x$ is a monotonically increasing function of $x$. This implies that \begin{equation} \eta_0=1+\frac{\mathcal{Q}_c^0}{\mathcal{Q}_h^0}=1-\frac{\omega_c}{\omega_h}\leq 1-\frac{\beta_h}{\beta_c}. \end{equation} Similarly, in the refrigerator regime, $K_0\leq \frac{\beta_h}{\beta_c-\beta_h}$. This limit is famously known as the Carnot limit. \section{Strongly coupled Otto Cycle} In the strongly coupled model of the Otto cycle, the description of the strokes is the same as in the weakly coupled one. The difference lies only in the thermodynamic framework. In this case, a thermalization stroke makes the system-bath joint state a correlated one, and the marginal bath state is no longer an equilibrium state. Consequently, the thermodynamic analysis changes, and we have to adopt different definitions of the thermodynamic observables, suited to the strongly coupled scenario. Here we follow the framework of Refs. \cite{Esposito_2010, kato_strong, rivas-strong} to define the thermodynamic quantities. \subsection{Formalism} \label{forma} Let us start by giving a short account of this framework. We first write the total Hamiltonian of a system-bath setup as \begin{equation} H_{\rm tot}(t)= H_{\rm S}(t)+H_{\rm B}+H_{\rm SB}(t). \end{equation} The change in the average energy of the joint system-bath state is identified as the work performed, \begin{equation} \label{work} dW(t)=dE_{\rm SB}(t)={\rm Tr}[dH_{\rm tot}(t)\rho_{SB}(t)+H_{\rm tot}(t)d\rho_{SB}(t)], \end{equation} where $E_{\rm SB}(t)={\rm Tr}[\rho_{SB}(t)H_{\rm tot}(t)]$ is the total energy of the joint state $\rho_{SB}(t)$ of the system and bath.
Heat is defined as the energy flowing out of the reservoir, \begin{align} \label{sdef} \nonumber dQ(t)&=-d{\rm Tr}_B[H_{\rm B}\rho_B(t)]=-{\rm Tr}_B[H_{\rm B} d\rho_B(t)]\\ &={\rm Tr}[(H_{\rm S}(t)+H_{\rm SB}(t))d\rho_{SB}(t)], \end{align} where $\rho_B(t)={\rm Tr}_S[\rho_{SB}(t)]$. The internal energy of the system is defined as \begin{equation} \label{int-energy} E_{\rm S}(t)={\rm Tr}_{SB}[(H_S(t)+H_{SB}(t))\rho_{SB}(t)]. \end{equation} Now, it is easy to see that \begin{equation} dE_{\rm S}(t)=dW(t)+dQ(t), \end{equation} which is nothing but the first law of thermodynamics; indeed, since $H_{\rm B}$ is time independent, $dW+dQ={\rm Tr}[\dot{H}_{\rm tot}(t)\rho_{SB}(t)]dt+{\rm Tr}[(H_{\rm S}(t)+H_{\rm SB}(t))d\rho_{SB}(t)]=dE_{\rm S}(t)$. In the weak coupling limit ($H_{\rm SB}\approx 0$), these definitions boil down to the conventional definitions stated in the previous section. Let us assume an initial joint state of the form \begin{equation} \rho_{SB}(0)=\rho_S(0)\otimes \rho_B^\beta, \end{equation} where $\rho_B^\beta$ is the thermal state of the bath at inverse temperature $\beta$. The joint system-bath state at time $t=\tau$ is given by \begin{equation} \rho_{SB}(\tau)=U(\tau,0)\rho_{SB}(0)U^{\dagger}(\tau,0), \end{equation} where $U(\tau,0)$ is the unitary generated by the total Hamiltonian $H_{\rm tot}(t)$. As mentioned before, the entropy production is defined as $\Delta S_{\rm tot}=\Delta S-\beta \Delta Q$. Note that $\beta$ is the initial inverse temperature of the bath; at later times, the reduced state of the bath is in general not even a thermal state. It can be shown that \cite{Esposito_2010, rivas-strong} \begin{equation} \Delta S_{\rm tot}(t)=S(\rho_{SB}(t)\parallel \rho_S(t)\otimes \rho_B^\beta)\geq 0, \end{equation} where $S(\phi\parallel\psi)$ is the relative entropy between two quantum states $\phi$ and $\psi$. This shows the validity of the second law of thermodynamics in this formalism. Next we derive the master equation used to describe the dynamics in our model of the Otto cycle. \subsection{Dynamics with single qubit bath} \label{dyn-des} We consider a two-qubit total Hamiltonian, regarded as the total Hamiltonian of a system-bath setup, \begin{align} \label{ham-model} &H_{\rm{tot}}(t)= H_{\rm S}\otimes \openone + \openone \otimes H_{\rm B}+H_{\rm SB}(t) \nonumber\\ &= \omega (\sigma_z\otimes \openone + \openone \otimes\, \sigma_z)+H_{\rm SB}(t), \end{align} where the system Hamiltonian is $H_{\rm S}=\omega \sigma_z$, the bath Hamiltonian is $H_{\rm B}=\omega \sigma_z$, and the interaction Hamiltonian $H_{\rm SB}(t)$ reads \begin{equation} H_{\rm SB}(t) = \frac{f(t)}{2} ( \sigma_x \otimes \sigma_x + \sigma_y \otimes \sigma_y ) , \end{equation} with $f(t)$ a time dependent coupling strength. Its matrix representation reads \begin{equation} \label{int} H_{\rm SB}(t)=\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & f(t) & 0 \\ 0 & f(t) & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix}. \end{equation} Note that we have chosen $H_{\rm S}$ and $H_{\rm B}$ in Eq. (\ref{ham-model}) in such a way that $H_{\rm tot}(t)$ commutes with itself at different times. We have also chosen this special form of the Hamiltonian so that, for a specific choice of $f(t)$ (discussed later), the system evolution is described by a semi-group master equation \cite{breuer02, Prathik_Cherian_2019}. Moreover, we can smoothly transition to the non-Markovian regime by changing the form of $f(t)$. Now, we choose the initial states of the system and environment to be \begin{align} \rho_{\rm S}(0)&= \begin{bmatrix} p & x \\ x^* & 1-p \\ \end{bmatrix}, \label{inistate} ~\rho_{\rm B}(0)=\frac{1}{2} \begin{bmatrix} 1-g & 0 \\ 0 & 1+g \\ \end{bmatrix},
\end{align} where $0\le p,g \le 1$ and $x$ is a complex number with $|x|^2\le p(1-p)$. One can assign a temperature to the initial bath state with respect to the bath Hamiltonian $H_{\rm B}$, so as to write it as a thermal state. The initial joint system-bath state $\rho_{SB}(0)=\rho_{\rm S}(0)\otimes\rho_{\rm B}(0)$ evolves through the unitary \begin{equation} U(t,0)=\exp \Big[-i\int_0^tdt'\, H_{\rm tot}(t')\Big], \end{equation} where we have used the fact that $H_{\rm tot}(t)$ commutes with itself at different times, so no time ordering is required. The time evolved system state is $\rho_{\rm S}(t)={\rm Tr}_B \big[\rho_{ SB}(t)\big]$, where $ \rho_{ SB}(t)=U(t,0)\rho_{SB}(0)U^{\dagger}(t,0)$. The explicit form of $\rho_{S}(t)$ is \begin{align} \label{sys-evolved} \rho_{ S}(t)=\Lambda_t[\rho_S(0)] =\begin{bmatrix} p(t) & x e^{-2 i \omega t} \cos F(t) \\ x^* e^{2 i \omega t} \cos F(t) & 1-p(t) \end{bmatrix}, \end{align} where $$ p(t)= p\cos^2 F(t) + \frac{1-g}{2}\sin^2 F(t) , $$ $\Lambda_t$ is the dynamical map, and $F(t)=\int_0^tf(t')\,dt'$. The corresponding master equation, \begin{align} \frac{d\rho_S}{dt} &= \mathcal{L}_t[\rho_S], \label{defn-gen} \end{align} reads as follows (cf. Appendix \ref{seconda}): \begin{align}\label{gen-me} &\frac{d\rho_{\rm S}(t)}{dt}=-i\omega [\sigma_z,\rho_{\rm S}(t)] \nonumber \\ &+ \gamma_-(t)\Big(\sigma_-\rho_{\rm S}(t)\,\sigma_+-\frac{1}{2}\{\sigma_+\sigma_-,\rho_{\rm S}(t)\}\Big)\nonumber\\ &+ \gamma_+(t) \Big(\sigma_+\rho_{\rm S}(t)\,\sigma_- - \frac{1}{2}\{\sigma_-\sigma_+,\rho_{\rm S}(t)\}\Big) , \end{align} with \begin{equation} \gamma_\pm(t) = (1 \mp g) \gamma(t) , \end{equation} and \begin{equation} \gamma(t) = f(t)\tan F(t) . \label{gamma-def} \end{equation} It is, therefore, clear that the evolution is Markovian (CP-divisible) if \cite{rhp-non, rivas14quantum, cond-markovian} \begin{equation} \gamma(t) \geq 0 . \label{markovian-cond} \end{equation} Interestingly, one can show \cite{Prathik_Cherian_2019} that choosing \begin{equation} \label{optft} f(t) =\frac{ e^{- t/2 g}}{2 g \sqrt{1-e^{- t/g}}} \end{equation} leads to $\gamma(t) = \frac{1}{2g}$, and hence both rates \begin{align} \gamma_- &= \frac{1+g}{2g}, \nonumber \\ \gamma_+ &= \frac{1-g}{2g} , \nonumber \end{align} are time independent, leading to a GKSL Markovian master equation. In this case the asymptotic state of the system is a thermal state of the form \begin{equation} \label{asym-sys-semi} \rho_S(t\rightarrow \infty)=\frac{1}{2}\begin{bmatrix} {1-g} & 0 \\ 0 & {1+g}\\ \end{bmatrix}. \end{equation} Later we also discuss non-Markovian generalizations of the master equation in Eq. (\ref{gen-me}), obtained with other choices of $f(t)$.
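As a quick numerical check of the above (an illustrative sketch; the value of $g$, the initial population $p$, and the time grid are all assumed), one can verify that the choice in Eq. (\ref{optft}) indeed yields the constant rate $\gamma(t)=1/2g$, and that the population $p(t)$ of Eq. (\ref{sys-evolved}) relaxes to the thermal value $(1-g)/2$. Here we use the closed form $F(t)=\pi/2-\sin^{-1}e^{-t/2g}$, discussed in Sec. \ref{markov-nonmarkov}:
\begin{verbatim}
import numpy as np

# Sketch (assumed g, p0): verify that f(t) of Eq. (optft) gives a constant
# rate gamma(t) = f(t) tan F(t) = 1/(2g), and that the population p(t)
# of Eq. (sys-evolved) relaxes to the thermal value (1 - g)/2.
g, p0 = 0.8, 1.0
t = np.linspace(1e-3, 10.0, 2000)          # f(t) diverges at t = 0

f = np.exp(-t / (2*g)) / (2*g * np.sqrt(1 - np.exp(-t/g)))
F = np.pi/2 - np.arcsin(np.exp(-t / (2*g)))   # closed form of int_0^t f dt'
gamma = f * np.tan(F)

p = p0 * np.cos(F)**2 + 0.5 * (1 - g) * np.sin(F)**2
print(np.allclose(gamma, 1 / (2*g)))       # True: time-independent GKSL rate
print(abs(p[-1] - (1 - g)/2) < 1e-3)       # True: thermalization
\end{verbatim}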
\subsection{Implementation of Otto cycle} \label{otto-cyc-des} In this section we implement an Otto cycle where the WM is connected to two single qubit baths (hot and cold). The dynamics in the thermalization strokes is described by the formalism developed above. For clarity of notation, we append to all the relevant quantities of the single qubit bath, namely the Hamiltonians, $\omega$, $g$, $f(t)$ and $F(t)$, a suffix $h$ or $c$, depending on whether the quantity refers to the hot or the cold bath, respectively. The total Hamiltonian of the WM and the baths is \begin{equation} H(t)=H_{\rm S }(t)+H_{\rm B_h}+H_{\rm B_c}+H_{\rm SB}(t), \end{equation} where $H_{\rm S}(t)=\omega(t)\sigma_z$. The external parameter $\omega(t)$ is varied from $\omega_c$ to $\omega_h$ in the first unitary stroke and changed back to $\omega_c$ in the second unitary stroke. In accordance with Eq. (\ref{ham-model}), $H_{\rm B_h}$ and $H_{\rm B_c}$ are $\omega_h\sigma_z$ and $\omega_c\sigma_z$, respectively. The interaction Hamiltonian $H_{\rm SB}(t)=H_{\rm SB}^h(t)+H_{\rm SB}^c(t)$ is given by Eq. (\ref{int}), with $f(t)$ replaced by $f^h(t)$ or $f^c(t)$ for the contact with the hot and cold bath, respectively. The initial states of the hot and cold baths are \begin{equation} \label{baths} \rho_{\rm B_h}(0)= \frac{1}{2}\begin{bmatrix} {1-g_h} & 0 \\ 0 & {1+g_h} \\ \end{bmatrix}, ~\rho_{\rm B_c}(0)= \frac{1}{2}\begin{bmatrix} {1-g_c} & 0 \\ 0 & {1+g_c} \\ \end{bmatrix}. \end{equation} The initial temperatures of the baths can be determined by writing these states in the form of thermal states, \begin{equation} \label{b-st} \rho_{\rm B_j}(0)=\frac{e^{-\beta_j H_{\rm B_j}}}{Z_j},~~j=\{h,c\}, \end{equation} where $Z_j={\rm Tr}[e^{-\beta_j H_{\rm B_j}}]$, which gives us $g_h=\tanh \beta_h\omega_h$ and, similarly, $g_c=\tanh \beta_c\omega_c$. A schematic of the cycle is shown in Fig. \ref{cycle}(b). The WM is initially (point $A1$) prepared in the thermal state corresponding to the initial temperature of the cold bath, and the total WM-bath state is initially the product state \begin{equation} \rho_{\rm tot}^A=\frac{e^{-\beta_c H_{\rm S}(\omega_c)}}{{\rm Tr}[e^{-\beta_c H_{\rm S}(\omega_c)}]}\otimes \frac{e^{-\beta_c H_{\rm B_c}}}{{\rm Tr[e^{-\beta_c H_{\rm B_c}}]}}, \end{equation} where the initial state of the cold bath in Eq. (\ref{baths}) is written in the form of Eq. (\ref{b-st}). Below we describe the strokes of the Otto cycle. \\\\ \textbf{\textit{First stroke}}: In the first unitary stroke, the WM is disconnected from the baths and the external parameter $\omega(t)$ of the system Hamiltonian is varied from $\omega_c$ (point $A1$) to $\omega_h$ (point $B0$) in a time interval $\tau_{u1}$. The state does not change during this evolution and remains $\rho_S^{A1}=e^{-\beta_cH_{\rm S}(\omega_c)}/Z_c$, where $Z_c={\rm Tr[e^{-\beta_c H_{\rm S}(\omega_c)}]}$. No heat is exchanged in this process, whereas the work done is given by \begin{align} \mathcal{W}_{AB} &= \braket{E^{B0}_S}-\braket{E^{A1}_S}=(\omega_c-\omega_h)\tanh \beta_c \omega_c. \end{align} Here, $E_S^{\alpha}={\rm Tr}[\rho_S^{A1} H_{\rm S}(\omega_\alpha)]$, with $\alpha=\{h,c\}$.\\\\ \textbf{\textit{Connecting the hot bath}}: The WM is connected to the hot bath, as represented by points $B0$ to $B1$ in the schematic diagram (Fig. \ref{cycle}(b)). We assume that this coupling operation is instantaneous; hence, the states of the WM and the bath do not change during this operation. The work cost of the operation is therefore the average of the newly switched-on interaction term, \begin{align} &\mathcal{W}^{\rm con}_B = \text{Tr}\left[H^h_{\rm SB}(0)\left(e^{-\beta_c H_{\rm S}(\omega_c)}/Z_c\otimes\rho_{\rm B_h}(0)\right)\right] = 0, \end{align} where $\rho_{\rm B_h}(0)$ is given in Eq. (\ref{baths}), with $g_h=\tanh \beta_h\omega_h$, and $H_{\rm SB}^h(0)$ is given by Eq. (\ref{int}) with the coupling $f^h(0)$. The functional form of $f^j(t)$ for $j=\{h,c\}$ will be specified later for both the Markovian and the non-Markovian scenario. \\\\ \textbf{\textit{Second stroke}}: The second stroke is the thermalization stroke after the WM is connected to the hot bath.
Since the state of the bath does not change during the connection operation, at the start of the stroke it is given by $\rho_{\rm B_h}(0)$. We assume that the WM is kept in contact with the bath for a time interval $\tau_h$ ($B1$ to $C0$ in the schematic), keeping the system Hamiltonian constant at $H_{\rm S}(\omega_h)$. The work done in this process is zero, as calculated using Eq. (\ref{work}). Using the definition in Eq. (\ref{sdef}), the heat exchanged in this stroke is given by \begin{align} \nonumber \mathcal{Q}_h=\mathcal{Q}_{BC}&=\int_0^{\tau_h} dt \,\,\text{Tr}\Big[\big(\omega_h\sigma_z+H^h_{\rm SB}(t)\big)\frac{d}{dt}\rho_{\rm tot}(t)\Big] \\ & =\omega_h (\tanh \beta_c \omega_c - \tanh \beta_h \omega_h) \sin^2 F^h(\tau_h) \nonumber\\ & =\mathcal{Q}^0_h \sin^2 F^h(\tau_h). \end{align} Here, $F^h(\tau_h)=\int_0^{\tau_h} f^h(t) dt$, and $\mathcal{Q}_h^0$ is the heat exchanged in the weakly coupled Otto cycle (assuming the WM is thermalized at the end of the stroke), given in Eq. (\ref{expr-stroke2}). After the thermalization stroke, the total state of the WM-bath setup is $\rho_{\rm tot}^{C0}$, which is in general a correlated state. The reduced state of the WM, denoted by $\rho_S^{C0}$, is of the form of Eq. (\ref{sys-evolved}), with $x=0$ and $p$ the population of the WM before the start of the stroke.\\\\ \textbf{\textit{Disconnecting the hot bath}}: The work done to remove the bath is given by \begin{align} \mathcal{W}^{\rm discon}_C =- \text{Tr}\big[H^h_{\rm SB}(\tau_h)\,\,\rho_{\rm tot}^{C0}\big] = 0, \end{align} where we again assume that the process is instantaneous; it is denoted by points $C0$ to $C1$ in Fig. \ref{cycle}(b). \\\\ \textbf{\textit{Third stroke}}: This is the second and final unitary stroke, represented by points $C1$ to $D0$ in the schematic (Fig. \ref{cycle}(b)) and taking place in a time interval $\tau_{u2}$. The WM is disconnected from the baths, and the system Hamiltonian is changed back from $H_{\rm S}(\omega_h)$ to $H_{\rm S}(\omega_c)$. The reduced state of the WM at the start of this stroke is \begin{align} &\rho_S^{C1}= \begin{bmatrix} p^{C1} & 0 \\ 0 &1- p^{C1} \end{bmatrix}, \end{align} where $p^{C1}=\frac{e^{-\beta_h \omega_h}}{Z_h}+\frac{\cos^2 F^h(\tau_h)}{2}(g_h-g_c)$, with $Z_h={\rm Tr}[e^{-\beta_h H_{\rm S}(\omega_h)}]$. The reduced state of the WM does not change during this unitary evolution, since it commutes with $\sigma_z$. The work done in this stroke is thus \begin{align} \mathcal{W}_{CD}&=\braket{E_{D0}}-\braket{E_{C1}}=(\omega_c-\omega_h)\text{Tr}[\sigma_z\,\rho_S^{C1}] \nonumber\\ &=(\omega_h-\omega_c)\big[g_h-\cos^2F^h(\tau_h)(g_h-g_c)\big], \end{align} where $g_h=\tanh \beta_h\omega_h$ and $g_c=\tanh \beta_c\omega_c$, as mentioned before.\\\\ \textbf{\textit{Connecting the cold bath}}: As before, the process (from $D0$ to $D1$ in Fig. \ref{cycle}(b)) is instantaneous, and the work done in the process is \begin{align} \mathcal{W}^{\rm con}_D = \text{Tr}\left[H^c_{\rm SB}(0)\left(\rho_S^{C1}\otimes \rho_{\rm B_c}(0)\right) \right] = 0. \end{align} Here, $\rho_{\rm B_c}(0)$ is given in Eq. (\ref{baths}), with $g_c=\tanh \beta_c\omega_c$, and $H_{\rm SB}^c(0)$ is given by Eq. (\ref{int}) with the coupling $f^c(0)$.\\\\ \textbf{\textit{Fourth stroke}}: This is the second and final thermalization stroke, denoted by points $D1$ to $A0$ in the schematic (Fig. \ref{cycle}(b)). After connecting the WM to the cold bath, it is kept in contact for a time interval $\tau_c$. The work done is again zero for this stroke.
Using the definition in Eq. (\ref{sdef}), the heat exchange is calculated to be \begin{align} &\mathcal{Q}_c=\mathcal{Q}_{DA}=\int_0^{\tau_c} dt\,\text{Tr}\Big[\big(\omega_c\sigma_z+H^c_{\rm SB}(t)\big)\frac{d}{dt}\rho_{\rm tot}(t)\Big] \nonumber\\ & =\omega_c (\tanh \beta_h \omega_h - \tanh \beta_c \omega_c)\sin^2 F^h(\tau_h) \sin^2 F^c(\tau_c) \nonumber\\ & =\mathcal{Q}^0_c \sin^2 F^h(\tau_h) \sin^2 F^c(\tau_c), \end{align} where $\mathcal{Q}^0_c$ is the heat exchanged in the weakly coupled Otto cycle (assuming the WM is thermalized at the end of the stroke). At the end of this stroke, the state of the total WM-bath setup is $\rho_{\rm tot}^{A0}$, which is again correlated in general.\\\\ \textbf{\textit{Disconnecting the cold bath}}: In the last step, the cold bath is disconnected from the WM instantaneously (shown as $A0$ to $A1$ in Fig. \ref{cycle}(b)). As before, the work done in this process is zero, \begin{align} \mathcal{W}^{\rm discon}_A =- \text{Tr}\big[H^c_{\rm SB}(\tau_c)\,\,\rho_{\rm tot}^{A0}\big] = 0. \end{align} In general, connecting and disconnecting the baths and the WM is not free of work cost \cite{nazir-strong, nazir-strong2}, but for our special model this cost turns out to be zero. \\\\ Now, the total work done in the cycle is $\mathcal{W}=\mathcal{W}_{AB}+\mathcal{W}_{CD}$, which reads \begin{align} \mathcal{W}&= (\omega_c-\omega_h)\big(\tanh \beta_c\omega_c-\tanh \beta_h\omega_h\big)\sin^2F^h(\tau_h) \nonumber\\ &=\mathcal{W}_0 \sin^2F^h(\tau_h), \end{align} where $\mathcal{W}_0$ is the total work done in the weakly coupled Otto cycle. Thus, in the heat engine regime, we find for the power and efficiency \begin{align} &\mathcal{P}=-\frac{\mathcal{W}}{\tau}=\mathcal{P}_0 \sin^2F^h(\tau_h),~~\text{and}~~\eta=-\frac{\mathcal{W}}{\mathcal{Q}_h}=\eta_0, \end{align} where $\mathcal{P}_0$ and $\eta_0$ are the power and efficiency of the weakly coupled Otto cycle of the previous section. Interestingly, we see that the efficiencies of the weakly and strongly coupled heat engines are the same. This shows that, even with approximate thermalization in the second and fourth strokes, we achieve the maximum efficiency for our model of the strongly coupled Otto engine, whereas the weakly coupled Otto engine requires perfect thermalization in the non-unitary strokes to reach its maximum efficiency. In the refrigerator regime, the cooling rate and coefficient of performance read \begin{align} &\kappa=\frac{\mathcal{Q}_c}{\tau}=\kappa_0\sin^2F^h(\tau_h)\sin^2F^c(\tau_c),\\ &{K}=\frac{\mathcal{Q}_c}{\mathcal{W}}={K}_0\sin^2F^c(\tau_c). \end{align} Interestingly, for the refrigerator the coefficient of performance depends on the last thermalization stroke. In the next section we show that, with perfect thermalization in the last thermalization stroke, $\sin^2F^c(\tau_c)=1$, and we achieve the maximum coefficient of performance in the strongly coupled Otto cycle as well.
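The structure of these results can be checked numerically on the two-qubit model itself. The sketch below uses illustrative parameters; for simplicity, a constant coupling $f_0$ is assumed instead of Eq. (\ref{optft}), so that $F^h(\tau)=f_0\tau$. It evolves the joint WM-bath state exactly through the hot stroke and verifies $\mathcal{Q}_h=-\Delta E_{\rm B_h}=\mathcal{Q}_h^0\sin^2F^h(\tau)$, as well as ${\rm Tr}[H^h_{\rm SB}\,\rho]=0$, which is why the connection and disconnection steps cost no work for diagonal states:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Sketch (assumed parameters; constant coupling f0, so F^h(tau) = f0*tau):
# evolve the joint WM-bath state through the hot stroke and check
# Q_h = -Delta E_B = Q_h^0 * sin^2 F^h(tau), and that Tr[H_SB rho] = 0.
beta_h, beta_c, w_c, w_h, f0, tau = 0.2, 1.0, 1.0, 3.0, 0.7, 2.0
sz, I2 = np.diag([1.0, -1.0]), np.eye(2)

X = np.zeros((4, 4)); X[1, 2] = X[2, 1] = 1.0      # (sx sx + sy sy)/2, cf. Eq. (int)
H = w_h * (np.kron(sz, I2) + np.kron(I2, sz)) + f0 * X

pS = np.exp(-beta_c * w_c * np.array([1.0, -1.0])); pS /= pS.sum()
g_h = np.tanh(beta_h * w_h)
rho0 = np.kron(np.diag(pS), 0.5 * np.diag([1 - g_h, 1 + g_h]))

U = expm(-1j * H * tau)
rho = U @ rho0 @ U.conj().T

Q = -w_h * np.trace(np.kron(I2, sz) @ (rho - rho0)).real   # Eq. (sdef)
Q0 = w_h * (np.tanh(beta_c * w_c) - np.tanh(beta_h * w_h))
assert np.isclose(Q, Q0 * np.sin(f0 * tau) ** 2)
assert np.isclose(np.trace(f0 * X @ rho).real, 0.0)        # workless (dis)connection
print(Q / Q0)                                              # = sin^2 F^h(tau)
\end{verbatim}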
\subsection{Markovian and non-Markovian scenario} \label{markov-nonmarkov} Depending on the functional form of $f(t)$, one can make the system dynamics Markovian or non-Markovian. Let us first recall the form of $f(t)$ given in Eq. (\ref{optft}), \begin{equation} f(t) =\frac{ e^{- t/2 g}}{2 g \sqrt{1-e^{- t/g}}}. \end{equation} As a result we get $F(t)=\frac{\pi}{2}-\sin^{-1}e^{-t/2g}$, which gives $f(t)\tan F(t)=1/2g \ge 0$ for all $t> 0$, and the corresponding master equation is a semi-group master equation. Hence, from Eq. (\ref{markovian-cond}) we find that the dynamics is Markovian. From Eq. (\ref{sys-evolved}), one can further note that, in the long time limit ($t\rightarrow \infty$ compared to the bath correlation time), an initially diagonal system state in the $\sigma_z$ basis approaches the fixed thermal state. This shows that our model indeed achieves thermalization. Now, it is straightforward to see that $\sin^2 F(t)=1-e^{-t/g}$, so that, applied to the Otto cycle with $\tau_c$ the duration of the cold thermalization stroke, we get \begin{align} {K} = {K}_0(1-e^{-\tau_c/g_c}). \end{align} For perfect thermalization to occur in the last non-unitary stroke we need, in principle, $\tau_c\rightarrow \infty$ (on the scale of the bath correlation time). This shows that we can reach the maximum achievable coefficient of performance in the strongly coupled scenario. In this case we also notice that $\mathcal{W}=-(\mathcal{Q}_h+\mathcal{Q}_c)$, which is nothing but the first law of thermodynamics for a complete cycle. This confirms the consistency of our thermodynamic framework. \begin{figure}[h] \centering \includegraphics[width=82mm]{plot3.pdf} \caption{Plot of $f(t)\tan F(t)$ vs $t$ for Markovian (red dashed) and non-Markovian (solid blue) dynamics with $g=0.8$.} \label{cycle2} \end{figure} \begin{figure}[h] \centering \includegraphics[width=82mm]{plot2.pdf} \caption{Plot of $\mathcal{P}/\mathcal{P}_0$ vs $t$ for Markovian (red dashed) and non-Markovian (solid blue) dynamics with $g=0.8$.} \label{cycle3} \end{figure} We now choose the following form of $f(t)$, which gives rise to non-Markovian dynamics according to the condition in Eq. (\ref{markovian-cond}); it can be thought of as a non-Markovian correction to the previous form of $f(t)$: \begin{align} f(t)&=\frac{ e^{- t/2 g}}{2 g \sqrt{1-e^{- t/g}}}-\frac{10 \sin (20 t)}{(10 t+1)^2}+\frac{20 \cos (20 t)}{10 t+1}. \end{align} One can easily check that this functional form gives rise to non-Markovian dynamics. In Fig. \ref{cycle2}, we plot $f(t)\tan F(t)$ versus $t$; its non-negativity ensures Markovian dynamics. It is evident from the plot that for the second form of $f(t)$ the condition breaks down, resulting in non-Markovian dynamics, whereas for the first form $f(t)\tan F(t)$ is always positive. Again, from Eq. (\ref{sys-evolved}), one can check that in the limit $t\rightarrow\infty$ an initially diagonal system state in the $\sigma_z$ basis thermalizes for the non-Markovian form of $f(t)$ as well. In Fig. \ref{cycle3}, we plot $\sin^2 F(t)$ versus $t$ for $g=0.8$, to show the non-Markovian advantage for the power output of the Otto engine. Clearly, the oscillatory behavior of $\sin^2 F(t)$ in the non-Markovian scenario gives an enhancement over the Markovian scenario, as is evident from the expression for the power, $\mathcal{P}=\mathcal{P}_0\sin^2 F^h(\tau_h)$. With increasing time, both reach the limit $\mathcal{P}_0$ of the weakly coupled Otto engine. Similar behavior can be seen for the Otto refrigerator.
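The behavior shown in Figs. \ref{cycle2} and \ref{cycle3} can be reproduced with a short numerical sketch (time grid and $g=0.8$ assumed; $F(t)$ is obtained here by a crude quadrature of $f(t)$):
\begin{verbatim}
import numpy as np

# Sketch (assumed grid, g = 0.8): compare sin^2 F(t), i.e. P/P_0, for the
# Markovian f(t) of Eq. (optft) and its non-Markovian modification.
g = 0.8
t = np.linspace(1e-4, 6.0, 60001)
dt = t[1] - t[0]

f_m = np.exp(-t / (2*g)) / (2*g * np.sqrt(1 - np.exp(-t/g)))
f_nm = f_m - 10*np.sin(20*t) / (10*t + 1)**2 + 20*np.cos(20*t) / (10*t + 1)

F_m = np.cumsum(f_m) * dt                 # crude quadrature for int_0^t f dt'
F_nm = np.cumsum(f_nm) * dt
P_m, P_nm = np.sin(F_m)**2, np.sin(F_nm)**2

i = np.searchsorted(t, 0.1)               # a short stroke time
print(P_nm[i] > P_m[i])                   # True: non-Markovian enhancement
print(P_m[-1], P_nm[-1])                  # both close to the weak-coupling limit 1
\end{verbatim}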
\section{Conclusion} \label{conclu} In this paper we have studied a model of a quantum Otto cycle with single qubit baths. First, from the closed quantum evolution of two qubits with a specially chosen joint Hamiltonian, we derived an exact master equation for a single qubit in the form of a semi-group master equation. By tweaking the form of the joint Hamiltonian, one can obtain both Markovian and non-Markovian dynamics. Next, we constructed an Otto cycle employing this dynamics in the thermalization strokes, in order to investigate the thermodynamic implications of this model. Our model provides a link to study the interplay between strong coupling and non-Markovianity. We employed the formalism of strongly coupled quantum thermodynamics to calculate the thermodynamic quantities of the Otto cycle for both the Markovian and the non-Markovian scenario. Interestingly, for the Otto engine, we find that the efficiency is always maximal, irrespective of whether the WM is fully or only partially thermalized in the non-unitary strokes, whereas for the refrigerator, perfect thermalization in the last stroke is needed to achieve the maximal coefficient of performance. On the other hand, with approximate thermalization, the power output of the strongly coupled Otto cycle is reduced. In this scenario, we can exploit non-Markovianity, which provides an enhancement of performance over the Markovian counterpart. In the long time limit, the power output for both the Markovian and the non-Markovian model reaches the limit of the weakly coupled cycle. Similar effects can be seen for the Otto refrigerator. It is important to note that these observations are based on the specific model we have chosen. This special model has enabled us to demonstrate a non-Markovian advantage for thermodynamic tasks in the regime of strong coupling. \acknowledgements The work was supported by the Polish National Science Centre Project No. 2018/30/A/ST2/00837. SC would like to acknowledge Sibasish Ghosh for useful discussions on the problem.
Theory of open quantum systems provides a solid foundation to the emergent field of quantum thermodynamics \cite{kosloff13quantum, binder18book, vinjanampathy16quantum}. Dynamical framework of quantum mechanics allows one to address finite time thermodynamics processes. Specifically, in the weak coupling limit, microscopically derived Markovian master equation (also known as Davies construction \cite{davies74}) in the GKSL form gives a consistent and universal description of the basic thermodynamic laws \cite{alicki79the, kosloff13quantum, alicki18introduction}. Originally, Davies construction was engineered for time independent system Hamiltonian. Later on it was generalized for the time dependent scenarios \cite{kosloffannual, Davies1978, Albash_2012, kamleitner, yamaguchi}. Beyond weak coupling approximation, where non-Markovianity inevitably enters into the picture, it is not straightforward to establish a consistent framework of thermodynamics, largely due to the unavailability of a unique closed form master equation as mentioned before. Consequently, a number of approaches \cite{Esposito_2010, kato_strong, stratsberg-collision,llobet-strong,prb-strong, strasberg-collisional-strong, rivas-strong, Bergmann2021, Miller2018, Nazir2018, Kato2018} have been proposed to deal with strong interaction without compromising the thermodynamic consistency. One of the major applications of quantum thermodynamics is the study of quantum thermal machines \cite{sun-engine, kosloffannual, GELBWASERKLIMOVSKY2015329, alicki18introduction}, which are typically restricted to weak coupling scenario. New experimental techniques \cite{strong-exp1, strong-exp2, strong-exp3, strong-exp4, strong-exp5, strong-exp6, strong-exp7} to access strongly coupled regime and recent theoretical progresses have now opened the avenue to consider the performance of thermal machines beyond weak coupling scenario \cite{Gelbwaser-Klimovsky-strong, Strasberg-strong, kosloff-strong, eisert-strong, Mu_2017,secular-3, nazir-strong, nazir-strong2, archak-strong,McConnell_2022}. In general, it has been observed that strong coupling effect reduces the performance of a thermal machine \cite{Strasberg-strong, nazir-strong, nazir-strong2, segal-strong, hasegawa-strong}. On the other hand, there are several studies \cite{arpan, Zhang_2014, abiuso-non, serra-non} that showed that non-Markovian effect is actually beneficial for enhancing the performance even in the regime of weak coupling \cite{Strasberg-strong, arpan}. Although there are some objections \cite{thomas-non, Wiedmann_2020, shirai-non} to this non-Markovian boosting due to the neglecting of the coupling and decoupling cost, recently genuine non-Markovian advantage has been reported \cite{polish-non} taking into account these previous shortcomings. Evidently, it is an intriguing task to investigate the interplay between strong interaction and non-Markovianity \cite{segal-connection} with respect to thermodynamic tasks, and it still remains a largely unexplored area. With this goal, here we consider a model of quantum Otto cycle, where the working medium qubit is connected to another single qubit (working as bath) with arbitrary interaction strength. Following Ref. \cite{Prathik_Cherian_2019}, we devise a two-qubit unitary evolution such that the exact reduced dynamics of the working medium resembles a semi-group master equation i e. in the GKSL form with constant coefficients, representing pumping and damping of a single qubit system. 
There are several advantages for choosing this model. Firstly, we go beyond the weak coupling approximation and yet get the exact dynamics in the GKSL form. Secondly, by tweaking the interaction Hamiltonian, we can make the dynamics non-Markovian. This gives us a way to study strong coupling and non-markovianity at the same time. Finally, we have control over the thermalization process taking place in contact with a finite bath. We work out analytical expressions for efficiency (coefficient of performance) and power (cooling power) for Otto engine (refrigerator) employing the thermodynamic framework suited for strong coupling. We notice that transition from Markovian to non-Markovian scenario gives better performance even in the regime of strong interaction. This paper is organized as follow. In Sec. \ref{chapter-2}, we give a short introduction to Otto cycle with conventional weak coupling approximation. In Sec. \ref{forma}, we discuss the strong coupling formalism we use in our paper. Next we describe our model of qubit dynamics in Sec. \ref{dyn-des}. Implementation of the Otto cycle is described in Sec. \ref{otto-cyc-des}. In Sec. \ref{markov-nonmarkov}, we discuss the thermodynamic implications of Markovian and non-Markovian dynamics. Finally, in Sec. \ref{conclu}, we conclude. \section{Weakly coupled Otto cycle} \label{chapter-2} We present a brief discussion on the conventional Otto cycle where the working medium (WM) with Hamiltonian $H_{\rm S}$ is weakly connected to two thermal baths, one at a time, with temperatures $T_h$ and $T_c$ ($T_h>T_c$) respectively. The setup is described by the total Hamiltonian, \begin{equation} H(t)=H_{\rm S}(t)+H_{ B_h}+H_{ B_c}+H_{\rm SB}(t), \end{equation} where, $H_{\rm B_h}$, $H_{\rm B_c}$ are the self Hamiltonians of the hot and cold bath respectively and $H_{\rm SB}(t)=H_{\rm SB}^h(t)+H_{\rm SB}^c(t)$ denotes the interaction Hamiltonian. The cycle consists of four strokes as described below. Schematic diagram of the cycle is given in Fig \ref{cycle}(a). \begin{figure*} \centering \includegraphics[width=130mm]{Fig_otto.pdf} \caption{Schematic of Otto cycle for (a) weak and (b) strong coupling.} \label{cycle} \end{figure*} For simplicity we take $\hbar=k_{\rm B}=1$. We here consider that the time dependence of the WM Hamiltonian is controlled through an external parameter $\omega(t)$ and we write the system Hamiltonian as $H_{\rm S}(\omega(t))$. We also denote $H_{S,\alpha}$ as the WM Hamiltonian at each point of the schematic of Fig. (\ref{cycle}(a)), with $\alpha=\{A,B,C,D\}$. \\\\ \textit{\textbf{First stroke}}: Initially (point A in the schematic diagram \ref{cycle}(a)), the WM is prepared in the state $\rho_S^A$ with Hamiltonian $H_{S,A}=H_{\rm S}(\omega=\omega_A)\equiv H_{\rm S}(\omega_A)$, in equilibrium with the cold bath. Baths are assumed to be always in equilibrium state with their respective Hamiltonians and temperatures. Therefore, the initial joint state of system-bath can be written as, \begin{equation} \rho_{\rm tot}^A=\rho_S^A\otimes\rho_B^c=\frac{e^{-\beta_c H_{\rm S}(\omega_A)}}{{\rm Tr}[e^{-\beta_c H_{\rm S}^A}]}\otimes \frac{e^{-\beta_c H_{\rm B_c}}}{{\rm Tr}[e^{-\beta_c H_{\rm B_c}}]}, \end{equation} First stroke is unitary, where the WM is decoupled from the bath and WM Hamiltonian $H_{\rm S}(\omega(t))$ is changed from $H_{S,A}=H_{\rm S}(\omega_A)$ at point $A$ to $H_{S,B}=H_{\rm S}(\omega_B)$ at point $B$ in a time duration $\tau_{u1}$. 
The final state of the WM after the first unitary stroke is, \begin{equation} \rho_S^B=U_1\rho_S^A U^{\dagger}_1, \end{equation} where, $U_1=\mathcal{T}\exp\left[{-i\int_A^B H_{\rm S}(\omega(t))dt}\right]$ is the unitary operator. \\\\ \textit{\textbf{Second stroke}}: In this stroke from point $B$ to $C$, the WM is connected to the hot bath at inverse temperature $\beta_h$ for a time interval $\tau_h$, while keeping the WM Hamiltonian fixed at $H_{\rm S}(\omega_B)$ throughout the process. Evolution of the WM is governed by the Markovian master equation in GKSL form derived microscopically for weak coupling and standard Born-Markov, secular approximations \cite{breuer02}, \begin{equation} \dot{\rho}_S(t)=-i[H_{\rm S}(\omega_B),\rho_S(t)]+\mathcal{D}_h[\rho_S(t)], \end{equation} where $\mathcal{D}_h$ is the dissipative superoperator. After a sufficiently long time $\tau_h>>\tau_B$ (bath correlation time), the WM is equilibriated with the bath with state $\rho_S^C= {e^{-\beta_h H_{\rm S}(\omega_B)}}/{{\rm Tr}[e^{-\beta_h H_{\rm S}(\omega_B)}]}$. Due to weak coupling approximation the joint system-bath state is always in the form $\rho_{\rm tot}(t)=\rho_S(t)\otimes \rho_B^i$ ($i=h,c$).\\\\ \textit{\textbf{Third stroke}}: Similar to the first stroke, this is the second unitary stroke, where the Hamiltonian is changed back from $H_{S,C}=H(\omega_B)$ to $H_{S,D}=H(\omega_A)$ in a time interval $\tau_{u2}$. Final state of the working medium after the first unitary stroke is, \begin{equation} \rho_S^D=U_2\rho_S^C U^{\dagger}_2, \end{equation} where, $U_2=\mathcal{T}\exp\left[{-i\int_C^D H_{\rm S}(\omega(t))dt}\right]$ is the unitary operator.\\\\ \textit{\textbf{Fourth stroke}}: This is the second thermalization stroke, where the WM is connected to the cold bath at inverse temperature $\beta_c$, keeping the Hamiltonian fixed at $H_{\rm S}(\omega_A)$. If the stroke duration $\tau_c$ is sufficiently long ($\tau_c>>\tau_B$), the WM is returned to the initial thermal state $\rho_S^D=\rho_S^A$ completing the cycle. Total cycle time is given by $\tau=\tau_{u1}+\tau_h+\tau_{u2}+\tau_c$. The definition of heat and work is well defined in regime of weak interaction, given by respectively \cite{alicki79the,vinjanampathy16quantum}, \begin{equation} \mathcal{Q}=\int{\rm Tr}[\dot{\rho}_S(t)H_{\rm S}(t)]dt,~~\mathcal{W}=\int{\rm Tr}[{\rho}_S(t)\dot{H}_{\rm S}(t)]dt \end{equation} Now, we consider a specific model where the Hamiltonian of the WM is given as, \begin{equation} H_{\rm S}(t)=\omega(t)\sigma_z. \end{equation} As mentioned before, $\omega(t)$ is the external parameter, which is changed from $\omega_A=\omega_c$ to $\omega_B=\omega_h$ in the first unitary stroke and back to $\omega_c$ in the final unitary stroke. Two thermal baths are always in usual equilibrium states with inverse temperatures $\beta_h$ and $\beta_c (\beta_h<\beta_c)$ respectively. We calculate the heat and work done in each stroke for this model. Note that, in the unitary strokes no heat is exchanged and in the thermalization strokes no work is done as the Hamiltonian is kept fixed. 
Defining the average energy of the WM at the $\alpha$-th ($\alpha=A,B,C,D$) point as, $E_{\alpha}={\rm Tr}[\rho_S^\alpha H_{S,\alpha}]$, we get the following expressions for work and heat in different strokes, \begin{align} \label{expr-stroke1} &\mathcal{W}^0_{AB} = \braket{E_B}-\braket{E_A} =\,(\omega_c-\omega_h)\tanh \beta_c \omega_c\\ \label{expr-stroke2} &\mathcal{Q}^0_h = \braket{E_C}-\braket{E_B}= \,\omega_h(\tanh \beta_c \omega_c- \tanh \beta_h \omega_h) \\ \label{expr-stroke3} &\mathcal{W}^0_{CD} = \braket{E_D}-\braket{E_C}= \,(\omega_h-\omega_c)\tanh \beta_h \omega_h\\ \label{expr-stroke4} &\mathcal{Q}^0_c = \braket{E_A}-\braket{E_D} =\,\omega_c(\tanh \beta_h \omega_h-\tanh \beta_c \omega_c) \end{align} It is evident from the above expressions that $\mathcal{W}^0_{AB}+\mathcal{W}^0_{CD}=-(\mathcal{Q}_h^0+\mathcal{Q}_c^0)$, which is nothing but the energy conservation or the first law of thermodynamics. When $\omega_h/\omega_c>\beta_c/\beta_h$, the cycle works as a heat engine and we get the following expression for the power $\mathcal{P}_0$ as, \begin{align} &\mathcal{P}_0=-\frac{\mathcal{W}}{\tau}=-\frac{\mathcal{W}^0_{AB}+\mathcal{W}^0_{CD}}{\tau}=\frac{\mathcal{Q}_h+\mathcal{Q}_c}{\tau} \end{align} and efficiency $\eta_0$ as, \begin{align} \eta_0 =-\frac{\mathcal{W}}{\mathcal{Q}^0_h}=-\frac{\mathcal{W}^0_{AB}+\mathcal{W}^0_{CD}}{\mathcal{Q}^0_h}=1-\frac{\omega_c}{\omega_h}. \end{align} Similarly, in the refrigerator regime that is when $\omega_h/\omega_c<\beta_c/\beta_h$, cooling rate $\kappa_0$ is given as, \begin{align} \kappa_0=\frac{\mathcal{Q}^0_c}{\tau}, \end{align} and coefficient of performance $K_0$ is given as, \begin{align} {K}_0=\frac{\mathcal{Q}^0_c}{\mathcal{W}^0_{AB}+\mathcal{W}^0_{CD}}=\frac{\omega_c}{\omega_h-\omega_c}. \end{align} Here, we have used the sign convention that energy flow (heat, work) is positive (negative) if it enters (leaves) the WM. Hence, a heat engine (refrigerator) is characterized by $\mathcal{Q}_h>0$ ($<0$), $\mathcal{Q}_c<0$ ($>0$), and $\mathcal{W}<0$ ($>0$). Second law of thermodynamics gives us the bound on efficiency (coefficient of performance) for the engine (refrigerator). It states that the total entropy production is never negative. Now, for each separate thermalization stroke one has \cite{callen1985thermodynamics,vinjanampathy16quantum} \begin{equation} \Delta S_{\rm tot}=\Delta S-\beta \Delta Q\geq 0, \end{equation} where, $\Delta S$ is the change in the von-Neumann entropy \cite{nielson} of the system in a thermodynamic process and $\Delta Q$ is the heat entering to the system form a bath at inverse temperature $\beta$. In our model of Otto cycle, one can check that $\rho_S^B=\rho_S^A$ and $\rho_S^C=\rho_S^D$. Hence, change in the von-Neumann entropy of the system in the two thermalization strokes cancel each other and second law takes the form, \begin{equation} \beta_h \mathcal{Q}_h^0+\beta_c \mathcal{Q}_c^0\leq 0. \end{equation} as of course $\Delta S_{\rm tot}$ remains zero in the unitary processes. Validity of the above inequality can easily be seen from the expressions of Eq. (\ref{expr-stroke1}) to Eq. (\ref{expr-stroke4}) and employing the fact that $\tanh x$ is a monotonically increasing function of $x$. This implies that, \begin{equation} \eta_0=1+\frac{\mathcal{Q}_c^0}{\mathcal{Q}_h^0}=1-\frac{\omega_c}{\omega_h}\leq 1-\frac{\beta_h}{\beta_c}. \end{equation} Similarly, in the refrigerator regime, $K_0\leq \frac{\beta_h}{\beta_c-\beta_h}$. This limit is famously known as Carnot limit. 
\section{Strongly coupled Otto Cycle} In the strongly coupled model of the Otto cycle, the descriptions of the strokes are the same as in the weakly coupled one. Difference will come only in the thermodynamic framework. In this case, thermalization stroke will make the system-bath joint state a correlated one and the marginal bath state will no longer be a equilibrium state. Consequently, the thermodynamic analysis will change and we have to adopt different definitions of the thermodynamic observables suited for strongly coupled scenario. Here we follow the framework of Ref. \cite{Esposito_2010, kato_strong, rivas-strong} to define the thermodynamic quantities. \subsection{Formalism} \label{forma} Let us start by giving a short account of this framework. We first write the total Hamiltonian of a system-bath setup as following, \begin{equation} H_{\rm tot}(t)= H_{\rm S}(t)+H_{\rm B}+H_{\rm SB}(t). \end{equation} Change in average energy of the joint system-bath state is identified as the work performed, \begin{equation} \label{work} dW(t)=dE_{\rm SB}(t)={\rm Tr}[dH_{\rm tot}(t)\rho_{SB}(t)+H_{\rm tot}(t)d\rho_{SB}(t)], \end{equation} where, $E_{\rm SB}(t)={\rm Tr}[\rho_{SB}(t)H_{\rm tot}]$ is the total energy of the joint state $\rho_{SB}(t)$ of the system and bath. Heat is defined as the energy flowing out of the reservoir, \begin{align} \label{sdef} \nonumber dQ(t)&=-d{\rm Tr}_B[H_{\rm B}\rho_B(t)]=-{\rm Tr}_B[H_{\rm B} d\rho_B(t)]\\ &={\rm Tr}[(H_{\rm S}(t)+H_{\rm SB}(t))d\rho_{SB}(t)], \end{align} where, $\rho_B(t)=Tr_S[\rho_{SB}(t)]$. Internal energy of the system is defined as, \begin{equation} \label{int-energy} E_{\rm S}(t)=Tr_{SB}[(H_S(t)+H_{SB}(t))\rho_{SB}(t)]. \end{equation} Now, it is easy to see that, \begin{equation} dE_{\rm S}(t)=dW(t)+dQ(t), \end{equation} which is nothing but the first law of thermodynamics. In the weak coupling limit ($H_{\rm SB}\approx 0$), these definitions boils down to the conventional definitions stated in the previous section. Let us assume the initial joint state as, \begin{equation} \rho_{SB}(0)=\rho_S(0)\otimes \rho_B^\beta, \end{equation} where, $\rho_B^\beta$ is the thermal state of the bath with inverse temperature $\beta$. The state of the joint system-bath at time $t=\tau$ is given by, \begin{equation} \rho_{SB}(t)=U(\tau,0)\rho_{SB}(0)U^{\dagger}(\tau,0), \end{equation} where, $U(\tau,0)$ is the unitary generated by the total Hamiltonian $H_{\rm tot}(t)$. As mentioned before, entropy production is defined as $\Delta S_{\rm tot}=\Delta S-\beta \Delta Q$. Note that, $\beta$ is the initial temperature of the bath. At later times, the reduced state of the bath is not even a thermal state. It can be shown that \cite{Esposito_2010, rivas-strong}, \begin{equation} \Delta S_{\rm tot}(t)=S(\rho_{SB}(t)\parallel \rho_S(t)\otimes \rho_B^\beta)\geq 0, \end{equation} where, $S(\phi\parallel\psi)$ is the relative entropy between two quantum states $\phi$ and $\psi$. This shows the validity of the second law of thermodynamics in this formalism. Next we derive the master equation used to describe the dynamics in our model of Otto cycle. 
\subsection{Dynamics with single qubit bath} \label{dyn-des} We consider a two-qubit total Hamiltonian which can be considered as the total Hamiltonian of the system-bath setup, \begin{align} \label{ham-model} &H_{\rm{tot}}(t)= H_{\rm S}\otimes \mathds{1} + \openone \otimes H_{\rm B}+H_{\rm SB}(t) \nonumber\\ &= \omega (\sigma_z\otimes \openone + \openone \otimes\, \sigma_z)+H_{\rm SB}(t) \end{align} where the system Hamiltonian is $H_{\rm S}=\omega \sigma_z$, bath Hamiltonian is $H_{\rm B}=\omega \sigma_z$, and the interaction Hamiltonian $H_{\rm SB}(t)$ reads \begin{equation}\label{} H_{\rm SB}(t) = \frac{f(t)}{2} ( \sigma_x \otimes \sigma_x + \sigma_y \otimes \sigma_y ) , \end{equation} where $f(t)$ is a time dependent coupling strength. The matrix form representation reads \begin{equation} \label{int} H_{\rm SB}(t)=\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & f(t) & 0 \\ 0 & f(t) & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix}. \end{equation} Note here that we have chosen $H_{\rm S}$ and $H_{\rm B}$ in such a way in Eq. (\ref{ham-model}) that $H_{\rm tot}(t)$ is different time commuting. We have also chosen this special form for the Hamiltonian so that for a specific choice of $f(t)$ (as discussed later) the system evolution will be described by a semi-group master equation \cite{breuer02, Prathik_Cherian_2019}. Not only that, we can also smoothly transit to non-Markovian regime by changing the form of $f(t)$. Now, we choose the initial states of the system and environment to be, \begin{align} \rho_{\rm S}(0)&= \begin{bmatrix} p & x \\ x^* & 1-p \\ \end{bmatrix}, \label{inistate} ~\rho_{\rm B}(0)=\frac{1}{2} \begin{bmatrix} 1-g & 0 \\ 0 & 1+g \\ \end{bmatrix}. \end{align} where, $0\le p,g \le 1$ and $x$ is a complex number with $|x|^2\le p(1-p)$. One can assign a temperature to the initial bath state with respect to the bath Hamiltonian $H_{\rm B}$ to write it as a thermal state. The initial joint system-bath state $\rho_{SB}(0)=\rho_{\rm S}(0)\otimes\rho_{\rm B}(0)$ evolves through the unitary, \begin{equation} U(t,0)=\exp \Big[\int_0^tdt'\, H_{\rm tot}(t')\Big]. \end{equation} Note here that we have used the fact that $H_{\rm tot}(t)$ is different time commuting. The time evolved system state is $\rho_{\rm S}(t)={\rm Tr}_B \big[\rho_{ SB}(t)\big]$, where $ \rho_{ SB}(t)=U(t,0)\rho_{SB}(0)U^{\dagger}(t,0)$. The explicit form of $\rho_{S}$ can be written as, \begin{align} \label{sys-evolved} \rho_{ S}(t)=\Lambda_t[\rho_S(0)] =\begin{bmatrix} p(t) & x e^{-2 i \omega t} \cos F(t) \\ x^* e^{2 i \omega t} \cos F(t) & 1-p(t) \end{bmatrix} \end{align} where $$ p(t)= p\cos^2 F(t) + \frac{1-g}{2}\sin^2 F(t) , $$ $\Lambda_t$ is the dynamical map, and $F(t)=\int_0^tf(t')\,dt'$. The corresponding master equation \begin{align} \frac{d\rho_S}{dt} &= \mathcal{L}_t[\rho_S], \label{defn-gen} \end{align} reads as follows (cf. Appendix \ref{seconda}) \begin{align}\label{gen-me} &\frac{d\rho_{\rm S}(t)}{dt}=-i\omega [\sigma_z,\rho_{\rm S}(t)] \nonumber \\ &+ \gamma_-(t)\Big(\sigma_-\rho_{\rm S}(t)\,\sigma_+-\frac{1}{2}\{\sigma_+\sigma_-,\rho_{\rm S}(t)\}\Big)\nonumber\\ &+ \gamma_+(t) \Big(\sigma_+\rho_{\rm S}(t)\,\sigma_- - \frac{1}{2}\{\sigma_-\sigma_+,\rho_{\rm S}(t)\}\Big) , \end{align} with \begin{equation}\label{} \gamma_\pm(t) = (1 \mp g) \gamma(t) , \end{equation} and \begin{equation} \gamma(t) = f(t)\tan F(t) . \label{markovian-cond} \end{equation} It is, therefore, clear that the evolution is Markovian (CP-divisible) if \cite{rhp-non, rivas14quantum, cond-markovian} \begin{equation} \gamma(t) \geq 0 . 
\label{markovian-cond}
\end{equation}
Interestingly, one can show \cite{Prathik_Cherian_2019} that choosing
\begin{equation}
\label{optft}
f(t) =\frac{ e^{- t/2 g}}{2 g \sqrt{1-e^{- t/g}}}
\end{equation}
leads to $\gamma(t) = \frac{1}{2g}$, and hence both rates
\begin{align}
\gamma_- &= \frac{1+g}{2g}, \nonumber \\
\gamma_+ &= \frac{1-g}{2g} , \nonumber
\end{align}
are time independent, leading to a GKLS Markovian master equation. In this case the asymptotic state of the system is a thermal state of the form
\begin{equation}
\label{asym-sys-semi}
\rho_S(t\rightarrow \infty)=\frac{1}{2}\begin{bmatrix} {1-g} & 0 \\ 0 & {1+g}\\ \end{bmatrix}.
\end{equation}
Later we also discuss a non-Markovian generalization of the master equation in Eq. (\ref{gen-me}) with other choices of $f(t)$.
\subsection{Implementation of Otto cycle}
\label{otto-cyc-des}
In this section we implement an Otto cycle in which the working medium (WM) is connected to two single qubit baths (hot and cold). The dynamics in the thermalization strokes is described by the formalism developed above. For clarity of notation, we will label all the relevant quantities of the single qubit baths, namely the Hamiltonians, $\omega,g,f(t)$ and $F(t)$, with an index $h$ or $c$ depending on whether they refer to the hot or the cold bath, respectively. The total Hamiltonian of the WM and the baths is
\begin{equation}
H(t)=H_{\rm S }(t)+H_{\rm B_h}+H_{\rm B_c}+H_{\rm SB}(t),
\end{equation}
where $H_{\rm S}(t)=\omega(t)\sigma_z$. The external parameter $\omega(t)$ is varied from $\omega_c$ to $\omega_h$ in the first unitary stroke and changed back to $\omega_c$ in the second unitary stroke. $H_{\rm B_h}$ and $H_{\rm B_c}$ are $\omega_h\sigma_z$ and $\omega_c\sigma_z$, in accordance with Eq. (\ref{ham-model}). The interaction Hamiltonian $H_{\rm SB}(t)=H_{\rm SB}^h(t)+H_{\rm SB}^c(t)$ is given by Eq. (\ref{int}), with $f(t)$ replaced by $f^h(t)$ or $f^c(t)$ for the contact with the hot and cold bath, respectively. The initial states of the hot and cold baths are as follows,
\begin{equation}
\label{baths}
\rho_{\rm B_h}(0)= \frac{1}{2}\begin{bmatrix} {1-g_h} & 0 \\ 0 & {1+g_h} \\ \end{bmatrix}, ~\rho_{\rm B_c}(0)= \frac{1}{2}\begin{bmatrix} {1-g_c} & 0 \\ 0 & {1+g_c} \\ \end{bmatrix}.
\end{equation}
The initial temperatures of the baths can be determined by writing the states in the form of thermal states,
\begin{equation}
\label{b-st}
\rho_{\rm B_j}(0)=\frac{e^{-\beta_j H_{\rm B_j}}}{Z_j},~~j=\{h,c\},
\end{equation}
where $Z_j={\rm Tr}[e^{-\beta_j H_{\rm B_j}}]$, which gives us $g_h=\tanh \beta_h\omega_h$ and, similarly, $g_c=\tanh \beta_c\omega_c$. A schematic of the cycle is shown in Fig. \ref{cycle}(b). The WM is initially (point A1) prepared in the thermal state corresponding to the initial temperature of the cold bath, and the total WM-bath state is initially the product state
\begin{equation}
\rho_{\rm tot}^A=\frac{e^{-\beta_c H_{\rm S}(\omega_c)}}{{\rm Tr}[e^{-\beta_c H_{\rm S}(\omega_c)}]}\otimes \frac{e^{-\beta_c H_{\rm B_c}}}{{\rm Tr[e^{-\beta_c H_{\rm B_c}}]}},
\end{equation}
where the initial state of the cold bath in Eq. (\ref{baths}) is written in the form of Eq. (\ref{b-st}). Below we describe the strokes of the Otto cycle.
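The thermalization strokes described below repeatedly use the reduced dynamics of Eq. (\ref{sys-evolved}). As a minimal illustration (our own sketch; $f$, $g$ and $\omega$ are whatever bath parameters apply to the stroke at hand):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def evolved_state(rho0, f, g, omega, t):
    # Reduced WM state of Eq. (sys-evolved) for a bath with parameter g
    F, _ = quad(f, 0.0, t)                     # F(t) = int_0^t f(t') dt'
    p, x = rho0[0, 0].real, rho0[0, 1]
    pt = p * np.cos(F)**2 + 0.5 * (1 - g) * np.sin(F)**2
    xt = x * np.exp(-2j * omega * t) * np.cos(F)
    return np.array([[pt, xt], [np.conj(xt), 1 - pt]])
\end{verbatim}
Note that diagonal initial states ($x=0$), which is all the strokes below require, stay diagonal, with the population relaxing towards $(1-g)/2$.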
\\\\ \textbf{\textit{First stroke}}: In the first unitary stroke, the WM is disconnected from the baths and the external parameter $\omega(t)$ of the system Hamiltonian is varied from $\omega_c$ (point $A1$) to $\omega_h$ (point $B0$) in a time interval $\tau_{u1}$. The state does not change during this evolution, since it commutes with $H_{\rm S}(t)\propto\sigma_z$, and remains constant at $\rho_S^{A1}=e^{-\beta_cH_{\rm S}(\omega_c)}/Z_c$, where $Z_c={\rm Tr[e^{-\beta_c H_{\rm S}(\omega_c)}]}$. No heat is exchanged in this process, whereas the work done is given by
\begin{align}
\mathcal{W}_{AB} &= \braket{E^{B0}_S}-\braket{E^{A1}_S}=(\omega_c-\omega_h)\tanh \beta_c \omega_c.
\end{align}
Here, $E_S^{B0}={\rm Tr}[\rho_S^{A1} H_{\rm S}(\omega_h)]$ and $E_S^{A1}={\rm Tr}[\rho_S^{A1} H_{\rm S}(\omega_c)]$.\\\\
\textbf{\textit{Connecting the hot bath}}: The WM is connected to the hot bath, as represented by point $B0$ to $B1$ in the schematic diagram (Fig. \ref{cycle}(b)). We assume that this coupling operation is instantaneous, so the states of the WM and the bath do not change during it. As a result, the energy change of the total WM-bath setup during this operation is
\begin{align}
&\mathcal{W}^{\rm con}_B = \text{Tr}\left[H^h_{\rm SB}(0)\left(e^{-\beta_c H_{\rm S}(\omega_c)}/Z_c\otimes\rho_{\rm B_h}(0)\right)\right] = 0,
\end{align}
where $\rho_{\rm B_h}(0)$ is as given in Eq. (\ref{baths}), with $g_h=\tanh \beta_h\omega_h$, and $H_{\rm SB}^h(0)$ is given by Eq. (\ref{int}) with the parameter $f^h(0)$. The vanishing follows because $H^h_{\rm SB}$ has no diagonal elements while the initial product state is diagonal. The functional form of $f^j(t)$, $j\in\{h,c\}$, will be specified later for both the Markovian and non-Markovian scenarios. \\\\
\textbf{\textit{Second stroke}}: The second stroke is the thermalization stroke after the WM is connected to the hot bath. As the state of the bath does not change while the WM is being connected, at the start of the stroke its state is given by $\rho_{\rm B_h}(0)$. We assume that the WM is kept in contact with the bath for a time interval $\tau_h$ ($B1$ to $C0$ in the schematic), keeping the system Hamiltonian constant at $H_{\rm S}(\omega_h)$. The work done in this process is zero, as calculated using Eq. (\ref{work}). Using the definition in Eq. (\ref{sdef}), the heat exchanged in this stroke is given by
\begin{align}
\nonumber \mathcal{Q}_h=\mathcal{Q}_{BC}&=\int_0^{\tau_h} dt \,\,\text{Tr}\Big[\big(\omega_h\sigma_z+H^h_{\rm SB}(t)\big)\frac{d}{dt}\rho_{\rm tot}(t)\Big] \\
& =\omega_h (\tanh \beta_c \omega_c - \tanh \beta_h \omega_h) \sin^2 F^h(\tau_h) \nonumber\\
& =\mathcal{Q}^0_h \sin^2 F^h(\tau_h).
\end{align}
Here, $F^h(\tau_h)=\int_0^{\tau_h} f^h(t) dt$ and $\mathcal{Q}_h^0$ is the heat exchanged in the weakly coupled Otto cycle (assuming the WM is thermalized at the end of the stroke), given in Eq. (\ref{expr-stroke2}). After the thermalization stroke, the total state of the WM-bath setup is $\rho_{\rm tot}^{C0}$, which is in general a correlated state. The reduced state of the WM, denoted by $\rho_S^{C0}$, is of the form of Eq. (\ref{sys-evolved}), with $x=0$ and $p$ the population of the WM before the start of the stroke.\\\\
\textbf{\textit{Disconnecting the hot bath}}: The work done to remove the bath is given by
\begin{align}
\mathcal{W}^{\rm discon}_C =- \text{Tr}\big[H^h_{\rm SB}(\tau_h)\,\,\rho_{\rm tot}^{C0}\big] = 0,
\end{align}
where we again assumed that the process is instantaneous; it is denoted by the points $C0$ to $C1$ in Fig. \ref{cycle}.
\\\\ \textbf{\textit{Third stroke}}: This is the second and final unitary stroke, represented from the point $C1$ to $D0$ in the schematic (Fig. \ref{cycle}(b)) and taking place in a time interval $\tau_{u2}$. The WM is disconnected from the bath and the system Hamiltonian is changed back from $H_{\rm S}(\omega_h)$ to $H_{\rm S}(\omega_c)$. The reduced state of the WM at the start of this stroke is given by
\begin{align}
&\rho_S^{C1}= \begin{bmatrix} p^{C1} & 0 \\ 0 &1- p^{C1} \end{bmatrix},
\end{align}
where $p^{C1}=\frac{e^{-\beta_h \omega_h}}{Z_h}+\frac{\cos^2 F^h(\tau_h)}{2}(g_h-g_c)$. Here $Z_h={\rm Tr}[e^{-\beta_h H_{\rm S}(\omega_h)}]$. The reduced state of the WM does not change during the unitary evolution. The work done in this stroke is thus
\begin{align}
\mathcal{W}_{CD}&=\braket{E_{D0}}-\braket{E_{C1}}=(\omega_c-\omega_h)\text{Tr}[\sigma_z\,\rho_S^{C1}] \nonumber\\
&=(\omega_h-\omega_c)\big[g_h-\cos^2F^h(\tau_h)(g_h-g_c)\big],
\end{align}
where $g_h=\tanh \beta_h\omega_h$ and $g_c=\tanh \beta_c\omega_c$ as mentioned before.\\\\
\textbf{\textit{Connecting the cold bath}}: As before, the process (from $D0$ to $D1$ in Fig. \ref{cycle}(b)) is instantaneous, and the work done in the process is
\begin{align}
\mathcal{W}^{\rm con}_D = \text{Tr}\left[H^c_{\rm SB}(0)\left(\rho_S^{C1}\otimes \rho_{\rm B_c}(0)\right) \right] = 0.
\end{align}
Here, $\rho_{\rm B_c}(0)$ is as given in Eq. (\ref{baths}), with $g_c=\tanh \beta_c\omega_c$, and $H_{\rm SB}^c(0)$ is given by Eq. (\ref{int}) with the parameter denoted as $f^c(0)$.\\\\
\textbf{\textit{Fourth stroke}}: This is the second and final thermalization stroke, denoted from $D1$ to $A0$ in the schematic (Fig. \ref{cycle}(b)). After connecting the WM to the cold bath, it is kept in contact for a time interval $\tau_c$. The work done is again zero for this stroke. Using the definition in Eq. (\ref{sdef}), the heat exchange is calculated to be
\begin{align}
&\mathcal{Q}_c=\mathcal{Q}_{DA}=\int_0^{\tau_c} dt\text{Tr}\Big[\big(\omega_c\sigma_z+H^c_{\rm SB}(t)\big)\frac{d}{dt}\rho_{\rm tot}(t)\Big] \nonumber\\
& =\omega_c (\tanh \beta_h \omega_h - \tanh \beta_c \omega_c)\sin^2 F^h(\tau_h) \sin^2 F^c(\tau_c) \nonumber\\
& =\mathcal{Q}^0_c \sin^2 F^h(\tau_h) \sin^2 F^c(\tau_c),
\end{align}
where $\mathcal{Q}^0_c$ is the heat exchanged in the weakly coupled Otto cycle (assuming the WM is thermalized at the end of the stroke). At the end of this stroke, the state of the total WM-bath setup is $\rho_{\rm tot}^{A0}$, which is again correlated in general.\\\\
\textbf{\textit{Disconnecting the cold bath}}: In the last step, the cold bath is disconnected from the WM instantaneously (shown as $A0$ to $A1$ in Fig. \ref{cycle}(b)). As before, the work done in this process is also zero,
\begin{align}
\mathcal{W}^{\rm discon}_A =- \text{Tr}\big[H^c_{\rm SB}(\tau_c)\,\,\rho_{\rm tot}^{A0}\big] = 0.
\end{align}
In general, connecting and disconnecting the baths from the WM has a nonzero work cost \cite{nazir-strong, nazir-strong2}, but for our special kind of model this cost turns out to be zero. \\\\
The total work done in the cycle is $\mathcal{W}=\mathcal{W}_{AB}+\mathcal{W}_{CD}$, which reads
\begin{align}
\mathcal{W}&= (\omega_c-\omega_h)\big(\tanh \beta_c\omega_c-\tanh \beta_h\omega_h\big)\sin^2F^h(\tau_h) \nonumber\\
&=\mathcal{W}_0 \sin^2F^h(\tau_h).
\end{align}
Here, $\mathcal{W}_0$ is the total work done in the weakly coupled Otto cycle.
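The stroke-by-stroke bookkeeping above is easy to tabulate. The following sketch (our own consistency check; all parameter values are hypothetical) collects the closed-form expressions and verifies both $\mathcal{W}=\mathcal{W}_0\sin^2F^h(\tau_h)$ and the first law:
\begin{verbatim}
import numpy as np

def cycle_energetics(beta_h, beta_c, w_h, w_c, sin2Fh, sin2Fc):
    # sin2Fh = sin^2 F^h(tau_h), sin2Fc = sin^2 F^c(tau_c)
    gh, gc = np.tanh(beta_h * w_h), np.tanh(beta_c * w_c)
    W_AB = (w_c - w_h) * gc                               # first stroke
    Q_h  = w_h * (gc - gh) * sin2Fh                       # second stroke
    W_CD = (w_h - w_c) * (gh - (1 - sin2Fh) * (gh - gc))  # third stroke
    Q_c  = w_c * (gh - gc) * sin2Fh * sin2Fc              # fourth stroke
    return W_AB + W_CD, Q_h, Q_c

W, Q_h, Q_c = cycle_energetics(0.5, 2.0, 1.5, 0.5, sin2Fh=0.9, sin2Fc=1.0)
W0 = (0.5 - 1.5) * (np.tanh(2.0 * 0.5) - np.tanh(0.5 * 1.5))
assert np.isclose(W, 0.9 * W0)          # W = W_0 sin^2 F^h(tau_h)
assert np.isclose(W + Q_h + Q_c, 0.0)   # first law when sin2Fc = 1
\end{verbatim}
Here $\mathcal{W}<0$ signals the engine regime (net work extracted), consistent with the sign conventions used below.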
Thus, in the heat engine regime, we find the expressions for power and efficiency,
\begin{align}
&\mathcal{P}=-\frac{\mathcal{W}}{\tau}=\mathcal{P}_0 \sin^2F^h(\tau_h),~~\text{and}~~\eta=-\frac{\mathcal{W}}{\mathcal{Q}_h}=\eta_0,
\end{align}
where $\mathcal{P}_0$ and $\eta_0$ are the power and efficiency of the weakly coupled Otto cycle in the previous section. Interestingly, we see that the efficiencies of the weakly and strongly coupled heat engines are the same. This shows that even with approximate thermalization in the second and fourth strokes, we can achieve the maximum efficiency for our model of the strongly coupled Otto engine, whereas reaching the maximum efficiency of the weakly coupled Otto engine requires perfect thermalization in the non-unitary strokes. In the refrigerator regime, the expressions for the cooling rate and CoP are
\begin{align}
&\kappa=\frac{\mathcal{Q}_c}{\tau}=\kappa_0\sin^2F^h(\tau_h)\sin^2F^c(\tau_c),\\
&{K}=\frac{\mathcal{Q}_c}{\mathcal{W}}={K}_0\sin^2F^c(\tau_c).
\end{align}
Interestingly, for the refrigerator regime, the coefficient of performance depends on the last thermalization stroke. In the next subsection we show that with perfect thermalization in the last thermalization stroke, $\sin^2F^c(\tau_c)=1$, and we achieve the maximum coefficient of performance in the strongly coupled Otto cycle too.
\subsection{Markovian and non-Markovian scenario}
\label{markov-nonmarkov}
Depending upon the functional form of $f(t)$, one can make the system dynamics Markovian or non-Markovian. Let us first recall the form of $f(t)$ given in Eq. (\ref{optft}),
\begin{equation}
f(t) =\frac{ e^{- t/2 g}}{2 g \sqrt{1-e^{- t/g}}}.
\end{equation}
As a result we get $F(t)=\frac{\pi}{2}-\sin^{-1}e^{-t/2g}$, which gives $f(t)\tan F(t)=1/(2g) \ge 0$ for all $t> 0$, so that the corresponding master equation is of semi-group form. Hence, from Eq. (\ref{markovian-cond}) we find that the dynamics is Markovian. From Eq. (\ref{sys-evolved}), one can further note that, in the long time limit ($t\rightarrow \infty$ on the scale of the bath correlation time), an initially diagonal system state in the $\sigma_z$ basis approaches the fixed thermal state. This shows that our model indeed achieves thermalization. Now, it is straightforward to check that $\sin^2 F(t)=1-e^{-t/g}$, so if $\tau_c$ is the duration of the last thermalization stroke, applying this to the Otto cycle gives
\begin{align}
{K} = {K}_0(1-e^{-\tau_c/g_c}).
\end{align}
For perfect thermalization to occur in the last non-unitary stroke we need, in principle, $\tau_c\rightarrow \infty$ (on the scale of the bath correlation time). This shows that we can reach the maximum achievable coefficient of performance in the strongly coupled scenario. In this case we also notice that $\mathcal{W}=-(\mathcal{Q}_h+\mathcal{Q}_c)$, which is nothing but the first law of thermodynamics for a complete cycle. This confirms the consistency of our thermodynamic framework.
\begin{figure}[h]
\centering
\includegraphics[width=82mm]{plot3.pdf}
\caption{Plot of $f(t)\tan F(t)$ vs $t$ for Markovian (red dashed) and non-Markovian (solid blue) dynamics with $g=0.8$.}
\label{cycle2}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=82mm]{plot2.pdf}
\caption{Plot of $\mathcal{P}/\mathcal{P}_0$ vs $t$ for Markovian (red dashed) and non-Markovian (solid blue) dynamics with $g=0.8$.}
\label{cycle3}
\end{figure}
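The closed-form statements above for the Markovian choice of $f(t)$ can be verified numerically. A minimal sketch (ours; the integrable square-root singularity of $f$ at $t=0$ is handled adequately by the adaptive quadrature):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

g = 0.8
f = lambda t: np.exp(-t/(2*g)) / (2*g*np.sqrt(1 - np.exp(-t/g)))

for t in (0.5, 1.0, 2.0):
    F, _ = quad(f, 0.0, t)   # F(t) = int_0^t f(t') dt'
    assert np.isclose(F, np.pi/2 - np.arcsin(np.exp(-t/(2*g))))
    assert np.isclose(f(t)*np.tan(F), 1/(2*g))    # gamma(t) = 1/(2g) >= 0
    assert np.isclose(np.sin(F)**2, 1 - np.exp(-t/g))
\end{verbatim}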
We now choose the following form of $f(t)$, which gives non-Markovian dynamics according to the condition of Eq. (\ref{markovian-cond}). It can be thought of as a non-Markovian correction to the previous form of $f(t)$:
\begin{align}
f(t)&=\frac{ e^{- t/2 g}}{2 g \sqrt{1-e^{- t/g}}}-\frac{10 \sin (20 t)}{(10 t+1)^2}+\frac{20 \cos (20 t)}{10 t+1}.
\end{align}
One can easily check that this functional form gives rise to non-Markovian dynamics. In Fig. \ref{cycle2}, we plot $f(t)\tan F(t)$ against $t$; its non-negativity ensures Markovian dynamics. It is evident from the plot that, for the second form of $f(t)$, the condition breaks down, resulting in non-Markovian dynamics, whereas for the first form $f(t)\tan F(t)$ is always positive. Again, from Eq. (\ref{sys-evolved}), one can check that in the limit $t\rightarrow\infty$ an initially diagonal system state in the $\sigma_z$ basis thermalizes for the non-Markovian form of $f(t)$ as well. In Fig. \ref{cycle3}, we plot $\sin^2 F(t)$ against $t$ for $g=0.8$ to show the non-Markovian advantage for the power output of the Otto engine. Clearly, the oscillatory behavior of $\sin^2 F(t)$ in the non-Markovian scenario gives an enhancement over the Markovian one, as is evident from the expression for the power, $\mathcal{P}=\mathcal{P}_0\sin^2 F^h(\tau_h)$. With increasing time both reach the limit $\mathcal{P}_0$ of the weakly coupled Otto engine. The Otto refrigerator shows similar behavior.
\section{Conclusion}
\label{conclu}
In this paper we have studied a model of the quantum Otto cycle with single qubit baths. First, from the closed quantum evolution of two qubits with a specially chosen joint Hamiltonian, we derive an exact master equation for a single qubit in the form of a semi-group master equation. By tweaking the form of the joint Hamiltonian, one can end up with either Markovian or non-Markovian dynamics. Next we construct an Otto cycle employing this dynamics in the thermalization strokes to investigate the thermodynamic implications of this model. Our model provides a link to study the interplay between strong coupling and non-Markovianity. We employ the formalism of strongly coupled quantum thermodynamics to calculate the thermodynamic quantities of the Otto cycle for both the Markovian and non-Markovian scenarios. Interestingly, for the Otto engine, we find that the efficiency is always maximal, irrespective of whether the WM is fully or only partially thermalized in the non-unitary strokes, whereas for the refrigerator, perfect thermalization in the last stroke is needed to achieve the maximal coefficient of performance. On the other hand, with approximate thermalization, the power output of the strongly coupled Otto cycle is reduced. In this scenario, we can exploit the non-Markovianity, which provides an enhancement of performance over the Markovian counterpart. In the long time limit, the power output of both the Markovian and non-Markovian models reaches the limit of the weakly coupled cycle. Similar effects can be seen for the Otto refrigerator. It is important to note that these observations are based on the specific model we have chosen. This special model has enabled us to demonstrate the non-Markovian advantage for thermodynamic tasks even in the regime of strong coupling.
\acknowledgements
The work was supported by the Polish National Science Centre Project No. 2018/30/A/ST2/00837. SC would like to acknowledge Sibasish Ghosh for useful discussions on the problem.
\onecolumngrid
\section{Introduction}
Chiral Perturbation Theory (CHPT) has been tremendously successful in describing low-energy hadronic properties in the non-perturbative regime of Quantum Chromodynamics (QCD). Some of the major goals of low-energy QCD are the study of hadronic form factors, which reflect the static structure, and the investigation of the dynamical hadronic response to an external electromagnetic field via electric and magnetic polarizabilities. To study the dependence of the electric and magnetic polarizabilities on the photon energy, we use relativistic CHPT while applying the multipole expansion approach for the Compton structure functions. Unfortunately, the various versions of CHPT predict a rather broad spectrum of values for the polarizabilities, introducing theoretical uncertainty. However, so far CHPT is the only theory available in the regime of non-perturbative QCD, so the Computational Hadronic Model (CHM) employed here is based on relativistic CHPT. CHM gives us the opportunity to avoid the low-energy approximation in the Compton structure functions and retain all the possible degrees of freedom arising from the SU(3) chiral Lagrangian. The article is constructed as follows. Section 2 discusses the meson form factor and its role in pion electroproduction. Sections 3 and 4 are dedicated to the dynamical polarizabilities of the mesons and baryons, respectively. Section 5 briefly summarizes our conclusions.
\section{Pion Form Factor}
The spatial pion electromagnetic form factor has been addressed in \cite{Ga84,NR,JK,FC,IBG,MT} and is currently under experimental study \cite{Fpi}. To investigate the behavior of the pion form factor experimentally at momentum transfers $Q^{2}>0.3\, GeV^{2}$ and in the transitional region between long-distance and short-distance QCD, one must use the charged pion electroproduction process. The two-fold differential cross section for exclusive pion electroproduction can be parametrized by the well known formula \cite{EP1} in terms of photoabsorption cross sections, where each term corresponds to certain polarization states of the virtual photon:
\begin{eqnarray}
&&2\pi\frac{d^{2}\sigma}{dtd\phi}=\epsilon\frac{d\sigma_{L}}{dt}+ \frac{d\sigma_{T}}{dt}+ \nonumber \\
\nonumber \\
&&\sqrt{2\epsilon(\epsilon+1)} \frac{d\sigma_{LT}}{dt}\cos\phi +\epsilon\frac{d\sigma_{TT}}{dt}\cos2\phi . \label{eq:1}
\end{eqnarray}
Here, the subscripts L and T correspond to the longitudinal and transverse polarizations of the virtual photon, respectively, and $\epsilon$ is the virtual photon polarization parameter. The parameter $t$ is the (negative) squared momentum transfer to the hadronic target, and $\phi$ is the azimuthal angle of the detected hadron in the center-of-mass reference frame. If the experimental setup has azimuthal acceptance \cite{Fpi}, it is possible to determine the interference terms, $\sigma_{LT}$ and $\sigma_{TT}$, and then extract the longitudinal term $\sigma_{L}$ by the Rosenbluth separation. In the t-pole approximation, the longitudinal term, $\sigma_{L}$, is related to the pion form factor, $F{}_{\pi}$, in the following way:
\begin{eqnarray}
\frac{d\sigma_{L}}{dt}\propto-\frac{tQ^{2}}{t-m_{\pi}^{2}}g_{\pi NN}^{2}(t)F_{\pi}^{2}(Q^{2},t),\label{eq:2}
\end{eqnarray}
where $g{}_{\pi NN}(t)$ is the pion-nucleon coupling. Thus, Eq.(\ref{eq:2}) allows the extraction of the pion electromagnetic form factor at momentum transfers above the pion production threshold.
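To illustrate how azimuthal acceptance enables this separation, the following sketch (ours; the cross-section values are toy numbers, not fit results) generates the $\phi$ dependence of Eq. (\ref{eq:1}) and recovers the interference terms from its Fourier moments in $\phi$:
\begin{verbatim}
import numpy as np

def d2sigma(phi, eps, sL, sT, sLT, sTT):
    # Eq. (1): phi decomposition of the electroproduction cross section
    return (eps*sL + sT + np.sqrt(2*eps*(eps + 1))*sLT*np.cos(phi)
            + eps*sTT*np.cos(2*phi)) / (2*np.pi)

eps = 0.7
phi = np.linspace(0, 2*np.pi, 400, endpoint=False)
y = d2sigma(phi, eps, sL=1.2, sT=2.0, sLT=0.3, sTT=-0.4)

# Fourier moments in phi isolate the interference terms
sLT = 4*np.pi*np.mean(y*np.cos(phi)) / np.sqrt(2*eps*(eps + 1))
sTT = 4*np.pi*np.mean(y*np.cos(2*phi)) / eps
assert np.allclose([sLT, sTT], [0.3, -0.4])
\end{verbatim}
The remaining constant term, $\epsilon\sigma_L+\sigma_T$, is then split by the Rosenbluth separation, i.e., by measurements at two different values of $\epsilon$.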
As is well known, the determination of the pion form factor from pion electroproduction is affected by radiative corrections, $\delta=\frac{d\sigma_{obs}}{d\sigma_{o}}$. The radiative corrections to the electron current and vacuum polarization were calculated in \cite{AAB}, and the leading hadronic corrections (two-photon box diagrams) were addressed in \cite{AAB2}. In \cite{AAB2}, it was found that the two-photon box correction could reach as much as -20\% for the backward kinematics ($\epsilon\rightarrow0$) and high momentum transfers. In order to calculate the form factor of the pion, we use the CHM from \cite{CHM}, which is based on CHPT. Here, we do one-loop calculations with a subtractive renormalization scheme, with the scale fixed by the charge radius of the pion, $\langle r_{\pi}^{2}\rangle=(0.439\pm0.030)\,\mbox{fm}^{2}$. For the $\pi-\gamma-\pi$ interaction, we arrive at the following renormalized amplitude:
\begin{eqnarray}
M_{r}(q)=e\epsilon\cdot(p'+p) \ \bigg(1-\frac{q^{2}}{f_{\pi}^{2}}f_{1}(q^{2},\Lambda^{2},m_{\pi}^{2})\bigg)- \nonumber \\
e\frac{\epsilon\cdot q(p'^{2}-p^{2})}{f_{\pi}^{2}}f_{2}(q^{2},\Lambda^{2},m_{\pi}^{2}).\label{eq:3}
\end{eqnarray}
From this, we can form two form factors, one on-shell and one off-shell (if one of the pions is off-shell):
\begin{eqnarray}
&&F_{on-shell}(q^{2})=1-\frac{q^{2}}{f_{\pi}^{2}}f_{1}(q^{2},\Lambda^{2},m_{\pi}^{2})\label{eq:4}\\
&&F_{off-shell}(q^{2})=-\frac{p'^{2}-p^{2}}{f_{\pi}^{2}}f_{2}(q^{2},\Lambda^{2},m_{\pi}^{2}).\label{eq:5}
\end{eqnarray}
Here, $p$ and $p'$ are the momenta of the incoming and outgoing pion, $q$ is the momentum of the virtual photon, and the functions $f_{1, 2}(q^{2},\Lambda^{2},m_{\pi}^{2})$ depend on one- and two-point Passarino-Veltman functions. To incorporate CHPT into the calculations of the two-photon box correction, we fit a monopole form factor to the on-shell form factor of Eq. (\ref{eq:4}) for $Q^{2}<0.3\,\mbox{GeV}^{2}$ and get $F_{on-shell}(Q^{2})=\frac{\Phi^{2}}{\Phi^{2}+Q^{2}}$ with $\Phi=0.73\,\mbox{GeV}$. A reason to choose the monopole form factor is its asymptotic behavior at high momentum transfers, $F_{on-shell}(Q^{2})\big|_{Q^{2}\rightarrow\infty}=\frac{8\pi\alpha_{s}(Q^{2})f_{\pi}^{2}}{Q^{2}}$, driven by perturbative QCD. For the off-shell form factor in Eq.(\ref{eq:5}), we also choose a monopole form, fitted to the CHPT off-shell form factor, and get $F_{off-shell}(Q^{2},t)=(m_{\pi}^{2}-t)\frac{Q^{2}}{\Omega^{2}+Q^{2}}$ with $\Omega=2.5\,\mbox{GeV.}$
\begin{figure}[!htpb]
\begin{centering}
\includegraphics[scale=0.28]{pionFF}
\par\end{centering}
\caption{On-shell and off-shell pion form factors. The red line shows the monopole fit of the on-shell form factor to the CHPT results. Experimental data are taken from \cite{Amend,Acker,Brauel,Hub}. The off-shell form factor is shown for both low ($t=-0.005\,\mbox{GeV}^{2}$, blue line) and high ($t=-0.212\,\mbox{GeV}^{2}$, green line) hadronic momentum transfers.}
\label{fig3}
\end{figure}
From Fig.\ref{fig3}, one can see that our fit is in rather good agreement with the experimental data. We now use the fitted form factors in the calculation of the two-photon box pion electroproduction correction, with the same tools for the exact calculations as in \cite{AAB2}, for high ($Q^{2}=6.0\,\mbox{GeV}^{2}$) and low ($Q^{2}=0.3\,\mbox{GeV}^{2}$) momentum transfers (see Fig.\ref{fig3-1}).
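The fitted parametrizations quoted above can be coded directly; the following transcription (ours) is what enters the box-correction integrands:
\begin{verbatim}
import numpy as np

M_PI = 0.13957          # GeV, charged pion mass (standard value)
PHI, OMEGA = 0.73, 2.5  # GeV, monopole scales fitted to the CHPT form factors

def F_on_shell(Q2, phi=PHI):
    # Monopole fit valid below Q^2 = 0.3 GeV^2, with a pQCD-like 1/Q^2 tail
    return phi**2 / (phi**2 + Q2)

def F_off_shell(Q2, t, omega=OMEGA):
    # Off-shell monopole fit; vanishes on shell (t -> m_pi^2) and at Q^2 = 0
    return (M_PI**2 - t) * Q2 / (omega**2 + Q2)
\end{verbatim}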
\begin{figure*}[!htpb]
\begin{centering}
\includegraphics[scale=0.28]{TPE-high} \includegraphics[scale=0.28]{TPE-low}
\par\end{centering}
\caption{Two-photon box radiative correction for the exclusive pion electroproduction for $Q^{2}=6.0\,\mbox{GeV}^{2}$ (left plot) and $Q^{2}=0.3\,\mbox{GeV}^{2}$ (right plot). The corrections calculated with the monopole pion form factor for $\Phi=0.73\,\mbox{GeV}$ and $\Phi=0.85\,\mbox{GeV}$ are shown by the dashed green and solid red lines, respectively.}
\label{fig3-1}
\end{figure*}
The two lines, $\Phi=0.73\,\mbox{GeV}$ (green dashed line) and $\Phi=0.85\,\mbox{GeV}$ (red solid line), illustrate the sensitivity of the box correction to the choice of the scale in the form factor. As can be clearly seen from the right plot of Fig.\ref{fig3-1}, for the low momentum transfer the correction is rather sensitive to the scale, which induces an additional theoretical uncertainty due to the model dependence of the box correction. For the high momentum transfer (left plot of Fig.\ref{fig3-1}), we observe that the correction does not change much with the change of the scale in the form factor. Thus, we observe a reduced degree of model dependence in the correction for high momentum transfers. For the off-shell part of the pion form factor, we have observed a contribution of less than 1\% to the correction, which certainly diminishes any role of the off-shell form factor in the pion electroproduction process.
\section{Dynamical Polarizabilities of Mesons}
Experimentally, a unique opportunity to study the dynamical structure of hadrons over a wide kinematic range is provided by Compton scattering. In the non-relativistic approximation, the Hamiltonian related to the meson internal structure can be represented as
\begin{eqnarray}
H_{eff}=-\frac{1}{2}4\pi\alpha_{E}\vec{E}^{2}-\frac{1}{2}4\pi\beta_{M}\vec{H}^{2},\label{eq:6}
\end{eqnarray}
where $\alpha_{E}$ and $\beta_{M}$ are the electric and magnetic polarizabilities of the meson, respectively. Up to now, only the charged and neutral pion polarizabilities have been measured, producing a broad range of values \cite{Ahrens,Antipov,Ba92}. A precision measurement of the charged pion polarizability through the Primakoff two-pion photoproduction process with linearly polarized photons is planned at JLab. The cross section for this process,
\begin{eqnarray}
\frac{d^{2}\sigma}{d\Omega_{\pi\pi}dW_{\pi\pi}}=\frac{2\alpha_{f}Z^{2}}{\pi^{2}}\frac{E_{\gamma}^{4}\beta^{2}}{W_{\pi\pi}}\frac{\sin\theta_{\pi\pi}^{2}}{Q^{4}}|F(Q^{2})|^{2} \nonumber \\
\nonumber \\
\sigma(\gamma\gamma\rightarrow\pi\pi)(1+2P_{\gamma}\cos2\phi_{\pi\pi}),\label{eq:7}
\end{eqnarray}
is related to the photon fusion cross section, $\sigma(\gamma\gamma\rightarrow\pi\pi)$, which can easily be turned into a Compton cross section by means of crossing symmetry. In Eq.(\ref{eq:7}), $W_{\pi\pi}$ is the $\pi\pi$ invariant mass, Z is the atomic number, $E_{\gamma}$ is the energy of the incident photon, $F(Q^{2})$ is the electromagnetic form factor of the target, $\theta_{\pi\pi}$ is the lab angle of the two-pion system, $P_{\gamma}$ is the incident photon polarization and $\phi_{\pi\pi}$ is the azimuthal angle of the $\pi\pi$ system.
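Schematically, Eq. (\ref{eq:7}) factorizes into a kinematic weight times the photon fusion cross section, as in this sketch (ours; identifying $\beta$ with the two-pion velocity is our assumption here):
\begin{verbatim}
import numpy as np

ALPHA_F = 1/137.036   # fine structure constant
M_PI = 0.13957        # GeV

def primakoff_xsec(W, E_gamma, theta, phi, Q2, Z, F_Q2, P_gamma, sigma_gg):
    # Eq. (7): Primakoff two-pion photoproduction, given the photon fusion
    # cross section sigma_gg = sigma(gamma gamma -> pi pi) at W_pipi = W
    beta = np.sqrt(1 - 4*M_PI**2 / W**2)
    kin = (2*ALPHA_F*Z**2/np.pi**2) * E_gamma**4 * beta**2 / W \
          * np.sin(theta)**2 / Q2**2 * abs(F_Q2)**2
    return kin * sigma_gg * (1 + 2*P_gamma*np.cos(2*phi))
\end{verbatim}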
One can relate the photon fusion cross section, $\sigma(\gamma\gamma\rightarrow\pi\pi)$, to the polarizabilities of the pion \cite{Ba92,Do93} as
\begin{eqnarray}
\sigma_{\gamma\gamma\rightarrow\pi\pi}(|\cos\theta|<Z)=\frac{\kappa}{256\pi s^{2}}\int_{t_{a}}^{t_{b}}dt \nonumber \\
\nonumber \\
\nonumber \\
\Bigg(\bigg|m_{\pi}^{2}B_{o}-8\pi sm_{\pi}\beta+\frac{4\pi}{m_{\pi}}(\alpha+\beta)st\bigg|^{2}+\label{eq:8} \nonumber \\
\nonumber \\
\nonumber \\
\bigg|B_{o}+\frac{4\pi s}{m_{\pi}}(\alpha+\beta)\bigg|^{2}\frac{(m_{\pi}^{4}-tu)^{2}}{s^{2}}\Bigg),
\end{eqnarray}
where
\begin{eqnarray*}
&\displaystyle{B_{o}=16\pi\alpha_{f}{\displaystyle \frac{s}{(t-m_{\pi}^{2})(u-m_{\pi}^{2})}}|q|},
\end{eqnarray*}
and
\begin{eqnarray*}
&\displaystyle t_{b,a}=m_{\pi}^{2}-\frac{1}{2}s\pm\frac{sZ}{2}\beta(s).
\end{eqnarray*}
Here, $s,\, t$ and $u$ are the Mandelstam variables, $|q|$ is the meson charge, $\beta(s)=\sqrt{\frac{s-4m_{\pi}^{2}}{s}}$ is the center-of-mass velocity of the produced pions, and $\kappa=1$ or $2$ for a neutral or charged pion, respectively. Eqs. (\ref{eq:7}) and (\ref{eq:8}) are valid for any meson, not just pions. For real Compton scattering, we can construct an invariant amplitude,
\begin{eqnarray}
\ M(\gamma\pi\rightarrow\gamma'\pi)=\epsilon'^{\mu}\epsilon^{\nu}\ M_{\mu\nu}.\label{eq:9}
\end{eqnarray}
Here $\boldsymbol{\epsilon}$ and $\boldsymbol{\epsilon'}$ are the polarization vectors of the incoming and outgoing photons, respectively, and $M_{\mu\nu}$ is the Compton tensor, which is related to the two Compton structure functions ($A(s,t)$ and $B(s,t)$) in the following way:
\begin{eqnarray}
M_{\mu\nu}=A(s,t)\ T_{\mu\nu}^{(1)}+B(s,t)\ T_{\mu\nu}^{(2)}.\label{eq:10}
\end{eqnarray}
The Lorentz tensors, $T_{\mu\nu}^{(1)}$ and $T_{_{\mu\nu}}^{(2)}$, are
\begin{eqnarray}
&T_{\mu\nu}^{(1)} =-\displaystyle{\frac{t}{2}g_{\mu\nu}-k_{3,\mu}k_{1,\nu}}\nonumber \\
\nonumber \\
T_{\mu\nu}^{(2)}& =\displaystyle{\frac{1}{2t}(s-m_{\pi}^{2})(u-m_{\pi}^{2})g_{\mu\nu}+k_{2,\mu}k_{2,\nu}+}\nonumber \\
\nonumber \\
&\displaystyle{\frac{s-m_{\pi}^{2}}{t}k_{3,\mu}k_{3,\nu}-\frac{u-m_{\pi}^{2}}{t}k_{2,\mu}k_{1,\nu}}.\label{eq:11}
\end{eqnarray}
The amplitude related to the meson electric and magnetic polarizabilities can now be rewritten as
\begin{eqnarray}
\ M(\gamma\pi\rightarrow\gamma'\pi)=\alpha\omega^{2}(\boldsymbol{\epsilon}{}^{\prime*}\cdot\boldsymbol{\epsilon})+\beta\omega^{2}({\bf s}{}^{\prime*}\cdot{\bf s}),\label{eq:12}
\end{eqnarray}
where $\omega$ is the photon energy and $\boldsymbol{s}=(\boldsymbol{k}\times\boldsymbol{\epsilon})$ denotes the magnetic vector. Combining Eqs. (\ref{eq:10}) and (\ref{eq:12}), we get a connection between the polarizabilities and the Compton structure functions:
\begin{eqnarray}
\alpha(s,t)=-\frac{1}{8\pi m}\bigg(A(s,t)+\frac{s-3m^{2}}{t}B(s,t)\bigg)\nonumber \\
\label{eq:13}\\
\beta(s,t)=\frac{1}{8\pi m}\bigg(A(s,t)+\frac{s-m^{2}}{t}B(s,t)\bigg).\nonumber
\end{eqnarray}
The quantities in Eq.(\ref{eq:13}) are clearly energy-dependent, so we call them dynamical polarizabilities. In the limit $s\rightarrow m^{2}$, $t\rightarrow0$, we recover the static values of the polarizabilities. Using CHM and restricting our calculations to one loop in CHPT (two-loop SU(2) calculations were done in \cite{Ga06} and showed a rather small impact on the cross section in the Primakoff reaction), we calculate the dynamical polarizabilities of mesons, including the entire SU(3) octet of mesons in the loop integrals.
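Eq. (\ref{eq:8}) can be integrated numerically for given polarizabilities. The sketch below (ours) works in natural units, with $\alpha$ and $\beta$ in $\mbox{GeV}^{-3}$ (values quoted in $10^{-4}\,\mbox{fm}^{3}$ are converted by the factor shown), and uses $u=2m_{\pi}^{2}-s-t$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

ALPHA_F, M_PI = 1/137.036, 0.13957   # GeV units

def sigma_gg(s, alpha, beta_pol, Z=0.6, kappa=2, q=1.0):
    # Eq. (8): sigma(gamma gamma -> pi pi) with |cos(theta)| < Z
    m, v = M_PI, np.sqrt(1 - 4*M_PI**2/s)              # v = beta(s)
    ta, tb = m**2 - s/2 - s*Z*v/2, m**2 - s/2 + s*Z*v/2
    def integrand(t):
        u = 2*m**2 - s - t
        B0 = 16*np.pi*ALPHA_F*s / ((t - m**2)*(u - m**2)) * q
        ab = 4*np.pi*(alpha + beta_pol)
        return (abs(m**2*B0 - 8*np.pi*s*m*beta_pol + ab/m*s*t)**2
                + abs(B0 + ab*s/m)**2 * (m**4 - t*u)**2 / s**2)
    val, _ = quad(integrand, ta, tb)
    return kappa/(256*np.pi*s**2) * val

conv = 1e-4 / 0.19733**3                 # 10^-4 fm^3 -> GeV^-3
print(sigma_gg(0.1, 2.83*conv, -2.76*conv))   # result in GeV^-2
\end{verbatim}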
\begin{figure*}[!htpb]
\begin{centering}
\includegraphics[scale=0.28]{pion-pol-dyn} \includegraphics[scale=0.29]{pion-cross}
\par\end{centering}
\caption{Left graph: energy dependence of the pion electric polarizability. Right graph: $\gamma\gamma\rightarrow\pi\pi$ cross section, where the red line corresponds to the Born cross section, the blue line is the cross section obtained with the static CHPT value of the polarizability, and the green line shows the cross section with the dynamical polarizabilities from Eq.(\ref{eq:13}). The data points are taken from MARK-II \cite{Bo92}.}
\label{fig5}
\end{figure*}
In addition, we include a structure-dependent pole contribution arising from the vector mesons, in a similar fashion to \cite{Ba92,Do93}. We then have the following static electric and magnetic polarizabilities (in units of $10^{-4}\mbox{fm}^{3}$):
\begin{eqnarray}
&&\alpha_{\pi^{\pm}}=\frac{8\alpha_{f}}{m_{\pi}f_{\pi}^{2}}(L_{9}+L_{10})=2.83; \nonumber \\
&&\beta_{\pi^{\pm}}=-\alpha_{\pi^{\pm}}+\frac{m_{\pi}}{4\pi}\frac{G_{\rho}}{M_{\rho}^{2}-m_{\pi}^{2}}=-2.76; \label{eq:14} \\
&&\alpha_{\pi^{0}}=-\frac{\alpha_{f}}{48\pi^{2}m_{\pi}f_{\pi}^{2}}=-0.50; \nonumber \\
&&\beta_{\pi^{0}}=-\alpha_{\pi^{0}}+\frac{m_{\pi}}{4\pi}\sum_{V=\rho,\omega}\frac{G_{V}}{M_{V}^{2}-m_{\pi}^{2}}=1.25 . \nonumber
\end{eqnarray}
The pion coupling constant is $f_{\pi}=\sqrt{2}F_{\pi}=130.7\,\mbox{MeV}$, $\alpha_{f}$ is the fine structure constant, the low energy constants are $L_{9}=(5.99\pm0.43)\cdot10^{-3}$ and $L_{10}=(-4.5\pm0.7)\cdot10^{-3}$, and for the vector meson coupling constants we use $G_{\rho}=0.044\,\mbox{GeV}$ and $G_{\omega}=0.495\,\mbox{GeV}$. The polarizabilities for the octet of mesons are summarized in Table \ref{tbl1}.
\begin{table}
\begin{centering}
\begin{tabular}{|c|c|c|}
\hline
($10^{-4}\,\mbox{fm}^{3}$) & $\alpha-\beta$ & $\alpha+\beta$\tabularnewline
\hline
\hline
$\pi^{\pm}$ & 5.59 & 0.07\tabularnewline
\hline
$\pi^{0}$ & -1.75 & 0.75\tabularnewline
\hline
$\eta$ & -0.044 & 0.0\tabularnewline
\hline
$K^{\pm}$ & 0.88 & 0.0\tabularnewline
\hline
$K^{0}$ & 0.0032 & 0.0\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{Meson electric and magnetic static polarizabilities}
\label{tbl1}
\end{table}
To extract the static pion polarizabilities from the $\gamma\gamma\rightarrow\pi\pi$ cross section, one can use Eq.(\ref{eq:13}) (with the crossing symmetry $s\rightarrow t$ and $t\rightarrow s$) substituted into Eq.(\ref{eq:8}). The important role of the energy dependence of the polarizabilities in the description of the $\gamma\gamma\rightarrow\pi\pi$ cross section can be seen in Fig.\ref{fig5}. In Fig.\ref{fig6}, it is clearly visible that the dynamical polarizability of the kaon has a very strong influence on the $\gamma\gamma\rightarrow K^{+}K^{-}$ cross section.
\begin{figure*}[!htpb]
\begin{centering}
\includegraphics[scale=0.28]{kaon-pol-dyn} \includegraphics[scale=0.28]{kaon-cross}
\par\end{centering}
\caption{Charged kaon dynamical polarizability and prediction for the $\gamma\gamma\rightarrow K^{+}K^{-}$ cross section. On the right plot, the red line is the Born cross section, the blue line corresponds to the static polarizability and the green line to the dynamical polarizability cross section.}
\label{fig6}
\end{figure*}
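The numbers in Eq. (\ref{eq:14}) follow directly from the quoted constants. The following check (ours; the $\rho$ meson mass is our assumed input, and small differences with respect to the quoted values reflect rounding of the constants) reproduces the charged pion entries:
\begin{verbatim}
import numpy as np

ALPHA_F = 1/137.036
HBARC = 0.19733                  # GeV fm
M_PI, F_PI = 0.13957, 0.1307     # GeV; f_pi = sqrt(2) F_pi = 130.7 MeV
L9, L10 = 5.99e-3, -4.5e-3       # low energy constants
G_RHO, M_RHO = 0.044, 0.775      # GeV; rho mass assumed, not quoted above

to_units = HBARC**3 * 1e4        # GeV^-3 -> 10^-4 fm^3

alpha_pi = 8*ALPHA_F*(L9 + L10) / (M_PI*F_PI**2) * to_units
beta_pi = -alpha_pi + M_PI/(4*np.pi) * G_RHO/(M_RHO**2 - M_PI**2) * to_units
print(alpha_pi, beta_pi)         # ~ 2.8 and ~ -2.7, cf. Eq. (14)
\end{verbatim}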
\section{Dynamical Polarizabilities of Baryons}
The electric and magnetic polarizabilities of baryons introduce an additional contribution to the effective Hamiltonian of a baryon in the electromagnetic field, in the same way as defined in Eq.(\ref{eq:6}). The current PDG \cite{PDG} experimental values of the electric and magnetic polarizabilities of the proton and neutron are (in units of $10^{-4} \mbox{fm}^3$):
\begin{eqnarray*}
\alpha_{p}=12.0\pm0.6; & & \beta_{p}=1.9\pm0.5;\\
\alpha_{n}=11.6\pm1.5; & & \beta_{n}=3.7\pm2.0 .
\end{eqnarray*}
In order to evaluate the polarizabilities theoretically, one can use Compton scattering and relate the amplitude to the set of Compton structure functions $R_{i}$ \cite{Babusci} in the following way:
\begin{eqnarray}
&\displaystyle{\frac{1}{8\pi W}}M(\gamma B\rightarrow\gamma'B) = R_{1}(\boldsymbol{\epsilon}{}^{\prime*}\cdot\boldsymbol{\epsilon})+R_{2}({\bf s}{}^{\prime*}\cdot{\bf s})+ \nonumber \\
&iR_{3}\boldsymbol{\sigma}\cdot(\boldsymbol{\epsilon}{}^{\prime*}\times\boldsymbol{\epsilon})+iR_{4}\boldsymbol{\sigma}\cdot({\bf s}{}^{\prime*}\times{\bf s})+\nonumber \\
&iR_{5}((\boldsymbol{\sigma}\cdot\hat{{\bf k}})({\bf s^{\prime*}}\cdot\boldsymbol{\epsilon})-(\boldsymbol{\sigma}\cdot\hat{{\bf k}}^{\prime})({\bf s}\cdot\boldsymbol{\epsilon^{\prime*}}))+\nonumber \\
&iR_{6}((\boldsymbol{\sigma}\cdot\hat{{\bf k}}^{\prime})({\bf {\bf s^{\prime*}}}\cdot\boldsymbol{\epsilon})-(\boldsymbol{\sigma}\cdot\hat{{\bf k}})({\bf s}\cdot\boldsymbol{\epsilon^{\prime*}})) . \label{eq:4-1}
\end{eqnarray}
Here, $W=\omega+\sqrt{\omega^{2}+m_{B}^{2}}$ is the center-of-mass energy and $\omega$ is the energy of the incoming photon. The unit magnetic vector (${\bf s}=(\hat{{\bf k}}\times\boldsymbol{\epsilon}$)), the polarization vector ($\boldsymbol{\epsilon}$) and the unit momentum of the photon (${\displaystyle \hat{{\bf k}}=\frac{{\bf k}}{k}}$) are denoted with a prime in the case of the outgoing photon. Although the choice of the basis for the invariant Compton amplitude is not unique and it could be chosen differently \cite{Babusci}, the basis in Eq.(\ref{eq:4-1}) is more convenient for the evaluation of the polarizabilities, because in this basis the structure functions, $R_{i}$, are directly related to the electric, magnetic and spin-dependent polarizabilities in the multipole expansion. It is well known that parameters such as the polarizabilities are determined by the non-Born contributions to the Compton structure functions. These include loops (up to the given order of perturbation) and structure-dependent pole contributions, such as tree-level baryon resonance excitations and the WZW anomalous interaction (which contributes only to the backward spin-dependent polarizability). If in the multipole expansion of the Compton structure functions \cite{multi-1,multi-2,multi-3} we keep only the dipole-dipole and dipole-quadrupole transitions, we obtain simple equations connecting the non-Born (NB) structure functions to the polarizabilities of the baryon:
\begin{eqnarray}
&R_{1}^{NB}=\omega^{2}\alpha_{E1};\ \ R_{2}^{NB}=\omega^{2}\beta_{M1}; \nonumber \\
\nonumber \\
&R_{3}^{NB}=\omega^{3}(-\gamma_{E1E1}+\gamma_{E1M2}); \nonumber \\
\nonumber \\
&R_{4}^{NB}=\omega^{3}(-\gamma_{M1M1}+\gamma_{M1E2});\nonumber \\
\nonumber \\
&R_{5}^{NB}=-\omega^{3}\gamma_{M1E2};\ \ R_{6}^{NB}=-\omega^{3}\gamma_{E1M2}.\label{eq:6-1}
\end{eqnarray}
Although the polarizabilities in Eq.(\ref{eq:6-1}) are defined as constants, it is essential to treat them as energy-dependent quantities \cite{Griesshammer}, because the Compton scattering experiments were performed with 50 to 800 MeV photons and hence require additional theoretical information to extrapolate the results to zero-energy parameters.
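In practice one proceeds in the opposite direction: given the non-Born structure functions at photon energy $\omega$, the dipole polarizabilities follow by inverting Eq. (\ref{eq:6-1}). A minimal sketch (ours):
\begin{verbatim}
def dipole_polarizabilities(R, omega):
    # R = (R1,...,R6): non-Born structure functions at energy omega, Eq. (6-1)
    aE1, bM1 = R[0]/omega**2, R[1]/omega**2
    g_M1E2, g_E1M2 = -R[4]/omega**3, -R[5]/omega**3
    g_E1E1 = g_E1M2 - R[2]/omega**3   # from R3 = w^3 (-gE1E1 + gE1M2)
    g_M1M1 = g_M1E2 - R[3]/omega**3   # from R4 = w^3 (-gM1M1 + gM1E2)
    return aE1, bM1, g_E1E1, g_M1M1, g_E1M2, g_M1E2
\end{verbatim}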
The polarizabilities become energy-dependent due to internal relaxation mechanisms, resonances, and particle production thresholds. Accordingly, while for the static polarizabilities we only keep orders up to $\mathcal{O}(\omega^{2})$ for $R_{1,2}$ and up to $\mathcal{O}(\omega^{3})$ for $R_{3,4,5,6}$, for the energy-dependent dynamical polarizabilities we keep all orders in $\omega$ in the Compton structure functions. The calculation of the Compton structure functions up to one-loop order in the framework of relativistic CHPT was made possible by CHM \cite{CHM}. In addition, the structure-dependent pole contribution to the nucleon polarizabilities has been taken into account in the form of the nucleon $\Delta$-resonance excitation. A Lagrangian which describes the nucleon-to-resonance radiative transition is given in the form of a contact term:
\begin{eqnarray}
& \displaystyle{\mathcal{L}^{\Delta N\gamma}=i\Theta\frac{e}{\Lambda}\bar{N}\gamma^{\mu}\gamma_{5}Q\Delta^{\nu}F_{\mu\nu},}\nonumber \\
\label{eq:9-1}\\
& F_{\mu\nu}=\partial_{\mu}\mathcal{A}_{\nu}-\partial_{\nu}\mathcal{A}_{\mu}.\nonumber
\end{eqnarray}
Here, $\Lambda\sim1\,\mbox{GeV}$ is the scale of chiral symmetry breaking and $\Theta$ is the coupling strength, determined from the branching ratio of the radiative decay, $\Delta\rightarrow N\gamma$. The $\Delta$ resonance is described by the propagator of the spin-3/2 Rarita-Schwinger field:
\begin{eqnarray}
\Pi_{\Delta}^{\mu\nu}=\frac{1}{2m}\frac{\not p+m}{p^{2}-m^{2}+im\Gamma}\Big(g^{\mu\nu}-\frac{1}{3}\gamma^{\mu}\gamma^{\nu}-\nonumber \\
\nonumber \\
\frac{2p^{\mu}p^{\nu}}{3m^{2}}+\frac{p^{\mu}\gamma^{\nu}-p^{\nu}\gamma^{\mu}}{3m}\Big).\label{eq:10-1}
\end{eqnarray}
The polarizabilities calculated for the proton for photon energies up to 300 MeV are shown in Fig.\ref{ff1}. It is evident that below 50 MeV these polarizabilities show little energy dependence. For the neutron, the energy dependence of the dynamical polarizabilities is very similar, except that the values are larger in absolute terms, so we only describe the dynamical polarizabilities of the proton.
\begin{figure*}[!htpb]
\begin{centering}
\begin{tabular}{cc}
\includegraphics[scale=0.37]{ProtonAlpha} & \includegraphics[scale=0.37]{ProtonBeta}\tabularnewline
& \tabularnewline
\end{tabular}
\par\end{centering}
\centering{}\caption{Dependence of the proton electric and magnetic polarizabilities (in units of $10^{-4}\,\mbox{fm}^{3}$) on the photon energy, $\omega$ (GeV), in the center-of-mass reference frame. Green-dashed curves correspond to the meson-nucleon loop contribution only; solid-red curves include the $\Delta$-resonance pole contribution.}
\label{ff1}
\end{figure*}
The electric polarizability of the proton has a very strong resonance-type dependence near the pion production threshold. The $\Delta$-pole contribution has a small effect, consistently reducing the $\alpha_{p}(\omega)$ values at all energies. Of course, to make final predictions for the CHPT values of the polarizabilities, we need to add the contribution from the resonances in the loops of Compton scattering. Hence, in order to compare our results with the experimental values, we have borrowed the resonance-loop results from the small-scale expansion (SSE) approach \cite{SSE}. If no $\Delta$-pole contribution is added, the magnetic polarizability in Fig.\ref{ff1} stays negative (diamagnetic) for almost all energies.
The $\Delta$-pole contribution is large and shifts $\beta_{p}(\omega)$ from negative to positive (paramagnetic) values for energies up to 250 MeV. This behavior is quite natural, since the pion loop calculations reflect the magnetic polarizability coming from the virtual diamagnetic pion cloud, while the $\Delta$-resonance contribution to $\beta_{p}(\omega)$ is driven by the strong paramagnetic core of the nucleon. Our results for the proton polarizabilities calculated in relativistic CHPT up to one-loop order, including the $\Delta$-pole and SSE contributions, are the following (in units of $10^{-4}\,\mbox{fm}^{3}$):
\begin{eqnarray*}
& \alpha_{p}=(7.38\,(\pi-\mbox{loop})-0.95\,(\Delta-\mbox{pole})+\nonumber \\
&4.2\,(\mbox{SSE}))=10.63;\\
\\
& \beta_{p}=(-2.20\,(\pi-\mbox{loop})+3.0\,(\Delta-\mbox{pole})+\nonumber \\
&0.7\,(\mbox{SSE}))=1.49.
\end{eqnarray*}
The static electric and magnetic polarizabilities of hyperons were first calculated in \cite{Meissner} in heavy baryon chiral perturbation theory. The dynamical electric and magnetic polarizabilities of hyperons were first calculated in \cite{AB}. In Fig.\ref{ff2}, we provide our results for the dynamical electric and magnetic polarizabilities of hyperons, using the basis of Eq.(\ref{eq:4-1}) in the Compton scattering amplitude.
\begin{figure*}[!htpb]
\begin{centering}
\begin{tabular}{ccc}
\includegraphics[scale=0.25]{SigmaZAlphaBeta} & \includegraphics[scale=0.25]{SigmaPAlphaBeta} & \includegraphics[scale=0.25]{SigmaMAlphaBeta}\tabularnewline
\includegraphics[scale=0.25]{LambdaZAlphaBeta} & \includegraphics[scale=0.25]{CascadeZAlphaBeta} & \includegraphics[scale=0.25]{CascadeMAlphaBeta}\tabularnewline
\end{tabular}
\par\end{centering}
\caption{Electric and magnetic dynamical polarizabilities of hyperons in units of $10^{-4}\,\mbox{fm}^{3}$ as a function of the photon energy, $\omega$ (GeV). Here, the solid-red line represents the electric polarizability, and the dashed-green line is the magnetic polarizability.}
\label{ff2}
\end{figure*}
For all the polarizabilities shown in Fig.\ref{ff2}, the electric polarizabilities have very similar resonant-type behavior near the meson-production thresholds, and the magnetic polarizabilities of all hyperons have negative low-energy (static) values. Once again, it is important to include both pole and loop resonance contributions for a complete analysis. It is clear that for all the dynamical polarizabilities of the SU(3) octet of baryons, the values are strongly governed by the excitation mechanism reflected in the meson production peaks. Hence, the study of these polarizabilities directly probes the internal degrees of freedom governing the baryon structure at low energies.
\section{Conclusions}
In this work, we have evaluated the pion form factor and investigated its influence on the behavior of the two-photon box pion electroproduction correction. We also calculated the dynamical polarizabilities of mesons, including the entire SU(3) octet of mesons in the loop integrals, using relativistic CHPT implemented in CHM. The dynamical electric and magnetic polarizabilities of the SU(3) octet of baryons were investigated in detail. We found that the predictions of the chiral theory derived from our calculations (up to one-loop order and not including resonances in the loop calculations) are somewhat consistent with the experimental results. The dependencies for a range of photon energies covering the majority of the meson photoproduction channels were analyzed.
These extensive calculations are made possible by the recent implementation of semi-automated calculations in CHPT, which allows the evaluation of polarizabilities from Compton scattering up to next-to-leading order. Our current goal is the calculation of dynamical polarizabilities with baryon resonances in the loops. There is still some disagreement between the heavy-baryon and relativistic versions of CHPT, introducing theoretical uncertainty. Clearly, further experimental work is needed, especially for the hyperon and strange meson polarizabilities, which would help with the further development of the theory.
\section{Acknowledgements}
This work has been supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).
\nocite{*}
\bibliographystyle{elsarticle-num}
\section{Introduction}\label{Sec:intro}
A stochastic process $\{X(t),t\ge 0\}$ with finite variance taking values in $\mathbb{R}$ is said to be \emph{self-similar} if there is a constant $H>0$, called the \emph{Hurst coefficient}, such that for any scaling factor $a>0$, $X(at)\overset{f.d.d.}{=}a^H X(t)$, where $\overset{f.d.d.}{=}$ means equality of finite-dimensional distributions. If a self-similar process $\{X(t),t\ge 0\}$ also has stationary increments, namely, if for any $h\ge 0$, $\{Y(t):=X(t+h)-X(t),t\ge 0\}$ is a stationary process, then we say that $\{X(t),t\ge 0\}$ is $H$-sssi. The natural range of $H$ is $(0,1)$, which implies $\mathbb{E} X(t)=0$ for all $t\ge 0$. We refer the reader to Chapter 3 of \citet{embrechts:maejima:2002:selfsimilar} for details. The fundamental theorem of Lamperti (\citet{lamperti:1962:semi}) states that $H$-sssi processes are the only possible limit laws of normalized partial sums of stationary sequences, that is, if
$$ \frac{1}{A(N)}\sum_{n=1}^{[Nt]}X(n)\overset{f.d.d.}{\longrightarrow} Y(t) $$
and $A(N)\rightarrow\infty$ as $N\rightarrow\infty$, where $\{X(n)\}$ is stationary, then $\{Y(t),t\ge 0\}$ has to be $H$-sssi for some $H>0$, and $A(N)$ has to be regularly varying with exponent $H$. The notation $\overset{f.d.d.}{\longrightarrow}$ stands for convergence in finite-dimensional distributions (f.d.d.). The best known instance of Lamperti's fundamental theorem is when $\{X(n)\}$ is an i.i.d.\ or a \emph{short-range dependent} (SRD) sequence, in which case the limit $Y(t)$ is Brownian motion, which is $\frac{1}{2}$-sssi. If $\{X(n)\}$ has \emph{long-range dependence} (LRD), the limit $Y(t)$ is often $H$-sssi with $H>1/2$. The most typical $H$-sssi process is fractional Brownian motion $B_H(t)$, but there are also non-Gaussian processes, e.g., \emph{Hermite processes} (\citet{taqqu:1979:convergence}, \citet{dobrushin:major:1979:non}). The Hermite process of order $1$ is fractional Brownian motion, but when the order is greater than or equal to $2$, its law belongs to a higher-order Wiener chaos (see, e.g., \citet{peccati:taqqu:2011:wiener}) and is thus non-Gaussian. The Hermite processes have attracted a lot of attention. The first-order Hermite process, namely fractional Brownian motion, has been studied intensively by numerous researchers since its popularization by \citet{mandelbrot:vanness:1968:fractional}, and we refer the reader to the recent monograph \citet{nourdin:2012:selected} and the references therein. The second-order Hermite process, namely the Rosenblatt process, has also been investigated in a number of papers. Recent works include \citet{tudor:2008:analysis}, \citet{bardet:tudor:2010:wavelet}, \citet{veillette:taqqu:2012:properties}, \citet{maejima:tudor:2007:wiener,maejima:tudor:2013:distribution}. Hermite processes frequently appear in statistical inference problems involving LRD, e.g., \citet{levy:boistard:taqqu:reisen:2011:asymptotic}, \citet{dehling:rooch:2012:non}. It is interesting to note that when the stationary sequence $\{X(n)\}$ is LRD, one can obtain in the limit a much richer class of processes, whereas in the SRD case, one obtains only Brownian motion. Limit theorems involving $H$-sssi processes other than Brownian motion are often called \emph{non-central limit theorems}.
While Hermite processes are the main examples of $H$-sssi processes obtained as limits of partial sums of finite-variance LRD sequences, very few other limiting $H$-sssi processes have been considered; exceptions include \citet{rosenblatt:1979:some} and \citet{major:1981:limit}. In this paper, we introduce a broad class of $H$-sssi ($H>1/2$) processes $\{Z(t),t\ge 0\}$ with their laws in Wiener chaos, which includes the Hermite processes as a special case. These processes are defined as $Z(t)=I_k(h_t)$, where $I_k(\cdot)$ denotes the $k$-tuple Wiener-It\^o integral, and
$$h_t(x_1,\ldots,x_k):=\int_0^t g(s-x_1,\ldots,s-x_k)\mathrm{1}_{\{s>x_1,\ldots,s>x_k\}}ds,$$
with $g$ being a suitable homogeneous function on $\mathbb{R}_+^k$ called a \emph{generalized Hermite kernel}. For example,
\begin{equation}\label{eq:eg}
g(x_1,\ldots,x_k)=\max\left( \frac{x_1\ldots x_k}{ x_1^{k-\alpha}+\ldots +x_k^{k-\alpha}} ,~ x_1^{\alpha/k}\ldots x_k^{\alpha/k}\right), \quad \mathbf{x} \in \mathbb{R}_+^k, ~\alpha\in (-k/2-1/2,-k/2).
\end{equation}
We call the corresponding $H$-sssi process $Z(t)$ a \emph{generalized Hermite process}. We then construct a class of \emph{discrete chaos processes} as
$$X(n)=\sum_{(i_1,\ldots,i_k)\in\mathbb{Z}_+^k}' g(i_1,\ldots,i_k)\epsilon_{n-i_1}\ldots \epsilon_{n-i_k},$$
where $\{\epsilon_i\}$ is an i.i.d.\ noise sequence, and the prime $'$ indicates exclusion of the diagonals $i_p=i_q$, $p\neq q$. We show that the normalized partial sum of $X(n)$ converges to the generalized Hermite process $Z(t)$ defined by the same $g$. We also obtain processes with $H\in(0,1/2)$ by applying an additional fractional filter. The increments of these processes have negative dependence. Finally, we state a multivariate limit theorem which mixes central and non-central limits, including cases where there is an additional fractional filter. The paper is organized as follows. In Section 2, we review the Hermite processes. In Section 3, the generalized Hermite processes are introduced. In Section 4, we consider the discrete chaos processes. In Section 5, we prove a hypercontractivity relation for infinite discrete chaos. In Section 6, we show that the discrete chaos processes converge weakly to the generalized Hermite processes, including situations where $H<1/2$.
\section{Brief review of Hermite processes}\label{Sec:Review}
The Hermite processes are defined with the aid of a multiple stochastic integral called the \emph{Wiener-It\^o integral}. We give here a brief introduction to this integral. For the proofs of our statements and additional details, we refer the reader to \citet{major:1981:multiple} and \citet{nualart:2006:malliavin}, for example. The Wiener-It\^o integral is defined for any $f\in L^2(\mathbb{R}^k)$ as
\[ I_k(f):=\int_{\mathbb{R}^k}' f(x_1,\ldots,x_k) W(dx_1)\ldots W(dx_k), \]
where $W(\cdot)$ is Brownian motion viewed as a random integrator, and the prime $'$ indicates that we don't integrate on the diagonals $x_p=x_q$, $p\neq q$. The integral $I_k(\cdot)$ can be defined first for elementary functions $f=\sum_{i=1}^n a_i \mathrm{1}_{A_i}$, where the $A_i$'s are off-diagonal cubes in $\mathbb{R}^k$. This results in a linear combination of $k$-fold products of independent centered Gaussian random variables. One then extends this in the usual way to any $f\in L^2(\mathbb{R}^k)$.
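As a concrete illustration of this construction, the following Monte Carlo sketch (ours) discretizes $W(\cdot)$ on $[0,1)$, evaluates $I_2(f)$ for an elementary off-diagonal kernel, and compares the sample variance with $2\|\tilde{f}\|_{L^2}^2$, anticipating the isometry property stated next:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, reps, dx = 200, 4000, 1.0/200
grid = (np.arange(n) + 0.5) * dx

# elementary kernel f = 1 on [0,1/2) x [0,4/5), with the diagonal removed
F = np.where((grid[:, None] < 0.5) & (grid[None, :] < 0.8), 1.0, 0.0)
np.fill_diagonal(F, 0.0)

samples = np.empty(reps)
for r in range(reps):
    dW = rng.standard_normal(n) * np.sqrt(dx)   # Brownian increments
    samples[r] = dW @ F @ dW                    # discretized I_2(f)

Fsym = 0.5 * (F + F.T)                          # symmetrization of f
print(samples.var(), 2*np.sum(Fsym**2)*dx**2)   # both approx 0.65
\end{verbatim}
The discrete chaos processes of Section 4 are built in the same off-diagonal fashion, with the i.i.d.\ noise $\epsilon_i$ in place of the Brownian increments.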
Here we state the following important properties of the Wiener-It\^o integral $I_k(\cdot)$:
\begin{enumerate}
\item $I_k(\cdot)$ is a linear mapping from $L^2(\mathbb{R}^k)$ to $L^2(\Omega)$.
\item If $f_\sigma(x_1,\ldots,x_k):=f(x_{\sigma(1)},\ldots,x_{\sigma(k)})$, where $\sigma$ is any permutation of $(1,\ldots,k)$, then $I_k(f_\sigma)=I_k(f)$. It hence suffices to focus on symmetric integrands (symmetrize $f$ as
$$ \tilde{f}(x_1,\ldots,x_k):=\frac{1}{k!}\sum_{\sigma}f(x_{\sigma(1)},\ldots,x_{\sigma(k)}) $$
when necessary).
\item Suppose $f\in L^2(\mathbb{R}^p)$ and $g\in L^2(\mathbb{R}^q)$, and both are symmetric. Then
\[ \mathbb{E} I_p(f) I_q (g) = \begin{cases} k! \langle f,g\rangle_{L^2(\mathbb{R}^k)}=k!\int_{\mathbb{R}^k}f(\mathbf{x})g(\mathbf{x})d\mathbf{x}, & \text{ if } p=q=k; \\ 0, & \text{ if } p\neq q. \end{cases} \]
If $f\in L^2(\mathbb{R}^k)$ is not symmetric, one gets
\[ \mathbb{E} I_k(f)^2 = k!\|\tilde{f}\|_{L^2(\mathbb{R}^k)}^2\le k! \|{f}\|_{L^2(\mathbb{R}^k)}^2. \]
\end{enumerate}
An Hermite process of order $k$ is an $H$-sssi process with $1/2<H<1$ which is represented by the following Wiener-It\^o integral:
\begin{equation}\label{eq:Herm TimeDomain}
Z_H^{(k)}(t)=a_{k,d} \int'_{\mathbb{R}^k}~\int_0^t \prod_{j=1}^k (s-x_j)_+^{d-1}ds~ W(dx_1)\ldots W(dx_k),
\end{equation}
where $a_{k,d}$ is a positive constant that makes $\mathrm{Var}(Z_H^{(k)}(1))=1$. We call (\ref{eq:Herm TimeDomain}) the \emph{time-domain representation}. It is known that Hermite processes admit other representations in terms of Wiener-It\^o integrals (see \citet{pipiras:taqqu:2010:regularization}), among which we note the \emph{spectral-domain representation}:
\begin{equation}\label{eq:Herm SpecDomain}
Z_H^{(k)}(t)=b_{k,d}\int''_{\mathbb{R}^k}\frac{e^{i(u_1+\ldots+u_k)t}-1}{i(u_1+\ldots+u_k)} |u_1|^{-d}\ldots |u_k|^{-d} \widehat{W}(du_1)\ldots\widehat{W}(du_k),
\end{equation}
where $\widehat{W}(\cdot)$ is a complex-valued Brownian motion (with independent real and imaginary parts) viewed as a random integrator (see, e.g., p.22 of \citet{embrechts:maejima:2002:selfsimilar}), the double prime $''$ indicates the exclusion of the hyper-diagonals $u_p=\pm u_q$, $p\neq q$, and $b_{k,d}$ is a positive constant that makes $\mathrm{Var}(Z_H^{(k)}(1))=1$. In the sequel, we use $\widehat{I}_k(\cdot)$ to denote a $k$-tuple Wiener-It\^o integral with respect to the complex-valued Brownian motion $\widehat{W}(\cdot)$. In fact, the kernel inside the Wiener-It\^o integral in (\ref{eq:Herm SpecDomain}) is the Fourier transform of the kernel in (\ref{eq:Herm TimeDomain}), up to some unimportant factors. The connection between the time-domain and spectral-domain representations is through the following general result:
\begin{Pro}\label{Pro:Time<->Spec}(Proposition 9.3.1 of \citet{peccati:taqqu:2011:wiener}) Let $g_j(\mathbf{x})$ be a real-valued function in $L^2(\mathbb{R}^{k_j})$, $j=1,\ldots,J$. Let
$$\widehat{g}_j(\mathbf{u})=\int_{\mathbb{R}^{k_j}} g_j(\mathbf{x}) e^{i\langle \mathbf{u},\mathbf{x} \rangle} d\mathbf{x} $$
be the Fourier transform. Then
\[ \Big(I_{k_1}(g_1),\ldots,I_{k_J}(g_J)\Big)\overset{d}{=}\left((2\pi)^{-k_1/2}\widehat{I}_{k_1}(\widehat{g}_1 w_1^{\otimes_{k_1}}),\ldots,(2\pi)^{-k_J/2}\widehat{I}_{k_J}(\widehat{g}_J w_J^{\otimes_{k_J}})\right), \]
for any $|w_j(u)|=1$ and $w_j(u)=\overline{w_j(-u)}$, $j=1,\ldots,J$, where $w^{\otimes k}(u_1,\ldots,u_k):=w(u_1)\ldots w(u_k)$.
\end{Pro}
The factors $w_j^{\otimes {k_j}}$, $j=1,\ldots,J$, do not change the distributions, due to the change-of-variable formula for Wiener-It\^o integrals (see, e.g., Proposition 4.2 of \citet{dobrushin:1979:gaussian}). The Hermite process of order $k=1$ is fractional Brownian motion $B_H(t)$, and that of order $k=2$ is called the \emph{Rosenblatt process}, whose marginal distribution was discovered by \citet{rosenblatt:1961:independence}. We note that all $H$-sssi processes with unit variance at $t=1$ have covariance
$$ R(s,t)=\frac{1}{2}(s^{2H}+t^{2H}-|s-t|^{2H}), $$
as is the case for the Hermite process of arbitrary order. Hermite processes arise as limits of partial sums of nonlinear LRD sequences. In the following two theorems, $A(N)$ is a normalization factor guaranteeing unit asymptotic variance of the partial sum process at $t=1$. We use $\Rightarrow$ to denote weak convergence in the Skorohod space $D[0,1]$ with the uniform metric.
\begin{Thm}\label{Thm:GaussSub}(\citet{dobrushin:major:1979:non,taqqu:1979:convergence}.) Suppose that $\{X(n)\}$ is a Gaussian stationary sequence with autocovariance
$$ \gamma(n)\sim cn^{2d-1} $$
as $n\rightarrow\infty$ for some constant $c>0$ and
$$1/2(1-1/k)<d<1/2.$$
Let $H_k(x):=(-1)^ke^{x^2/2}\frac{d^k}{dx^k}e^{-x^2/2}$ be the $k$-th Hermite polynomial, $k\ge 1$. Then
\[ \frac{1}{A(N)}\sum_{n=1}^{[Nt]} H_k(X(n))\Rightarrow Z_{d}^{(k)}(t). \]
\end{Thm}
\begin{Thm}\label{Thm:Polyform}(\citet{surgailis:1982:zones}, see also \citet{giraitis:koul:surgailis:2009:large} Chapter 4.8.) Let $\{\epsilon_i\}$ be an i.i.d.\ sequence with mean $0$ and variance $1$,
$$ a_n\sim c n^{d-1} $$
as $n\rightarrow\infty$ for some constant $c>0$ and
$$1/2(1-1/k)<d<1/2.$$
Let
$$ X(n)=\sum_{0<i_1,\ldots,i_k<\infty}^{\prime} a_{i_1}\ldots a_{i_k} \epsilon_{n-i_1}\ldots\epsilon_{n-i_k}, $$
where the prime $'$ indicates that one does not sum on the diagonals $i_p=i_q$, $p\neq q$. Then
\[ \frac{1}{A(N)}\sum_{n=1}^{[Nt]} X(n)\Rightarrow Z_{d}^{(k)}(t). \]
\end{Thm}
\begin{Rem} The Hermite polynomial in Theorem \ref{Thm:GaussSub} can be replaced by a general function $G(\cdot)$ such that $\mathbb{E} G(X_n)=0$, $\mathbb{E} G(X_n)^2<\infty$, due to the orthogonal expansion of $G(x)$ with respect to Hermite polynomials and the fact that only the leading term in the expansion contributes to the limit law. Similarly, the off-diagonal multilinear polynomial-form process $X(n)$ in Theorem \ref{Thm:Polyform} can be replaced by a suitable function of the linear process $Y(n):=\sum_{i\ge 1} a_i\epsilon_{n-i}$. In both of the above theorems $\overset{f.d.d.}{\longrightarrow}$ can be strengthened to weak convergence $\Rightarrow$ (Proposition 4.4.2 of \citet{giraitis:koul:surgailis:2009:large}).
\end{Rem}
\begin{Rem} The range of the parameter $d$ in both theorems guarantees that the summand is LRD in the sense that the autocovariance decays as a power function with an exponent in the range $(-1,0)$. We note also that the constant $c>0$ appearing in both theorems can be replaced by a slowly varying function.
\end{Rem}
\section{Generalized Hermite Processes}\label{Sec:GenHermProc}
We first introduce some notation, which will be used throughout. $\mathbb{R}_+=(0,\infty)$, $\mathbb{Z}_+=\{1,2,\ldots\}$. $\mathbf{x}=(x_1,\ldots,x_k)\in \mathbb{R}^k$, $\mathbf{i}=(i_1,\ldots,i_k)\in \mathbb{Z}^k$, $\mathbf{0}=(0,\ldots,0)$, $\mathbf{1}=(1,\ldots,1)$. For any real number $x$, $[x]=\sup \{n\in \mathbb{Z},n\le x\}$, and $[\mathbf{x}]=([x_1],\ldots,[x_k])$.
We write $\mathbf{x}> \mathbf{y}$ (or $\ge$) if $x_j> y_j$ (or $\ge$), $j=1,\ldots,k$. $\langle \mathbf{x},\mathbf{y}\rangle=\sum_{j=1}^k x_jy_j$, and $\|\mathbf{x}\|=\sqrt{\langle \mathbf{x},\mathbf{x}\rangle}$, while $\|\cdot\|$ with a subscript is also used to denote the norm of some other space (specified in the subscript). Given a set $A\subset \mathbb{R}$, $A^k$ is the $k$-fold Cartesian product. $\mathrm{1}_A(\cdot)$ is the indicator function of a set $A$. $L^p(\mathbb{R}^k,\mu)$ denotes the $L^p$-space on $\mathbb{R}^k$ with measure $\mu$, and $\mu$ is omitted if it is Lebesgue measure. \subsection{General kernels}\label{Subsec:General} The following proposition provides a general way to construct in the time-domain an $H$-sssi process living in Wiener chaos: \begin{Pro}\label{Pro:Construct H-sssi} Fix an $H\in(0,1)$. Suppose that $\{h_t(\cdot), t>0\}$ is a family of functions defined on $\mathbb{R}^k$ satisfying \begin{enumerate} \item $h_t\in L^2(\mathbb{R}^k)$\label{SSSI:L2}; \item $\forall\lambda>0$, $\exists \beta \neq 0$, such that $h_{\lambda t}(\mathbf{x})=\lambda^{H+k\beta /2}h_t(\lambda^\beta \mathbf{x})$ for a.e.\ $\mathbf{x}\in \mathbb{R}^k$ and all $t>0$; \label{SSSI:homo} \item $\forall s>0$, $\exists$ $\mathbf{a}\in \mathbb{R}^k$, such that $h_{t+s}(\mathbf{x})-h_{t}(\mathbf{x})=h_s(\mathbf{x}+t\mathbf{a})$ for a.e.\ $\mathbf{x}\in \mathbb{R}^k$ and all $t>0$.\label{SSSI:stationary} \end{enumerate} Then $Z(t):=I_k(h_t)$ is an $H$-sssi process. \end{Pro} Condition \ref{SSSI:L2} guarantees that the Wiener-It\^o integral is well defined. Condition \ref{SSSI:homo} yields self-similarity, where the term $k\beta /2$ in the exponent compensates for the scaling of the $k$-tuple Brownian motion integrators. Condition \ref{SSSI:stationary} guarantees stationary increments. Self-similarity and stationary increments can be rigorously checked by the change-of-variable formula of Wiener-It\^o integrals (Proposition 4.2 of \citet{dobrushin:1979:gaussian}). The Hermite process, for instance, which is defined in (\ref{eq:Herm TimeDomain}) can be obtained following the scheme of Proposition \ref{Pro:Construct H-sssi} by letting $$ h_t(\mathbf{x})=\int_0^t g(s\mathbf{1}-\mathbf{x})\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}}(s)ds, $$ and \begin{align}\label{eq:original g} g(\mathbf{x})=\prod_{j=1}^k x_j^{d-1}, ~x_j>0. \end{align} It is easy to check that the conditions on $h_t$ in Proposition \ref{Pro:Construct H-sssi} are all satisfied with $\beta=-1$ in condition \ref{SSSI:homo} and $H=kd-k/2+1$. One can also check that the integrand in the spectral-domain representation in (\ref{eq:Herm SpecDomain}) also satisfies the first two conditions in Proposition \ref{Pro:Construct H-sssi}, but with $\beta=1$ in Condition \ref{SSSI:homo} instead. The third condition, however, must be replaced by $\widehat{h}_{t+s}(\mathbf{u})-\widehat{h}_{t}(\mathbf{u})=e^{-it\langle\mathbf{a},\mathbf{u} \rangle} \widehat{h}_s(\mathbf{u})$ due to the Fourier-transform relation. Our first goal is to extend the kernel $g$ in (\ref{eq:original g}) to some general class of functions. To do so, we define the following class of functions on $\mathbb{R}_+^k$, which first appeared in \citet{mori:toshio:1986:law} to study the law of iterated logarithm: \begin{Def}\label{Def:GHK} We say that a nonzero measurable function $g(\mathbf{x})$ defined on $\mathbb{R}_+^k$ is a \emph{generalized Hermite kernel}, if it satisfies \begin{enumerate}[A.] 
\item $g(\lambda \mathbf{x})=\lambda^\alpha g(\mathbf{x})$, $\forall \lambda>0$, where $\alpha\in(-\frac{k+1}{2},-\frac{k}{2})$;\label{ass:homo} \item $\int_{\mathbb{R}_+^k}|g(\mathbf{x})g(\mathbf{1}+\mathbf{x})| d\mathbf{x} <\infty$. \label{ass:int 2} \end{enumerate} \end{Def} One can check that the Hermite kernel $g$ in (\ref{eq:original g}) satisfies the above assumptions. \begin{Rem} The range of $\alpha$ in Condition \ref{ass:homo} is non-overlapping for different $k$, and extends from $-1/2$ to $-\infty$ with all the multiples of $-1/2$ excluded. \end{Rem} \begin{Rem} Suppose $g_1$ and $g_2$ are generalized Hermite kernels having order $k_1$, $k_2$ and homogeneity exponent $\alpha_1$, $\alpha_2$ respectively. If in addition, $\alpha_1+\alpha_2>-(k_1+k_2+1)/2$, then $g_1\otimes g_2(\mathbf{x}_1,\mathbf{x}_1):=g_1(\mathbf{x}_1)g_2(\mathbf{x}_2)$ is a generalized Hermite kernel having order $k_1+k_2$ and homogeneity exponent $\alpha_1+\alpha_2$. \end{Rem} \begin{Thm}\label{Thm:is H-sssi} Let $g(\mathbf{x})$ be a generalized Hermite kernel defined in Definition \ref{Def:GHK}. Then $$h_t(\mathbf{x})=\int_0^t g(s\mathbf{1}-\mathbf{x})\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}} ds$$ is well-defined in $L^2(\mathbb{R}^k)$, $\forall t>0$, and the process defined by $Z_t:=I_k(h_t)$ is an $H$-sssi process with $$H=\alpha+k/2+1\in (1/2,1).$$ \end{Thm} \begin{proof} To check that $h_t\in L^2(\mathbb{R}^k)$, we write \begin{align*} \int_{\mathbb{R}^k} h_t(\mathbf{x})^2 d\mathbf{x} &= \int_{\mathbb{R}^k} d\mathbf{x} \int_0^t\int_0^t ds_1ds_2~ g(s_1\mathbf{1}-\mathbf{x}) g(s_2\mathbf{1}-\mathbf{x})\mathrm{1}_{\{s_1\mathbf{1}>\mathbf{x}\}} \mathrm{1}_{\{s_2\mathbf{1}>\mathbf{x}\}}. \end{align*} We want to change the integration order by integrating on $\mathbf{x}$ first. By Fubini, we need to check that the absolute value of the integrand is integrable, that is, \begin{align*} &2\int_0^tds_1 \int_{s_1}^t ds_2 \int_{\mathbb{R}^k} d\mathbf{x}~ |g(s_1\mathbf{1}-\mathbf{x})g(s_2\mathbf{1}-\mathbf{x})|\mathrm{1}_{\{s_1\mathbf{1}-\mathbf{x}>\mathbf{0} \}} \quad (\text{ by symmetry of } s_1<s_2 \text{ and } s_1>s_2) \\ &=2\int_0^t ds \int_0^{t-s} du \int_{\mathbb{R}_+^k}d\mathbf{w}~ |g(\mathbf{w})g(u\mathbf{1}+\mathbf{w})| ~~\qquad\qquad\qquad (s=s_1, ~u=s_2-s_1,~\mathbf{w}=s_1\mathbf{1}-\mathbf{x})\\ &=2\int_0^t ds \int_0^{t-s} du \int_{\mathbb{R}_+^k}u^kd\mathbf{y}~ |g(u\mathbf{y})g(u+u\mathbf{y})| \\ &=2\int_0^t ds \int_0^{t-s} u^{2\alpha+k}du~\int_{\mathbb{R}_+^k}d\mathbf{y}~ |g(\mathbf{y})g(\mathbf{1}+\mathbf{y})| ~~~\qquad\qquad (\text{by Condition \ref{ass:homo} of Definition \ref{Def:GHK}}), \end{align*} where the last expression is finite by $2\alpha+k+1>0$ and Condition \ref{ass:int 2}. Hence by the same calculation, but without absolute values, \begin{align*} \int_{\mathbb{R}^k} h_t(\mathbf{x})^2 d\mathbf{x} &=2\int_0^t ds \int_0^{t-s} u^{2\alpha+k}du~~ \int_{\mathbb{R}_+^k}d\mathbf{y}~ g(\mathbf{y})g(\mathbf{1}+\mathbf{y}) \\ &=\frac{t^{2\alpha+k+2}}{(\alpha+k/2+1)(2\alpha+k+2)}\int_{\mathbb{R}_+^k}d\mathbf{y}~ g(\mathbf{y})g(\mathbf{1}+\mathbf{y}). 
\end{align*} To check self-similarity (Condition \ref{SSSI:homo} of Proposition \ref{Pro:Construct H-sssi} with $\beta=-1$), \begin{align*} h_{\lambda t}( \mathbf{x})&=\int_0^{\lambda t} g(s\mathbf{1}-\mathbf{x})\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}}ds=\lambda^{\alpha+1}\int_0^{t} g( r\mathbf{1}-\lambda^{-1}\mathbf{x})\mathrm{1}_{\{ r\mathbf{1}>\lambda^{-1}\mathbf{x}\}}\lambda dr =\lambda^{\alpha+1}h_t(\lambda^{-1} \mathbf{x}), \end{align*} where the second equality uses Condition \ref{ass:homo} of Definition \ref{Def:GHK}. The Hurst coefficient $H$ of $I_k(h_t)$ is obtained from $\alpha+1=H-k/2$. To check stationary increments (Condition \ref{SSSI:stationary} of Proposition \ref{Pro:Construct H-sssi}), for any $t,r>0$, \begin{align*} h_{t+r}(\mathbf{x})-h_t(\mathbf{x})=\int_t^{t+r} g(s\mathbf{1}-\mathbf{x})\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}}ds=\int_0^{r} g(u\mathbf{1}+t\mathbf{1}-\mathbf{x})\mathrm{1}_{\{u\mathbf{1}+t\mathbf{1}>\mathbf{x}\}} du=h_{r}(\mathbf{x}-t\mathbf{1}). \end{align*} \end{proof} \begin{Rem}\label{Rem:byproduct} As a byproduct of the above proof, we obtain that under the conditions of Definition \ref{Def:GHK}, one has $\int_0^t |g(s\mathbf{1}-\mathbf{x})|\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}}(s) ds<\infty$ for a.e.\ $\mathbf{x}\in \mathbb{R}^k$, and $$ \mathbb{E} Z(t)^2 (k!)^{-1}\le \|h_t\|_{L^2(\mathbb{R}^k)}^2=\frac{t^{2H}}{H(2H-1)} C_g, $$ where $C_g:=\int_{\mathbb{R}_+^k}g(\mathbf{x})g(\mathbf{1}+\mathbf{x})d\mathbf{x}$, and the first inequality becomes equality if $g$ and hence $h_t$ is symmetric. Note that $C_g>0$ must hold, otherwise $h_t(\mathbf{x})=\int_0^t g(s\mathbf{1}-\mathbf{x})\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}} ds=0$ for a.e.\ $\mathbf{x}\in \mathbb{R}^k$ and any $t>0$, which implies that $g$ is zero a.e., and thus contradicts the assumption. \end{Rem} \begin{Rem}\label{Rem:symmetrization} Since $\forall f\in L^2(\mathbb{R}^k)$, $I_k(f)=I_k(\tilde{f})$, where $\tilde{f}$ is the symmetrization of $f$ (\citet{nualart:2006:malliavin} p.9), it suffices to focus on symmetric generalized Hermite kernels $g$ only. In the sequel, we will not always assume that $g$ is symmetric for convenience, while being aware that $g$ can always be symmetrized. \end{Rem} \begin{Def}\label{Def:GHP} The process \begin{equation}\label{eq:Z(t)} Z(t):=\int_{\mathbb{R}^k}' ~\int_0^t g(s-x_1,\ldots,s-x_k) \mathrm{1}_{\{s>x_1,\ldots,s>x_k\}}ds~W(dx_1)\ldots W(dx_k) \end{equation} which we simply write $Z(t)=I_k(h_t)$ with $h_t(\mathbf{x})=\int_0^t g(s\mathbf{1}-\mathbf{x})\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}}ds$, where $g$ is a generalized Hermite kernel defined in Definition \ref{Def:GHK}, is called a \emph{generalized Hermite process}. \end{Def} \begin{Rem} It is known (see, e.g., \citet{janson:1997:gaussian} Theorem 6.12) that if a random variable $X$ belongs to the $k$-th Wiener chaos, then there $\exists a,b,t_0>0$ such that for $t\ge t_0$, $$\exp(-at^{2/k})\le P(|X|>t) \le \exp(-bt^{2/k}).$$ This shows that the generalized Hermite processes of different orders must necessarily have different laws, and the higher the order gets, the heavier the tail of the marginal distribution becomes, while they all have moments of any order. \end{Rem} The generalized Hermite process $Z(t)$ admits a continuous version, which follows from the following general result: \begin{Pro} If $\{Z(t),t\ge 0\}$ is an $H$-sssi process whose marginal distribution satisfies $\mathbb{E}|Z(1)|^\gamma<\infty$ for some $\gamma>H^{-1}$, then $Z(t)$ admits a continuous version. 
\end{Pro} \begin{proof} Using stationary increments and self-similarity, we have $$ \mathbb{E}|Z(t)-Z(s)|^\gamma =\mathbb{E}|Z(t-s)|^\gamma=|t-s|^{H\gamma}\mathbb{E}|Z(1)|^\gamma. $$ Since $H\gamma>1$, Kolmogorov's criterion applies. \end{proof} \begin{Rem} In \citet{mori:toshio:1986:law}, the following laws of iterated logarithm are obtained for the generalized Hermite process $Z(t)$: \begin{align*} \limsup_{n\rightarrow\infty} \frac{Z(n)}{n^H (2\log_2 n)^{k/2}}=l_1, \quad \liminf_{n\rightarrow\infty} \frac{Z(n)}{n^H (2\log_2 n)^{k/2}}=l_2 ~\text{ a.s.}, \end{align*} where $l_1=\sup K_h$ and $l_2=\inf K_h$ with the set \[ K_h:=\left\{ \int_{\mathbb{R}^k}h_1(\mathbf{x})\xi(x_1)\ldots \xi(x_k)d\mathbf{x}:\|\xi\|_{L^2(\mathbb{R})}\le 1\right\}. \] \end{Rem} In the spirit of (\ref{eq:Herm SpecDomain}), we can consider the spectral-domain representation of the generalized Hermite processes. Since $h_t(\mathbf{x})=\int_0^t g(s\mathbf{1}-\mathbf{x})\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}}(s) ds\in L^2(\mathbb{R})$, it always has an $L^2$-sense Fourier transform $\widehat{h}_t$. We give an explicit way to calculate $\widehat{h}_t$ when $g$ is integrable in a neighborhood of the origin. Note that since $g$ is homogeneous, it suffices to assume integrability on the unit cube $(0,1]^k$. \begin{Pro}\label{Pro:ComputeFT} Suppose that \begin{equation} \int_{(0,1]^k}|g(\mathbf{x})|<\infty \label{ass:int 1}. \end{equation} Let $g_n(\mathbf{x})=g(\mathbf{x})\mathrm{1}_{(0,n]^k}(\mathbf{x})$, and $\widehat{g}_n(\mathbf{u}):=\int_{\mathbb{R}^k}g_n(\mathbf{x})e^{i\langle \mathbf{u},\mathbf{x} \rangle}d\mathbf{x}$ be its Fourier transform. Set $$ \widehat{h}_{t,n}:=\frac{e^{it\langle\mathbf{u},\mathbf{1} \rangle}-1}{i\langle\mathbf{u},\mathbf{1} \rangle}\widehat{g}_n(-\mathbf{u}), $$ then $\widehat{h}_{t,n}$ converges in $L^2(\mathbb{R}^k)$ to $\widehat{h}_t$. Moreover, there is a function $\widehat{g}(\mathbf{u})$ defined for a.e.\ $\mathbf{u}\in \mathbb{R}^k$, such that, \begin{equation}\label{eq:general spec} \widehat{h}_t(\mathbf{u})=\frac{e^{it\langle\mathbf{u},\mathbf{1} \rangle}-1}{i\langle\mathbf{u},\mathbf{1} \rangle}\widehat{g}(-\mathbf{u}). \end{equation} \end{Pro} \begin{proof} Due to (\ref{ass:int 1}), the Fourier transform of $g_n$ is well-defined pointwise as \begin{equation}\label{eq:hatg_n} \widehat{g}_n(\mathbf{u})=\int_{\mathbb{R}^k}g(\mathbf{x})\mathrm{1}_{(0,n]^k}(\mathbf{x})e^{i\langle \mathbf{u},\mathbf{x} \rangle}d\mathbf{x}. \end{equation} Let $$ h_{t,n}(\mathbf{x})=\int_0^t g_n(s\mathbf{1}-\mathbf{x})\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}}(s) ds=\int_0^t g(s\mathbf{1}-\mathbf{x})\mathrm{1}_{\{\mathbf{x}<s\mathbf{1}\le \mathbf{x}+n\mathbf{1}\}}(s) ds. $$ Note that $|g_n(\mathbf{x})|\le |g(\mathbf{x})|$, so by the proof of Theorem \ref{Thm:is H-sssi}, $h_{t,n}(\mathbf{x})\in L^2(\mathbb{R}^k)$, and by the Dominated Convergence Theorem, $h_{t,n}$ converges to $h_{t}$ pointwise as $n\rightarrow\infty$. Since $|h_{t,n}|\le \int_0^t |g(s\mathbf{1}-\mathbf{x})|\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}}(s) ds$, by the Dominated Convergence Theorem in $L^2(\mathbb{R}^k)$, $h_{t,n}$ converges to $h_{t}$ in $L^2(\mathbb{R}^k)$. By Plancherel's isometry, $\widehat{h}_{t,n}$, the Fourier transform of $h_{t,n}$, converges in $L^2(\mathbb{R}^k)$ to $\widehat{h}_t$. 
But \begin{align} \widehat{h}_{t,n}(\mathbf{u}):=&\int_{\mathbb{R}^k} \int_{0}^t g(s\mathbf{1}-\mathbf{x}) \mathrm{1}_{\{\mathbf{x}<s\mathbf{1}\le\mathbf{x}+ n\mathbf{1}\}}(s)ds~ e^{i\langle \mathbf{u},\mathbf{x}\rangle}d\mathbf{x}\notag\\ =& \int_{0}^t \int_{\mathbb{R}^k} e^{i\langle\mathbf{u},s\mathbf{1}\rangle} g(s\mathbf{1}-\mathbf{x})e^{i\langle -\mathbf{u},s\mathbf{1}-\mathbf{x}\rangle}\mathrm{1}_{\{\mathbf{0}< s\mathbf{1}-\mathbf{x}\le n\mathbf{1}\}}(\mathbf{x}) d\mathbf{x} ds\notag\\ =&\int_{0}^t e^{i\langle\mathbf{u},s\mathbf{1}\rangle} ds\int_{\mathbb{R}^k} g(\mathbf{y})\mathrm{1}_{\{\mathbf{0}<\mathbf{y}\le n\mathbf{1}\}}e^{i\langle -\mathbf{u},\mathbf{y}\rangle} d\mathbf{y} \notag\\ =& \frac{e^{it\langle\mathbf{u},\mathbf{1} \rangle}-1}{i\langle\mathbf{u},\mathbf{1} \rangle}\widehat{g}_n(-\mathbf{u}),\label{eq:compute fourier h_tn} \end{align} where the change of integration order is valid because by (\ref{ass:int 1}), $$\int_0^tds \int_{\mathbb{R}^k}d\mathbf{x}|g(s\mathbf{1}-\mathbf{x})| \mathrm{1}_{\{\mathbf{x}<s\mathbf{1}\le\mathbf{x}+ n\mathbf{1}\}}=\int_0^tds \int_{\mathbb{R}^k}|g(\mathbf{y})|\mathrm{1}_{\{\mathbf{0}<\mathbf{y}\le n\mathbf{1}\}} d\mathbf{y}<\infty.$$ We now prove (\ref{eq:general spec}). The fact that $\widehat{h}_{t,n}$ converges in $L^2(\mathbb{R}^k)$ to $\widehat{h}_t$ implies that $\widehat{g}_n$ is a Cauchy sequence in $L^2(\mathbb{R}^k,\mu_t)$, where $\mu_t$ is the measure given by $$\mu_t(A)=\int_A \left|\frac{e^{it\langle\mathbf{u},\mathbf{1} \rangle}-1}{i\langle\mathbf{u},\mathbf{1} \rangle}\right|^2d\mathbf{u}=\int_A\frac{2-2\cos(t \langle\mathbf{u},\mathbf{1}\rangle)}{\langle\mathbf{u},\mathbf{1}\rangle^2}d\mathbf{u} $$ for any measurable set $A\subset \mathbb{R}^k$. Hence there exists a $\widehat{g} \in L^2(\mathbb{R}^k,\mu_t)$ which is the limit of $\widehat{g}_n$ in $L^2(\mathbb{R}^k,\mu_t)$. Since $\mu_t$ is equivalent to Lebesgue measure, $\widehat{g}$ is determined a.e.\ on $\mathbb{R}^k$, and there exists a subsequence of $\widehat{g}_n$ that converges a.e.\ to $\widehat{g}$. So (\ref{eq:general spec}) holds. \end{proof} \begin{Rem} Note that $\widehat{g}$ is not the $L^2$-sense Fourier transform of $g\mathrm{1}_{\mathbb{R}^k_+}$, since $g\notin L^2{(\mathbb{R}^k_+)}$. One can, however, evaluate the limit of $\widehat{g}_n$ pointwise as an improper integral, as is done in the Hermite kernel case (\ref{eq:original g}) (see Lemma 6.2 of \citet{taqqu:1979:convergence}). \end{Rem} The limit $\widehat{g}$ in (\ref{eq:general spec}) is also a homogeneous function: \begin{Pro}\label{Pro:hat g homo} The function $\widehat{g}$ defined in Remark \ref{Pro:ComputeFT} satisfies for any $\lambda>0$, $g(\lambda \mathbf{u})=\lambda^{-\alpha-k}\widehat{g}(\mathbf{u})$ for a.e.\ $\mathbf{u}\in \mathbb{R}^k$. \end{Pro} \begin{proof} Following (\ref{eq:hatg_n}) and using Condition \ref{ass:homo} of Definition \ref{Def:GHK}, and noting that $\langle \lambda\mathbf{u},\mathbf{x}\rangle=\langle \mathbf{u},\lambda\mathbf{x}\rangle$, we have \begin{align*} \widehat{g}_n(\lambda\mathbf{u})&=\lambda^{-\alpha}\int_{\mathbb{R}^k}g(\lambda\mathbf{x})\mathrm{1}_{(0,n]^k}(\mathbf{x})e^{i\langle \mathbf{u},\lambda\mathbf{x} \rangle}d\mathbf{x} =\lambda^{-\alpha-k}\int_{\mathbb{R}^k}g(\mathbf{y})\mathrm{1}_{(0,\lambda n]^k}(\mathbf{\mathbf{y}})e^{i\langle \mathbf{u},\mathbf{y} \rangle}d\mathbf{y} =\lambda^{-\alpha-k}\widehat{g}_{n\lambda }(\mathbf{u}). \end{align*} Then let $n\rightarrow\infty$ through a subsequence so that both sides converge a.e.. 
\end{proof} \begin{Rem} The spectral-domain representation of the Hermite process in (\ref{eq:Herm SpecDomain}) is indeed obtained as $\widehat{g}(\mathbf{u})=c\prod_{j=1}^k |u_k|^{-d}w(\mathbf{u})$ for some constant $c>0$, where the function $w(\mathbf{u})=\prod_{j=1}^k \exp\left(-\mathrm{sign}(u_j)\frac{i\pi d}{2}\right)$ can be omitted (see Proposition \ref{Pro:Time<->Spec}). \end{Rem} \subsection{Special kernels and examples}\label{Subsec:special} We introduce now some subclasses of the generalized Hermite kernels $g$ defined in Definition \ref{Def:GHK}, which will be of interest later when dealing with limit theorems. Note that the kernel $g$ is determined by its value on the positive unit sphere $\mathcal{S}_+^k:=\{\mathbf{x}\in \mathbb{R}_+^k, \|\mathbf{x}\|= 1\}$. Because it is homogeneous, $g$ is always radially continuous and it is decreasing since $\alpha<0$ in Definition \ref{Def:GHK}. Thus assuming that $g$ is continuous on $\mathcal{S}_+^k$ a.e.\ (with respect to the uniform measure on the $\mathcal{S}_+^k$) is the same as assuming $g$ is continuous a.e.\ on $\mathbb{R}_+^k$ . \begin{Def}\label{Def:class bounded} We say that a generalized Hermite kernel $g$ is of Class (B) (B stands for ``boundedness''), if on ${\mathcal{S}_+^k}$, it is continuous a.e. and bounded. Consequently, $$ |g(\mathbf{x})| \le \|\mathbf{x}\|^\alpha g(\mathbf{x}/\|\mathbf{x}\|)\le c \|\mathbf{x}\|^\alpha $$ for some $c>0$. \end{Def} \begin{Rem} According to Lemma 7.1 of \citet{mori:toshio:1986:law}, Class (B) forms a dense subclass of the class of generalized Hermite kernels in the sense that for any generalized Hermite kernel $g$ and any $\epsilon>0$, there exists $g_\epsilon$ in Class (B), such that $\|h-h_\epsilon\|_{L^2(\mathbb{R}^k)}<\epsilon$, where $h(\mathbf{x})=\int_0^1 g(s\mathbf{1}-\mathbf{x})\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}}ds$ and $h_\epsilon(\mathbf{x})=\int_0^1 g_\epsilon(s\mathbf{1}-\mathbf{x})\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}}ds$. \end{Rem} Note that Class (B) does not include the original Hermite kernel in (\ref{eq:original g}). We now introduce a class of generalized Hermite kernels, called Class (L), which includes generalized Hermite kernels of the form: \begin{equation}\label{eq:nonsym Herm} g(\mathbf{x})=\prod_{j=1}^k x_j^{\gamma_j}, \end{equation} where each $-1<\gamma_j<-1/2$ and $-k/2-1/2<\sum_j \gamma_j < -k/2$. These particular kernels with $k=2$ has been considered in \citet{maejima:tudor:2012:selfsimilar} where the resulting process is called non-symmetric Rosenblatt process. We hence call the kernel in (\ref{eq:nonsym Herm}) a \emph{non-symmetric Hermite kernel}. Note that despite the name, one can always symmetrize these kernels. Class (L) will appear in the discrete chaos processes and the limit theorems considered later. \begin{Def}\label{Def:class limit} We say that a generalized Hermite kernel $g$ on $\mathbb{R}_+^k$ having homogeneity exponent $\alpha$ is of Class (L) ($L$ stands for ``limit'' as in ``limit theorems''), if \begin{enumerate} \item $g$ is continuous a.e.\ on $\mathbb{R}^k_+$; \item $|g(\mathbf{x})|\le g^*(\mathbf{x})$ a.e.\ $\mathbf{x}\in \mathbb{R}_+^k$, where $g^*$ is a finite linear combination of non-symmetric Hermite kernels: $\prod_{j=1}^k x_j^{\gamma_j}$, where $\gamma_j\in (-1,-1/2)$, $j=1,\ldots,k$, and $\sum_{j=1}^k \gamma_j=\alpha\in(-k/2-1/2,-k/2)$. \end{enumerate} \end{Def} For example, $g^*(\mathbf{x})$ could be $x_1^{-3/4} x_2^{-5/8}+x_1^{-9/16}x_2^{-13/16}$ if $k=2$. In this case, $\alpha=-11/8$. 
\begin{Rem}\label{Rem:L good} If two functions $g_1$ and $g_2$ on $\mathbb{R}_+^k$ satisfy Condition 2 of Definition \ref{Def:class limit}, then $\int_{\mathbb{R}_+^k}|g_1(\mathbf{x})g_2(\mathbf{1}+\mathbf{x})|d\mathbf{x}<\infty$ automatically holds, which can be seen by using the following identity: for any $\gamma,\delta\in (-1,-1/2)$, $$\int_0^\infty x^\gamma(1+x)^\delta dx=\mathrm{B}(\gamma+1,-\gamma-\delta-1),$$ where $\mathrm{B}(\cdot,\cdot)$ is the beta function. In addition, $\int_{(0,1]^k}|g_1(\mathbf{x})|d\mathbf{x}<\infty$ also holds. \end{Rem} \begin{Pro}\label{Pro:L > B} Class (L) contains Class (B). \end{Pro} \begin{proof} Suppose $g$ is a generalized Hermite kernel of Class (B). Then there exist contants $C_1,C_2>0$, such that \[ |g(\mathbf{x})|\le C_1 \|\mathbf{x}\|^{\alpha}=C_1\left(\sum_{j=1}^k x_j^2\right)^{\alpha /2}\le C_2 \prod_{j=1}^k x_j^{\alpha/k}, \] where we have used the arithmetic-geometric mean inequality $k^{-1}\sum_{j=1}^ky_j\ge \left(\prod_{j=1}^k y_j\right)^{1/k}$ and $\alpha<0$. So Condition 2 of Definition \ref{Def:class limit} is satisfied with $g^*$ being a single term where $\gamma_1=\ldots=\gamma_k=\alpha/k$. \end{proof} \begin{Rem} In view of Remark \ref{Rem:byproduct} and Remark \ref{Rem:L good}, one can check that Class (B) or Class (L) if adding in the a.e. $0$-valued function, with fixed order $k$ and fixed homogeneity component $\alpha\in (-k/2-1/2,-k/2)$, forms an inner product space, with the inner product specified as \begin{align*} \langle g_1,g_2\rangle &:=\left\langle\int_0^1 g_1(s\mathbf{1}-\cdot)ds ,\int_0^1 g_2(s\mathbf{1}-\cdot)ds \right\rangle_{L^2(\mathbb{R}^k)} \\ &=\frac{1}{2H(2H-1)}\int_{\mathbb{R}_+^k}g_1(\mathbf{x})g_2(\mathbf{1}+\mathbf{x})+g_1(\mathbf{1}+\mathbf{x})g_2(\mathbf{x})d\mathbf{x}, \end{align*} where $H=\alpha+k/2+1$, which yields the norm $$\|g\|:=\left\|\int_0^1 g(s\mathbf{1}-\cdot)ds\right\|_{L^2(\mathbb{R}^k)}=\left(\frac{1}{H(2H-1)}\int_{\mathbb{R}_+^k}g(\mathbf{x})g(\mathbf{1}+\mathbf{x})d\mathbf{x}\right)^{1/2}. $$ \end{Rem} Here are several examples. \begin{Eg} Suppose $g(\mathbf{x})=\|\mathbf{x}\|^\alpha$, where $\alpha\in(-1/2-k/2,-k/2)$. This $g$ belongs to Class (B) and thus also Class (L). The pseudo-Fourier transform (Proposition \ref{Pro:ComputeFT}) of $g$ is $\widehat{g}(\mathbf{u})=c\|\mathbf{u}\|^{-\alpha-k}$ ((25.25) of \citet{samko:1993:fractional}) for some constant $c>0$, which provides the spectral representation by (\ref{eq:general spec}). \end{Eg} \begin{Eg}\label{Eg:not M} Another example of Class (B): $$g(\mathbf{x})=\frac{\prod_{j=1}^k x_j^{a_j}}{\sum_{j=1}^kx_j^b},$$ where $a_j>0$ and $b>0$, yielding a homogeneity exponent $\alpha=\sum_{j=1}^k a_j-b\in (-1/2-k/2,-k/2)$. \end{Eg} \begin{Eg}\label{eg:L} We give yet another example of Class (L) but not (B): $$ g(\mathbf{x})=g_0(\mathbf{x})\vee\left(\prod_{j=1}^k x_j^{\alpha/k}\right). $$ where $g_0(\mathbf{x})>0$ is any generalized Hermite kernel of Class (B) on $\mathbb{R}_+^k$ with homogeneity exponent $\alpha$. \end{Eg} \subsection{Fractionally filtered kernels}\label{Subsec:frac} According to Theorem \ref{Thm:is H-sssi}, the generalized Hermite process introduced above admits a Hurst coefficient $H>1/2$ only. 
To obtain an $H$-sssi process with $0<H<1/2$, we consider the following fractionally filtered kernel: \begin{equation}\label{eq:h^beta_t} h^\beta_t(\mathbf{x})= \int_{\mathbb{R}}l_t^\beta(s) g(s\mathbf{1}-\mathbf{x})\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}} ds, \end{equation} where $g$ is a generalized Hermite kernel defined in Definition \ref{Def:GHK} with homogeneity exponent $$ \alpha\in (-k/2-1/2,-k/2), $$ and \begin{equation}\label{eq:l^beta_t} l^{\beta}_t(s)=\frac{1}{\beta}\left[(t-s)_+^\beta- (-s)_+^\beta\right],~\beta\neq 0. \end{equation} One can extend it to $\beta=0$ by writing $l^{0}_t(s)=\mathrm{1}_{(0,t]}(s)$, but this would lead us back to the generalized Hermite process case. We hence assume throughout that $\beta\neq 0$. The following proposition gives the range of $\beta$ for which $I_k(h_t^\beta)$ is well-defined. \begin{Pro}\label{Pro:beta range} If \begin{equation}\label{eq:beta range} -1<-\alpha-\frac{k}{2}-1<\beta<-\alpha-\frac{k}{2}<\frac{1}{2}, \quad \beta\neq 0 \end{equation} then $h^\beta_t \in L^2(\mathbb{R}^k)$. \end{Pro} \begin{proof} \begin{align} \int_{\mathbb{R}^k} h_t^\beta (\mathbf{x})^2 d\mathbf{x}\le& 2\int_{-\infty}^\infty ds_1 \int_{s_1}^\infty ds_2 \int_{\mathbb{R}^k} d\mathbf{x}~ l_t(s_1)l_t(s_2)|g(s_1\mathbf{1}-\mathbf{x})g(s_2\mathbf{1}-\mathbf{x})|\mathrm{1}_{\{ s_1\mathbf{1}>\mathbf{x}\}} \label{eq:int h^beta_t} \\ =&2\int_{-\infty}^\infty ds \int_{0}^\infty du \int_{\mathbb{R}_+^k}d\mathbf{w}~ l^{\beta}_t(s) l^{\beta}_t(s+u) |g(\mathbf{w})g(u\mathbf{1}+\mathbf{w})| \qquad\qquad (s=s_1, ~u=s_2-s_1,~\mathbf{w}=s_1\mathbf{1}-\mathbf{x})\notag\\ =&2\int_{-\infty}^\infty ds ~l^{\beta}_t(s) \int_{0}^\infty l^{\beta}_t(s+u) u^{2\alpha+k}du~\int_{\mathbb{R}_+^k}d\mathbf{y}~ |g(\mathbf{y})g(\mathbf{1}+\mathbf{y})|\notag. \end{align} We thus focus on showing $\int_{-\infty}^\infty ds ~l^{\beta}_t(s) \int_{0}^\infty l^{\beta}_t(s+u) u^{2\alpha+k}du<\infty$. Recall that for any $c>0$, we have \begin{align*} \int_0^c (c-s)^{\gamma_1} s^{\gamma_2}ds= c^{\gamma_1+\gamma_2+1}\int_0^1 (1-s)^{\gamma_1}s^{\gamma_2}ds =c^{\gamma_1+\gamma_2+1}\mathrm{B}(\gamma_1+1,\gamma_2+1),\quad \forall\gamma_1,\gamma_2>-1. \end{align*} So by noting that $\beta>-1$ and $2\alpha+k>-1$, we have \begin{align*} \int_{0}^\infty l^{\beta}_t(s+u) u^{2\alpha+k}du&=\frac{1}{\beta}\int_{0}^\infty \left[(t-s-u)_+^\beta -(-s-u)_+^\beta\right]u^{2\alpha+k} du\\ &= \frac{1}{\beta}\left[\int_0^{t-s}(t-s-u)^\beta u^{2\alpha+k}du+\int_0^{-s} (-s-u)^\beta u^{2\alpha+k}du\right] \\&= \frac{\mathrm{B}(\beta+1,2\alpha+k+1)}{\beta}\left[(t-s)_+^{\beta+\delta}- (-s)_+^{\beta+\delta}\right], \end{align*} where \begin{equation}\label{eq:delta} \delta =2\alpha+k+1\in (0,1). \end{equation} We thus want to determine when the following holds: \begin{align*} \int_\mathbb{R} \left((t-s)_+^\beta - (-s)_+^\beta \right) \left((t-s)_+^{\beta+\delta} - (-s)_+^{\beta+\delta} \right)ds<\infty. \end{align*} Suppose $t>0$. The potential integrability problems appear near $s=-\infty,0,t$. Near $s=-\infty$, the integrand behaves like $|s|^{2\beta+\delta-2}$, and thus we need $2\beta+\delta-2<-1$; near $s=0$, the integrand behaves like $|s|^{2\beta+\delta}$, and thus $2\beta+\delta>-1$; near $s=t$, the integrand behaves like $|t-s|^{2\beta+\delta}$, and thus again $2\beta+\delta>-1$. In view of (\ref{eq:delta}), these requirements are satisfied by (\ref{eq:beta range}). 
\end{proof} \begin{Rem} Using (\ref{eq:int h^beta_t}) we obtain as a byproduct of the preceding proof that if $\beta$ is in the range given in Proposition \ref{Pro:beta range}, then the function $f_{\mathbf{x},t}(s):=l_t(s) |g(s\mathbf{1}-\mathbf{x})|\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}}$ is in $L^1(\mathbb{R})$ for any $t>0$ and a.e.\ $\mathbf{x}\in \mathbb{R}^k$. \end{Rem} \begin{Thm}\label{Thm:Frac Z beta hsssi} The process defined by $Z^\beta(t):=I_k(h_t^\beta)$ with $h_t^\beta$ given in (\ref{eq:h^beta_t}), namely, \begin{equation}\label{eq:frac filt proc full} Z^\beta(t)= \int_{\mathbb{R}^k}' \int_{\mathbb{R}}\frac{1}{\beta} [(t-s)_+^\beta-(-s)_+^\beta] g(s-x_1,\ldots,s-x_k)1_{\{s>x_1,\ldots,s>x_k\}} ds~ W(dx_1)\ldots W(dx_k), \end{equation} is an $H$-sssi process with $$H=\alpha+\beta+k/2+1 \in (0,1). $$ \end{Thm} \begin{proof} By (\ref{eq:l^beta_t}), one has for any $\lambda>0$, $ l^\beta_{\lambda t}(s)=\lambda^{\beta} l^\beta_{t}(\frac{s}{\lambda}), $ and for any $t,h>0$, $ l^\beta_{t+h}(s)-l^\beta_{t}(s)=l^\beta_h(s-t). $ In addition, $g$ is homogeneous with exponent $\alpha$. The conclusion then follows by Proposition \ref{Pro:Construct H-sssi}. \end{proof} \begin{Rem} In the case $\beta>0$, one is able to write $l^\beta_t(s)=\int_0^t (r-s)_+^{\beta-1}dr$, and thus by Fubini \begin{equation}\label{eq:h_t beta>0} h_t^\beta (\mathbf{x})=\int_0^tdr \int_{\mathbb{R}} ds (r-s)_+^{\beta-1}g(s\mathbf{1}-\mathbf{x})\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}}. \end{equation} \end{Rem} \begin{Rem} To get the anti-persistent case $H<1/2$, choose $$\beta\in(-\alpha-k/2-1,-\alpha-k/2-1/2). $$ \end{Rem} We now state an analog of (\ref{eq:general spec}) for the spectral representation of the process $Z^\beta(t)$: \begin{Pro}\label{Pro:ComputeFT frac} Suppose that (\ref{ass:int 1}) holds. Then the $L^2$-sense Fourier transform of $h_t^\beta$ is \begin{equation}\label{eq:hat h_t^beta} \widehat{h}_{t}^\beta(\mathbf{u}) = (e^{it\langle\mathbf{u},\mathbf{1} \rangle}-1)(i\langle\mathbf{u},\mathbf{1} \rangle)^{-\beta-1} \widehat{g}(-\mathbf{u}) \Gamma(\beta),~a.e.~\mathbf{u}\in \mathbb{R}^k, \end{equation} where $\widehat{g}$ is defined in Proposition \ref{Pro:ComputeFT}. \end{Pro} \begin{proof} Let $g_n(\mathbf{x})=g(\mathbf{x})\mathrm{1}_{(0,n]^k}(\mathbf{x})$, and $l_{t,n}^\beta=\beta^{-1}[(t-s)_+^\beta \mathrm{1}_{\{t-s<n\}}-(-s)_+^\beta \mathrm{1}_{\{-s<n\}}].$ Set $$h_{t,n}^\beta(\mathbf{x}) = \int_{\mathbb{R}}l_{t,n}(s)g_n(s\mathbf{1}-\mathbf{x})ds.$$ Similar to the proof of Proposition \ref{Pro:ComputeFT}, one can show that $h_{t,n}^\beta $ converges in $L^2(\mathbb{R}^k)$ to $h_{t}^\beta $ as $n\rightarrow\infty$ through the Dominated Convergence Theorem by noting that $|g_n|\le |g|$ and $|l_{t,n}^\beta|\le l_{t}^\beta$. Since the truncated $l_{t,n}$ and ${g}_n$ admit $L^1$-Fourier transforms $\widehat{l}_{t,n}$ and $\widehat{g}_n$ respectively, one can write the Fourier transform of $h_{t,n}^\beta$ as: \begin{align*} \widehat{h}_{t,n}^\beta(\mathbf{u}) = \widehat{l}_{t,n}(\langle\mathbf{u},\mathbf{1}\rangle) \widehat{g}_n(-\mathbf{u}), \end{align*} (compare with (\ref{eq:compute fourier h_tn})). Since $h_{t,n}^\beta$ converges in $L^2(\mathbb{R})$ to $h_t^\beta$ as $n\rightarrow\infty$, by Plancherel's isometry, $\widehat{h}^\beta_{t,n}$ converges in $L^2(\mathbb{R}^k)$ to $\widehat{h}^\beta_t$. One now needs to identify (\ref{eq:hat h_t^beta}) with the limit of $\widehat{h}^\beta_{t,n}$. We first compute $\widehat{l}^\beta_{t,n}$. 
When $\beta<0$, one has by change of variable that \begin{align} l^{\beta}_{t,n}(u)&=\beta ^{-1}\left(\int_\mathbb{R} e^{iux} (t-x)_+^\beta \mathrm{1}_{\{t-x<n\}}dx-\int_{\mathbb{R}}e^{iux}(-x)_+^\beta \mathrm{1}_{\{-x<n\}} dx\right)\notag\\ &= \beta^{-1}(e^{iut}-1)\int_{0}^n e^{-ius} s^\beta ds.\label{eq:hat l beta<0} \end{align} When $\beta>0$, one has \begin{align*} l^{\beta}_{t,n}(u)&=\int_\mathbb{R}\mathrm{1}_{[0,t)}(x)(x-u)_+^{\beta-1}\mathrm{1}_{\{x-u<n\}}dx=(\mathrm{1}_{[0,t)} * b_n)(u), \end{align*} where $b_n(x)=(-x)_+^{\beta-1}\mathrm{1}_{\{-x<n\}}$. We have the Fourier transforms $\widehat{1_{[0,t)}}(u)=\frac{e^{iut}-1}{iu}$, and \[ \widehat{b}_n(u)=\int_{\mathbb{R}}e^{-iux} (-x)_+^{\beta-1}\mathrm{1}_{\{-x<n\}}dx =\int_0^n e^{-ius}s^{\beta-1}ds. \] So \begin{equation}\label{eq:hat l beta>0} \widehat{l}^\beta_{t,n}(u)=\frac{e^{iut}-1}{iu}\int_0^n e^{-ius}s^{\beta-1}ds \end{equation} By \citet{gradshteyn:2007:table} Formula 3.761.4 and 3.761.9, for $\mu\in (0,1)$, \begin{align*} \lim_{n\rightarrow\infty}\int_0^n e^{-ius}s^{\mu-1}ds& = |u|^{-\mu}\Gamma(\mu)\cos(\frac{\mu \pi}{2})-i\mathrm{sign}(u)|u|^{-\mu}\Gamma(\mu)\sin(\frac{\mu\pi}{2})\\ &=e^{-i\mathrm{sign}(u)\mu\pi/2} |u|^{-\mu}\Gamma(\mu)=(iu)^{-\mu}\Gamma(\mu), \end{align*} Combining the foregoing limit with (\ref{eq:hat l beta<0}) and (\ref{eq:hat l beta>0}), we deduce \[ \lim_{n\rightarrow\infty}\widehat{l}^\beta_{t,n}=\widehat{l}^\beta_{t}(u) :=(e^{itu}-1) (iu)^{-\beta-1} \Gamma(\beta). \] Recall that there exists a subsequence $\widehat{g}_{n_k}$ converges a.e.\ to the pseudo-Fourier transform $\widehat{g}$ as $k\rightarrow\infty$ (Proposition \ref{Pro:ComputeFT}). So $\widehat{l}_{t,n_k}(\langle\mathbf{u},\mathbf{1}\rangle) \widehat{g}_{n_k}(-\mathbf{u})$ converges to $\widehat{l}_{t}(\langle\mathbf{u},\mathbf{1}\rangle) \widehat{g}(-\mathbf{u})$ for a.e.\ $\mathbf{u}\in \mathbb{R}^k$. But at the same time $\widehat{l}_{t,n_k}(\langle\mathbf{u},\mathbf{1}\rangle) \widehat{g}_{n_k}(-\mathbf{u})$ converges in $L^2(\mathbb{R})^k$ to $\widehat{h}_{t}^\beta$. So we identify $\widehat{h}_t^\beta$ with the expression in (\ref{eq:hat h_t^beta}) \end{proof} \begin{Rem} By Proposition \ref{Pro:Time<->Spec}, we get a spectral representation $Z^\beta(t)\overset{f.d.d.}{=}\widehat{I}(\widehat{h}^\beta_{t})$. The kernel (\ref{eq:hat h_t^beta}) in the spectral-domain has been considered by \citet{major:1981:limit} in the special case where $\widehat{g}(\mathbf{u})=c\prod_{j=1}^k |u_j|^{-d}$ is the kernel for the spectral representation of Hermite process. \end{Rem} \section{Discrete chaos processes}\label{Sec:poly process} In this section, we introduce a class of stationary sequence which converges to a generalized Hermite process of Class (L) as defined in Definition \ref{Def:class limit}. First we define the \emph{discrete chaos}, or the \emph{discrete multiple stochastic integral}, $Q_k(\cdot ;\boldsymbol{\epsilon})$ with respect to the i.i.d.\ noise $\boldsymbol{\epsilon}:=(\epsilon_i,i\in \mathbb{Z})$. Let $h$ be a function defined in $\mathbb{Z}^k$ such that $\sum'_{\mathbf{i}\in \mathbb{Z}^k} h(\mathbf{i})^2<\infty$, where $'$ indicate the exclusion of the diagonals $i_p=i_q$, $p\neq q$. The following sum \begin{align}\label{eq:Q_k(h)} Q_k(h)=Q_k(h,\boldsymbol{\epsilon})=\sum'_{(i_1,\ldots,i_k)\in \mathbb{Z}^k} h(i_1,\ldots,i_k) \epsilon_{i_1}\ldots \epsilon_{i_k}=\sum'_{\mathbf{i}\in \mathbb{Z}^k} h(\mathbf{i})\prod_{p=1}^k \epsilon_{i_p}, \end{align} is called the \emph{discrete chaos} of order $k$. 
It is easy to see that switching the arguments, say $i_p$ and $i_q$, $p\neq q$, of $h(i_1,\ldots,i_k)$, does not change $Q_k(h)$. So if $\tilde{h}$ is the symmetrization $h$, then $Q_k(h)=Q_k(\tilde{h})$. The discrete chaos is related to Wiener chaos by a limit theorem. Suppose now we have a sequence of function vectors $\mathbf{h}_n=(h_{1,n},\ldots,h_{j,n})$ where each $h_{j,n}\in L^2(\mathbb{Z}^{k_j})$, $j=1,\ldots,J$. The following proposition concerns the convergence of the discrete chaos to the Wiener chaos: \begin{Pro}\label{Pro:Poly->Wiener} Let $\tilde{h}_{j,n}(\mathbf{x})=n^{k_j/2}h_{j,n}\left([n\mathbf{x}]+\mathbf{c}_j\right)$, $j=1,\ldots,J$, where $\mathbf{c}_j\in \mathbb{Z}^k$. Suppose that there exists $h_j\in L^2(\mathbb{R}^{k_j})$, such that $$ \|\tilde{h}_{j,n}-h_j\|_{L^2(\mathbb{R}^{k_j})}\rightarrow 0 $$ as $n\rightarrow\infty$. Then, as $n\rightarrow\infty$, \begin{align*} \mathbf{Q}:=\Big(Q_{k_1}(h_{1,n}),\ldots,Q_{k_J}(h_{J,n})\Big) \overset{d}{\rightarrow} \mathbf{I}:=\Big(I_{k_1}(h_1),\ldots,I_{k_J}(h_J)\Big), \end{align*} where each $I_{k_j}(\cdot)$, $j=1,\ldots,J$, denotes the $k_j$-tuple Wiener-It\^o integral with respect to the same standard Brownian motion $W$. \end{Pro} For a proof, we refer the reader to the proof of Proposition 14.3.2 of \citet{giraitis:koul:surgailis:2009:large} on the univariate case. The proof for the multivariate case (corresponding to Proposition 14.3.3 of \citet{giraitis:koul:surgailis:2009:large}) is similar once the Cr\'amer-Wald Device is applied. The difference between Proposition \ref{Pro:Poly->Wiener} and Proposition 14.3.3 of \citet{giraitis:koul:surgailis:2009:large} is that we add the shift $\mathbf{c}_j$ for more flexibility. This extension requires only an easy modification to the proof. The causal \emph{discrete chaos process} of order $k\ge 1$ is a stationary sequence $\{X(n),n\in \mathbb{Z}\}$ defined by: \begin{equation}\label{eq:Def PolyProcess} X(n)=\sum'_{0<i_1,\ldots,i_k<\infty} a(i_1,\ldots,i_k)\epsilon_{n-i_1}\ldots \epsilon_{n-i_k}=\sum'_{-\infty<i_1,\ldots,i_k<n} a(n-i_1,\ldots,n-i_k)\epsilon_{i_1}\ldots \epsilon_{i_k}, \end{equation} where $'$ indicates that the sum excludes the diagonals $i_p=i_q$, $p\neq q$, $\{\epsilon_n\}$ is an i.i.d.\ sequence with mean $0$ and variance $1$, $a(\mathbf{i})$ is a function on $\mathbb{Z}^k$, and we require that it satisfies $\sum'_{\mathbf{i}>\mathbf{0}} a(\mathbf{i})^2<\infty$, so that $X(n)$ is well-defined in the $L^2(\Omega)$-sense. Note that when $k=1$, $X(n)$ is plainly a linear process. Due to the off-diagonality, the autocovariance of $\{X(n)\}$ is given by the simple formula \begin{equation}\label{eq:ACF} \gamma(n):=\mathrm{Cov}(X(n),X(0))=k!\sum'_{\mathbf{i}> \mathbf{0}}\tilde{a}(\mathbf{i})\tilde{a}(\mathbf{i}+|n|\mathbf{1}), \end{equation} where $\tilde{a}(\cdot)$ is the symmetrization of $a(\cdot)$. We now focus on the following case: \begin{equation}\label{eq:a=gL} a(\mathbf{i})=g(\mathbf{i})L(\mathbf{i}), \end{equation} where $g$ is a generalized Hermite kernel of Class (L) defined in Definition \ref{Def:class limit}, and $L$ is a bounded function on $\mathbb{Z}_+^k$ which satisfies the following: for any $\mathbf{x}\in\mathbb{R}_+^k$ and for any bounded $\mathbb{Z}^k$-valued function $\mathbf{B}(\cdot)$ defined on $\mathbb{Z}_+$, we have \begin{equation}\label{eq:L(ni)->1} L([n\mathbf{x}]+\mathbf{B}(n))\rightarrow 1,~\text{as }n\rightarrow \infty. 
\end{equation} Note that $X(n)$ is well-defined in $L^2(\Omega)$ since $\sum_{\mathbf{i}\in \mathbb{Z}^k_+} g^*(\mathbf{i})^2<\infty$, where $g^*$ is a linear combination of terms of the form $\prod_{j=1}^k x_j^{\gamma_j}$ with every $\gamma_j<-1/2$, \begin{Rem} Note that the boundedness of $L$ and (\ref{eq:L(ni)->1}) are strictly weaker than assuming that $L(\mathbf{i})\rightarrow 1$ as $\|\mathbf{i}\|\rightarrow \infty$ for some norm $\|\cdot\|$ on $\mathbb{R}^k$ (recall that norms are equivalent in the finite-dimensional space). Indeed, consider $$ L(i_1,i_2)= \begin{cases} 2 & \text{ if } i_2=1;\\ 1 & \text{ otherwise}. \end{cases} $$ Suppose that $\mathbf{B}$ is bounded by $M$. Then $L([n\mathbf{x}]+\mathbf{B}(n))=1$ for large $n$. On the other hand, consider $\|\mathbf{i}\|=\max(i_1,i_2)$. Then if $(i_1,i_2)=(i_1,1)$, $i_1\rightarrow \infty$, we have $\|\mathbf{i}\|=i_1\rightarrow\infty$ but $L(i_1,i_2)=L(i_1,1)=2$. \end{Rem} \begin{Rem} In practice, Relation (\ref{eq:L(ni)->1}) implies that for any fixed $\mathbf{x}\in \mathbb{R}_+^k$ and $\mathbf{c}\in \mathbb{Z}_+^k$, $L([n\mathbf{x}]+\mathbf{c})\rightarrow 1$ as $n\rightarrow \infty$. \end{Rem} The following Proposition shows that one can get long-range dependence if $g$ is of Class (L). \begin{Pro}\label{Pro:LRD ACF} If $a(\mathbf{i})$ is as given in (\ref{eq:a=gL}), where $g$ has homogeneity exponent $\alpha\in (-1/2-k/2,-k/2)$ (or $2\alpha+k\in (-1,0)$), then the autocovariance of the discrete chaos process $\{X(n)\}$ satisfies \begin{equation}\label{eq:ACF asymp} \gamma(n)\sim k! C_{\tilde{g}} n^{2H-2}, \text{ as }n\rightarrow\infty, \end{equation} where $C_{\tilde{g}}=\int_{\mathbb{R}_+^k}\tilde{g}(\mathbf{x})\tilde{g}(\mathbf{1}+\mathbf{x})>0$, $H=\alpha+k/2+1\in(1/2,1)$, with $\tilde{g}$ being the symmetrization of $g$. In addition, as $N\rightarrow\infty$, \begin{equation}\label{eq:Var asymp} \mathrm{Var}[\sum_{n=1}^NX(n)] \sim \frac{k!C_{\tilde{g}}}{H(2H-1)} N^{2H}. \end{equation} \end{Pro} \begin{proof} Assume without loss of generality that $g$ is already symmetric. \begin{align*} (k!)^{-1}\gamma(n) =&\sum'_{\mathbf{i}>\mathbf{0}} g(\mathbf{i})g(n\mathbf{1}+\mathbf{i})L(n\mathbf{1}+\mathbf{i})L(\mathbf{i})\\ =&n^{2\alpha+k}\sum'_{\mathbf{i}>\mathbf{0}} g\left( \frac{\mathbf{i}}{n}\right)g\left(\mathbf{1}+\frac{\mathbf{i}}{n}\right)L(\mathbf{i})L(n\mathbf{1}+\mathbf{i})\frac{1}{n^k}\\ =&n^{2\alpha+k} \int_{\mathbb{R}^k_+}\mathrm{1}_{D_n^c}(\mathbf{x})g_n(\mathbf{x})g_n(1+\mathbf{x})d\mathbf{x}, \end{align*} where $g_n(\mathbf{x})=g(\frac{[n\mathbf{x}]+\mathbf{1}}{n})L([n\mathbf{x}]+\mathbf{1})$, $D_n^c=\{\mathbf{x}\in\mathbb{R}_+^k, ~[nx_p]\neq [nx_q],~p\neq q\in \{1,\ldots,k\}\}$. Note that $\mathrm{1}_{D_n}(\mathbf{x})=1$ as $n$ becomes large enough, for any $\mathbf{x}\in D^c:=\{\mathbf{x}\in \mathbb{R}_+^k, x_p\neq x_q, ~p\neq q\in \{1,\ldots,k\}\}$, and that the diagonal set $D:=\mathbb{R}_+^k\setminus D^c$ has measure $0$. Since $g$ belongs to Class (L), $g$ is continuous a.e., so $g_n(\mathbf{x})\rightarrow g(\mathbf{x})$ a.e.\ as $n\rightarrow\infty$. Furthermore, there exists $g^*(\mathbf{x})$ which is a linear combination of the form $\prod_{j=1}^k x_j^{\gamma_j}$ (Condition 2 of Definition \ref{Def:class limit}), so that for a.e.\ $\mathbf{x}\in \mathbb{R}_+^k$, $$ |g_n(\mathbf{x})|\le g^*\left(\frac{[n\mathbf{x}]+\mathbf{1}}{n}\right)\le g^*(\mathbf{x}), $$ since $L$ is bounded and $g^*$ is decreasing in its every variable. 
Note that $\int_{\mathbb{R}_+^k} g^*(\mathbf{x})g^*(\mathbf{1}+\mathbf{x})d\mathbf{x}<\infty$, and $g$ is a.e.\ continuous. So it remains to apply the Dominated Convergence Theorem. Finally, (\ref{eq:Var asymp}) follows by first noting that $$ \mathrm{Var}[\sum_{n=1}^NX(n)]=\sum_{n}(N-|n|)\gamma(n)=N\sum_{|n|<N} \gamma(n)-\sum_{|n|<N} |n|\gamma(n), $$ and then using the asymptotics of $\gamma(n)$ just derived. \end{proof} \section{Hypercontractivity for infinite discrete chaos}\label{Sec:hypercontract} Let $X_M$ be a finite discrete chaos defined as \begin{equation}\label{eq:finite chaos} X_M=\sum_{-M\mathbf{1}\le \mathbf{i}\le M\mathbf{1}}' h(\mathbf{i}) \epsilon_{i_1}\ldots \epsilon_{i_k}, \end{equation} where $h(\mathbf{i})=h(i_1,\ldots,i_k)$ is a function on $\mathbb{Z}^k$, $M\in \mathbb{Z}_+$, and we assume that $\{\epsilon_i\}$ is a sequence of i.i.d.\ variables with $\mathbb{E} \epsilon_i=0$, $\mathbb{E} \epsilon_i^2=1$. Then we have the following moment-comparison inequality, also called ``hypercontractivity inequality'': \begin{Pro} Suppose that $\mathbb{E} |\epsilon_i|^p <\infty$ with $p\ge 2$. Then \begin{equation}\label{eq:hypercontrac} \mathbb{E} [|X_M|^p]^{1/p} \le d_{p,k} \mathbb{E} [|X_M|^2]^{1/2}, \end{equation} where $d_{p,k}$ is a constant depending only on $p$ and $k$. \end{Pro} For a proof of (\ref{eq:hypercontrac}), where $M$ is finite, see Lemma 4.3 of \citet{krakowiak:szulga:1986:random}, where the so-called MPZ($p$) condition (Definition 1.5 of \citet{krakowiak:szulga:1986:random}) is trivially satisfied since the $\epsilon_i$'s are identically distributed. Now we extend (\ref{eq:hypercontrac}) to the case $M=\infty$. The result is used in Theorem \ref{Thm:CLT}, \ref{Thm:Frac NCLT beta<0} and \ref{Thm:multi limit} below for proving tightness in $D[0,1]$. \begin{Pro}\label{Pro:Hypercontract} Suppose that $\sum_{\mathbf{i}\in \mathbb{Z}^k}'h(\mathbf{i})^2<\infty$. Let $X=\sum_{\mathbf{i}\in \mathbb{Z}^k}'h(\mathbf{i}) \prod_{p=1}^k\epsilon_{i_p}$. If for some $p'>p>2$, $\mathbb{E} |\epsilon_i|^{p'}<\infty$, then one has \begin{equation}\label{eq:inf hypercontrac} \mathbb{E} [|X|^p]^{1/p} \le d_{p,k} \mathbb{E} [|X|^2]^{1/2} \end{equation} \end{Pro} \begin{proof} Let $X_M$ be the truncated finite chaos as in (\ref{eq:finite chaos}). The condition on $h$ implies that $X_M\rightarrow X$ in $L^2(\Omega)$. Moreover, one has by (\ref{eq:hypercontrac}), \[ \mathbb{E} [|X_M|^{p'}] \le d_{p',k}^{p'} \mathbb{E} [|X_M|^2]^{p'/2} \le d_{p',k}^{p'} \left(\sum_{\mathbf{i}\in \mathbb{Z}^k}' h(\mathbf{i})^2 \right)^{p'/2}. \] This implies that $\{|X_M|^p,M\ge 1\}$ and $\{|X_M|^2,M\ge 1\}$ are uniformly integrable, implying convergence of the corresponding moments. So one can then let $M\rightarrow \infty$ on both sides of (\ref{eq:hypercontrac}) and obtain (\ref{eq:inf hypercontrac}). \end{proof} \section{Joint convergence of the discrete chaoses} Our goal here is to obtain non-central limit theorems for the discrete chaos process introduced in Section \ref{Sec:poly process}. We shall, in fact, prove both a central limit theorem for the SRD case (getting Brownian motion as limit) and a non-central limit theorem for the LRD case (getting the generalized Hermite process introduced in Section \ref{Sec:GenHermProc} as limit). We also consider non-central limit theorems leading to the fractionally filtered generalized Hermite process introduced in Section \ref{Subsec:frac}. Finally, we derive a multivariate limit theorem which mixes central and non-central limit theorems. 
We first define here precisely what SRD and LRD stand for in the context of discrete chaos process. Recall that $\tilde{a}(\cdot)$ denotes the symmetrization of $a(\cdot)$. \begin{Def}\label{Def:SRD LRD} We say a discrete chaos process $\{X(n)\}$ given in (\ref{eq:Def PolyProcess}) is \begin{itemize} \item SRD, if $\sum_{n=-\infty}^{\infty} \sum_{\mathbf{i}>\mathbf{0}}' |\tilde{a}(\mathbf{i})\tilde{a}(\mathbf{i}+|n|\mathbf{1})|<\infty$ and $\sum_{n=-\infty}^\infty \gamma(n)>0$; \item LRD, if $a(\mathbf{i})=g(\mathbf{i})L(\mathbf{i})$ as given in (\ref{eq:a=gL}). In particular, $g$ is a generalized Hermite kernel of Class (L). \end{itemize} \begin{Rem} The definitions of SRD and LRD in Definition \ref{Def:SRD LRD} are distinct. Indeed, the SRD condition implies that $\sum_n|\gamma(n)|<\infty$, while LRD yields $\sum_n|\gamma(n)|=\infty$ by Proposition \ref{Pro:LRD ACF}. \end{Rem} \end{Def} \subsection{Central limit theorem}\label{Subsec:CLT} \begin{Thm}\label{Thm:CLT} If a discrete chaos process $\{X(n)\}$ given in (\ref{eq:Def PolyProcess}) is SRD in the sense of Definition \ref{Def:SRD LRD}, then \begin{equation}\label{eq:partial sum SRD} \frac{1}{N^{1/2}}\sum_{n=1}^{[Nt]}X(n)\overset{f.d.d.}{\longrightarrow} \sigma B(t) \end{equation} where $B(t)$ is a standard Brownian motion, and $\sigma^2=\sum_{n=-\infty}^\infty \gamma(n)$. \end{Thm} \begin{proof} Assume without loss of generality that $a(\cdot)$ is symmetric. The proof is similar to the proof of Theorem \ref{Thm:Polyform} found on p.108 of \citet{giraitis:koul:surgailis:2009:large}, so we give only a sketch. The central idea is to introduce the $m$-truncation of $X(n)$, namely, $X^{(m)}(n):=\sum'_{\mathbf{0}<\mathbf{i}\le m\mathbf{1}}a(\mathbf{i})\prod_{j=1}^k\epsilon_{n-i_j}$, and then let $m\rightarrow\infty$. The sequence $\{X^{(m)}(n),n\in \mathbb{Z}\}$ is $m$-dependent, so the classical invariance principle applies (\citet{billingsley:1956:invariance} Theorem 5.2). The long-run variance $\sigma^2=\sum_n\gamma(n)$ is a standard result. We now check that the $L^2(\Omega)$ approximation is valid as $m\rightarrow\infty$, that is, \begin{equation}\label{eq:m->inf} \lim_{m\rightarrow\infty}\sup_{N\in \mathbb{Z}_+}\mathrm{Var}[Y^{(m)}_N(t)-Y_N(t)]=0, ~t>0, \end{equation} where $Y^{(m)}_N(t)=\frac{1}{\sqrt{N}}\sum_{n=1}^{[Nt]}X^{(m)}(n)$ and $Y_N(t)=\frac{1}{\sqrt{N}}\sum_{n=1}^{[Nt]}X(n)$, which is similar to (4.8.7) of \citet{giraitis:koul:surgailis:2009:large}. Indeed, \begin{align}\label{eq:m->inf expr} \mathrm{Var}[Y_N^{(m)}(t)-Y_{N}(t)]&= \frac{1}{N}\mathrm{Var}\left[\sum_{n=1}^{[Nt]} (X^{(m)}_n-X_n)\right]=\frac{[Nt]}{N}\sum_{|n|<[Nt]}\gamma_{m}(n)(1-\frac{|n|}{[Nt]})\le t\sum_{n=-\infty}^\infty |\gamma_m(n)|, \end{align} where \begin{align*} \gamma_m(n):=\mathbb{E}(X_{n}-X_{n}^{(m)})(X_{0}-X_{0}^{(m)}) =k!\sum'_{\mathbf{i}>m\mathbf{1}}a(\mathbf{i})a(n\mathbf{1}+\mathbf{i}). \end{align*} For a fixed $n\in \mathbb{Z}$, $\gamma_m(n)\rightarrow 0$ as $m\rightarrow\infty$, and $|\gamma_m(n)|\le \rho(n)$, where $ \rho(n)=k!\sum'_{\mathbf{i}>\mathbf{0}}|a(\mathbf{i})a(\mathbf{i}+n\mathbf{1})|, $ which satisfies $\sum_n \rho(n)<\infty$ by the SRD assumption in Definition \ref{Def:SRD LRD}. Since the bound in (\ref{eq:m->inf expr}) does not depend on $N$, the Dominated Convergence Theorem applies and thus (\ref{eq:m->inf}) holds. \end{proof} To strengthen the conclusion of Theorem \ref{Thm:CLT} to weak convergence, we have to make some additional assumptions to prove tightness. 
\begin{Thm}\label{Thm:CLTWeak} Theorem \ref{Thm:CLT} holds with $\overset{f.d.d.}{\longrightarrow}$ replaced by weak convergence $\Rightarrow$ in $D[0,1]$, if either of the following holds: \begin{enumerate} \item There exists $\delta>0$, such that $\mathbb{E}(|\epsilon_i|^{2+\delta})<\infty$; \item There exists an $M>0$ such that $a(\mathbf{i})=0$ whenever $\mathbf{i}>M\mathbf{1}$. \end{enumerate} \end{Thm} \begin{proof} Look first at case 1. Let \[Y_N(t):=\frac{1}{\sqrt{N}}\sum_{n=1}^{[Nt]}X(n)\] Select $p\in (2,2+\delta)$. By Proposition \ref{Pro:Hypercontract}, one has \begin{equation}\label{eq:hypercontract} \mathbb{E}[|Y_N(t)-Y_N(s)|^{p}]\le c \mathbb{E}[|Y_N(t)-Y_N(s)|^2]^{p/2}, \end{equation} where $c$ is some constant which doesn't depend on $s,t$ or $N$. Note that $\sum_n |\gamma(n)|<\infty$ due to SRD assumption, we have \begin{align}\label{eq:secondmoment} &\mathbb{E}\left[\left|Y_N(t)-Y_N(s)\right|^2\right]=\frac{1}{N}\mathbb{E}[|\sum_{n=1}^{[Nt]-[Ns]}X(n)|^2] \notag\\=& \frac{[Nt]-[Ns]}{N}\sum_{|n|<[Nt]-[Ns]}\left(1-\frac{|n|}{[Nt]-[Ns]}\right)\gamma(n)\le \frac{[Nt]-[Ns]}{N}\sum_{n=-\infty}^\infty |\gamma(n)|. \end{align} Combining (\ref{eq:hypercontract}) and (\ref{eq:secondmoment}), we have for some constant $C>0$ that \[ \mathbb{E}[|Y_N(t)-Y_N(s)|^p]\le cE[|Y_N(t)-Y_N(s)|^2]^{p/2}\le C |F_N(t)-F_N(s)|^{p/2}, \] where $F_N(t)=[Nt]/N$. Now by applying Lemma 4.4.1 and Theorem 4.4.1 of \citet{giraitis:koul:surgailis:2009:large}, noting that $p/2>1$, we conclude that tightness holds. For case 2, $X(n)$ is $M$-dependent, so by Theorem 5.2 of \citet{billingsley:1956:invariance} tightness holds as well. \end{proof} \subsection{Non-central limit theorem}\label{Subsec:NCLT} The following theorem shows that in the LRD case, the discrete chaos process converges weakly to a generalized Hermite process. \begin{Thm}\label{Thm:NCLT} If a discrete chaos process $\{X(n)\}$ given in (\ref{eq:Def PolyProcess}) is LRD in the sense of Definition \ref{Def:SRD LRD}, then \begin{equation}\label{eq:partial sum LRD} \frac{1}{N^H}\sum_{n=1}^{[Nt]}X(n)\Rightarrow Z(t), \end{equation} in $D[0,1]$, where $Z(t)$ is the generalized Hermite process in (\ref{eq:Z(t)}), and $$H=\alpha+k/2+1\in \left(\frac{1}{2},1\right),$$ where $\alpha\in (-1/2-k/2,-k/2)$ is the homogeneity exponent of $g$ and $k$ is the order of $\{X(n)\}$. \end{Thm} \begin{proof} Tightness in $D[0,1]$ is standard since $H>1/2$. We only need to show convergence in finite-dimensional distributions. Assume for simplicity that $a(\mathbf{i})=g(\mathbf{i})$ or equivalently $L(\mathbf{i})=1$. The inclusion of a general $L$ can be done as in the proof of Proposition \ref{Pro:LRD ACF}. We want to show that \begin{align}\label{eq:Q_k(h)->I} \frac{1}{N^H}\sum_{n=1}^{[Nt]} X(n)=\sum'_{(i_1,\ldots,i_k)\in \mathbb{Z}^k}\frac{1}{N^{\alpha+k/2+1}}\sum_{n=1}^{[Nt]}g(n\mathbf{1}-\mathbf{i})\mathrm{1}_{\{n\mathbf{1}>\mathbf{i}\}}\epsilon_{i_1}\ldots\epsilon_{i_k}=:Q_k(h_{t,N})\overset{f.d.d.}{\longrightarrow} Z(t), \end{align} where $Q_k(\cdot)$ is defined in (\ref{eq:Q_k(h)}). 
Now in view of Proposition \ref{Pro:Poly->Wiener}, we only need to check that \begin{equation}\label{eq:h tilde L2 conv h} \|\tilde{h}_{t,N}(\mathbf{x})-h_{t}(\mathbf{x})\|_{L^2(\mathbb{R}^k)}\rightarrow 0, \end{equation} where \[h_t(\mathbf{x})=\int_0^t g(s\mathbf{1}-\mathbf{x})\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}}ds,\] and \begin{align*} \tilde{h}_{t,N}(\mathbf{x}):&=N^{k/2} h_{t,N}([N\mathbf{x}]+\mathbf{1}) =\frac{1}{N^{\alpha+1}}\sum_{n=1}^{[Nt]}g(n\mathbf{1}-[N\mathbf{x}]-\mathbf{1})\mathrm{1}_{\{n\mathbf{1}>[N\mathbf{x}]+\mathbf{1}\}}\\&= \sum_{n=1}^{[Nt]}g\left(\frac{n\mathbf{1}-[N\mathbf{x}]-\mathbf{1}}{N}\right)\mathrm{1}_{\{n\mathbf{1}>[N\mathbf{x}]+\mathbf{1}\}} \frac{1}{N}=\int_0^t g\left(\frac{[Ns\mathbf{1}]-[N\mathbf{x}]}{N}\right)\mathrm{1}_{\{[Ns\mathbf{1}]> [N\mathbf{x}]\}} ds - R_N(t,\mathbf{x}). \end{align*} where \[R_N(t,\mathbf{x})=\frac{Nt-[Nt]}{N}g\left(\frac{[Nt\mathbf{1}]-[N\mathbf{x}]}{N}\right)\mathrm{1}_{\{[Nt\mathbf{1}]> [N\mathbf{x}]\}}.\] Note that we have replaced $\mathbf{i}$ by $[N\mathbf{x}]+\mathbf{1}$ and $n$ by $[Ns]+1$. By Condition 2 in Definition \ref{Def:class limit}, there exists a positive generalized Hermite kernel $g^*(\mathbf{x})$ which is a linear combination of the form $\prod_{j=1}^k x_j^{\gamma_j}$, such that $|g(\mathbf{x})|\le g^*(\mathbf{x})$ for a.e.\ $\mathbf{x}\in \mathbb{R}_+^k$. We assume without loss of generality that $g^*(\mathbf{x})=\prod_{j=1}^k x_j^{\gamma_j}$. Since $[Ns\mathbf{1}]>[N\mathbf{x}]$ implies $s\mathbf{1}>\mathbf{x}$, we have \begin{equation}\label{eq:g<=g^*} \left|g\left(\frac{[Ns\mathbf{1}]-[N\mathbf{x}]}{N}\right)\right|\mathrm{1}_{\{[Ns\mathbf{1}]>[N\mathbf{x}]\}}\le \left(\prod_{j=1}^k\left(\frac{[Ns]-[Nx_j]}{N}\right)^{\gamma_j}\mathrm{1}_{\{[Ns]>[Nx_j]\}}\right)\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}} ~a.e.. \end{equation} Moreover, if $0<[Ns]-[Nx]= k\in \mathbb{Z}_+$, then $Ns-1-Nx\le k$, and hence $s-x\le \frac{k+1}{N}$. So we have for any $\gamma<0$ that \begin{align}\label{eq:useful} \sup_{N\ge 1,[Ns]>[Nx]} \left(\frac{[Ns]-[Nx]}{N}\right)^{\gamma}(s-x)^{-\gamma}\le & \sup_{N\ge 1,[Ns]-[Nx]=k\ge 1} \left(\frac{k}{N}\right)^{\gamma}(s-x)^{-\gamma}\notag\\ \le& \sup_{N\ge 1,k\ge 1} \left(\frac{k}{N}\right)^{\gamma}\left(\frac{k+1}{N}\right)^{-\gamma} =2^{-\gamma}. \end{align} So we have for some constant $C>0$, \begin{equation}\label{eq:dominate} \left|g\left(\frac{[Ns\mathbf{1}]-[N\mathbf{x}]}{N}\right)\right|\mathrm{1}_{\{[Ns\mathbf{1}]>[N\mathbf{x}]\}}\le C g^*(s\mathbf{1}-\mathbf{x})\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}}. \end{equation} Since $g(\mathbf{x})$ by assumption of Class (L) is continuous a.e., $g\left(\frac{[Ns\mathbf{1}]-[N\mathbf{x}]}{N}\right)\mathrm{1}_{\{[Ns\mathbf{1}]>[N\mathbf{x}]\}}$ converges a.e.\ to $g(s\mathbf{1}-\mathbf{x})\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}}$ as $N\rightarrow\infty$. In view of (\ref{eq:dominate}), and noting that $\int_{\mathbb{R}^k}d\mathbf{x}\left(\int_0^t g^*(s\mathbf{1}-\mathbf{x})\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}}ds\right)^2<\infty$ because $g^*$ is a generalized Hermite kernel, one then applies the Dominated Convergence Theorem to conclude the $L^2$ convergence of $\int_0^t g\left(\frac{[Ns\mathbf{1}]-[N\mathbf{x}]}{N}\right)\mathrm{1}_{\{[Ns\mathbf{1}]> [N\mathbf{x}]\}} ds$ to $h_t(\mathbf{x})$. For the remainder term $R_{N,t}(\mathbf{x})$, one has \[ \|R_{N,t}(\mathbf{x})\|_{L^2(\mathbb{R}^k)}^2= N^{-2H}(Nt-[Nt])^2 \sum_{\mathbf{i}>\mathbf{0}} g\left(\mathbf{i}\right)^2 \rightarrow 0\] as $N\rightarrow\infty$. The proof is thus complete. 
\end{proof} \begin{Eg} Consider the kernel $g(\mathbf{x})$ defined in (\ref{eq:eg}). It belongs to Class (L) by Example \ref{eg:L}. Hence by Theorem \ref{Thm:NCLT}, we have the following weak convergence in $D[0,1]$: \begin{align*} &\frac{1}{N^{H}}\sum_{n=1}^{[Nt]} \sum'_{(i_1,\ldots,i_k)\in \mathbb{Z}_+^k} \left( \frac{ \prod_{j=1}^k i_j}{\sum_{j=1}^k i_j^{k-\alpha}}\vee \prod_{j=1}^k i_j^{\alpha/k}\right) ~\epsilon_{n-i_1}\ldots \epsilon_{n-i_k} \Rightarrow \\&\int_{\mathbb{R}^k}' ~ \int_0^{t} ~ \left( \frac{ \prod_{j=1}^k (s-x_j)_+}{\sum_{j=1}^k (s-x_j)_+^{k-\alpha}}\right)\vee \left(\prod_{j=1}^k (s-x_j)_+^{\alpha/k}\right) ~ds~ W(dx_1)\ldots W(dx_k), \end{align*} where $H=\alpha+k/2+1$. \end{Eg} \subsection{Non-central limit theorem with fractional filter } In the spirit of \citet{rosenblatt:1979:some} and \citet{major:1981:limit}, we consider here the non-central limit theorem for the fractionally filtered generalized Hermite process introduced in Section \ref{Subsec:frac}. Assume throughout that the generalized Hermite kernel $g$ is of Class (L) (Definition \ref{Def:class limit}). \begin{Def} \label{Def:fLRD} Let $X(n)=\sum_{\mathbf{i}<n\mathbf{1}}' a(n\mathbf{1}-\mathbf{i})\prod_{j=1}^k \epsilon_{i_j}$ be the same discrete chaos process as in Theorem \ref{Thm:NCLT}. We say that a discrete process $U(n)$ is fLRD (fractionally-filtered LRD discrete chaos process) if \begin{equation}\label{eq:frac U} U(n)=\sum_{m=1}^\infty C_m X(n-m)=\sum_{m=-\infty}^{n-1}C_{n-m}\sum_{\mathbf{i}<m\mathbf{1}}' a(m\mathbf{1}-\mathbf{i})\prod_{j=1}^k \epsilon_{i_j}, \end{equation} where $a(\mathbf{i})=g(\mathbf{i})L(\mathbf{i})$ as in (\ref{eq:a=gL}) with $g$ being a generalized Hermite kernel in Class (L), $$ C_n\sim c n^{\beta-1} $$ as $ n\rightarrow\infty,$ and where, as in Proposition \ref{Pro:beta range}, \begin{equation}\label{e:beta3} \beta\in \left(-\frac{2\alpha+k+2}{2},-\frac{2\alpha+k}{2}\right). \end{equation} \end{Def} $U(n)$ is well-defined in the $L^2(\Omega)$ sense. Indeed, we have the following: \begin{Lem}\label{Lem:U well defined} We have \begin{equation*} \sum_{\mathbf{i}\in\mathbb{Z}^k}' \left(\sum_{m<n} |C_{n-m}a(m\mathbf{1}-\mathbf{i})|\mathrm{1}_{\{m\mathbf{1}>\mathbf{i}\}}\right)^2<\infty. \end{equation*} \end{Lem} \begin{proof} Note that $a(\cdot)=g(\cdot)L(\cdot)$, where $g$ is of Class (L). So by Definition \ref{Def:class limit}, there exists a $g^*(\mathbf{x})>0$ which is a finite linear combination of the form $\prod_{j=1}^k x_j^{\gamma_j}$, such that $|g(\mathbf{x})|<g^*(\mathbf{x})$. Note that $L$ is bounded and $|C_n|\le cn^{\beta-1}$. Set $n=-1$ without loss of generality due to stationarity. We hence need to show that \begin{equation}\label{eq:square sum frac} \sum_{\mathbf{i}\in\mathbb{Z}^k} \left(\sum_{m<-1} (-m)^{\beta-1} g^*(m\mathbf{1}-\mathbf{i})1_{\{m\mathbf{1}>\mathbf{i}\}}\right)^2<\infty. \end{equation} It suffices to show this when $\beta>0$, since for any $\beta'\le 0$ and $\beta>0$, $(-m)^{\beta'-1}\le (-m)^{\beta-1}$ for all $m<-1$. The preceding sum can be rewritten as an integral by replacing $m$ by $[s]$ and $\mathbf{i}$ by $[\mathbf{x}]$: \begin{align}\label{eq:int approx sum} \int_{\mathbb{R}^k}1_{D^c} d\mathbf{x}\left( \int_{-\infty}^{-1} ds (-[s])^{\beta-1} g^*([s\mathbf{1}]-[\mathbf{x}])1_{\{[s\mathbf{1}]>[\mathbf{x}]\}} \right)^2, \end{align} where $D^c=\{\mathbf{x}\in \mathbb{R}^k:~ [x_p]\neq [x_q], ~p\neq q\}$. 
By $[s]\le s$, $\beta-1<0$, and (\ref{eq:dominate}), (\ref{eq:int approx sum}) is bounded by (up to a constant) \begin{align*} &\int_{\mathbb{R}^k} d\mathbf{x}\left( \int_{-\infty}^{-1} ds (-s)_+^{\beta-1} g^*(s\mathbf{1}-\mathbf{x})1_{\{s\mathbf{1}>\mathbf{x}\}} \right)^2\\ =&\int_{-\infty}^{-1} ds(-s)^{\beta-1}\int_{0}^{-s}du(-s-u)^{\beta-1}u^{2\alpha+k} \int_{\mathbb{R}_+^k}d\mathbf{y}g^*(\mathbf{y})g^*(\mathbf{1}+\mathbf{y})\\ =& \int_{1}^{\infty}s^{2\alpha+2\beta+k-1}ds ~\mathrm{B}(\beta,2\alpha+k+1) ~ C_{g^*}<\infty, \end{align*} where we have used a change of variable similar to the lines below (\ref{eq:int h^beta_t}), and in addition the assumptions $\beta>0$, $2\alpha+k>-1$, $2\alpha+2\beta+k<0$, and $g^*$ is a generalized Hermite kernel. \end{proof} \begin{Rem} Lemma \ref{Lem:U well defined} not only shows that $U(n)$ is well-defined in $L^2(\Omega)$, it also allows changing the order of summations, which will be used in proving the non-central limit theorem below. \end{Rem} Next we want to obtain non-central limit theorems, that is, to show that the suitably normalized partial sum of $U(n)$ defined in (\ref{eq:frac U}) converges to the fractionally-filtered generalized Hermite process introduced in Section \ref{Subsec:frac}. We need to distinguish two cases: $\beta>0$ (which increases $H$) and $\beta<0$ (which decreases $H$). We first consider $\beta>0$: \begin{Thm}\label{Thm:frac non central beta>0} Let $U(n)$ be as in (\ref{eq:frac U}) with $\beta\in (0,-\alpha-k/2)$. Then \begin{align*} \frac{1}{N^{H}}\sum_{n=1}^{[Nt]} U(n) \Rightarrow Z^\beta(t), \end{align*} where $$1/2<\alpha+k/2+1<H=\alpha+\beta+k/2+1<1,$$ and $Z^\beta(t)$ is the fractionally-filtered generalized Hermite process defined in Theorem \ref{Thm:Frac Z beta hsssi}. It is defined using the same $g$ and $\beta$ as $U(n)$. \end{Thm} \begin{proof} Since $H>1/2$, tightness in $D[0,1]$ is standard. We now show convergence in finite-dimensional distributions. Assume for simplicity that $C_m=m^{\beta-1}$ and $L(\mathbf{i})=1$. 
By Lemma \ref{Lem:U well defined}, we are able to change the order of the summations to write: \begin{align*} \frac{1}{N^H}\sum_{n=1}^{[Nt]}U(n)=\sum_{\mathbf{i}\in \mathbb{Z}^k}' \frac{1}{N^H}\sum_{n=1}^{[Nt]}\sum_{m<n} (n-m)^{\beta-1} g(m\mathbf{1}-\mathbf{i})\mathrm{1}_{\{m\mathbf{1}>\mathbf{i}\}}\prod_{j=1}^k \epsilon_{i_j}=\sum_{\mathbf{i}\in \mathbb{Z}^k}' h_{t,N}^\beta(\mathbf{i})\prod_{j=1}^k \epsilon_{i_j} =Q_k(h^\beta_{t,N}), \end{align*} and by setting $\tilde{h}^\beta_{t,N}(\mathbf{x})=N^{k/2}h^\beta_{t,N}([N\mathbf{x}]+\mathbf{1})$, we have \begin{align*} \tilde{h}^\beta_{t,N}(\mathbf{x})=&\frac{1}{N^{\alpha+\beta+1}}\sum_{n=1}^{[Nt]} \sum_{m<n}(n-m)^{\beta-1}g\left(m\mathbf{1}-[N\mathbf{x}]-\mathbf{1}\right)\mathrm{1}_{\{m\mathbf{1}>[N\mathbf{x}]+\mathbf{1}\}}\\=& \sum_{n=1}^{[Nt]} \sum_{m<n} \left(\frac{n-m}{N}\right)^{\beta-1} g\left(\frac{m\mathbf{1}-[N\mathbf{x}]-\mathbf{1}}{N}\right)\mathrm{1}_{\{m\mathbf{1}>[N\mathbf{x}]+\mathbf{1}\}} \frac{1}{N^2} \\=&\int_0^t ds \int_\mathbb{R} dr \left(\frac{[Ns]-[Nr]}{N}\right)_+^{\beta-1}g\left(\frac{[Nr\mathbf{1}]-[N\mathbf{x}]}{N}\right)\mathrm{1}_{\{[Nr\mathbf{1}]> [N\mathbf{x}]\}}-R_{N,t}(\mathbf{x}) \\=&:\int_0^t ds \int_\mathbb{R} dr G_{N}(s,r,\mathbf{x})\mathrm{1}_{K_N}-R_{N,t}(\mathbf{x}), \end{align*} where we associate $\mathbf{i}$ with $[N\mathbf{x}]+\mathbf{1}$, $n$ with $[Ns]+1$, and $m$ with $[Nr]+1$, $$ G_N(s,r,\mathbf{x}):= \left(\frac{[Ns]-[Nr]}{N}\right)^{\beta-1}g\left(\frac{[Nr\mathbf{1}]-[N\mathbf{x}]}{N}\right), $$ $$ K_N=\{[Ns]> [Nr],[Nr\mathbf{1}]> [N\mathbf{x}]\}\subset\{s>r,r\mathbf{1}>\mathbf{x}\}, $$ and \[ R_{N,t}(\mathbf{x})=\frac{Nt-[Nt]}{N}\int_\mathbb{R} dr \left(\frac{[Nt]-[Nr]}{N}\right)_+^{\beta-1}g\left(\frac{[Nr\mathbf{1}]-[N\mathbf{x}]}{N}\right)\mathrm{1}_{\{[Nr\mathbf{1}]> [N\mathbf{x}]\}}. \] In view of Proposition \ref{Pro:Poly->Wiener}, we need to show that $\tilde{h}^\beta_{t,N}\rightarrow h^\beta_t$ and $R_{N,t}\rightarrow 0$ in $L^2(\mathbb{R}^k)$, where $$ h_t^\beta(\mathbf{x}):=\int_0^tds\int_\mathbb{R} dr (s-r)_+^{\beta-1}g(r\mathbf{1}-\mathbf{x})\mathrm{1}_{\{r\mathbf{1}>\mathbf{x}\}}. $$ Using (\ref{eq:g<=g^*}) and (\ref{eq:useful}) (note that $\beta-1<0$) as in the proof of Theorem \ref{Thm:NCLT}, we can bound the integrand as \[ |G_{N}(s,r,\mathbf{x})|\mathrm{1}_{K_N}\le C (s-r)_+^{\beta-1}g^*(r\mathbf{1}-\mathbf{x})\mathrm{1}_{\{r\mathbf{1}>\mathbf{x}\}} \] for some $C>0$, where $g^*(\mathbf{x})$ is a generalized Hermite kernel from Definition \ref{Def:class limit}. Because $$ h^*(\mathbf{x}):=\int_0^t ds\int_\mathbb{R} dr\, (s-r)_+^{\beta-1}g^*(r\mathbf{1}-\mathbf{x})\mathrm{1}_{\{r\mathbf{1}>\mathbf{x}\}} \in L^2(\mathbb{R}^k) $$ by (\ref{eq:h_t beta>0}) and Proposition \ref{Pro:beta range}, and $g$ is a.e.\ continuous, it remains to apply the Dominated Convergence Theorem to conclude $\tilde{h}^\beta_{t,N}\rightarrow h^\beta_t$ in $L^2(\mathbb{R}^k)$. For the remainder term $R_{N,t}(\mathbf{x})$, one has \begin{align*} \|R_{N,t}(\mathbf{x})\|_{L^2(\mathbb{R}^k)}^2=N^{-2H} (Nt-[Nt])^2 \sum_{\mathbf{i}\in \mathbb{Z}^k}\left(\sum_{m<[Nt]}([Nt]-m)^{\beta-1}g(m\mathbf{1}-\mathbf{i})1_{\{m\mathbf{1}>\mathbf{i}\}}\right)^{2}, \end{align*} which, in view of (\ref{eq:square sum frac}), converges to $0$ as $N\rightarrow\infty$. The proof is thus complete. \end{proof}
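Before treating the case $\beta<0$, we pause for a brief numerical illustration of the objects appearing above. The following Python/NumPy sketch simulates, for $k=2$, a truncated version of the discrete chaos $X(n)$ of Theorem \ref{Thm:NCLT} with the kernel $g(x_1,x_2)=(x_1x_2)^{\alpha/2}$, and forms the normalized partial sum $N^{-H}\sum_{n\le Nt}X(n)$. The truncation lag $M$ and all parameter values are illustrative choices of ours and are not part of the theorems.
\begin{verbatim}
import numpy as np

# Hypothetical parameters: need 2*alpha + k in (-1, 0) for the LRD regime.
rng = np.random.default_rng(0)
alpha, k = -1.25, 2
H = alpha + k / 2 + 1          # Hurst exponent, here 0.75
N, M = 500, 200                # sample size, truncation lag

i = np.arange(1, M + 1)
g = np.outer(i, i) ** (alpha / 2)   # g(i1, i2) = (i1 * i2)^(alpha / 2)
np.fill_diagonal(g, 0.0)            # primed sum: exclude i1 = i2

eps = rng.standard_normal(N + M)    # eps[t + M] plays the role of eps_t

def X(n):
    e = eps[n + M - i]              # (eps_{n-1}, ..., eps_{n-M})
    return e @ g @ e                # sum over i1, i2 of g * eps * eps

Y = np.cumsum([X(n) for n in range(1, N + 1)]) / N ** H
# Y[int(N * t) - 1] approximates Y_N(t) for t in (0, 1].
\end{verbatim}
We now treat the case $\beta<0$.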
This case is more delicate than the case $\beta>0$ in two ways: a) an additional assumption on the linear-filter coefficients $\{C_n\}$ has to be made; b) if $\beta$ is chosen such that $H<1/2$, then tightness of the normalized partial sum process also requires additional assumptions. When $\beta<0$, we have $$ \sum_{n=1}^\infty |C_n|<\infty. $$ If $f_X$ is the spectral density of $\{X(n)\}$, then the spectral density of $\{U(n)\}$ is $$ f_U(\lambda)=|C(e^{i\lambda})|^2 f_X(\lambda), $$ where $C(z):=\sum_n C_nz^n$, and the transfer function $\psi(\lambda):=|C(e^{i\lambda})|^2$ is continuous. Since $X(n)$ is LRD (see Proposition \ref{Pro:LRD ACF}), its spectral density blows up at the origin. To dampen it, we need to multiply it by a transfer function $\psi(\lambda)$ which converges to $0$ as $\lambda\rightarrow 0$. This means that $\psi(0)=|\sum_{n=1}^\infty C_n|^2=0$, and hence we need to assume $\sum_{n=1}^\infty C_n=0$. \begin{Thm}\label{Thm:Frac NCLT beta<0} Let $U(n)$ be as in (\ref{eq:frac U}) with $\beta\in (-\alpha-k/2-1,0)$, and assume in addition that \begin{equation}\label{eq:sum C_n=0} \sum_{n=1}^\infty C_n=0. \end{equation} Then \begin{align*} \frac{1}{N^{H}}\sum_{n=1}^{[Nt]} U(n) \overset{f.d.d.}{\longrightarrow} Z^\beta(t), \end{align*} where \[ 0<H=\alpha+\beta+k/2+1<\alpha+k/2+1<1, \] and $Z^\beta(t)$ is the fractionally-filtered generalized Hermite process defined in Theorem \ref{Thm:Frac Z beta hsssi}. It is defined using the same $g$ and $\beta$ as $U(n)$. If in addition either a) $H>1/2$, or b) $H<1/2$ and $\mathbb{E}|\epsilon_i|^p<\infty$ for some $p>1/H$, then the above $\overset{f.d.d.}{\longrightarrow}$ can be replaced with weak convergence in $D[0,1]$. \end{Thm} \begin{proof} Note that by Lemma \ref{Lem:U well defined}, we can change the order of summations to write: \begin{align*} Y_N(t):&=\frac{1}{N^H}\sum_{n=1}^{[Nt]}U(n)= \frac{1}{N^{H}}\sum_{n=1}^{[Nt]} \sum_{m<n}C_{n-m}\sum_{\mathbf{i}<m\mathbf{1}}' a(m\mathbf{1}-\mathbf{i})\prod_{j=1}^k \epsilon_{i_j} \\&=\sum_{\mathbf{i}\in \mathbb{Z}^k}' \frac{1}{N^H}\sum_{m\in \mathbb{Z}}a(m\mathbf{1}-\mathbf{i})\mathrm{1}_{\{m\mathbf{1}>\mathbf{i}\}} \sum_{n=1\vee (m+1)}^{[Nt]}C_{n-m} \prod_{j=1}^k \epsilon_{i_j}=Q_k(h^\beta_{t,N}), \end{align*} where \[ h_{t,N}^\beta(\mathbf{i})=\frac{1}{N^H}\sum_{m\in \mathbb{Z}}a(m\mathbf{1}-\mathbf{i})\mathrm{1}_{\{m\mathbf{1}>\mathbf{i}\}} \sum_{n=1\vee (m+1)}^{[Nt]}C_{n-m}. \] Making use of (\ref{eq:sum C_n=0}), and using $l$ to denote a generic function such that $l(i)\rightarrow 1$ as $i\rightarrow\infty$, we have if $m\ge 1$, \[ \sum_{n=1\vee (m+1)}^{[Nt]}C_{n-m}=\sum_{n=1}^{[Nt]-m}C_n=-\sum_{n=[Nt]-m+1}^\infty C_n= \beta^{-1}l([Nt]-m+1)([Nt]-m+1)_+^{\beta}; \] and if $m\le 0$, \begin{align*} \sum_{n=1\vee (m+1)}^{[Nt]}C_{n-m}= &\sum_{n=1}^{[Nt]}C_{n-m}=\sum_{n=-m+1}^{[Nt]-m}C_n= \sum_{n=[Nt]-m+1}^\infty C_n -\sum_{n=-m+1}^{\infty} C_n\\= &\beta^{-1}\left[l([Nt]-m+1)([Nt]-m+1)_+^\beta-l(-m)(-m)_+^\beta \right]. \end{align*} So by letting $\mathbf{i}$ correspond to $[N\mathbf{x}]+\mathbf{1}$ and $m$ to $[Ns]+1$ (omitting $L$ and $l$ for simplicity), \begin{align*} \tilde{h}^\beta_{t,N}(\mathbf{x})&=N^{k/2} h^\beta_{t,N}([N\mathbf{x}]+\mathbf{1}) \\&=\beta^{-1}\int_\mathbb{R}g\left(\frac{[Ns]\mathbf{1}-[N\mathbf{x}]}{N}\right)\mathrm{1}_{\{[Ns]\mathbf{1}>[N\mathbf{x}]\}}\left(\left(\frac{[Nt]-[Ns]}{N}\right)^\beta_+ - \left(\frac{-[Ns]-1}{N}\right)^\beta_+\right) ds.
\end{align*} Using similar arguments as in the proof of Theorem \ref{Thm:NCLT}, we can bound the absolute value of the integrand above by $Cg^*(s\mathbf{1}-\mathbf{x})\mathrm{1}_{\{s\mathbf{1}>\mathbf{x}\}}\left|(t-s)_+^\beta- (-s)_+^\beta\right|$ for some $C>0$, where $g^*$ is a generalized Hermite kernel from Definition \ref{Def:class limit} (for the last term, we use $[Ns]+1\ge Ns$). Note that $\beta<0$ in this case. By applying the Dominated Convergence Theorem, we get the desired f.d.d.\ convergence using Proposition \ref{Pro:Poly->Wiener}. Now we turn to the weak convergence. When $H>1/2$, the tightness is standard. To show tightness under condition $H<1/2$ and $\mathbb{E} |\epsilon_i|^p<\infty$, Proposition \ref{Pro:Hypercontract} and the above f.d.d.\ convergence imply that for some constants $c,C>0$ independent of $s,t$ and $N$, \[ \mathbb{E} |Y_N(t)-Y_N(s)|^{p'} \le c\mathbb{E} [|Y_N(t)-Y_N(s)|^2]^{p'/2} \le C |F_N(t)-F_N(s)|^{p'H}, \] where $F_N(t)=[Nt]/N$, and $p'<p$ is chosen so that $p'H>1$. Now by Lemma 4.4.1 and Theorem 4.4.1 of \citet{giraitis:koul:surgailis:2009:large}, we conclude that tightness holds. \end{proof} \subsection{Mixed multivariate limit theorem} In \citet{bai:taqqu:2013:multivariate}, a multivariate version of Theorem \ref{Thm:Polyform} is obtained, where both central and non-central convergence appear simultaneously. We will state here a similar theorem. Suppose that $\mathbf{X}(n)=\left(X_{1}(n),\ldots,X_{J}(n)\right)$ is a vector of discrete chaos processes defined on the same noise but with different coefficients, that is, \begin{equation}\label{eq:X_j(n)} X_{j}(n)=\sum'_{0<i_1,\ldots,i_{k_j}<\infty} a_j(i_1,\ldots,i_{k_j}) \epsilon_{n-i_1}\ldots\epsilon_{n-i_{k_j}}=\sum'_{\mathbf{i}>\mathbf{0}} a_j(\mathbf{i}) \prod_{p=1}^{k_j}\epsilon_{n-i_p}, \end{equation} where we assume $\{\epsilon_i\}$ is an i.i.d.\ random sequence with mean $0$ and variance $1$. For convenience we set $a_j(\mathbf{i})=0$ unless $\mathbf{i}>\mathbf{0}$, that is, $a_j(i_1,\ldots,i_{k_j})=a_j(\mathbf{i})=a_j(\mathbf{i})\mathrm{1}_{\{\mathbf{i}>\mathbf{0}\}}$, and $\tilde{a}_j(\cdot)$ denotes the symmetrization of $a_j(\cdot)$. \begin{Def}\label{Def:SRD LRD Multi} We say that the vector sequence of discrete chaos processes $\{\mathbf{X}(n)\}$ is \begin{itemize} \item SRD, if every component $X_j(n)$ is SRD in the sense of Definition \ref{Def:SRD LRD}, and in addition, for any $p\neq q\in \{1,\ldots,J\}$, \begin{equation}\label{eq:cross cov bound} \sum_{n=-\infty}^{\infty} \sum_{\mathbf{i}>\mathbf{0}}' |\tilde{a}_p(\mathbf{i})\tilde{a}_q(n\mathbf{1}+\mathbf{i})|<\infty; \end{equation} \item LRD, if every component $X_j(n)$ is LRD in the sense of Definition \ref{Def:SRD LRD}; \item fLRD, if every component $X_j(n)$ is a fractionally-filtered LRD discrete chaos process in the sense of Definition \ref{Def:fLRD}. Note: these components were denoted $U(n)$ in that definition. \end{itemize} \end{Def} \begin{Rem} If the vector sequence is SRD, then (\ref{eq:cross cov bound}) guarantees that the cross-covariance $\gamma_{p,q}(n):=\mathrm{Cov}(X_p(n),X_q(0))$ satisfies $\sum_n |\gamma_{p,q}(n)|<\infty$. As in Proposition 2.5 of \cite{bai:taqqu:2013:multivariate}, we have that as $N\rightarrow\infty$, \begin{equation}\label{eq:cross covariance limit} \mathrm{Cov}\left(\frac{1}{\sqrt{N}}\sum_{n=1}^{[Nt_1]}X_p(n),\frac{1}{\sqrt{N}}\sum_{n=1}^{[Nt_2]} X_q(n)\right)\rightarrow (t_1\wedge t_2) \sum_{n=-\infty}^\infty \gamma_{p,q}(n). \end{equation} Note that $\gamma_{p,q}(n)=0$ for all $n$ whenever the orders $k_p\neq k_q$.
\end{Rem} We will now consider a general case where SRD, LRD and fLRD components can all be present in $\mathbf{X}(n)$. We divide $\mathbf{X}(n)$ into four parts $$\mathbf{X}(n)=(\mathbf{X}_{S_1}(n),\mathbf{X}_{S_2}(n),\mathbf{X}_L(n),\mathbf{X}_F(n))$$ of dimension $J_{S_1}, J_{S_2}, J_{L}, J_F$ respectively, which are defined as follows: \begin{enumerate}[(i)] \item all the components of $\mathbf{X}_{S_1}(n)=(X_{1,S_1}(n),\ldots,X_{J_{S_1},S_1}(n))$ have order $k=1$, namely, are all linear processes; \item every component of $\mathbf{X}_{S_2}(n)=(X_{1,S_2}(n),\ldots,X_{J_{S_2},S_2}(n))$ has order $k\ge 2$, and the {\it combined vector} $$ \mathbf{X}_S(n)=(\mathbf{X}_{S_1}(n),\mathbf{X}_{S_2}(n))=(X_{1,S}(n),\ldots,X_{J_S,S}(n)),\quad J_S=J_{S_1}+J_{S_2}, $$ is SRD in the sense of Definition \ref{Def:SRD LRD Multi}; \item the vector $\mathbf{X}_L(n)=(X_{1,L}(n),\ldots,X_{J_L,L}(n))$ is LRD in the sense of Definition \ref{Def:SRD LRD Multi}, with corresponding generalized Hermite kernels $\mathbf{g}_L=(g_{1,L},\ldots,g_{J_L,L})$; \item the vector $\mathbf{X}_F(n)=(X_{1,F}(n),\ldots,X_{J_F,F}(n))$ is fLRD in the sense of Definition \ref{Def:SRD LRD Multi}, with corresponding generalized Hermite kernels $\mathbf{g}_F=(g_{1,F},\ldots,g_{J_F,F})$ and fractional exponent $\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{J_F})$. \end{enumerate} We now state the multivariate limit theorem. We use $Y_N$ (with subscript $S_1$, $S_2$, $L$ or $F$) to denote the corresponding normalized sum $Y_N(t):=N^{-H}\sum_{n=1}^{[Nt]}X(n)$, where $X(n)$ is a component of $\mathbf{X}(n)$ and $H$ is such that $\mathrm{Var} (Y_N(1))$ converges to some constant $c>0$ as $N\rightarrow\infty$. \begin{Thm}\label{Thm:multi limit} Following the notation defined above, one has \begin{align}\label{eq:multi conv} (\mathbf{Y}_{N,S_1}(t),\mathbf{Y}_{N,S_2}(t),\mathbf{Y}_{N,L}(t),\mathbf{Y}_{N,F}(t))\overset{f.d.d.}{\longrightarrow}(\mathbf{B}_1(t),\mathbf{B}_2(t),\mathbf{Z}(t),\mathbf{Z}^{\boldsymbol{\beta}}(t)), \end{align} where \begin{enumerate}[(i)] \item $\mathbf{B}_1(t)=\mathbf{W}(t):=(\sigma_{1}W(t),\ldots,\sigma_{J_{S_1}}W(t))$ is defined in terms of a single standard Brownian motion $W(t)$, with $$ \sigma_{p}=\sum_{i>0} a_{p,S_1}(i), \quad p=1,\ldots,J_{S_1}; $$ \item $\mathbf{B}_2(t)$ is a multivariate Brownian motion with the covariance given by (\ref{eq:cross covariance limit}); \item $\mathbf{Z}(t)$ is a multivariate generalized Hermite process defined as in (\ref{eq:Z(t)}) by the kernels $(g_{1,L},\ldots,g_{J_L,L})$ and using the $W(t)$ in Point $(i)$ as Brownian motion integrator. \item $\mathbf{Z}^{\boldsymbol{\beta}}(t)$ is a multivariate fractionally-filtered generalized Hermite process defined as in (\ref{eq:frac filt proc full}) by the kernels $(g_{1,F},\ldots,g_{J_F,F})$, fractional exponent $\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{J_F})$ and using the $W(t)$ in Point $(i)$ as Brownian motion integrator. \end{enumerate} Moreover, $\mathbf{B}_2(t)$ is always independent of $(\mathbf{B}_1(t),\mathbf{Z}(t),\mathbf{Z}^{\boldsymbol{\beta}}(t))$. In addition, $\overset{f.d.d.}{\longrightarrow}$ in (\ref{eq:multi conv}) can be replaced with weak convergence in $D[0,1]^J$, if every component of $\mathbf{X}_{S_1}$ and $\mathbf{X}_{S_2}$ satisfies the assumption in Theorem \ref{Thm:CLTWeak}, and every component of $\mathbf{X}_F$ satisfies the assumption given at the end of Theorem \ref{Thm:Frac NCLT beta<0}.
\end{Thm} The proof is similar to that of Theorem 3.5 of \citet{bai:taqqu:2013:multivariate}. We only provide some heuristics. The processes $\mathbf{B}_1(t)$, $\mathbf{Z}(t)$ and $\mathbf{Z}^{\boldsymbol{\beta}}(t)$ involve the same integrator $W(\cdot)$ because they are defined in terms of the same $\epsilon_i$'s. To understand the independence statement, note that the independence between $\mathbf{B}_2$ and $W$ stems from the uncorrelatedness between $X_{S_2}$ and $X_{S_1}$, since $X_{S_2}$ belongs to a discrete chaos of order $k\ge 2$, while $X_{S_1}$ belongs to a discrete chaos of order $k=1$. $\mathbf{B}_2$ is therefore independent of $\mathbf{B}_1$. $\mathbf{B}_2$ is also independent of $\mathbf{Z}$ and $\mathbf{Z}^{\boldsymbol{\beta}}$, because $\mathbf{Z}$ and $\mathbf{Z}^{\boldsymbol{\beta}}$ have $W$ as integrator. \begin{Rem}\label{Rem:indep} The pairwise dependence between components of $\mathbf{Z}$, of $\mathbf{Z}^{\boldsymbol{\beta}}$, and between cross components in Theorem \ref{Thm:multi limit} can be checked using the criterion due to \citet{ustunel:zakai:1989:independence}, that is, if $f\in L^2(\mathbb{R}^p)$ and $g\in L^2(\mathbb{R}^q)$, and both are symmetric, then the multiple Wiener-It\^o integrals $I_p(f)$ and $I_q(g)$ are independent, if and only if \[ f \otimes_1 g(x_1,\ldots,x_{p+q-2}):=\int_{\mathbb{R}}f(x_1,\ldots,x_{p-1},y)g(x_{p},\ldots,x_{p+q-2},y) dy =0 ~~a.e. \] For example, suppose that two generalized Hermite kernels $g_1$ and $g_2$ on $\mathbb{R}_+^{p}$ and $\mathbb{R}_+^{q}$ are symmetric; then the corresponding two generalized Hermite processes are independent if and only if \begin{equation}\label{eq:indep crit} \int_\mathbb{R}~ \int_{0}^t g_1(s-x_1,\ldots,s-x_{p-1},s-y)ds\int_0^t g_2(s-x_{p},\ldots,s-x_{p+q-2},s-y)ds~dy = 0 \qquad a.e., \end{equation} where we use the abbreviation $ g_j(\mathbf{x})=g_j(\mathbf{x})\mathrm{1}_{\{\mathbf{x}>\mathbf{0}\}}$, $j=1,2$. Obviously, if $g_1$ and $g_2$ are both positive, then the two processes are always dependent. This is the case, for example, for the symmetrized version of the kernels in (\ref{eq:nonsym Herm}). \end{Rem} \noindent\textbf{Acknowledgments.} This work was partially supported by the NSF grant DMS-1007616 and DMS-1309009 at Boston University. \bibliographystyle{plainnat}
\section{Introduction} Decomposing a tensor into its components, and determining the number of those components (i.e., the rank), is a multidimensional generalization of the singular value decomposition and the matrix rank, and a recurring task in all practical sciences, appearing many times under different names: first discovered by Hitchcock~\cite{Hitchcock1927} and then re-discovered under names such as PARAFAC~\cite{Harshman1970} or CANDECOMP~\cite{CarrollChang1970}, it has been applied in many fields such as chemometrics, psychometrics, and signal processing \cite{Bro1997,Parafac2000,NionSidi2009}. An extensive survey of many applications can be found in \cite{Sidi2004,DeLathauwer:2008}.\\ Recently, motivated by real world applications, orthogonality constraints on the decomposition have been studied in the literature, such as the orthogonal rank decomposition and the combinatorial orthogonal rank decomposition, which can be traced back to~\cite{Deni1989,Kolda01orthogonaltensor}, and the orthogonal decomposition in~\cite{Martin06ajacobi-type} and~\cite{Hsu2012}, the latter of which occurs for example in the identification of latent variable models from empirical moments, and in several other statistical estimation tasks; see~\cite{Ana2012} for a survey. The orthogonality constraints imposed in these two branches of the literature are not the same: \cite{Deni1989,Kolda01orthogonaltensor} impose summand-wise orthogonality, while in~\cite{Martin06ajacobi-type,Hsu2012,Ana2012}, factor-wise orthogonality can be deduced from the model constraints. In~\cite{Martin06ajacobi-type}, a Jacobi-like heuristic algorithm was described to obtain a close orthogonal decomposition via Jacobi angle optimization for general tensors; in~\cite{Ana2012}, the authors describe a second order fixed point method for obtaining the decomposition.\\ In~\cite{Ish2013ICML,Son2013ICML}, hierarchical tensor decomposition models are discussed in the context of latent tree graphical models, and algorithms for the identification of this decomposition are described. While this is not explicitly done in the language of orthogonal tensor decompositions, the idea of using flattenings is similar to the one presented here, and, in the specific context of tree models, corresponds to a specific instance of orthogonal tensor decomposition, as described in~\cite{Ana2012}.\\ In this paper, we study the orthogonal decomposition model as it occurs in~\cite{Hsu2012,Ana2012}, namely with factor-wise orthogonality constraints. We show that this kind of decomposition can be directly transformed into a set of singular value decompositions, both theoretically and practically. We give identifiability results for this kind of orthogonal decomposition, showing that it is unique\footnote{up to natural symmetries} in case of existence, and we provide algorithms to obtain the orthogonal decomposition by reducing it to a sequence of singular value decompositions. We apply these algorithms to a latent variable identification problem which was discussed in~\cite{Hsu2012,Ana2012}, reducing it to a series of eigenvalue problems. In particular, by performing the reduction to singular value decomposition, we show that all existing theory on the singular value decomposition, concerning theoretical issues as well as numerical and algorithmic ones, can be readily applied to the orthogonal decomposition problem. \section{Theoretical Background} \subsection{Tensors} \subsubsection{Definition of a Tensor} While tensors are common objects, their notation diverges throughout the literature.
For ease of reading, we provide the basic definitions. \begin{Def} A real tensor of size $(n_1\times n_2\times\dots\times n_d)$ and of degree $d$ is an element of the set $$\ensuremath{\mathbb{R}}^{n_1\times n_2\times \dots \times n_d}=\left\{(a_{i_1\dots i_d})_{\begin{subarray}{l} 1\le i_1\le n_1\\ \vdots \\ 1\le i_d\le n_d \end{subarray}}\right\}.$$ If $n_1 = n_2 = \dots = n_d,$ we also write $\ensuremath{\mathbb{R}}^{n^{(\times d)}}:= \ensuremath{\mathbb{R}}^{n_1\times n_2\times \dots \times n_d}.$ \end{Def} \subsubsection{Linear Transformation} Let us introduce a useful shorthand notation for linearly transforming tensors. \begin{Def} Let $A\in \ensuremath{\mathbb{R}}^{m\times n}$ be a matrix. For a tensor $T\in \ensuremath{\mathbb{R}}^{n^{(\times d)}}$, we denote by $A\circ T$ the application of $A$ to $T$ along all tensor dimensions, that is, the tensor $A\circ T\in \ensuremath{\mathbb{R}}^{m^{(\times d)}}$ defined as $$\left(A\circ T\right)_{i_1\dots i_d}=\sum_{j_1=1}^n\dots \sum_{j_d=1}^n A_{i_{1}j_{1}}\cdot\ldots\cdot A_{i_{d}j_{d}}\cdot T_{j_1\dots j_d}.$$ \end{Def} \begin{Rem} For $T\in \ensuremath{\mathbb{R}}^{n^{(\times d)}}$ and $A\in \ensuremath{\mathbb{R}}^{m\times n}, A'\in \ensuremath{\mathbb{R}}^{m'\times m}$, note that $$A'\circ (A\circ T) = (A'\cdot A) \circ T.$$ \end{Rem} \subsubsection{Flattening} A flattening of a tensor is the tensor obtained from regarding different indices as one index. \begin{Def} Denote by $[k]=\{1,2,\dots,k\}$. A surjective map $\sigma:[d]\rightarrow [\tilde{d}]$ is called a $d$-to-$\tilde{d}$ \emph{flattening map}. \end{Def} \begin{Def} Let $T\in \ensuremath{\mathbb{R}}^{n_1\times \dots \times n_d}$ be a tensor, and let $\sigma$ be a $d$-to-$\tilde{d}$ flattening map. Then, the $\sigma$-flattening of $T$ is the degree $\tilde{d}$ tensor $\sigma\dashv T\in \ensuremath{\mathbb{R}}^{\tilde{n}_1\times \dots \times \tilde{n}_{\tilde{d}}},$ with $\tilde{n}_k=\prod_{\ell\in\sigma^{-1}(k)}n_\ell,$ defined as $$(\sigma\dashv T)_{j_1\dots j_{\tilde{d}}} := T_{i_1\dots i_d}\quad,\mbox{where}\; j_k=(i_\ell\;:\;\ell\in \sigma^{-1}(k)).$$ Conversely, if $\tilde{T}=\sigma\dashv T$, then we write $T=\sigma\vdash\tilde{T}$ and call $T$ the \emph{unflattening} of $\tilde{T}$. \end{Def} Note that the indices of $\sigma\dashv T$ are, as defined, tuples of indices of $T$; however, this does not contradict the definition of tensor since $[n_1]\times[n_2]\times \dots \times[n_k]$ can be bijectively mapped onto $\left[\prod_{i=1}^k n_i\right].$ It is convenient to choose the lexicographical ordering for the bijection, but it is mathematically not necessary to fix any such bijection. For unflattening, if only $\tilde{T}$ and $\sigma$ are given, it is not clear what $\sigma\vdash\tilde{T}$ should be without further specification, since the same flattened tensor can arise from different tensors even if $\sigma$ is fixed. Therefore, we will use the notation only in contexts where a given flattening is being reversed, or partially reversed, which makes the unflattening well-defined. \begin{Ex} Let $T\in \ensuremath{\mathbb{R}}^{n_1\times n_2\times n_3}$ be a tensor, let $\sigma: 1\mapsto 1, 2\mapsto 2, 3\mapsto 2$. The $\sigma$-flattening of $T$ is a $(n_1\times n_2n_3)$-matrix $\tilde{T}:=\sigma\dashv T$. The columns of $\tilde{T}$ are all the $n_2n_3$ sub-$(n_1\times 1\times 1)$-tensors of $T$ where the second and third indices are fixed.
The columns of $\sigma\dashv T$ are indexed by the pairs $(k,\ell)$, or, alternatively, by bijection, by the lexicographical index number $(k-1)\cdot n_3 + \ell$. Taking any $(n'_1\times n_2n_3)$-submatrix of $\tilde{T}$, we can unflatten to obtain a $(n'_1\times n_2\times n_3)$-tensor $\sigma\vdash \tilde{T}$. \end{Ex} \subsubsection{Outer Product} Furthermore, we introduce notation for creating tensors of higher order out of tensors of lower order: \begin{Def} Let $v^{(1)}\in \ensuremath{\mathbb{R}}^{n_1},\dots, v^{(d)}\in \ensuremath{\mathbb{R}}^{n_d}$. The \emph{outer product} of the $v^{(k)}$ is the tensor $v^{(1)}\otimes \dots \otimes v^{(d)}\in\ensuremath{\mathbb{R}}^{n_1\times \dots \times n_d}$ defined by $$(v^{(1)}\otimes \dots \otimes v^{(d)})_{i_1\dots i_d} := \prod_{k=1}^d v^{(k)}_{i_k}.$$ In case that $v=v^{(1)}= \dots = v^{(d)}$, we also write $v^{\otimes d} := v^{(1)}\otimes \dots \otimes v^{(d)}.$\\ Similarly, if $A\in \ensuremath{\mathbb{R}}^{n_1\times \dots\times n_c}$ and $B\in \ensuremath{\mathbb{R}}^{n_{c+1}\times \dots\times n_d}$ are tensors, the outer product of $A$ and $B$ is the tensor $A\otimes B\in \ensuremath{\mathbb{R}}^{n_1\times \dots \times n_d}$ defined as $$(A\otimes B)_{i_1\dots i_d} := A_{i_1\dots i_c}\cdot B_{i_{c+1}\dots i_d}.$$ Outer products of several tensors $A_1\otimes \dots\otimes A_k$ are defined by induction on $k$, namely: $$A_1\otimes \dots\otimes A_k := (A_1\otimes \dots\otimes A_{k-1})\otimes A_k.$$ \end{Def} A useful calculation rule for linear transformation is the following: \begin{Lem} Let $A\in \ensuremath{\mathbb{R}}^{n^{(\times d_1)}}$ and $B\in \ensuremath{\mathbb{R}}^{n^{(\times d_2)}},$ and let $P\in\ensuremath{\mathbb{R}}^{m\times n}$. Then, $$P\circ (A\otimes B) = (P\circ A)\otimes (P\circ B).$$ Similarly, if $v\in \ensuremath{\mathbb{R}}^{n}$, then $P\circ \left(v^{\otimes d}\right) = \left(P\circ v\right)^{\otimes d}.$ \end{Lem} Outer products are also compatible with flattenings: \begin{Lem}\label{Lem:prodflat} Let $A\in\ensuremath{\mathbb{R}}^{n_1\times \dots\times n_c}$ and $B\in\ensuremath{\mathbb{R}}^{n_{c+1}\times \dots\times n_d}.$ Let $\tau$ be a $d$-to-$k$-flattening which does not merge indices of $A$ with indices of $B$, i.e., with $\tau([c])\cap\tau(\{c+1,\dots,d\})=\emptyset$. Let $\sigma_1$ be the restriction of $\tau$ to $[c]$, and let $\sigma_2$ be the $(d-c)$-to-$\tilde{k}$-flattening defined by $\sigma_2(i):=\tau(c+i)$. Then, $$\tau\dashv(A\otimes B) = (\sigma_1\dashv A)\otimes (\sigma_2\dashv B).$$ \end{Lem} \subsection{Orthogonality and Duality} We briefly review the notions of scalar product and some results, which can also be found in~\cite{Kolda01orthogonaltensor} in slightly different formulation and slightly less generality. \begin{Def} A \emph{scalar product} is defined on $\ensuremath{\mathbb{R}}^{n_1\times n_2\times \dots \times n_d}$ by \begin{align*} \langle .,.\rangle:& \ensuremath{\mathbb{R}}^{n_1\times n_2\times \dots \times n_d}\times \ensuremath{\mathbb{R}}^{n_1\times n_2\times \dots \times n_d}\longrightarrow\ensuremath{\mathbb{R}}\\ & (A,B)\mapsto \sum_{i_1=1}^{n_1}\dots \sum_{i_d=1}^{n_d} A_{i_1\dots i_d}\cdot B_{i_1\dots i_d} \end{align*} As usual, $A,B\in \ensuremath{\mathbb{R}}^{n_1\times \dots \times n_d}$ are called orthogonal to each other if $\langle A,B\rangle =0$, and $A$ is called normal if $\langle A,A\rangle = 1$. A set $A_1,\dots, A_r\in \ensuremath{\mathbb{R}}^{n_1\times \dots \times n_d}$ is called orthonormal if $\langle A_i,A_j\rangle =\delta_{ij}$, where $\delta_{ij}$ is the Kronecker-delta.
\end{Def} By identification of $\ensuremath{\mathbb{R}}^{n_1\times \dots \times n_d}$ with $\ensuremath{\mathbb{R}}^{N}$, where $N=\prod_{i=1}^d n_i$, the scalar product on tensors inherits all properties of the real scalar product. \begin{Rem}\label{Rem:sctr} It is seen by checking definitions that the scalar product on matrices is identical to the trace product, i.e., $\langle A,B\rangle =\operatorname{Tr} (A^\top B)$ for $A,B\in\ensuremath{\mathbb{R}}^{m\times n}$. \end{Rem} An important property of the scalar product is compatibility with flattenings: \begin{Lem}\label{Lem:orthflat} Let $T_1,T_2\in \ensuremath{\mathbb{R}}^{n_1\times n_2\times \dots \times n_d}$, let $\sigma$ be a $d$-to-$\tilde{d}$ flattening map. Then, $$\langle T_1,T_2\rangle = \langle \sigma\dashv T_1,\sigma\dashv T_2\rangle.$$ In particular, $T_1$ and $T_2$ are orthogonal to each other if and only if $\sigma\dashv T_1$ and $\sigma\dashv T_2$ are. \end{Lem} \begin{proof} A flattening is a bijection on the set of entries, therefore the result of the entry-wise scalar product is not changed by flattening. \end{proof} \begin{Prop}\label{Prop:prodorth} Let $A^{(j)}_1,A^{(j)}_2\in \ensuremath{\mathbb{R}}^{n^{(j)}_1\times \dots\times n^{(j)}_{c_j}}$, for $j=1,\dots, k$. Then, $$\left\langle A^{(1)}_1\otimes \dots \otimes A^{(k)}_1, A^{(1)}_2\otimes \dots \otimes A^{(k)}_2\right\rangle = \prod_{j=1}^k\left\langle A^{(j)}_1,A^{(j)}_2\right\rangle.$$ In particular, if there exists $j$ such that $A^{(j)}_1,A^{(j)}_2$ are orthogonal to each other, then the outer products $A^{(1)}_1\otimes \dots \otimes A^{(k)}_1$ and $A^{(1)}_2\otimes \dots \otimes A^{(k)}_2$ are orthogonal to each other. \end{Prop} \begin{proof} By performing induction on $k$, it suffices to prove the statement for $k=2$: Let $A_1,A_2\in \ensuremath{\mathbb{R}}^{n_1\times \dots\times n_c}$ and $B_1,B_2\in \ensuremath{\mathbb{R}}^{n_{c+1}\times \dots\times n_d}.$ Then, $$\langle A_1\otimes B_1, A_2\otimes B_2\rangle = \langle A_1,A_2\rangle\cdot \langle B_1,B_2\rangle.$$ We proceed to prove this statement. Let $\sigma_1$ be the $c$-to-$1$-flattening, let $\sigma_2$ be the $(d-c)$-to-$1$-flattening. Let $v_i=\sigma_1\dashv A_i$ and $w_i=\sigma_2\dashv B_i$ for $i=1,2$. By Lemma~\ref{Lem:orthflat}, it holds that $$\langle A_1,A_2\rangle = \langle v_1,v_2\rangle\quad\mbox{and}\quad \langle B_1,B_2\rangle = \langle w_1,w_2\rangle.$$ Let $\tau$ be the $d$-to-$2$-flattening defined by $\tau:\{1,\dots, c\}\mapsto \{1\}, \{c+1,\dots, d\}\mapsto \{2\}$. Let $C_i=\tau\dashv (A_i\otimes B_i)$. By Lemma~\ref{Lem:orthflat}, it holds that $$\langle A_1\otimes B_1, A_2\otimes B_2\rangle = \langle C_1,C_2\rangle.$$ By Lemma~\ref{Lem:prodflat}, it holds that $$\langle C_1,C_2\rangle = \langle v_1\otimes w_1, v_2\otimes w_2\rangle.$$ Using that the scalar product on matrices is the trace product (see Remark~\ref{Rem:sctr}), and identifying $v\otimes w$ with the matrix $vw^\top$, we obtain $$\langle v_1\otimes w_1, v_2\otimes w_2\rangle = \operatorname{Tr}(v_1w_1^\top w_2 v_2^\top).$$ The cyclic property of the trace product for matrices yields $$\operatorname{Tr}(v_1w_1^\top w_2 v_2^\top) = \operatorname{Tr}(w_1^\top w_2 v_2^\top v_1) = w_1^\top w_2v_2^\top v_1 = \langle v_1,v_2\rangle\cdot \langle w_1,w_2\rangle.$$ All equalities put together yield the claim. \end{proof} \begin{Cor} Let $\mu_1,\mu_2\in \ensuremath{\mathbb{R}}^n$, and $d\in\ensuremath{\mathbb{N}}$, such that $\langle \mu_1,\mu_2\rangle = 0$.
Then, $$\left\langle\mu_1^{\otimes d}, \mu_2^{\otimes d}\right\rangle = 0.$$ \end{Cor} \begin{Def}\label{Def:odec} Let $T\in \ensuremath{\mathbb{R}}^{n_1\times n_2\times \dots \times n_d}$, let $[d]=S_1\cup S_2\cup\dots \cup S_k$ be a partition. A decomposition $$T=\sum_{i=1}^r w_i\cdot A^{(1)}_i\otimes \dots \otimes A^{(k)}_i$$ with $w_i\in\ensuremath{\mathbb{R}}$, and $A^{(j)}_i\in \ensuremath{\mathbb{R}}^{\times_{\ell\in S_j} n_\ell}$, such that the set of $A^{(j)}_i$ with fixed $j$ is orthonormal, is called a rank-$r$ \emph{orthogonal atomic decomposition} of $T$, with signature $(S_1,\dots, S_k)$. If $k=d$ and $S_i=\{i\}$, then the decomposition is called an \emph{orthogonal CP-decomposition}. \end{Def} An orthogonal atomic decomposition need not exist in general. However, if it does exist, it is compatible with flattenings, as Proposition~\ref{Prop:decompflat} will show. We introduce notation for a more concise statement of the compatibility first: \begin{Def} Let $(S_1,\dots, S_k)$ be a partition of $[d]$. We say a $d$-to-$\tilde{d}$-flattening $\sigma$ is \emph{compatible} with the partition $(S_1,\dots, S_k)$, if $\{i,j\}\subseteq S_\ell$ for some $\ell$ implies $\sigma(i)=\sigma(j)$. We say that $\sigma$ is \emph{strictly compatible} with the partition $(S_1,\dots, S_k)$, if $\{i,j\}\subseteq S_\ell$ for some $\ell$ holds if and only if $\sigma(i)=\sigma(j)$. \end{Def} \begin{Prop}\label{Prop:decompflat} Let $T\in \ensuremath{\mathbb{R}}^{n_1\times n_2\times \dots \times n_d}$. Let $$T=\sum_{i=1}^r w_i\cdot A^{(1)}_i\otimes \dots \otimes A^{(k)}_i$$ be an orthogonal atomic decomposition with signature $(S_1,\dots, S_k),$ and let $\sigma$ be a $d$-to-$\tilde{d}$-flattening compatible with the signature. Then, $$\sigma\dashv T=\sum_{i=1}^r w_i\cdot B^{(1)}_i\otimes \dots \otimes B^{(\tilde{d})}_i,\quad\mbox{where}\quad B^{(\ell)}_i=\sigma\dashv \left(\bigotimes_{j\,:\,\sigma(S_j)=\{\ell\}}A^{(j)}_i \right),$$ is an orthogonal atomic decomposition of $\sigma\dashv T$. In particular, if $\sigma$ is strictly compatible with the signature, then the decomposition is also an orthogonal CP-decomposition. \end{Prop} \begin{proof} This is a direct consequence of Lemmas~\ref{Lem:prodflat} and~\ref{Lem:orthflat}, checking compatibility of scalar product and orthogonality with the flattening on each of the index sets $S_i$. \end{proof} \subsection{Identifiability of the Orthogonal Atomic Decomposition} The orthogonal decomposition, as given in Definition~\ref{Def:odec}, does not need to exist for a tensor, nor does it need to be unique. We will show that due to the compatibility with flattenings, if it exists, it is unique, if the rank is chosen minimal. The main ingredient, besides flattenings, is uniqueness of the singular value decomposition~\cite{You36}, a classical result, which we state in a convenient form: \begin{Thm} Let $A\in\ensuremath{\mathbb{R}}^{m\times n}$, let $r=\operatorname{rank} A$. Then, there is a singular value decomposition (= orthogonal CP-decomposition) $$A=\sum_{i=1}^r w_i\cdot u_i\cdot v_i^\top\quad\mbox{with}\;u_i\in\ensuremath{\mathbb{R}}^m, v_i\in \ensuremath{\mathbb{R}}^n, w_i\in \ensuremath{\mathbb{R}}$$ such that the $u_i$ are orthonormal, and the $v_i$ are orthonormal. In particular, there is no singular value decomposition of rank strictly smaller than $r$.
Moreover, the singular value decomposition of $A$ is unique, up to: \begin{description} \item[(a)] the sequence of summation, i.e., up to arbitrary permutation of the indices $i=1,\dots, r$ \item[(b)] the choice of sign of $w_i,u_i,v_i$, i.e., up to changing the sign in any two of $w_i,u_i,v_i$ for fixed $i$ \item[(c)] simultaneous orthogonal transformations of the spans of $u_i,u_j$ and of $v_i,v_j$ for indices with $|w_i|=|w_j|$ \end{description} Condition (c) includes (b) as a special case, and (c) can be removed as a condition if no two distinct $w_i,w_j$ have the same absolute value. \end{Thm} \begin{Thm}\label{Thm:uniq} Let $T\in \ensuremath{\mathbb{R}}^{n_1\times n_2\times \dots \times n_d}$, and assume that $T$ has an orthogonal atomic decomposition $$T=\sum_{i=1}^r w_i\cdot A^{(1)}_i\otimes \dots \otimes A^{(k)}_i$$ of signature $(S_1,\dots, S_k)$, such that $w_i\neq 0$ for all $i$. Then: \begin{description} \item[(i)] Denote $N_j=\prod_{i\in S_j} n_i$ for $j=1,\dots, k$. Then, $r\le N_j$ for all $j$. \item[(ii)] There is no orthogonal atomic decomposition of $T$ with signature $(S_1,\dots, S_k)$, and of rank strictly smaller than $r$. \item[(iii)] The orthogonal atomic decomposition of $T$ of rank $r$ is unique, up to: \begin{description} \item[(a)] the sequence of summation, i.e., up to arbitrary permutation of the indices $i=1,\dots, r$ \item[(b)] the choice of sign of $w_i,A^{(k)}_i$, i.e., up to changing the sign in any two of $w_i$ and the $A^{(k)}_i$ for fixed $i$ and arbitrary $k$ \item[(c)] transformations of factors $A^{(k)}_i,A^{(k)}_j,$ and their respective tensor products, such that $|w_i|=|w_j|$, which induce orthogonal transformations in all flattenings compatible with the signature $(S_1,\dots, S_k)$. \end{description} Condition (c) includes (b) as a special case, and (c) can be removed as a condition if no two distinct $w_i,w_j$ have the same absolute value. \end{description} \end{Thm} \begin{proof} Fix some arbitrary $j$. Consider the $d$-to-$2$-flattening $\sigma: S_j\mapsto \{1\}, S_i\mapsto \{2\}$ for $i\neq j$, note that $\sigma$ is compatible with the signature. Let $m=N_j, n=\prod_{i\neq j} N_i$, and $A=\sigma\dashv T$. Note that $A$ is a $(m\times n)$-matrix. Let $$T=\sum_{i=1}^r w_i \cdot A^{(1)}_i\otimes \dots \otimes A^{(k)}_i$$ be the orthogonal atomic decomposition of $T$, and let $u_i=\sigma\dashv A^{(j)}_i$, and $v_i=\sigma\dashv\bigotimes_{k\neq j} A^{(k)}_i$ for all $i$. Note that $u_i$ is an $m$-vector, and $v_i$ is an $n$-vector. By Proposition~\ref{Prop:decompflat}, $$A=\sum_{i=1}^r w_i\cdot u_i\cdot v_i^\top$$ is a singular value decomposition of $A$.\\ (i) In particular, the $u_i$ are a system of $r$ orthonormal vectors in $\ensuremath{\mathbb{R}}^m$. Therefore, $r\le m=N_j$. Since $j$ was arbitrary, statement (i) follows.\\ (ii) Since the $w_i$ are non-zero, it holds that $\operatorname{rank} A = r$. If there were an orthogonal atomic decomposition of $T$ with signature $(S_1,\dots, S_k)$ of rank strictly smaller than $r$, there would be a singular value decomposition of $A$ of rank strictly smaller than $r$, contradicting $\operatorname{rank} A = r$.\\ (iii) Observe that the flattening by $\sigma$ induces a bijection between the orthogonal atomic decompositions of $T$, of rank $r$, and the singular value decompositions of $A$, of rank $r$. The statement in (iii) then follows directly from the uniqueness of the singular value decomposition of $A$ asserted in the theorem above.
\end{proof} Again, we would like to stress that the present orthogonal decomposition model is different from the one in~\cite{Kolda01orthogonaltensor}: ours is factor-wise orthogonal between different summands, while the orthogonal rank decomposition in~\cite{Kolda01orthogonaltensor} is summand-wise orthogonal, and the combinatorial orthogonal rank decomposition enforces orthogonality of factors in the same summand. Therefore, Theorem~\ref{Thm:uniq} does not contradict Lemma~3.5 in~\cite{Kolda01orthogonaltensor}.\\ Another result which seems to be folklore, but is not available in the literature, is that it is a strong restriction on a tensor to assume that it has an orthogonal decomposition. Since it is almost implied by the identifiability Theorem~\ref{Thm:uniq}, we state a quantitative version of this: \begin{Prop}\label{Prop:notallorth} The set of tensors $T\in \ensuremath{\mathbb{R}}^{n_1\times n_2\times \dots \times n_d}$, with $d\ge 3$, and $n_j\ge 2$ for all $j$, for which $T$ has an orthogonal CP-decomposition, is a Lebesgue zero set. \end{Prop} \begin{proof} The CP-decomposition can be viewed as an algebraic map \begin{align*} \phi: \left(\ensuremath{\mathbb{R}}\times \ensuremath{\mathbb{R}}^{n_1}\times\dots\times\ensuremath{\mathbb{R}}^{n_d}\right)^r&\rightarrow \ensuremath{\mathbb{R}}^{n_1\times n_2\times \dots \times n_d}\\ (w_i,v^{(j)}_i) &\mapsto \sum_{i=1}^r w_i\cdot v^{(1)}_i\otimes \dots \otimes v^{(d)}_i. \end{align*} Since the left hand side is an irreducible variety, the image of the map $\phi$ also is. The orthogonal CP-decompositions form an algebraic subset of the left hand side. Therefore the statement follows once we show that the image of this subset is not all of $\ensuremath{\mathbb{R}}^{n_1\times n_2\times \dots \times n_d}$. This follows from a degree of freedom resp.~dimension count. One has \begin{align*} D_1&:=\dim\ensuremath{\mathbb{R}}^{n_1\times n_2\times \dots \times n_d} =\prod_{i=1}^d n_i,\quad\mbox{and }\\ D_2&:=\dim \left\{\mbox{parameters of orthogonal CP-decompositions}\right\} = r + \sum_{j=1}^d \left(rn_j-\frac{r(r+1)}{2}\right), \end{align*} since the factors with fixed $j$ form an orthonormal $r$-frame in $\ensuremath{\mathbb{R}}^{n_j}$, and the manifold of such frames has dimension $rn_j-r(r+1)/2$. Theorem~\ref{Thm:uniq}~(i) implies $r\le \min_j n_j$. An explicit computation shows that $D_1\gneq D_2$ for $d\ge 3$ and $n_j\ge 2$, which proves the statement.\\ The proof above can be rephrased in terms of the CP-rank (see~\cite{Cat2002} for an introduction): observing that the generic CP-rank of the tensors in question must be strictly larger than $\min (n_1,\dots, n_d)$, one again argues that the algebraic set of tensors with orthogonal CP-decompositions must be a proper subset of all tensors of that format, thus a Lebesgue zero set. \end{proof} Proposition~\ref{Prop:notallorth} can be extended to orthogonal atomic decompositions with signature $(S_1,\dots, S_k), k\ge 3$, by considering suitable unflattenings. \subsection{Tensors and Moments} We briefly show how tensors relate to moments of multivariate real random variables: \begin{Def} Let $X$ be a real $n$-dimensional random variable. Then, define: \begin{align*} \mbox{the characteristic function of}\; X\;\mbox{as}\quad &\quad\quad \varphi_X(\tau) := \ensuremath{\mathbb{E}} \left[ \exp \left(i \tau X\right) \right], \\ \mbox{the cumulant generating function of}\; X\;\mbox{as}\quad &\quad\quad \chi_X(\tau) := \log\ensuremath{\mathbb{E}} \left[ \exp \left(i \tau X\right) \right], \end{align*} where $\tau\in \ensuremath{\mathbb{R}}^{1\times n}$ is a formal vector of variables.
The $d$-th \emph{moment} (or moment tensor) $\ensuremath{\mathbf{M}}_d(X)\in \ensuremath{\mathbb{R}}^{n^{(\times d)}}$ of $X$, and the $d$-th \emph{cumulant} (or cumulant tensor) $\ensuremath{\kappa}_d(X)\in \ensuremath{\mathbb{R}}^{n^{(\times d)}}$ of $X$ are defined\footnote{in case of convergence} as the coefficients in the multivariate Taylor expansions \begin{align*} \varphi_X(\tau) & = 1+\sum_{d=1}^\infty \left(i \tau\right) \circ \frac{\ensuremath{\mathbf{M}}_d(X)}{d!},\\ \chi_X(\tau) & = \sum_{d=1}^\infty \left(i \tau\right) \circ \frac{\ensuremath{\kappa}_d(X)}{d!}. \end{align*} \end{Def} In the following, we will always assume that the moments and cumulants in question exist. The moments and cumulants of a linearly transformed random variable are the multilinearly transformed moments. \begin{Prop}\label{Prop:lintrans} Let $X$ be a real $n$-dimensional random variable and let $A \in \ensuremath{\mathbb{R}}^{m \times n}.$ Then, \begin{align*} \ensuremath{\mathbf{M}}_d(A\cdot X) &= A \circ \ensuremath{\mathbf{M}}_d(X),\\ \ensuremath{\kappa}_d(A\cdot X) &= A \circ \ensuremath{\kappa}_d(X). \end{align*} \end{Prop} \begin{proof} We prove the statement for moments; the proof for cumulants is completely analogous. For the characteristic functions $\varphi_X$ of $X$ and $\varphi_{A\cdot X}$ of $A\cdot X$, it holds that \begin{align*} \varphi_{A\cdot X}(\tau) & = \ensuremath{\mathbb{E}} \left[ \exp \left(i \tau\cdot A\cdot X\right) \right] \\ & = \ensuremath{\mathbb{E}} \left[ \exp \left(i \left(\tau\cdot A \right) \cdot X\right) \right]\\ & = 1+\sum_{d=1}^\infty \left(i \tau\right)\circ \left(A \circ \frac{\ensuremath{\mathbf{M}}_d(X)}{d!}\right). \end{align*} The last equality follows from the definition of $\varphi_X(\tau)$, applied to the argument $\tau\cdot A$, together with the composition rule $(i\tau A)\circ T = (i\tau)\circ (A\circ T)$. But by definition, it also holds that \begin{align*} \varphi_{A\cdot X}(\tau) & = 1+\sum_{d=1}^\infty \left(i \tau\right) \circ \frac{\ensuremath{\mathbf{M}}_d(A\cdot X)}{d!}, \end{align*} therefore the statement follows from comparing coefficient tensors. \end{proof} \section{Relation to Mixture Models} \label{sec:estimation} \subsection{The Estimation Problem} Throughout the paper, we will consider the following independent rank $1$ mixture model:\\ {\bf Generative Model:} $X_1,\dots, X_r$ are independent, $\ensuremath{\mathbb{R}}^n$-valued random variables, with $r\le n$, and probability/mass density functions $X_i\sim p_i$. Let $w_1,\dots, w_r\in\ensuremath{\mathbb{R}}$ be arbitrary such that $\sum_{i=1}^r w_i = 1$, and let $Y\sim \sum_{i=1}^r w_i p_i$ be the corresponding mixture of the $X_i$. Assume that there are $\mu_1,\dots, \mu_r\in\ensuremath{\mathbb{R}}^n$ with $\|\mu_i\|_2=1$, and random variables $Z_i\in\ensuremath{\mathbb{R}}$, such that $X_i=\mu_i\cdot Z_i$. Assume that the $\mu_i$ are linearly independent, and $\ensuremath{\mathbf{M}}_d(Z_i)=1$ for $d=2,\dots, m$.\\ {\bf Estimation Task:} Given $\ensuremath{\mathbf{M}}_2 (Y),\ensuremath{\mathbf{M}}_3 (Y),\dots,\ensuremath{\mathbf{M}}_m (Y), m\ge 3$, or estimators thereof, determine/estimate $\mu_i$ and $w_i$ for $i=1,\dots, r$.\\ While the above scenario seems very restrictive, several important problems can be reduced to this setting, see for example~\cite{Hsu2012}, or chapter 3 of~\cite{Ana2012}. We refer the interested reader to the exposition there. \subsection{Algebraic Formulation via Moments} The estimation problem presented above can be reformulated as a purely algebraic problem, see~\cite{Ana2012}.
Namely, the $\ensuremath{\mathbf{M}}_d$ are explicitly calculable in terms of the $\mu_i$ and $w_i$. Then, Proposition~\ref{Prop:lintrans} implies that $\ensuremath{\mathbf{M}}_d(X_i)= \mu_i^{\otimes d}$ for all $d$, therefore $\ensuremath{\mathbf{M}}_d(Y)=\sum_{i=1}^r w_i\cdot \mu_i^{\otimes d}$ for all $d$, thus yielding the following algebraic version of the estimation problem.\\ {\bf Algebraic Problem:} Let $r\le n$, let $\mu_1,\dots, \mu_r\in\ensuremath{\mathbb{R}}^n$ be linearly independent and $w_1,\dots, w_r\in\ensuremath{\mathbb{R}}$ arbitrary such that $\sum_{i=1}^r w_i = 1$. Given (exact or noisy estimators for) $$\ensuremath{\mathbf{M}}_d = \sum_{i=1}^r w_i\cdot \mu_i^{\otimes d}\quad\mbox{for}\;d=2,\dots, m,\;\mbox{with}\;m\ge 3,$$ determine the $\mu_i$ and $w_i$.\\ \section{Algorithms} \subsection{Orthogonal Decomposition of Tensors} A special case of orthogonal decomposition is the singular value decomposition (SVD). There is a huge number of well-studied methods for obtaining the singular value decomposition, which we will not discuss. However, we will make extensive use of the SVD algorithm, described in Algorithm~\ref{Alg:SVD}, as a black box. \begin{algorithm}[ht] \caption{\label{Alg:SVD} \texttt{SVD}. Singular Value Decomposition of Matrices.\newline \textit{Input:} A matrix $A\in\ensuremath{\mathbb{R}}^{m\times n}$. \textit{Output:} The singular value decomposition $A= U\cdot \Sigma\cdot V^\top$, with $U\in\ensuremath{\mathbb{R}}^{m\times r}, V\in\ensuremath{\mathbb{R}}^{n\times r}$ orthogonal, $\Sigma\in\ensuremath{\mathbb{R}}^{r\times r}$ diagonal, and the rank $r=\operatorname{rank} A$} \end{algorithm} First, for completeness, we treat the trivial case in Algorithm~\ref{Alg:orthdecomp1}. \begin{algorithm}[ht] \caption{\label{Alg:orthdecomp1} \texttt{OTD1}. Orthogonal Tensor Decomposition in one factor.\newline \textit{Input:} A tensor $T\in\ensuremath{\mathbb{R}}^{n_1\times\dots\times n_d}$, a signature $(S_1)$. \textit{Output:} The orthogonal atomic decomposition $T=\sum_{i=1}^r w_i\cdot A_i$.} \begin{algorithmic}[1] \State Return rank $r=1,$ coefficients $w_1=\|T\|,$ factors $A_1=\|T\|^{-1}\cdot T$. \end{algorithmic} \end{algorithm} Now we explicitly describe how to compute the orthogonal decomposition if each summand has two tensor factors. Algorithm~\ref{Alg:orthdecomp2} computes the decomposition by a proper reformatting of the entries, computing the singular value decomposition, then reformatting again. \begin{algorithm}[ht] \caption{\label{Alg:orthdecomp2} \texttt{OTD2}. Orthogonal Tensor Decomposition in two factors.\newline \textit{Input:} A tensor $T\in\ensuremath{\mathbb{R}}^{n_1\times\dots\times n_d}$, a signature $(S_1, S_2)$. \textit{Output:} The orthogonal atomic decomposition $T=\sum_{i=1}^r w_i\cdot A_i\otimes B_i$ (assumed to exist), including the rank $r$} \begin{algorithmic}[1] \State Define $\sigma: [d]\rightarrow [2], S_i\mapsto \{i\}.$ \State Set $A\leftarrow (\sigma\dashv T)$. Note that $A\in\ensuremath{\mathbb{R}}^{m\times n}$, with $m=\prod_{i\in S_1}n_i, n=\prod_{i\in S_2}n_i.$ \State Compute the \texttt{SVD} of $A=U\cdot \Sigma\cdot V^\top$, see Algorithm~\ref{Alg:SVD}. \State Return rank $r=\operatorname{rank} A$. \State Return coefficients $w_i=\Sigma_{ii}$ for $i=1,\dots, r$. \State For all $i$, let $U_i$ be the $i$-th column of $U$, let $V_i$ be the $i$-th column of $V$. \State Return factors $A_i=\sigma\vdash U_i, B_i =\sigma\vdash V_i$ for $i=1,\dots, r$.
\end{algorithmic} \end{algorithm} The algorithm for the general case, Algorithm~\ref{Alg:orthdecomp}, likewise consists of repeated applications of reindexing and singular value decomposition. Variants of the singular value decomposition exist with adjustable noise tolerance or singular value thresholding, and they can be employed to obtain thresholded and numerically stable variants of Algorithm~\ref{Alg:orthdecomp}. Furthermore, step~\ref{Alg:orthdecomp-step1} allows for an arbitrary choice of $k$-to-$2$-flattening in each recursion. Since, in the presence of noise, the results might differ when taking a different sequence of flattenings, the numerical stability can be improved by clustering the results of all possible choices, then averaging. \begin{algorithm}[ht] \caption{\label{Alg:orthdecomp} \texttt{OTD}. Orthogonal Tensor Decomposition.\newline \textit{Input:} A tensor $T\in\ensuremath{\mathbb{R}}^{n_1\times\dots\times n_d}$, a signature $(S_1, \dots, S_k)$. \textit{Output:} The orthogonal atomic decomposition $T=\sum_{i=1}^r w_i\cdot A^{(1)}_i\otimes \dots \otimes A^{(k)}_i$ (assumed to exist), including the rank $r$} \begin{algorithmic}[1] \State \label{Alg:orthdecomp-step1} Choose any $k$-to-$2$-flattening map $\tau$. \State Set $\tilde{S}_j\leftarrow \cup_{i\in\tau^{-1}(j)}S_i$ for $j=1,2$. \State Set $\tilde{T}\leftarrow \tau\dashv T$. \State Use \texttt{OTD2}, Algorithm~\ref{Alg:orthdecomp2}, to compute the orthogonal atomic decomposition $\tilde{T}=\sum_{i=1}^r w_i\cdot A_i\otimes B_i$ with signature $(\tilde{S}_1,\tilde{S}_2)$. \State Return the $w_i$ as coefficients and $r$ as the rank for the decomposition of $T$. \State \label{Alg:orthdecomp-step6} For $i=1,\dots, r$, use the suitable one of \texttt{OTD1},\texttt{OTD2},\texttt{OTD}, i.e., Algorithm~\ref{Alg:orthdecomp1},\ref{Alg:orthdecomp2}, or~\ref{Alg:orthdecomp}, to compute the orthogonal atomic decomposition $(\tau\vdash A_i)=\sum_{i=1}^1 1\cdot\bigotimes_{j:\tau(j)=1} A^{(j)}_i$, noting that the rank is one, and using the signature $(S_j\;:\;\tau(j)=1).$ \State \label{Alg:orthdecomp-step7} For $i=1,\dots, r$, use the suitable one of \texttt{OTD1},\texttt{OTD2},\texttt{OTD}, i.e., Algorithm~\ref{Alg:orthdecomp1},\ref{Alg:orthdecomp2}, or~\ref{Alg:orthdecomp}, to compute the orthogonal atomic decomposition $(\tau\vdash B_i)=\sum_{i=1}^1 1\cdot\bigotimes_{j:\tau(j)=2} A^{(j)}_i$, noting that the rank is one, and using the signature $(S_j\;:\;\tau(j)=2).$ \State Return the $A^{(j)}_i$ as factors for $T$. \end{algorithmic} \end{algorithm} Termination of Algorithm~\ref{Alg:orthdecomp} is implied by the observation that in each recursion, the partition of $[d]$ is made strictly finer. Since $[d]$ has finite cardinality, there is only a finite number of recursions. The fact that the decompositions in steps~\ref{Alg:orthdecomp-step6} and~\ref{Alg:orthdecomp-step7} have rank one, and coefficients $1$, follows from the uniqueness of the orthogonal decomposition guaranteed in Theorem~\ref{Thm:uniq}. Correctness of Algorithm~\ref{Alg:orthdecomp} follows from repeated application of Proposition~\ref{Prop:decompflat}, and the uniqueness of the singular value decomposition.
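To illustrate how little machinery is needed, the following Python/NumPy sketch implements \texttt{OTD2}, Algorithm~\ref{Alg:orthdecomp2}, for a tensor assumed to possess an orthogonal atomic decomposition with signature $(S_1,S_2)$; the tolerance parameter and the synthetic usage example are our own illustrative choices.
\begin{verbatim}
import numpy as np

def otd2(T, S1, S2, tol=1e-10):
    """Orthogonal atomic decomposition of T with signature (S1, S2):
    flatten along the two index groups, compute one SVD, unflatten.
    Assumes such a decomposition exists."""
    dims = T.shape
    A = T.transpose(list(S1) + list(S2)).reshape(
        int(np.prod([dims[a] for a in S1])),
        int(np.prod([dims[a] for a in S2])))
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > tol))                  # numerical rank
    w = s[:r]
    A_fac = [U[:, i].reshape([dims[a] for a in S1]) for i in range(r)]
    B_fac = [Vt[i].reshape([dims[a] for a in S2]) for i in range(r)]
    return w, A_fac, B_fac

# Usage on a synthetic orthogonally decomposable (4 x 3 x 3)-tensor:
rng = np.random.default_rng(1)
u = np.linalg.qr(rng.standard_normal((4, 2)))[0]  # orthonormal columns
v = np.linalg.qr(rng.standard_normal((9, 2)))[0]
ws = [3.0, 1.0]
T = sum(ws[i] * np.outer(u[:, i], v[:, i]) for i in range(2)).reshape(4, 3, 3)
w, A_fac, B_fac = otd2(T, S1=[0], S2=[1, 2])      # recovers w = [3, 1]
\end{verbatim}
The recursive Algorithm~\ref{Alg:orthdecomp} then amounts to calling such a routine repeatedly on the unflattened factors.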
\subsection{An Estimator for the Mixture Model} For illustrative purposes, we write out Algorithm~\ref{Alg:orthdecomp} for the problem introduced in Section~\ref{sec:estimation}, which has also been studied extensively in~\cite{Ana2012}: {\bf Example:} Let $r\le n$, let $\mu_1,\dots, \mu_r\in\ensuremath{\mathbb{R}}^n$ be linearly independent and $w_1,\dots, w_r\in\ensuremath{\mathbb{R}}$ arbitrary such that $\sum_{i=1}^r w_i = 1$. Given (exact or noisy estimators for) $$\ensuremath{\mathbf{M}}_d = \sum_{i=1}^r w_i\cdot \mu_i^{\otimes d}\quad\mbox{for}\;d=2,3,$$ determine the $\mu_i$ and $w_i$.\\ Algorithm~\ref{Alg:degree3} solves the problem by reducing it to a sequence of singular value decompositions. \begin{algorithm}[ht] \caption{\label{Alg:degree3} Model identification.\newline \textit{Input:} $\ensuremath{\mathbf{M}}_2,\ensuremath{\mathbf{M}}_3$ \textit{Output:} $w_1,\dots, w_r, \mu_1,\dots,\mu_r$.} \begin{algorithmic}[1] \State Set $r\leftarrow \operatorname{rank}(\ensuremath{\mathbf{M}}_2)$. \State Compute the SVD\footnote{Note: since $\ensuremath{\mathbf{M}}_2$ is symmetric, the SVD also is.} $\ensuremath{\mathbf{M}}_2=U\cdot \Sigma\cdot U^\top$. \State Set $W\leftarrow U\cdot \Sigma^{-\frac{1}{2}}$. \State \label{Alg:degree3-step4}Set $T:=W^\top\circ \ensuremath{\mathbf{M}}_3$. \State Define the flattening map $\sigma: 1\mapsto 1,2\mapsto 2, 3\mapsto 2$. \State Set $\tilde{T}:=\sigma\dashv T.$ \State Compute the rank $r$ SVD $\tilde{T}=\sum_{i=1}^r \tilde{w}_i\cdot \tilde{\mu}^{(1)}_i\cdot v_i^\top$. \State \label{Alg:degree3-step8} Return $w_i= \tilde{w}_i^{-2}$ for $i=1,\dots, r$. \State Set $\tilde{A}_i=(\sigma\vdash v_i)$ for $i=1,\dots, r$. \State \label{Alg:degree3-step10} Compute the rank $1$ SVD $\tilde{A}_i= \tilde{\mu}^{(2)}_i\cdot \left(\tilde{\mu}^{(3)}_i\right)^\top$. \State \label{Alg:degree3-step11} Set $\tilde{\mu}_i\leftarrow \frac{1}{3}\left(\tilde{\mu}^{(1)}_i+\tilde{\mu}^{(2)}_i+\tilde{\mu}^{(3)}_i\right),$ for $i=1,\dots, r$. \State \label{Alg:degree3-step12} Compute the pseudo-inverse $B$ of $W^\top$. Return $\mu_i = B\cdot \tilde{\mu}_i\cdot \tilde{w}_i$, for $i=1,\dots, r.$ \end{algorithmic} \end{algorithm} Theorem 4.3 in~\cite{Ana2012} implies that the tensor $T$ obtained in step~\ref{Alg:degree3-step4} has an orthogonal CP-decomposition, and it implies the correctness of steps~\ref{Alg:degree3-step8} and~\ref{Alg:degree3-step12}. The fact that $\tilde{A}_i$ in step~\ref{Alg:degree3-step10} has rank one, and that the coefficient is $1$, follows from the uniqueness of the decomposition guaranteed in Theorem~\ref{Thm:uniq}. Note that the explicit presentation of the algorithm could be substantially abbreviated by applying \texttt{OTD} directly to the tensor $T$ from step~\ref{Alg:degree3-step4}, with signature $(\{1\},\{2,3\})$, and then performing the analogues of steps~\ref{Alg:degree3-step8} and~\ref{Alg:degree3-step12}. Furthermore, the accuracy of the estimator in step~\ref{Alg:degree3-step11} can be improved by repeating the procedure for the three possible signatures $(\{1\},\{2,3\}), (\{2\},\{1,3\}),$ and $(\{3\},\{1,2\})$, then averaging, or weighted averaging, over the nine estimates for each $\tilde{\mu}_i$, making use of the symmetry of the problem.\\ Also, similar to Algorithm~\ref{Alg:orthdecomp}, the presented Algorithm~\ref{Alg:degree3}, while already numerically stable, can be modified to cope better with noise by, e.g., introducing thresholding to the singular value decomposition and rank computations.
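A compact Python/NumPy implementation sketch of Algorithm~\ref{Alg:degree3} on exact moments reads as follows; it reads the $W^\top$ of step~\ref{Alg:degree3-step12} as stated, resolves the sign symmetries of Theorem~\ref{Thm:uniq}~(iii)(b) using the symmetry of $T$, and uses a tolerance parameter and usage example that are illustrative assumptions of ours.
\begin{verbatim}
import numpy as np

def identify(M2, M3, tol=1e-10):
    """Model identification from exact moments
    M2 = sum_i w_i mu_i mu_i^T,  M3 = sum_i w_i mu_i^{(x)3}."""
    U, sig, _ = np.linalg.svd(M2)
    r = int(np.sum(sig > tol))
    U, sig = U[:, :r], sig[:r]
    W = U / np.sqrt(sig)                              # W = U Sigma^{-1/2}
    T = np.einsum('ia,jb,kc,abc->ijk', W.T, W.T, W.T, M3)   # W^T o M3
    F, wt, Vt = np.linalg.svd(T.reshape(r, r * r), full_matrices=False)
    w = wt ** (-2.0)                                  # step 8: w_i = wt_i^{-2}
    mu_t = np.empty((r, r))
    for i in range(r):
        Ai = Vt[i].reshape(r, r)                      # unflattened v_i
        s_i = np.sign(np.trace(Ai))   # fix joint SVD sign using symmetry
        Ai, F[:, i] = s_i * Ai, s_i * F[:, i]
        u2, _, v2t = np.linalg.svd(Ai)
        mu2, mu3 = u2[:, 0], v2t[0]
        # rank-one SVD factors carry a joint sign ambiguity; align them
        mu2 = mu2 * np.sign(mu2 @ F[:, i])
        mu3 = mu3 * np.sign(mu3 @ F[:, i])
        mu_t[:, i] = (F[:, i] + mu2 + mu3) / 3.0      # step 11
    B = np.linalg.pinv(W.T)                           # step 12: B = (W^T)^+
    return w, B @ mu_t * wt                           # mu_i = B mu~_i wt_i

# Usage on exact moments of a hypothetical 3-component model:
rng = np.random.default_rng(2)
mu0, w0 = rng.standard_normal((5, 3)), np.array([0.5, 0.3, 0.2])
M2 = np.einsum('r,ir,jr->ij', w0, mu0, mu0)
M3 = np.einsum('r,ir,jr,kr->ijk', w0, mu0, mu0, mu0)
w_hat, mu_hat = identify(M2, M3)   # recovers (w0, mu0) up to permutation
\end{verbatim}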
The numerical stability with respect to noise is governed by the numerical stability of the SVDs performed, and by the pseudo-inversion of $W^\top$ in step~\ref{Alg:degree3-step12}.\\ Algorithm~\ref{Alg:degree3} is also related to Algorithm~1 proposed in~\cite{Ana2012NIPS}. Namely, $\mbox{Triples}(\eta)$, as defined in Section~4.1 of that paper, is a degree-$2$ projection of the tensor $T$, and can therefore also be understood as a random projection of the flattening $\sigma\dashv T$.\\ Furthermore, an estimator for the hierarchical models described in~\cite{Ish2013ICML,Son2013ICML} can be constructed in a similar way. \section{Conclusion} We have demonstrated that computing the orthogonal decomposition of an arbitrary degree tensor, symmetric or not, can be reduced to a series of singular value decompositions, and we have described efficient algorithms to do so. This makes orthogonal tensor decomposition approachable by the wealth of theoretical results and existing methods for eigenvalue problems and singular value decomposition. Moreover, we have exemplified our method in the case of identifying components in a low-rank mixture model. \section*{Acknowledgments} I thank Arthur Gretton, Zolt\'an Szab\'o, and Andreas Ziehe for interesting discussions. I thank the Mathematisches Forschungsinstitut Oberwolfach for support. \bibliographystyle{plainnat}
\section{Introduction} Understanding the properties of complicated wave scattering systems \cite{Newton1966B} is a common challenge in many engineering and physics fields, such as quantum chaotic systems \cite{Stockmann1999B, Haake2000B}, quantum dots and mesoscopic systems \cite{Altshuler1991B, Brouwer1997, Alhassid2000, Mello2004B}, acoustic waves \cite{Pagneux2001}, and microwave cavities \cite{Doron1990, Kuhl2005, Hemmady2005a, Hemmady2005b}. Due to the complexity of wave propagation and scattering in many of these systems, numerically solving the wave equations with high resolution is difficult or impractical. This is particularly true when the wavelength is short compared to the characteristic size of the scattering region (the situation of interest in this paper). In addition, in this case, scattering properties are extremely sensitive to small changes in system parameters, which may not be precisely known. Thus, a statistical approach has become a popular alternative for describing the wave properties \cite{Holland1999B}. Researchers have developed statistical models based on random matrix theory (RMT), which successfully predict certain universal statistical properties of complicated wave scattering systems \cite{Bohigas1984, Mehta1991B, Akemann2011B}. In order to apply RMT to practical wave systems, one usually needs to account for nonuniversal system-specific features, which are not included in RMT. For example, considering microwave signals entering an enclosure through localized ports and propagating inside, the port coupling between the enclosure and the outside world is one system-specific feature \cite{Zheng2006a,Zheng2006b}. The short ray trajectories between ports due to scattering from fixed walls and/or objects within the enclosure are also nonuniversal system-specific features \cite{Hart2009}. The random coupling model (RCM) is a well-developed model that combines the universal predictions of RMT and the nonuniversal features of a practical system through a simple additive formula in terms of impedance \cite{Zheng2006a,Zheng2006b,Hart2009}. This model has been experimentally verified in microwave cavities, and it offers a complete statistical model for the impedance matrices, the scattering matrices \cite{Hemmady2005a, Hemmady2005b, Yeh2010a, Yeh2010b}, the admittance matrices \cite{Hemmady2006a}, the conductances \cite{Hemmady2006c}, and the fading statistics \cite{Yeh2012a,Yeh2012b} of practical systems. The statistical distributions of the universal predictions of RMT and the practical distributions which include nonuniversal features are distinctly different for most wave scattering properties. However, the impedance variance ratio (defined below) is a quantity that is predicted to be independent of the nonuniversal features of the wave system, and it is expected to be a universal function of the loss of the system \cite{Zheng2006c}. In this paper, we use ``universality'' to mean that the impedance variance ratio is independent of the system-specific features, including the port coupling of the system and the short ray trajectories between ports. Impedance is a meaningful concept in electromagnetism, and it can be extended to all wave scattering systems.
In a linear electromagnetic wave system with $N$ ports, the $N\times N$ impedance matrix $\textbf{Z}$ is the linear relationship between the complex phasor voltage vector $\widehat{\textbf{\textit{V}}}$ of the $N$ port voltages and the complex phasor current vector $\widehat{\textbf{\textit{I}}}$ of the $N$ port currents, via the phasor generalization of Ohm's law as $ \widehat{\textit{\textbf{V}}}=\textbf{Z}\widehat{\textit{\textbf{I}}}$ \cite{Pozar1990B}. A quantum-mechanical quantity corresponding to the impedance is the so-called reaction matrix, which is often denoted in the literature as $\textbf{K}$ and is related to $\textbf{Z}$ by $\textbf{K}=-i\textbf{Z}$ \cite{Alhassid2000, Verbaarschot1985, Lewenkopf1991, Beck2003, Fyodorov2004, Fyodorov2005, Savin2005}. The impedance matrix can also be related to the scattering matrix $\textbf{S}$ via the relationship \cite{Zheng2006a,Zheng2006b}
\begin{equation}\label{eq:StoZ} \textbf{Z} = \textbf{Z}_{0}^{1/2}\left(\textbf{1}+\textbf{S}\right)\left(\textbf{1}-\textbf{S}\right)^{-1}\textbf{Z}_{0}^{1/2}, \end{equation}
where $\textbf{Z}_{0}$ is an $N\times N$ diagonal matrix whose diagonal element $Z_{0,nn}$ is the characteristic impedance of the $n^{th}$ scattering channel mode, and $\textbf{1}$ is the identity matrix. The scattering matrix $\textbf{S}$ specifies the linear relationship between the incoming wave vector $\widehat{\textit{\textbf{a}}}$ and the outgoing wave vector $\widehat{\textit{\textbf{b}}}$, as $\widehat{\textit{\textbf{b}}}=\textbf{S}\widehat{\textit{\textbf{a}}}$. The $n^{th}$ elements of the incoming and outgoing power waves are $a_{n}=(V_{n}+Z_{0,nn}I_{n})/\sqrt{Z_{0,nn}}$ and $b_{n}=(V_{n}-Z_{0,nn}I_{n})/\sqrt{Z_{0,nn}}$, where $V_{n}$ and $I_{n}$ are the voltage and current at the $n^{th}$ port, respectively \cite{Pozar1990B}, and the incident and reflected power fluxes in channel $n$ are $|a_{n}|^{2}$ and $|b_{n}|^{2}$. For complicated wave scattering systems, the impedance matrices and the scattering matrices are sensitive to small variations of the system, such as a change of the applied frequency, the configuration of the enclosure boundary, or the location and orientation of an internal scatterer. The statistical variations of the elements of $\textbf{Z}$ and $\textbf{S}$ due to small random changes in the scattering system are of great interest \cite{Zheng2006c,Fyodorov2005,Savin2006}. For example, the variances of the elements of $\textbf{S}$ and their ratio (the Hauser-Feshbach relation) have been studied in the nuclear scattering literature in investigations of the statistics of inelastic scattering of neutrons \cite{Hauser1952} and compound nuclear reactions \cite{Verbaarschot1985,Agassi1975}. Friedman and Mello used information theory to derive the Hauser-Feshbach formula in the statistical treatment of nuclear reactions \cite{Friedman1985}. The elastic enhancement factor is the ratio of the variances in reflection (diagonal elements of $\textbf{S}$) to that in transmission (off-diagonal elements of $\textbf{S}$) \cite{Verbaarschot1986}. In chaotic scattering, elastic processes (the diagonal elements) are known to be systematically enhanced over inelastic ones (the off-diagonal elements) \cite{Kretschmer1978, Dietz2010}.
For a two-port system, the elastic enhancement factor is $W=\sqrt{\textrm{Var}[S_{11}]\textrm{Var}[S_{22}]}/\textrm{Var}[S_{12}]$, where $\textrm{Var}[x]$ stands for the variance of the variable $x$, and $S_{ij}$ denotes the matrix element of $\textbf{S}$ that occupies the $i^{th}$ row and the $j^{th}$ column. In research on electromagnetic fields in mode-stirred reverberating chambers, Fiachetti and Michelsen have conjectured the universality of the ratio of the variances of the scattering elements in the case of time reversal invariant systems (corresponding to RMT of the Gaussian orthogonal ensemble (GOE)) \cite{Fiachetti2003}. The universality of the scattering variance ratio has been tested with wave scattering experiments in microwave resonators in the GOE case \cite{Zheng2006c}. Dietz \textit{et al.} have also tested the universality of the elastic enhancement factor with microwave resonators in the GOE case and in cases of partially broken time reversal invariance (corresponding to RMT of the Gaussian unitary ensemble (GUE)) \cite{Dietz2010}. {\L}awniczak \textit{et al.} have used microwave networks to test the elastic enhancement factor in both the GOE and GUE cases \cite{Lawniczak2010,Lawniczak2011,Lawniczak2012}. In this paper we are concerned with the impedance variance ratio, which is defined as \cite{Zheng2006c}
\begin{equation}\label{eq:variance_ratioZ} \Xi_{Z}\equiv\frac{\textrm{Var}[Z_{ij}]}{\sqrt{\textrm{Var}[Z_{ii}]\textrm{Var}[Z_{jj}]}},\ \ \ i\neq j, \end{equation}
and the scattering variance ratio, defined as
\begin{equation}\label{eq:variance_ratioS} \Xi_{S}\equiv\frac{\textrm{Var}[S_{ij}]}{\sqrt{\textrm{Var}[S_{ii}]\textrm{Var}[S_{jj}]}},\ \ \ i\neq j, \end{equation}
where the variances arise from small variations of the system. For a reciprocal ($Z_{ij}=Z_{ji}$) two-port system, the impedance variance ratio is $\Xi_{Z}=\textrm{Var}[Z_{12}]/\sqrt{\textrm{Var}[Z_{11}]\textrm{Var}[Z_{22}]}$. Similarly, the scattering variance ratio is $\Xi_{S}=\textrm{Var}[S_{12}]/\sqrt{\textrm{Var}[S_{11}]\textrm{Var}[S_{22}]}$. Note that $\Xi_S$ is the inverse of the elastic enhancement factor of a two-port system. The impedance variance ratio $\Xi_{Z}$ is predicted to be a universal function of the loss parameter $\alpha$ \cite{Zheng2006c}, which characterizes the losses and mode spacing within the wave scattering system (defined below). On the other hand, $\Xi_{S}$ is in general dependent on the system-specific features of the wave scattering system and hence not universal. Only in the high loss regime ($\alpha \gg 1$) can one assume that the fluctuating part of the impedance matrix (or the scattering matrix) is much smaller than the mean part, which allows one to obtain the result $\Xi_{S}\simeq\Xi_{Z}$ ($\alpha\gg 1$) \cite{Zheng2006c}, implying that $\Xi_{S}$ is approximately universal for high loss. The loss parameter can be understood as the degree of overlap of resonances in frequency in the electromagnetic case (or in energy level in the quantum case) due to the distributed losses of the closed version of the wave scattering system.
For example, in the case of electromagnetic wave scattering, the loss parameter is
\begin{equation}\label{eq:loss_para} \alpha=\frac{f}{2Q\Delta f}, \end{equation}
where $f$ is the frequency of the wave signal, $\Delta f$ is the average spacing between cavity resonant frequencies near $f$, and $Q$ is the quality factor due to the distributed losses of the closed cavity, such as losses from conducting walls or a lossy dielectric that fills the cavity \cite{Hemmady2005a, Zheng2006a,Zheng2006b}. Based on RMT, researchers have given analytical expressions for $\Xi_{Z}(\alpha)$ \cite{Zheng2006c,Fyodorov2005} and $\Xi_{S}(\alpha)$ \cite{Fyodorov2005,Savin2006} for the GOE and GUE cases. In this paper, we focus on the time reversal invariant case (GOE). The goal of this paper is to experimentally test the analytical predictions for the impedance variance ratio and the scattering variance ratio in the low loss regime. Dietz \textit{et al.} carried out experiments in the low loss regime, but their interest was in the elastic enhancement factor (the inverse of $\Xi_{S}$) in the weak port-coupling situation \cite{Dietz2010}. Note that the common approach to accounting for coupling (one nonuniversal feature) is to use a single scalar quantity for a given frequency range (the amplitude of the averaged scattering parameter $|\overline{S_{ii}}|$) \cite{Kuhl2005, Savin2006}, whereas the random coupling model treats nonuniversal features more generally by using a complex function of frequency (the frequency-dependent averaged impedance matrix, defined in Section 2.2), and includes short ray trajectories. Zheng \textit{et al.}'s study of $\Xi_{Z}$ and $\Xi_{S}$ \cite{Zheng2006c} and {\L}awniczak \textit{et al.}'s studies of the elastic enhancement factor \cite{Lawniczak2010,Lawniczak2011,Lawniczak2012} applied the original version of the RCM to take account of the nonuniversality of the port coupling. In this paper we apply the extended version of the RCM to further include the nonuniversal features of the short ray trajectories. We also test a low loss regime that was not reached in Zheng's or {\L}awniczak's experiments \cite{Zheng2006c, Lawniczak2010,Lawniczak2011,Lawniczak2012}. In the following sections, we first review the theory and present numerical tests of $\Xi_{Z}$ and $\Xi_{S}$ as functions of the loss parameter. The numerical tests reveal a deviation from the theory due to the finite number of samples, which is more significant in the low loss regime for the impedance variance ratio. After the numerical tests, we present our experimental systems of three microwave cavities with varied values of the loss parameter and make a thorough experimental test over a broad range of loss parameters. \section{Theory and Numerical Results} \subsection{Universal Statistics Based on RMT} The theoretical model of the impedance variance ratio $\Xi_{Z}$ is derived from RMT \cite{Zheng2006c}. Using RMT, for a complicated wave scattering system with time reversal invariance of wave propagation, researchers have developed a statistical model of the impedance matrix $\textbf{Z}_{rmt}$ \cite{Zheng2006a, Zheng2006b, Hart2009, Lewenkopf1991, Beck2003, Fyodorov2004, Fyodorov2005, Savin2005}.
This statistical model is applicable to situations where system-specific short-ray-trajectory effects are negligible and the ports are such that the input-output channels are perfectly matched to the scatterer (in the sense that $\langle \textbf{Z} \rangle = \textbf{1}$, where $\langle\ldots \rangle$ denotes a suitable ensemble average). With the known statistics of $\textbf{Z}_{rmt}$, the impedance variance ratio as a function of $\alpha$ can be analytically derived \cite{Mehta1991B,Zheng2006a}
\begin{equation}\label{eq:VRZ_analy} \Xi_{Z_{rmt}}(\alpha)=\left[3-2\int_{0}^{\infty}\frac{4\ g(x)}{4+(x/\alpha)^{2}}\ dx \right]^{-1}, \end{equation}
where $\displaystyle g(x) = f^{2}(x)-\left[\int^{x}_{0}f(x')dx'-\frac{1}{2}\right]\frac{df}{dx}$ and $\displaystyle f(x)=\frac{\sin(\pi x)}{\pi x}$ in the time reversal invariant case. This result is shown as the thick black curve in Fig.$\ $1, where the loss parameter scale is logarithmic. Note that $\Xi_{Z_{rmt}} = 1/3$ in the GOE lossless case ($\alpha = 0$) and $\Xi_{Z_{rmt}} = 1/2$ as $\alpha \rightarrow \infty$.
\begin{figure} \includegraphics[width=3.2in]{VRZ_num.EPS} \caption{The impedance variance ratio versus the loss parameter $\alpha$. The thick black curve is the analytical formula $\Xi_{Z_{rmt}}$, Eq.$\ $(5). The other colored curves are numerical results for the mean impedance variance ratio $\widetilde{\Xi}_{Z_{rmt}}^{(N_{s})}$ based on $\textbf{Z}_{rmt}$ with different numbers of samples ($N_{s}$) indicated in the parentheses.} \end{figure}
In addition to the analytical prediction (Eq.$\ $(5)), we also numerically generate $2\times 2$ random impedance matrices $\textbf{Z}_{rmt}$ (using the appropriate RMT ensemble) and compute the variance ratios for different values of the loss parameter $\alpha$. We select 15 different loss parameters from $\alpha = 0.01$ to $\alpha = 10$. For each loss parameter, we generate a finite ensemble with $N_{s}$ samples of $\textbf{Z}_{rmt}$ matrices. The variations of these matrices represent a finite sampling of the universal variations of the wave scattering system. Because the number of generated sample matrices ($N_{s}$) is finite, the variance ratio $\displaystyle\Xi_{Z_{rmt}}^{(N_{s})}= \frac{\textrm{Var}^{(N_{s})}[Z_{rmt,12}]}{\sqrt{\textrm{Var}^{(N_{s})}[Z_{rmt,11}]\textrm{Var}^{(N_{s})}[Z_{rmt,22}]}}$ of a finite ensemble is not a single value, but has a statistical distribution. To illustrate the finite-sample-size issue, we choose the sample numbers as $N_{s}=$ 30, 100, 350, $10^{3}$, and $10^{6}$ for each loss parameter, and we numerically generate the statistical distribution of $\Xi_{Z_{rmt}}^{(N_{s})}$. We plot the means of these distributions ($\widetilde{\Xi}_{Z_{rmt}}^{(N_{s})} = \langle\Xi_{Z_{rmt}}^{(N_{s})}\rangle$) versus the loss parameter as colored curves in Fig.$\ $1. One can see that the deviations between the numerical $\widetilde{\Xi}_{Z_{rmt}}^{(N_{s})}$ and the analytical theory (Eq.$\ $(5)) are more significant in the low loss cases. This indicates that fluctuations of $\Xi_{Z_{rmt}}^{(N_{s})}$ in the low loss cases are more significant, thus necessitating a large number of samples to achieve good agreement between the finite-size numerical mean and the theory. As with the impedance variance ratio $\Xi_{Z_{rmt}}$, we have done the same analysis for the scattering variance ratio $\Xi_{S_{rmt}}$, where $\textbf{S}_{rmt}=(\textbf{Z}_{rmt}-\textbf{1})(\textbf{Z}_{rmt}+\textbf{1})^{-1}$.
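Before turning to $\Xi_{S_{rmt}}$, we note that the analytical curve of Eq.$\ $(5) in Fig.$\ $1 can be reproduced directly by numerical quadrature; a minimal sketch (our own illustration, not the code used for the figure) using \texttt{scipy} is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import sici

def f(x):
    return np.sinc(x)                      # sin(pi x) / (pi x)

def fprime(x):
    if abs(x) < 1e-8:
        return -np.pi**2 * x / 3.0         # small-x limit of f'(x)
    return (np.cos(np.pi * x) - np.sinc(x)) / x

def g(x):
    F = sici(np.pi * x)[0] / np.pi         # F(x) = int_0^x f(x') dx'
    return f(x)**2 - (F - 0.5) * fprime(x)

def xi_z(alpha):                           # Eq. (5)
    I, _ = quad(lambda x: 4.0 * g(x) / (4.0 + (x / alpha)**2),
                0.0, np.inf, limit=500)
    return 1.0 / (3.0 - 2.0 * I)

# xi_z(1e-3) -> approx 1/3 and xi_z(100.0) -> approx 1/2, the GOE
# lossless and high-loss limits quoted above.
\end{verbatim}
(The high-loss limit follows analytically as well, since $\int_0^\infty g(x)\,dx = 1/2$ by integration by parts.)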
For the scattering matrices generated based on RMT in the time reversal invariant (GOE) case, the theoretical prediction is $\Xi_{S_{rmt}} = 1/2$ \cite{Brouwer1997,Zheng2006c}, and it is independent of the loss parameter $\alpha$. We show the theory and numerical results in Fig.$\ $2. Note that $\Xi_{S_{rmt}}$ does not contain the nonuniversal features encountered in a typical practical system.
\begin{figure} \includegraphics[width=3.2in]{VRS_num.EPS} \caption{The scattering variance ratio versus the loss parameter $\alpha$. The thick black curve is the theory $\Xi_{S_{rmt}} = 1/2$. The other colored curves are numerical results $\widetilde{\Xi}_{S_{rmt}}^{(N_{s})}$ with different numbers of samples ($N_{s}$) indicated in the parentheses.} \end{figure}
\subsection{Including the Nonuniversal Features through the RCM} To extend the predictions of RMT to practical systems and include nonuniversal features, Zheng \textit{et al.} have introduced the random coupling model \cite{Zheng2006a,Zheng2006b}. The original version of the random coupling model took the system-specific port coupling into account through the radiation impedance matrix. This method has also been applied in previous work on impedance and scattering variance ratios \cite{Zheng2006c}. Hart \textit{et al.} have considered the additional system-specific features of short ray trajectories between ports and developed the short-ray-trajectory-corrected version of the RCM \cite{Hart2009}. This RCM relates the universal fluctuating part to the practical impedance matrix $\textbf{Z}$ as
\begin{equation}\label{eq:RCMavg} \textbf{Z}_{n}=\textbf{R}_{avg}^{-1/2} \left(\textbf{Z}-i\textbf{X}_{avg}\right) \textbf{R}_{avg}^{-1/2}. \end{equation}
The normalized impedance matrix $\textbf{Z}_{n}$ represents the universal part, and its statistics are the same as the RMT prediction ($\textbf{Z}_{rmt}$) \cite{Yeh2010a, Yeh2010b}. The nonuniversal features of the port coupling (the radiation impedance) and the short ray trajectories are included in the ensemble-averaged impedance matrix $\textbf{Z}_{avg}=\textbf{R}_{avg}+i\textbf{X}_{avg}$, where $\textbf{R}_{avg} = \textrm{Re}[\textbf{Z}_{avg}]$, $\textbf{X}_{avg} = \textrm{Im}[\textbf{Z}_{avg}]$ \cite{Hart2009, Yeh2010b}. In experiments measuring the statistics of wave scattering properties, one needs an ensemble measurement of many different realizations of the system \cite{Hemmady2005a, Yeh2010a, Yeh2010b, Schafer2005, Schanze2005}. In this paper, our experimental measurement ensemble includes configuration variation and frequency variation. These variations aim to create a set of systems in which none of the nonuniversal system details are reproduced from one realization to another, except for the effects of the port coupling and short ray trajectories. The previous analysis of the experimental results for the impedance variance ratio still contained the frequency-dependent nonuniversal features of short ray trajectories \cite{Zheng2006c}. In this paper we remove these by utilizing the extended RCM (Eq.$\ $(6)) \cite{Yeh2010a, Yeh2010b}. Considering the extended RCM (Eq.$\ $(6)), in general the variance ratio $\Xi_{Z}$ of the impedance matrix and the variance ratio $\Xi_{Z_{n}}$ of the normalized impedance matrix are not equal, and their relationship depends on the elements of $\textbf{R}_{avg}$ (note that $\textbf{X}_{avg}$ does not influence the variances of the impedance elements).
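In practice, the conversion of Eq.$\ $(1) and the normalization of Eq.$\ $(6) amount to a few matrix operations per realization. A minimal two-port sketch (ours; in the actual analysis $\textbf{Z}_{avg}$ is frequency dependent and is built from the radiation impedance and short-trajectory corrections, so the plain ensemble mean below is only a stand-in):
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

Z0 = np.diag([50.0, 50.0])        # characteristic impedances [Ohm]
sqrtZ0 = np.sqrt(Z0)
I2 = np.eye(2)

def s_to_z(S):
    # Eq. (1): Z = Z0^{1/2} (1 + S)(1 - S)^{-1} Z0^{1/2}
    return sqrtZ0 @ (I2 + S) @ np.linalg.inv(I2 - S) @ sqrtZ0

def normalize(Z, Z_avg):
    # Eq. (6): Z_n = R_avg^{-1/2} (Z - i X_avg) R_avg^{-1/2},
    # with R_avg = Re[Z_avg] assumed positive definite.
    R_inv_sqrt = np.linalg.inv(sqrtm(Z_avg.real))
    return R_inv_sqrt @ (Z - 1j * Z_avg.imag) @ R_inv_sqrt

# Given an ensemble S_list of measured 2x2 scattering matrices:
# Z_list = [s_to_z(S) for S in S_list]
# Z_avg  = np.mean(Z_list, axis=0)
# Z_n    = [normalize(Z, Z_avg) for Z in Z_list]
\end{verbatim}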
However, if the ports of the wave scattering system are far apart, then the off-diagonal elements of $\textbf{Z}_{avg}$ are small \cite{Zheng2006c}, and one can approximately simplify the relationship between $\Xi_{Z}$ and $\Xi_{Z_{n}}$. More specifically for a two-port system, one can define
\begin{equation}\label{eq:Rsqrt} \textbf{R}_{avg}^{1/2} = \left[ \begin{array}{cc} A & B \\ C & D \\ \end{array} \right], \end{equation}
where $A$, $B$, $C$, and $D$ ($B=C$ in time reversible (reciprocal) cases) are all frequency-dependent real quantities. Under the condition $A$, $D\gg |B|$, $|C|$, the relationships of the impedance variances over configuration realizations at a frequency $f$ become
\begin{equation}\label{eq:variance_11} \textrm{Var}[Z_{11}]=A^{2}(f)\textrm{Var}[Z_{n,11}], \end{equation}
\begin{equation}\label{eq:variance_22} \textrm{Var}[Z_{22}]=D^{2}(f)\textrm{Var}[Z_{n,22}], \end{equation}
\begin{equation}\label{eq:variance_12} \textrm{Var}[Z_{12}]=A(f)D(f)\textrm{Var}[Z_{n,12}]. \end{equation}
In this case, $A(f)$ and $D(f)$ cancel in the calculation of the variance ratio, and one has the universal result
\begin{equation}\label{eq:variance_equal} \Xi_{Z} = \Xi_{Z_{n}}. \end{equation}
This equation shows the significance of the impedance variance ratio: if the off-diagonal elements of $\textbf{R}_{avg}$ are negligible, the quantity is independent of the system-specific feature $\textbf{Z}_{avg}$ and is directly related to the universal fluctuating quantity $\textbf{Z}_{n}$. The statistics of $\textbf{Z}_{n}$ are the same as the statistics of $\textbf{Z}_{rmt}$, and the statistical properties only depend on the loss parameter $\alpha$ \cite{Zheng2006c}. Therefore, the impedance variance ratio becomes a universal property of the wave scattering system and only depends on the loss parameter $\alpha$. On the other hand, the scattering variance ratio $\Xi_{S}$ of the practical scattering matrix does not have this universality, even under the condition $A$, $D\gg |B|$, $|C|$ \cite{Zheng2006c}. The elastic enhancement factor (the inverse of $\Xi_S$) is known to be a function of both the loss parameter $\alpha$ and the coupling, in general \cite{Savin2006}. Only in the high loss regime ($\alpha\gg1$) can one further assume that the fluctuating part of the practical impedance is much smaller than the mean part, $\delta \textbf{Z} \ll \langle \textbf{Z}\rangle$, so that the practical impedance elements satisfy $|Z_{11}|$, $|Z_{22}|\gg |Z_{12}|$, $|Z_{21}|$; therefore, with Eq.$\ $(1), Zheng \textit{et al.} have derived \cite{Zheng2006c}
\begin{equation}\label{eq:variance_S_equal} \Xi_{S} \simeq \Xi_{Z}, \ \ \ \ (\alpha\gg1). \end{equation}
Note that for the high loss GOE case, $\Xi_{S} \simeq \Xi_{Z} = 1/2$. \section{Experimental Systems and Results} \subsection{Three Experimental Systems} In order to experimentally test the predictions above, we use an Agilent PNA E8364C network analyzer to measure the frequency dependence of the complex $2\times2$ scattering matrices $\textbf{S}$ of three two-port microwave scattering enclosures in the semiclassical limit. To achieve the semiclassical limit, the typical length scales of the cavities are at least several times larger than the free-space wavelength, making the systems sensitive to small perturbations.
We add perturbing objects (perturbers) in each wave scattering system and move the perturbers (with movements larger than or on the scale of the applied wavelength) to create an ensemble for each wave scattering system. We can convert $\textbf{S}$ to $\textbf{Z}$ by Eq.$\ $(1), and the characteristic impedances of the transmission lines connected to the ports are $Z_{0,11}=Z_{0,22}=50\ [\Omega]$ in all experiments. The first experimental system is a quasi-two-dimensional ray-chaotic ``$1/4$-bowtie-shaped'' microwave billiard illustrated in Fig.$\ $3(a). The cavity is made of copper and has two coupling ports schematically shown as the red dots in Fig.$\ $3(a). Microwaves are injected or extracted through each port antenna attached to a coaxial transmission line, and each antenna is inserted into the cavity through a small hole (diameter about 0.1 [cm]) in the lid, similar to previous setups \cite{Yeh2010a, Hemmady2006a, Hemmady2006c}. Due to the two convex circular arc walls, ray trajectories are chaotic. This system has previously been used to test the predictions of RMT \cite{So1995, Gokirmak1998, Chung2000}. To create an ensemble for statistical analysis, we add two metal perturbers to the interior of the cavity and randomly move the perturbers to create 100 different realizations \cite{Yeh2010a, Yeh2010b}. For each realization, we measure the scattering matrix over the frequency window ($6 - 18$ [GHz]). The perturbers are conducting cylinders of diameter 5.1 [cm] and height approximately equal to that of the cavity (0.7 [cm]).
\begin{figure} \includegraphics[width=1.6in]{bowtie.eps} \includegraphics[width=1.6in]{cutcircle.eps} \caption{(a) The $1/4$-bowtie cavity with the two ports as red dots and the two metallic perturbers as blue circles. (b) The cut-circle cavity with the two ports as red dots and the Teflon perturber as the blue wedge.} \end{figure}
In order to test the predictions for $\Xi_{Z}$ and $\Xi_{S}$ in the low loss regime, we have carried out experiments (similar to those in the $1/4$-bowtie cavity) in a superconducting microwave cavity, illustrated in Fig.$\ $3(b). The shape of the cavity is a symmetry-reduced ``cut-circle'' that shows chaos for ray trajectories \cite{Yeh2012a, Ree1999, Richter2001, Dietz2006, Dietz2008}. The superconducting cavity is made of copper with Pb-plated walls and cooled to a temperature (6.6 [K]) below the transition temperature of Pb. A Teflon wedge (the blue wedge in Fig.$\ $3(b)) can be rotated as a ray-splitting perturber inside the cavity, and we rotate the wedge in $5^{\circ}$ increments to create an ensemble of 72 different realizations. Measurements of the scattering matrix of the superconducting cavity are calibrated by an \textit{in-situ} broadband cryogenic calibration system (more experimental details of the cryogenic systems can be found in \cite{Yeh2013}). The previous two wave systems are both quasi-two-dimensional cavities. We also perform experiments in a three-dimensional metal cavity, which we call the ``GigaBox'' \cite{Taddese2011,Frazier2013}. The GigaBox is an approximately rectangular microwave resonator with dimensions of length 1.27 [m], width 1.22 [m], and height 0.65 [m]. The cavity is made of aluminum and has mode stirrers (a fan formed by aluminum plates) inside it. The mode stirrers and the irregularities on the surface create a complicated wave scattering environment. A stepper motor is used to rotate the mode stirrers to create an ensemble of 199 different realizations.
\begin{table*}[t] \centering \caption{Parameters of the six experimental data sets.} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline Data Set & I & II & III & IV & V & VI \\ \hline Cavity & Cut-circle & Cut-circle & $1/4$-bowtie & $1/4$-bowtie & GigaBox & GigaBox \\ $f_{R}$ [GHz] & $14 - 16$ & $17 - 19$ & $14 - 16$ & $17 - 19$ & $6.0 - 6.1$ & $9.0 - 9.1$ \\ $\Delta f$ [MHz] & 28 & 23 & 10 & 8.6 & 0.031 & 0.014 \\ $N_{m}$ & 71 & 87 & 200 & 230 & 3200 & 7100 \\ $N_{r}$ & 72 & 72 & 100 & 100 & 199 & 199 \\ $\alpha$ & 0.02 & 0.23 & 1.24 & 1.9 & 4.51 & 9.31 \\ $\Xi_{Z}$ &$0.39\pm0.01$&$0.44\pm0.01$&$0.48\pm0.01$&$0.48\pm0.01$&$0.502\pm0.005$&$0.487\pm0.004$\\ $\Xi_{Z_{n}}$ &$0.37\pm0.02$&$0.45\pm0.01$&$0.48\pm0.01$&$0.48\pm0.01$&$0.502\pm0.005$&$0.489\pm0.004$\\ $\Xi_{S}$ &$0.41\pm0.01$&$0.48\pm0.01$&$0.50\pm0.01$&$0.48\pm0.01$&$0.508\pm0.005$&$0.503\pm0.004$\\ $\Xi_{S_{n}}$ &$0.51\pm0.02$&$0.55\pm0.02$&$0.51\pm0.01$&$0.50\pm0.01$&$0.503\pm0.005$&$0.489\pm0.004$\\ \end{tabular} \end{table*}
For each of these three microwave systems, we select two frequency ranges where the condition $A$, $D\gg |B|$, $|C|$ (Eq.$\ $(7)) is satisfied. The parameters of these six experimental data sets are shown in Table 1, where $f_{R}$ is the frequency range, $\Delta f$ is the mean frequency spacing of the resonant modes in that range, $N_{m}$ is the approximate number of modes in the frequency range, and $N_{r}$ is the number of configuration realizations. The first data set of the cut-circle cavity is measured at temperature 6.6 [K] (the superconducting case), and the second data set is from the cut-circle cavity at temperature 270 [K] (the normal metal case). Note that the GigaBox system has a much higher mode density than the two quasi-two-dimensional cavities due to its large volume ($V$ = 1.01 [m$^{3}$]), and therefore the smaller frequency range (100 [MHz]) of the GigaBox contains more resonances than the frequency range (2 [GHz]) of the other two cavities. The loss parameters $\alpha$ for these data sets are determined as best-fit values by the method introduced in \cite{Yeh2010b}, which compares the statistics of the normalized scattering element $S_{n,12}$ with the prediction of RMT ($S_{rmt,12}$). The averaged variance ratios ($\Xi_{Z}$, $\Xi_{Z_{n}}$, $\Xi_{S}$, and $\Xi_{S_{n}}$) and their standard errors of the mean are calculated from the experimental data, and we describe the procedures in the next section. \subsection{Analysis of the Variance Ratios}
\begin{figure} \includegraphics[width=3.2in]{VRZ_data2.eps} \caption{The experimental impedance variance ratio versus the loss parameter $\alpha$. The thick black curve is the analytical formula $\Xi_{Z_{rmt}}$, Eq.$\ $(5). The green squares are $\Xi_{Z_{n}}$ from the normalized impedance matrix over the whole frequency range. The red circles are averaged $\Xi_{Z}$, and the pink bars show the standard deviations of $\Xi_{Z}$ from the practical impedance matrix over the smaller frequency windows. The blue stars are averaged $\Xi_{Z_{n}}$, and the light blue bars show the standard deviations of $\Xi_{Z_{n}}$ from the normalized impedance matrix over the smaller frequency windows.} \end{figure}
We show the impedance variance ratios of the normalized impedance matrix $\textbf{Z}_{n}$ and the measured impedance matrix $\textbf{Z}$ versus the loss parameter $\alpha$ in Fig.$\ $4.
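The estimator behind the last four rows of Table 1 and the points in Figs.$\ $4 and 5 is simply Eqs.$\ $(2) and (3) applied to sample variances over the measured ensemble; the windowed averaging in the sketch below (ours, with hypothetical array shapes) mirrors the procedure described in detail in the following paragraphs:
\begin{verbatim}
import numpy as np

def variance_ratio(M):
    # M: complex array of shape (N_samples, 2, 2); Eqs. (2) and (3).
    # np.var of a complex array computes mean(|x - <x>|^2).
    v = np.var(M, axis=0)
    return v[0, 1] / np.sqrt(v[0, 0] * v[1, 1])

def windowed_ratio(M, n_win=20):
    # M: shape (N_r, N_f, 2, 2); one ratio per frequency sub-window,
    # pooling samples over realizations and frequencies in the window.
    chunks = np.array_split(M, n_win, axis=1)
    ratios = [variance_ratio(c.reshape(-1, 2, 2)) for c in chunks]
    return np.mean(ratios), np.std(ratios)
\end{verbatim}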
As shown in the finite-size numerical ensembles (Fig.$\ $1), a large number of samples is critical for accurately determining the impedance variance ratio, especially in the low loss regime. For experimental measurements, the number of samples from different configuration realizations is limited by the remaining correlations in the experimental data. Therefore, we take the samples for computing the variance from the ensemble not only with different configuration realizations but also with frequency variations. Note that in Eqs.$\ $(8) to (11) the variances are taken over the configuration realizations at a fixed frequency. However, if $\alpha$ is frequency independent, Maxwell's equations are invariant to the scaling $f\rightarrow \eta f$ and (length) $\rightarrow$ (length)$/\eta$, so that a frequency change can be thought of as equivalent to a configuration change. For the normalized impedance matrix $\textbf{Z}_{n}$, the frequency-dependent nonuniversal features ($A(f)$ and $D(f)$) have been removed by the RCM, so we can compute the impedance variance ratio $\Xi_{Z_{n}}$ from variances over the whole frequency range and all realizations. The results are shown as green squares in Fig.$\ $4. For the measured impedance matrix $\textbf{Z}$, the frequency-dependent nonuniversal features remain, so taking variances over the whole frequency range is not valid. Therefore, we take a smaller frequency window (1/20 of the whole frequency range $f_{R}$) instead and assume that the nonuniversal features ($A(f)$ and $D(f)$) are approximately constant in this small frequency window (100 [MHz] for the cut-circle cavity and the $1/4$-bowtie cavity, and 5 [MHz] for the GigaBox). With this condition, the derivation from Eqs.$\ $(8) to (11) is still valid. We compute the averaged impedance variance ratio $\Xi_{Z}$ as the average of the 20 impedance variance ratios from the smaller windows and plot the results as red circles in Fig.$\ $4. For comparison, we also plot the averaged impedance variance ratio $\Xi_{Z_{n}}$ of the small windows for the normalized impedance matrix as the blue stars. The pink bars (and the light blue bars) show the standard deviations of the 20 variance ratios of the measured (and normalized) impedance matrices for the smaller windows to illustrate the larger fluctuations in the low loss regime. Note that $1/\sqrt{20}$ of these standard deviations are the standard errors of the mean shown in the last four rows in Table 1. Note also that in Fig.$\ $4 the green squares and the blue stars are both computed from the normalized impedance matrix, and the only difference is the finite sample size, due to the frequency range for the green squares being 20 times larger than the frequency range for the blue stars. The values of the blue stars are systematically larger than the values of the green squares, especially in the lowest loss case. This trend is consistent with the finite-sample-size deviation illustrated in Fig.$\ $1. Comparing all three sets of experimental impedance variance ratios, the results in Fig.$\ $4 agree with the prediction $\Xi_{Z}=\Xi_{Z_{n}}=\Xi_{Z_{rmt}}$ as a function of the loss parameter, to the extent permitted by the finite sample sizes.
\begin{figure} \includegraphics[width=3.2in]{VRS_data2.eps} \caption{The experimental scattering variance ratio versus the loss parameter $\alpha$. The thick black curve is the theory $\Xi_{S_{rmt}} = 1/2$. The green squares are $\Xi_{S_{n}}$ from the normalized scattering matrix over the whole frequency range.
The red circles are averaged $\Xi_{S}$, and the pink bars show the standard deviations of $\Xi_{S}$ from the practical scattering matrix over the smaller frequency windows. The blue stars are averaged $\Xi_{S_{n}}$, and the light blue bars show the standard deviations of $\Xi_{S_{n}}$ from the normalized scattering matrix over the smaller frequency windows.} \end{figure}
We also convert the impedance matrix to the scattering matrix by Eq.$\ $(1) and perform the same analysis for the scattering variance ratio. The results are shown in Fig.$\ $5. The experimental results show that the variance ratios of the normalized scattering matrices (green squares and blue stars) are consistent with the theoretical prediction $\Xi_{S_{n}}=\Xi_{S_{rmt}}=1/2$. Note that the measured scattering variance ratios (red circles and pink bars) tend to be lower than 1/2, especially in the low loss regime. This trend is opposite to the finite-sample-size deviation illustrated in Fig.$\ $2 and is due to the nonuniversal features in the wave scattering system. Zheng \textit{et al.} have shown that the nonuniversal features (imperfect port coupling) make the averaged $\Xi_{S} < 1/2$ in the lossless case \cite{Zheng2006c}. Savin \textit{et al.} have also examined the nonuniversal features of $\Xi_{S}$ and found its relationship with the loss parameter in imperfect coupling situations \cite{Fyodorov2005, Savin2006}. Hence, the variance ratios of the scattering matrix $\Xi_{S}$ (red circles) are found not to be universal, and they depend on the nonuniversal features, such as the port coupling and short ray trajectories \cite{Zheng2006c,Savin2006}. Only in the high loss regime ($\alpha\gg1$) is approximately universal behavior of $\Xi_{S}$ observed, as in the two data sets from the GigaBox, where $\Xi_{S}\simeq1/2$. By comparing Figs.$\ $4 and 5, or the four rows of variance ratios in Table 1, we see that $\Xi_{S}\simeq\Xi_{Z}$ in the high loss regime. \section{Conclusion} In this paper, we analyze the impedance and scattering variance ratios of complicated wave scattering systems at short wavelength. Through numerical tests (Fig.$\ $1) and experimental tests in three microwave systems (Fig.$\ $4), we show that the impedance variance ratio $\Xi_{Z}$ is a universal function of the loss parameter, independent of the nonuniversal port coupling and short-ray-trajectory effects (accounted for in $\textbf{Z}_{avg}$ by the RCM). On the other hand, the scattering variance ratio $\Xi_{S}$ in general depends on the nonuniversal features (as the low loss cases in Fig.$\ $5 demonstrate), although it is universal in the high loss regime. Compared with the previous analysis \cite{Zheng2006c}, this work makes two novel contributions. One is that we utilize the superconducting microwave cavity to test the theoretical predictions in the low loss regime. The other is that we have utilized the extended RCM to better account for the nonuniversal features. By applying the extended RCM to remove the nonuniversal features of the system, we show that the normalized data ($\Xi_{Z_{n}}$ and $\Xi_{S_{n}}$) agree with the theoretical predictions ($\Xi_{Z_{rmt}}$ and $\Xi_{S_{rmt}}$) to within the precision dictated by the finite sample size. \section*{Acknowledgements} We thank the group of A. Richter (Technical University of Darmstadt) for graciously lending us the cut-circle billiard, and H. J. Paik and M. V. Moody for use of the pulsed tube refrigerator cryostat. This work is funded by the ONR/Maryland AppEl Center Task A2 (Contract No.
N000140911190), the Office of Naval Research (Contract No. N000141310474), the AFOSR (Grant No. FA95500710049), NSF-GOALI ECCS-1158644, and the Center for Nanophysics and Advanced Materials (CNAM).
\subsection*{1. Introduction} It is generally assumed that the key problem with quantum mechanics is a problem with measurement. After all, apart from measurement, we have deterministic unitary evolution, while the measurement outcome is random. Besides, in the absence of measurement, unitary evolution is well defined, while in the case of measurement, there is no agreed-upon definition as to what constitutes measurement as such. Finally, the unitary evolution can be viewed as local if we are concerned about field operators as opposed to actual states described over a hypersurface. On the other hand, measurement is distinctly non-local. While the above points are legitimate, I don't agree that they are the most crucial things that make quantum mechanics quantum. After all, some proposals have been made as to how to model the measurement: Bohmian mechanics (for example, \cite{Bohm1}), the GRW model (see \cite{GRW1} and \cite{GRW2}), etc. While said proposals are non-local, Newtonian mechanics was non-local as well. Thus, our need for locality is due to \emph{empirical} evidence for relativity as opposed to what our classical intuition \emph{demands}. Therefore, as long as said theories claim to match conventional predictions -- which they do -- the empirical evidence can't falsify them, which is all we need. Perhaps a more serious problem is that a lack of falsification doesn't amount to a proof: after all, there is no agreement as to which of those several theories, if any, is realized in nature. That, again, is nothing new: people before Newton were facing these same problems, yet they weren't claiming that they should abandon classical logic. The consensus among conventional scientists is the "impossibility" of reconciling quantum physics with classical intuition; but what we have here is not impossibility at all, just a lack of knowledge. However, there is a far more serious problem that \emph{does}, in fact, imply some sense of "impossibility", and it is largely overlooked. In particular, the ontology of the quantum state itself can't be viewed in classical terms. And this is true even in the absence of measurement! Yes, unitary evolution is deterministic, but we don't know "what" said deterministic process is describing! But now we have to be a little more careful. If we talk about single particle quantum mechanics, we can easily answer the question we just posed by simply comparing Schr\"odinger's wave function to the classical Maxwell field. Indeed, if we have no problem with the Maxwell field changing direction, we shouldn't have any problem with $\psi$ being complex-valued. After all, $\psi$ is not a probability, it is a field. The relation between probability and $\vert \psi \vert^2$ is similar to the relationship between the probability and the weights placed on the two sides of a biased coin. Said weights are still physical parameters, \emph{not} probabilities, and the same is true for $\psi$. The problem begins when we introduce multiparticle configuration space. In this case, the Maxwell field is no longer a good analogy since it lives in ordinary space as opposed to configuration space. Perhaps this is what forces us to instead call $\psi$ a "probability amplitude", since "probability", in fact, lives in configuration space; but then the problem arises from the fact that probability as we know it is positive real, while $\psi$ is not. Keeping in mind all of the logical connections we just made, one can argue that the presence of configuration space is the single most important problem in quantum mechanics.
Indeed, this point was raised by various notable physicists (see \cite{Conf1} for some references). One might object to this by pointing out that in classical physics we also have configuration space. The important difference, though, is that in classical physics configuration space is merely a mathematical tool to simplify calculations. If we were to resort to numeric simulations, and therefore wouldn't need simplifications, we would be able to do without configuration space. In quantum mechanics, on the other hand, configuration space is necessary on a conceptual level, since we can't define $\psi$ without it. In other words, the presence of $\psi$ is what makes configuration space a reality rather than a mathematical tool. In classical physics we might have probabilities over configuration space as well. But since those probabilities obey classical rules, we can always derive the probability of a large configuration of particles from the probabilities pertaining to their pairs. In quantum mechanics we cannot do that either. After all, the moment we insert $i$, we are saying that the new probability is not "produced" from the old ones but instead exists as an independent object, with a certain rule that has $i$ in its description, and the resemblance to classical probability laws is merely a coincidence. Now, the "independent object" we just mentioned is the multiparticle probability. In other words, vaguely speaking, $i$ enhances the status of probability, and the probability, with its enhanced status, enhances the status of configuration space, which makes the latter more problematic than it would ever be classically. Even though those different issues are all linked together in the above chain, I claim that "configuration space" is the link that we should try to remove. Once removed, we could go back to the electromagnetic analogy we made in the single particle context. Thus, the purpose of this paper is to get rid of configuration space. Now, the difference between ordinary space and configuration space is simply that the latter has too many dimensions. Since many body QM is merely a low energy limit of QFT, the "large number of dimensions" in the former case is a byproduct of infinitely many dimensions in the latter case. So we can restrict our quest to the dimensions present in QFT. Now, as far as QFT is concerned, it deals with harmonic oscillators in $\phi$. In the QM case, the harmonic oscillator in $x$ corresponds to a function $\psi (x)$. Therefore, in the QFT case, the harmonic oscillator in $\phi$ corresponds to a functional $\psi (\phi)$. Thus, the source of infinite dimensionality is simply that a functional is a function over an infinite-dimensional domain. Therefore, in order to "get rid" of the problem, we have to replace functionals with ordinary functions. This is what I set out to do in this paper (and we focus exclusively on QFT since QM is merely its low energy limit). Clearly, an exact correspondence between a functional and a function is impossible for the simple reason that the cardinalities are different. But, since there is no experimental proof that anything is exact, an approximate correspondence up to coarse graining would suffice.
In fact, the use of an ultraviolet cutoff in QFT calculations implies that the theory is only defined up to a certain scale anyway; it is simply that said scale happens to be very small and, therefore, unknown\footnote{Some people view the cutoff as just a formalism and won't draw any conclusions based on it, but our philosophy is to take things literally whenever possible, so we do believe QFT has a momentum upper bound; we simply don't know what it is.}. This being the case, we propose to introduce a single extra coordinate, $y$, and use it as a way to parametrize a subset of the elements of $\{ \phi \}$ that covers "enough" elements to "approximate" the QFT as we know it. This can be done by means of a "hidden" classical field $\chi (\vec{x},y)$, which enables us to define $g^{(\chi)} \colon \{y \} \mapsto \{ \phi \}$ as
\begin{equation} g^{(\chi)} (y) = \chi_y \end{equation}
where $\chi_y \colon \{ \vec{x} \} \mapsto \mathbb{R}$ is given by
\begin{equation} \chi_y (\vec{x}) = \chi (\vec{x},y) \label{ChiyIntro} \end{equation}
This will enable us to replace $\psi (\phi)$ with $\xi (y)$. But, in order to have an analogy with the electromagnetic field, we would like to have $\xi (\vec{x},y)$ rather than $\xi (y)$. We do that by adding a non-interacting particle, which we call a "fly". Thus, we are describing all of the particles in the universe, plus a fly. If the particles in the universe are in a state that is conventionally represented by $\psi (\phi)$, and the fly has momentum $\vec{p}$, then the function $\xi (\vec{x},y)$ takes the form
\begin{equation} \xi (\vec{x},y) = e^{i \vec{p} \cdot \vec{x}} \psi (g^{(\chi)} (y)) \end{equation}
Alternatively, we can utilize the extra parameter as a way of defining an ensemble of states as opposed to a single quantum state. Thus, the wave function
\begin{equation} \xi (\vec{x}, y) = \sum_k C_k e^{i \vec{p}_k \cdot \vec{x}} \psi_k (g^{(\chi)} (y)) \end{equation}
corresponds to the density matrix
\begin{equation} \sum_k C_k \vert \psi_k (\phi) \rangle \langle \psi_k (\phi) \vert \end{equation}
What we are essentially saying is that, instead of an ensemble of states, we have one single state that involves entanglement with a fly. If the fly is non-interacting, then the components of a state corresponding to different fly momenta will look like separate states in the ensemble. In reality they are part of one and the same state. This is certainly a good thing since some of the theories of quantum measurement (for example, quantum Darwinism) rely on the notion of an ensemble of states, which is another factor that takes away from realism, apart from the things talked about earlier. So it is good that we were able to address both questions at the same time instead of making separate constructions for each one of them. This, in turn, will allow us to try to convince realists to consider ensemble theories and, conversely, to try to convince the ensemble people to consider realism. Going back to the issue of coarse graining, we have to warn the reader about the following problem. Suppose $R_{\vec{p}} (y) \in \mathbb{R}$ and $\Theta_{\vec{p}} (y)$ correspond to the amplitude and phase of the Fourier component of $\chi_y \colon \{ \vec{x} \} \mapsto \mathbb{R}$ (see Eq \ref{ChiyIntro}, \ref{RABC}, \ref{ThetaABC}). If we assume that the $y$-coordinate is compactified,
\begin{equation} y + L_5 = y \end{equation}
then the fact that $R_{\vec{p}}$ and $\Theta_{\vec{p}}$ are real valued implies that they are \emph{not} one to one.
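For concreteness, the coordinates $R_{\vec{p}}(y)$ and $\Theta_{\vec{p}}(y)$ appearing in this argument can be read off from a discretized $\chi$. The following sketch (all grid sizes, the random stand-in for $\chi$, and the Fourier normalization are arbitrary illustrative choices of ours; the precise definitions appear in Eqs \ref{RABC} and \ref{ThetaABC} below) makes the parametrization explicit:
\begin{verbatim}
import numpy as np

# Hypothetical discretization: chi(x, y) sampled on a periodic x-grid of
# Nx points and a compactified y-grid of Ny points (y + L_5 = y).
Nx, Ny = 64, 4096
rng = np.random.default_rng(0)
chi = rng.standard_normal((Ny, Nx))   # stand-in for the hidden field chi(x, y)

def fourier_coords(chi, p):
    # For each y, chi_y : x -> chi(x, y); R_p(y) and Theta_p(y) are the
    # amplitude and phase of its p-th Fourier component.
    c = np.fft.fft(chi, axis=1)[:, p] / chi.shape[1]
    return np.abs(c), np.angle(c)

R0, _ = fourier_coords(chi, 0)        # the zero mode carries only an amplitude
Rp, Thetap = fourier_coords(chi, 1)

# The curve y -> (R_p(y), Theta_p(y)) in the plane will generically
# self-intersect, while y -> (R0, R_p, Theta_p) in three dimensions
# generically will not -- the dichotomy discussed in the text.
\end{verbatim}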
As far as $(R_{\vec{p}}, \Theta_{\vec{p}})$ is concerned, it \emph{might} be one to one, but it is not likely: after all, it \emph{is} possible to draw a curve on a plane without self-intersections, yet a random curve is more likely to self-intersect than not. \emph{However}, if we consider \emph{three} parameters, $(R_{\vec{0}}, R_{\vec{p}}, \Theta_{\vec{p}})$, it \emph{is} most probably one to one: after all, a random curve in $\mathbb{R}^3$ is most likely \emph{not to} self-intersect. If so, this creates a problem: any function $\xi \colon \{y \} \mapsto \mathbb{C}$, which we had \emph{intended} to correspond to a function over the infinite dimensional domain $\{ (R_{\vec{0}}, R_{\vec{p}_1}, \Theta_{\vec{p}_1}, R_{\vec{p}_2}, \Theta_{\vec{p}_2}, \cdots ) \} = \mathbb{R}^{\infty}$, can actually be modelled in terms of the three dimensional domain $\{ (R_{\vec{0}}, R_{\vec{p}}, \Theta_{\vec{p}} ) \}$. As will be explained later in more detail, the $R_{\vec{0}}$ parameter can be used to model an arbitrary number of particles with momentum $\vec{0}$, while $(R_{\vec{p}}, \Theta_{\vec{p}})$ can be used to model arbitrary numbers of particles with momenta $+ \vec{p}$ and $- \vec{p}$. Thus, an arbitrary state can be described as a linear combination of states built out of those three modes! For example,
\begin{equation} a^{\dagger}_{\vec{q}} \vert 0 \rangle = \sum_{abc} C_{abc} (a^{\dagger}_{\vec{p}})^a (a^{\dagger}_{- \vec{p}})^b (a^{\dagger}_{\vec{0}})^c \vert 0 \rangle \label{PhysicsContradiction} \end{equation}
despite the fact that
\begin{equation} \vec{p} \neq \vec{q} \label{pNeqq} \end{equation}
In order to get out of this predicament, we make the claim that the $a$-s, $b$-s and $c$-s on the right hand side will be forced to be extremely large numbers, to the point of absurdity (in particular, the finer the coarse graining, the larger these numbers will have to be); the only choice of representation that avoids this feature is the one given on the left hand side. The statement we just made might at first sound impossible: how can we isolate \emph{exactly one} state as opposed to a narrow range of states? After all, the change of representation is continuous! Upon further look, however, there is no contradiction: we know that the set of basis states is discrete anyway; continuous change refers to the change in \emph{coefficients} next to the afore-given set of basis states. Now, what we are saying is that if a coefficient of $(a_{\vec{p}}^{\dagger})^2 \vert 0 \rangle$ is to change by $O (\epsilon)$, then the coefficient next to $(a_{\vec{p}}^{\dagger})^N \vert 0 \rangle$ will also change by $O (\epsilon)$, for some $N \gg 1$. The continuity has nothing to do with $N \gg 1$; it has to do with $\epsilon \ll 1$, and the latter still holds. Now, it is conceivable that, due to some physical process, we would get the probability of $(a_{\vec{p}}^{\dagger})^N \vert 0 \rangle$ to be of $O (\epsilon)$ rather than zero. The only thing we are trying to avoid is for that probability to be large. Now, if we could change the probability of $(a_{\vec{p}}^{\dagger})^N \vert 0 \rangle$ by $O (\epsilon^2)$ while changing the probability of $(a_{\vec{p}}^{\dagger})^2 \vert 0 \rangle$ by $O (\epsilon)$, then $\epsilon^{-1}$ of those changes would lead to a finite change of the probability of $(a_{\vec{p}}^{\dagger})^2 \vert 0 \rangle$ despite an $O (\epsilon)$ change of the probability of $(a_{\vec{p}}^{\dagger})^N \vert 0 \rangle$. But since in actuality both change by $O (\epsilon)$ at the same time, the above scenario is impossible.
In other words, if we insist that $(a_{\vec{p}}^{\dagger})^N \vert 0 \rangle$ is of $O (\epsilon)$ instead of large, then $(a_{\vec{p}}^{\dagger})^2 \vert 0 \rangle$ will have to be of $O (\epsilon)$ rather than large, as well. Thus, we do have a narrow range of states instead of one single state, just as common sense tells us. And, indeed, we have to have a narrow range of states on physical grounds anyway, since we never know what traces of various past interactions could produce. Let us now go back to the statement we made in the sentence following Eq \ref{pNeqq} and explain why we believe that statement. First of all, since the curve $g^{(\chi)} (y)$ fills the function space only up to coarse graining, it is impossible to shift in the $(R_{\vec{q}}, \Theta_{\vec{q}})$ direction while keeping all the other $R$-s and $\Theta$-s constant. However, it is possible to keep the latter \emph{approximately} constant: in particular, we have to "jump" by a very large distance in $y$ in such a way that, at the new point in $y$, the curve $g^{(\chi)} (y)$ "happens" to "come back to" the original point in projection to $(R_{\vec{p}}, \Theta_{\vec{p}})$, but not in projection to $(R_{\vec{q}}, \Theta_{\vec{q}})$. In other words, we change $(R_{\vec{q}}, \Theta_{\vec{q}})$ a lot while changing $(R_{\vec{p}}, \Theta_{\vec{p}})$ only slightly. Since at least one of those parameters changes a lot, $\xi (y)$ has to change a lot as well; there is no question about it. However, we can try to be silly and explain that change by the fact that $(R_{\vec{p}}, \Theta_{\vec{p}})$ had changed. In this case, the $(R_{\vec{p}}, \Theta_{\vec{p}})$-gradient of $\psi (\phi)$ had better be very large, since the change of $(R_{\vec{p}}, \Theta_{\vec{p}})$ is very small. The only time when the gradient of $\psi$ is large is when we are dealing with high energies. And since the momentum in question, $\vec{p}$, is fixed, the only way for the energy to be large is to have a large number of particles with that momentum -- which is where that statement is coming from. On the other hand, if we decide to be more reasonable and say that the cause of the change was $(R_{\vec{q}}, \Theta_{\vec{q}})$ after all, then we no longer need $\psi$ to change fast and therefore no longer need a large number of particles. What we have said so far can be summarized as a tradeoff between larger dimension and lesser precision versus smaller dimension and greater precision. On the one hand, one change cancels the other, so both spaces are equal in size, which allows us to establish a correspondence. On the other hand, we care about dimensionality a lot more than about precision, which is why "winning" the former is a huge accomplishment, even if it comes at the cost of "losing" the latter. From the field perspective, the above tradeoff has to do with the fact that we can choose $\xi (y)$, which is only one coordinate (thus making the space smaller) yet can be measured precisely (thus making the space larger), or we can choose $\psi (\phi)$, which has many degrees of freedom (thus making the space larger) yet is only defined up to coarse graining (thus making the space smaller). In the state language, the tradeoff is that, on the one hand, we can impose a cutoff on the particle numbers (making the space smaller) yet have many different momenta (making the space larger), or we can have only three allowed momenta (making the space smaller) yet allow all particle numbers without any cutoffs (thus making the space larger). The purpose of the rest of the paper is to make some of what we said a lot more explicit.
We will do it in the following steps: {\bf Section 2:} Start by writing down the analytic solution for the general excited state of the harmonic oscillator in 1D and 2D. While in most textbooks one can look up the 1D solutions for the first few states (for example, \cite{Griffiths}), it is very difficult to find a book that gives the one for the general excited state, much less its 2D version, so I decided to derive it myself to use as a reference. Such a derivation, however, turned out to be very long, so I skipped most of it and only covered a brief summary of the key steps. {\bf Section 3:} Convert the wave equations for general states of the 1D and 2D oscillator into equations for functionals of a general state. Similarly, convert the definitions of the raising and lowering operators into the definitions of creation and annihilation operators by replacing ordinary derivatives with functional derivatives, coordinates with other functionals, and so forth. {\bf Section 4:} Use the ideas we talked about in order to replace $\psi (\phi)$ with $\xi (\vec{x},y)$, thus arriving at a definition of a multiparticle state that "looks like" a single particle in 5D and is, therefore, "realistic". We will likewise write down explicit expressions for the creation and annihilation operators as well, which will include the need to define a derivative in the context of coarse graining, and so forth. {\bf Section 5:} We describe the dynamics of the "classical" field $\xi (\vec{x},y)$, which is something we haven't done in the previous sections, which are all focused on kinematical definitions of states. The goal of the dynamics proposed in Section 5 is to make sure that, if the time evolution of $\xi (\vec{x},y)$ obeys said "classical" (yet non-local) dynamics, then the corresponding quantum states (as defined in the previous sections) will obey some version of coarse grained QFT. While the definition of a general particle state will in fact be taken from Section 3 with appropriate modifications, the definition of the creation and annihilation operators will be substantially different from that of Section 3, since in the case of Section 3 we could use infinitesimal shifts while in the case of Section 4 we couldn't. Since our goal is Section 4, we could have skipped the creation and annihilation operators in Section 3 if we wanted (the wave function of Section 3 was obtained by copying the one from Section 2, so we didn't need the Section 3 version of the creation and annihilation operators to derive it). The reason we included the definition of the creation and annihilation operators in Section 3 is largely the wish for completeness.
Creation and annihilation operators are defined as
\begin{equation} a^{\dagger} = \sqrt{\frac{m \omega}{2}} x - \frac{1}{\sqrt{2m \omega}} \frac{d}{dx} \; , \; a = \sqrt{\frac{m \omega}{2}} x + \frac{1}{\sqrt{2m \omega}} \frac{d}{dx} \end{equation}
and satisfy the commutation relation
\begin{equation} [a, a^{\dagger}]=1 \end{equation}
The first three states are
\begin{equation} \psi_0 (x) = \bigg(\frac{m \omega}{\pi} \bigg)^{1/4} e^{-m \omega x^2/2} \label{1D0} \end{equation}
\begin{equation} \psi_1 (x) = \frac{2^{1/2} (m \omega)^{3/4}}{\pi^{1/4}} x e^{- m \omega x^2/2} \label{1D1} \end{equation}
\begin{equation} \psi_2 (x) = \bigg(\frac{m \omega}{\pi} \bigg)^{1/4} e^{-m \omega x^2/2} \bigg( \sqrt{2} m \omega x^2 - \frac{1}{\sqrt{2}} \bigg) \label{1D2} \end{equation}
One can see by induction that the $n$-th excited state can be expressed as
\begin{equation} \psi_n (x) = \frac{1}{\sqrt{n!}} \bigg( \frac{m \omega}{\pi} \bigg)^{1/4} \bigg( \sqrt{\frac{m \omega}{2}} \hat{x} - \frac{1}{\sqrt{2m \omega}} \frac{d}{dx} \bigg)^n e^{-m \omega \hat{x}^2/2} \label{InductionStart1D} \end{equation}
which can be rewritten as
\begin{equation} \psi_n (x) = \frac{1}{\sqrt{n!}} \bigg( \frac{m \omega}{\pi} \bigg)^{1/4} e^{-m \omega \hat{x}^2/2} \bigg[ e^{m \omega \hat{x}^2/2} \bigg( \sqrt{\frac{m \omega}{2}} \hat{x} - \frac{1}{\sqrt{2m \omega}} \frac{d}{dx} \bigg) e^{-m \omega \hat{x}^2/2} \bigg]^n 1 \end{equation}
and then further rewritten as
\begin{equation} \psi_n (x) = \frac{1}{\sqrt{n!}} \bigg( \frac{m \omega}{\pi} \bigg)^{1/4} e^{-m \omega \hat{x}^2/2} \bigg( \sqrt{2 m \omega} \hat{x} - \frac{1}{\sqrt{2m \omega}} \frac{d}{dx} \bigg)^n 1 \end{equation}
to obtain, after some combinatorics,
\begin{equation} \psi_n (x) = \sqrt{n!} \bigg( \frac{m \omega}{\pi} \bigg)^{1/4} e^{-m \omega x^2/2} \sum_{C=0}^{\lfloor n/2 \rfloor} \frac{(-1)^C (2m \omega)^{\frac{n}{2} - C}}{2^C C! (n-2C)!} x^{n-2C} \label{nthState1D} \end{equation}
Now, the factor $1/\sqrt{n!}$ in Eq \ref{InductionStart1D} was specifically designed in such a way that each state in the ladder is properly normalized. Yet the normalization of Eq \ref{nthState1D} is not at all obvious. It turns out, however, that the normalization follows from the following identity:
\begin{equation} p+q \; {\rm is \; even} \; \Longrightarrow \label{norm1D6} \end{equation}
\begin{equation} \Longrightarrow \; \sum_{c_1=0}^{\lfloor p/2 \rfloor} \sum_{c_2=0}^{\lfloor q/2 \rfloor}\bigg( \frac{(-1)^{c_1+c_2}}{ c_1! c_2! (p-2c_1)! (q-2c_2)!} \frac{(p+q-2c_1-2c_2)! }{ (\frac{p+q}{2}-c_1-c_2)! } \bigg) = \frac{2^p}{p!} \delta^p_q= \frac{2^q}{q!} \delta^p_q \nonumber \end{equation}
which I have proven in a separate paper that I am preparing for publication; including that proof here would be too much of a sidetrack. A short numerical spot-check of the identity is sketched below.
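The following few lines of Python are offered purely as reassurance, not as a proof: they evaluate both sides of Eq \ref{norm1D6} in exact rational arithmetic (so no floating-point tolerance is involved) for all small $p$ and $q$ of equal parity. The helper names are ours.

\begin{verbatim}
from math import factorial as f
from fractions import Fraction

def lhs(p, q):
    # double sum on the left-hand side of the identity
    return sum(Fraction((-1) ** (c1 + c2) * f(p + q - 2*c1 - 2*c2),
                        f(c1) * f(c2) * f(p - 2*c1) * f(q - 2*c2)
                        * f((p + q) // 2 - c1 - c2))
               for c1 in range(p // 2 + 1)
               for c2 in range(q // 2 + 1))

def rhs(p, q):
    # (2^p / p!) delta_{pq}
    return Fraction(2 ** p, f(p)) if p == q else Fraction(0)

for p in range(13):
    for q in range(p % 2, 13, 2):   # keeps p + q even
        assert lhs(p, q) == rhs(p, q), (p, q)
print("identity verified for all p, q <= 12 with p + q even")
\end{verbatim}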
In the two dimensional case, the harmonic oscillator has two degrees of freedom. In Cartesian coordinates these come from the oscillators along the two axes, while in polar coordinates they come from the total energy and the angular momentum. Within these two degrees of freedom we define the operators
\begin{equation} a_{++}= \frac{a_x^{\dagger} +i a_y^{\dagger}}{\sqrt{2}} \; , \; a_{+-}= \frac{a_x^{\dagger} - ia_y^{\dagger}}{\sqrt{2}} \label{a++a+-} \end{equation}
\begin{equation} a_{-+} = \frac{a_x+ia_y}{\sqrt{2}} \; , \; a_{--}= \frac{a_x-ia_y}{\sqrt{2}} \label{a-+a--} \end{equation}
We chose this notation in such a way that the first sign indicates what happens to the energy upon the action of the operator and the second sign indicates what happens to the angular momentum. Thus, $a_{++}$ raises both energy and angular momentum, $a_{--}$ lowers both, $a_{+-}$ raises energy while lowering angular momentum, and $a_{-+}$ lowers energy while raising angular momentum. Hermitian conjugation merely permutes these operators via the following expressions:
\begin{equation} a_{++}^{\dagger} = a_{--} \; , \; a_{+-}^{\dagger} = a_{-+} \; ,\; a_{-+}^{\dagger} = a_{+-} \; , \; a_{--}^{\dagger} = a_{++} \end{equation}
and the operators satisfy the following commutation relations:
\begin{equation} [a_{-+},a_{+-}]=[a_{--},a_{++}] = 1 \label{2DComm1} \end{equation}
\begin{equation} [a_{+-},a_{-+}]=[a_{++},a_{--}]=-1 \label{2DComm2} \end{equation}
\begin{equation} [a_{++},a_{-+}]=[a_{-+},a_{++}]= [a_{+-},a_{--}]= [a_{--},a_{+-}]=0 \label{2DComm3} \end{equation}
\begin{equation} [a_{++},a_{+-}]=[a_{+-},a_{++}]=[a_{-+},a_{--}]=[a_{--},a_{-+}]=0 \label{2DComm4} \end{equation}
\begin{equation} [a_{++},a_{++}]=[a_{+-},a_{+-}]=[a_{-+},a_{-+}]=[a_{--},a_{--}]=0 \label{2DComm5} \end{equation}
In polar coordinates these operators read
\begin{equation} a_{++} = \frac{e^{i \theta}}{2} \bigg( r \sqrt{m \omega} - \frac{1}{\sqrt{m \omega}} \frac{\partial}{\partial r} - \frac{i}{r \sqrt{m \omega}} \frac{\partial}{\partial \theta} \bigg) \label{128b} \end{equation}
\begin{equation} a_{+-} = \frac{e^{-i \theta}}{2} \bigg( r \sqrt{m \omega} - \frac{1}{\sqrt{m \omega}} \frac{\partial}{\partial r} + \frac{i}{r \sqrt{m \omega}} \frac{\partial}{\partial \theta} \bigg) \label{128d} \end{equation}
\begin{equation} a_{-+} = \frac{e^{i \theta}}{2} \bigg( r \sqrt{m \omega} + \frac{1}{\sqrt{m \omega}} \frac{\partial}{\partial r} + \frac{i}{r \sqrt{m \omega}} \frac{\partial}{\partial \theta} \bigg) \label{128a} \end{equation}
\begin{equation} a_{--} = \frac{e^{-i \theta}}{2} \bigg( r \sqrt{m \omega} + \frac{1}{\sqrt{m \omega}} \frac{\partial}{\partial r} - \frac{i}{r \sqrt{m \omega}} \frac{\partial}{\partial \theta} \bigg) \label{128c} \end{equation}
The ground state, $\psi_{00}$, has energy $1/2 + 1/2 = 1$ (one $1/2$ coming from the oscillator in the $x$ direction and the other from the oscillator in the $y$ direction) and angular momentum $0$; its wave function is
\begin{equation} \psi_{00} (r, \theta) = \sqrt{\frac{m \omega}{\pi}} e^{-m \omega r^2/2} \label{2D0} \end{equation}
Unlike the 1D oscillator, the 2D oscillator has two "first excited states", each with energy $1+1=2$.
In Cartesian coordinates they are $a_x^{\dagger} \vert 0 \rangle$ and $a_y^{\dagger} \vert 0 \rangle$, corresponding to the wave functions $\psi_1 (x) \psi_0 (y)$ and $\psi_0 (x) \psi_1 (y)$, while in polar coordinates they are $a_{++} \vert 0 \rangle$ and $a_{+-} \vert 0 \rangle$, corresponding to the wave functions $\psi_{1,1} (r, \theta)$ and $\psi_{1,-1} (r, \theta)$ (where $\psi_{nL}$ denotes the state at the $n$-th energy level (or, equivalently, with energy $n+1$) and angular momentum $L$). Each state of the first pair can be represented as a linear combination of the second pair, and vice versa; the energy of all four states is $2$. In polar coordinates, the state with energy $2$ and angular momentum $-1$ is
\begin{equation} \psi_{1,-1} (r, \theta) = \frac{m \omega}{\sqrt{\pi}} r e^{-m \omega r^2/2} e^{-i \theta} \label{2D1-1} \end{equation}
and the state with energy $2$ and angular momentum $1$ is
\begin{equation} \psi_{1,1} (r, \theta) = \frac{m \omega}{\sqrt{\pi}} r e^{-m \omega r^2/2} e^{i \theta} \label{2D11} \end{equation}
The fact that
\begin{equation} \frac{m \omega}{\sqrt{\pi}} r e^{-i \theta} e^{-m \omega r^2/2} = \frac{m \omega}{\sqrt{\pi}} (r \cos \theta - i r \sin \theta) e^{-m \omega (x^2 +y^2)/2} = \nonumber \end{equation}
\begin{equation} = \frac{m \omega}{\sqrt{\pi}} (x - i y) e^{-m \omega x^2/2} e^{-m \omega y^2/2} = \frac{m \omega}{\sqrt{\pi}} \Big( \Big(x e^{-m \omega x^2/2} \Big) \Big( e^{-m \omega y^2/2} \Big) - i \Big( e^{-m \omega x^2/2} \Big) \Big( y e^{-m \omega y^2/2} \Big) \Big) \end{equation}
and also that
\begin{equation} \frac{m \omega}{\sqrt{\pi}} r e^{i \theta} e^{-m \omega r^2/2}= \frac{m \omega}{\sqrt{\pi}} (r \cos \theta + i r \sin \theta) e^{-m \omega(x^2 + y^2 )/2} = \nonumber \end{equation}
\begin{equation} = \frac{m \omega}{\sqrt{\pi}} (x + i y) e^{-m \omega x^2/2} e^{-m \omega y^2/2} = \frac{m \omega}{\sqrt{\pi}} \Big( \Big(x e^{-m \omega x^2/2} \Big) \Big( e^{-m \omega y^2/2} \Big) + i \Big( e^{-m \omega x^2/2} \Big) \Big( y e^{-m \omega y^2/2} \Big) \Big) \end{equation}
confirms that, indeed, the two excited states in polar coordinates are linear combinations of the two excited states in Cartesian coordinates. Similarly, there are three "second excited states", with energy $1+2=3$. In Cartesian coordinates these are $a_x^{\dagger} a_x^{\dagger} \vert 0 \rangle$, $a_x^{\dagger} a_y^{\dagger} \vert 0 \rangle$ and $a_y^{\dagger} a_y^{\dagger} \vert 0 \rangle$, corresponding to the wave functions $\psi_2 (x) \psi_0 (y)$, $\psi_1 (x) \psi_1 (y)$ and $\psi_0 (x) \psi_2 (y)$ (the reason we skipped $a_y^{\dagger} a_x^{\dagger} \vert 0 \rangle$ is that $[a_x^{\dagger}, a_y^{\dagger}]=0$). In polar coordinates, the three second excited states are $a_{++} a_{++} \vert 0 \rangle$, $a_{++} a_{+-} \vert 0 \rangle$ and $a_{+-} a_{+-} \vert 0 \rangle$, corresponding to the wave functions $\psi_{22} (r, \theta)$, $\psi_{20} (r, \theta)$ and $\psi_{2,-2} (r, \theta)$ (once again, we skipped $a_{+-} a_{++} \vert 0 \rangle$ because $[a_{+-}, a_{++} ] =0$).
The polar coordinate wave functions are given by
\begin{equation} \psi_{2,-2} = \frac{(m \omega)^{3/2}}{\sqrt{2 \pi}} r^2 e^{-2 i \theta} e^{-m \omega r^2/2} \label{2D2-2} \end{equation}
\begin{equation} \psi_{20} = \bigg( \frac{(m \omega)^{3/2}}{\sqrt{\pi}} r^2 - \sqrt{\frac{m \omega}{\pi}} \bigg) e^{-m \omega r^2/2} \label{2D20} \end{equation}
\begin{equation} \psi_{2,2} = \frac{(m \omega)^{3/2}}{\sqrt{2 \pi}} r^2 e^{2 i \theta} e^{-m \omega r^2/2} \label{2D22} \end{equation}
It is easy to see that
\begin{equation} r^2 e^{2 i \theta} = r^2 (\cos 2 \theta + i \sin 2 \theta) = r^2 (\cos^2 \theta - \sin^2 \theta + 2 i \sin \theta \cos \theta) = \nonumber \end{equation}
\begin{equation} = (r \cos \theta)^2 - (r \sin \theta)^2 + 2 i (r \cos \theta) (r \sin \theta) = x^2 - y^2 + 2ixy \end{equation}
and, similarly,
\begin{equation} r^2 e^{-2 i \theta} = x^2-y^2-2ixy \end{equation}
Therefore,
\begin{equation} \psi_{2,-2} = \frac{(m \omega)^{3/2}}{\sqrt{2 \pi}} (x^2-y^2-2ixy) e^{-m \omega (x^2+y^2)/2} = \nonumber \end{equation}
\begin{equation} = \frac{(m \omega)^{3/2}}{\sqrt{2 \pi}} \bigg(\bigg(x^2 e^{-m \omega x^2/2} \bigg) \bigg( e^{-m \omega y^2/2} \bigg) - \bigg( e^{-m \omega x^2/2} \bigg) \bigg(y^2 e^{-m \omega y^2/2}\bigg) - 2i \bigg(x e^{-m \omega x^2/2} \bigg) \bigg(y e^{-m \omega y^2/2} \bigg) \bigg) \end{equation}
and
\begin{equation} \psi_{2,2} = \frac{(m \omega)^{3/2}}{\sqrt{2 \pi}} (x^2-y^2+2ixy) e^{-m \omega (x^2+y^2)/2} = \nonumber \end{equation}
\begin{equation} = \frac{(m \omega)^{3/2}}{\sqrt{2 \pi}} \bigg(\bigg(x^2 e^{-m \omega x^2/2} \bigg) \bigg( e^{-m \omega y^2/2} \bigg) - \bigg( e^{-m \omega x^2/2} \bigg) \bigg(y^2 e^{-m \omega y^2/2}\bigg) + 2i \bigg(x e^{-m \omega x^2/2} \bigg) \bigg(y e^{-m \omega y^2/2} \bigg) \bigg) \end{equation}
Finally,
\begin{equation} \psi_{20} = \bigg( \frac{(m \omega)^{3/2}}{\sqrt{\pi}}(x^2+y^2) - \sqrt{\frac{m \omega}{\pi}} \bigg) e^{-m \omega (x^2+y^2)/2} = \nonumber \end{equation}
\begin{equation} = \frac{(m \omega)^{3/2}}{\sqrt{\pi}} \bigg(x^2 e^{-m \omega x^2/2} \bigg) \bigg( e^{-m \omega y^2/2} \bigg) + \frac{(m \omega)^{3/2}}{\sqrt{\pi}} \bigg(e^{-m \omega x^2/2} \bigg) \bigg(y^2 e^{-m \omega y^2/2} \bigg) - \sqrt{\frac{m \omega}{\pi}} \bigg(e^{-m \omega x^2/2} \bigg) \bigg(e^{-m \omega y^2/2} \bigg) \end{equation}
This, indeed, confirms that any given state in polar coordinates can be represented as a linear combination of products of Cartesian coordinate states. The derivation of the general state in polar coordinates would be too long of a sidetrack as far as this paper is concerned (although another paper with that derivation is in preparation), but let me give a basic outline of the steps, as a brief summary of the otherwise lengthy derivation.
First, one can use Eq \ref{nthState1D} to write down $\psi_n (x) \psi_0 (y)$ as
\begin{equation} \psi_n (x) \psi_0 (y) = \sum_{0 \leq k \leq n, \; n-k \; {\rm even}} \alpha_k x^k e^{-m \omega (x^2+y^2)/2} \label{CartesianStart} \end{equation}
Then one can use
\begin{equation} x^k = (r \cos \theta)^k = r^k \bigg(\frac{e^{i \theta}+ e^{-i \theta}}{2} \bigg)^k = \bigg(\frac{r}{2} \bigg)^k \sum_{L \in \{-k, -k+2, \cdots, k-2, k \}} {k \choose (k+L)/2} e^{iL \theta} \end{equation}
to rewrite it as
\begin{equation} \psi_n (x) \psi_0 (y) = \sum_{L \in \{-n, -n+2, \cdots, n-2, n \}} \bigg( e^{iL \theta} \sum_{k \in \{\vert L \vert, \vert L \vert+2, \cdots, n-2, n \}} \alpha_k \bigg(\frac{r}{2} \bigg)^k {k \choose (k+L)/2} \bigg) \label{DoubleSumToy} \end{equation}
Then, by noticing that
\begin{equation} \hat{H} (\psi_n (x)\psi_0 (y)) = \bigg(n + \frac{1}{2} \bigg) \psi_n (x) \psi_0 (y) + \frac{1}{2} \psi_n (x) \psi_0 (y) = (n+1) \psi_n (x) \psi_0 (y) \end{equation}
\begin{equation} \hat{L} = \hat{x} \hat{p}_y - \hat{y} \hat{p}_x = - i \partial_{\theta} \end{equation}
one can deduce that the state with energy $n+1$ and angular momentum $L$ is the $e^{iL \theta}$ term in Eq \ref{DoubleSumToy}, up to a normalization constant; namely,
\begin{equation} \psi_{nL} (r, \theta) = N_{nL} e^{iL \theta} \sum_{k \in \{\vert L \vert, \vert L \vert+2, \cdots, n-2, n \}} \alpha_k \bigg(\frac{r}{2} \bigg)^k {k \choose (k+L)/2} \end{equation}
where $N_{nL}$ is the normalization coefficient. In order to find $N_{nL}$, we first find $N_{nn}$ (corresponding to $L=n$), since it turns out to be the easiest one to find, and, afterwards, we see how that coefficient changes upon the action of $a_{+-}$. Since $a_{+-}$ raises energy and lowers angular momentum, we expect to see $n$ replaced with $n+1$ and $L$ with $L-1$, thus obtaining
\begin{equation} a_{+-} \bigg( e^{iL \theta} \sum_{k \in \{\vert L \vert, \vert L \vert+2, \cdots, n-2, n \}} \alpha_k \bigg(\frac{r}{2} \bigg)^k {k \choose (k+L)/2} \bigg) = \nonumber \end{equation}
\begin{equation} = M_{nL} e^{i(L-1) \theta} \sum_{k \in \{\vert L-1 \vert, \vert L-1 \vert+2, \cdots, n-1, n+1 \}} \alpha'_k \bigg(\frac{r}{2} \bigg)^k {k \choose (k+L-1)/2} \label{PolarIsPreserved} \end{equation}
However, we have to perform an explicit calculation in order to see what $M_{nL}$ is (said calculation is performed with $a_{+-}$ expressed in polar coordinates). After finding $M_{nL}$, we rewrite it as
\begin{equation} a_{+-} \frac{\vert \psi_{nL} \rangle}{N_{nL}} = M_{nL} \frac{\vert \psi_{n+1,L-1} \rangle}{N_{n+1,L-1}} \end{equation}
and, in combination with
\begin{equation} [a_{-+}, a_{+-}]=1 \end{equation}
as well as the value of $N_{nn}$, we find by induction the value of $N_{n+j, n-j}$ and, therefore, $N_{nL}$. As stated earlier, the explicit derivation is a lot lengthier than what is presented here (in particular, the coefficients $\alpha_k$ need to be made explicit, and so forth). After said derivation is done, the final answer is
\begin{equation} \psi_{nL} (r, \theta) = \sqrt{\frac{ m \omega}{\pi}} \sqrt{2^n \Big(\frac{n-L}{2} \Big)! \Big(\frac{n+L}{2} \Big)!} \; e^{-m \omega r^2/2} \sum_{C=0}^{\min \big( \frac{n-L}{2}, \frac{n+L}{2} \big)} \frac{(-1)^C (2m \omega)^{\frac{n}{2} - C} r^{n-2C} e^{iL \theta}}{2^{n-C} C! \big( \frac{n+L}{2}-C \big)! \big(\frac{n-L}{2} -C \big)!} \label{nLState2D} \end{equation}
If you check the normalization of Eq \ref{nLState2D}, the result might not look right: instead of $1$ you get a rather complicated sum. However, the identity
\begin{equation} p+q \; {\rm is \; even} \; \Longrightarrow \nonumber \end{equation}
\begin{equation} \Longrightarrow \sum_{c_1=0}^{\min \big( \frac{p-L}{2}, \frac{p+L}{2} \big)} \sum_{c_2=0}^{\min \big( \frac{q-L}{2}, \frac{q+L}{2} \big)} \frac{(-1)^{c_1+c_2} (\frac{p+q}{2}-c_1-c_2)! }{c_1! c_2! \big(\frac{p+L}{2}-c_1 \big)! \big(\frac{p-L}{2}-c_1 \big)! \big(\frac{q+L}{2}-c_2 \big)! \big(\frac{q-L}{2}-c_2 \big)!} = \nonumber \end{equation}
\begin{equation} = \frac{\delta^p_q}{ \sqrt{\big(\frac{p-L}{2} \big)! \big(\frac{p+L}{2} \big)! \big(\frac{q-L}{2} \big)! \big(\frac{q+L}{2} \big)!}} \label{DesiredSumnL} \end{equation}
implies that the normalization is as desired. The reader can check numerically that the identity indeed holds (a short sketch of such a check is given at the end of this section). I have also written an analytic proof of it, but that would be too much of a sidetrack for this paper, so I will publish the proof separately.

Clearly, there is another way of doing all this. Instead of starting from Cartesian coordinates in Eq \ref{CartesianStart}, one could have started from the ground state $e^{-m \omega r^2/2}$ and then worked one's way up with $a_{++}$ and $a_{+-}$, using polar coordinates exclusively. As Eq \ref{PolarIsPreserved} indicates, one would arrive at Eq \ref{nLState2D} at the end of the day as well. The only problem with this approach is that Eq \ref{nLState2D} is very hard to guess by merely looking at the first few excited states -- unless one somehow anticipates that equation ahead of time. And the way to "anticipate" it is to start from Cartesian coordinates, as we have illustrated. Going back to the "Cartesian coordinate start", one could have started from $\psi_{n_1} (x) \psi_{n_2} (y)$ instead of $\psi_n (x) \psi_0 (y)$. However, an inspection of the above steps shows that the polar coordinate states derived from $\psi_n (x) \psi_0 (y)$ are just as general as the ones derived from $\psi_{n_1} (x) \psi_{n_2} (y)$. After all, $\psi_n (x) \psi_0 (y)$ "covers" all possible $\vert L \vert \leq n$, and if one then "runs" through all possible $n$-s one indeed "covers" all possible states (since no states with $\vert L \vert > n$ are allowed). So, since both $\psi_n (x) \psi_0 (y)$ and $\psi_{n_1} (x) \psi_{n_2} (y)$ result in equally general states, yet the latter involves far more complicated calculations than the former, the $\psi_n (x) \psi_0 (y)$ approach is preferred. One could, however, still do $\psi_{n_1} (x) \psi_{n_2} (y)$ just to check that no mistakes were made. But if one looks harder, one can see a long list of other things one might want to check, which would lead to equally difficult calculations. At the end of the day one should simply trust that said calculations would go through.
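Here is the promised numerical spot-check of Eq \ref{DesiredSumnL} (ours, and purely illustrative, not a proof). The key observation is that for $p=q$ the square root on the right hand side collapses to $\big(\frac{p-L}{2}\big)! \big(\frac{p+L}{2}\big)!$, so both sides are rational and can be compared exactly in Python:

\begin{verbatim}
from math import factorial as f
from fractions import Fraction

def lhs(p, q, L):
    return sum(Fraction((-1) ** (c1 + c2) * f((p + q) // 2 - c1 - c2),
                        f(c1) * f(c2)
                        * f((p + L) // 2 - c1) * f((p - L) // 2 - c1)
                        * f((q + L) // 2 - c2) * f((q - L) // 2 - c2))
               for c1 in range(min((p - L) // 2, (p + L) // 2) + 1)
               for c2 in range(min((q - L) // 2, (q + L) // 2) + 1))

def rhs(p, q, L):
    # for p = q the square root is ((p-L)/2)! ((p+L)/2)!, hence rational
    return Fraction(1, f((p - L) // 2) * f((p + L) // 2)) if p == q else Fraction(0)

for L in range(-6, 7):
    for p in range(abs(L), 13, 2):      # p >= |L| with p - L even
        for q in range(abs(L), 13, 2):  # likewise, so p + q is automatically even
            assert lhs(p, q, L) == rhs(p, q, L), (p, q, L)
print("Eq (DesiredSumnL) verified for all |L| <= 6 and p, q <= 12")
\end{verbatim}

\subsection*{3. Representing QFT states as functionals}

In the quantum mechanics case, the harmonic oscillator can be viewed either as a wave function $\psi (x)$ or as a linear combination of states defined via ladder operators.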
Now, a generic QFT state is defined in terms of the latter, where the ladder operators are replaced with creation and annihilation operators. Thus, logic tells us that said state can also be described as $\psi (\phi)$. Here, we have replaced $x$ with $\phi$, since in the QM case the Hamiltonian is a function of $x$ while in QFT it is a function of $\phi$. In other words, a QFT state should be described as a functional. Let us now utilize what we have said about our oscillators in order to find out what this functional is. First of all, we imagine that we have a torus,
\begin{equation} x^1 +L_1 = x^1 \; , \; x^2 + L_2 = x^2 \; , \; x^3 + L_3 = x^3 \end{equation}
and then we define the momentum $\vec{p}_{abc}$ as
\begin{equation} \vec{p}_{abc} = \bigg( \frac{2 \pi a}{L_1}, \frac{2 \pi b}{L_2}, \frac{2 \pi c}{L_3} \bigg) \end{equation}
Furthermore, in order to simplify notation, we will fix some sequence
\begin{equation} \{\cdots, (a_{-2}, b_{-2}, c_{-2}), (a_{-1}, b_{-1}, c_{-1}), (a_0,b_0,c_0), (a_1, b_1, c_1), (a_2, b_2, c_2), \cdots \} \end{equation}
such that the following conditions hold:
\begin{equation} (a_0, b_0, c_0) = (0,0,0) \end{equation}
\begin{equation} (a_{-k}, b_{-k}, c_{-k}) = (-a_k, -b_k, -c_k) \end{equation}
\begin{equation} \forall (d,e,f) \neq (0,0,0) \; [\exists k ((a_k,b_k,c_k) = (d,e,f))] \end{equation}
\begin{equation} \forall k \neq l ((a_k,b_k,c_k) \neq (a_l,b_l,c_l)) \end{equation}
Once we have done this, we define $\vec{p}_k$ as
\begin{equation} \vec{p}_k = \vec{p}_{a_k,b_k,c_k} \end{equation}
Thus, in particular,
\begin{equation} \vec{p}_0 = \vec{p}_{000} = \vec{0} \end{equation}
Any given $\phi (\vec{x})$ can be represented as
\begin{equation} \phi (\vec{x}) = \sqrt{\frac{2}{L_1L_2L_3}} \bigg( \frac{R_0 (\phi)}{\sqrt{2}} + \sum_{k \geq 1} R_k (\phi) \cos (\vec{p}_k \cdot \vec{x} - \Theta_k (\phi)) \bigg) \end{equation}
where $R_0 (\phi)$, $R_k (\phi)$ and $\Theta_k (\phi)$ are given by
\begin{equation} R_0 (\phi) = R_{000} (\phi) = R_{\vec{0}} (\phi) = R_{\vec{p}_0} (\phi) = \frac{1}{\sqrt{L_1L_2L_3}} \bigg\vert \int d^3 x \; \phi (\vec{x}) \bigg\vert \label{R000} \end{equation}
\begin{equation} k \neq 0 \Longrightarrow R_k (\phi) = R_{a_kb_kc_k} (\phi)= R_{\vec{p}_k} (\phi) = \sqrt{\frac{2}{L_1L_2L_3}} \bigg\vert \int d^3 x \; \phi (\vec{x}) e^{i \vec{p}_k \cdot \vec{x}} \bigg\vert \label{RABC} \end{equation}
\begin{equation} k \neq 0 \Longrightarrow \Theta_k (\phi) = \Theta_{a_kb_kc_k} (\phi)= \Theta_{\vec{p}_k} (\phi) = \Im \ln \int d^3 x \; \phi (\vec{x}) e^{i \vec{p}_k \cdot \vec{x}} \label{ThetaABC} \end{equation}
This implies that
\begin{equation} R_k (\phi) = R_{-k} (\phi) \; , \; \Theta_k (\phi) = - \Theta_{-k} (\phi) \end{equation}
We have used the letters $R$ and $\Theta$ for a reason. The above can be interpreted as a single 1D oscillator, corresponding to zero momentum, and infinitely many 2D oscillators, corresponding to all of the allowed non-zero momenta. The 2D oscillator number $k$ simultaneously describes all particles with momentum $\vec{p}_k$ as well as all particles with momentum $- \vec{p}_k$; therefore, 2D oscillator number $k$ and 2D oscillator number $-k$ are the very same thing. The 1D oscillator, on the other hand, is assigned number $0$ (although it doesn't have to be, since we only have one 1D oscillator anyway), and it describes the particles with zero momentum.
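To make Eqs \ref{R000}--\ref{ThetaABC} concrete, here is a minimal numerical sketch, entirely ours and purely illustrative: the box sides, the grid resolution, and all helper names are arbitrary choices, and the integrals are approximated by Riemann sums on a uniform periodic grid.

\begin{verbatim}
import numpy as np

L1, L2, L3 = 1.0, 1.0, 1.0     # box sides (illustrative values)
N = 16                         # grid points per axis
V = L1 * L2 * L3
X = np.meshgrid(np.arange(N) * L1 / N,
                np.arange(N) * L2 / N,
                np.arange(N) * L3 / N, indexing="ij")
dV = V / N**3                  # volume element of the Riemann sum

def mode_integral(phi, p):
    # approximates int d^3x phi(x) exp(i p.x)
    return np.sum(phi * np.exp(1j * (p[0]*X[0] + p[1]*X[1] + p[2]*X[2]))) * dV

def R_Theta(phi, p):
    # amplitude and phase of mode p, per Eqs (R000), (RABC), (ThetaABC)
    I = mode_integral(phi, p)
    if p == (0.0, 0.0, 0.0):
        return abs(I) / np.sqrt(V), 0.0            # 1D oscillator: no phase
    return np.sqrt(2.0 / V) * abs(I), np.angle(I)  # Im ln I is just arg I

# check on a field with a single cosine mode of unit amplitude and phase 0.3
p1 = (2 * np.pi / L1, 0.0, 0.0)
phi = np.cos(p1[0] * X[0] - 0.3)
print(R_Theta(phi, p1))        # ~ (sqrt(V/2), 0.3)
\end{verbatim}

For this test field the decomposition above predicts $R_{\vec{p}_1} = \sqrt{L_1L_2L_3/2}$ and $\Theta_{\vec{p}_1} = 0.3$, which is what the sketch returns up to discretization and rounding error.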
By noticing the difference by a factor of $\sqrt{2}$ between Eq \ref{R000} and Eq \ref{RABC}, among other similar differences, one can see that the "mass" of the 1D oscillator is different from the "masses" of the 2D ones:
\begin{equation} \mu_0 = \frac{1}{2} \; , \; \mu_k = 1 \; , \; k \neq 0 \end{equation}
These are not to be confused with the mass of the particle, which is \emph{not} equal to either of those (indeed, the particle mass carries a dimension, while the above so-called masses are dimensionless). In particular, the "mass" of the particle becomes the "frequency" of the oscillator, while the "mass" of the oscillator remains either $1$ or $1/2$, as described. In order not to confuse the two, we will denote the mass of the particle by $m$ and the mass of the oscillator by $\mu$. As we mentioned earlier, the fact that in quantum mechanics the oscillator states can be represented as functions implies that in quantum field theory they can be represented as functionals, with $x$ replaced by $\phi$. In the case of any given 2D oscillator, we replace the polar coordinates $(r, \theta)$ used in the previous section with $(R_k (\phi), \Theta_k (\phi))$. For the 1D oscillator, on the other hand, we replace the $x$ used in the previous section with $R_0 (\phi)$. But, \emph{in contrast to} the previous section, we take an infinite product over infinitely many oscillators. In particular, the functional of the vacuum state is the product of the wave function of the 1D vacuum, corresponding to the statement "there are no particles with momentum $\vec{0}$", with infinitely many wave functions of 2D vacua, corresponding to the statement "there are no particles with momentum $\vec{p}_k$" for any given $k$. Thus, the functional of the vacuum state is given by
\begin{equation} \psi_{\vert \Omega \rangle} (\phi) = \bigg( \frac{m}{2 \pi} \bigg)^{1/4} e^{-m R_0^2 (\phi) /4} \prod_{k \geq 1} \Bigg( \frac{(m^2+ \vert \vec{p}_k \vert^2)^{1/4}}{\pi^{1/2}} e^{- \sqrt{m^2+ \vert \vec{p}_k \vert^2} R_k^2 (\phi)/2} \Bigg) \end{equation}
The reason we have taken a product over $k \geq 1$ instead of $k \neq 0$ is the remark made earlier that oscillator number $k$ is the same as oscillator number $-k$, so we do not want to count the same oscillator twice. In other words, we could have taken a product over $k \geq 1$ or over $k \leq -1$, but not over both; the answer in either case would be identical. Anyway, after substituting the equations for $R_0 (\phi)$ and $R_k (\phi)$ this becomes
\begin{equation} \psi_{\vert \Omega \rangle} (\phi) = \bigg( \frac{m}{2 \pi} \bigg)^{1/4} \exp \bigg( -\frac{m}{4L_1L_2L_3} \bigg\vert \int d^3 x_0 \; \phi (\vec{x}_0) \bigg\vert^2 \bigg) \times \nonumber \end{equation}
\begin{equation} \times \prod_{k \geq 1} \Bigg( \frac{(m^2+ \vert \vec{p}_k \vert^2)^{1/4}}{\pi^{1/2}} \exp \bigg( - \frac{\sqrt{m^2+ \vert \vec{p}_k \vert^2}}{L_1L_2L_3} \bigg\vert \int d^3 x_k \; \phi (\vec{x}_k) e^{i \vec{p}_k \cdot \vec{x}_k} \bigg\vert ^2 \bigg) \Bigg) \end{equation}
Now, when we look at excited states, we have to distinguish $\vec{p} = \vec{0}$ from $\vec{p} \neq \vec{0}$, as well as $\vec{p}_i = - \vec{p}_j$ from $\vec{p}_i \neq - \vec{p}_j$ (or, equivalently, $i=-j$ from $i \neq -j$). The reason is that $\vec{p}= \vec{0}$ forms the 1D oscillator, while $\{\vec{p}_k, - \vec{p}_k \} = \{ \vec{p}_k, \vec{p}_{-k} \}$ forms 2D oscillator number $k$ for any given $k \neq 0$.
More precisely, in all cases we have a product of a single 1D oscillator with infinitely many 2D ones; the question is which ones are kept in the ground state and which ones are raised to excited states. A particle with zero momentum raises the 1D oscillator to its first excited state while leaving all of the 2D oscillators in the ground state, while a particle with non-zero momentum raises one of the 2D oscillators to its first excited state while keeping the 1D oscillator, as well as all the other 2D oscillators, in the ground state. Let us now show exactly how this works. Suppose we have one particle with zero momentum. Since all of the 2D oscillators are left in the ground state, the product of their functionals can be absorbed into $\psi_{\vert \Omega \rangle} (\phi)$. The 1D oscillator, on the other hand, is raised to the first excited state. \emph{But} the comparison of Eq \ref{1D1} to Eq \ref{1D0} tells us that Eq \ref{1D1} has the same Gaussian as Eq \ref{1D0}, times an extra factor. Thus, the Gaussian part of Eq \ref{1D1} can, similarly, be absorbed into $\psi_{\vert \Omega \rangle} (\phi)$, and the extra factor is the only thing we are left with. Thus, we write down the functional as
\begin{equation} \psi_{\vert p=0 \rangle} (\phi) = \sqrt{m} R_0 (\phi) \psi_{\vert \Omega \rangle} (\phi) \label{Functional10} \end{equation}
where we have obtained the coefficient $\sqrt{m}$ from
\begin{equation} \sqrt{2 \mu_0 \omega_0} = \sqrt{2 \cdot \frac{1}{2} \cdot \omega_0} = \sqrt{\omega_0} = \sqrt{m} \label{Coeff1}\end{equation}
Now, if we consider non-zero momentum, then the 1D oscillator is left in the ground state, and thus it is \emph{fully} absorbed into $\psi_{\vert \Omega \rangle} (\phi)$; but \emph{one of} the 2D oscillators is now in its first excited state and is no longer fully absorbed the way it was previously. The comparison of Eq \ref{2D1-1} to Eq \ref{2D0} tells us that the Gaussian part of said 2D oscillator can still be absorbed into $\psi_{\vert \Omega \rangle} (\phi)$, but there is an extra coefficient that cannot be. So, as before, we take the extra coefficient \emph{without} the Gaussian; but, this time, said extra coefficient comes from a 2D oscillator rather than the 1D one. Another thing that is important to stress is that, even though we have an infinite product of 2D oscillators, we do \emph{not} have a product of "extra coefficients". The reason is that $\infty-1$ of those 2D oscillators are still in the ground state, and it is only \emph{one} 2D oscillator that has been raised to the first excited state. Thus $\psi_{\vert \Omega \rangle} (\phi)$ \emph{fully} absorbs $\infty-1$ of the 2D oscillators and "partially" absorbs the remaining one, so we have to include only one extra coefficient.
Thus, we have
\begin{equation} k \neq 0 \Longleftrightarrow \vec{p}_k \neq \vec{0} \Longrightarrow \psi_{\vert -p_k \rangle } (\phi) = (m^2 + \vert \vec{p}_k \vert^2)^{1/4} R_k (\phi) e^{-i \Theta_k (\phi)} \psi_{\vert \Omega \rangle} (\phi) \label{Functional1-1} \end{equation}
where we have obtained the coefficient $ (m^2 + \vert \vec{p}_k \vert^2)^{1/4}$ via
\begin{equation} k \neq 0 \Longleftrightarrow \vec{p}_k \neq \vec{0} \Longrightarrow \sqrt{\mu_k \omega_k} = \sqrt{1 \cdot \omega_k} = \sqrt{\omega_k} = (m^2 + \vert \vec{p}_k \vert^2)^{1/4} \label{Coeff2} \end{equation}
In other words, the coefficient happens to be the same as previously, but for different reasons: on the one hand, instead of $\sqrt{2 \mu \omega}$ we now have $\sqrt{\mu \omega}$ and, on the other hand, instead of $\mu = 1/2$ we now have $\mu = 1$. In retrospect, this is not an accident, since, in Cartesian coordinates, the 2D oscillator is simply a product of two 1D ones. Finally, an identical argument in which we compare Eq \ref{2D11} to Eq \ref{2D0} instead of comparing Eq \ref{2D1-1} to Eq \ref{2D0} tells us that
\begin{equation} k \neq 0 \Longleftrightarrow \vec{p}_k \neq \vec{0} \Longrightarrow \psi_{\vert p_k \rangle } (\phi) = (m^2 + \vert \vec{p}_k \vert^2)^{1/4} R_k (\phi) e^{i \Theta_k (\phi)} \psi_{\vert \Omega \rangle} (\phi) \label{Functional11} \end{equation}
The fact that
\begin{equation} k \neq 0 \Longleftrightarrow \vec{p}_k \neq \vec{0} \Longrightarrow \Theta_{-k} (\phi) = - \Theta_k (\phi) \label{MinusTheta} \end{equation}
allows us to combine Eq \ref{Functional1-1} and Eq \ref{Functional11} into a single equation. Furthermore, comparing this equation to Eq \ref{Functional10} allows us to combine all three of them into a single equation, which is the same as Eq \ref{Functional11} with the $k \neq 0$ condition dropped:
\begin{equation} \forall k \; \Big( \psi_{\vert p_k \rangle } (\phi) = (m^2 + \vert \vec{p}_k \vert^2)^{1/4} R_k (\phi) e^{i \Theta_k (\phi)} \psi_{\vert \Omega \rangle} (\phi) \Big) \label{forall} \end{equation}
However, due to the fact that the equations for $R_0$ and $R_k$ differ by a factor of $\sqrt{2}$, if we are going to explicitly plug in the expressions for the latter, we will likewise have a $\sqrt{2}$ difference in the overall coefficient (apart from the fact that in the zero-momentum case we skip $e^{i \Theta (\phi)}$, seeing that it is equal to $1$).
Thus, in the case of zero momentum we have
\begin{equation} \psi_{\vert p =0 \rangle} (\phi) = \bigg( \sqrt{\frac{m}{L_1L_2L_3}} \bigg\vert \int d^3 x' \; \phi (\vec{x}') \bigg\vert \bigg) \bigg( \frac{m}{2 \pi} \bigg)^{1/4} \exp \bigg( -\frac{m}{4 L_1 L_2 L_3} \bigg\vert \int d^3 x_0 \; \phi (\vec{x}_0) \bigg\vert ^2 \bigg) \times \nonumber \end{equation}
\begin{equation} \times \prod_{j \geq 1} \Bigg( \frac{(m^2+ \vert \vec{p}_j \vert^2)^{1/4}}{\pi^{1/2}} \exp \bigg( - \frac{\sqrt{m^2+ \vert \vec{p}_j \vert^2}}{L_1L_2L_3} \bigg\vert \int d^3 x_j \; \phi (\vec{x}_j) e^{i \vec{p}_j \cdot \vec{x}_j} \bigg\vert ^2 \bigg) \Bigg) \end{equation}
while in the case of nonzero momentum we obtain
\begin{equation} k \neq 0 \Longleftrightarrow \vec{p}_k \neq \vec{0} \Longrightarrow \nonumber \end{equation}
\begin{equation} \Longrightarrow \psi_{\vert p_k \rangle} (\phi) = \sqrt{\frac{2 }{L_1L_2L_3}} (m^2+ \vert \vec{p}_k \vert^2 )^{1/4} \bigg\vert \int d^3 x' \; \phi (\vec{x}') e^{i \vec{p}_k \cdot \vec{x}'} \bigg\vert \exp \bigg(\ii \Im \ln \int d^3 x'' \; \phi (\vec{x}'') e^{i \vec{p}_k \cdot \vec{x}''} \bigg) \times \nonumber \end{equation}
\begin{equation} \times \bigg( \frac{m}{2 \pi} \bigg)^{1/4} \exp \bigg( -\frac{m}{4 L_1 L_2 L_3} \bigg\vert \int d^3 x_0 \; \phi (\vec{x}_0) \bigg\vert ^2 \bigg) \times \nonumber \end{equation}
\begin{equation} \times \prod_{j \geq 1} \Bigg( \frac{(m^2+ \vert \vec{p}_j \vert^2)^{1/4}}{\pi^{1/2}} \exp \bigg( - \frac{\sqrt{m^2+ \vert \vec{p}_j \vert^2}}{L_1L_2L_3} \bigg\vert \int d^3 x_j \; \phi (\vec{x}_j) e^{i \vec{p}_j \cdot \vec{x}_j} \bigg\vert ^2 \bigg) \Bigg) \end{equation}
Let us now move to the two particle case. If we have two particles of zero momentum, we have to raise the 1D oscillator to its second excited state while keeping all of the 2D oscillators in the ground state. Thus, all of the 2D oscillators are absorbed into $\psi_{\vert \Omega \rangle} (\phi)$, while the 1D oscillator, via a comparison of Eq \ref{1D2} to Eq \ref{1D0}, gives us
\begin{equation} \psi_{\vert 00 \rangle} (\phi) = \frac{m R^2_0 (\phi)-1}{\sqrt{2}} \psi_{\vert \Omega \rangle} (\phi) \end{equation}
which, upon substitution of $R_0 (\phi)$ as well as $\psi_{\vert \Omega \rangle} (\phi)$, becomes
\begin{equation} \psi_{\vert 00 \rangle} (\phi) = \frac{1}{\sqrt{2}} \bigg( \frac{m}{L_1L_2L_3} \bigg\vert \int d^3 x' \; \phi (\vec{x}') \bigg\vert^2 -1 \bigg) \bigg( \frac{m}{2 \pi} \bigg)^{1/4} \exp \bigg( -\frac{m}{4L_1L_2L_3} \bigg\vert \int d^3 x_0 \; \phi (\vec{x}_0) \bigg\vert^2 \bigg) \times \nonumber \end{equation}
\begin{equation} \times \prod_{j \geq 1} \Bigg( \frac{(m^2+ \vert \vec{p}_j \vert^2)^{1/4}}{\pi^{1/2}} \exp \bigg( - \frac{\sqrt{m^2+ \vert \vec{p}_j \vert^2}}{L_1L_2L_3} \bigg\vert \int d^3 x_j \; \phi (\vec{x}_j) e^{i \vec{p}_j \cdot \vec{x}_j} \bigg\vert ^2 \bigg) \Bigg) \end{equation}
In the case of $\vec{p}_k \neq \vec{0}$ and $\vec{p}_l \neq \vec{0}$ (or, equivalently, $k \neq 0$ and $l \neq 0$), we have to use 2D oscillators. If $\vec{p}_k = \pm \vec{p}_l$ (or, equivalently, $k = \pm l$), then we have the second excited state of 2D oscillator number $k$ (which coincides with 2D oscillator number $-k$) and the ground state of all the other ones; thus, we use Eq \ref{2D20}, \ref{2D22} and \ref{2D2-2}.
On the other hand, if $\vec{p}_k \neq \pm \vec{p}_l$ (or, equivalently, $k \neq \pm l$), then we have two of the 2D oscillators raised to the first excited state (namely, 2D oscillators number $k$ and $l$, which coincide with oscillators number $-k$ and $-l$, respectively) and everything else kept in the ground state; thus, we use Eq \ref{2D11} and \ref{2D1-1}. And, finally, if we have $\vec{p}_k = \vec{0}$ and $\vec{p}_l \neq \vec{0}$ (or, equivalently, $k=0$ and $l \neq 0$), then the 1D oscillator (which is always number $0$ by default, since there is only one 1D oscillator altogether), as well as 2D oscillator number $l$ (which coincides with 2D oscillator number $-l$), will be in the first excited state, and all the other 2D oscillators in the ground state; thus, we combine Eq \ref{1D1} with either \ref{2D11} or \ref{2D1-1}. Going back to $\vec{p}_k = \pm \vec{p}_l$ (or, equivalently, $k = \pm l$), we have to distinguish the case $\vec{p}_k = \vec{p}_l$ (or, equivalently, $k=l$) from $\vec{p}_k = - \vec{p}_l$ (or, equivalently, $k=-l$). In the case of $\vec{p}_k = \vec{p}_l$ (or, equivalently, $k=l$), the total linear momentum is $2 \vec{p}_k$, corresponding to angular momentum $\pm 2$ (where $\pm$ becomes $+$ if $k>0$ and $-$ if $k<0$); thus we have to use either Eq \ref{2D22} or \ref{2D2-2}. On the other hand, in the case of $\vec{p}_k = - \vec{p}_l$ (or, equivalently, $k=-l$), the total linear momentum is zero, corresponding to zero angular momentum; thus, we have to use Eq \ref{2D20}. Keeping in mind everything we have said so far, we obtain the following functionals:
\begin{equation} k \neq 0 \Longleftrightarrow \vec{p}_k \neq \vec{0} \Longrightarrow \psi_{\vert p_k, -p_k \rangle} (\phi) = \Big( \sqrt{\vert \vec{p}_k \vert^2 +m^2} R^2_k (\phi) - 1 \Big) \psi_{\vert \Omega \rangle} (\phi) \label{1} \end{equation}
\begin{equation} k \neq 0 \Longleftrightarrow \vec{p}_k \neq \vec{0} \Longrightarrow \psi_{\vert p_k, p_k \rangle} (\phi) = \sqrt{\frac{\vert \vec{p}_k \vert^2 +m^2}{2}} R_k^2 (\phi) e^{2i \Theta_k (\phi)} \psi_{\vert \Omega \rangle} (\phi) \label{2} \end{equation}
\begin{equation} k \neq \pm l \Longleftrightarrow \vec{p}_k \neq \pm \vec{p}_l \Longrightarrow \nonumber \end{equation}
\begin{equation} \Longrightarrow \psi_{\vert p_k p_l \rangle} (\phi) = (m^2 + \vert \vec{p}_k \vert^2)^{1/4} (m^2 + \vert \vec{p}_l \vert^2)^{1/4} R_k (\phi) R_l (\phi) e^{i \Theta_k (\phi)} e^{i \Theta_l (\phi)} \psi_{\vert \Omega \rangle} (\phi) \label{3abc} \end{equation}
The way we avoided a much longer list is by using the kind of argument that allowed us to combine Eq \ref{Functional10}, \ref{Functional1-1} and \ref{Functional11} into the single equation \ref{forall}. In particular, we utilized Eq \ref{MinusTheta} as well as the similarity between Eq \ref{Coeff1} and \ref{Coeff2}. Clearly, we still have to distinguish some cases, but at least we can shorten the list of cases to be compared. Now, plugging $R (\phi)$ and $\psi_{\vert \Omega \rangle} (\phi)$ into Eq \ref{1} and \ref{2} is straightforward since, in both cases, we have to use the expression for $R$ given for non-zero momentum.
Thus, Eq \ref{1} becomes
\begin{equation} k \neq 0 \Longleftrightarrow \vec{p}_k \neq \vec{0} \Longrightarrow \nonumber \end{equation}
\begin{equation} \Longrightarrow \psi_{\vert p_k, -p_k \rangle} (\phi) = \bigg( \frac{2 \sqrt{\vert \vec{p}_k \vert^2+m^2}}{L_1L_2L_3} \bigg\vert \int d^3 x' \; \phi (\vec{x}') e^{i \vec{p}_k \cdot \vec{x}'} \bigg\vert^2 -1 \bigg) \bigg( \frac{m}{2 \pi} \bigg)^{1/4} \exp \bigg( -\frac{m}{4L_1L_2L_3} \bigg\vert \int d^3 x_0 \; \phi (\vec{x}_0) \bigg\vert^2 \bigg) \times \nonumber \end{equation}
\begin{equation} \times \prod_{j\geq 1} \Bigg( \frac{(m^2+ \vert \vec{p}_j \vert^2)^{1/4}}{\pi^{1/2}} \exp \bigg( - \frac{\sqrt{m^2+ \vert \vec{p}_j \vert^2}}{L_1L_2L_3} \bigg\vert \int d^3 x_j \; \phi (\vec{x}_j) e^{i \vec{p}_j \cdot \vec{x}_j} \bigg\vert ^2 \bigg) \Bigg) \end{equation}
while Eq \ref{2} becomes
\begin{equation} k \neq 0 \Longleftrightarrow \vec{p}_k \neq \vec{0} \Longrightarrow \nonumber \end{equation}
\begin{equation} \Longrightarrow \psi_{\vert p_k, p_k \rangle} (\phi) = \frac{ \sqrt{2 (\vert \vec{p}_k \vert^2+m^2)}}{L_1L_2L_3} \bigg\vert \int d^3 x' \; \phi (\vec{x}') e^{i \vec{p}_k \cdot \vec{x}'}\bigg\vert^2 \exp \bigg(2 \ii \Im \ln \int d^3 x'' \; \phi (\vec{x}'') e^{i \vec{p}_k \cdot \vec{x}''} \bigg) \times \nonumber \end{equation}
\begin{equation} \times \bigg( \frac{m}{2 \pi} \bigg)^{1/4} \exp \bigg( -\frac{m}{4L_1L_2L_3} \bigg\vert \int d^3 x_0 \; \phi (\vec{x}_0) \bigg\vert^2 \bigg) \times \nonumber \end{equation}
\begin{equation} \times \prod_{j \geq 1} \Bigg( \frac{(m^2+ \vert \vec{p}_j \vert^2)^{1/4}}{\pi^{1/2}} \exp \bigg( - \frac{\sqrt{m^2+ \vert \vec{p}_j \vert^2}}{L_1L_2L_3} \bigg\vert \int d^3 x_j \; \phi (\vec{x}_j) e^{i \vec{p}_j \cdot \vec{x}_j} \bigg\vert ^2 \bigg) \Bigg) \end{equation}
On the other hand, Eq \ref{3abc} requires some extra care, since it is used both for the case where both momenta are non-zero and for the case where one of them is zero and the other is non-zero (the case where both are zero is ruled out, since we have stated that the two momenta are not equal to each other).
The situation where neither of the two momenta is zero is described as
\begin{equation} \vec{0} \neq \vec{p}_k \neq \pm \vec{p}_l \neq \vec{0} \Longleftrightarrow 0 \neq k \neq \pm l \neq 0 \Longrightarrow \nonumber \end{equation}
\begin{equation} \Longrightarrow \psi_{\vert p_k p_l \rangle} (\phi) = \bigg( \sqrt{\frac{2 }{L_1L_2L_3}} (m^2+ \vert \vec{p}_k \vert^2 )^{1/4} \bigg\vert \int d^3 x' \; \phi (\vec{x}') e^{i\vec{p}_k \cdot \vec{x}'} \bigg\vert \bigg) \bigg( \sqrt{\frac{2 }{L_1L_2L_3}} (m^2+ \vert \vec{p}_l \vert^2 )^{1/4} \bigg\vert \int d^3 x'' \; \phi (\vec{x}'') e^{i\vec{p}_l \cdot \vec{x}''} \bigg\vert \bigg) \times \nonumber \end{equation}
\begin{equation} \times \bigg[ \exp \bigg( \ii \Im \ln \int d^3 x''' \; \phi (\vec{x}''') e^{i \vec{p}_k \cdot \vec{x}'''} \bigg) \bigg]\bigg[ \exp \bigg( \ii \Im \ln \int d^3 x'''' \; \phi (\vec{x}'''') e^{i \vec{p}_l \cdot \vec{x}''''} \bigg) \bigg] \times \nonumber \end{equation}
\begin{equation} \times \bigg( \frac{m}{2 \pi} \bigg)^{1/4} \exp \bigg( -\frac{m}{4L_1L_2L_3} \bigg\vert \int d^3 x_0 \; \phi (\vec{x}_0) \bigg\vert^2 \bigg) \prod_{j \geq 1} \Bigg( \frac{(m^2+ \vert \vec{p}_j \vert^2)^{1/4}}{\pi^{1/2}} \exp \bigg( - \frac{\sqrt{m^2+ \vert \vec{p}_j \vert^2}}{L_1L_2L_3} \bigg\vert \int d^3 x_j \; \phi (\vec{x}_j) e^{i \vec{p}_j \cdot \vec{x}_j} \bigg\vert ^2 \bigg) \Bigg) \end{equation}
On the other hand, the situation where one of the momenta is zero is described as
\begin{equation} l \neq 0 \Longleftrightarrow \vec{p}_l \neq \vec{0} \Longrightarrow \nonumber \end{equation}
\begin{equation} \Longrightarrow \psi_{\vert 0 p_l \rangle} (\phi) = \bigg( \sqrt{\frac{m}{L_1L_2L_3}} \bigg\vert \int d^3 x' \; \phi (\vec{x}') \bigg\vert \bigg) \bigg( \sqrt{\frac{2 }{L_1L_2L_3}} (m^2+ \vert \vec{p}_l \vert^2 )^{1/4} \bigg\vert \int d^3 x'' \; \phi (\vec{x}'') e^{i\vec{p}_l \cdot \vec{x}''} \bigg\vert \bigg) \exp \bigg( \ii \Im \ln \int d^3 x''' \; \phi (\vec{x}''') e^{i \vec{p}_l \cdot \vec{x}'''} \bigg) \times \nonumber \end{equation}
\begin{equation} \times \bigg( \frac{m}{2 \pi} \bigg)^{1/4} \exp \bigg( -\frac{m}{4L_1L_2L_3} \bigg\vert \int d^3 x_0 \; \phi (\vec{x}_0) \bigg\vert^2 \bigg) \prod_{j \geq 1} \Bigg( \frac{(m^2+ \vert \vec{p}_j \vert^2)^{1/4}}{\pi^{1/2}} \exp \bigg( - \frac{\sqrt{m^2+ \vert \vec{p}_j \vert^2}}{L_1L_2L_3} \bigg\vert \int d^3 x_j \; \phi (\vec{x}_j) e^{i \vec{p}_j \cdot \vec{x}_j} \bigg\vert ^2 \bigg) \Bigg) \end{equation}
This procedure can be extended to general particle numbers by utilizing Eq \ref{nthState1D} and \ref{nLState2D}. In light of the fact that the particles are indistinguishable, combined with the fact that we have the above list of allowed momenta, in order to specify a state we simply have to list the particle number corresponding to each allowed momentum. We will denote the number of particles with momentum $\vec{p}_k$ by $\sharp (\vec{p}_k)$; a trivial bookkeeping sketch follows below.
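In computational terms (ours, and nothing the construction depends on), such a specification is just a sparse map from the mode index $k$ to the occupation number $n_k = \sharp (\vec{p}_k)$, with absent keys meaning zero particles:

\begin{verbatim}
# occupation numbers: mode index k -> n_k = #(p_k); missing keys mean 0
state = {0: 2, 1: 1, -1: 3}     # e.g. #(0)=2, #(p_1)=1, #(-p_1)=3

def occ(state, k):
    # number of particles with momentum p_k
    return state.get(k, 0)

total_particles = sum(state.values())
# excited mode pairs {p_k, -p_k}, i.e. the 2D oscillators not in the ground state
excited_pairs = {abs(k) for k in state if k != 0 and state[k] > 0}
\end{verbatim}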
Since zero momentum corresponds to the 1D oscillator and non-zero momenta correspond to 2D ones, we use Eq \ref{nthState1D} to account for an arbitrary number of particles with zero momentum and Eq \ref{nLState2D} to account for arbitrary numbers of particles with non-zero momenta. Since there is only one zero-momentum mode and infinitely many non-zero ones, we take the product of one copy of Eq \ref{nthState1D} with arbitrarily many copies of Eq \ref{nLState2D}, each copy being "adjusted" for a different momentum. Thus, we obtain
\begin{equation} \psi_{\vert \sharp (\vec{0}) = n_0, \sharp (\vec{p}_1) = n_1, \sharp (-\vec{p}_1) = n_{-1}, \sharp (\vec{p}_2) = n_2, \sharp (- \vec{p}_2) = n_{-2}, \cdots \rangle} (\phi) = \nonumber \end{equation}
\begin{equation} = \bigg( \sqrt{n_0!} \bigg( \frac{m}{2 \pi} \bigg)^{1/4} e^{-m R_0^2 (\phi) /4} \sum_{C_0=0}^{\lfloor n_0/2 \rfloor} \frac{(-1)^{C_0} m^{\frac{n_0}{2} - C_0}}{2^{C_0} C_0! (n_0-2C_0)!} R_0^{n_0-2C_0} (\phi) \bigg) \times \label{GenFunctional1} \end{equation}
\begin{equation} \times \prod_{k \geq 1} \Bigg( \frac{(m^2+ \vert \vec{p}_k \vert^2)^{1/4}}{\pi^{1/2}} \sqrt{2^{n_k+n_{-k}} n_k! n_{-k}!} \; e^{- \sqrt{m^2 + \vert \vec{p}_k \vert^2} R_k^2 (\phi)/2} \times \nonumber \end{equation}
\begin{equation} \times \sum_{C_k=0}^{\min (n_k, n_{-k})} \frac{(-1)^{C_k} 2^{\frac{n_k+n_{-k}}{2}-C_k} (m^2 + \vert \vec{p}_k \vert^2)^{\frac{n_k+n_{-k}}{4} - \frac{C_k}{2}} R_k^{n_k+n_{-k}-2C_k} (\phi) e^{i (n_k-n_{-k}) \Theta_k (\phi)}}{2^{n_k+n_{-k}-C_k} C_k! (n_k-C_k)! (n_{-k} -C_k )!} \Bigg) \nonumber \end{equation}
Now, if we plug in zero particle numbers, we obtain the functional of the vacuum state,
\begin{equation} \psi_{\vert \Omega \rangle} (\phi) = \bigg( \frac{m}{2 \pi} \bigg)^{1/4} e^{-m R_0^2 (\phi) /4} \prod_{k \geq 1} \Bigg( \frac{(m^2+ \vert \vec{p}_k \vert^2)^{1/4}}{\pi^{1/2}} e^{- \sqrt{m^2+ \vert \vec{p}_k \vert^2} R_k^2 (\phi)/2} \Bigg) \label{VacFunctional} \end{equation}
and, therefore, by absorbing part of Eq \ref{GenFunctional1} into $\psi_{\vert \Omega \rangle} (\phi)$ via Eq \ref{VacFunctional}, the general functional can be rewritten as
\begin{equation} \psi_{\vert \sharp (\vec{0}) = n_0, \sharp (\vec{p}_1) = n_1, \sharp (-\vec{p}_1) = n_{-1}, \sharp (\vec{p}_2) = n_2, \sharp (- \vec{p}_2) = n_{-2}, \cdots \rangle} (\phi) = \nonumber \end{equation}
\begin{equation} = \psi_{\vert \Omega \rangle} (\phi) \bigg( \sqrt{n_0!} \sum_{C_0=0}^{\lfloor n_0/2 \rfloor} \frac{(-1)^{C_0} m^{\frac{n_0}{2} - C_0}}{2^{C_0} C_0! (n_0-2C_0)!} R_0^{n_0-2C_0} (\phi) \bigg) \times \nonumber \end{equation}
\begin{equation} \times \prod_{k \geq 1} \Bigg( \sqrt{2^{n_k+n_{-k}} n_k! n_{-k}!} \sum_{C_k=0}^{\min (n_k, n_{-k})} \frac{(-1)^{C_k} 2^{\frac{n_k+n_{-k}}{2}-C_k} (m^2 + \vert \vec{p}_k \vert^2)^{\frac{n_k+n_{-k}}{4} - \frac{C_k}{2}} R_k^{n_k+n_{-k}-2C_k} (\phi) e^{i (n_k-n_{-k}) \Theta_k (\phi)}}{2^{n_k+n_{-k}-C_k} C_k! (n_k-C_k)! (n_{-k} -C_k )!} \Bigg) \end{equation}
and if we plug in the equations for $\psi_{\vert \Omega \rangle} (\phi)$ as well as $R_0 (\phi)$, $R_k (\phi)$ and $\Theta_k (\phi)$, we obtain
\begin{equation} \psi_{\vert \sharp (\vec{0}) = n_0, \sharp (\vec{p}_1) = n_1, \sharp (-\vec{p}_1) = n_{-1}, \sharp (\vec{p}_2) = n_2, \sharp (- \vec{p}_2) = n_{-2}, \cdots \rangle} (\phi) = \nonumber \end{equation}
\begin{equation} = \Bigg[ \bigg( \frac{m}{2 \pi} \bigg)^{1/4} \exp \bigg( -\frac{m}{4L_1L_2L_3} \bigg\vert \int d^3 x_0 \; \phi (\vec{x}_0) \bigg\vert^2 \bigg) \prod_{k \geq 1} \Bigg( \frac{(m^2+ \vert \vec{p}_k \vert^2)^{1/4}}{\pi^{1/2}} \exp \bigg( - \frac{\sqrt{m^2+ \vert \vec{p}_k \vert^2}}{L_1L_2L_3} \bigg\vert \int d^3 x_k \; \phi (\vec{x}_k) e^{i \vec{p}_k \cdot \vec{x}_k} \bigg\vert ^2 \bigg) \Bigg) \Bigg] \times \nonumber \end{equation}
\begin{equation} \times \bigg( \sqrt{n_0!} \sum_{C_0=0}^{\lfloor n_0/2 \rfloor} \frac{(-1)^{C_0} m^{\frac{n_0}{2} - C_0}}{2^{C_0} C_0! (n_0-2C_0)! \, (L_1L_2L_3)^{\frac{n_0}{2} - C_0}} \bigg\vert \int d^3 x' \; \phi (\vec{x}') \bigg\vert^{n_0-2C_0} \bigg) \times \nonumber \end{equation}
\begin{equation} \times \prod_{k \geq 1} \Bigg( \sqrt{2^{n_k+n_{-k}} n_k! n_{-k}!} \sum_{C_k=0}^{\min (n_k, n_{-k})} \frac{(-1)^{C_k} 2^{\frac{n_k+n_{-k}}{2}-C_k} (m^2 + \vert \vec{p}_k \vert^2)^{\frac{n_k+n_{-k}}{4} - \frac{C_k}{2}} }{2^{n_k+n_{-k}-C_k} C_k! (n_k-C_k)! (n_{-k} -C_k )!} \bigg( \frac{2}{L_1L_2L_3} \bigg)^{\frac{n_k+n_{-k}}{2} - C_k} \times \nonumber \end{equation}
\begin{equation} \times \bigg\vert \int d^3 x'' \; \phi (\vec{x}'') e^{i \vec{p}_k \cdot \vec{x}''} \bigg\vert^{n_k+n_{-k}-2C_k} \exp \bigg(i (n_k-n_{-k}) \Im \ln \int d^3 x''' \; \phi (\vec{x}''') e^{i \vec{p}_k \cdot \vec{x}'''} \bigg) \Bigg) \label{ExplicitFunctional} \end{equation}
Since the combinatorial coefficients above are easy to get wrong, a small numerical sanity check of the mode-pair factors is sketched below.
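The following sketch (ours; plain Python, and nothing in it is part of the construction itself) implements the factor contributed by a single mode pair $\{\vec{p}_k, -\vec{p}_k\}$ in the rewritten general functional and checks it against the two-particle results of Eq \ref{1} and Eq \ref{2}. Here $(2\omega)^{n/2-C}$ combines the factors $2^{\frac{n_k+n_{-k}}{2}-C_k}$ and $(m^2 + \vert \vec{p}_k \vert^2)^{\frac{n_k+n_{-k}}{4} - \frac{C_k}{2}}$ of the formula above, since $\omega_k = \sqrt{m^2 + \vert \vec{p}_k \vert^2}$.

\begin{verbatim}
from math import factorial as f, sqrt
import cmath

def pair_factor(n_plus, n_minus, omega, R, Theta):
    # factor of the mode pair {p_k, -p_k} relative to the vacuum functional;
    # n_plus = #(p_k), n_minus = #(-p_k), omega = sqrt(m^2 + |p_k|^2)
    n = n_plus + n_minus
    s = sum((-1) ** C * (2 * omega) ** (n / 2 - C) * R ** (n - 2 * C)
            / (2 ** (n - C) * f(C) * f(n_plus - C) * f(n_minus - C))
            for C in range(min(n_plus, n_minus) + 1))
    return sqrt(2 ** n * f(n_plus) * f(n_minus)) \
        * cmath.exp(1j * (n_plus - n_minus) * Theta) * s

omega, R, Theta = 1.7, 0.9, 0.4
# one particle each with p_k and -p_k: Eq (1) gives omega R^2 - 1
assert abs(pair_factor(1, 1, omega, R, Theta) - (omega * R**2 - 1)) < 1e-12
# two particles with the same p_k: Eq (2) gives (omega/sqrt(2)) R^2 e^{2i Theta}
assert abs(pair_factor(2, 0, omega, R, Theta)
           - omega / sqrt(2) * R**2 * cmath.exp(2j * Theta)) < 1e-12
print("mode-pair factors agree with Eqs (1) and (2)")
\end{verbatim}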
Now, in order to write down the creation and annihilation operators in differential form, let us first define the derivatives. One can show that
\begin{equation} \Theta_k (\phi + \epsilon \cos (\vec{p}_l \cdot \vec{x} - \Theta_l (\phi))) = \Theta_k (\phi) + O (\epsilon^2) \end{equation}
\begin{equation} \Theta_k (\phi + \epsilon \sin (\vec{p}_l \cdot \vec{x} - \Theta_l (\phi))) = \Theta_k (\phi) + \epsilon \; \frac{ \delta^k_l}{R_k (\phi)} \sqrt{\frac{L_1L_2L_3}{2}} + O (\epsilon^2) \end{equation}
\begin{equation} R_k (\phi + \epsilon \cos (\vec{p}_l \cdot \vec{x} - \Theta_l (\phi))) = R_k (\phi) + \epsilon \delta^k_l \sqrt{\frac{L_1L_2L_3}{2}} + O (\epsilon^2) \end{equation}
\begin{equation} R_k (\phi + \epsilon \sin (\vec{p}_l \cdot \vec{x} - \Theta_l (\phi))) = R_k (\phi) + O (\epsilon^2) \end{equation}
From this, we conclude that
\begin{equation} (\partial_{\Theta_k} \psi) (\phi) = \sqrt{\frac{2}{L_1L_2L_3}} R_k (\phi) \lim_{\epsilon \rightarrow 0} \frac{\psi \Big(\phi + \epsilon \sin \big(\vec{p}_k \cdot \vec{x} - \Theta_k (\phi) \big) \Big)- \psi (\phi)}{\epsilon} \end{equation}
\begin{equation} (\partial_{R_k} \psi) (\phi) = \sqrt{\frac{2}{L_1L_2L_3}} \lim_{\epsilon \rightarrow 0} \frac{\psi \Big(\phi + \epsilon \cos \big(\vec{p}_k \cdot \vec{x} - \Theta_k (\phi)\big)\Big) - \psi (\phi)}{\epsilon} \end{equation}
By substituting
\begin{equation} R_0 (\phi) = \frac{1}{\sqrt{L_1L_2L_3}} \bigg\vert \int d^3 x \; \phi (\vec{x}) \bigg\vert \end{equation}
\begin{equation} k \neq 0 \Longrightarrow R_k (\phi) = \sqrt{\frac{2}{L_1L_2L_3}} \bigg\vert \int d^3 x \; \phi (\vec{x}) e^{i \vec{p}_k \cdot \vec{x}} \bigg\vert \end{equation}
\begin{equation} k \neq 0 \Longrightarrow \Theta_k (\phi) = \Im \ln \int d^3 x \; \phi (\vec{x}) e^{i \vec{p}_k \cdot \vec{x}} \end{equation}
into the right hand side, we obtain
\begin{equation} k \neq 0 \Longleftrightarrow \vec{p}_k \neq \vec{0} \Longrightarrow \nonumber \end{equation}
\begin{equation} \Longrightarrow (\partial_{\Theta_k} \psi) (\phi) = \frac{2}{L_1L_2L_3} \bigg\vert \int d^3 x \; \phi (\vec{x}) e^{i \vec{p}_k \cdot \vec{x}} \bigg\vert \lim_{\epsilon \rightarrow 0} \frac{\psi \Big(\phi + \epsilon \sin \big(\vec{p}_k \cdot \vec{x} - \Im \ln \int d^3 x' \; \phi (\vec{x}') e^{i \vec{p}_k \cdot \vec{x}'} \big) \Big)- \psi (\phi)}{\epsilon} \end{equation}
\begin{equation} k \neq 0 \Longleftrightarrow \vec{p}_k \neq \vec{0} \Longrightarrow \nonumber \end{equation}
\begin{equation} \Longrightarrow (\partial_{R_k} \psi) (\phi) = \sqrt{ \frac{2}{L_1L_2L_3} } \lim_{\epsilon \rightarrow 0} \frac{\psi \Big(\phi + \epsilon \cos \big(\vec{p}_k \cdot \vec{x} - \Im \ln \int d^3 x' \; \phi (\vec{x}') e^{i \vec{p}_k \cdot \vec{x}'} \big) \Big)- \psi (\phi)}{\epsilon} \end{equation}
\begin{equation} (\partial_{R_0} \psi) (\phi) = \frac{1}{\sqrt{L_1L_2L_3}} \lim_{\epsilon \rightarrow 0} \frac{\psi (\phi + \epsilon) - \psi (\phi)}{\epsilon} \end{equation}
where $\phi + \epsilon$ is merely a shift by a constant:
\begin{equation} (\phi + \epsilon) (\vec{x}) = \epsilon + \phi (\vec{x}) \end{equation}
By looking at the expressions for $a_{++}$ and $a_{--}$, we read off
\begin{equation} [a_{p_k}^{\dagger} (\psi)] (\phi) = \frac{e^{i \Theta_k (\phi)}}{2} \bigg( (m^2 + \vert \vec{p}_k \vert^2)^{1/4} R_k (\phi) \psi (\phi) - \frac{1}{ (m^2 + \vert \vec{p}_k \vert^2)^{1/4}} (\partial_{R_k} \psi)(\phi) - \frac{i}{R_k (\phi) (m^2 + \vert \vec{p}_k \vert^2)^{1/4}} (\partial_{\Theta_k} \psi) (\phi)\bigg) \label{AnnihilationInfiniteOriginal} \end{equation}
\begin{equation} [a_{p_k} (\psi)] (\phi) = \frac{e^{-i \Theta_k (\phi)}}{2} \bigg( (m^2 + \vert \vec{p}_k \vert^2)^{1/4} R_k (\phi) \psi (\phi) + \frac{1}{ (m^2 + \vert \vec{p}_k \vert^2)^{1/4}} (\partial_{R_k} \psi) (\phi) - \frac{i}{R_k (\phi) (m^2 + \vert \vec{p}_k \vert^2)^{1/4}} (\partial_{\Theta_k} \psi) (\phi) \bigg) \label{CreationInfiniteOriginal} \end{equation}
and, by substituting the expressions for $R_k$, $\Theta_k$, $\partial_{R_k}$ and $\partial_{\Theta_k}$, we obtain
\begin{equation} [a_{p_k}^{\dagger} (\psi)] (\phi) = \frac{1}{2} \exp \bigg(\ii \Im \ln \int d^3 x \; \phi (\vec{x}) e^{i \vec{p}_k \cdot \vec{x}} \bigg) \Bigg[\sqrt{\frac{2}{L_1L_2L_3}} (m^2 + \vert \vec{p}_k \vert^2)^{1/4} \bigg\vert \int d^3 x \; \phi (\vec{x}) e^{i \vec{p}_k \cdot \vec{x}} \bigg\vert \psi (\phi) - \nonumber \end{equation}
\begin{equation} - \frac{1}{(m^2 + \vert \vec{p}_k \vert^2)^{1/4}} \sqrt{\frac{2}{L_1L_2L_3}} \lim_{\epsilon \rightarrow 0} \frac{\psi \Big(\phi + \epsilon \cos \big(\vec{p}_k \cdot \vec{x} - \Theta_k (\phi)\big)\Big) - \psi (\phi)}{\epsilon} - \nonumber \end{equation}
\begin{equation} - \frac{i}{(m^2 + \vert \vec{p}_k \vert^2)^{1/4} \sqrt{\frac{2}{L_1L_2L_3}} \bigg\vert \int d^3 x \; \phi (\vec{x}) e^{i \vec{p}_k \cdot \vec{x}} \bigg\vert} \cdot \frac{2}{L_1L_2L_3} \bigg\vert \int d^3 x \; \phi (\vec{x}) e^{i \vec{p}_k \cdot \vec{x}} \bigg\vert \times \nonumber \end{equation}
\begin{equation} \times \lim_{\epsilon \rightarrow 0} \frac{\psi \Big(\phi + \epsilon \sin \big(\vec{p}_k \cdot \vec{x} - \Im \ln \int d^3 x' \; \phi (\vec{x}') e^{i \vec{p}_k \cdot \vec{x}'} \big) \Big)- \psi (\phi)}{\epsilon} \Bigg] \end{equation}
The expression for the annihilation operator is the same except that the overall phase is conjugated and the sign in front of the $\partial_{R_k}$ term is switched from minus to plus:
\begin{equation} [a_{p_k} (\psi)] (\phi) = \frac{1}{2} \exp \bigg(- \ii \Im \ln \int d^3 x \; \phi (\vec{x}) e^{i \vec{p}_k \cdot \vec{x}} \bigg) \Bigg[\sqrt{\frac{2}{L_1L_2L_3}} (m^2 + \vert \vec{p}_k \vert^2)^{1/4} \bigg\vert \int d^3 x \; \phi (\vec{x}) e^{i \vec{p}_k \cdot \vec{x}} \bigg\vert \psi (\phi) + \nonumber \end{equation}
\begin{equation} + \frac{1}{(m^2 + \vert \vec{p}_k \vert^2)^{1/4}} \sqrt{\frac{2}{L_1L_2L_3}} \lim_{\epsilon \rightarrow 0} \frac{\psi \Big(\phi + \epsilon \cos \big(\vec{p}_k \cdot \vec{x} - \Theta_k (\phi)\big)\Big) - \psi (\phi)}{\epsilon} - \nonumber \end{equation}
\begin{equation} - \frac{i}{(m^2 + \vert \vec{p}_k \vert^2)^{1/4} \sqrt{\frac{2}{L_1L_2L_3}} \bigg\vert \int d^3 x \; \phi (\vec{x}) e^{i \vec{p}_k \cdot \vec{x}} \bigg\vert} \cdot \frac{2}{L_1L_2L_3} \bigg\vert \int d^3 x \; \phi (\vec{x}) e^{i \vec{p}_k \cdot \vec{x}} \bigg\vert \times \nonumber \end{equation}
\begin{equation} \times \lim_{\epsilon \rightarrow 0} \frac{\psi \Big(\phi + \epsilon \sin \big(\vec{p}_k \cdot \vec{x} - \Im \ln \int d^3 x' \; \phi (\vec{x}') e^{i \vec{p}_k \cdot \vec{x}'} \big) \Big)- \psi (\phi)}{\epsilon} \Bigg] \end{equation}
Now, if we look at the raising and lowering operators of the 1D oscillator, and use
\begin{equation} \mu_0 = \frac{1}{2} \; , \; \omega_0 = m \end{equation}
we can read off $a_0^{\dagger}$ and $a_0$:
\begin{equation} a_0^{\dagger} = \frac{\sqrt{m}}{2} R_0 - \frac{1}{\sqrt{m }} \partial_{R_0} \; , \; a_0 = \frac{\sqrt{m}}{2} R_0 + \frac{1}{\sqrt{m}} \partial_{R_0} \label{CreatAnnihilZeroCont} \end{equation}
and, by substituting the expressions for $R_0$ and $\partial_{R_0}$, we obtain
\begin{equation} [a_0^{\dagger} (\psi)] (\phi) = \frac{\sqrt{m}}{2} \bigg( \frac{1}{\sqrt{L_1L_2L_3}} \bigg\vert \int d^3 x \; \phi (\vec{x}) \bigg\vert \bigg) \psi (\phi) - \frac{1}{\sqrt{m }} \frac{1}{\sqrt{L_1L_2L_3}} \lim_{\epsilon \rightarrow 0} \frac{\psi (\phi + \epsilon) - \psi (\phi)}{\epsilon} \end{equation}
\begin{equation} [a_0 (\psi)] (\phi) = \frac{\sqrt{m}}{2} \bigg( \frac{1}{\sqrt{L_1L_2L_3}} \bigg\vert \int d^3 x \; \phi (\vec{x}) \bigg\vert \bigg) \psi (\phi) +\frac{1}{\sqrt{m }} \frac{1}{\sqrt{L_1L_2L_3}} \lim_{\epsilon \rightarrow 0} \frac{\psi (\phi + \epsilon) - \psi (\phi)}{\epsilon} \end{equation}
\subsection*{4. Converting functionals into functions}
As we stated earlier, our ultimate goal is to replace $\psi (\phi)$ with $\xi (\vec{x},y)$, since the former doesn't have a classical ontology while the latter does. In order to do that, we need a function $\{y \} \mapsto \{ \phi \}$. In order to introduce that function, we first postulate some \emph{fixed} field $\chi (\vec{x}, y)$, define $\chi_y$ as
\begin{equation} \chi_y (\vec{x}) = \chi (\vec{x}, y) \end{equation}
and then define $g^{(\chi)} \colon \{ y \} \mapsto \{\phi \}$ as
\begin{equation} g^{(\chi)} (y) = \chi_y \end{equation}
This enables us to replace $\psi \colon \{ \phi \} \mapsto \mathbb{C}$ with $\psi \circ g^{(\chi)} \colon \{y \} \mapsto \mathbb{C}$ via
\begin{equation} (\psi \circ g^{(\chi)}) (y) = \psi (g^{(\chi)} (y)) = \psi (\chi_y) \end{equation}
This, however, is not yet what we want, since we would like to have a function of the form $\{(\vec{x}, y)\} \mapsto \mathbb{C}$ rather than $\{y \} \mapsto \mathbb{C}$. In order to obtain a function $\{ (\vec{x},y) \} \mapsto \mathbb{C}$, we define
\begin{equation} \xi (\vec{x},y) = f (\vec{x}) (\psi \circ g^{(\chi)}) (y) \end{equation}
where $f (\vec{x})$ is a wave function corresponding to an additional particle we call a "fly". In other words, we are describing all of the particles in the universe via $\psi \circ g^{(\chi)}$ and, in addition to that, we are also describing one more particle, a fly, that can't be observed. A toy numerical sketch of this construction is given below.
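To illustrate (a toy of our own making: the hidden field $\chi$, the one-mode stand-in for $\psi$, and the fly wave function are all arbitrary choices), here is the construction $\xi (\vec{x},y) = f (\vec{x}) \, (\psi \circ g^{(\chi)}) (y)$ evaluated on a grid, in one spatial dimension for brevity:

\begin{verbatim}
import numpy as np

Nx, Ny, Lx = 64, 201, 1.0
x = np.linspace(0.0, Lx, Nx, endpoint=False)   # periodic spatial grid
y = np.linspace(-5.0, 5.0, Ny)                 # grid of extra-dimension values
dx = Lx / Nx

# a stand-in hidden field chi(x, y); chi[:, iy] is one field configuration chi_y
chi = np.sin(2*np.pi*x[:, None]/Lx - 0.2*y[None, :]) * np.tanh(y[None, :])

p1 = 2*np.pi/Lx                                # lowest nonzero momentum
omega = np.sqrt(1.0 + p1**2)                   # m = 1 in these units

def psi(phi):
    # toy one-mode vacuum functional: exp(-omega |int phi e^{i p1 x} dx|^2 / Lx)
    I = np.sum(phi * np.exp(1j*p1*x)) * dx
    return np.exp(-omega * abs(I)**2 / Lx)

p_fly = 3*p1                                   # fly taken as a momentum eigenstate
f = np.exp(1j*p_fly*x)

# xi(x, y) = f(x) * psi(chi_y): the composition psi o g^(chi), times the fly
xi = f[:, None] * np.array([psi(chi[:, iy]) for iy in range(Ny)])[None, :]
print(xi.shape)   # (64, 201): one complex number per (x, y) grid point
\end{verbatim}

Nothing about this toy is supposed to be dynamically sensible; it only demonstrates that, once $\chi$ is fixed, a functional of $\phi$ becomes an ordinary complex-valued function of $(\vec{x}, y)$.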
Then the QFT state $\vert \psi (\phi) \rangle$ in conjunction with a hidden field $\chi$ and a fly with momentum $\vec{p}_{fly}$ will, indeed, be described as a function of the form $(\vec{x}, y) \mapsto \mathbb{C}$, just as we wanted: \begin{equation} \xi_{\chi \otimes \vert p_{fly} \rangle \otimes \vert \psi (\phi) \rangle} (\vec{x},y) = \psi (\chi_y) e^{i \vec{p}_{fly} \cdot \vec{x}} \end{equation} If we now substitute Eq \ref{ExplicitFunctional} for $\psi (\phi)$, the function over $(\vec{x},y)$ reads \begin{equation} \xi_{\chi \otimes \vert p_{fly} \rangle \otimes\vert \sharp (\vec{0}) = n_0, \sharp (\vec{p}_1) = n_1, \sharp (-\vec{p}_1) = n_{-1}, \sharp (\vec{p}_2) = n_2, \sharp (- \vec{p}_2) = n_{-2}, \cdots \rangle} (\vec{x},y) \nonumber \end{equation} \begin{equation} = e^{i \vec{p}_{fly} \cdot \vec{x}} \Bigg[ \bigg( \frac{m}{2 \pi} \bigg)^{1/4} \exp \bigg( -\frac{m}{4L_1L_2L_3} \bigg\vert \int d^3 x_0 \; \chi (\vec{x}_0,y) \bigg\vert^2 \bigg) \times \nonumber \end{equation} \begin{equation} \times \prod_{k \geq 1} \Bigg( \frac{(m^2+ \vert \vec{p}_k \vert^2)^{1/4}}{\pi^{1/2}} \exp \bigg( - \frac{\sqrt{m^2+ \vert \vec{p}_k \vert^2}}{L_1L_2L_3} \bigg\vert \int d^3 x_k \; \chi (\vec{x}_k,y) e^{i \vec{p}_k \cdot \vec{x}_k} \bigg\vert ^2 \bigg) \Bigg) \Bigg] \times \nonumber \end{equation} \begin{equation} \times \bigg( \sqrt{n_0!} \sum_{C_0=0}^{\lfloor n_0/2 \rfloor} \frac{(-1)^{C_0} m^{\frac{n_0}{2} - C_0}}{2^{C_0} C_0! (n_0-2C_0)!} \bigg\vert \int d^3 x' \; \chi (\vec{x}',y) \bigg\vert^{n_0-2C_0} \bigg) \times \nonumber \end{equation} \begin{equation} \times \prod_{k} \Bigg( \sqrt{2^{n_k+n_{-k}} n_k! n_{-k}!} \sum_{C_k=0}^{\min (n_k, n_{-k})} \frac{(-1)^{C_k} 2^{\frac{n_k+n_{-k}}{2}-C_k} (m^2 + \vert \vec{p}_k \vert^2)^{\frac{n_k+n_{-k}}{4} - \frac{C_k}{2}} }{2^{n_k+n_{-k}-C_k} C_k! (n_k-C_k)!
(n_{-k} -C_k )!} \times \nonumber \end{equation} \begin{equation} \times \bigg\vert \int d^3 x'' \; \chi (\vec{x}'',y) e^{i \vec{p}_k \cdot \vec{x}''} \bigg\vert^{n_k-2C_k} \exp \bigg(i (n_k-n_{-k}) \Im \ln \int d^3 x''' \; \chi (\vec{x}''',y) e^{i \vec{p}_k \cdot \vec{x}'''} \bigg) \bigg) \label{xiGeneralFinal} \end{equation} Similarly, if we want the fly to be localized in space rather than in momentum, we have \begin{equation} \xi_{\chi \otimes \vert x_{fly} \rangle \otimes \vert \psi (\phi) \rangle} (\vec{x},y) = \psi (\chi_y) \delta^3 (\vec{x} - \vec{x}_{fly}) \end{equation} and then the function over $(\vec{x},y)$ will be \begin{equation} \xi_{\chi \otimes \vert x_{fly} \rangle \otimes\vert \sharp (\vec{0}) = n_0, \sharp (\vec{p}_1) = n_1, \sharp (-\vec{p}_1) = n_{-1}, \sharp (\vec{p}_2) = n_2, \sharp (- \vec{p}_2) = n_{-2}, \cdots \rangle} (\vec{x},y) \nonumber \end{equation} \begin{equation} = \delta^3 (\vec{x} - \vec{x}_{fly}) \Bigg[ \bigg( \frac{m}{2 \pi} \bigg)^{1/4} \exp \bigg( -\frac{m}{4L_1L_2L_3} \bigg\vert \int d^3 x_0 \; \chi (\vec{x}_0,y) \bigg\vert^2 \bigg) \times \nonumber \end{equation} \begin{equation} \times \prod_{k \geq 1} \Bigg( \frac{(m^2+ \vert \vec{p}_k \vert^2)^{1/4}}{\pi^{1/2}} \exp \bigg( - \frac{\sqrt{m^2+ \vert \vec{p}_k \vert^2}}{L_1L_2L_3} \bigg\vert \int d^3 x_k \; \chi (\vec{x}_k,y) e^{i \vec{p}_k \cdot \vec{x}_k} \bigg\vert ^2 \bigg) \Bigg) \Bigg] \times \nonumber \end{equation} \begin{equation} \times \bigg( \sqrt{n_0!} \sum_{C_0=0}^{\lfloor n_0/2 \rfloor} \frac{(-1)^{C_0} m^{\frac{n_0}{2} - C_0}}{2^{C_0} C_0! (n_0-2C_0)!} \bigg\vert \int d^3 x' \; \chi (\vec{x}',y) \bigg\vert^{n_0-2C_0} \bigg) \times \nonumber \end{equation} \begin{equation} \times \prod_{k} \Bigg( \sqrt{2^{n_k+n_{-k}} n_k! n_{-k}!} \sum_{C_k=0}^{\min (n_k, n_{-k})} \frac{(-1)^{C_k} 2^{\frac{n_k+n_{-k}}{2}-C_k} (m^2 + \vert \vec{p}_k \vert^2)^{\frac{n_k+n_{-k}}{4} - \frac{C_k}{2}} }{2^{n_k+n_{-k}-C_k} C_k! (n_k-C_k)! (n_{-k} -C_k )!} \times \nonumber \end{equation} \begin{equation} \times \bigg\vert \int d^3 x'' \; \chi (\vec{x}'',y) e^{i \vec{p}_k \cdot \vec{x}''} \bigg\vert^{n_k-2C_k} \exp \bigg(i (n_k-n_{-k}) \Im \ln \int d^3 x''' \; \chi (\vec{x}''',y) e^{i \vec{p}_k \cdot \vec{x}'''} \bigg) \bigg) \end{equation} We can utilize Eq \ref{xiGeneralFinal} in order to obtain a realistic interpretation of an ensemble of states.
In particular, the density matrix \begin{equation} \sum_l \bigg( w_l \vert \sharp (\vec{0}) = n_{l0}, \sharp (\vec{p}_1) = n_{l1}, \sharp (-\vec{p}_1) = n_{l,-1}, \sharp (\vec{p}_2) = n_{l2}, \sharp (- \vec{p}_2) = n_{l,-2}, \cdots \rangle \times \nonumber \end{equation} \begin{equation} \times \langle \sharp (\vec{0}) = n_{l0}, \sharp (\vec{p}_1) = n_{l1}, \sharp (-\vec{p}_1) = n_{l,-1}, \sharp (\vec{p}_2) = n_{l2}, \sharp (- \vec{p}_2) = n_{l,-2}, \cdots \vert \bigg) \end{equation} (where we denote the ensemble weights by $w_l$ to avoid a clash with the combinatorial indices $C_k$ below) is described as \begin{equation} \xi_{\sigma = \vert \cdots \rangle \langle \cdots \vert} (\vec{x},y) \nonumber \end{equation} \begin{equation} = \sum_l w_l \bigg\{ e^{i \vec{p}_{fly} \cdot \vec{x}} \Bigg[ \bigg( \frac{m}{2 \pi} \bigg)^{1/4} \exp \bigg( -\frac{m}{4L_1L_2L_3} \bigg\vert \int d^3 x_0 \; \chi (\vec{x}_0,y) \bigg\vert^2 \bigg) \times \nonumber \end{equation} \begin{equation} \times \prod_{k \geq 1} \Bigg( \frac{(m^2+ \vert \vec{p}_k \vert^2)^{1/4}}{\pi^{1/2}} \exp \bigg( - \frac{\sqrt{m^2+ \vert \vec{p}_k \vert^2}}{L_1L_2L_3} \bigg\vert \int d^3 x_k \; \chi (\vec{x}_k,y) e^{i \vec{p}_k \cdot \vec{x}_k} \bigg\vert ^2 \bigg) \Bigg) \Bigg] \times \nonumber \end{equation} \begin{equation} \times \bigg( \sqrt{n_{l0}!} \sum_{C_0=0}^{\lfloor n_{l0}/2 \rfloor} \frac{(-1)^{C_0} m^{\frac{n_{l0}}{2} - C_0}}{2^{C_0} C_0! (n_{l0}-2C_0)!} \bigg\vert \int d^3 x' \; \chi (\vec{x}',y) \bigg\vert^{n_{l0}-2C_0} \bigg) \times \nonumber \end{equation} \begin{equation} \times \prod_{k} \Bigg( \sqrt{2^{n_{lk}+n_{l,-k}} n_{lk}! n_{l,-k}!} \sum_{C_k=0}^{\min (n_{lk}, n_{l,-k})} \frac{(-1)^{C_k} 2^{\frac{n_{lk}+n_{l,-k}}{2}-C_k} (m^2 + \vert \vec{p}_k \vert^2)^{\frac{n_{lk}+n_{l,-k}}{4} - \frac{C_k}{2}} }{2^{n_{lk}+n_{l,-k}-C_k} C_k! (n_{lk}-C_k)! (n_{l,-k} -C_k )!} \times \nonumber \end{equation} \begin{equation} \times \bigg\vert \int d^3 x'' \; \chi (\vec{x}'',y) e^{i \vec{p}_k \cdot \vec{x}''} \bigg\vert^{n_{lk}-2C_k} \exp \bigg(i (n_{lk}-n_{l,-k}) \Im \ln \int d^3 x''' \; \chi (\vec{x}''',y) e^{i \vec{p}_k \cdot \vec{x}'''} \bigg) \bigg) \bigg\} \label{DensityFinal} \end{equation} We would now like to define creation and annihilation operators. However, we can no longer use the derivatives that we used in the previous section. The reason is that, as far as infinitesimal displacements are concerned, we have only one degree of freedom, namely $y$. This is not enough to define more than one partial derivative without unwanted linear dependence. The way around this is to utilize a finite definition of partial derivatives as opposed to an infinitesimal one; namely, for $f \colon (r_0, r_1, \theta_1, \cdots, r_n, \theta_n)\mapsto \mathbb{C}$ we define \begin{equation} \partial_{\theta_k}^{(\alpha)} f = \frac{\alpha^{n+ \frac{3}{2}}}{2^{1/2} \pi^{n+ \frac{1}{2}}} r_k^2 \int d^{2n+1} x' \; (\theta_k'- \theta_k) f(\vec{x}') e^{- \frac{\alpha}{2} \vert \vec{x}' - \vec{x} \vert^2} \end{equation} \begin{equation} \partial_{r_k}^{(\alpha)} f= \frac{\alpha^{n+ \frac{3}{2}}}{2^{1/2} \pi^{n+ \frac{1}{2}}} \int d^{2n+1}x' \; (r_k' -r_k ) f(\vec{x}') e^{- \frac{\alpha}{2} \vert \vec{x}' - \vec{x} \vert^2} \end{equation} which can be shown to approximate the corresponding derivatives in the event that $\alpha$ is so large that $f (\vec{x}')$ is approximately linear within the range where $e^{-\frac{\alpha}{2} \vert \vec{x}' - \vec{x} \vert^2}$ is far from zero.
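This large-$\alpha$ claim is easy to verify numerically. The sketch below does so for the one-variable case ($n=0$, a single $r$-type coordinate), where the prefactor reduces to $\alpha^{3/2}/\sqrt{2\pi}$; the test function $\sin$ is our own choice.

\begin{verbatim}
import numpy as np

def smeared_derivative(f, x0, alpha, half_width=6.0, n=20001):
    """Finite, Gaussian-weighted analogue of f'(x0):
    (alpha^{3/2}/sqrt(2 pi)) * Int dx' (x'-x0) f(x') exp(-alpha (x'-x0)^2 / 2)."""
    w = half_width / np.sqrt(alpha)      # integrate over ~6 standard deviations
    xp = np.linspace(x0 - w, x0 + w, n)
    kern = (xp - x0) * np.exp(-0.5 * alpha * (xp - x0) ** 2)
    integral = np.sum(kern * f(xp)) * (xp[1] - xp[0])
    return alpha ** 1.5 / np.sqrt(2.0 * np.pi) * integral

f = np.sin
for alpha in (1e2, 1e4, 1e6):
    print(alpha, smeared_derivative(f, x0=0.3, alpha=alpha), np.cos(0.3))
\end{verbatim}

As $\alpha$ grows, the output approaches the exact derivative $\cos(0.3)$, with the leading correction of order $f'''/(2\alpha)$, in line with the linearity condition stated above.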
Now we would like to replace the integrals over the $r_k$-s and $\theta_k$-s with a single $y$-integral, in which $r_k$ and $\theta_k$ are replaced by $R_k^{(\chi)} (y)$ and $\Theta_k^{(\chi)} (y)$, respectively. First, we recall that our space is compactified, \begin{equation} x^1+ L_1 = x^1 \; , \; x^2 +L_2 = x^2 \; , \; x^3 +L_3 = x^3 \; , \; x^5 + L_5 = x^5 \end{equation} Secondly, for any given $\phi$ we will define $\phi^{(N)}$ as the sum of its first $N$ Fourier components, \begin{equation} \phi^{(N)} (\vec{x}) = \frac{1}{L_1L_2L_3} \sum_{k=-N}^N \bigg[ e^{i \vec{p}_k \cdot \vec{x}} \bigg( \int d^3 x' \; \phi (\vec{x}') e^{-i \vec{p}_k \cdot \vec{x}'} \bigg) \bigg] \label{PhiN} \end{equation} and, finally, we will assume that $\chi$ behaves in such a way that $\{\chi_y^{(N)} \vert 0 \leq y < L_5\}$ is distributed in $\mathbb{R}^{2N+1}$ with probability density $\rho$. In this case, the integral over $\phi^{(N)}$ will be replaced with an integral over $y$ via the following scheme: \begin{equation} \int d^{2N+1} \phi^{(N)} \; f(\phi^{(N)}) \longrightarrow \frac{1}{L_5} \int dy' \; \frac{f(\chi^{(N)} (y'))}{\rho (\chi^{(N)} (y'))} \label{IntegralConversion} \end{equation} from which we read off the following definitions of partial derivatives: \begin{equation} ({\cal D}^{(N, \chi, \rho (\phi))}_{\Theta_k} \xi) (\vec{x},y) = \frac{\alpha^{N+ \frac{3}{2}}}{2^{1/2} \pi^{N+ \frac{1}{2}}L_5} R_k^2 (\chi_y) \times \nonumber \end{equation} \begin{equation} \times \int d y' \; \frac{( \Theta_k (\chi^{(N)}_{y'}) - \Theta_k (\chi^{(N)}_{y})) \xi(\vec{x},y') e^{- \frac{\alpha}{2} \vert \chi^{(N)}_{y'} - \chi^{(N)}_y \vert^2}}{\rho (\chi^{(N)} (y'))} \label{DThetaShort} \end{equation} \begin{equation} ({\cal D}^{(N, \chi, \rho (\phi))}_{R_k} \xi) (\vec{x},y) = \frac{\alpha^{N+ \frac{3}{2}}}{2^{1/2} \pi^{N+ \frac{1}{2}} L_5} \int dy' \; \frac{(R_k (\chi^{(N)}_{y'}) -R_k (\chi_y^{(N)}) ) \xi(\vec{x},y') e^{- \frac{\alpha}{2} \vert \chi^{(N)}_{y'} - \chi^{(N)}_y \vert^2}}{\rho (\chi^{(N)} (y'))} \label{dRShort} \end{equation} \begin{equation} ({\cal D}^{(N, \chi, \rho (\phi))}_{R_0} \xi) (\vec{x},y) = \frac{\alpha^{N+ \frac{3}{2}}}{2^{1/2} \pi^{N+ \frac{1}{2}} L_5} \int dy' \; \frac{(R_0 (\chi^{(N)}_{y'}) -R_0 (\chi_y^{(N)}) ) \xi(\vec{x},y') e^{- \frac{\alpha}{2} \vert \chi^{(N)}_{y'} - \chi^{(N)}_y \vert^2}}{\rho (\chi^{(N)} (y'))} \label{dR0Short} \end{equation} Let us now substitute the explicit expressions for $R$ and $\Theta$ in order to come up with an expression that only involves $\chi$ and $\xi$, however complicated that might be.
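Before doing so, we note that Eq \ref{IntegralConversion} is, in essence, an importance-sampling identity: if the curve $y \mapsto \chi^{(N)}_y$ fills the mode space with density $\rho$, dividing by $\rho$ recovers the flat integral. A one-mode caricature of this (all concrete choices -- the Gaussian density, the inverse-CDF curve, and the test function -- are ours):

\begin{verbatim}
import numpy as np
from scipy.special import erfinv

L5, beta = 1.0, 1.0

def rho(phi):
    """Gaussian density, rho(phi) ~ exp(-beta phi^2 / 2), normalized."""
    return np.sqrt(beta / (2 * np.pi)) * np.exp(-0.5 * beta * phi ** 2)

def chi(y):
    """A curve y -> chi(y) that visits R with density rho (inverse CDF)."""
    return np.sqrt(2.0 / beta) * erfinv(2.0 * y / L5 - 1.0)

def f(phi):
    return np.exp(-phi ** 2)  # any integrable test function

# Left: flat integral over phi.  Right: (1/L5) * Int dy f(chi)/rho(chi).
phi = np.linspace(-8, 8, 400001)
lhs = np.sum(f(phi)) * (phi[1] - phi[0])
y = np.linspace(1e-6, L5 - 1e-6, 400001)
rhs = np.sum(f(chi(y)) / rho(chi(y))) * (y[1] - y[0]) / L5
print(lhs, rhs)  # both approximate sqrt(pi) ~ 1.7725
\end{verbatim}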
First of all, one can easily show that \begin{equation} 0 \leq k \leq N \Longrightarrow R_k (\phi) = R_k (\phi^{(N)}) \end{equation} \begin{equation} 1 \leq k \leq N \Longrightarrow \Theta_k (\phi) = \Theta_k (\phi^{(N)}) \end{equation} and, therefore, \begin{equation} R_0 (\phi^{(N)}) = R_{0} (\phi) = \frac{1}{\sqrt{L_1L_2L_3}} \bigg\vert \int d^3 x \; \phi (\vec{x}) \bigg\vert \end{equation} \begin{equation} 1 \leq k \leq N \Longrightarrow R_k (\phi^{(N)}) = R_k (\phi) = \sqrt{\frac{2}{L_1L_2L_3}} \bigg\vert \int d^3 x \; \phi (\vec{x}) e^{i \vec{p}_k \cdot \vec{x}} \bigg\vert \end{equation} \begin{equation} 1 \leq k \leq N \Longrightarrow \Theta_k (\phi^{(N)}) = \Theta_k (\phi) = \Im \ln \int \phi (\vec{x}) e^{i \vec{p}_k \cdot \vec{x}} d^3 x \end{equation} We will then convert $R$ and $\Theta$ into functions of $y$ as follows: \begin{equation} R_0^{(\chi)} (y) = R_0 (\chi_y) = \frac{1}{\sqrt{L_1L_2L_3}} \bigg\vert \int d^3 x \; \chi_y (\vec{x}) \bigg\vert = \frac{1}{\sqrt{L_1L_2L_3}} \bigg\vert \int d^3 x \; \chi (\vec{x},y) \bigg\vert \label{R0(y)} \end{equation} \begin{equation} k \neq 0 \Longrightarrow R^{(\chi)}_k (y) = R_k (\chi_y) = \sqrt{\frac{2}{L_1L_2L_3}} \bigg\vert \int d^3 x \; \chi_y (\vec{x}) e^{i \vec{p}_k \cdot \vec{x}} \bigg\vert = \nonumber \end{equation} \begin{equation} = \sqrt{\frac{2}{L_1L_2L_3}} \bigg\vert \int d^3 x \; \chi (\vec{x},y) e^{i \vec{p}_k \cdot \vec{x}} \bigg\vert \label{Rk(y)} \end{equation} \begin{equation} \Theta_k^{(\chi)} (y) = \Theta_k (\chi_y) = \Im \ln \int d^3 x \; \chi_y (\vec{x}) e^{i \vec{p}_k \cdot \vec{x}} = \Im \ln \int d^3 x \; \chi (\vec{x},y) e^{i \vec{p}_k \cdot \vec{x}} \label{Thetak(y)} \end{equation} Furthermore, we will assume that the probability distribution $\rho$ is Gaussian, \begin{equation} \rho^{(N, \beta)} (\phi) = \bigg(\frac{\beta}{2 \pi} \bigg)^{N+ \frac{1}{2}} \exp \bigg(- \frac{\beta}{2L_1L_2L_3} \sum_{k=-N}^N \bigg\vert \int d^3 x' \; \phi (\vec{x}') e^{-i \vec{p}_k \cdot \vec{x}'} \bigg\vert^2 \bigg) \end{equation} From this we define \begin{equation} \rho^{(N, \chi, \beta)} (y) = \rho^{(N, \beta)} (\chi_y) = \bigg(\frac{\beta}{2 \pi} \bigg)^{N+ \frac{1}{2}} \exp \bigg(- \frac{\beta}{2L_1L_2L_3} \sum_{k=-N}^N \bigg\vert \int d^3 x' \; \chi_y (\vec{x}') e^{-i \vec{p}_k \cdot \vec{x}'} \bigg\vert^2 \bigg) = \nonumber \end{equation} \begin{equation} = \bigg(\frac{\beta}{2 \pi} \bigg)^{N+ \frac{1}{2}} \exp \bigg(- \frac{\beta}{2L_1L_2L_3} \sum_{k=-N}^N \bigg\vert \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_k \cdot \vec{x}'} \bigg\vert^2 \bigg) \label{rho} \end{equation} Now, by plugging in Eq \ref{Rk(y)}, \ref{Thetak(y)} and \ref{rho} into Eq \ref{DThetaShort}, we obtain \begin{equation} ({\cal D}^{(N, \chi, \alpha, \beta)}_{\Theta_k} \xi) (\vec{x},y) = \frac{2^{N+1} \alpha^{N+ \frac{3}{2}}}{ \beta^{N+\frac{1}{2}} L_5 L_1L_2L_3}\bigg\vert \int d^3 x' \; \chi (\vec{x}',y) e^{i \vec{p}_k \cdot \vec{x}'} \bigg\vert^2 \times \nonumber \end{equation} \begin{equation} \times \int d y' \; \bigg[ \bigg( \Im \ln \int \chi (\vec{x}'',y') e^{i \vec{p}_k \cdot \vec{x}''} d^3 x'' - \Im \ln \int d^3 x''' \; \chi (\vec{x}''',y) e^{i \vec{p}_k \cdot \vec{x}'''} \bigg) \times \nonumber \end{equation} \begin{equation} \times \xi(\vec{x},y') \exp \bigg(- \frac{\alpha}{2L_1L_2L_3} \sum_{k'=-N}^N \bigg\vert \int d^3
x'''' \; \chi (\vec{x}'''',y') e^{-i \vec{p}_{k'} \cdot \vec{x}''''} \bigg\vert^2 \bigg) \nonumber \end{equation} \begin{equation} \times \exp \bigg(\frac{\beta}{2L_1L_2L_3} \sum_{k'=-N}^N \bigg\vert \int d^3 x''''' \; \chi (\vec{x}''''',y') e^{-i \vec{p}_{k'} \cdot \vec{x}'''''} \bigg\vert^2 \bigg) \bigg] \label{dThetakLong} \end{equation} On the other hand, if we plug in Eq \ref{Rk(y)}, \ref{Thetak(y)} and \ref{rho} into Eq \ref{dRShort}, we obtain \begin{equation} k \neq 0 \Longrightarrow ({\cal D}^{(N, \chi, \alpha, \beta)}_{R_k} \xi) (\vec{x},y) = \frac{2^{N+\frac{1}{2}} \alpha^{N+ \frac{3}{2}}}{ \beta^{N+\frac{1}{2}} L_5 \sqrt{L_1L_2L_3}} \times \nonumber \end{equation} \begin{equation} \times \int d y' \; \bigg[ \bigg(\bigg\vert \int d^3 x' \; \chi (\vec{x}',y') e^{i \vec{p}_k \cdot \vec{x}'}\bigg\vert - \bigg\vert \int d^3 x'' \; \chi (\vec{x}'',y) e^{i \vec{p}_k \cdot \vec{x}''} \bigg\vert \bigg) \times \nonumber \end{equation} \begin{equation} \times \xi(\vec{x},y') \exp \bigg(- \frac{\alpha}{2L_1L_2L_3} \sum_{k'=-N}^N \bigg\vert \int d^3 x''' \; \chi (\vec{x}''',y') e^{-i \vec{p}_{k'} \cdot \vec{x}'''} \bigg\vert^2 \bigg) \nonumber \end{equation} \begin{equation} \times \exp \bigg(\frac{\beta}{2L_1L_2L_3} \sum_{k'=-N}^N \bigg\vert \int d^3 x'''' \; \chi (\vec{x}'''',y') e^{-i \vec{p}_{k'} \cdot \vec{x}''''} \bigg\vert^2 \bigg) \bigg] \label{dRkLong} \end{equation} Finally, if we plug in Eq \ref{R0(y)} and \ref{rho} into Eq \ref{dR0Short}, we obtain \begin{equation} ({\cal D}^{(N, \chi, \alpha, \beta)}_{R_0} \xi) (\vec{x},y) = \frac{2^N \alpha^{N+ \frac{3}{2}}}{ \beta^{N+\frac{1}{2}} L_5 \sqrt{L_1L_2L_3}} \times \nonumber \end{equation} \begin{equation} \times \int d y' \; \bigg[ \bigg(\bigg\vert \int d^3 x' \; \chi (\vec{x}',y') e^{i \vec{p}_0 \cdot \vec{x}'} \bigg\vert - \bigg\vert \int d^3 x'' \; \chi (\vec{x}'',y) e^{i \vec{p}_0 \cdot \vec{x}''} \bigg\vert \bigg) \times \nonumber \end{equation} \begin{equation} \times \xi(\vec{x},y') \exp \bigg(- \frac{\alpha}{2L_1L_2L_3} \sum_{k=-N}^N \bigg\vert \int d^3 x''' \; \chi (\vec{x}''',y') e^{-i \vec{p}_k \cdot \vec{x}'''} \bigg\vert^2 \bigg) \nonumber \end{equation} \begin{equation} \times \exp \bigg(\frac{\beta}{2L_1L_2L_3} \sum_{k=-N}^N \bigg\vert \int d^3 x'''' \; \chi (\vec{x}'''',y') e^{-i \vec{p}_k \cdot \vec{x}''''} \bigg\vert^2 \bigg) \bigg] \label{dR0Long} \end{equation} (Note that the point of each difference above is to compare $y'$ with $y$; in Eq \ref{dThetakLong}--\ref{dR0Long} the first term is evaluated at $y'$ and the second at $y$, in accordance with Eq \ref{DThetaShort}--\ref{dR0Short}.) Now that we have defined the derivatives, we are going to use them to define creation and annihilation operators.
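As an aside, expressions such as Eq \ref{dRShort} are directly computable once $\chi$ is tabulated on a grid. The following sketch (one spatial dimension, $N=1$, $\beta=1$, with the overall constant prefactor and the normalization of $\rho$ dropped; all field choices are our own toy assumptions) shows the structure: a $\rho$-reweighted, Gaussian-windowed difference quadrature over $y'$.

\begin{verbatim}
import numpy as np

# Toy setup: one spatial dimension x (length L1), extra dimension y (L5).
nx, ny = 32, 256
L1, L5 = 2 * np.pi, 2 * np.pi
x = np.linspace(0, L1, nx, endpoint=False)
y = np.linspace(0, L5, ny, endpoint=False)
chi = np.cos(np.subtract.outer(y, x)) + 0.5 * np.cos(3.0 * y)[:, None]

alpha, N = 40.0, 1
p = 2 * np.pi / L1  # momentum of the single retained mode (k = 1)

def mode(iy, k):
    """Fourier coefficient  Int dx chi(x, y) exp(i p_k x)  on the grid."""
    return np.sum(chi[iy] * np.exp(1j * k * p * x)) * (L1 / nx)

R = np.array([np.sqrt(2.0 / L1) * abs(mode(iy, 1)) for iy in range(ny)])

# Squared distance |chi_y' - chi_y|^2 restricted to the retained modes.
modes = np.array([[mode(iy, k) for k in range(-N, N + 1)]
                  for iy in range(ny)])
dist2 = np.sum(abs(modes[:, None, :] - modes[None, :, :]) ** 2, axis=2) / L1
rho = np.exp(-0.5 * np.sum(abs(modes) ** 2, axis=1) / L1)  # un-normalized

xi = np.exp(1j * y)  # toy xi(y); the spatial factor is suppressed
dy = L5 / ny
# Structure of the D_{R_k} derivative: reweighted, windowed difference sum.
D_R_xi = np.array([
    np.sum((R - R[iy]) * xi * np.exp(-0.5 * alpha * dist2[iy]) / rho)
    * dy / L5
    for iy in range(ny)])
print(D_R_xi[:4])
\end{verbatim}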
By looking at Eq \ref{AnnihilationInfiniteOriginal} and \ref{CreationInfiniteOriginal}, and making appropriate substitutions, we obtain \begin{equation} [a_{p_k}^{(N, \chi, \alpha, \beta)} (\xi)] (\vec{x},y) = \frac{e^{-i \Theta_k (\chi_y)}}{2} \bigg( R_k (\chi_y) \xi (\vec{x},y) (m^2 + \vert \vec{p}_k \vert^2)^{1/4} + \nonumber \end{equation} \begin{equation} + \frac{1}{ (m^2 + \vert \vec{p}_k \vert^2)^{1/4}} ({\cal D}^{(N, \chi, \alpha, \beta)}_{R_k} \xi)(\vec{x},y) - \frac{i}{R_k (\chi_y) (m^2 + \vert \vec{p}_k \vert^2)^{1/4}} ({\cal D}^{(N, \chi, \alpha, \beta)}_{\Theta_k} \xi) (\vec{x},y)\bigg) \label{Annihilation5Short} \end{equation} \begin{equation} [a_{p_k}^{\dagger (N, \chi, \alpha, \beta)} (\xi)] (\vec{x},y) = \frac{e^{i \Theta_k (\chi_y)}}{2} \bigg( R_k (\chi_y) \xi (\vec{x},y) (m^2 + \vert \vec{p}_k \vert^2)^{1/4} - \nonumber \end{equation} \begin{equation} - \frac{1}{ (m^2 + \vert \vec{p}_k \vert^2)^{1/4}} ({\cal D}^{(N, \chi, \alpha, \beta)}_{R_k} \xi)(\vec{x},y) - \frac{i}{R_k (\chi_y) (m^2 + \vert \vec{p}_k \vert^2)^{1/4}} ({\cal D}^{(N, \chi, \alpha, \beta)}_{\Theta_k} \xi) (\vec{x},y)\bigg) \label{Creation5Short} \end{equation} If we plug in Eq \ref{Rk(y)}, \ref{dRkLong} and \ref{dThetakLong} into Eq \ref{Annihilation5Short}, we obtain \begin{equation} [a_{p_k}^{(N, \alpha, \beta, \chi)} (\xi)] (\vec{x},y) = \frac{1}{2} \bigg[ \exp \bigg( - i \; \Im \ln \int d^3 x' \; \chi (\vec{x}',y) e^{i \vec{p}_k \cdot \vec{x}'} \bigg) \bigg] \times \nonumber \end{equation} \begin{equation} \times \Bigg\{ \xi (\vec{x},y) (m^2 + \vert \vec{p}_k \vert^2)^{1/4} \bigg( \sqrt{\frac{2}{L_1L_2L_3}} \bigg\vert \int d^3 x'' \; \chi (\vec{x}'',y) e^{i \vec{p}_k \cdot \vec{x}''} \bigg\vert \bigg) + \nonumber \end{equation} \begin{equation} + \frac{1}{ (m^2 + \vert \vec{p}_k \vert^2)^{1/4}} \bigg\{ \frac{2^{N+\frac{1}{2}} \alpha^{N+ \frac{3}{2}}}{ \beta^{N+\frac{1}{2}} L_5 \sqrt{L_1L_2L_3}} \times \nonumber \end{equation} \begin{equation} \times \int d y' \; \bigg[ \bigg(\bigg\vert \int d^3 x''' \; \chi (\vec{x}''',y') e^{i \vec{p}_k \cdot \vec{x}'''} \bigg\vert - \bigg\vert \int d^3 x'''' \; \chi (\vec{x}'''',y) e^{i \vec{p}_k \cdot \vec{x}''''} \bigg\vert \bigg) \times \nonumber \end{equation} \begin{equation} \times \xi(\vec{x},y') \exp \bigg(- \frac{\alpha}{2L_1L_2L_3} \sum_{k'=-N}^N \bigg\vert \int d^3 x''''' \; \chi (\vec{x}''''',y') e^{-i \vec{p}_{k'} \cdot \vec{x}'''''} \bigg\vert^2 \bigg) \nonumber \end{equation} \begin{equation} \times \exp \bigg(\frac{\beta}{2L_1L_2L_3} \sum_{k'=-N}^N \bigg\vert \int d^3 x'''''' \; \chi (\vec{x}'''''',y') e^{-i \vec{p}_{k'} \cdot \vec{x}''''''} \bigg\vert^2 \bigg) \bigg] \bigg\} - \nonumber \end{equation} \begin{equation} - \frac{i}{ (m^2 + \vert \vec{p}_k \vert^2)^{1/4} } \frac{2^{N+\frac{1}{2}} \alpha^{N+ \frac{3}{2}}}{ \beta^{N+\frac{1}{2}} L_5 \sqrt{L_1L_2L_3}}\bigg\vert \int d^3 x''''''' \; \chi (\vec{x}''''''',y) e^{i \vec{p}_k \cdot \vec{x}'''''''} \bigg\vert \times \nonumber \end{equation} \begin{equation} \times \int d y' \; \bigg[ \bigg( \Im \ln \int \chi (\vec{x}'''''''',y') e^{i \vec{p}_k \cdot \vec{x}''''''''} d^3 x'''''''' - \Im \ln \int d^3 x''''''''' \; \chi (\vec{x}''''''''',y) e^{i \vec{p}_k \cdot \vec{x}'''''''''} \bigg) \times \nonumber \end{equation} \begin{equation} \times \xi(\vec{x},y') \exp \bigg(- \frac{\alpha}{2L_1L_2L_3} \sum_{k'=-N}^N \bigg\vert
\int d^3 x'''''''''' \; \chi (\vec{x}'''''''''',y') e^{-i \vec{p}_{k'} \cdot \vec{x}''''''''''} \bigg\vert^2 \bigg) \nonumber \end{equation} \begin{equation} \times \exp \bigg(\frac{\beta}{2L_1L_2L_3} \sum_{k'=-N}^N \bigg\vert \int d^3 x''''''''''' \; \chi (\vec{x}''''''''''',y') e^{-i \vec{p}_{k'} \cdot \vec{x}'''''''''''} \bigg\vert^2 \bigg) \bigg] \Bigg\} \label{AnnihilateFinal} \end{equation} On the other hand, if we plug in Eq \ref{Rk(y)}, \ref{dRkLong} and \ref{dThetakLong} into Eq \ref{Creation5Short}, we obtain \begin{equation} [a_{p_k}^{\dagger (N, \alpha, \beta, \chi)} (\xi)] (\vec{x},y) = \frac{1}{2} \bigg[ \exp \bigg( i \; \Im \ln \int d^3 x' \; \chi (\vec{x}',y) e^{i \vec{p}_k \cdot \vec{x}'} \bigg) \bigg] \times \nonumber \end{equation} \begin{equation} \times \Bigg\{ \xi (\vec{x},y) (m^2 + \vert \vec{p}_k \vert^2)^{1/4} \bigg( \sqrt{\frac{2}{L_1L_2L_3}} \bigg\vert \int d^3 x'' \; \chi (\vec{x}'',y) e^{i \vec{p}_k \cdot \vec{x}''} \bigg\vert \bigg) - \nonumber \end{equation} \begin{equation} - \frac{1}{ (m^2 + \vert \vec{p}_k \vert^2)^{1/4}} \bigg\{ \frac{2^{N+\frac{1}{2}} \alpha^{N+ \frac{3}{2}}}{ \beta^{N+\frac{1}{2}} L_5 \sqrt{L_1L_2L_3}} \times \nonumber \end{equation} \begin{equation} \times \int d y' \; \bigg[ \bigg(\bigg\vert \int d^3 x''' \; \chi (\vec{x}''',y') e^{i \vec{p}_k \cdot \vec{x}'''} \bigg\vert - \bigg\vert \int d^3 x'''' \; \chi (\vec{x}'''',y) e^{i \vec{p}_k \cdot \vec{x}''''} \bigg\vert \bigg) \times \nonumber \end{equation} \begin{equation} \times \xi(\vec{x},y') \exp \bigg(- \frac{\alpha}{2L_1L_2L_3} \sum_{k'=-N}^N \bigg\vert \int d^3 x''''' \; \chi (\vec{x}''''',y') e^{-i \vec{p}_{k'} \cdot \vec{x}'''''} \bigg\vert^2 \bigg) \nonumber \end{equation} \begin{equation} \times \exp \bigg(\frac{\beta}{2L_1L_2L_3} \sum_{k'=-N}^N \bigg\vert \int d^3 x'''''' \; \chi (\vec{x}'''''',y') e^{-i \vec{p}_{k'} \cdot \vec{x}''''''} \bigg\vert^2 \bigg) \bigg] \bigg\} - \nonumber \end{equation} \begin{equation} - \frac{i}{ (m^2 + \vert \vec{p}_k \vert^2)^{1/4} } \frac{2^{N+\frac{1}{2}} \alpha^{N+ \frac{3}{2}}}{ \beta^{N+\frac{1}{2}} L_5 \sqrt{L_1L_2L_3}}\bigg\vert \int d^3 x''''''' \; \chi (\vec{x}''''''',y) e^{i \vec{p}_k \cdot \vec{x}'''''''} \bigg\vert \times \nonumber \end{equation} \begin{equation} \times \int d y' \; \bigg[ \bigg( \Im \ln \int \chi (\vec{x}'''''''',y') e^{i \vec{p}_k \cdot \vec{x}''''''''} d^3 x'''''''' - \Im \ln \int d^3 x''''''''' \; \chi (\vec{x}''''''''',y) e^{i \vec{p}_k \cdot \vec{x}'''''''''} \bigg) \times \nonumber \end{equation} \begin{equation} \times \xi(\vec{x},y') \exp \bigg(- \frac{\alpha}{2L_1L_2L_3} \sum_{k'=-N}^N \bigg\vert \int d^3 x'''''''''' \; \chi (\vec{x}'''''''''',y') e^{-i \vec{p}_{k'} \cdot \vec{x}''''''''''} \bigg\vert^2 \bigg) \nonumber \end{equation} \begin{equation} \times \exp \bigg(\frac{\beta}{2L_1L_2L_3} \sum_{k'=-N}^N \bigg\vert \int d^3 x''''''''''' \; \chi (\vec{x}''''''''''',y') e^{-i \vec{p}_{k'} \cdot \vec{x}'''''''''''} \bigg\vert^2 \bigg) \bigg] \Bigg\} \label{CreateFinal} \end{equation} The 2D oscillator construction above covers the non-zero momentum modes. The zero-momentum mode, on the other hand, has to be treated separately by means of the 1D oscillator.
By looking at Eq \ref{CreatAnnihilZeroCont} and making appropriate substitutions, we obtain \begin{equation} (a^{(N, \chi, \alpha, \beta)}_0 \xi) (\vec{x},y)= \frac{\sqrt{m}}{2} R_0 (\chi_y) \xi (\vec{x},y) + \frac{1}{\sqrt{m}} ({\cal D}^{(N, \chi, \alpha, \beta)}_{R_0} \xi) (\vec{x},y) \end{equation} \begin{equation} (a_0^{\dagger(N, \chi, \alpha, \beta)} \xi) (\vec{x},y)= \frac{\sqrt{m}}{2} R_0 (\chi_y) \xi (\vec{x},y)- \frac{1}{\sqrt{m}} ({\cal D}^{(N, \chi, \alpha, \beta)}_{R_0} \xi) (\vec{x},y) \end{equation} By substituting Eq \ref{R0(y)} and \ref{dR0Long} this becomes \begin{equation} (a^{(N, \chi, \alpha, \beta)}_0 \xi) (\vec{x},y)= \frac{1}{2} \sqrt{\frac{m}{L_1L_2L_3}} \xi (\vec{x},y) \bigg\vert \int d^3 x \; \chi (\vec{x},y) \bigg\vert + \nonumber \end{equation} \begin{equation} + \frac{2^N \alpha^{N+ \frac{3}{2}}}{ \beta^{N+\frac{1}{2}} L_5 \sqrt{m L_1L_2L_3}} \int d y' \; \bigg[ \bigg(\bigg\vert \int d^3 x' \; \chi (\vec{x}',y') e^{i \vec{p}_0 \cdot \vec{x}'} \bigg\vert - \bigg\vert \int d^3 x'' \; \chi (\vec{x}'',y) e^{i \vec{p}_0 \cdot \vec{x}''} \bigg\vert \bigg) \times \nonumber \end{equation} \begin{equation} \times \xi(\vec{x},y') \exp \bigg(- \frac{\alpha}{2L_1L_2L_3} \sum_{k=-N}^N \bigg\vert \int d^3 x''' \; \chi (\vec{x}''',y') e^{-i \vec{p}_k \cdot \vec{x}'''} \bigg\vert^2 \bigg) \nonumber \end{equation} \begin{equation} \times \exp \bigg(\frac{\beta}{2L_1L_2L_3} \sum_{k=-N}^N \bigg\vert \int d^3 x'''' \; \chi (\vec{x}'''',y') e^{-i \vec{p}_k \cdot \vec{x}''''} \bigg\vert^2 \bigg) \bigg] \label{Annihilate0Final} \end{equation} \begin{equation} (a_0^{\dagger (N, \chi, \alpha, \beta)} \xi) (\vec{x},y)= \frac{1}{2} \sqrt{\frac{m}{L_1L_2L_3}} \xi (\vec{x},y) \bigg\vert \int d^3 x \; \chi (\vec{x},y) \bigg\vert - \nonumber \end{equation} \begin{equation} - \frac{2^N \alpha^{N+ \frac{3}{2}}}{ \beta^{N+\frac{1}{2}} L_5 \sqrt{m L_1L_2L_3}} \int d y' \; \bigg[ \bigg(\bigg\vert \int d^3 x' \; \chi (\vec{x}',y') e^{i \vec{p}_0 \cdot \vec{x}'} \bigg\vert - \bigg\vert \int d^3 x'' \; \chi (\vec{x}'',y) e^{i \vec{p}_0 \cdot \vec{x}''} \bigg\vert \bigg) \times \nonumber \end{equation} \begin{equation} \times \xi(\vec{x},y') \exp \bigg(- \frac{\alpha}{2L_1L_2L_3} \sum_{k=-N}^N \bigg\vert \int d^3 x''' \; \chi (\vec{x}''',y') e^{-i \vec{p}_k \cdot \vec{x}'''} \bigg\vert^2 \bigg) \nonumber \end{equation} \begin{equation} \times \exp \bigg(\frac{\beta}{2L_1L_2L_3} \sum_{k=-N}^N \bigg\vert \int d^3 x'''' \; \chi (\vec{x}'''',y') e^{-i \vec{p}_k \cdot \vec{x}''''} \bigg\vert^2 \bigg) \bigg] \label{Create0Final} \end{equation} \subsection*{5. Dynamics of $\xi (\vec{x},y,t)$} So far we have just given the kinematical definitions of quantum states. Let us now describe the dynamics.
We recall that, in the quantum-mechanical case, the path integral can be generated step by step from \begin{equation} \psi (\vec{x},t) = \int d^3x' \; \psi (\vec{x}', t- \delta t) \exp \bigg( i \bigg\vert \frac{\vec{x}-\vec{x}'}{\delta t} \bigg\vert^2 - i V (\vec{x}) \bigg) \label{QuantumMechanicsPathIntegral} \end{equation} We will now assume a preferred time; the Lagrangian above is then analogous to the integral of $\cal L$ over a spacelike hypersurface, \begin{equation} S (\phi; t) = \int d^3 x \; {\cal L} (\phi; \vec{x}, t) \end{equation} From this, we read off the QFT version of Eq \ref{QuantumMechanicsPathIntegral} as \begin{equation} \psi (\phi^{(N)},t) = \int {\cal D} \phi'^{(N)} \; \bigg\{ \psi (\phi'^{(N)}, t- \delta t) \times \nonumber \end{equation} \begin{equation} \times \exp \bigg[ i \int d^3 x \bigg( \frac{1}{2} \bigg(\frac{\phi^{(N)} (\vec{x}) - \phi'^{(N)} (\vec{x})}{\delta t} \bigg)^2 - \frac{m^2}{2} (\phi^{(N)} (\vec{x}))^2 - \frac{\lambda}{4} (\phi^{(N)} (\vec{x}))^4 \bigg) \bigg] \bigg\} \label{Hypersurfaces1} \end{equation} One should note that we used $\phi^{(N)}$ instead of $\phi$. The reason for this is that, if we were to use $\phi$, we would get infinitely many contributions from arbitrarily high momenta, leading to intractable results. The purpose of $N$ is the same as the purpose of the ultraviolet cutoff $\Lambda$ in QFT calculations. At first glance, one might think that, since we plan to substitute integration over $\phi$ with integration over $y$ per Eq \ref{IntegralConversion}, the theory would be well defined even with $\phi$ being used instead of $\phi^{(N)}$. However, Eq \ref{IntegralConversion} includes $\rho^{(N,\chi, \beta)}$ and, as the notation implies, we still need to know $N$ in order to know $\rho$. If $N$ were infinite, then $\rho$ would be infinitesimal, leading to mathematical ambiguities.
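As a sanity check of Eq \ref{QuantumMechanicsPathIntegral}, one short-time step of the free-particle kernel can be compared against exact evolution in a quantum-mechanical toy. In the sketch below we restore the standard factor $m/2$, the $\delta t$ multiplying the Lagrangian, and the normalization $\sqrt{m/(2\pi i\,\delta t)}$, all of which the schematic equation above suppresses; the Gaussian packet is our own test state ($m=\hbar=1$, $V=0$).

\begin{verbatim}
import numpy as np

dt, m = 0.05, 1.0
xp = np.linspace(-30, 30, 200001)            # integration grid for x'
dx = xp[1] - xp[0]
psi0 = np.pi ** -0.25 * np.exp(-0.5 * xp ** 2)   # packet at time t - dt

def step(x):
    """One free-particle step of the path-integral kernel."""
    kern = np.exp(1j * m * (x - xp) ** 2 / (2 * dt))
    return np.sqrt(m / (2j * np.pi * dt)) * np.sum(kern * psi0) * dx

def exact(x, t):
    """Exact free evolution of the Gaussian: sigma^2 -> sigma^2 + i t."""
    s2 = 1.0 + 1j * t
    return np.pi ** -0.25 * np.exp(-0.5 * x ** 2 / s2) / np.sqrt(s2)

for x in (0.0, 0.5, 1.0):
    print(x, step(x), exact(x, dt))  # the two columns agree closely
\end{verbatim}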
In any case, Eq \ref{PhiN} tells us \begin{equation} \phi^{(N)} (\vec{x}) = \frac{1}{L_1L_2L_3} \sum_{k=-N}^N \bigg( e^{i \vec{p}_k \cdot \vec{x}} \bigg( \int d^3 x' \; \phi (\vec{x}') e^{-i \vec{p}_k \cdot \vec{x}'} \bigg) \bigg) \label{(N)1} \end{equation} and, therefore, \begin{equation} \frac{\phi^{(N)} (\vec{x}) - \phi'^{(N)} (\vec{x})}{\delta t} = \frac{1}{L_1L_2L_3} \sum_{k=-N}^N \bigg( e^{i \vec{p}_k \cdot \vec{x}} \bigg( \int d^3 x' \; \frac{\phi (\vec{x}') - \phi' (\vec{x}')}{\delta t} e^{-i \vec{p}_k \cdot \vec{x}'} \bigg) \bigg) \label{(N)2} \end{equation} By substituting Eq \ref{(N)1} and \ref{(N)2} into Eq \ref{Hypersurfaces1} we obtain \begin{equation} \psi (\phi,t) = \int {\cal D} \phi' \; \bigg\{ \psi (\phi', t- \delta t) \times \nonumber \end{equation} \begin{equation} \times \exp \bigg[ i \int d^3 x \bigg( \frac{1}{2L_1^2L_2^2L_3^2} \bigg(\sum_{k=-N}^N \bigg( e^{i \vec{p}_k \cdot \vec{x}} \int d^3 x' \; \frac{\phi (\vec{x}') - \phi' (\vec{x}')}{\delta t} e^{-i \vec{p}_k \cdot \vec{x}'} \bigg)\bigg)^2 - \nonumber \end{equation} \begin{equation} - \frac{m^2}{2L_1^2L_2^2L_3^2} \bigg(\sum_{k=-N}^N \bigg( e^{i \vec{p}_k \cdot \vec{x}} \bigg( \int d^3 x' \; \phi (\vec{x}') e^{-i \vec{p}_k \cdot \vec{x}'} \bigg) \bigg) \bigg)^2 - \nonumber \end{equation} \begin{equation} - \frac{\lambda}{4L_1^4L_2^4L_3^4} \bigg(\sum_{k=-N}^N \bigg( e^{i \vec{p}_k \cdot \vec{x}} \bigg( \int d^3 x' \; \phi (\vec{x}') e^{-i \vec{p}_k \cdot \vec{x}'} \bigg) \bigg) \bigg)^4 \bigg) \bigg] \bigg\} \end{equation} We define the ``truth value'' operation, denoted by $T$, as \begin{equation} T (True) = 1 \; , \; T (False) = 0 \end{equation} With this notation, after evaluating the outer $d^3 x$ integral, we obtain \begin{equation} \psi (\phi,t) = \int {\cal D} \phi' \; \bigg\{ \psi (\phi', t- \delta t) \times \nonumber \end{equation} \begin{equation} \times \exp \bigg[ i \bigg( \frac{1}{2} \sum_{k_1=-N}^N \sum_{k_2=-N}^N \bigg( T(\vec{p}_{k_1} + \vec{p}_{k_2} = \vec{0}) \times \nonumber \end{equation} \begin{equation} \times \bigg( \int d^3 x' \; \frac{\phi (\vec{x}') - \phi' (\vec{x}')}{\delta t} e^{-i \vec{p}_{k_1} \cdot \vec{x}'} \bigg) \bigg( \int d^3 x' \; \frac{\phi (\vec{x}') - \phi' (\vec{x}')}{\delta t} e^{-i \vec{p}_{k_2} \cdot \vec{x}'} \bigg) \bigg) - \nonumber \end{equation} \begin{equation} - \frac{m^2}{2} \sum_{k_1=-N}^N \sum_{k_2=-N}^N \bigg( T(\vec{p}_{k_1} + \vec{p}_{k_2} = \vec{0}) \bigg( \int d^3 x' \; \phi (\vec{x}') e^{-i \vec{p}_{k_1} \cdot \vec{x}'} \bigg) \bigg( \int d^3 x' \; \phi (\vec{x}') e^{-i \vec{p}_{k_2} \cdot \vec{x}'} \bigg) \bigg) - \nonumber \end{equation} \begin{equation} - \frac{\lambda}{4} \bigg(\sum_{k_1=-N}^N \sum_{k_2=-N}^N \sum_{k_3=-N}^N \sum_{k_4=-N}^N \bigg( T (\vec{p}_{k_1} + \vec{p}_{k_2} + \vec{p}_{k_3} + \vec{p}_{k_4} = \vec{0}) \times \nonumber \end{equation} \begin{equation} \times \bigg( \int d^3 x' \; \phi (\vec{x}') e^{-i \vec{p}_{k_1} \cdot \vec{x}'} \bigg) \bigg( \int d^3 x' \; \phi (\vec{x}') e^{-i \vec{p}_{k_2} \cdot \vec{x}'} \bigg) \times \nonumber \end{equation} \begin{equation} \times \bigg( \int d^3 x' \; \phi (\vec{x}') e^{-i \vec{p}_{k_3} \cdot \vec{x}'} \bigg) \times \bigg( \int d^3 x' \; \phi (\vec{x}') e^{-i \vec{p}_{k_4} \cdot \vec{x}'} \bigg) \bigg) \bigg) \bigg) \bigg] \bigg\} \end{equation} We are now ready to convert the integral over $\phi$ into the integral over $y$.
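Before doing so, we note that the constrained sums above are mechanical to evaluate once the mode integrals $\int d^3 x' \, \phi(\vec{x}')e^{-i\vec{p}_k\cdot\vec{x}'}$ are tabulated. A one-dimensional sketch of the quartic term's bookkeeping (toy field and normalizations ours; the $T(\cdots)$ factor is exactly the momentum-conservation constraint):

\begin{verbatim}
import numpy as np
from itertools import product

N, L1 = 2, 2 * np.pi
x = np.linspace(0, L1, 128, endpoint=False)
phi = 1.3 * np.cos(x) + 0.4 * np.cos(2 * x)     # toy field configuration

# Tabulate c[k] = Int dx phi(x) exp(-i p_k x), with p_k = 2 pi k / L1.
c = {k: np.sum(phi * np.exp(-1j * 2 * np.pi * k * x / L1)) * (L1 / len(x))
     for k in range(-N, N + 1)}

def T(cond):
    return 1.0 if cond else 0.0

# Quartic term: sum over k1..k4 subject to p_k1 + p_k2 + p_k3 + p_k4 = 0.
quartic = sum(T(k1 + k2 + k3 + k4 == 0) * c[k1] * c[k2] * c[k3] * c[k4]
              for k1, k2, k3, k4 in product(range(-N, N + 1), repeat=4))
print(quartic.real)   # constrained mode sum entering the lambda phi^4 term
\end{verbatim}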
Eq \ref{IntegralConversion} tells us that the prescription for this conversion is \begin{equation} \int d^{2N+1} \phi^{(N)} \; f(\phi^{(N)}) \longrightarrow \frac{1}{L_5} \int dy' \; \frac{f(\chi^{(N)} (y'))}{\rho (\chi^{(N)} (y'))} \end{equation} Therefore, we read off \begin{equation} \xi (\vec{x},y,t) = \frac{1}{L_5} \int \frac{dy'}{\rho^{(N, \chi, \beta)} (\chi^{(N)} (y'))} \; \bigg\{ \xi (\vec{x},y', t- \delta t) \times \nonumber \end{equation} \begin{equation} \times \exp \bigg[ i \bigg( \frac{1}{2} \sum_{k_1=-N}^N \sum_{k_2=-N}^N \bigg( T(\vec{p}_{k_1} + \vec{p}_{k_2} = \vec{0}) \times \nonumber \end{equation} \begin{equation} \times \bigg( \int d^3 x' \; \frac{\chi (\vec{x}',y) - \chi (\vec{x}',y')}{\delta t} e^{-i \vec{p}_{k_1} \cdot \vec{x}'} \bigg) \bigg( \int d^3 x' \; \frac{\chi (\vec{x}',y) - \chi (\vec{x}',y')}{\delta t} e^{-i \vec{p}_{k_2} \cdot \vec{x}'} \bigg) \bigg) - \nonumber \end{equation} \begin{equation} - \frac{m^2}{2} \sum_{k_1=-N}^N \sum_{k_2=-N}^N \bigg( T(\vec{p}_{k_1} + \vec{p}_{k_2} = \vec{0}) \bigg( \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_{k_1} \cdot \vec{x}'} \bigg) \bigg( \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_{k_2} \cdot \vec{x}'} \bigg) \bigg) - \nonumber \end{equation} \begin{equation} - \frac{\lambda}{4} \bigg(\sum_{k_1=-N}^N \sum_{k_2=-N}^N \sum_{k_3=-N}^N \sum_{k_4=-N}^N \bigg( T (\vec{p}_{k_1} + \vec{p}_{k_2} + \vec{p}_{k_3} + \vec{p}_{k_4} = \vec{0}) \times \nonumber \end{equation} \begin{equation} \times \bigg( \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_{k_1} \cdot \vec{x}'} \bigg) \bigg( \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_{k_2} \cdot \vec{x}'} \bigg) \times \nonumber \end{equation} \begin{equation} \times \bigg( \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_{k_3} \cdot \vec{x}'} \bigg) \times \bigg( \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_{k_4} \cdot \vec{x}'} \bigg) \bigg) \bigg) \bigg) \bigg] \bigg\} \label{StepPreFinal} \end{equation} Now Eq \ref{rho} tells us that \begin{equation} \rho^{(N, \chi, \beta)} (\chi^{(N)} (y')) = \bigg(\frac{\beta}{2 \pi} \bigg)^{N+ \frac{1}{2}} \exp \bigg(- \frac{\beta}{2L_1L_2L_3} \sum_{k=-N}^N \bigg\vert \int d^3 x' \; \chi (\vec{x}',y') e^{-i \vec{p}_k \cdot \vec{x}'} \bigg\vert^2 \bigg) \end{equation} and, therefore, Eq \ref{StepPreFinal} becomes \begin{equation} \xi (\vec{x},y,t) = \frac{1}{L_5} \int dy' \; \bigg\{ \bigg[ \bigg(\frac{2 \pi}{\beta} \bigg)^{N+ \frac{1}{2}} \exp \bigg( \frac{\beta}{2L_1L_2L_3} \sum_{k=-N}^N \bigg\vert \int d^3 x' \; \chi (\vec{x}',y') e^{-i \vec{p}_k \cdot \vec{x}'} \bigg\vert^2 \bigg) \bigg] \times \nonumber \end{equation} \begin{equation} \times \xi (\vec{x},y', t- \delta t) \; \exp \bigg[ i \bigg( \frac{1}{2} \sum_{k_1=-N}^N \sum_{k_2=-N}^N \bigg( T(\vec{p}_{k_1} + \vec{p}_{k_2} = \vec{0}) \times \nonumber \end{equation} \begin{equation} \times \bigg( \int d^3 x' \; \frac{\chi (\vec{x}',y) - \chi (\vec{x}',y')}{\delta t} e^{-i \vec{p}_{k_1} \cdot \vec{x}'} \bigg) \bigg( \int d^3 x' \; \frac{\chi (\vec{x}',y) - \chi (\vec{x}',y')}{\delta t} e^{-i \vec{p}_{k_2} \cdot \vec{x}'} \bigg) \bigg) - \nonumber \end{equation} \begin{equation} - \frac{m^2}{2} \sum_{k_1=-N}^N \sum_{k_2=-N}^N \bigg( T(\vec{p}_{k_1} + \vec{p}_{k_2} = \vec{0}) \bigg( \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_{k_1} \cdot \vec{x}'} \bigg) \bigg( \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_{k_2} \cdot \vec{x}'} \bigg) \bigg) -
\nonumber \end{equation} \begin{equation} - \frac{\lambda}{4} \bigg(\sum_{k_1=-N}^N \sum_{k_2=-N}^N \sum_{k_3=-N}^N \sum_{k_4=-N}^N \bigg( T (\vec{p}_{k_1} + \vec{p}_{k_2} + \vec{p}_{k_3} + \vec{p}_{k_4} = \vec{0}) \times \nonumber \end{equation} \begin{equation} \times \bigg( \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_{k_1} \cdot \vec{x}'} \bigg) \bigg( \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_{k_2} \cdot \vec{x}'} \bigg) \times \nonumber \end{equation} \begin{equation} \times \bigg( \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_{k_3} \cdot \vec{x}'} \bigg) \times \bigg( \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_{k_4} \cdot \vec{x}'} \bigg) \bigg) \bigg) \bigg) \bigg] \bigg\} \label{StepFinal} \end{equation} If we would like to convert this into a continuum equation, we can use the following tactic: an equation of the form \begin{equation} \xi (\vec{x}, y, t) = f (\xi; \vec{x},y,t- \delta t) \end{equation} can be generated through the continuum equation \begin{equation} \frac{\partial \xi}{\partial t} = \frac{1}{\delta} (- \xi (\vec{x},y,t) + f (\xi; \vec{x},y, t)) \end{equation} where we have replaced $\delta t$ with $\delta$ in order to make it clear that we are dealing with a continuous process, in which $\delta$ is merely a constant of nature, as opposed to a step-by-step process with time interval $\delta t$. Thus, we read off \begin{equation} \frac{\partial \xi}{\partial t} = \frac{1}{\delta} \Bigg\{ - \xi (\vec{x},y,t) + \frac{1}{L_5} \int dy' \; \bigg\{ \bigg[ \bigg(\frac{2 \pi}{\beta} \bigg)^{N+ \frac{1}{2}} \exp \bigg( \frac{\beta}{2L_1L_2L_3} \sum_{k=-N}^N \bigg\vert \int d^3 x' \; \chi (\vec{x}',y') e^{-i \vec{p}_k \cdot \vec{x}'} \bigg\vert^2 \bigg) \bigg] \times \nonumber \end{equation} \begin{equation} \times \xi (\vec{x},y', t) \; \exp \bigg[ i \bigg( \frac{1}{2} \sum_{k_1=-N}^N \sum_{k_2=-N}^N \bigg( T(\vec{p}_{k_1} + \vec{p}_{k_2} = \vec{0}) \times \nonumber \end{equation} \begin{equation} \times \bigg( \int d^3 x' \; \frac{\chi (\vec{x}',y) - \chi (\vec{x}',y')}{\delta} e^{-i \vec{p}_{k_1} \cdot \vec{x}'} \bigg) \bigg( \int d^3 x' \; \frac{\chi (\vec{x}',y) - \chi (\vec{x}',y')}{\delta} e^{-i \vec{p}_{k_2} \cdot \vec{x}'} \bigg) \bigg) - \nonumber \end{equation} \begin{equation} - \frac{m^2}{2} \sum_{k_1=-N}^N \sum_{k_2=-N}^N \bigg( T(\vec{p}_{k_1} + \vec{p}_{k_2} = \vec{0}) \bigg( \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_{k_1} \cdot \vec{x}'} \bigg) \bigg( \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_{k_2} \cdot \vec{x}'} \bigg) \bigg) - \nonumber \end{equation} \begin{equation} - \frac{\lambda}{4} \bigg(\sum_{k_1=-N}^N \sum_{k_2=-N}^N \sum_{k_3=-N}^N \sum_{k_4=-N}^N \bigg( T (\vec{p}_{k_1} + \vec{p}_{k_2} + \vec{p}_{k_3} + \vec{p}_{k_4} = \vec{0}) \times \nonumber \end{equation} \begin{equation} \times \bigg( \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_{k_1} \cdot \vec{x}'} \bigg) \bigg( \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_{k_2} \cdot \vec{x}'} \bigg) \times \nonumber \end{equation} \begin{equation} \times \bigg( \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_{k_3} \cdot \vec{x}'} \bigg) \times \bigg( \int d^3 x' \; \chi (\vec{x}',y) e^{-i \vec{p}_{k_4} \cdot \vec{x}'} \bigg) \bigg) \bigg) \bigg) \bigg] \bigg\} \Bigg\} \label{DynamicsFinal} \end{equation} Finally, the conditions under the sum signs can always be changed to accommodate what we would expect from loop diagrams (where the second-order loops would have higher
momenta than first-order loops if we take the notion of a UV cutoff literally), in which case the theory would no longer match $\phi^{(N)}$ (rather, it might correspond to some combination of different $N$-s, depending on what restrictions we choose), but it would still be an equally well-defined theory. \subsection*{6. Conclusion} In this paper we have shown how an arbitrary multiparticle state can be described as a pair of two classical fields, $\chi$ and $\xi$, living in ordinary space with a single extra dimension. The field $\chi$ is a hidden-variable field that has nothing to do with the actual state; instead, it determines the coarse graining. On the other hand, $\xi$ indeed describes the physical state, per Eq \ref{xiGeneralFinal}. Furthermore, creation and annihilation operators were described as taking one ordinary function to another ordinary function (see Eq \ref{AnnihilateFinal}, \ref{CreateFinal}, \ref{Annihilate0Final}, \ref{Create0Final}). Furthermore, an ensemble of states is also described by one single wave function, as given in Eq \ref{DensityFinal}. In other words, we were able to overcome both the problem of many particles and the problem of an ensemble of states (despite the fact that these are very different issues), and to describe both in terms of a single wave function in ordinary space. Finally, we defined a dynamics of $\xi (\vec{x},y,t)$, per Eq \ref{DynamicsFinal}, that takes a ``classical'' form in the sense that it pertains to $\xi (\vec{x},y,t)$, yet is non-local. If obeyed, it will result in the states defined in the other equations we quoted obeying some version of coarse-grained QFT. Our approach was based on coarse graining. In the future, it could be made more precise by means of the space-filling-curve constructions given in \cite{SpaceFillingCurve1}, \cite{SpaceFillingCurve2} and \cite{SpaceFillingCurve3}. However, even if we did that, we would still have to cut off the momentum, since said constructions work only in finitely many dimensions. And, since we are accepting the fact that QFT is not precise in one way, we might as well accept that it is not precise in some other way as well -- particularly since a random curve that fills the space up to some coarse graining is a lot more natural than the carefully designed curves proposed in \cite{SpaceFillingCurve1}, \cite{SpaceFillingCurve2} and \cite{SpaceFillingCurve3}. Nevertheless, it might be interesting to investigate the latter in a future project, just to see whether or not we would be able to make rigorous some of the statements that were more hand-waving in this paper. One weakness of our approach is Occam's razor, combined with the fact that no new predictions are made. After all, we do not explain the collapse of the wave function: we simply redefine quantum states, and the existing collapse models would then have to be readjusted. This being the case, a lot of people might not like the fact that the equations look a lot more complicated and unnatural than their conventional counterparts while the predictions are identical. From my point of view, however, the important change is the ontology, which I view as worth it as an end unto itself, since that -- as opposed to anything else -- is what I view as the key difference between quantum and classical. Another objection the reader might have is: how do I know that the proposed model is what takes place in nature, as opposed to some other, equally complicated yet different, construction? The answer is that I don't know.
But what I set out to show is that there is no reason to claim that classical logic doesn't work in quantum mechanics; to this end I gave a counter-example of how classical logic ``might'' work, as presented in this paper. Of course, the reader can think of other counter-examples, but that will only strengthen my point.
\section{Introduction} The interplay of strong spin-orbit coupling and electronic correlations is at the heart of many recent developments in condensed-matter physics, involving, e.g., correlated topological insulators, fractional Chern insulators, and spin-orbit Mott insulators \cite{pesin10,hohenadler13,bergholtz13,kim08,kim09}. On the materials side, oxides with partially filled 5d shells, such as iridates and osmates, are considered promising candidates in order to realize the theoretically proposed phenomena. In this context, the insulating iridates A$_2$IrO$_3$\ (A=Na,Li) have attracted enormous attention over the past few years \cite{Sin10,Liu11,Choi12,Sin12,Ye12,Com12}. In these materials, the Ir$^{4+}$ ions are arranged in a layered honeycomb-lattice structure. Due to the combined effect of strong spin-orbit coupling and Coulomb interactions, the Ir 5d$^{5}$ states, with one hole in the t$_{2g}$ manifold, have been proposed to realize $J_{{\rm eff}}=1/2$ spin-orbit Mott insulators \cite{Shi09,Jac09}, similar to other layered iridates \cite{kim08,kim09}. Furthermore, Ref.~\onlinecite{Cha10} suggested that the magnetism of the $J_{{\rm eff}}=1/2$ moments is dominated by strongly spin-anisotropic compass interactions, which by themselves lead to the spin-liquid model on the honeycomb lattice proposed by Kitaev \cite{Kit06}. Supplemented by an additional spin-isotropic Heisenberg interaction, the resulting Heisenberg-Kitaev (HK) model has been shown to host both spin-liquid and conventionally ordered phases \cite{Cha10,Jia11,Reu11,Bha12,Cha13,perkins12,perkins13}. Experimentally, both {Na$_2$IrO$_3$} and {Li$_2$IrO$_3$} have been found to undergo a magnetic ordering transition at $T_{\rm N}\simeq15$\,K \cite{Sin10,Liu11,Sin12}. In {Na$_2$IrO$_3$} the low-temperature spin configuration has been identified as collinear ``zigzag'' order \cite{Choi12,Ye12}, with ferromagnetic zigzag chains arranged antiferromagnetically in the honeycomb plane. This state is indeed a ground state of the HK model, where it results from a competition of antiferromagnetic Kitaev and ferromagnetic Heisenberg interactions \cite{Cha13}. Alternatively, Heisenberg and HK models with longer-range interactions have been considered: specifically, a Heisenberg $J_1$-$J_2$-$J_3$ model with sizeable second- and third-neighbor couplings has been found to describe the available data as well \cite{Kimchi11,Choi12}. Finally, a more itinerant scenario in terms of molecular orbitals has also been proposed \cite{Maz12}, although a detailed description of the magnetic properties in this model is lacking to date. In this paper, we propose magnetic depletion, i.e., the random substitution of magnetic Ir$^{4+}$ by non-magnetic ions, as a powerful tool to study the magnetism of {A$_2$IrO$_3$} and to discriminate between the various proposed scenarios for magnetism. A key insight is that, within local-moment models, depletion will inevitably turn the zigzag ordered state into a spin (or spin-orbit) glass: Both the HK and $J_1$-$J_2$-$J_3$ models are frustrated, and the combination of disorder and frustration generically causes spin-glass behavior \cite{villain79}. We calculate the freezing temperature, $T_{\rm g}(x)$, as a function of the doping level $x$ and show that its behavior across the site-percolation threshold, $x_p=30.3\%$, strongly differs between the HK and $J_1$-$J_2$-$J_3$ models, Fig.~\ref{fig:tg}.
\begin{figure}[t] \includegraphics[width=0.48\textwidth]{fig1.pdf} \caption{ \label{fig:tg} Ordering/freezing temperature extracted from our MC simulations, shown as $T_{\rm g}(x)/T_{\rm g}(x=0)$ as a function of the doping level $x$, for the HK (top) and $J_1$-$J_2$-$J_3$ (bottom) models for two different values of the interlayer coupling $J_{\perp}/J$. The vertical dashed line locates the 2d percolation threshold $x_p$; the horizontal dashed line marks temperatures below which we are unable to reach equilibrium in our MC simulations. Solid lines are polynomial fits through the data; dotted lines are extrapolations. } \end{figure} \section{Models} We focus on two models which have been considered to describe the zigzag ordered state of Na$_2$IrO$_3$. The first is the HK model, \begin{equation} \mathcal{H}= J \sum_{\left\langle ij\right\rangle} \vec{S}_{i}\cdot\vec{S}_{j} + 2K \sum_{\left\langle ij\right\rangle _{\gamma}}S_{i}^{\gamma}S_{j}^{\gamma}, \label{hk} \end{equation} the second the $J_1$-$J_2$-$J_3$ model \begin{equation} \mathcal{H}=J_{1}\sum_{\left\langle ij\right\rangle }\vec{S}_{i}\cdot\vec{S}_{j}+J_{2}\sum_{\left\langle \left\langle ik\right\rangle \right\rangle }\vec{S}_{i}\cdot\vec{S}_{k}+J_{3}\sum_{\left\langle \left\langle \left\langle il\right\rangle \right\rangle \right\rangle }\vec{S}_{i}\cdot\vec{S}_{l}. \label{jt} \end{equation} Here, the sums run over pairs of nearest, second, and third neighbor sites, respectively, while $\gamma=x$, $y$, $z$ in Eq.~\eqref{hk} labels the three different links for each spin in a honeycomb lattice. The parameter regimes of interest are defined through the presence of zigzag magnetic order as realized in {Na$_2$IrO$_3$} \cite{Choi12,Ye12}. The HK model's couplings may be parametrized as $J=A\cos\phi$ and $K=A\sin\phi$, where $A$ is an overall energy scale. Its full phase diagram was first mapped out in Ref.~\onlinecite{Cha13}, with the zigzag phase occurring for $0.51\pi<\phi<0.90\pi$; in the following we choose $\phi=0.62\pi$. For the $J_1$-$J_2$-$J_3$ model, sizeable $J_{2}$ and $J_{3}$ are required in order to have a zigzag magnetic ground state \cite{rastelli79,lhui01,li12}. Following Ref.~\onlinecite{Choi12}, we choose $J_{2}=0.8J_{1}$ and $J_{3}=0.9J_{1}$. As will become clear below, the magnetic properties of the depleted HK and $J_1$-$J_2$-$J_3$ models depend sensitively on the presence of a magnetic coupling between the layers. For A$_2$IrO$_3$, no quantitative information on such coupling is available at present; it is often assumed to be small due to the A-B-type stacking of the honeycomb layers. Here we will account for the 3d character by considering a layered model with A-A stacking and a small vertical (unfrustrated) Heisenberg coupling $J_{\perp}$; in application to A$_2$IrO$_3$\ this is to be understood as an effective coupling between second-neighbor layers. \section{Monte-Carlo simulations} We study the models \eqref{hk} and \eqref{jt} using classical Monte Carlo (MC) simulations for unit-length spins on lattices of size $L\times L\times L_{z}$, typically with $L_{z}=L/2$ and periodic boundary conditions. The honeycomb layers are spanned by the primitive lattice vectors $\vec{a}_{1\left(2\right)}=\left(3/2,\pm\sqrt{3}/2\right)$, with each unit cell containing two sites. Depletion is simulated by randomly removing a fraction $x$ of spins, with $x$ varying between 5\% and 50\%; the total number of spins is then $N_{s}=\left(1-x\right)\times2L^{2}L_{z}$.
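For concreteness, the lattice bookkeeping just described can be summarized in a short sketch (illustrative only, not our production code): it builds one periodic honeycomb layer, labels each bond by its Kitaev type $\gamma$ using one common convention (only consistency matters for the model), depletes a random fraction $x$ of sites, and evaluates the classical energy of Eq.~\eqref{hk} for a random unit-spin configuration.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
L, x_dep = 12, 0.2                                 # linear size, depletion
J, K = np.cos(0.62 * np.pi), np.sin(0.62 * np.pi)  # phi = 0.62 pi

def site(i, j, s):          # (cell i, cell j, sublattice s=0/1) -> index
    return 2 * (i * L + j) + s

# Each A site (s = 0) has three bonds, one of each Kitaev type gamma:
bonds = []
for i in range(L):
    for j in range(L):
        a = site(i, j, 0)
        bonds.append((a, site(i, j, 1), 2))            # z bond
        bonds.append((a, site((i - 1) % L, j, 1), 0))  # x bond
        bonds.append((a, site(i, (j - 1) % L, 1), 1))  # y bond

occ = rng.random(2 * L * L) > x_dep                # random depletion
S = rng.normal(size=(2 * L * L, 3))
S /= np.linalg.norm(S, axis=1, keepdims=True)      # unit classical spins

E = sum(occ[a] * occ[b] * (J * S[a].dot(S[b]) + 2 * K * S[a, g] * S[b, g])
        for a, b, g in bonds)
print(E / occ.sum())                               # energy per remaining spin
\end{verbatim}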
We perform equilibrium MC simulations using single-site updates with a combination of the heat-bath and microcanonical (or over-relaxation) methods, with typically $10^{6}$ MC steps per spin, and combine this with the parallel-tempering algorithm \cite{mcdetails,av12a}. Disorder averages are taken over $N_{\rm rl}$ samples, with $N_{\rm rl}$ ranging from $1000$ for $L=8$ to $N_{\rm rl}=50$ for $L=20$. Below we quote energies in units of $J\equiv J_{1}$, the nearest-neighbor Heisenberg exchange. We extract the ordering (or freezing) temperature $T_{\rm g}$ from the crossing points of $\xi(T)/L$ for different $L$, according to the scaling law $\xi/L=f(L^{1/\nu}(T-T_{\rm g}))$, where $\xi$ is a correlation length, $f(x)$ a scaling function, and $\nu$ the correlation length exponent. This procedure is especially suitable to detect spin-glass freezing, as shown in previous studies of the 3d Edwards-Anderson model \cite{lee_young1,campos06,viet09}. The main source of numerical error in $T_{\rm g}$ is the $L\to\infty$ extrapolation of the crossing-point location, which is required for small $L$. The magnetic correlation length $\xi_{\rm S}$ is calculated from a fit of the static magnetic structure factor, $S(\vec{q})$, close to the ordering wavevector $\vec{Q}$ (the three independent $\vec{Q}$ vectors corresponding to the zigzag order are $(\vec{b}_{1}+\vec{b}_{2})/2$, $\vec{b}_{1}/2$, and $\vec{b}_{2}/2$, where $\vec{b}_{1(2)}=2\pi(1/3,\pm1/\sqrt{3})$ are the reciprocal lattice vectors). Analogously, the spin-glass correlation length $\xi_{\rm SG}$ is obtained from the spin-glass susceptibility $\chi_{SG}(\vec{q})=N_{s}\sum_{\alpha,\beta}\big[\left\langle \left|q^{\alpha,\beta}\left(\vec{q}\right)\right|^{2}\right\rangle \big]_{av}$, where $q^{\alpha,\beta}\left(\vec{q}\right)=N_{s}^{-1}\sum_{i}S_{i}^{\alpha\left(1\right)}S_{i}^{\beta\left(2\right)}\mbox{exp}\left(i\vec{q}\cdot\vec{r}_{i}\right)$ is the spin-glass order parameter. Here $\alpha$ and $\beta$ are spin components, $^{(1,2)}$ denote identical copies of the system (``replicas'') with the same disorder configuration, $\langle\cdots\rangle $ denotes MC average, and $[\cdots]_{av}$ average over disorder. \section{Clean HK model} The 2d disorder-free HK model has been studied by various numerical methods \cite{Cha10,Cha13,Reu11,perkins12,perkins13}. A comparison of phase diagrams shows that the classical-spin HK model reproduces \cite{perkins13} all phases of the spin-1/2 model except for the quantum spin liquid \cite{Cha13}, with $T=0$ phase boundary locations in reasonable agreement between quantum and classical models. The results in Refs.~\onlinecite{perkins12,perkins13} also indicate two thermal transitions upon cooling to any of the ordered low-$T$ phases. The system enters a critical phase at $T_{\rm u}$, with power-law spin correlations, and a state with true long-range order is reached only below $T_{\rm l} < T_{\rm u}$. This behavior parallels that of a 2d six-state clock model \cite{clock}, as suggested by the sixfold degeneracy of the ordered states in the HK model. For selected values of $\phi$, we have verified that our MC simulations, applied to the 2d HK model ($J_{\perp}=0$), reproduce the results of Ref.~\onlinecite{perkins13}. In particular, the specific heat, Fig.~\ref{fig:clean}(a), shows a broad peak far above both $T_{\rm u}$ and $T_{\rm l}$ while the singularity at the transitions is weak -- this reflects the presence of strong fluctuations in the 2d system.
Nevertheless, there is a well-defined crossing point in $\xi_{\rm S}/L$ at $T_{\rm l}$ where long-range order sets in, Fig.~\ref{fig:clean}(c). We have then switched on the inter-layer coupling $J_{\perp}$ and monitored the evolution of the transition temperature, Fig.~\ref{fig:clean}(d). As expected on general grounds, the critical intermediate phase of the 2d system disappears for finite $J_{\perp}$, such that there is only a single thermal phase transition at $T_{\rm N}$, which now displays a pronounced specific-heat singularity, Fig.~\ref{fig:clean}(b). For $J_{\perp}/J\gtrsim10^{-3}$, $T_{\rm N}$ is larger than both $T_{\rm l}$ and $T_{\rm u}$ of the 2d system, and our data is compatible with $T_{\rm N}\to T_{\rm l}$ as $J_{\perp}\to0$, although finite-size effects hamper an accurate determination of $T_{\rm N}$ for $J_{\perp}/J<10^{-4}$, Fig.~\ref{fig:clean}(d). \section{Clean $J_1$-$J_2$-$J_3$ model} We have also performed corresponding simulations for the $J_1$-$J_2$-$J_3$ model. Here, $T_{\rm N}\to0$ for $J_{\perp}\to0$ due to the assumed continuous spin symmetry. For $J_{\perp}/J=10^{-2}\left[10^{-3}\right]$ we have $T_{\rm N}/J=0.446\left(3\right)\left[0.42\left(1\right)\right]$. \begin{figure}[t] \includegraphics[width=0.49\textwidth]{fig2.pdf} \caption{ \label{fig:clean} MC results for the ordering into the zigzag state of the clean HK model for several system sizes $L$. (a,c): 2d. (b,d): 3d. (a,b): Specific heat as a function of the temperature $T$, in (b) with $J_{\perp}/J=10^{-2}$. (c): $\xi_{\rm S}/L$ as a function of $T$. The vertical dashed line indicates $T_{\rm l}$. (d): $T_{\rm N}(J_{\perp}/J)$, also showing $T_{\rm u}$ and $T_{\rm l}$ of the 2d HK model; the value of $T_{\rm u}$ was extracted from Ref.~\onlinecite{perkins13}. } \end{figure} \section{Magnetic depletion} We now describe our central results, obtained for the depleted HK and $J_1$-$J_2$-$J_3$ models, with a concentration $x$ of randomly placed vacancies. Since both models are frustrated, the introduction of vacancies generates local non-collinearities in the spin order \cite{henley89}, which ultimately leads to spin-glass behavior \cite{villain79}. \begin{figure}[t] \includegraphics[width=0.48\textwidth]{fig3.pdf} \caption{ \label{fig:dirty} MC results for the ordering into the spin-glass state for $J_{\perp}/J=10^{-2}$. (a,c): HK model at $x=30\%$. (b,d): $J_1$-$J_2$-$J_3$ model at $x=35\%$. (a) Specific heat as a function of the temperature $T$. (b) $\xi_{\rm S}/L$ as a function of $T$. The arrows indicate the crossing points of the curves for different pairs of $L$, displaying a clear downward trend with increasing $L$. Inset: Bragg intensity $S(\vec{Q}/N_{s})$ extrapolated to $T\to 0$ as a function of $1/N_{s}$. (c,d) $\xi_{\rm SG}/L$ as a function of $T$. Inset: Scaling plot with $\xi_{\rm SG}/L$ as a function of $(T-T_{\rm g})L^{1/\nu}$. The vertical dashed lines indicate the glass temperature $T_{\rm g}$, as determined in (c) and (d). } \end{figure} We have first studied the 2d case ($J_{\perp}=0$) and found -- in both models and for any $x\geq5\%$ -- indications of neither conventional nor spin-glass order at finite temperature. This is expected: conventional order is suppressed, due to the combination of disorder and frustration, in favor of spin-glass magnetism. However, the glass temperature is strictly zero in two dimensions \cite{Bray84,fischer} even in the case of Ising symmetry. For finite interlayer coupling the situation changes, with sample results shown in Fig.~\ref{fig:dirty}.
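For reference, the arithmetic behind the $\xi/L$ curves can be summarized by the standard second-moment estimator, which relates $\xi$ to the structure factor at the ordering wavevector $\vec{Q}$ and at the neighboring point $\vec{Q}+\vec{q}_{\rm min}$ with $|\vec{q}_{\rm min}|=2\pi/L$. The synthetic Ornstein-Zernike input in the sketch below merely stands in for measured MC data; a genuine crossing analysis uses the measured $S(\vec{q})$ at each temperature.

\begin{verbatim}
import numpy as np

def xi_over_L(S_Q, S_Q1, L):
    """Second-moment correlation length in units of L, from the structure
    factor at the ordering vector Q and at Q + q_min, |q_min| = 2 pi / L."""
    qmin = 2 * np.pi / L
    return np.sqrt(max(S_Q / S_Q1 - 1.0, 0.0)) / (2 * L * np.sin(qmin / 2))

# Ornstein-Zernike toy input, S(q) ~ 1 / (q^2 + 1/xi^2): the estimator
# recovers xi/L, which shrinks with L for any fixed, finite xi.
for L in (8, 12, 16, 20):
    for xi in (5.0, 50.0):          # shorter- vs longer-range correlated
        q = 2 * np.pi / L
        S0, S1 = xi ** 2, 1.0 / (q ** 2 + xi ** -2)
        print(L, xi, xi_over_L(S0, S1, L))
\end{verbatim}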
While conventional long-range order is absent for any $x\geq 5\%$, spin-glass order emerges instead at low $T$. The latter is signified by a well-defined common crossing point in $\xi_{\rm SG}/L$ and a corresponding scaling, Fig.~\ref{fig:dirty}(c)-(d) \cite{nu}. In contrast, existing crossing points of $\xi_{\rm S}/L$ display a systematic downward shift with increasing $L$, indicative of short-range zigzag spin correlations, Fig.~\ref{fig:dirty}(b). We note that we do not reach the limit $L\gg\xi_{\rm S}(T=0)$ where crossing points would be absent entirely. Short-range magnetic order also manifests itself in the specific heat, Fig.~\ref{fig:dirty}(a). The peak in $C(T)$ is broad and occurs at a temperature considerably larger than the freezing temperature (here $T_{\rm peak}\approx2T_{\rm g}$), indicating that this short-range order builds up at temperatures considerably higher than $T_{\rm g}$. We stress that this behavior is a hallmark of glassy systems \cite{av12a,fischer}, and it is, in principle, disconnected from the non-trivial behavior of the 2d disorder-free HK model \cite{perkins12}, Fig.~\ref{fig:clean}(a). To account for the possibility of a different (non-zigzag) dilution-induced magnetic ground state, we monitored $S(\vec{q})$ in reciprocal space, but (within our resolution) we detected peaks only at the $\vec{Q}$ vectors corresponding to the zigzag order. However, these peaks grow slower than the system size, Fig.~\ref{fig:dirty}(b), again indicating static short-range order with a vanishing magnetic order parameter. \begin{figure}[t] \includegraphics[width=0.49\textwidth]{fig4} \caption{\label{fig:spatial} Sample ground-state spin configuration of the classical HK model at $x=20\%$ depletion, with one $L=12$ layer shown. The arrows denote the $x$ and $z$ components of the $\vec{S}_{i}$; the circles indicate the vacancy positions. The arrow lengths indicate the weight of the projection onto the $x-z$ plane and the colors the in-plane orientation. Short-range zigzag order with glassy domain formation is visible. Inset: Ideal zigzag order with the spins aligned along $S_{i}^{z}$. } \end{figure} \section{Small doping and glassiness} In both the Heisenberg-Kitaev and $J_1$-$J_2$-$J_3$ models, we find a single vacancy to produce anticollinear states \cite{henley89}. Multiple vacancies have a somewhat different effect in the two models: In the Heisenberg-Kitaev case, the vacancies locally select specific stripe orientations due to spin-orbit coupling \cite{trousselet11}, causing domains with different stripe orientations to coexist. In the $J_1$-$J_2$-$J_3$ model, instead, the effect of long-range distortions of the spin pattern is more prominent, due to the presence of gapless bulk modes. Remarkably, for the vacancy concentrations of interest, $x\ge 20\%$, the spin configurations we observe in both models are virtually indistinguishable and are characterized by a short-range domain structure, Fig.~\ref{fig:spatial}. Based on our MC data, we are unable to decide whether long-range order is destroyed in favor of spin-glass order at infinitesimal $x$ or at a finite critical $x_c$ (with $x_c<5\%$). We leave a more detailed characterization of the small-doping behavior for future work. \section{Ordering temperature and percolation} An easily accessible quantity is the ordering (or freezing) temperature $T_{\rm g}$ as a function of $x$.
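Since the following discussion leans on the honeycomb site-percolation threshold, a self-contained numerical estimate of $x_p$ may be useful (a standard union-find construction, independent of our MC code; the brick-wall lattice representation, open boundaries, and parameters are illustrative):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)

def find(parent, a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path compression
        a = parent[a]
    return a

def spans(L, x):
    """True if occupied sites (fraction 1-x) connect column 0 to column
    L-1 of an L x L honeycomb layer (brick-wall form, open boundaries)."""
    occ = rng.random((L, L, 2)) > x
    idx = lambda i, j, s: 2 * (i * L + j) + s
    parent = list(range(2 * L * L))
    def union(a, b):
        parent[find(parent, a)] = find(parent, b)
    for i in range(L):
        for j in range(L):
            if not occ[i, j, 0]:
                continue
            for i2, j2 in ((i, j), (i - 1, j), (i, j - 1)):  # 3 neighbors
                if 0 <= i2 and 0 <= j2 and occ[i2, j2, 1]:
                    union(idx(i, j, 0), idx(i2, j2, 1))
    left = {find(parent, idx(i, 0, s)) for i in range(L) for s in (0, 1)
            if occ[i, 0, s]}
    right = {find(parent, idx(i, L - 1, s)) for i in range(L) for s in (0, 1)
             if occ[i, L - 1, s]}
    return bool(left & right)

for x in (0.25, 0.30, 0.35):
    print(x, np.mean([spans(24, x) for _ in range(200)]))
\end{verbatim}

The spanning probability drops steeply around $x\approx30\%$, consistent with the known threshold $x_p=30.3\%$ (up to finite-size rounding at this modest $L$).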
While one generally expects that $T_{\rm g}$ decreases with increasing $x$, the behavior at large $x$ contains information on the nature of the magnetic couplings: For a layered local-moment system with nearest-neighbor couplings, $T_{\rm g}$ will diminish near the threshold $x_p$ for 2d site percolation, because for $x>x_p$ the layers fragment into disconnected spin clusters, and for Heisenberg symmetry $T_{\rm g}(x)/T_{\rm g}(x=0)$ will vanish as $x\to x_p$ in the limit of small interlayer coupling. In contrast, in systems with longer-range magnetic couplings, $T_{\rm g}$ will stay finite across $x_p$ \cite{percfoot}. The parent compounds of cuprate superconductors beautifully exemplify this physics: the N\'eel temperature in Zn-doped La$_{2}$CuO$_{4}$ vanishes essentially at the square-lattice site-percolation threshold of $x_p=40.7\%$ \cite{greven02}, which proves that the cuprate magnetism is dominated by the nearest-neighbor coupling. Our results for $T_{\rm g}(x)$ are shown in Fig.~\ref{fig:tg}. As expected from the above discussion, $T_{\rm g}$ of the nearest-neighbor HK model rapidly drops towards the honeycomb-lattice threshold $x_p=30.3\%$ and becomes smaller than our lowest simulation temperature ($J/80$) for $x\gtrsim32\%$ at $J_\perp/J=10^{-2}$. (Note that, due to the finite $J_{\perp}$, $T_{\rm g}$ is expected to be non-vanishing up to the 3d percolation threshold; however, it is undetectably small for $x\gtrsim32\%$.) For smaller interlayer coupling this apparent vanishing of $T_{\rm g}(x)$ occurs at even smaller $x$. In contrast, $T_{\rm g}$ of the $J_1$-$J_2$-$J_3$ model continues its approximately linear variation with $x$ across $x_p$ and extrapolates to our lowest simulation temperature at a much larger doping level of $x\approx50\%$ \cite{triang}. For both models, $T_{\rm g}(x)/T_{\rm g}(x\!=\!0)$ diminishes with decreasing $J_\perp/J$, and small $J_\perp/J$ induces a curvature in $T_{\rm g}(x)$ which is particularly pronounced at small $x$. \section{Summary} We have studied the magnetism of local-moment models for {A$_2$IrO$_3$} under magnetic depletion. A spin-orbit glass, with zigzag short-range order, emerges generically from the combination of strong spin-orbit coupling, frustration, and disorder. We have determined the glass (or freezing) temperature $T_{\rm g}$ as a function of the doping level $x$, which at large doping differs qualitatively between the HK and $J_1$-$J_2$-$J_3$ models, Fig.~\ref{fig:tg}(a). We thus propose to employ magnetic depletion, using dopants with magnetically inert $d$ shells, as a tool to assess the importance of longer-range magnetic couplings in the {A$_2$IrO$_3$} compounds: If the experimental $T_{\rm g}$ were found to vanish near $x_p$, this would strongly hint \cite{itfoot} at short-range HK physics being realized in {A$_2$IrO$_3$}, as originally proposed in Refs.~\onlinecite{Cha10,Cha13}. Conversely, the absence of such vanishing would imply significant longer-range interactions. \textit{Note added.} Very recent experiments \cite{geg14}, using non-magnetic Ti dopants substituting for Ir, show significant differences between depleted {Na$_2$IrO$_3$} and {Li$_2$IrO$_3$} across the percolation threshold. \acknowledgments We thank J. van den Brink, P. Gegenwart, P. Horsch, G. Khaliullin, and A. Rosch for discussions. The computations were partially performed at the Center for Information Services and High Performance Computing (ZIH) at TU Dresden.
This research was supported by the DFG through FOR 960 and GRK 1621 as well as by the Helmholtz Association through VI-521. E.C.A. was also partially supported by FAPESP.
\section{Introduction \label{s:intro}} Recent experiments with radioactive ion beams have opened a new era in nuclear physics by providing the possibility to study light nuclei far from stability. Indeed, the availability of radioactive ion beams enabled the discovery of halo nuclei \cite{Tanihata85a}. A typical example is the neutron halo in the nucleus $^{11}$Li, revealed as a consequence of its very large interaction radius, deduced from the measured interaction cross sections of $^{11}$Li with various target nuclei \cite{Tanihata85b,Tanihata88,Mittig87}. The halo of the nucleus extends its matter distribution to a large radius. Based on the early data \cite{Tanihata85b}, the hypothesis that neutron pairing plays an important role for the stability of nuclei near the drip line was put forward in Refs.~\cite{Hansen87,Migdal73}; in particular, the direct link between the large matter radius and the weak $2n$ binding in $^{11}$Li was attributed to its configuration as a $^{9}$Li core coupled to a di-neutron. The experiments that give evidence of the existence of a halo in this nucleus are related not only to measurements of the total reaction cross section for $^{11}$Li projectiles but also to the momentum distributions of the $^{9}$Li or neutron fragments following the breakup of $^{11}$Li at high energies \cite{Kobayashi88,Bertsch89,Anne90,Esbensen91}, e.g., the process $^{11}$Li+$^{12}$C at $E=800$ MeV/nucleon in Ref.~\cite{Kobayashi88}. We also mention the lower-energy experiments at $E=66$ MeV/nucleon on the scattering of $^{11}$Li on $^{9}$Be, $^{93}$Nb, and $^{181}$Ta \cite{Orr92} and of $^{11}$Li on a wide range of nuclei from $^{9}$Be to $^{238}$U \cite{Orr95}. It was shown that the momentum distribution of the breakup fragments has a narrow peak, much narrower than that observed in the fragmentation of well-bound nuclei. This property has been interpreted (e.g., \cite{Baye2010,Barranco96,Hencken96,Bertulani2004,Bertulani92,Ershov2004,Bertulani2002}) as being related to the very large extension of the wave function, as compared to that of the core nucleus, i.e., to the existence of the nuclear halo. As pointed out in Ref.~\cite{Bertulani92}, the longitudinal component of the momentum (taken along the beam or $z$ direction) gives the most accurate information on the intrinsic properties of the halo and is insensitive to details of the collision and the size of the target. The differential cross sections for small-angle proton elastic scattering on Li isotopes at energies near 700 MeV/nucleon were measured in inverse kinematics with secondary nuclear beams at GSI (Darmstadt) \cite{Dobrovolsky2006}. They were analyzed using the Glauber theory, and information on the nuclear matter density distributions was extracted. It was supposed that the two valence neutrons in $^{11}$Li, which form the halo, can move in a wide region far from the $^{9}$Li core, which is related to the small two-neutron separation energy ($\sim 0.3$ MeV). The idea of the existence of a two-neutron halo in $^{11}$Li was experimentally verified in measurements and studies of the differential cross sections of $^{11}$Li$+p$ elastic scattering in the energy range 60--75 MeV/nucleon \cite{Moon92,Korsh97c,Korsh96}. The data analysis at 62 MeV/nucleon \cite{Moon92} showed that the adjusted phenomenological Woods-Saxon (WS) potential has a shallow real part and an imaginary part with a long tail.
In Refs.~\cite{Korsh97c,Korsh96} the data at 65--75 MeV/nucleon were analyzed using the parameter-free cluster-orbital shell-model approximation (COSMA) \cite{Zhukov93}, and the conclusion was drawn that the $^{11}$Li$+p$ scattering is mainly determined by scattering on the $^{9}$Li core. The calculations of the $^{11}$Li$+p$ differential cross sections in the energy range $E<100$ MeV/nucleon performed in various works (e.g., Refs.~\cite{Suzuki93,Kohno93,Chaudhuri94,Kanungo97,Kim2001}) differ among themselves in the assumptions of how the $^{11}$Li$+p$ optical potential is to be constructed. Most of them use a simple folding approach to the real part of the OP (ReOP) without accounting for the exchange terms and introduce different forms of effective nucleon-nucleon (NN) interactions. To calculate the folding potentials, the constituent $^{9}$Li+$2n$ cluster model was usually employed, in which the $^{11}$Li density consists of two separate parts taken in explicit forms. Various suggestions were made for the imaginary part of the OP (ImOP), such as WS and Gaussian forms or forms calculated within the $t$-matrix method. Then, the cross sections were computed numerically by using the eikonal approximation or starting from the Glauber multiple-scattering theory. A more complicated model of $^{11}$Li, treated as a $^{9}$Li+$n$+$n$ three-body system, was developed in Ref.~\cite{Crespo96}, where the effects of the halo distribution in $^{11}$Li, corresponding to different parts of the three-body wave function, are manifested in the elastic cross section. Generally, here we would like to outline the advantages of the microscopic analyses using the coordinate-space $g$-matrix folding method (e.g., Ref.~\cite{Amos2005}), as well as of works (e.g., Ref.~\cite{Avrigeanu2000}) where the ReOP is microscopically calculated using effective NN interactions within a folding approach \cite{Satchler79,Khoa1993,Khoa2000,Khoa97}, including also the exchange terms. In the recent works \cite{Hassan2009,Farag2012} the $^{11}$Li$+p$ elastic scattering cross sections were analyzed using a folding procedure and effective NN forces to calculate the real OP, taking into account only its direct part but not the exchange one. In Ref.~\cite{Hassan2009} the volume ImOP was taken either in a WS form or in the form of the direct folded ReOP, and in Ref.~\cite{Farag2012} an application of the microscopic OP \cite{Lukyanov2004a,Shukla2003} developed on the basis of the high-energy approximation (HEA) theory \cite{Glauber,Sitenko} was also made. To this end, phenomenological densities (Gaussian-type and COSMA) were used in the calculations of Ref.~\cite{Hassan2009}, while the large-scale shell-model (LSSM) densities of $^{9,11}$Li \cite{Karataglidis97} were used in Ref.~\cite{Farag2012}. The aims of our work can be presented as follows. First, we study the elastic scattering cross sections for $^{11}$Li$+p$ at three incident energies ($E<100$ MeV/nucleon) using microscopically calculated OP's within the hybrid model \cite{Lukyanov2004a}. The ReOP includes the direct and exchange terms, and the ImOP is based on the HEA. We follow our previous works \cite{Lukyanov2007,Lukyanov2009,Lukyanov2010}, where this model was applied to the elastic scattering of the exotic nuclei $^{6,8}$He with the use of their LSSM densities, thus avoiding the adjustment of free parameters. As in Ref.~\cite{Lukyanov2009}, we pay attention to the ambiguity problem when fitting the coefficients $N$ that renormalize the strengths of the different parts of the OP.
This ambiguity was minimized in Ref.~\cite{Lukyanov2012} by imposing the condition that the volume integrals must follow their known energy dependence. Second, in addition to the analysis of elastic scattering cross sections, we estimate other characteristics of the reaction mechanism, such as the $^{11}$Li total reaction and breakup cross sections. The theoretical scheme used in this second part of the work is based on the procedure from the first part for the microscopic calculation of the potentials necessary for the evaluation of the other quantities within the model. The calculations are performed by using the $^{11}$Li+$p$ OP constructed as a sum of the microscopically calculated $^{9}$Li+$p$ OP and the ($2n$-halo)+$p$ potential, folded with the probability density of the relative motion of the clusters. For a more consistent description of the halo structure of $^{11}$Li, we calculate the fragment momentum distributions from the $^{11}$Li$+p$ reaction at 62 MeV/nucleon within the same breakup reaction model and present predictions for them. Finally, we give results for the single-particle density distribution of $^{11}$Li within the two-cluster model, in which the relative motion of the clusters ($^{9}$Li+$h$) is described by the respective wave function, and make a comparison with other calculations. The structure of this article is as follows. The theoretical scheme to calculate microscopically the real and imaginary parts of the OP and the spin-orbit term, as well as the results of the calculations of the elastic scattering of $^{11}$Li on protons and the discussion, are given in Sec.~II. Sec.~III contains the basic expressions to estimate the $^{11}$Li breakup and to calculate the momentum distributions of its products, together with the results for the total breakup cross sections, the momentum distributions of the clusters, and the single-particle density distribution of $^{11}$Li calculated within the breakup model of $^{11}$Li. The summary and conclusions of the work are given in Sec.~IV. \section{Elastic scattering of $^{11}$Li on protons at $E<100$ MeV/nucleon} \subsection{Microscopic ReOP} The optical potential used in our calculations has the form \begin{equation} U_{opt}=V^{F}(r)+iW(r). \label{eq:0} \end{equation} In Sec.~II C we add also a spin-orbit term to $U_{opt}$ from Eq.~(\ref{eq:0}). The real part of the nucleon-nucleus OP is assumed to result from the folding of the nuclear density with the effective NN potential and involves direct and exchange parts (e.g., Refs.~\cite{Satchler79,Khoa1993,Khoa2000}, see also \cite{Lukyanov2007,Lukyanov2009}): \begin{equation} V^{F}(r)= V^{D}(r)+V^{EX}(r). \label{eq:1} \end{equation} The direct part $V^{D}(r)$ is composed of the isoscalar (IS) and isovector (IV) contributions: \begin{equation} V^{D}_{IS}(r)=\int \rho_2({\bf r}_2)g(E)F(\rho_2)v_{00}^D(s)d{\bf r}_2, \label{eq:2} \end{equation} \begin{equation} V^{D}_{IV}(r)=\int \delta\rho_2({\bf r}_2)g(E)F(\rho_2)v_{01}^D(s)d{\bf r}_2 \label{eq:3} \end{equation} with ${\bf s}={\bf r}+{\bf r}_2$, and \begin{equation} \rho_2({\bf r}_2)=\rho_{2,p}({\bf r}_{2,p})+\rho_{2,n}({\bf r}_{2,n}), \label{eq:4} \end{equation} \begin{equation} \delta\rho_2({\bf r}_2)=\rho_{2,p}({\bf r}_{2,p})-\rho_{2,n}({\bf r}_{2,n}). \label{eq:5} \end{equation} In Eqs.~(\ref{eq:4}) and (\ref{eq:5}), $\rho_{2,p}({\bf r}_{2,p})$ and $\rho_{2,n}({\bf r}_{2,n})$ are the proton and neutron densities of the target nucleus.
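To make the numerical structure of the direct term transparent, we give a minimal sketch (not our production code) that evaluates a folding integral of the type of Eq.~(\ref{eq:2}) by reducing it to a quadrature over $r_{2}=|{\bf r}_{2}|$ and the angle cosine. The Gaussian density and the two-Yukawa interaction, including all their parameters, are illustrative placeholders rather than the CDM3Y6 input used in the actual calculations, and the factors $g(E)$ and $F(\rho_{2})$ are set to unity.
\begin{verbatim}
# Minimal sketch (not production code): a direct folding integral of the
# type of Eq. (2), V_D(r) = \int rho(r2) v(|r - r2|) d^3 r2, reduced to a
# quadrature over r2 = |r2| and the angle cosine x. The Gaussian density
# and the two-Yukawa v(s), with made-up parameters v1, m1, v2, m2, are
# placeholders for the CDM3Y6 input; g(E) = F(rho) = 1 here.
import numpy as np

def trap(y, dx, axis=-1):  # trapezoidal rule on a uniform grid
    edge = 0.5 * (np.take(y, 0, axis=axis) + np.take(y, -1, axis=axis))
    return dx * (np.sum(y, axis=axis) - edge)

def rho(r, a=2.0, A=11):   # Gaussian density normalized to A (fm^-3)
    return A * (np.pi * a * a) ** (-1.5) * np.exp(-(r / a) ** 2)

def v_nn(s, v1=-2000.0, m1=2.5, v2=7000.0, m2=4.0):  # schematic (MeV)
    s = np.maximum(s, 1e-6)          # regularize the Yukawas at s = 0
    return v1 * np.exp(-m1 * s) / (m1 * s) + v2 * np.exp(-m2 * s) / (m2 * s)

def v_direct(r, r2max=12.0, n2=400, nx=201):
    r2 = np.linspace(1e-4, r2max, n2)
    x = np.linspace(-1.0, 1.0, nx)
    R2, X = np.meshgrid(r2, x, indexing="ij")
    s = np.sqrt(r * r + R2 * R2 - 2.0 * r * R2 * X)  # |r - r2|
    inner = trap(v_nn(s), x[1] - x[0], axis=1)       # angular integral
    return 2.0 * np.pi * trap(r2 ** 2 * rho(r2) * inner, r2[1] - r2[0])

for r in (0.0, 2.0, 4.0, 6.0):
    print(f"V_D({r:.1f} fm) = {v_direct(r):10.2f} MeV")
\end{verbatim}
With the realistic density and effective interaction inserted, the same angular reduction underlies the computation of $V^{D}_{IS}(r)$ and $V^{D}_{IV}(r)$.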
The expressions for the energy and density dependence of the effective NN interaction (the formulae for $g(E)$ and $F(\rho)$) are given, e.g., in Ref.~\cite{Lukyanov2009}. For the NN potentials $v_{00}^D$ and $v_{01}^D$ we use the expression from Ref.~\cite{Khoa2000} for the CDM3Y6 type of effective interaction based on the Paris NN potential. The isoscalar part of the exchange contribution to the ReOP has the form: \begin{eqnarray} V^{EX}_{IS}(r)&=&g(E)\int \rho_2({\bf r}_2, {\bf r}_2-{\bf s}) F\left(\rho_2({\bf r}_2-{\bf s}/2)\right ) \nonumber \\ & \times & v_{00}^{EX}(s) j_0(k(r)s)d{\bf r}_2, \label{eq:8} \end{eqnarray} $\rho_{2}$ being the one-body density matrix. It is shown in Ref.~\cite{Lukyanov2007} how the isovector part of the exchange ReOP can be obtained. Here we would like to emphasize the general importance of accounting for the exchange part of the OP. As shown on different examples in Ref.~\cite{Khoa2000}, the exchange effects lead, for instance, to a particular energy dependence of the total potential and to different signs of the direct and exchange inelastic form factors, so they should be treated as accurately as possible. The LSSM proton and neutron densities used in our work for $^{11}$Li are calculated in a complete $2\hbar\omega$ shell-model space using the WS basis of single-particle wave functions with exponential asymptotic behavior \cite{Karataglidis97}, which is in principle the realistic one. We would like to discuss this point. In many works, to simplify the analytical studies and calculations, one uses basis functions and densities with Gaussian asymptotics of the type $\exp(-ar^{2})$, while the asymptotics has to be the exponential one $\exp(-br)/r$, where the parameter $b$ is related to the binding energy of the particle in the uppermost shell. This difference can affect the results for the cross sections in the region of relatively large scattering angles. This was one of the reasons for using the LSSM densities \cite{Karataglidis97} for $^{9,11}$Li in our work. \subsection{Optical potential within the high-energy approximation} In the present work we use the hybrid model of the OP \cite{Lukyanov2004a}, in which the imaginary part is derived within the HEA theory \cite{Glauber,Sitenko}, while the real part is obtained by the folding procedure of Sec.~II A. The cross sections are calculated by means of the DWUCK4 code \cite{DWUCK} for solving the Schr\"{o}dinger equation. To obtain the HEA OP, one can use the definition of the eikonal phase as an integral of the nucleon-nucleus potential over the trajectory of straight-line propagation and compare it with the corresponding Glauber expression for the phase in the optical-limit approximation. In this way, the HEA OP is obtained as a folding of the form factors of the nuclear density and the NN amplitude $f_{NN}(q)$ \cite{Lukyanov2004a,Shukla2003}: \begin{eqnarray} U^H_{opt}&=&V^H+iW^H=-{\hbar v\over (2\pi)^2}(\bar\alpha_{NN}+i)\bar\sigma_{NN}\nonumber \\ & \times & \int_0^\infty dq q^2 j_0(qr) \rho_2(q) f_{NN}(q). \label{eq:14} \end{eqnarray} In Eq.~(\ref{eq:14}) $\bar\sigma_{NN}$ and $\bar\alpha_{NN}$ are, respectively, the NN total scattering cross section and the ratio of the real to imaginary part of the forward NN scattering amplitude, both averaged over the isospin of the nucleus. These two quantities have been parametrized in Refs.~\cite{Shukla2001,Charagi92} as functions of the energy up to 1 GeV.
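As an illustration of Eq.~(\ref{eq:14}), the following sketch evaluates the HEA imaginary potential for a Gaussian density form factor and a Gaussian NN amplitude. The values of $\bar\sigma_{NN}$, the slope parameter, and the density size are placeholder numbers, not the parametrization of the quoted references.
\begin{verbatim}
# Illustration of Eq. (14): the HEA imaginary potential from a momentum-
# space folding of the density form factor with the NN amplitude. The
# Gaussian form factor and the Gaussian f_NN(q) = exp(-beta*q^2/2), as
# well as sigma_nn and beta, are illustrative placeholders.
import numpy as np

def trap(y, dx):  # trapezoidal rule on a uniform grid
    return dx * (np.sum(y) - 0.5 * (y[0] + y[-1]))

HBARC = 197.327                 # MeV fm
A, a = 11, 2.4                  # mass number, Gaussian size (fm)
E, mN = 62.0, 931.5             # energy per nucleon, nucleon mass (MeV)
sigma_nn, beta = 4.0, 0.2       # fm^2; illustrative values
gamma = 1.0 + E / mN
hbar_v = HBARC * np.sqrt(1.0 - 1.0 / gamma ** 2)   # hbar*v (MeV fm)

def w_hea(r, qmax=10.0, nq=4000):
    q = np.linspace(1e-6, qmax, nq)
    j0 = np.sin(q * r) / (q * r) if r > 0 else np.ones_like(q)
    rho_q = A * np.exp(-q ** 2 * a ** 2 / 4.0)     # density form factor
    f_q = np.exp(-beta * q ** 2 / 2.0)             # NN amplitude shape
    integ = trap(q ** 2 * j0 * rho_q * f_q, q[1] - q[0])
    return -hbar_v / (2.0 * np.pi) ** 2 * sigma_nn * integ

for r in (0.0, 2.0, 4.0, 6.0):
    print(f"W_H({r:.1f} fm) = {w_hea(r):8.2f} MeV")
\end{verbatim}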
The values of $\bar\sigma_{NN}$ and $\bar\alpha_{NN}$ can also be corrected for the in-medium effect by a factor from Ref.~\cite{Xiangzhow98}. \subsection{The spin-orbit term} The spin-orbit (SO) contribution to the OP used in our work is added to the right-hand side of Eq.~(\ref{eq:0}) and has the form: \begin{equation} V_{LS}(r)=2\lambda_{\pi}^{2}\left[V_{0}\frac{1}{r}\frac{df_{R}(r)}{dr} +iW_{0} \frac{1}{r}\frac{df_{I}(r)}{dr}\right]({\bf l}\cdot{\bf s}), \label{eq:15} \end{equation} where $\lambda_{\pi}^{2}=2$ fm$^{2}$ is the squared pion Compton wavelength, and $V_{0}$ and $W_{0}$ are the real and imaginary parts of the microscopic OP at $r=0$. In Eq.~(\ref{eq:15}) the functions $f_{R}(r)$ and $f_{I}(r)$ are taken as WS forms $f(r,R_{R},a_{R})$ and $f(r,R_{I},a_{I})$ with the half-radius $R_{R}$ ($R_{I}$) and diffuseness $a_{R}$ ($a_{I}$) parameters obtained by the best fit of the WS potential to the microscopically calculated real $V(r)$ and imaginary $W(r)$ parts of the OP. \subsection{Results of calculations of $^{11}$Li$+p$ elastic scattering} At the beginning of this subsection we consider $^{11}$Li$+p$ elastic scattering at three energies, 62, 68.4, and 75 MeV/nucleon, for which the differential cross sections have been measured \cite{Moon92,Korsh97c,Korsh96}. The respective folding optical potentials $V^{F}$ and $W^{H}$ are calculated by the procedure described in Secs.~II A--II C using Eqs.~(\ref{eq:0})--(\ref{eq:15}), and then the whole OP is constructed in the form \begin{widetext} \begin{equation} U_{opt}(r)=N_R V^F(r) + i N_I W(r) + 2\lambda_{\pi}^{2} \left\{ N_R^{SO} V^F_0 \frac 1 r \frac {df_R(r)} {dr} + i N_I^{SO} W^H_0 \frac 1 r \frac {df_I(r)} {dr}\right \} ({\bf l}\cdot{\bf s}). \label{eq:16} \end{equation} \end{widetext} The OP $U_{opt}(r)$ [Eq.~(\ref{eq:16})] is applied to calculate the elastic scattering differential cross sections using the program DWUCK4 \cite{DWUCK}. The number of partial waves is controlled by the parameter LMAX, the maximum partial wave for the distorted waves; we use LMAX=100. For the proton and neutron densities of $^{11}$Li we use the LSSM ones \cite{Karataglidis97} (shown in Fig.~\ref{fig1}), which have the correct exponential asymptotics. As can be seen from Eq.~(\ref{eq:16}), we introduce the set of coefficients $N$ as parameters that can be found by fitting the calculated differential cross sections of $^{11}$Li$+p$ elastic scattering to the experimental ones. Moreover, the fitting procedure can be constrained by additional conditions on the behavior of the OP's (as in Refs.~\cite{Lukyanov2007,Lukyanov2009,Lukyanov2010} and as will be seen below). The real and imaginary parts of the SO optical potential in Eq.~(\ref{eq:16}) are approximated by a Woods-Saxon form. Their parameters $V_{0}^{F}$ ($W_{0}^{H}$), $R_{R}$ ($R_{I}$), and $a_{R}$ ($a_{I}$) were obtained by a fit to the respective microscopically calculated potentials $V^{F}(r)$ and $W^{H}(r)$. We take the ImOP in two forms: either the microscopically obtained $W^{H}$ within the HEA ($W=W^{H}$) or the form of the folded real potential $V^{F}$ ($W=V^{F}$). \begin{figure} \includegraphics[width=0.8\linewidth]{fig1.eps} \caption{Total (normalized to $A=11$), point-proton (normalized to $Z=3$), and point-neutron (normalized to $N=8$) densities of $^{11}$Li obtained in the LSSM approach \protect\cite{Karataglidis97}.
\label{fig1}} \end{figure} Concerning our approach, we consider the set of $N$ coefficients as an appropriate physical parametrization, which constrains the fitting procedure through the established model forms of the potentials. We emphasize that in our work we do not aim at a perfect agreement with the experimental data. Rather, the fitting parameters $N$, related to the depths of the different components of the OP's, provide a quantitative measure of the deviations of the predictions of our method (with the exchange contributions to the OP taken into account) from reality, e.g., through the differences of the $N$'s from unity at given energies, as can be seen below. Thus, the closeness of the $N$ values to unity shows the ability of the approach to reproduce the absolute strengths of the OP's. The microscopic real part ($V^{F}$) of the OP and the HEA imaginary part ($W^{H}$) calculated using the LSSM densities of $^{11}$Li are shown in Fig.~\ref{fig2} for the different energies. In Fig.~\ref{fig3} we give as an example the differential cross section of $^{11}$Li$+p$ elastic scattering at 62 MeV/nucleon in the cases $W=W^{H}$ and $W=V^{F}$, with and without accounting for the spin-orbit term in Eq.~(\ref{eq:16}). The renormalization parameters $N$ are determined by a fitting procedure. The results of the calculations are close to each other, and therefore they are presented as shaded areas in Fig.~\ref{fig3}. The following definition of $\chi^2$ is used: \begin{equation} \chi^2 = \frac{1}{N} \sum\limits_{i=1}^{N} \Bigl[ \frac{\sigma^{\text{exp}}(\vartheta_i) - \sigma^{\text{th}} (\vartheta_i)}{\Delta \sigma^{\text{exp}}(\vartheta_i)} \Bigr]^2, \label{eq:16a} \end{equation} where $\sigma^{\text{th}}(\vartheta_i)$ and $\sigma^{\text{exp}}(\vartheta_i)$ are the theoretical and experimental values of the differential cross sections ($d\sigma/d\Omega$), $\Delta\sigma^{\text{exp}}(\vartheta_i)$ is the experimental error, and $N$ in Eq.~(\ref{eq:16a}) denotes the number of experimental points. The blue area in Fig.~\ref{fig3} includes four curves corresponding to $W=W^{H}$ (three obtained without the SO term and one with it), while the grey one includes four curves corresponding to $W=V^{F}$ (two without the SO term and two with it). We give in Table~\ref{tab1} the values of the $N$ parameters, $\chi^{2}$, and the total reaction cross sections $\sigma_{R}$. \begin{figure} \includegraphics[width=0.8\linewidth]{fig2.eps} \caption{Microscopic real part ($V^F$) of the OP (a) and HEA imaginary part ($W^H$) (b) calculated using the LSSM densities for energies $E=62$ (solid lines), 68.4 (dashed lines), and 75 MeV/nucleon (dotted lines). \label{fig2}} \end{figure} \begin{figure} \includegraphics[width=1.0\linewidth]{fig3.eps} \caption{(Color online) The $^{11}$Li$+p$ elastic scattering cross section at $E=62$ MeV/nucleon using $U_{opt}$ [Eq.~(\ref{eq:16})] for the values of the parameters shown in Table~\ref{tab1}. Dark (blue) area: $W=W^{H}$, pale (grey) area: $W=V^{F}$. The experimental data are taken from Ref.~\protect\cite{Moon92}.
\label{fig3}} \end{figure} \begin{table} \caption{Values of the $N$ parameters, $\chi^{2}$, and $\sigma_{R}$ (in mb) in the case of $^{11}$Li$+p$ at 62 MeV/nucleon for the results shown in Fig.~\ref{fig3}.} \label{tab1} \begin{center} \begin{tabular}{ccccccc} \hline \hline \noalign{\smallskip} $W$ & $N_R$ & $N_I$ & $N_R^{SO}$ & $N_I^{SO}$ & $\chi^{2}$ & $\sigma_{R}$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} $W^{H}$ & 0.871 & 0.953 & & & 1.415 & 456.97 \\ & 0.870 & 0.965 & & & 1.435 & 459.37 \\ & 0.873 & 0.948 & & & 1.423 & 455.98 \\ & 0.854 & 0.974 & 0.028 & 0.000 & 1.468 & 461.21 \\ \noalign{\smallskip} $V^{F}$ & 0.953 & 0.448 & & & 5.567 & 389.72 \\ & 0.956 & 0.398 & & & 5.726 & 361.02 \\ & 0.670 & 0.251 & 0.338 & 0.000 & 5.027 & 258.65 \\ & 0.623 & 0.266 & 0.402 & 0.000 & 5.538 & 270.05 \\ \noalign{\smallskip}\hline \hline \end{tabular} \end{center} \end{table} One can see from Fig.~\ref{fig3} the satisfactory overall agreement of both areas of curves with the experimental data. However, we note the better agreement in the case $W=W^{H}$ (the blue area), where the values of $\chi^{2}$ are between 1.40 and 1.47, while in the case $W=V^{F}$ they are between 5.00 and 5.80. The situation is similar for the other energies, so in our further calculations we use only the ImOP $W=W^{H}$. Second, we note that the values of $\sigma_{R}$ are quite different in the two cases ($\sigma_{R}\approx 455$--462 mb for $W=W^{H}$ and $\sigma_{R}\approx 260$--390 mb for $W=V^{F}$). Third, one can see from Table~\ref{tab1} and from the comparison with the data in Fig.~\ref{fig3} that the role of the SO term is weak. Its effect is to decrease the values of $N_{R}$ and to increase the values of $N_{R}^{SO}$ (see the last two lines in Table~\ref{tab1}). As is known, the problem of the ambiguity of the parameters $N$ arises when the fitting procedure is applied to a limited number of experimental data (see, e.g., the calculations and discussion in our previous works \cite{Lukyanov2007,Lukyanov2009,Lukyanov2010}). Since the fitting procedure belongs to the class of ill-posed problems (see, e.g., Ref.~\cite{Tikhonov77}), it becomes necessary to impose physical constraints on the choice of the set of parameters $N$. The total cross sections of scattering and reaction are one such constraint; however, the corresponding experimental values are missing in the energy interval considered in the present work. Another physical criterion that has to be imposed on the choice of the $N$ values is the behavior of the volume integrals \begin{equation} J_V=\frac{4\pi}{A}\int dr r^2 [N_{R}V^{F}(r)], \label{eq:17} \end{equation} \begin{equation} J_W=\frac{4\pi}{A}\int dr r^2 [N_{I}W^{H}(r)] \label{eq:18} \end{equation} as functions of the energy. We show in Fig.~\ref{fig4} the results of our calculations of the $^{11}$Li$+p$ elastic scattering cross sections for the three energies $E=62$, 68.4, and 75 MeV/nucleon. For each energy we present two curves, with and without accounting for the SO term. The corresponding values of the $N$ parameters, together with those of $J_V$, $J_W$, $\chi^{2}$, and $\sigma_{R}$, are given in Table~\ref{tab2}. In Fig.~\ref{fig5} we give the curves for the volume integrals $J_V$ and $J_W$ connecting the results obtained in our calculations with the fitted $N$ values. We consider these results the better ones because, first, the values of $\chi^{2}$ are around unity and, second, there is good agreement with the data, including angles $\theta_{c.m.}$ up to 60$^{\circ}$ at 62 MeV/nucleon.
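The volume integrals of Eqs.~(\ref{eq:17}) and (\ref{eq:18}) reduce to one-dimensional radial quadratures over the tabulated potentials. A minimal sketch of their evaluation (with Woods-Saxon-shaped placeholders standing in for $N_{R}V^{F}$ and $N_{I}W^{H}$) reads:
\begin{verbatim}
# Sketch (assumed helper, not from the paper): the volume integrals of
# Eqs. (17)-(18) as radial quadratures over tabulated potentials. The
# Woods-Saxon shapes merely stand in for N_R*V^F(r) and N_I*W^H(r); the
# overall minus sign makes attractive potentials give positive J,
# matching the sign convention of the tabulated values.
import numpy as np

A = 11
r = np.linspace(0.0, 15.0, 3000)
dr = r[1] - r[0]

def ws(r, V0, R, a):            # placeholder Woods-Saxon shape (MeV)
    return -V0 / (1.0 + np.exp((r - R) / a))

V = ws(r, 30.0, 2.5, 0.65)      # stands in for N_R * V^F(r)
W = ws(r, 25.0, 2.7, 0.70)      # stands in for N_I * W^H(r)

def volume_integral(U):         # trapezoidal rule on the uniform grid
    y = r ** 2 * U
    return -4.0 * np.pi / A * dr * (y.sum() - 0.5 * (y[0] + y[-1]))

print(f"J_V = {volume_integral(V):7.1f} MeV fm^3")
print(f"J_W = {volume_integral(W):7.1f} MeV fm^3")
\end{verbatim}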
One can see that the values of $J_V$ decrease with increasing incident energy (with a small exception at 68.4 MeV/nucleon), which is in general agreement with the results of Ref.~\cite{Romanovsky}. This is not the case for $J_W$, whose value at $E=62$ MeV/nucleon is larger than at the other energies. Indeed, it was pointed out in Ref.~\cite{Romanovsky} that the general behavior of the volume integral $J_V$ is a decrease with increasing energy in the interval $0<E<100$ MeV/nucleon, while $J_W$ increases at comparatively small energies and becomes almost constant at larger ones. However, the same situation appeared in the analysis of the same data at the three energies within the semi-microscopic approach of Ref.~\cite{Hassan2009}, where the ReOP was calculated using a single-folding procedure with Gaussian, Gaussian-oscillator, and COSMA forms of the single-particle density, and the ImOP was taken phenomenologically in a Woods-Saxon form or equal to the form of the folded ReOP. Fig.~\ref{fig6}(a) shows the curves of $J_V$ corresponding to the values obtained in Ref.~\cite{Hassan2009} for the four densities used. In addition, we show in Fig.~\ref{fig6}(b) the values of $J_W$ calculated using the corresponding fitted imaginary parts of the OP's taken in a phenomenological WS form. One can see that $J_V$ has a reasonable behavior, in agreement with the results of Ref.~\cite{Romanovsky}, while the values of $J_W$ are in contradiction with them. Thus, the problem arising in our work also appeared in the semi-phenomenological approach of Ref.~\cite{Hassan2009}, in which a larger number of parameters was used. A possible reason for such a behavior of $J_W$ at this energy could be a change of the scattering mechanism with increasing scattering angle, when channels other than the elastic one should be taken into consideration. Such a ``strong'' channel influencing the elastic one could be the fragmentation of $^{11}$Li into clusters. \begin{figure} \includegraphics[width=1.0\linewidth]{fig4.eps} \caption{The $^{11}$Li$+p$ elastic scattering cross section at $E=62$, 68.4, and 75 MeV/nucleon. Solid line: without SO term; dashed line: with SO term. The values of the $N$'s are given in Table~\ref{tab2}. The experimental data are taken from \protect\cite{Moon92} for 62 MeV/nucleon, \protect\cite{Korsh97c} for 68.4 MeV/nucleon, and \protect\cite{Korsh96} for 75 MeV/nucleon.
\label{fig4}} \end{figure} \begin{table*} \caption{Values of the $N$ parameters, volume integrals $J_V$ and $J_W$ (in MeV fm$^{3}$), $\chi^{2}$, and total reaction cross section $\sigma_{R}$ (in mb) for the results at the three energies $E$ (in MeV/nucleon) considered, as shown in Fig.~\ref{fig4}.} \label{tab2} \begin{center} \begin{tabular}{lcccccrcc} \hline \hline \noalign{\smallskip} $E$ & $N_R$ & $N_I$ & $N_R^{SO}$ & $N_I^{SO}$ & $J_V$ & $J_W$ & $\chi^{2}$ & $\sigma_{R}$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} 62 & 0.871 & 0.953 & & & 342.474 & 332.015 & 1.415 & 456.97 \\ & 0.851 & 0.974 & 0.028 & 0.000 & 334.610 & 339.332 & 1.468 & 461.21 \\ \noalign{\smallskip} 68.4 & 0.625 & 0.186 & & & 232.210 & 60.489 & 1.328 & 153.44 \\ & 0.543 & 0.140 & 0.201 & 0.000 & 201.744 & 45.530 & 0.316 & 122.25 \\ \noalign{\smallskip} 75 & 0.679 & 0.370 & & & 238.048 & 112.913 & & 232.62 \\ & 0.660 & 0.369 & 0.045 & 0.000 & 231.387 & 112.607 & & 232.62 \\ \noalign{\smallskip}\hline \hline \end{tabular} \end{center} \end{table*} \begin{figure} \includegraphics[width=0.8\linewidth]{fig5.eps} \caption{The values of the volume integrals $J_V$ and $J_W$ [Eqs.~(\ref{eq:17}) and (\ref{eq:18})] as functions of the energy per nucleon for $^{11}$Li$+p$ elastic scattering. The $N$ values are given in Table~\ref{tab2}. Solid line: without the SO term of $U_{opt}$ [Eq.~(\ref{eq:16})]; dash-dotted line: with the SO term of $U_{opt}$. The additional values of $J_V$ and $J_W$ at $E=62$ MeV/nucleon (connected by a dotted line with the other curves) are obtained when the fitting procedure for the $N$ parameters is limited to the experimental points with $\theta_{c.m.}\leq 46^\circ$ (see the text). \label{fig5}} \end{figure} \begin{figure} \includegraphics[width=0.8\linewidth]{fig6.eps} \caption{The energy dependence of the volume integrals: (a) $J_V$ obtained in \protect\cite{Hassan2009} for folding ReOP's ($V$) calculated using two types of Gaussians (G and GG), the Gaussian oscillator (GO), and COSMA densities of $^{11}$Li for $^{11}$Li+$p$ elastic scattering; (b) $J_W$ calculated using the fitted imaginary WS potentials corresponding to those real parts of the OP that give the $J_{V}$'s in (a). \label{fig6}} \end{figure} As a next step, we perform a methodical study of the $^{11}$Li$+p$ elastic scattering cross section at $E=62$ MeV/nucleon, limiting the fitting procedure for the $N$ parameters to the experimental points with $\theta_{c.m.}\leq 46^\circ$. The result of this study is presented in Fig.~\ref{fig7}. In this way, the experimental data for all three energies 62, 68.4, and 75 MeV/nucleon cover the same region of angles. The fit to this amount of data at 62 MeV/nucleon yields a new set of parameters: $N_R=0.656$ and $N_I=0.164$, with $\chi^{2}=0.788$ and $\sigma_{R}=154.86$ mb. Now we obtain values of the volume integrals (without the SO term of $U_{opt}$) $J_V=257.973$ MeV fm$^{3}$ and $J_W=57.136$ MeV fm$^{3}$ (shown in Fig.~\ref{fig5}), while the values obtained before are $J_V=342.474$ MeV fm$^{3}$ and $J_W=332.015$ MeV fm$^{3}$ (see the first line of Table~\ref{tab2}). As a result, we get a behavior of $J_V$ and $J_W$ in reasonable agreement with the conclusions of Ref.~\cite{Romanovsky}. In our opinion, the procedure described above points out the influence of the data at $\theta_{c.m.}>46^\circ$ on the values of $\chi^{2}$ and on the conclusions about the mechanism of the elastic scattering process.
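Schematically, the restricted-angle fit described above amounts to minimizing the $\chi^{2}$ of Eq.~(\ref{eq:16a}) over $(N_R,N_I)$ using only the data points with $\theta_{c.m.}\leq 46^\circ$. The sketch below illustrates only the logic, with placeholder data and a toy cross-section model; in the actual analysis the model values are recomputed by DWUCK4 for each trial set of $N$'s.
\begin{verbatim}
# Schematic restricted-angle fit (placeholder data, toy model): the
# coefficients N_R, N_I minimize the chi^2 of Eq. (16a) using only the
# points with theta_cm <= 46 deg. Nothing below is taken from the
# experiment; model_cs stands in for a full optical-model calculation.
import numpy as np
from scipy.optimize import minimize

theta = np.array([10.0, 18.0, 26.0, 34.0, 42.0, 50.0, 58.0])  # deg
cs_exp = np.array([310.0, 120.0, 38.0, 12.0, 4.2, 1.6, 0.9])  # mb/sr
dcs_exp = 0.1 * cs_exp                                        # 10% errors

def model_cs(th, n_r, n_i):      # toy stand-in for the optical model
    return 400.0 * n_r * np.exp(-th / (8.0 + 2.0 * n_i))

def chi2(params, mask):
    n_r, n_i = params
    res = (cs_exp[mask] - model_cs(theta[mask], n_r, n_i)) / dcs_exp[mask]
    return np.mean(res ** 2)

mask = theta <= 46.0             # restricted angular range
fit = minimize(chi2, x0=[1.0, 1.0], args=(mask,), method="Nelder-Mead")
print("N_R, N_I =", fit.x, "  chi^2 =", fit.fun)
\end{verbatim}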
\begin{figure} \includegraphics[width=1.0\linewidth]{fig7.eps} \caption{The $^{11}$Li$+p$ elastic scattering cross section at $E=62$ MeV/nucleon when the fitting procedure for the $N$ parameters is limited to the experimental points with $\theta_{c.m.}\leq 46^\circ$. The obtained values of $N_R$, $N_I$, $J_V$, $J_W$, $\chi^{2}$, and $\sigma_{R}$ are given in the text. \label{fig7}} \end{figure} \section{Breakup processes within $^{9}$Li+$2n$ cluster model} \subsection{Two-cluster model and applications} In this Section, in addition to the analysis of the $^{11}$Li$+p$ elastic scattering cross sections in Sec.~II, we study other characteristics of the reaction mechanism, such as the $^{11}$Li total reaction cross section, the breakup cross section, and related quantities. This part of the work is based on the procedure for the microscopic calculation of OP's presented in Sec.~II. We consider a simple two-cluster model that has already been used for $^{6}$He in studies of its elastic scattering and breakup reactions on nuclear targets \cite{Lukyanov2011}. Within this model for the $^{11}$Li nucleus, first, the density distributions of the $^{9}$Li core ($c$-cluster) and of the $h=2n$ halo must be given. Second, the folding potentials of the interaction of each of the clusters with the incident proton have to be computed. Then, the sum of these two potentials must be folded with the respective two-cluster density distribution of $^{11}$Li, which requires knowledge of the wave function of the relative motion of the two clusters. We calculate the latter as a solution of the Schr\"{o}dinger equation with a WS potential for a given 0$s$ or 1$s$ state of a particle with the reduced mass of the two clusters. The WS parameters are obtained by fitting the energy of the given state to the empirical separation energy of the $h$-cluster, $\varepsilon=0.247$ MeV, and to the rms radius of the cluster wave function. For the latter we choose the value of 4.93 fm, which is ``in between'' the values obtained within the three-body COSMA \cite{Thompson94} and those deduced from shell-model calculations \cite{Zelenskaya,Myo2007}. Such a two-cluster model occupies an intermediate position between two classes of approaches. In one of them, each of the clusters has its own phenomenological density that is often used to fit the elastic scattering data. The second class includes microscopic three-body models using to a different extent the shell-model picture. Among them we would like to note COSMA (see, for example, Refs.~\cite{Tostevin98,Ershov2010}), which has already successfully described a large amount of experimental data by applying the Glauber scattering theory. In justifying our simpler two-cluster model, we hope to keep the basic physics while avoiding simplifications such as folding without exchange effects, the use of Gaussian-type functions for the cluster densities and the bound-state wave functions of the relative motion, the use of a phenomenological ImOP, etc. We always take into account the contribution of the exchange effects, and the wave function of the relative motion of the two clusters is calculated for the fitted finite-range potential and thus has exponential asymptotic behavior. The bound state of the two-cluster system requires particular consideration. In earlier works, estimations were made using the wave function of the 0$s$ state ($n$=0) of the $(c+h)$ system, which does not have nodes inside the potential (except at $r=0$).
However, it has been shown in Refs.~\cite{Thompson94,Myo2007} that, due to the violation of the Pauli principle (the Pauli-blocking effect in the $^{11}$Li ground state), the 1$s$ and 0$p$ states give the main contributions to the wave function of the two-cluster system with almost equal probabilities, the 1$s$ component thus oscillating once inside the potential. Nevertheless, we will consider both the $0s$ and $1s$ densities $\rho_{0}^{(0)}$ and $\rho_{0}^{(1)}$ in the further calculations and comparisons of the results. In the present study, the interaction between the clusters is taken to be a WS potential with the adjusted geometrical parameters $R=1.0$ fm, $a=0.25$ fm and the depth $V_{0}=32.55$ MeV for the 0$s$ state, and $R=6.25$ fm, $a=0.25$ fm, and $V_{0}=11.55$ MeV for the 1$s$ state. The $s$-state ($l=0$) wave function of the relative motion of the two clusters is \begin{equation} \phi_{00}^{(n)}({\bf s})=\phi_{0}^{(n)}(s)\frac{1}{\sqrt{4\pi}}, \;\;\; n=0, 1 \label{eq:15a} \end{equation} and thus the respective density distribution, defined as the probability for the clusters to be at a mutual distance $s$, is \begin{equation} \rho_{0}^{(n)}({\bf s})=|\phi_{00}^{(n)}({\bf s})|^{2}= \frac{1}{4\pi}|\phi_{0}^{(n)}(s)|^{2}. \label{eq:15c} \end{equation} In the framework of the $^{9}$Li+2$n$ model of $^{11}$Li, one can estimate the $^{11}$Li$+p$ OP as a sum of the two OP's of the interactions of the $c$- and $h$-clusters with protons, folded with the density $\rho_{0}^{(n)}(s)$ ($n$=0, 1): \begin{widetext} \begin{eqnarray} U^{(b,n)}(r)&=&V^{(b,n)}+iW^{(b,n)}=\int d{\bf s}\rho_{0}^{(n)}(s)\left [U_{c}^{(n)}\left ({\bf r}+(2/11){\bf s}\right )+U_{h}^{(n)}\left ({\bf r}-(9/11){\bf s}\right )\right ]=2\pi \int_{0}^{\infty} \rho_{0}^{(n)}(s)s^{2}ds \nonumber \\ &\times & \int_{-1}^{1} dx \left [U_{c}^{(n)}\left(\sqrt{r^{2}+(2s/11)^{2}+r(4/11)sx}\right )+ U_{h}^{(n)}\left(\sqrt{r^{2}+(9s/11)^{2}-r(18/11)sx}\right )\right ]. \label{eq:15d} \end{eqnarray} \end{widetext} In Eq.~(\ref{eq:15d}) ${\bf r}-(9/11){\bf s}\equiv {\bf r}_{h}$ and ${\bf r}+(2/11){\bf s}\equiv {\bf r}_{c}$ define the corresponding distances between the centers of each of the clusters and the position of the incident nucleon with respect to the $^{11}$Li nucleus, and ${\bf s}={\bf s}_{1}+{\bf s}_{2}=(9/11){\bf s}+(2/11){\bf s}$ determines the relative distance between the centers of the two clusters, $s_{1}$ and $s_{2}$ being the distances between the center of $^{11}$Li and each of the clusters, respectively. The potential $U_{c}^{(n)}$ in Eq.~(\ref{eq:15d}) is calculated within the microscopic hybrid model of the OP described in Secs.~II A and II B. For the OP of the $h$-$p$ interaction we use the sum of two $v_{np}$ potentials, \begin{equation} U_{h}^{(n)}=2v_{np}=2v(r)(1+i\gamma). \label{eq:15e} \end{equation} Such a complex $n$-$p$ potential has been used in the four-body model \cite{Suzuki93} in calculations of the $^{11}$Li$+p$ elastic scattering, and it was shown that the cross sections are rather insensitive to the precise form of the $n$-$p$ potential, taken in the form \cite{Thompson77} (in MeV): \begin{equation} v(r)=120e^{-1.487r^{2}}-53.4e^{-0.639r^{2}}-27.55e^{-0.465r^{2}} \label{eq:15f} \end{equation} with $\gamma=0.4$. We also adopt the two-cluster model to calculate breakup reactions of $^{11}$Li in collisions with the proton target. To this end, the HEA method developed in Refs.~\cite{Hencken96,Bertulani2004} and applied in Ref.~\cite{Lukyanov2011} to the $^{6}$He+$^{12}$C reaction will be used in the present study.
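For completeness, we illustrate how the relative-motion wave function can be obtained in practice. The sketch below solves the $s$-wave radial equation in the WS well by the Numerov method with bisection on the energy; the reduced mass and the $1s$ geometry follow the values quoted above, while the grid, energy bracket, and integrator details are our own illustrative choices.
\begin{verbatim}
# Sketch of the bound-state problem for the cluster relative motion:
# solve u'' = (2*mu/hbar^2) (V_WS - E) u for the s wave by the Numerov
# method, bisecting on E. The reduced mass and the 1s WS geometry follow
# the text; grid, bracket, and starting values are illustrative.
import numpy as np

HBARC2 = 197.327 ** 2                  # (MeV fm)^2
mu = 9.0 * 2.0 / 11.0 * 931.5          # reduced mass of 9Li + 2n (MeV)
V0, R, a = 11.55, 6.25, 0.25           # WS parameters of the 1s case

def u_end(E, rmax=40.0, n=6000):       # u(rmax) from outward integration
    r = np.linspace(1e-6, rmax, n)
    h = r[1] - r[0]
    k2 = 2.0 * mu / HBARC2 * (E + V0 / (1.0 + np.exp((r - R) / a)))
    u = np.zeros(n)
    u[1] = 1e-6
    for i in range(1, n - 1):          # Numerov recursion, u'' = -k2*u
        u[i + 1] = (2.0 * u[i] * (1.0 - 5.0 * h * h * k2[i] / 12.0)
                    - u[i - 1] * (1.0 + h * h * k2[i - 1] / 12.0)) \
                   / (1.0 + h * h * k2[i + 1] / 12.0)
    return u[-1]

lo, hi = -1.0, -1e-4                   # bracket isolating the 1s level
for _ in range(60):                    # bisection on the energy
    mid = 0.5 * (lo + hi)
    if u_end(lo) * u_end(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
print(f"1s energy: {0.5 * (lo + hi):.3f} MeV (text value: -0.247 MeV)")
\end{verbatim}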
In the following, for simplicity, the superscript index ($n$=0, 1), which corresponds to the number of nodes of the relative-motion $s$-wave function of the two clusters, will be omitted. To briefly present the eikonal formalism, we start with the probability that after the collision with a proton ($z\rightarrow \infty $) the cluster $h$ or $c$ with an impact parameter $b$ remains in the elastic channel: \begin{widetext} \begin{eqnarray} |S_{i}(b)|^{2}=\exp{\left[-\frac{2}{\hbar v}\int_{-\infty}^{\infty}dzW_{i}\left(\sqrt{b^{2}+z^{2}}\right )\right ]},\;\;\;\;\; i=c,h, \label{eq:15g} \end{eqnarray} \end{widetext} where $W_{i}$ is the imaginary part of the corresponding microscopic OP in Eq.~(\ref{eq:15d}). Consequently, the probability for the cluster to be removed from the elastic channel is $(1-|S|^{2})$. Thus, the joint probability for both the $h$ and $c$ clusters to leave the elastic channel of the $^{11}$Li$+p$ scattering is $(1-|S_{h}|^{2})(1-|S_{c}|^{2})$. After averaging the latter over $\rho_{0}(s)$ (which characterizes the probability for $h$ and $c$ to be at a relative distance $s$), the total absorption cross section is obtained: \begin{equation} \sigma_{abs}^{tot}=2\pi \int_{0}^{\infty}b_{h}db_{h} [1-|S_{h}(b_{h})|^{2}] [1-I_{c}(b_{h})], \label{eq:15h} \end{equation} where \begin{equation} I_{c}(b_{h})=\int d{\bf s}\rho_{0}(s)|S_{c}(b_{c})|^{2}. \label{eq:15i} \end{equation} In Eq.~(\ref{eq:15i}) \begin{equation} b_{c}=\sqrt{s^{2}\sin^{2}\theta+b_{h}^{2}+2sb_{h}\sin\theta\cos(\varphi- \varphi_{h})} \label{eq:15ia} \end{equation} which follows from the relation ${\bf b}_{c}={\bf b}_{h}+{\bf b}$, with $b=s\sin\theta$ being the projection of ${\bf s}$ on the plane normal to the $z$-axis taken along the straight-line trajectory of the incident nucleus. In the case of a stripping reaction, in which the $h$-cluster is removed from $^{11}$Li by the proton target, one should use the probability for $h$ to leave the elastic channel, $[1-|S_{h}(b_{h})|^{2}]$, and for $c$ to continue its elastic scattering with a probability $|S_{c}(b_{c})|^{2}$. Then the probability of the whole process is $|S_{c}(b_{c})|^{2}[1-|S_{h}(b_{h})|^{2}]$, and to get the total stripping cross section one has to average over $\rho_{0}(s)$ [see Eqs.~(\ref{eq:15h}) and (\ref{eq:15i})]. Similarly, the $^{9}$Li removal can be constructed, and the net contribution of both removal reactions yields the total breakup cross section: \begin{eqnarray} \sigma_{bu}^{tot} &=& 2\pi \int_{0}^{\infty} b_{h}db_{h}\{|S_{h}(b_{h})|^{2} \nonumber \\ &+& [1-2|S_{h}(b_{h})|^{2}]I_{c}(b_{h})\}. \label{eq:15j} \end{eqnarray} The sum of the absorption [Eqs.~(\ref{eq:15h}) and (\ref{eq:15i})] and breakup [Eq.~(\ref{eq:15j})] cross sections gives the total reaction cross section: \begin{equation} \sigma_{R}^{tot}=2\pi \int_{0}^{\infty} b_{h}db_{h}[1-|S_{h}(b_{h})|^{2}I_{c}(b_{h})]. \label{eq:15k} \end{equation} \subsection{Momentum distributions of fragments} As is known (see, e.g., \cite{Hencken96}), the differential and total cross sections (for elastic scattering, as well as for diffractive breakup and absorption) all require calculations of the probability functions of the ${\bf k}$-momentum distribution of a cluster in the two-cluster system, $d^{3}P({\bf b},{\bf k})/d{\bf k}$, which depend on the impact parameter $\bf b$.
The general expression for the probability functions can be written as \cite{Hencken96}: \begin{equation} \frac{d^{3}P_{\Omega}({\bf b},{\bf k})}{d{\bf k}}=\frac{1}{(2\pi)^{3}} \left |\int d{\bf s} \phi_{{\bf k}}^{*}({\bf s})\Omega({\bf b},{\bf r}_{\perp}) \phi_{00}^{(n)}({\bf s})\right |^{2}, \label{eq:15m} \end{equation} where $\Omega({\bf b},{\bf r}_{\perp})$ is expressed by means of the two profile functions $S_{c}$ and $S_{h}$ [Eq.~(\ref{eq:15g})] of the core and the di-neutron clusters, respectively. In Eq.~(\ref{eq:15m}) $\phi_{{\bf k}}({\bf s})$ is the continuum wave function and ${\bf k}$ is the relative momentum of the two clusters in their center-of-mass frame. The vector ${\bf r}_{\perp}$ is the projection of the relative coordinate ${\bf s}$ between the centers of the two clusters on the plane normal to the $z$-axis mentioned above. The ground-state wave function of the relative motion of the two clusters $\phi_{00}$ is given for the $s$-state by Eq.~(\ref{eq:15a}). For calculations of, e.g., the diffractive cross sections, the continuum wave function $\phi_{\bf k}$ is expanded in a partial-wave representation. If the distortion in the final channel is neglected, the wave function $\phi_{{\bf k}}({\bf s})$ is replaced by a plane wave. Then, following Ref.~\cite{Hencken96}, for the $s$-state ($l=0$) the expression for $d^{2}P_{\Omega}({\bf b},{\bf k})/dk_{L}dk_{\perp}$ takes the form: \begin{widetext} \begin{equation} \frac{d^{2}P_{\Omega}({\bf b},{\bf k})}{dk_{L}dk_{\perp}}= \frac{k_{\perp}}{16\pi^{3}k^{2}}\left |\int ds \int d(\cos\theta_{s})\,g(s)\sin{(ks)}\int d\varphi_{s}\Omega({\bf b},{\bf r}_{\perp})\right |^{2} \label{eq:15s} \end{equation} \end{widetext} with \begin{equation} \Omega({\bf b},{\bf r}_{\perp})=S_{c}({\bf b}_{c})S_{h}({\bf b}_{h}). \label{eq:15t} \end{equation} In Eq.~(\ref{eq:15s}) $g(s)=s\phi_{0}^{(n)}(s)=s\sqrt{4\pi\rho_{0}^{(n)}(s)}$, where $\phi_{0}^{(n)}$ and $\rho_{0}^{(n)}$ are given in Eqs.~(\ref{eq:15a}) and (\ref{eq:15c}). Hence, the diffraction breakup cross section has the form \begin{widetext} \begin{equation} \left (\frac{d\sigma}{dk_{L}}\right )_{diff}=\int_{0}^{\infty} b_h db_{h}\int_{0}^{2\pi} d \varphi_{h}\int_{0}^{\infty} d{k}_{\perp} \frac{d^{2}P_{\Omega}({\bf k},{\bf b})}{dk_{L} dk_{\perp}} \label{eq:15ldiff} \end{equation} \end{widetext} with $d^{2}P_{\Omega}({\bf b},{\bf k})/dk_{L}dk_{\perp}$ from Eq.~(\ref{eq:15s}). The integrations over $b_h$ and $\varphi_{h}$ in Eq.~(\ref{eq:15ldiff}) mean an integration over the impact parameter ${\bf b}_h$ of the cluster $h$ with respect to the target. In the case of the stripping reaction, when the $h$-cluster leaves the elastic channel, it can be shown (following \cite{Hencken96}) that the cross section takes the form: \begin{widetext} \begin{equation} \left(\frac{d\sigma}{dk_{L}}\right)_{str}=\frac{1}{2\pi^{2}}\int_{0}^{\infty}b_{h}db_{h}d\varphi_{h} \left [ 1-|S_{h}(b_{h})|^{2}\right ] \int \rho d\rho d\varphi_{\rho} |S_{c}(b_{c})|^{2} \left [ \int_{0}^{\infty}dz \cos (k_{L}z)\phi_{0}\left (\sqrt{\rho^{2}+z^{2}}\right ) \right ]^{2}. \label{eq:str} \end{equation} \end{widetext} Eq.~(\ref{eq:str}) is obtained for an incident nucleus with spin equal to zero and for the $s$-state of the relative motion of the two clusters in the nucleus expressed by Eq.~(\ref{eq:15a}), with ${\bf s} ={\bf r}_{c}- {\bf r}_{h}$, ${\bf \rho}={\bf b}_{c}-{\bf b}_{h}$, ${\bf s}={\bf \rho} + {\bf z}$, and $b_{c}$ from Eq.~(\ref{eq:15ia}).
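The numerical core of Eqs.~(\ref{eq:15g})--(\ref{eq:15k}) is the eikonal profile $|S(b)|^{2}$ obtained from a line integral of the imaginary potential. The following simplified sketch computes $|S(b)|^{2}$ and a total reaction cross section for a Woods-Saxon-shaped placeholder $W$; the correlation integral $I_{c}$ of Eq.~(\ref{eq:15i}) is omitted (point-like core) purely for brevity.
\begin{verbatim}
# Sketch of the eikonal ingredients of Eqs. (15g)-(15k): |S(b)|^2 from a
# straight-line integral of the imaginary potential, and the resulting
# total reaction cross section. W below is a Woods-Saxon placeholder for
# the microscopic imaginary part; |W| is used so that absorption reduces
# |S|^2 independently of the sign convention.
import numpy as np

HBARC = 197.327
E, mN = 62.0, 931.5
gamma = 1.0 + E / mN
hbar_v = HBARC * np.sqrt(1.0 - 1.0 / gamma ** 2)     # hbar*v (MeV fm)

def w_im(r, W0=20.0, R=3.0, a=0.6):                  # placeholder (MeV)
    return -W0 / (1.0 + np.exp((r - R) / a))

def s2(b, zmax=30.0, nz=1200):                       # |S(b)|^2, Eq. (15g)
    z = np.linspace(-zmax, zmax, nz)
    dz = z[1] - z[0]
    w = np.abs(w_im(np.sqrt(b * b + z * z)))
    return np.exp(-2.0 / hbar_v * dz * (w.sum() - 0.5 * (w[0] + w[-1])))

b = np.linspace(1e-3, 15.0, 400)
db = b[1] - b[0]
absorb = np.array([1.0 - s2(bi) for bi in b])
sigma_r = 2.0 * np.pi * db * np.sum(b * absorb) * 10.0   # fm^2 -> mb
print(f"sigma_R ~ {sigma_r:.0f} mb")
\end{verbatim}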
\subsection{Results of calculations for breakup processes} To estimate the $^{11}$Li breakup on a proton target, we use the two-cluster model described in Sec.~III A. As presented there, we study observables for which the $^{11}$Li nucleus, with the $h=2n$-cluster separation energy of 0.247 MeV, is considered as a system in the $l=0$ state with principal quantum number $n=0$ or $n=1$. The respective WS potentials $V(s)$ and probabilities $\rho_{0}^{(n)}(s)$ [Eq.~(\ref{eq:15c})] for the distance $s$ between the clusters in $^{11}$Li are shown for both $n=0, 1$ in Figs.~\ref{fig8}(a) and (b), respectively. It can be seen from Fig.~\ref{fig8}(a) that the WS potential for $n=0$ is about 2.8 times deeper than the one for $n=1$, although the shapes of the two potentials are similar. We note also that the half-radius of the $n=1$ potential, 6.25 fm, is much larger than the value of 1.01 fm for the $n=0$ potential. Fig.~\ref{fig8}(b) shows that the two densities differ from each other; in particular, a steep drop of the $n$=1 density is observed at $s\approx 3.8$ fm. Moreover, bearing in mind the results of fitting procedures with phenomenological potentials (e.g., in Ref.~\cite{Dobrovolsky2006}), which give an rms radius of about 5--6 fm for the constituent $h$-cluster density $\rho_{h}(r)$, we may conclude that in our consideration the $n=1$ cluster state of $^{11}$Li is preferable. On the other hand, the existence of long tails of $\rho_{0}^{(n=0, 1)}(s)$ for both states motivates testing their effects in the further considerations. \begin{figure} \includegraphics[width=0.8\linewidth]{fig8.eps} \caption{(Color online) The WS potential $V(s)$ of the interaction between the $c$ and $h$ clusters (a) and the two-cluster density distribution $\rho_{0}(s)$ normalized to unity (b) for the cases $n$=0 (green dashed line) and $n$=1 (blue solid line). \label{fig8}} \end{figure} Our next step is to apply the optical potential $U^{(b,n)}$ [Eq.~(\ref{eq:15d})] constructed in the framework of the two-cluster model of $^{11}$Li to calculate the differential cross section of the elastic scattering $^{11}$Li$+p$ at 62 MeV/nucleon. For the real part $V^{(b,n)}$ of this OP we use a single-folding procedure in which the LSSM density \cite{Karataglidis97} is taken for the $^{9}$Li cluster. The imaginary part $W^{(b,n)}$ of the OP is considered, as before, to be either $W=W^{H}$ or $W=V^{F}$. The calculated cross sections are shown in Fig.~\ref{fig9} and compared with the experimental data \cite{Moon92}. For both cases we give in Table~\ref{tab3} the values of the fitted renormalization coefficients $N$ and the respective total cross sections for the $n=0$ and $n=1$ cases. One can see from Fig.~\ref{fig9} that the angular distributions for the two kinds of ImOP lie close to each other and lead to fairly good agreement with the experimental data. However, we note that the data are again reproduced better when $W=W^{H}$ for both $n$=0, 1, as was pointed out in the discussion of the results presented in Fig.~\ref{fig3} obtained with the LSSM density of $^{11}$Li. \begin{figure} \includegraphics[width=1.0\linewidth]{fig9.eps} \caption{(Color online) The $^{11}$Li$+p$ elastic scattering cross section at $E=62$ MeV/nucleon using $U^{(b,n)}$ [Eq.~(\ref{eq:15d})] for the values of the parameters $N$ shown in Table~\ref{tab3}. Black solid line: $W^{(b,0)}=V^{F}$, red dashed line: $W^{(b,0)}=W^{H}$, and blue dash-dotted line: $W^{(b,1)}=W^{H}$.
The experimental data are taken from Ref.~\protect\cite{Moon92}. \label{fig9}} \end{figure} Table~\ref{tab3} also lists the values of the total absorption $\sigma_{abs}^{tot}$, breakup $\sigma_{bu}^{tot}$, and total reaction $\sigma_{R}^{tot}$ cross sections. First, we note the significant role that the breakup channel plays in the $^{11}$Li$+p$ reaction, where $\sigma_{bu}^{tot}$ contributes more than 80\% of $\sigma_{R}^{tot}$. This is not the case for the $^{6}$He+$^{12}$C process at an energy of 38.3 MeV/nucleon \cite{Lukyanov2011}, for which the breakup cross section constitutes only about half of the total reaction cross section. This can be related to the observation that a substantial part of the $^{11}$Li$+p$ imaginary potential in the elastic scattering channel is formed by the transfer of the incident $^{11}$Li flux predominantly into breakup channels. Also, for the $n=1$ state of the cluster wave function, the fitted strength coefficients $N$ and the respective values of the cross sections are larger than for the $n=0$ state, but the general conclusion on the dominant role of the breakup processes remains the same. \begin{table} \caption{The $N$ parameters of the OP's for $^{11}$Li$+p$ scattering at 62 MeV/nucleon and HEA estimations of the total cross sections $\sigma_{abs}^{tot}$ [Eq.~(\ref{eq:15h})], $\sigma_{bu}^{tot}$ [Eq.~(\ref{eq:15j})], and $\sigma_{R}^{tot}$ [Eq.~(\ref{eq:15k})] (in mb) using the cluster model of $^{11}$Li.} \label{tab3} \begin{center} \begin{tabular}{cccccc} \hline \hline \noalign{\smallskip} $W^{(b,n)}$ & $N_R$ & $N_I$ & $\sigma_{abs}^{tot}$ & $\sigma_{bu}^{tot}$ & $\sigma_{R}^{tot}$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} $W^{(b,0)}=V^{F}$ & 1.407 & 1.195 & 79.0 & 431.8 & 510.8 \\ $W^{(b,0)}=W^{H}$ & 1.381 & 1.306 & 78.6 & 405.3 & 483.9 \\ $W^{(b,1)}=W^{H}$ & 4.68 & 3.99 & 106.6 & 581.6 & 688.2 \\ \noalign{\smallskip}\hline \hline \end{tabular} \end{center} \end{table} \begin{figure} \includegraphics[width=1.0\linewidth]{fig10.eps} \caption{(Color online) The functions $S_{i}(b_{i})$, $i=c,h$ [see Eq.~(\ref{eq:15g})] for the $s$-state of the relative motion of the clusters with $n$=0 and $n$=1. \label{fig10}} \end{figure} \begin{figure} \includegraphics[width=1.0\linewidth]{fig11.eps} \caption{(Color online) Cross section of the diffraction breakup in $^{11}$Li$+p$ scattering at $E=62$ MeV/nucleon for the cases $n$=0 (green dashed line) and $n$=1 (blue solid line). \label{fig11}} \end{figure} Our next step is to calculate, using Eqs.~(\ref{eq:15ldiff}) and (\ref{eq:str}), the cross sections of the diffractive and stripping (when the $h=2n$ cluster leaves the elastic channel) $^{11}$Li$+p$ reactions at $E=62$ MeV/nucleon, respectively. For this purpose we use in Eqs.~(\ref{eq:15ldiff}) and (\ref{eq:str}) the corresponding functions $S_{i}(b_{i})$, $i=c,h$ [see Eq.~(\ref{eq:15g})]. They are given in Fig.~\ref{fig10} for the $s$-state of the relative motion of the clusters with $n$=0 and $n$=1. In Figs.~\ref{fig11} and \ref{fig12} we show the results for the diffraction breakup and the stripping in $^{11}$Li$+p$ scattering at $E=62$ MeV/nucleon, respectively. These results are predictions, since experimental data for such processes accompanying the $^{11}$Li$+p$ scattering at $E\leq 100$ MeV/nucleon are missing.
For the diffractive scattering we obtain widths of 98 MeV/c (for $n$=0) and 85 MeV/c (for $n$=1), and for the stripping reaction 79 MeV/c (for $n$=0) and 72 MeV/c (for $n$=1), respectively, thus favoring the configuration in which the two valence neutrons occupy the $1s$ state in $^{11}$Li. It is worth noting that the widths calculated in our work for the $^{11}$Li breakup on the proton target are larger than those obtained in the experiments (around 50 MeV/c) for the reactions of $^{11}$Li on the nuclear targets $^{9}$Be, $^{93}$Nb, and $^{181}$Ta at an energy of 66 MeV/nucleon \cite{Orr92} and on a wide range of targets ($^{9}$Be to $^{238}$U) \cite{Orr95}. It is noted in Refs.~\cite{Orr92,Orr95} that the width is almost independent of the target mass number and thus basically characterizes the momentum distribution of the two clusters. Our width for the stripping of the $2n$-cluster is similar to the cases of $2n$ stripping from other nuclei (but not from $^{11}$Li). It turns out that accounting for the $2n$ binding in $^{11}$Li alone is not enough to reproduce the observed widths in the scattering of $^{11}$Li on nuclei, as well as on proton targets. We also mention that we had the methodical task of calculating the widths using the different wave functions ($n=0,1$) of the relative motion of the clusters; the results show similar values of the widths in both cases. Probably it is difficult to solve this problem within our simplified two-cluster model, and it must therefore be considered in a more complicated three-body model. Obviously, experiments on stripping and diffraction reactions of $^{11}$Li on proton targets are highly desirable; this concerns measurements of the neutrons in the decay as well. \begin{figure} \includegraphics[width=1.0\linewidth]{fig12.eps} \caption{(Color online) The same as in Fig.~\ref{fig11} but for the stripping reaction. \label{fig12}} \end{figure} \subsection{Single-particle density of $^{11}$Li in the two-cluster model} In this subsection we consider in more detail the single-particle density distribution of $^{11}$Li, which can be calculated and applied instead of a phenomenological one in the analyses and interpretation of $^{11}$Li+$p$ experimental data. For this purpose, we adopt our cluster model consisting of the $^{9}$Li core and the halo $h=2n$. If one sets $\rho_{h}({\bf r}_{1})$ for the $h$-cluster and $\rho_{c}({\bf r}_{2})$ for the $^{9}$Li nucleus, then the single-particle density distribution of $^{11}$Li can be derived in analogy to Eq.~(\ref{eq:15d}) in the following form: \begin{widetext} \begin{eqnarray} \rho(r)&=&\int d\phi \sin \theta d\theta \int ds s^{2} \left [\rho_{h}({\bf r}_{h}) +\rho_{c}({\bf r}_{c})\right ]\rho_{0}^{(n)}({\bf s})=2\pi\int_{-1}^{1}dx \nonumber \\ &\times & \int_{0}^{\infty} ds s^{2} \left [\rho_{h}\left(\sqrt{r^{2}-2(9/11)rsx+(9/11)^{2}s^{2}} \right )+\rho_{c}\left(\sqrt{r^{2}+2(2/11)rsx+(2/11)^{2}s^{2}}\right )\right ] \rho_{0}^{(n)}({\bf s}). \label{eq:41} \end{eqnarray} \end{widetext} The expression (\ref{eq:41}) indicates that the density of $^{11}$Li can be calculated by taking the sum of the corresponding densities of the two clusters and folding it with the square of the relative-motion wave function of the two clusters, $|\phi_{00}({\bf s})|^{2}$. As a comment on our approach, we would like to mention the difference between the method to calculate the folding $^{11}$Li$+p$ OP [Eq.~(\ref{eq:15d})] and that to estimate the single-particle density of $^{11}$Li [Eq.~(\ref{eq:41})].
In the method for the folding OP [Eq.~(\ref{eq:15d})], the $U_{h}$ optical potential was not calculated as a folding integral but was expressed through the $v_{np}$ potentials, and therefore the density of the $h=2n$ cluster did not enter there. In Eq.~(\ref{eq:41}), instead, we consider the $h$-cluster density together with the density of the $^{9}$Li core, both being folded with the wave function of the relative motion of the two clusters. Further, in the calculations we use the LSSM density for the $^{9}$Li cluster with an rms radius $R_{c}$=2.31 fm \cite{Karataglidis97}, while for the $h$-halo we probe two densities: one described by a Gaussian function (G density) (e.g., \cite{Alkhazov2002}) \begin{equation} \rho^{G}(r)=\left ( \frac{3}{2\pi R_{h}^{2}} \right )^{3/2}\exp{\left (-\frac{3r^{2}}{2R_{h}^{2}} \right )} \label{eq:42} \end{equation} and the other by the symmetrized Fermi distribution (SF density) (e.g., \cite{Burov77}) \begin{equation} \rho^{SF}(r)=\rho_{0}\frac{\sinh{(R/a)}}{\cosh{(R/a)}+\cosh{(r/a)}}, \label{eq:43} \end{equation} where \begin{equation} \rho_{0}=\frac{3}{4\pi R^{3}}\left [1+\left (\frac{\pi a}{R}\right )^{2}\right ]^{-1} \label{eq:44} \end{equation} and the corresponding rms radius is \begin{equation} <r^{2}>=R_{h}^{2}=\frac{3}{5}R^{2}\left [1+\frac{7}{3}\left (\frac{\pi a}{R}\right )^{2}\right ]. \label{eq:45} \end{equation} The two densities [Eqs.~(\ref{eq:42}) and (\ref{eq:43})] are normalized to unity; when substituted in Eq.~(\ref{eq:41}) they have to be multiplied by a factor of 2 to account for the two halo neutrons. The G density has only one parameter, the halo rms radius $R_{h}$, which governs its behavior. First, we take $R_{h}$=2 fm, which is almost twice the nucleon radius. In principle, such a choice of $R_{h}$ is justified because the cluster inside the nucleus is ``smeared'' and, moreover, the folding procedure itself (in which the relative-motion function $\phi_{00}({\bf s})$ enters with an rms radius of 4.93 fm, see also Sec.~III.A) places the $h$-cluster in the nuclear periphery. Concerning the SF density, we perform calculations with a set of parameters $R$ and $a$ selected to reproduce the rms radius $R_{h}$=2 fm (the set SF1 in Table~\ref{tab4}). For this choice the condition $R>\pi a$ must be satisfied, and for convenience Eq.~(\ref{eq:45}) can be rewritten as \begin{equation} R^{2}=\frac{5}{3}R_{h}^{2}-\frac{7}{3}(\pi a)^{2}. \label{eq:46} \end{equation} \begin{table} \caption{Values of the parameters of the symmetrized Fermi and Gaussian density distributions, $h$- and $c$-cluster rms radii $R_{h}$ and $R_{c}$, and deduced matter rms radii $R_{m}$ (in fm) within the $^{9}$Li+2$n$ model of $^{11}$Li.} \label{tab4} \begin{center} \begin{tabular}{ccccccc} \hline \hline \noalign{\smallskip} Parametrization & & $R$ & $a$ & $R_{h}$ & $R_{c}$ & $R_{m}$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} SF1 & & 2.234 & 0.27 & 2 & $2.31^{a}$ & 2.77 ($n$=0) \\ & & & & & & 2.93 ($n$=1) \\ G & & & & 2 & $2.31^{a}$ & 2.77 ($n$=0) \\ & & & & & & 2.93 ($n$=1) \\ SF2 & & 4.573 & 0.5 & 4 & $2.31^{a}$ & 3.32 ($n$=1) \\ 0.2GG+0.8GO \protect\cite{Dobrovolsky2006} & & & & 5.98 & $2.52 $ & 3.42 \\ \noalign{\smallskip}\hline \hline $^{a}$ From LSSM for $^{9}$Li \end{tabular} \end{center} \end{table} The calculated single-particle density distributions of $^{11}$Li are presented in Fig.~\ref{fig13} together with the LSSM density. Results are shown for both the $n$=0 and $n$=1 cases.
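The normalization and rms-radius relations in Eqs.~(\ref{eq:42})--(\ref{eq:46}) are easy to verify numerically; a minimal Python sketch using the SF1 parameters of Table~\ref{tab4} is:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def rho_SF(r, R, a):
    # Symmetrized Fermi density, Eqs. (43)-(44); normalized to unity.
    rho0 = 3.0 / (4.0 * np.pi * R**3) / (1.0 + (np.pi * a / R)**2)
    return rho0 * np.sinh(R / a) / (np.cosh(R / a) + np.cosh(r / a))

R, a = 2.234, 0.27    # SF1 parameters
norm = 4*np.pi * quad(lambda r: r**2 * rho_SF(r, R, a), 0, 50)[0]
r2   = 4*np.pi * quad(lambda r: r**4 * rho_SF(r, R, a), 0, 50)[0]
print(norm, np.sqrt(r2 / norm))   # ~1.0 and ~2.0 fm, as in Eq. (45)

# Inverting Eq. (45) for R_h = 2 fm recovers R via Eq. (46):
Rh = 2.0
print(np.sqrt(5.0/3.0 * Rh**2 - 7.0/3.0 * (np.pi * a)**2))   # ~2.234
\end{verbatim}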
As can be seen in Fig.~\ref{fig13}, the two kinds of $h$-density, SF1 and G, yield very similar $^{11}$Li densities, shown as the pair of dotted and dashed curves for $n$=0 and as the pair of solid and dot-dashed curves for $n$=1, in the whole region of $r$ up to 10 fm. In addition, all four curves are close to each other at $r<4$ fm. However, the difference between the $n$=0 and $n$=1 pairs is seen in the interval $5<r<7$ fm, where the $n$=1 curves exhibit a ``bump'', while the $n$=0 ones fall off as compared to the LSSM density of $^{11}$Li. Moreover, we note that the $^{11}$Li rms radius of 2.93 fm for the $n$=1 curves is very close to the LSSM value of 2.94 fm. The tail of the LSSM density at $r>8$ fm is higher than those of the cluster curves with $R_{h}=2$ fm but, as pointed out in Ref.~\cite{Dobrovolsky2006}, the calculated differential cross sections of $^{11}$Li$+p$ scattering are not sensitive to a possible long density tail at the far nuclear periphery. \begin{figure} \includegraphics[width=1.0\linewidth]{fig13.eps} \caption{(Color online) Single-particle density distribution of $^{11}$Li (normalized to $A=11$) obtained in the framework of the cluster model [Eq.~(\ref{eq:41})]. The $h$-cluster density distributions are taken in two forms: the symmetrized Fermi distribution (SF1) and the Gaussian function (G) with $R_{h}=2$ fm. The results are presented for the cases of $n$=0 and $n$=1, respectively. The LSSM density is also given. \label{fig13}} \end{figure} The pronounced halo nature of the $^{11}$Li nucleus is mainly supported by its large matter radius, established by Tanihata {\it et al.} in Ref.~\cite{Tanihata85a}. Recently, a successful attempt to obtain a ``realistic'' density of this nucleus was made in Ref.~\cite{Dobrovolsky2006}, where the experimental data at about 700 MeV/nucleon were described using a phenomenological constituent cluster model of the $^{11}$Li density composed of two terms, 0.2GG+0.8GO, with a Gaussian (GG) and a harmonic-oscillator (GO) function. The fitting procedure led to a total rms matter radius $R_{m}=3.42$ fm of the whole density, where the fitted values $R_{c}=2.52$ fm and $R_{h}=5.98$ fm of its separate terms were interpreted as the core and $h$-halo radii, respectively. These radii satisfy the relation \begin{equation} R_{m}^{2}=\frac{A_{c}R_{c}^{2}+A_{h}R_{h}^{2}}{A}, \;\;\; A=A_{c}+A_{h}, \label{eq:47} \end{equation} ($A_{c}$, $A_{h}$, and $A$ being the numbers of nucleons in the core, in the $2n$ cluster, and in the whole nucleus, respectively), which is valid for the constituent model. We may argue, however, that the $^{9}$Li and $h$ systems can be considered as true clusters only when, within a cluster model, they are folded [see Eq.~(\ref{eq:41})] with the probability density of their relative motion. In Fig.~\ref{fig14} our result is shown as the SF2 curve, for which the value $R_{h}=4$ fm is twice that of the SF1 case. In the same figure we also present the phenomenological 0.2GG+0.8GO density from Ref.~\cite{Dobrovolsky2006}. Our SF2 parametrization leads to a matter rms radius $R_{m}=3.32$ fm, close to the value $R_{m}=3.42$ fm of the phenomenological constituent model mentioned above. Thus, our folding method for the single-particle density distribution [Eq.~(\ref{eq:41})], which takes into account the relative motion of the clusters, makes it possible to obtain realistic densities within cluster models without resorting to phenomenology.
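As a quick check of Eq.~(\ref{eq:47}) with the radii fitted in Ref.~\cite{Dobrovolsky2006}, one line of Python suffices:

\begin{verbatim}
import numpy as np
# Eq. (47) with A_c = 9, A_h = 2, R_c = 2.52 fm, R_h = 5.98 fm:
print(np.sqrt((9 * 2.52**2 + 2 * 5.98**2) / 11.0))   # -> 3.42 fm
\end{verbatim}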
Our analysis with the SF2 parametrization shows that the $h$-cluster is indeed ``smeared'' in the $^{11}$Li nucleus ($R_{h}=4$ fm) and that the averaging over the relative motion of the two clusters (which depends strongly on the $h$-cluster separation energy) plays an important role. This is confirmed also in Ref.~\cite{Dobrovolsky2006}, where the deduced halo radius $R_{h}=5.98$ fm is larger than the core radius $R_{c}=2.52$ fm by more than a factor of 2. However, an ambiguity remains in the choice of the ``best'' density distribution of $^{11}$Li, because the $^{11}$Li$+p$ elastic scattering data alone are not sufficient to resolve it. \begin{figure} \includegraphics[width=1.0\linewidth]{fig14.eps} \caption{(Color online) Single-particle density distribution of $^{11}$Li (normalized to $A=11$) obtained in the framework of the cluster model [Eq.~(\ref{eq:41})] with the symmetrized Fermi (SF2) distribution for the $h$-cluster density with $R_{h}=4$ fm (green dash-dotted line). The black dashed line represents the best density parametrization that describes the $^{11}$Li$+p$ elastic scattering data \cite{Dobrovolsky2006}, while the black solid line is the LSSM density. \label{fig14}} \end{figure} \section{Conclusions} The results of the present work can be summarized as follows: (i) In the first part of the work (Sec.~II) the microscopic optical potentials and cross sections of $^{11}$Li$+p$ elastic scattering were calculated at energies of 62, 68.4, and 75 MeV/nucleon and compared with the available experimental data. The direct ($V^{D}$) and exchange ($V^{EX}$) parts of the real OP ($V^{F}$) were calculated using the folding procedure with the density-dependent M3Y (CDM3Y6-type) effective interaction based on the Paris $NN$ potential. The imaginary part of the OP ($W^{H}$) was calculated microscopically within the folding model based on the high-energy approximation. The LSSM proton and neutron densities of $^{11}$Li \cite{Karataglidis97}, which have the correct exponential asymptotic behavior, were used in the calculations. The spin-orbit contribution to the OP was also included. The $^{11}$Li$+p$ elastic scattering cross sections and the total reaction cross sections were calculated using the program DWUCK4 \cite{DWUCK}. (ii) We pointed out that the regularization of our microscopic OPs is achieved by introducing the fitting parameters $N_R$, $N_I$, $N_R^{SO}$, $N_I^{SO}$ related to the ``depths'' of the separate parts of the OP. They are, in principle, the only free parameters of our approach, in contrast to phenomenological approaches, and serve as a quantitative test of it: the proximity of the $N$ values to unity indicates how close the approach is to reality. However, an ``ill-posed'' problem arises here because the fitting procedure is applied to a limited set of experimental data. The ambiguity of the $N$ parameters has been considered in our previous works \cite{Lukyanov2009,Lukyanov2010}. In the present work we used a physical constraint on the choice of the $N$ values, namely the known behavior of the volume integrals $J_V$ and $J_W$ as functions of the incident energy for $E\leq 100$ MeV/nucleon \cite{Romanovsky}. We compare the behavior of the values of $J_V$ and $J_W$ obtained in our work with those of the semi-phenomenological approach of Ref.~\cite{Hassan2009}, where many more parameters are used than in our microscopic method.
We discuss in more detail the problem arising from the behavior of $J_W$ at $E=62$ MeV/nucleon and relate it to the quality of the data at larger angles ($\theta_{c.m.}>46^\circ$); this problem also appeared in Ref.~\cite{Hassan2009}. Finally, using the physical criterion of the energy dependence of the volume integrals, we obtained a definite set of fitted $N$ parameters that gives satisfactory agreement of our results with the elastic $^{11}$Li$+p$ scattering cross-section data. (iii) We would like to mention that the values of the total scattering and reaction cross sections can serve as another physical criterion for the $N$ values. However, the corresponding experimental data are missing in the energy interval considered in our work, so such measurements are highly desirable. (iv) As in our previous works \cite{Lukyanov2009,Lukyanov2010}, we would like to emphasize that a more successful explanation of the cross-section data could be achieved by accounting for virtual excitations of inelastic and decay channels of the reaction. For this reason, in Sec.~III of the present paper, apart from the usual folding model based on the LSSM, we consider another folding approach that includes $^{11}$Li breakup, adopting a simple $^{9}$Li+2$n$ cluster model for its structure. Both the LSSM and the cluster model of $^{11}$Li reproduce fairly well the two-neutron separation energy of $^{11}$Li. In Sec.~III we use the procedure of the first part of our work (Sec.~II) for microscopic calculations of the OPs needed in the breakup model, in order to estimate the elastic scattering cross sections, as well as the momentum distributions in the processes of proton scattering on the clusters and the corresponding $S$-functions in $^{9}$Li$+p$ and $h$+$p$ scattering. The folding OPs calculated in the two parts of our work behave rather similarly when their strengths are fitted to the same elastic scattering data, as is done for $^{11}$Li$+p$ at an energy of 62 MeV/nucleon. Thus, the analysis of other types of reaction mechanisms, such as the $^{11}$Li breakup, makes it possible to understand their significant role in the formation of the OP responsible for the $^{11}$Li$+p$ elastic scattering. It turns out that the breakup channel gives a $\sigma_{bu}^{tot}$ that exceeds 80\% of $\sigma_{R}^{tot}$, while it is around half of $\sigma_{R}^{tot}$ in the case of $^{6}$He+$^{12}$C (as obtained in Ref.~\cite{Lukyanov2011}). (v) In the present work we also give predictions for the longitudinal momentum distributions of $^{9}$Li fragments produced in the breakup of $^{11}$Li at 62 MeV/nucleon on a proton target. We calculated the diffraction and stripping (when the $2n$ cluster leaves the elastic channel) cross sections of the reaction of $^{11}$Li on a proton target at an energy of 62 MeV/nucleon. We note that our breakup calculations give peak widths between 70 and 80 MeV/c, while widths of about 50 MeV/c are known from the reactions of $^{11}$Li on the nuclear targets $^{9}$Be, $^{93}$Nb, and $^{181}$Ta at an energy of 66 MeV/nucleon. In this connection, we should mention that in the energy range of 60--70 MeV/nucleon distortions due to the nuclear and Coulomb forces could affect the cross sections. It may also be that our simplified two-cluster model cannot give the correct answer and that one should be sought in a more elaborate three-body approach. Hence, this problem remains open and requires further analysis.
We emphasize the need for experiments on stripping and diffraction reactions of $^{11}$Li on proton targets at energies $E<100$ MeV/nucleon. (vi) We present results for the single-particle density distribution of $^{11}$Li in the framework of a cluster model. Our calculated density is close to the phenomenological one obtained in Ref.~\cite{Dobrovolsky2006} by fitting the experimental differential cross sections of $^{11}$Li scattering on a proton target at 700 MeV/nucleon. From a physical point of view, the cluster model allows a clearer interpretation of the experimental data and, together with the phenomenological densities, can be used as a reference density in fits to the data. Future measurements of the cross sections for proton elastic scattering and of the momentum distribution of the $^{9}$Li fragments in $^{11}$Li breakup reactions might provide supplementary information on the internal spatial structure of the $^{11}$Li nucleus. \begin{acknowledgments} The authors are grateful to Professor N.S. Zelenskaya and Professor S.N. Ershov for helpful discussions. The work is partly supported by the Project from the Agreement for co-operation between the INRNE-BAS (Sofia) and JINR (Dubna). Four of the authors (D.N.K., A.N.A., M.K.G. and K.S.) are grateful for the support of the Bulgarian Science Fund under Contract No.~02--285 and one of them (D.N.K.) under Contract No.~DID--02/16--17.12.2009. The authors E.V.Z. and K.V.L. thank the Russian Foundation for Basic Research (Grants Nos. 12-01-00396 and 13-01-00060) for partial support. K.S. acknowledges the support of the Project BG-051P0001-3306-003. \end{acknowledgments}
\section{Introduction} It is inevitable that quantum processes played an important role in the very earliest stages of our universe. Possibly the most remarkable process of all is the decay of the quantum vacuum state. This is because the change in vacuum state can change the curvature of spacetime, and then vacuum decay becomes a fully non-perturbative quantum gravitational phenomenon. If we can provide a plausible understanding of vacuum decay in this context, then we may learn a little about quantum gravity. Some time ago \cite{Hawking:1981fz}, Hawking and Moss noticed that the simple picture of vacuum decay in a system with a scalar field coupled to gravity produces strange results when the field has a very flat potential. The usual picture of a bubble of true vacuum nucleating inside false vacuum with a distinct bubble wall \cite{CDL} no longer holds: as the potential becomes flatter, the bubble wall becomes thicker, and the field on either side of the wall becomes closer to either side of the maximum of the potential barrier, until the solution interpolating between each side of the potential maximum can no longer exist. Instead, it appears that the field `jumps' to the top of the potential barrier and hence the universe undergoes a uniform jump in spacetime geometry in which everything, up to and including the cosmological horizon, is affected. In a previous paper, we looked at the way vacuum decay occurs in the presence of a primordial black hole as the uniform field limit was approached \cite{Gregory:2020cvy}. In this paper, we consider vacuum decay in the presence of a primordial black hole and a uniform scalar field. We are led to make a new proposal, that vacuum decay is only permitted when the cosmological horizon does not grow in size. The set-up is as follows: Consider a scalar field theory on a curved background geometry described by Einstein gravity, with a standard Lagrangian for the scalar field $\phi$, \begin{equation} \label{langragian} \mathcal{L}_\phi = -\tfrac{1}{2}\partial^\mu\phi\partial_\mu\phi - V(\phi). \end{equation} Since the specifics of the potential are not relevant to our discussion, let us take a toy potential for $V$ of the form shown in Fig.~\ref{fig:potential}. In particular, $V$ has a false vacuum located at $\phi=\phi_{\rm F}$ and the true vacuum is at $\phi=\phi_{\rm V}$. The top of the potential barrier separating these two regions is at $\phi=\phi_{\rm T}$. If the potential is everywhere positive, then the stable, stationary solutions result in a de Sitter space. \begin{figure}[htb] \centering \includegraphics[width=0.5\linewidth]{Potential_phi.pdf} \caption{An example potential containing a true and a false vacuum, located at $\phi_{\rm V}$ and $\phi_{\rm F}$ respectively, separated by a barrier peaked at $\phi_{\rm T}$. } \label{fig:potential} \end{figure} If the field is initially in the false vacuum, there is a non-zero probability to tunnel through the barrier to the true vacuum. These are the bounce solutions of Coleman-de Luccia (CDL) \cite{coleman1977,callan1977,CDL}, which describe the nucleation of a bubble of true vacuum within a sea of false vacuum, i.e.\ a first order phase transition. The bubble subsequently expands under the influence of gravitation, converting the false vacuum to true \cite{CDL}, at least within a Hubble volume. The type of transition we are interested in here occurs when the field undergoes a fluctuation from the false vacuum up to the top of the potential barrier. 
This is known as the Hawking-Moss (HM) instanton \cite{Hawking:1981fz}, and involves an entire horizon volume of spacetime simultaneously undergoing a transition to a new state. In situations where the CDL bounce does not exist, the only non-perturbative way the system can evolve into the true vacuum is via a HM bounce. In the formal theory of vacuum decay \cite{coleman1977,callan1977}, the bounce solution asymptotes to the false vacuum state as the imaginary time becomes infinite. However, once gravity is included, {\it all} of the bounce solutions with positive false vacuum energy violate this condition due to the {\it finite} volume of Euclidean de Sitter space, but none violate this condition more so than the HM instanton. Consequently, various attempts have been made to understand the role of this instanton better. An early proposal was that the instanton solution represents the `creation of the universe from nothing' \cite{Vilenkin:1982de,Vilenkin:1983xq}. If this were true, then the instanton should play some role in the quantum wave function of the universe, and indeed the HM instanton gives the leading saddle-point contribution to the Hartle-Hawking wave function \cite{Hartle:1983ai}. The HM instanton also plays a role in a stochastic picture of vacuum decay. A particular feature of de Sitter space is that the large scale average of light fields, like the inflaton, satisfy a stochastic equation \cite{Starobinsky:1986fx,Starobinsky:1994bd}. It is therefore possible to evaluate the vacuum decay rate using stochastic techniques, and these reveal that the vacuum decay rate depends on the HM instanton in the WKB limit \cite{Linde:1993nz,Li:2007uc}. In yet another picture, the HM bounce can be interpreted as contributing to the thermal ensemble of states at the Hawking temperature of de Sitter space \cite{Brown:2007sd}. Motivated by this thermodynamical picture, it is important to examine the HM transition in the presence of a primordial black hole, which has its own additional thermodynamic profile. Indeed, it has been shown \cite{Gregory:2013hja,Burda:2015isa,Burda:2015yfa,Burda:2016mou} that the tunnelling rate for CDL bubbles is increased if a black hole is present. Thus, a natural question to ask is how the HM instanton picture and the stochastic formalism are altered in the presence of a black hole. In this paper we answer this question for the case of a primordial black hole in a single Hubble volume of the inflationary universe. In \S\ref{sec:HMBH} we generalise the HM instanton to include a black hole, and comment on how this impacts on the instanton action. We discover that in order for the non-perturbative description to remain well-defined we need an additional constraint on the instanton. We therefore make the following conjecture: \medskip \underline{Cosmological Area Conjecture:} {\it In an up-tunnelling transition, the cosmological horizon area can never increase}. \medskip Once we impose this constraint, the parameter space and instanton actions are remarkably reminiscent of the black hole bubbles of \cite{Gregory:2013hja}. In the following two sections we turn to the physical explanation of our conjecture: In \S\ref{sec:BHTD} we consider the thermodynamical implications of the tunnelling transition, computing the internal energy of the false and HM states. It turns out that the internal energy inside the cosmological horizon is directly related to the horizon area, thus can only increase if energy is being pumped in from beyond the horizon. 
This would correspond to an unnatural and artificially tuned set-up, so we conclude that an un-triggered decay cannot increase horizon area. In \S\ref{sec:SBH} we explore an alternate physical motivation, generalising the stochastic inflationary picture to include black holes. Using results from the analysis of slow roll inflation with black holes \cite{Chadburn:2013mta,Gregory:2017sor,Gregory:2018ghc}, we are again led to the conclusion that the area of the cosmological horizon cannot increase. We conclude in \S\ref{sec:concl}, discussing possible extensions of our analysis. Planck units are used throughout: $c=\hbar=k_B=G=1$. \section{Hawking-Moss instanton with a black hole seed} \label{sec:HMBH} The HM instanton represents a simultaneous up-tunnelling event from a false vacuum $\phi_F$, to the top of a potential barrier $\phi_T$. A natural way to generalise this picture is to include a seed primordial black hole in the false vacuum, and to allow a remnant black hole at the top of the potential. Typically, the masses of the black holes will be different, and this in turn will lead to a richer set of possibilities for the tunnelling process. Consider a HM tunnelling event from the false vacuum at $\phi_{\rm F}$ up to the top of the potential barrier $\phi_{\rm T}$, where the initial and final configurations contain a black hole. Assuming positive vacuum energy density $V(\phi)\neq 0$ for both states, the initial and final configurations are described by the Schwarzschild-de Sitter (SdS) solution, \begin{equation} ds^2 = -f dt^2 + f^{-1} dr^2 + r^2 d\Omega^2, \qquad f = 1-\frac{2m}{r} - \frac{r^2}{\ell^2}, \end{equation} where the radius of curvature $\ell$ is given by, \begin{equation} \label{curv} \ell= \sqrt{\frac{3}{8\pi V(\phi)}}. \end{equation} Note that since we are interested in up-tunnelling, we always have $\ell_{\rm F}>\ell_{\rm T}$. In these coordinates, the range for $r$ goes from the black hole horizon $r_h$ to the cosmological horizon $r_c$, i.e.\ $r\in[r_h,r_c]$, where $f(r_{c,h})=0$, and the roots can be expressed as: \begin{equation} \label{horizons} r_c = \frac{2}{\sqrt{3}}\ell\cos\left(\tfrac{\pi}{3}-b\right), \quad r_h = \frac{2}{\sqrt{3}}\ell\cos\left(\tfrac{\pi}{3}+b\right), \quad b = \frac{1}{3}\cos^{-1}\left(\frac{3\sqrt{3}m}{\ell}\right). \end{equation} The two horizons coincide at the Nariai mass $m_N$, \begin{equation} m_N = \frac{\ell}{3\sqrt{3}}, \end{equation} which places an upper bound on the mass parameter, $m\in[0,m_N]$. The tunnelling rate from the false vacuum to the top of the potential has the form $\Gamma \approx A e^{-B}$, where we focus on the tunnelling exponent $B$ rather than the pre-factor $A$. We follow Coleman and de Luccia in assuming that the tunnelling exponent is related to the change in Euclidean action $I$, \begin{equation} B = I_{\rm T} - I_{\rm F}. \end{equation} As we stated in the introduction, there is some evidence in support of this result from quantum cosmology and from stochastic inflation. In SdS, the action is totally determined by the areas of the horizons ${\cal A}_h$ and ${\cal A}_c$ \cite{Gregory:2013hja}, \begin{equation} I = -\frac{1}{4}\left(\mathcal{A}_c + \mathcal{A}_h\right). \end{equation} Since each horizon is associated with an entropy ${\cal S}={\cal A}/4$, the tunnelling rate is related to the change in total entropy $\Delta{\cal S}$ by the Boltzmann formula $\Gamma=Ae^{\Delta{\cal S}}$. 
This links the tunnelling process to gravitational thermodynamics, and provides further support for the validity of the tunnelling formula. The area of a horizon is $\mathcal{A} = 4\pi r^2$, so that using \eqref{horizons} the tunnelling exponent is, \begin{equation} B = \pi\left[\tfrac{4}{3}\left(\ell_{\rm F}^2 - \ell_{\rm T}^2\right) - \tfrac{2}{3}\ell_{\rm F}^2\cos(2b_{\rm F}) + \tfrac{2}{3}\ell_{\rm T}^2\cos(2b_{\rm T})\right]. \end{equation} Since $\ell_{\rm F}$ and $\ell_{\rm T}$ are fixed by the form of the potential $V$, we can consider the tunnelling exponent as a function of the seed and remnant masses, $B=B(m_{\rm F},m_{\rm T})$. The values of the tunnelling exponent at the extremes of the mass ranges are, \begin{equation} \begin{aligned} B_\mathrm{HM} \equiv B(0,0) = & \ \pi\left(\ell_{\rm F}^2-\ell_{\rm T}^2\right), \\ B(0,m_{N{\rm T}}) = & \ \pi\left(\ell_{\rm F}^2-\tfrac{2}{3}\ell_{\rm T}^2\right), \\ B(m_{N{\rm F}},0) = & \ \pi\left(\tfrac23 \ell_{\rm F}^2-\ell_{\rm T}^2\right),\\ B(m_{N{\rm F}},m_{N{\rm T}}) = & \ \pi\left(\tfrac{2}{3}\ell_{\rm F}^2-\tfrac{2}{3}\ell_{\rm T}^2\right), \end{aligned} \label{eq:Blimits} \end{equation} where the HM bounce is recovered for vanishing seed and remnant masses, and we note that the Nariai limits for the false vacuum and potential top are distinct, since $\ell_{{\rm F}} \neq \ell_{{\rm T}}$. For a black hole seed of a given mass, the remnant mass can lie anywhere in the range $[0,m_{N{\rm T}}]$. The most probable tunnelling event will therefore be the one with the smallest value of $B$. However, from \eqref{eq:Blimits} we see that if $\sqrt{\frac23} \ell_{\rm F} < \ell_{\rm T}(<\ell_{{\rm F}})$, it is possible for $B(m_{{\rm F}},0)$ to become negative for masses close to the Nariai limit. Negativity of an instanton action (or indeed the action dropping below one in Planck units) indicates a breakdown of the semi-classical description underlying the calculation. We therefore need an additional constraint on the tunnelling process that prevents this catastrophe. Our conjecture (that we motivate in the subsequent sections) is to impose that the area of the cosmological horizon should never increase during a transition, \begin{equation} \label{eqA} \Delta\mathcal{A}_c = \mathcal{A}_{c{\rm T}}-\mathcal{A}_{c{\rm F}}\leq 0. \end{equation} This is consistent with the idea that the instanton represents a thermal fluctuation, because the condition implies that the fluctuation can be contained entirely inside the original cosmological horizon. A fluctuation that was larger than the event horizon could not arise in a causal process. Once we impose this constraint, we find that there is a natural cut-off in parameter space that keeps the instanton solutions in the range consistent with the semi-classical approximation. \begin{figure} [htb] \centering \includegraphics[width=\linewidth]{SdS_L09.pdf} \caption{Dependence of the tunnelling exponent $B$ on the seed mass $m_{\rm F}$ for $\ell_{\rm T}/\ell_{\rm F}=0.9$. The blue lines correspond to different values of the remnant mass $m_{\rm T}$, starting at zero and increasing in steps of $0.1 m_{N{\rm T}}$ up to the Nariai limit. The area of the cosmological horizon is conserved along the red curve and decreases above it. Above the broken black line, the remnant black hole has a larger mass parameter than the seed.
} \label{fig:paramspaceMB} \end{figure} The parameter space of instantons is illustrated in figure \ref{fig:paramspaceMB}, where the ratio of the {\it Black-Hole-Hawking-Moss} (BHHM) instanton action to the pure HM action is plotted as a function of the seed primordial black hole mass, $m_{\rm F}$. As expected, for each seed mass there is a range of remnant masses with the action increasing as the remnant mass increases. The blue curves in the plot show how the action varies with seed mass, $m_{\rm F}$, for a given remnant mass, $m_{\rm T}$. As the seed mass increases, the action decreases until we reach the red curve boundary. This is the equal area curve, where the area of the cosmological horizon is the same for initial and final states. Note that in \cite{Gregory:2020cvy}, we had imposed this as a constraint on the BHHM instantons for convenience. Above the red curve, all instantons have $\Delta\mathcal{A}_c<0$, hence are allowed, but have higher action than the equal area curve, so are suppressed. Below the red curve, the cosmological horizon area would increase, which we argue is unphysical. Thus, the condition $\Delta\mathcal{A}_c\leq 0$ provides a lower bound on the allowed region of the parameter space, and is pleasingly familiar from the black hole bubbles of \cite{Gregory:2013hja}. The remaining bounds in the plot are fixed by recalling that the allowed masses in each of the SdS spacetimes are bounded by the appropriate Nariai mass, $m\in[0,m_N]$. The maximal tunnelling rate occurs at the point where $B$ is minimal, $m_{\rm F}=m_C$, where the cosmological horizon areas are identical and the remnant mass is zero: \begin{equation} m_C = \frac{\ell_{{\rm T}}(\ell_{{\rm F}}^2 - \ell_{{\rm T}}^2)}{2\ell_{{\rm F}}^2}\;. \label{mcrit} \end{equation} Simple analytic formulae are available in the small barrier approximation $\ell_{\rm F}\approx \ell_{\rm T}$. This approximation is equivalent to asserting that the height of the barrier relative to the false vacuum is small compared to its absolute value, i.e.\ $V(\phi_{\rm T})-V(\phi_{\rm F})\ll V(\phi_{\rm T})$. In this case, the maximal rate is obtained for the critical seed mass \eqref{mcrit}, $m_{C} \approx \ell_{\rm F} - \ell_{\rm T}$. The black hole horizon reduces approximately to the Schwarzschild value $r_{h{\rm F}} \approx 2m_C$, and the value of $B$ is given by, \begin{equation} B_C \approx 4\pi m_C^2 \quad \Rightarrow \quad \left(\frac{B}{B_\mathrm{HM}}\right)_C \approx \frac{2(\ell_{\rm F}-\ell_{\rm T})}{\ell_{\rm T}}. \end{equation} Before moving on to examine the physics of our conjecture, note that the line $m_{\rm T}=m_{N{\rm T}}$ in figure \ref{fig:paramspaceMB} does not close up with the equal area curve at $m_{\rm F}=m_{N{\rm F}}$. This is because the cosmological horizons in the Nariai limit are $r_{c{\rm F},{\rm T}}=\ell_{{\rm F},{\rm T}}/\sqrt{3}$, which clearly do not coincide for $\ell_{\rm F}\neq \ell_{\rm T}$. \section{Thermodynamics of the Hawking-Moss process} \label{sec:BHTD} In the tunnelling scenario, we have already provided additional motivation for our result by interpreting the decay probability as Boltzmann suppression of an entropy-lowering transition. We now seek to explore further physical explanations for our results.
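Before doing so, we note that the instanton quantities of Sec.~\ref{sec:HMBH} are straightforward to evaluate numerically; the following Python sketch checks the horizon radii \eqref{horizons}, the exponent $B(m_{\rm F},m_{\rm T})$, and the critical seed mass \eqref{mcrit}, for the illustrative values $\ell_{\rm F}=1$, $\ell_{\rm T}=0.9$ of figure \ref{fig:paramspaceMB}:

\begin{verbatim}
import numpy as np

def horizons(m, ell):
    # Trigonometric roots of f(r) = 1 - 2m/r - r^2/ell^2.
    b = np.arccos(3.0 * np.sqrt(3.0) * m / ell) / 3.0
    rc = 2.0 / np.sqrt(3.0) * ell * np.cos(np.pi / 3.0 - b)
    rh = 2.0 / np.sqrt(3.0) * ell * np.cos(np.pi / 3.0 + b)
    return rc, rh

def B(mF, mT, ellF=1.0, ellT=0.9):
    # B = I_T - I_F = pi (r_cF^2 + r_hF^2 - r_cT^2 - r_hT^2).
    rcF, rhF = horizons(mF, ellF)
    rcT, rhT = horizons(mT, ellT)
    return np.pi * (rcF**2 + rhF**2 - rcT**2 - rhT**2)

ellF, ellT = 1.0, 0.9
print(B(0.0, 0.0), np.pi * (ellF**2 - ellT**2))   # B_HM, equal
mNF = ellF / (3.0 * np.sqrt(3.0))
print(B(mNF, 0.0))     # negative here, since ellT > sqrt(2/3) ellF
mC = ellT * (ellF**2 - ellT**2) / (2.0 * ellF**2)
rcF, rhF = horizons(mC, ellF)
print(rcF, ellT)       # equal cosmological radii: Delta A_c = 0
print(B(mC, 0.0), 4.0 * np.pi * mC**2)   # B_C and its estimate
\end{verbatim}

The last line also illustrates that on the equal-area curve the exponent reduces to the seed horizon area, $B=\pi r_{h{\rm F}}^2\approx 4\pi m_C^2$.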
Note that the BHHM transition between black hole spacetimes with differing cosmological constants before and after the transition suggests that we explore the {\it extended} black hole thermodynamical description \cite{Kastor:2009wy,Dolan:2012jh,Dolan:2013ft,Kubiznak:2016qmn}, in which the cosmological `constant' determines a thermodynamic pressure $P=-\Lambda/(8\pi)$. Of course, if $P$ is to be truly dynamical, then $\Lambda$ cannot just be a constant term in the gravitational Lagrangian, rather, as we have here, the vacuum energy is determined by the expectation value of a scalar field, thus can obviously be allowed to vary. The thermodynamic pressure becomes a thermodynamic charge in the First Law, \begin{equation} \delta M = T \delta S + {\cal V} \delta P + ... \label{dsfirst} \end{equation} with an associated potential -- the thermodynamic volume ${\cal V}$ -- which can be computed for each of the horizons. This comes with the caveat that, although thermodynamical relationships exist for the individual horizons, the temperatures of the black hole and cosmological horizon are unequal and the total system cannot be in thermal equilibrium. Interestingly, as pointed out in \cite{Kastor:2009wy}, the black hole mass parameter, $m$, that is conventionally associated with the internal energy of the black hole in the original formulation of black hole thermodynamics, in this extended formulation leads to a variable $M$ that has the interpretation of enthalpy, $H$, due to the first law above containing a $+{\cal V} \delta P$ term, rather than the $-P\delta {\cal V}$ term associated to ``$dU$''. Although the extended thermodynamics of black holes is more conventionally explored in anti-de Sitter space, where the negative $\Lambda$ gives rise to a positive pressure, the extended thermodynamics of black holes in {\it de Sitter} space can equally well be considered, and was explored in \cite{Dolan:2013ft}, (see also \cite{Gregory:2017sor,Gregory:2018ghc}) with a Smarr relation and First Law \eqref{dsfirst} being derived. Now let us summarise the picture for the SdS spacetime. Computing the thermodynamic parameters locally at each horizon yields \begin{equation} M = m \;\;\;, \qquad S = \pi r_{h,c}^2 \;\;\;, \qquad T = \frac{1}{r_{h,c}} \left ( 1 - 3 \frac{r_{h,c}^2}{\ell^2} \right) \;\;\;, \qquad {\cal V} = \frac{4\pi}{3} r_{h,c}^3 \;, \end{equation} however, notice that this definition of the temperature yields a negative sign, and sometimes the modulus is taken. We will retain the signs here however for consistency of the expressions that follow. It proves useful to repackage these expressions in a `chemical' form, following \cite{Dolan:2012jh,Gregory:2019dtq} that uses only thermodynamic charges: \begin{equation} M = \sqrt{\frac{S}{4\pi}} \left ( 1+ \frac{8PS}{3} \right) \;\;\;, \qquad T = \frac1{4 \sqrt{\pi S}} \left ( 1+ 8PS \right) \;\;\;, \qquad {\cal V} = \frac43 \sqrt{\frac{S^3}{\pi}} \;. \end{equation} Let us now consider the internal energy bounded by the cosmological horizon; we can think of this as the total energy in the observable de Sitter universe. According to the thermodynamic expressions, this is \begin{equation} U = M - P{\cal V} = \sqrt{\frac{S}{4\pi}} \left ( 1+ \frac{8PS}{3} \right) - \frac{4P}3 \sqrt{\frac{S^3}{\pi}} = \sqrt{\frac{S}{4\pi}} \;, \end{equation} thus, the total internal energy of the SdS spacetime is determined by the entropy of the cosmological horizon. 
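The cancellation behind this simple result can be confirmed symbolically (a short \texttt{sympy} check of $U=M-P{\cal V}$, evaluated with the cosmological horizon quantities):

\begin{verbatim}
import sympy as sp

S, P = sp.symbols('S P', positive=True)
M = sp.sqrt(S / (4 * sp.pi)) * (1 + 8 * P * S / 3)
V = sp.Rational(4, 3) * sp.sqrt(S**3 / sp.pi)
print(sp.simplify(M - P * V))
# -> sqrt(S)/(2*sqrt(pi)), i.e. sqrt(S/(4*pi)) = r_c / 2
\end{verbatim}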
During a decay, the only way we can imagine the internal energy of the spacetime to increase is if there is an influx of energy from beyond the cosmological horizon. This would therefore not represent a spontaneous transition between vacua, but would be more analogous to a stimulated decay (and one would also have to take account of this input in any computation of a decay amplitude). However, it is natural to imagine that energy can be dissipated beyond the horizon, or that the decay gives an energy neutral budget. We therefore posit that the internal energy of the spacetime must not increase in any decay, hence $\delta S_c \leq 0$. This provides a natural constraint on the space of HM instantons. As we see from figure \ref{fig:paramspaceMB}, for a given seed mass, the preferred HM instanton either has no remnant black hole, or has a remnant, but conserves the internal energy of the observable de Sitter universe. \section{Stochastic tunnelling in the presence of a black hole} \label{sec:SBH} We now explore a very different approach, based on stochastic inflation, to support our premise that the cosmological horizon area decreases for the HM type of tunnelling process. We start from de Sitter space to review some of the basic premises, and then modify the stochastic formalism to include a population of primordial black holes. In the stochastic inflationary formalism, the inflaton field is averaged over large spatial scales in a spatially flat universe to produce a `coarse grained' effective cosmological model \cite{Starobinsky:1986fx,Starobinsky:1994bd}. The effective field $\phi(t)$ evolves by a stochastic equation \begin{equation} 3H\partial_t\phi=-\partial_\phi V+\xi,\label{sde} \end{equation} where $\xi$ is a Gaussian random function that arises from the effects of small-scale quantum fluctuations. The noise correlation function obtained from quantum field theory can be approximated by a local expression with diffusion coefficient $D$, \begin{equation} \langle \xi(t)\xi(t')\rangle=2D\delta(t-t')=\frac{9H^5}{4\pi^2}\delta(t-t'). \end{equation} If the field is released in the false vacuum $\phi_F$, and there is a potential barrier with the top at $\phi_{\rm T}$, then the stochastic source in Eq. (\ref{sde}) pushes the field across the top of the barrier. The probability of remaining on the false-vacuum side of the barrier falls, and therefore the false vacuum decays. The decay constant $\Gamma$ is given by a general formula \cite{Linde:1991sk,Li:2007uc} \begin{equation} \Gamma=\frac{1}{2\pi\gamma}\left(V''(\phi_F)|V''(\phi_{\rm T})|\right)^{1/2} e^{-\gamma(V(\phi_{\rm T})-V(\phi_F))/D}, \end{equation} where $\gamma$ is the effective friction coefficient multiplying $\partial_t\phi$ in Eq. (\ref{sde}): in our case $\gamma=3H$. This decay constant is of the form $Ae^{-B}$, with \begin{equation} B=\frac{8\pi^2\delta V}{ 3 H^4}\;, \end{equation} exactly as we have for the HM instanton when $\delta V=V(\phi_{\rm T})-V(\phi_{\rm F})\ll V$. Note that the tunnelling result only holds as long as $B\gg 1$. If the potential is very flat, the field does a random walk and reaches $\phi_{\rm T}$ on a timescale of $\phi_{\rm T}^2/H^3$ that does not depend on the barrier height. We have a stochastic picture of the HM transition, but our concern is how the stochastic decay affects the cosmological horizon area, ${\cal A}=4\pi/H^2$.
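(As a consistency check of the exponent, the combination $\gamma\,\delta V/D$ can be evaluated directly; $H$ and $\delta V$ below are arbitrary illustrative values.)

\begin{verbatim}
import numpy as np
H, dV = 0.1, 1.0e-5
gamma = 3.0 * H                      # friction in 3H dphi/dt = -V' + xi
D = 9.0 * H**5 / (8.0 * np.pi**2)    # from <xi xi> = 2 D delta(t - t')
print(gamma * dV / D, 8.0 * np.pi**2 * dV / (3.0 * H**4))   # equal
\end{verbatim}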
According to the stochastic inflationary formalism \cite{Starobinsky:1986fx,Starobinsky:1994bd}, the back reaction of the field on the metric implies that the Hubble expansion rate, $H(t)$, varies over large scales according to the Friedmann equation, \begin{equation} 3H^2=8\pi V,\label{friedman}\;. \end{equation} Starting with the field in the false vacuum, a change in potential $\delta V$ induces a change in the horizon area $\delta{\cal A}$, \begin{equation} \delta{\cal A}=-{8\pi{\cal A}\over 3H^2}\delta V. \end{equation} Therefore, stochastic evolution to the top of the potential barrier, which we have argued corresponds to the HM instanton transition, causes a decrease in horizon area. To extend these ideas to black holes in de Sitter space, we need a notion of the slow roll equation with a black hole, together with a time and radially dependent counterpart to the Friedmann equation. Fortunately, this problem was addressed for a single black hole with a slowly evolving scalar field in a sequence of papers \cite{Chadburn:2013mta,Gregory:2017sor,Gregory:2018ghc}. Physically, a black hole with a slowly evolving scalar field will be very close to a SdS spacetime, therefore the solution is expressed as a perturbation of SdS in time. This will not necessarily yield a solution for arbitrarily long timescales, but will give a good approximate solution in the same sense that slow roll inflation gives a good approximation to the inflationary universe. The first step is to identify a ``time'' coordinate in the SdS spacetime that will asymptote cosmological time beyond the cosmological horizon. This is done by identifying the direction in which the scalar field rolls. The challenge is that this coordinate must be regular at each of the black hole and cosmological horizon radii, $r_h$ and $r_c$ respectively. This was identified in \cite{Chadburn:2013mta,Gregory:2017sor,Gregory:2018ghc}, and interpolates between the local advanced Eddington time $v$ at the black hole horizon, and retarded time $U$ at the cosmological horizon. The time coordinate takes the form $T=t+h(r)$, (although any rescaling of this combination by a constant factor will also work). In \cite{Gregory:2017sor,Gregory:2018ghc}, it was shown that, provided the standard slow roll relations \cite{Liddle:1993fq} for the potential $V$ are satisfied, then $\phi$ approximately solves a modified slow roll equation: \begin{equation} 3\gamma \frac{d\phi}{ d T}=-\partial_\phi V,\label{clas} \end{equation} where \begin{equation} \gamma=\frac{r_c^2+r_h^2}{ r_c^3-r_h^3}. \end{equation} As pointed out in \cite{Gregory:2017sor}, $\gamma$ has the nice thermodynamical interpretation as being, up to a factor, the ratio of the total entropy divided by the thermodynamic volume of the intra-horizon SdS system. In order to obtain a stochastic system, we divide space into cells, and average as in stochastic inflation, but now include one black hole in each cell. Small scale quantum fluctuations will cause the field to evolve stochastically, and we replace Eq. (\ref{clas}) with \begin{equation} 3\gamma \frac{d\phi}{ d T}=-\partial_\phi V+\xi. \end{equation} By analogy with Eq.\ \eqref{sde}, we expect the noise correlation function to be of the form \begin{equation} \langle \xi(T)\xi(T')\rangle=2D\delta(T-T'). \end{equation} However, the particular form of the noise correlation function does not affect the argument which follows. We are particularly interested in how the scalar field back-reacts on the geometry, specifically, the area of the horizons. 
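As a quick numerical check, the friction $\gamma$ defined above reduces to the de Sitter value $H=1/\ell$ as $r_h\to 0$ (Python; $r_h=0.2$, $r_c=0.9$ is a representative SdS pair):

\begin{verbatim}
import numpy as np
gamma = lambda rh, rc: (rc**2 + rh**2) / (rc**3 - rh**3)
print(gamma(1.0e-8, 1.0))    # -> 1.0 = H for ell = 1
print(gamma(0.2, 0.9))       # a representative SdS value
\end{verbatim}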
In \cite{Gregory:2017sor}, the evolution of the horizons was analysed, and to leading order it was found that \begin{equation} \delta {\cal A}_i = -\frac{8\pi{\cal A}_i}{3 \gamma |\kappa_i|} \delta V, \label{horizonvariation} \end{equation} where $\kappa_i$ is the surface gravity of the horizon in question, explicitly: \begin{equation} \kappa_h=\frac{(r_c-r_h)(2r_h+r_c)}{ 2r_h(r_h^2+r_c^2+r_hr_c)}\;\;\;, \qquad \kappa_c=\frac{(r_h-r_c)(2r_c+r_h)}{ 2r_c(r_h^2+r_c^2+r_hr_c)}. \end{equation} We see therefore that \eqref{horizonvariation} implies that under stochastic evolution from an initial false vacuum $\phi_{\rm F}$ to $\phi_{\rm T}>\phi_{\rm F}$, with $\delta V>0$, the horizon areas {\it decrease}. This confirms our general proposal that the cosmological horizon shrinks during the up-tunnelling type of vacuum decay. An interesting corollary is that since the same qualitative behaviour of horizon area occurs at {\it each} horizon, the black hole violates the area theorem during HM vacuum decay. This could not happen for a purely classical process, and confirms the quantum nature of vacuum decay. The decoherence process of quantum fluctuations leads to entropy production by which the generalized second law may be satisfied \cite{Oshita:2017hsb}. \section{Conclusion} \label{sec:concl} We have seen that the Hawking-Moss, or up-tunnelling, types of transition extend naturally to vacuum decays seeded by black holes, as long as we impose a condition that the nucleation event can be contained within the original cosmological event horizon, i.e.\ the cosmological horizon does not increase in area. The vacuum decay rate is always enhanced by the black hole seed. The mass of the remnant black hole after vacuum decay can be zero, and the cosmological horizon shrinks, or the remnant mass is non-zero and the cosmological horizon stays the same size. Which of these outcomes occurs depends on the value of the seed black hole mass. The theory of stochastic inflation was extended to include primordial black holes in \S \ref{sec:SBH}. In stochastic inflation, the Hawking-Moss instanton gives the leading order approximation for calculating the probability flux across the potential barrier. The new theory implies that both the black hole and cosmological horizon areas decrease during up-tunnelling events. On the other hand, the stochastic picture allows less freedom in the choice of remnant mass than does the vacuum tunnelling picture. It would be interesting to generalise these results to rotating black holes, as has been explored for black hole bubbles in \cite{Oshita:2019jan}. It might seem that the outcomes would be similar; however, there are several important technical differences. Positivity of the tunnelling action was ensured here by imposing the thermodynamic constraint of decreasing internal energy or, alternatively, the argument from stochastic inflation that the horizon area not increase. For rotating black holes, the action of the instanton, related to the free energy, now contains a $\beta\Omega J$ term \cite{Chao:1998uj,Chao:1998hk,Wu:2004db}, dependent on the angular momentum and a potentially arbitrary periodicity of Euclidean time. Further, the scalar field in the Kerr-de Sitter background will now have superradiant modes \cite{Press:1972zz,Teukolsky:1974yv,Tachizawa:1992ue}, which will likely have a stronger effect on the system than any putative tunnelling decay process. We plan to study this system further.
The aim of this paper has been to push the theory of vacuum decay to its limits, and yet we find nothing unreasonable in the results. It would be of interest to use the phenomena discussed here to test the scope of theories of quantum gravity. As for actual applications to our universe, vacuum decay during inflation can take place when there is a secondary, `spectator' field present with a suitable false vacuum state. Since `flat' potentials are a common feature of most inflationary models of the early universe, the Hawking-Moss, or up-tunnelling, transition seems the most likely type in this situation. \acknowledgments This work was supported in part by the Leverhulme Trust [Grant No. RPG-2016-233] (RG/IGM/SP), by the STFC [Consolidated Grant ST/P000371/1] (RG/IGM), by JSPS Overseas Research Fellowships (NO) and by the Perimeter Institute for Theoretical Physics (RG/NO). Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science.
\section{Introduction} \label{sec:intro} Some pulsars are known to have glitches---occasional, sudden spin-up events that interrupt the normal, steady spin down. The first glitch was observed in Vela by \citet{1969Natur.222..228R}, and since then many glitches have been observed in other young pulsars \citep[e.g.,][]{2011MNRAS.414.1679E,2018arXiv180104332M}. Until recently, timing data around glitch epochs had been sparse due to observational constraints on major radio telescopes. In a remarkable campaign, Palfreyman and colleagues have used the Mount Pleasant 26-m radio telescope in Hobart, Tasmania and the 30-m telescope in Ceduna, South Australia to time Vela continuously for several years, with the specific purpose of studying its glitches. In 2016, a glitch event in the Vela pulsar was caught and observed with high time resolution, such that single pulses during the glitch were recorded for the first time \citep{2018Natur.556..219P}. Coincident with the glitch, there was an unusually broad pulse, followed by a null pulse, then two pulses with unexpectedly low linear polarization fraction. Subsequent pulses in a 2.6 s interval arrived later than usual pulses. Since radio emission is believed to be connected to magnetospheric currents and pair production \citep[e.g.,][]{2008ApJ...683L..41B,2020PhRvL.124x5101P}, the observed changes suggest that the overall magnetospheric machinery was affected by the glitch. Glitches are believed to be caused by a sudden transfer of angular momentum from the neutron superfluid to the rest of the star. While the star is spun down continuously due to external torques, the rotation of the neutron superfluid is fixed as long as the quantized vortices are pinned to the crustal ion lattice \citep{1975Natur.256...25A} or to superconducting proton flux tubes \citep{1998ApJ...492..267R}. Starquakes have been proposed as a mechanism to simultaneously unpin a multitude of vortices and trigger a glitch \citep[e.g.,][]{1976ApJ...203..213R,1996ApJ...457..844L,2002MNRAS.333..613L,2010ApJ...715L.142E}. \cite{2020ApJ...897..173B} suggested that the same starquake that triggered the 2016 glitch in Vela could also dramatically alter the radio emission for a short amount of time. In this scenario, the quake launches \alfven waves into the magnetosphere, and as the waves propagate along magnetic field lines, they may generate local regions with enhanced current density that ignite additional pair production. This would change the pulse profile, and may even quench the radio emission if pair production further away on open field lines causes a backflow which screens the polar gap. Pair production on closed field lines might also modify the pulse profile when these pair-producing regions are very close to the separatrix. If \alfven waves on closed field lines keep bouncing back and forth, they may keep producing pairs and influence the radio emission for a long time. This should be constrained by the observed duration of the radio pulse disturbance. In addition, \cite{2020ApJ...897..173B} predict a weak X-ray burst to accompany the magnetospheric disturbance associated with the 2016 Vela glitch. The duration of the burst should be comparable to the dissipation timescale of \alfven waves in the closed magnetosphere. In this paper we study one of the mechanisms of this dissipation. The energy of \alfven waves may be removed through several channels.
Firstly, in the closed zone, as waves bounce back and forth, counter-propagating \alfven waves lead to a turbulent cascade, and energy is dissipated on small scales \citep[e.g.,][]{2019ApJ...881...13L}. Small-scale \alfven waves can also be more efficiently dissipated by Landau damping \citep{1986ApJ...302..120A}. Secondly, some wave energy may be absorbed by the crust \citep{2015ApJ...815...25L}. Thirdly, \alfven wave packets propagating along dipole field lines become increasingly oblique and dephased, leading to an enhanced current density carried by the wave packet. If there are not enough $e^{\pm}$ pairs in the magnetosphere to conduct the current, dissipation may happen through pair production or diffusion of the wave front \cite{2020ApJ...897..173B}. The charge starvation may also cause \alfven waves to convert to electromagnetic modes. Fourthly, \alfven waves could convert to fast magnetosonic waves in a plasma-filled magnetosphere; the latter are not confined to the field lines and can escape from the magnetosphere. In this paper, we focus on the fourth channel and quantify the efficiency of \alfven waves converting to fast waves in a plasma-filled, dipolar magnetosphere in the force-free regime. The paper is organized as follows. In \S\ref{sec:method} we describe our numerical method and setup. We show the results in \S\ref{sec:non-rotating} and \S\ref{sec:rotating}, for a non-rotating dipolar magnetosphere and a rotating force-free magnetosphere, respectively. We apply the results to the Vela pulsar in \S\ref{sec:Vela}, and conclude with more discussion in \S\ref{sec:conclusion}. \section{Force-free formalism and numerical method}\label{sec:method} In a plasma-filled pulsar magnetosphere, the electromagnetic energy is much larger than the particle kinetic energy, so force-free electrodynamics is a good approximation (except at current sheets). In this regime, the force balance equation is simply \begin{equation} \label{eq:FF_constraint} \rho\mathbf{E}+\mathbf{J}\times\mathbf{B}=0, \end{equation} and the evolution of the electromagnetic field is governed by the following equations \citep[e.g.,][]{1999astro.ph..2288G,2002luml.conf..381B} \begin{align} \frac{\partial\mathbf{E}}{\partial t}&= \nabla\times\mathbf{B}-\mathbf{J},\label{eq:FF_dEdt}\\ \frac{\partial\mathbf{B}}{\partial t}&=- \nabla\times\mathbf{E},\label{eq:FF_dBdt}\\ \mathbf{J}&=\nabla\cdot\mathbf{E}\frac{\mathbf{E}\times\mathbf{B}}{B^2}+\frac{(\mathbf{B}\cdot\nabla\times\mathbf{B}-\mathbf{E}\cdot\nabla\times\mathbf{E})\mathbf{B}}{B^2},\label{eq:FF_J} \end{align} with the constraints $\mathbf{E}\cdot\mathbf{B}=0$ and $E<B$ (we employ Heaviside-Lorentz units and set $c=1$). For simplicity, we only consider axisymmetric magnetospheres and axisymmetric perturbations in this work. We first numerically obtain the steady state of a force-free magnetosphere, then launch \alfven waves by applying a small toroidal displacement on the neutron star surface over a small angular range $\theta\in(\theta_1,\theta_2)$.
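(As an aside, Eq.~(\ref{eq:FF_J}) translates directly into code; a minimal single-point sketch in Python/NumPy, with the divergence and curls assumed to be supplied by the finite-difference stencils, is given below.)

\begin{verbatim}
import numpy as np

def force_free_current(E, B, divE, curlE, curlB):
    """Force-free current density (Heaviside-Lorentz units, c = 1).

    E, B, curlE, curlB are 3-vectors at one grid point; divE is a scalar.
    """
    B2 = np.dot(B, B)
    drift = divE * np.cross(E, B) / B2            # charge advection term
    parallel = (np.dot(B, curlB) - np.dot(E, curlE)) * B / B2
    return drift + parallel
\end{verbatim}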
More specifically, we assume a disturbance in the angular velocity of the neutron star surface in the following form during a finite time period $T$: \begin{equation}\label{eq:perturbation} \delta\omega= \begin{cases} \displaystyle \delta\omega_0e^{-\frac{1}{2}\left(\frac{\theta-\theta_m}{\sigma}\right)^2}\sin(2\pi n t/T), & 0\le t\le T,\\ 0, & t>T, \end{cases} \end{equation} where the Gaussian profile with $\theta_m=(\theta_1+\theta_2)/2$ and $\sigma=|\theta_2-\theta_1|/6$ allows the perturbation to go to zero smoothly at the boundaries $\theta_1$ and $\theta_2$; $n$ is an integer representing the number of wave cycles during time $T$. This generates an electric field perturbation at the stellar surface \begin{equation}\label{eq:perturbation_deltaE} \delta E_{\theta}=-\delta\omega r_*\sin\theta B_{0r}, \end{equation} where $r_*$ is the stellar radius and $\mathbf{B}_0$ is the background magnetic field. The magnitude of the magnetic perturbation at the center of the wave packet is \begin{equation} \left(\frac{\delta B}{B_0}\right)_{r_*}=\delta\omega_0 r_* \sin\theta_m. \end{equation} We then follow the subsequent propagation and evolution of the wave packet. We use our code \emph{Coffee} (COmputational Force FreE Electrodynamics)\footnote{\href{https://github.com/fizban007/CoffeeGPU}{https://github.com/fizban007/CoffeeGPU}} to numerically solve the force-free equations \citep{2020ApJ...893L..38C}. To suit our study of axisymmetric cases, we developed a 2D version using spherical coordinates $(r,\theta)$. The basic algorithm is similar to \citet{2015PhRvL.115i5002E, 2016ApJ...817...89Z}: we use fourth-order central finite difference stencils on a uniform $(\log r, \theta)$ grid and a five-stage fourth-order low storage Runge-Kutta scheme for time evolution \citep{carpenter_fourth-order_1994}. We use hyperbolic divergence cleaning \citep{2002JCoPh.175..645D} to enforce $\nabla\cdot\mathbf{B}=0$ so that the error is advected away at $c$ and damped at the same time. To enforce the force-free condition, we explicitly remove any $\mathbf{E}_{\parallel}$ by setting $\mathbf{E}\to\mathbf{E}-(\mathbf{E}\cdot\mathbf{B})\mathbf{B}/B^2$ at every time step \footnote{The $\mathbf{E}_{\parallel}$ cleaning is done in addition to evaluating parallel force-free current, not to replace it as in \citet{2006ApJ...648L..51S}.}, and when $E>B$ happens, we reset $\mathbf{E}$ to $(B/E)\mathbf{E}$. We apply standard sixth order Kreiss-Oliger numerical dissipation to all hyperbolic variables to suppress high frequency noise from truncation error \citep{kreiss_methods_1973}. At the outer boundary, we implement an absorbing layer to damp all outgoing electromagnetic waves \citep[e.g.,][]{2015MNRAS.448..606C,2019MNRAS.487.4114Y}. The code is parallelized and optimized to run on GPUs as well as CPUs with excellent scaling. Our simulation grid for runs in \S\ref{sec:non-rotating} has 3360 cells equally spaced in $\log r$ between $r=e^{-0.2}r_*=0.82r_*$ and $r=e^{4.8}r_*=121.51r_*$ (absorbing layer is not used), and 2048 cells uniformly distributed in $\theta\in(0,\pi)$. For runs in \S\ref{sec:rotating}, the simulation grid has 4096 cells in $\theta$ direction, and 7680 cells in $\log r$ direction between $r=e^{-0.2}r_*=0.82r_*$ and $r=e^{5.7}r_*=298.87r_*$, within which the last 15 cells are absorbing layers. \section{Alfv\'{e}n waves in a non-rotating dipole field}\label{sec:non-rotating} Let us first consider a non-rotating dipole. 
The magnetic field is purely poloidal, and can be written as \begin{equation} \mathbf{B}_0=\frac{\nabla\psi\times\hat{\phi}}{r\sin\theta}, \end{equation} where $\psi=\mu\sin^2\theta/r$ is the flux function, $\mu$ is the magnetic dipole moment, and $\hat{\phi}$ is the unit vector along the azimuthal direction. Magnetic field lines lie on constant $\psi$ surfaces; they are described by the equation \begin{equation} r=r_{\rm eq}\sin^2\theta, \end{equation} where $r_{\rm eq}$ is the radius where the field line intersects the equatorial plane. Under the axisymmetry constraint, the wave vectors need to be purely poloidal. As a result, \alfven waves have toroidal $\delta\mathbf{B}$ and poloidal $\delta\mathbf{E}$, while fast modes have toroidal $\delta\mathbf{E}$ and poloidal $\delta\mathbf{B}$. Therefore, the two modes are easily distinguished by their polarizations. \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{dw0_10hr-EBplot050.png}\\ \includegraphics[width=0.95\columnwidth]{dw0_10hr-EBplot100.png}\\ \includegraphics[width=0.95\columnwidth]{dw0_10hr-EBplot150.png} \caption{Snapshots of wave field evolution in a non-rotating dipole field. In this example, the initial Alfv\'{e}nic perturbation has a duration of $T=5r_*/c$ with only one full cycle, and is launched inside the flux tube whose equatorial intersection is bounded by $r_{\rm eq}=10r_*$ and $15r_*$; the center of the wave packet passes through $r_m=12.1r_*$. From top to bottom three different time slices are shown. Left panels show $B_{\phi}/B$, a manifestation of the \alfven mode; right panels show $E_{\phi}/B$, a manifestation of the fast mode. In the plot, lengths are in units of $r_*$ and time is in units of $r_*/c$ (same below).} \label{fig:nonrotating-fields} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{dw0_10hr-energy.png} \caption{Energy evolution as a function of time for different wave components, corresponding to the example shown in Figure \ref{fig:nonrotating-fields}. Blue dashed line: total magnetic energy in all the wave components; magenta dotted line: total electric energy in all the wave components; black solid line: total electromagnetic energy in all the wave components; red solid line: electric energy of the fast mode; black dotted line: magnetic energy of the fast mode; black dashed line: total electromagnetic energy in the fast mode. All values have been normalized to the initial injected energy $W_0$.} \label{fig:nonrotating-energy} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{eff0_dw.png}\\ \includegraphics[width=\columnwidth]{eff0_dB_log.png} \caption{Top: efficiency of converting to fast mode $W_F/W_0$ after the first passage of the \alfven wave through the equator, in a non-rotating dipole background field. Horizontal axis is the initial perturbation magnitude $\delta\omega_0$. Red points correspond to waves launched on a flux tube with $r_m=12.1r_*$; blue points correspond to waves launched on a flux tube with $r_m=26.6r_*$. Bottom: the same conversion efficiency, plotted against $(\delta B/B_0)_{\rm eq}$, the theoretical \alfven wave amplitude at the equator. The dashed line has the expression $W_F/W_0=0.2(\delta B/B)_{\rm eq}^2$.} \label{fig:nonrotating-eff_dw_dB} \end{figure} Figure \ref{fig:nonrotating-fields} shows one example of an \alfven wave packet propagating out along the dipole field lines. The group velocity of the wave is $c$, directed along the background magnetic field.
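For reference, the background dipole and the polarization diagnostics plotted in Figure \ref{fig:nonrotating-fields} can be written as the following short sketch (ours); the component expressions follow from $\mathbf{B}_0=\nabla\psi\times\hat{\phi}/(r\sin\theta)$ with $\psi=\mu\sin^2\theta/r$, with field components ordered as $(r,\theta,\phi)$.
\begin{verbatim}
import numpy as np

def dipole_B0(r, theta, mu_dip=1.0):
    """Poloidal dipole field from the flux function psi = mu sin^2(theta)/r:
    B_r = (1/(r^2 sin)) d(psi)/d(theta),  B_theta = -(1/(r sin)) d(psi)/dr."""
    B_r = 2.0 * mu_dip * np.cos(theta) / r**3
    B_th = mu_dip * np.sin(theta) / r**3
    return B_r, B_th

def mode_diagnostics(E, B):
    """Axisymmetric mode diagnostics as in Figure 1: B_phi/B traces the
    Alfven mode, E_phi/B traces the fast mode."""
    Bmag = np.sqrt(np.sum(B * B, axis=0))
    return B[2] / Bmag, E[2] / Bmag   # components ordered (r, theta, phi)
\end{verbatim}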
For small amplitude \alfven waves, energy conservation implies that $\delta B^2 A=const.$, where $A$ is the cross-sectional area of the flux tube in which the \alfven wave is launched. Since the poloidal magnetic flux of the background field is conserved, $B_0 A=const.$, we have $\delta B\propto B_0^{1/2}\propto r^{-3/2}$, and $\delta B/B_0\propto r^{3/2}$, namely, the relative amplitude of the wave grows as it propagates to large radii. Conversion to fast mode becomes significant when $\delta B/B_0$ gets large, and peaks near the equator where $\delta B/B_0$ is largest. This can be understood qualitatively from the following picture: the launched \alfven wave is initially guided purely by magnetic tension, but when $\delta B / B$ approaches 1, the pressure of the perturbation $\delta B^2 \sim B^2$ can deform the background poloidal field, launching a wave driven by magnetic pressure and tension (fast mode). The total wave energy can be calculated from (Appendix \ref{sec:wave_energy}) \begin{equation}\label{eq:wave_energy} W=\int\frac{1}{2}(\delta\mathbf{B}^2+\delta \mathbf{E}^2)\,dV, \end{equation} and the energy of the fast mode is \begin{equation} W_F=\int\frac{1}{2}(\delta\mathbf{B}_p^2+\delta \mathbf{E}_{\phi}^2)\,dV, \end{equation} where $\delta\mathbf{B}_p$ denotes the poloidal components of $\delta\mathbf{B}$, and $\delta \mathbf{E}_{\phi}$ is the toroidal component of $\delta\mathbf{E}$. Figure \ref{fig:nonrotating-energy} shows the time evolution of the wave energies for the example of Figure \ref{fig:nonrotating-fields}. We can see periodic increases in the fast wave energy (black dashed line); this corresponds to each passage of the \alfven wave packet through the equator where most of the fast wave is generated. The total wave energy (black solid line) should in principle be conserved, but we observe stair-like decreases around $t=35r_*/c$ and $t=70r_*/c$. This is because when the \alfven wave packet propagates back toward the stellar surface, it is strongly dephased \citep{2020ApJ...897..173B}; both the dephasing and the spatial contraction following the dipole field lines lead to wave variation happening on very small scales. Numerical dissipation becomes important when these small scale structures are not well resolved by the grid. We do find that the dissipation decreases as we increase the resolution. Conversion to fast mode, on the other hand, does not depend on resolution at all (Appendix \ref{sec:resolution}). Most of the conversion happens on the first passage of the \alfven wave through the equator, before the numerical dissipation effect becomes important. To quantify the efficiency of \alfven waves converting to fast mode, we measure the fast wave energy $W_F$ at the end of the first passage of the \alfven wave through the equator, and compare that with the initially injected \alfven wave energy $W_0$. We carry out a series of experiments by launching \alfven waves with different magnitudes and on different flux tubes. The top panel of Figure \ref{fig:nonrotating-eff_dw_dB} shows the measured conversion efficiency, plotted against the initial perturbation magnitude. The two trends correspond to waves on two different flux tubes.
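A minimal sketch (ours) of how the total and fast-mode wave energies defined above can be evaluated on an axisymmetric $(r,\theta)$ grid, using the volume element $dV=2\pi r^2\sin\theta\,dr\,d\theta$:
\begin{verbatim}
import numpy as np

def wave_energies(dE, dB, r, theta):
    """Total and fast-mode wave energies on an axisymmetric (r, theta) grid.

    dE, dB : perturbation fields, shape (3, Nr, Nth), components (r, theta, phi)
    r      : radii, shape (Nr,);  theta : colatitudes, shape (Nth,)
    """
    # axisymmetric volume element dV = 2 pi r^2 sin(theta) dr dtheta
    dV = 2.0 * np.pi * r[:, None] ** 2 * np.sin(theta)[None, :]
    u_tot = 0.5 * np.sum(dE**2 + dB**2, axis=0)       # total energy density
    u_fast = 0.5 * (dB[0]**2 + dB[1]**2 + dE[2]**2)   # poloidal dB, toroidal dE
    W = np.trapz(np.trapz(u_tot * dV, theta, axis=1), r)
    W_F = np.trapz(np.trapz(u_fast * dV, theta, axis=1), r)
    return W, W_F
\end{verbatim}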
When we instead plot the efficiency against the theoretically computed \alfven wave amplitude at the equator, $(\delta B/B_0)_{\rm eq}=(\delta B/B_0)_{r_*}(r_m/r_*)^{3/2}$, where $r_m=r_*/\sin^2\theta_m$ is the radius at which the center of the \alfven wave packet passes through the equator, then all the points lie on one single trend, as shown in the bottom panel of Figure \ref{fig:nonrotating-eff_dw_dB}. The measured efficiency $W_F/W_0$ has very little dependence on the angular width of the initial \alfven wave perturbation $|\theta_2-\theta_1|$, as long as $|\theta_2-\theta_1|\ll 1$. The conversion efficiency does depend on the total duration $T$ of the \alfven wave perturbation: when $T$ becomes very short, the conversion efficiency drops. In the regime $T\ll r_m/c$, the wave packet has a small length compared to the radius of curvature of the field line, so the WKB approximation is applicable. In the WKB limit, the wave evolves adiabatically on the \alfven eigenstate; the expected conversion efficiency should go to zero. But for $T\gtrsim0.2r_m/c$, $W_F/W_0$ only varies slowly with $T$. We also find that the conversion efficiency does not depend on the wavelength $\lambda_{\parallel}=cT/n$ in this case. The scaling of conversion efficiency with $T$ and $\lambda_{\parallel}$ is shown in Figure \ref{fig:eff_lambda_nonrotating}. \begin{figure} \centering \includegraphics[width=\columnwidth]{eff_lambda_non_rotating.png} \caption{Scaling of the conversion efficiency as a function of the wavelength $\lambda_{\parallel}$, for waves launched on a fixed flux tube with $r_m=12.1r_*$. Blue points correspond to wave packets with a single wavelength, i.e., the perturbation duration $T=\lambda_{\parallel}/c$; the red dots correspond to wave trains with a fixed total duration $T=5r_*/c$.} \label{fig:eff_lambda_nonrotating} \end{figure} The above results suggest that the conversion efficiency only depends on $(\delta B/B_0)_{\rm eq}$ for sufficiently long wave trains. This is essentially a consequence of the self-similarity of the dipole field. Since most of the conversion happens at large radii, especially when the \alfven wave packet passes through the equator, the initial location of wave launch becomes unimportant. At small \alfven wave amplitude, we find that $W_F/W_0\propto(\delta B/B_0)_{\rm eq}^2$. This is consistent with the three-wave interaction theory \citep[e.g.,][]{1998PhRvD..57.3219T,2019MNRAS.483.1731L}. An \alfven wave $A$ can convert to a forward-propagating fast mode $F$ and another backward-propagating \alfven mode $A_1$ (we can see a small amplitude backward-propagating \alfven mode in the bottom row of Figure \ref{fig:nonrotating-fields}). The amplitude of the fast mode satisfies \begin{equation} \delta E_{F}\propto\delta E_{A}\delta E_{A_1}. \end{equation} Since $\delta E_{A_1}$ is generated by $\delta E_A$ due to propagation along curved field lines, we see that $\delta E_{F}\propto\delta E_{A}^2$. This leads to $W_F/W_0\propto\delta B_A^2$, consistent with the quadratic relation we see in the bottom panel of Figure \ref{fig:nonrotating-eff_dw_dB}. A caveat is that theoretical analysis of three-wave interactions is usually carried out in a uniform background magnetic field, and is thus, strictly speaking, only applicable when the wavelengths are much smaller than the length scales of field variation. To study relatively large wavelength waves in a dipole field as we do here, numerical simulation is necessary.
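To make the scaling concrete, the following sketch (ours) evaluates the theoretical equatorial amplitude and the corresponding small-amplitude conversion efficiency; the prefactor 0.2 is the empirical fit shown as the dashed line in Figure \ref{fig:nonrotating-eff_dw_dB}, and $\delta\omega_0$ is expressed in units of $c/r_*$.
\begin{verbatim}
import numpy as np

def equatorial_amplitude(dw0, theta_m, r_star=1.0):
    """(dB/B0) at the equator for a wave launched at colatitude theta_m with
    surface perturbation amplitude dw0 (in units of c/r_*), combining the
    surface amplitude with the dB/B0 ~ r^{3/2} growth along dipole field lines."""
    dB_B_surface = dw0 * r_star * np.sin(theta_m)    # (dB/B0) at r = r_*
    r_m = r_star / np.sin(theta_m) ** 2              # equatorial crossing radius
    return dB_B_surface * (r_m / r_star) ** 1.5, r_m

def efficiency_nonrotating(dB_B_eq):
    """Empirical small-amplitude fit W_F/W_0 = 0.2 (dB/B)_eq^2,
    valid for (dB/B)_eq well below unity."""
    return 0.2 * dB_B_eq ** 2

# Example: packet center crossing the equator at r_m ~ 12.1 r_*
dB_B_eq, r_m = equatorial_amplitude(dw0=0.01,
                                    theta_m=np.arcsin((1.0 / 12.1) ** 0.5))
print(r_m, dB_B_eq, efficiency_nonrotating(dB_B_eq))
\end{verbatim}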
At large \alfven wave amplitude, $(\delta B/B)_{\rm eq}>1$, the wave interaction becomes highly dynamic, and the result deviates from the above perturbation theory. Some field lines may be opened up, creating a current sheet that eventually dissipates through reconnection. When $(\delta B/B)_{\rm eq}>1$ but the energy of the \alfven wave packet $\mathcal{E}_A$ is small compared to the magnetospheric energy $\mathcal{E}_B(r_{\rm eq})$ at $r_{\rm eq}$, only a small portion of the field lines open up near the equator, which then quickly reconnect and relax back. However, when $\mathcal{E}_A>\mathcal{E}_B(r_{\rm eq})$, the \alfven wave packet can break out from the magnetosphere and eject a plasmoid into the pulsar wind. This was recently studied by \citet{2020ApJ...900L..21Y} in the context of fast radio bursts produced by the Galactic magnetar SGR 1935+2154 \citep{2020Natur.587...54T,2020Natur.587...59B}. \section{Alfv\'{e}n waves in a rotating dipole field}\label{sec:rotating} \begin{figure} \centering \includegraphics[width=0.7\columnwidth]{steady.png} \caption{Force-free steady state of a rotating dipolar magnetosphere. Thin solid black lines are poloidal field lines and color represents $B_{\phi}$. The light cylinder is at $50r_*$ (denoted by the vertical dashed line).} \label{fig:rotating-steady} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{dw0_01r1_10-dEBplot020.png}\\ \includegraphics[width=\columnwidth]{dw0_01r1_10-dEBplot060.png} \caption{Snapshots of wave field evolution in the rotating dipole field of Figure \ref{fig:rotating-steady}. In this example, the initial Alfv\'{e}nic perturbation has a duration of $T=5r_*/c$ with only one full cycle, and is launched inside the flux tube whose equatorial intersection is bounded by $r_{\rm eq}=10r_*$ and $15r_*$; the center of the wave packet passes through $r_m=12.1r_*$. This is the same flux tube as in Figure \ref{fig:nonrotating-fields}. From top to bottom two different time slices are shown. Left panels show $\delta B_{\phi}/B_0$, and right panels show $\delta E_{\phi}/B_0$. Note that the spatial scales are different for the top and bottom panels.} \label{fig:rotating-dw0.01r1_10-fields} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{dw0_01r1_10-energy1.png} \caption{Wave energy evolution for the example shown in Figure \ref{fig:rotating-dw0.01r1_10-fields}. Blue line corresponds to wave energy measured inside the flux surface $\psi=\psi(r_*,\theta_1)$ where the \alfven wave is launched; red line corresponds to wave energy measured outside this flux surface; black dashed line is the sum of the two.} \label{fig:rotating-dw0.01r1_10-energy} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{eff_dB_log.png} \caption{Measured efficiency of \alfven waves converting to fast waves in a rotating dipolar magnetosphere (triangles and stars), plotted together with the non-rotating measurements of Figure \ref{fig:nonrotating-eff_dw_dB} (red and blue dots). The dashed line has the expression $W_F/W_0=0.2(\delta B/B)_{\rm eq}^2$. Cyan triangles are measured for \alfven waves launched on a flux tube with $r_m=6.8r_*$; magenta triangles have $r_m=12.1r_*$; orange triangles have $r_m=24.2r_*$. All three have the same background as shown in Figure \ref{fig:rotating-steady}.
Yellow stars are measured for \alfven waves launched on the same flux tube as the cyan triangles, but the background pulsar angular velocity is doubled.} \label{fig:eff_dB_log} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{eff_wrm.png} \caption{Conversion efficiency of small-amplitude \alfven waves, plotted against $\omega_* r_m/c$. The black dashed line has the expression $W_F/W_0=0.8(\omega_*r_m/c)^2$.} \label{fig:eff_wrm} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{dw0_01r1_25-dEBplot050.png}\\ \includegraphics[width=\columnwidth]{dw0_01r1_25-dEBplot065.png} \caption{Another example of wave field evolution in the rotating dipole field of Figure \ref{fig:rotating-steady}. In this example, the initial Alfv\'{e}nic perturbation has a duration of $T=5r_*/c$ with only one full cycle, and is launched inside the flux tube whose equatorial intersection is bounded by $r_{\rm eq}=29r_*$ and $43r_*$ in the rotating magnetosphere. From top to bottom two different time slices are shown. Left panels show $\delta B_{\phi}/B_0$, and right panels show $\delta E_{\phi}/B_0$.} \label{fig:rotating-dw0.01r1_25-fields} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{eff_lambda_log_line1.png} \caption{Conversion efficiency as a function of the wavelength $\lambda_{\parallel}$ along the magnetic field line, for small-amplitude \alfven waves launched on a flux tube with $r_m=12.1r_*$ in the equilibrium of Figure \ref{fig:rotating-steady}.} \label{fig:eff_lambda} \end{figure} Now let us consider the case of a rotating magnetosphere. Figure \ref{fig:rotating-steady} shows an example of the steady state field configuration for an aligned dipole rotator. We assume that the light cylinder is located at a radius $r_{\rm LC}=50r_*$. The overall field structure is consistent with, e.g., \citet{2006ApJ...648L..51S}. Field lines that go through the light cylinder open up; in this region the magnetic field develops a toroidal component and becomes increasingly toroidal at large distances. A current sheet exists on the equatorial plane outside the light cylinder. Field lines that are closed remain inside the light cylinder; in this region the magnetic field is purely poloidal, but there is an electric field \begin{equation} \mathbf{E}_0=-(\pmb{\omega}_*\times\mathbf{r})\times\mathbf{B}_0 \end{equation} that ensures the plasma in the closed zone corotates with the star. The field line separating the closed zone and the open zone is usually called the separatrix; its tip at the light cylinder, where the equatorial current sheet begins, is called the Y point. Similarly to the non-rotating dipole case, we launch \alfven waves in the closed zone by introducing a small perturbation in the stellar surface angular velocity according to Equation (\ref{eq:perturbation}). Figure \ref{fig:rotating-dw0.01r1_10-fields} shows one example of the wave field evolution. Here the \alfven wave packet is launched on a flux tube that is sufficiently far away from the Y point. In the rotating case both the \alfven mode and the fast mode can involve all six $\delta \mathbf{E}$ and $\delta\mathbf{B}$ components. Nevertheless, we find that the \alfven mode is dominated by $\delta B_{\phi}$ while the fast mode is dominated by $\delta E_{\phi}$, so we plot these field components in Figure \ref{fig:rotating-dw0.01r1_10-fields}.
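For completeness, the corotation field introduced above can be evaluated componentwise as in the following sketch (ours); with $\pmb{\omega}_*=\omega_*\hat{z}$ and a purely poloidal $\mathbf{B}_0$, the field $\mathbf{E}_0=-(\pmb{\omega}_*\times\mathbf{r})\times\mathbf{B}_0$ is purely poloidal as well.
\begin{verbatim}
import numpy as np

def corotation_E(r, theta, B_r, B_th, omega_star):
    """Corotation field E0 = -(omega x r) x B0 inside the closed zone (c = 1).

    With v = omega_* r sin(theta) phi_hat and poloidal B0 = (B_r, B_th, 0),
    E0 = B0 x v has components E_r = B_th v_phi, E_th = -B_r v_phi, E_phi = 0.
    """
    v_phi = omega_star * r * np.sin(theta)    # corotation velocity
    E_r = B_th * v_phi
    E_th = -B_r * v_phi
    E_phi = np.zeros_like(v_phi)
    return E_r, E_th, E_phi
\end{verbatim}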
The top row of Figure \ref{fig:rotating-dw0.01r1_10-fields} is similar to the middle row of Figure \ref{fig:nonrotating-fields}, except that the wave form of the fast mode is different. As the fast mode propagates toward the Y point, there is increasing interaction with the separatrix and generation of additional \alfven waves near the Y point, as shown in the bottom panels of Figure \ref{fig:rotating-dw0.01r1_10-fields}. We calculate the wave energy from Equation (\ref{eq:wave_energy}) (see discussion in Appendix \ref{sec:wave_energy}). A complication here is that the \alfven mode and fast mode can no longer be easily separated by their polarizations. However, since the \alfven mode is confined along field lines while the fast mode is not, we can measure the fast mode energy when it has propagated away from the \alfven mode. More specifically, the flux surface intersecting the star at $\theta_1$ [namely $\psi=\psi(r_*,\theta_1)$] marks the outer boundary of the flux tube where the \alfven wave is launched. We measure the wave energy inside and outside this flux surface separately; the energy inside is mostly the \alfven wave energy $W_A$, and the energy outside is mostly the fast wave energy $W_F$. Figure \ref{fig:rotating-dw0.01r1_10-energy} shows the measured wave energy evolution for the example in Figure \ref{fig:rotating-dw0.01r1_10-fields}. The overall behavior is very similar to the non-rotating case shown in Figure \ref{fig:nonrotating-energy} (the smaller fast wave energy fraction is due to the small perturbation magnitude used in this particular run). We can again calculate the efficiency of converting to fast mode from $W_F/W_0$ after the first passage of the \alfven wave packet through the equator, in the same way as before. Figure \ref{fig:eff_dB_log} shows the measured conversion efficiency for \alfven waves with different magnitudes, launched on different flux tubes. When the flux tube is sufficiently far away from the Y point, the deviation of the field from the vacuum dipole is small. The center of the flux tube intersects the equator roughly at $r_m=r_*/\sin^2\theta_m$ as before.\footnote{Close to the Y point, the field significantly deviates from the vacuum dipole scaling. For example, in the rotating steady state with $r_{\rm LC}=50r_*$, the field lines that originally intersect the equator at $r_m\gtrsim35r_*$ in the non-rotating case all open up.} We plot the conversion efficiency $W_F/W_0$ against the theoretical \alfven wave amplitude at the equator $(\delta B/B_0)_{\rm eq}$, similar to Figure \ref{fig:nonrotating-eff_dw_dB}. It is clearly seen that at small \alfven wave amplitude, the scaling deviates from the non-rotating dipole case. The conversion efficiency reaches a constant value, and the value is different for different flux tubes. We can understand the new scaling by noticing that in the rotating case, the plasma in the closed zone is corotating with the star. There is a spatially varying electric field induced by the rotation, and the \alfven wave can directly interact with this varying background, as shown in Appendix \ref{sec:rotating_conversion}. The amplitude of the generated fast mode satisfies \begin{equation} \delta E_F\propto \left(\frac{\omega_* r_m}{c}\right)\delta E_A. \end{equation} As a result, the conversion efficiency \begin{equation}\label{eq:eff_omega} \frac{W_F}{W_0}\propto \left(\frac{\omega_* r_m}{c}\right)^2, \end{equation} independent of the \alfven wave amplitude.
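A sketch (ours) of the flux-surface-based energy separation described above; energy at $\psi$ larger than the boundary value (field lines closing inside the launching surface) is attributed to the \alfven mode, and the rest to the fast mode.
\begin{verbatim}
import numpy as np

def split_energy_by_flux_surface(u, psi, psi_boundary, dV):
    """Split the wave energy density across the flux surface psi = psi_boundary.

    u            : energy density 0.5 (dE^2 + dB^2), shape (Nr, Nth)
    psi          : flux function on the same grid
    psi_boundary : psi(r_*, theta_1), outer boundary of the launch flux tube
    dV           : volume-element weights on the grid
    Returns (W_inside, W_outside) ~ (Alfven energy W_A, fast energy W_F).
    """
    inside = psi > psi_boundary   # larger psi = field lines closer to the star
    W_in = np.sum(u[inside] * dV[inside])
    W_out = np.sum(u[~inside] * dV[~inside])
    return W_in, W_out
\end{verbatim}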
In Figure \ref{fig:eff_wrm} we plot the conversion efficiency at small wave amplitude as a function of $\omega_*r_m/c$. Indeed the trend roughly follows Equation (\ref{eq:eff_omega}). The three-wave interaction analysis in Appendix \ref{sec:rotating_conversion} is just for illustration; in reality the wavelength can be large and a full numerical treatment is needed. The effect of rotation is important at small \alfven wave amplitude. At large amplitude, when the wave electric field becomes larger than the background rotation-induced electric field, the rotation effect becomes subdominant, and we see the conversion efficiency fall back to the non-rotating trend, as shown in Figure \ref{fig:eff_dB_log}, especially the cyan and magenta trends. Rotation-induced linear coupling between the \alfven mode and the fast mode is most important near the separatrix, where the rotation time scale is comparable to the \alfven wave travel time scale. Figure \ref{fig:rotating-dw0.01r1_25-fields} shows an example where the \alfven wave is launched closer to the separatrix. We see that the initial \alfven wave generates a fast wave, which then produces new \alfven waves, extending the original \alfven wave all the way to the separatrix. We also find that in the rotating magnetosphere, the conversion efficiency depends on the wavelength $\lambda_{\parallel}$ of the \alfven wave along the magnetic field line, as shown in Figure \ref{fig:eff_lambda}. This is likely because waves with different $\lambda_{\parallel}$ interact with different scales in the background variation. However, it does not depend on the total length of the wave train $cT$. This differs from the non-rotating case, confirming that the conversion mechanism is different. In this rotating case, we expect the conversion efficiency to drop at very small $\lambda_{\parallel}$ where the WKB approximation is applicable; in the WKB regime the dispersion relations for the \alfven mode and fast mode do not intersect (Appendix \ref{sec:WKB}), so the conversion should be small. However, due to the very high resolution requirement, we are unable to reliably simulate cases with $\lambda_{\parallel}\ll r_*$. In summary, for \alfven waves with $\lambda_{\parallel}\approx5r_*$ and wave train length $cT\gtrsim0.2r_m$ propagating on field lines with a maximum radial extent $5r_*\lesssim r_{m}\lesssim 0.5r_{\rm LC}$, we find that the conversion efficiencies in a few asymptotic regimes are the following \begin{align}\label{eq:scaling} \frac{W_F}{W_0}\approx \begin{cases} \displaystyle 0.8\left(\frac{\omega_*r_m}{c}\right)^2, & \displaystyle \left(\frac{\delta B}{B}\right)_{\rm eq}\ll\frac{\omega_*r_m}{c}\lesssim0.5,\\ \displaystyle 0.2\left(\frac{\delta B}{B}\right)_{\rm eq}^2, & \displaystyle \frac{\omega_*r_m}{c}\ll\left(\frac{\delta B}{B}\right)_{\rm eq}\lesssim 1. \end{cases} \end{align} The first branch is applicable to very small amplitude \alfven waves such that the rotation of the background magnetosphere is important; this is the relation from Figure \ref{fig:eff_wrm}. The second branch applies to relatively large amplitude \alfven waves; the rotation effect becomes negligible and the scaling follows that in Figure \ref{fig:nonrotating-eff_dw_dB}. So far we have measured the conversion efficiency $W_F/W_0$ for the first passage of the \alfven wave through the equator. After the reflection from the stellar surface, the \alfven wave can continue to convert to fast mode, but the efficiency becomes lower.
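For later convenience, the asymptotic scalings of Equation (\ref{eq:scaling}) can be packaged as a small helper (ours); taking the larger of the two branches is our own interpolation choice between the two limits, not a fitted formula.
\begin{verbatim}
def conversion_efficiency(dB_B_eq, omega_rm):
    """Asymptotic one-pass conversion efficiency W_F/W_0.

    dB_B_eq  : Alfven wave amplitude (dB/B) at the equator
    omega_rm : omega_* r_m / c for the flux tube (0 for a non-rotating dipole)

    The rotation-dominated and amplitude-dominated branches are combined by
    taking the larger one -- our interpolation choice between the two limits.
    """
    rotation_branch = 0.8 * omega_rm ** 2      # (dB/B)_eq << omega_* r_m / c
    amplitude_branch = 0.2 * dB_B_eq ** 2      # omega_* r_m / c << (dB/B)_eq
    return max(rotation_branch, amplitude_branch)

# Example: a flux tube crossing the equator at half the light-cylinder radius
print(conversion_efficiency(dB_B_eq=0.3, omega_rm=0.5))  # ~0.2, rotation branch
\end{verbatim}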
The efficiency drops on subsequent passages because the \alfven wave gradually becomes dephased \citep{2020ApJ...897..173B}: the wave front is stretched and becomes increasingly oblique with respect to the background magnetic field, due to the different lengths of neighboring field lines. Appendix \ref{sec:dephasing} shows some examples of the dependence of the conversion efficiency on the phase shift across the \alfven wave front. The conversion efficiency decreases as the phase shift increases, suggesting that conversion to fast mode requires the coherent $k_{\parallel}$ part of the \alfven wave. \section{Implication for the Vela pulsar}\label{sec:Vela} The Vela pulsar has a spin period of $P=2\pi/\omega_*=89$ ms, so the light cylinder is located at $R_{\rm LC}=c/\omega_*=4.2\times10^8$ cm. Taking the neutron star radius to be $r_*=10$ km, we have $R_{\rm LC}/r_*\approx4.2\times10^2$. If the quake is triggered in the deep crust by a shear layer of thickness comparable to the local scale height, then the characteristic frequency of the waves is\footnote{The quake excites a broad spectrum of frequencies extending much higher than this characteristic frequency.} $\omega_A\sim10^4\,\rm{rad}\,\rm{s}^{-1}$ \citep{2020ApJ...897..173B}, and the corresponding wavelength of the \alfven wave is $\lambda_A\sim 3\times10^6\,\rm{cm}$, a few times $r_*$. The wave train is likely long; for a quake duration of $T\sim100$ ms, the length of the launched wave train would be $3\times10^{9}$ cm. The energy of the quake is not well constrained; for a rough estimate we take the characteristic energy flux of \alfven waves transmitted from the crust to the magnetosphere to be $F_*=10^{26}F_{*,26}\,\rm{erg}\,\rm{s}^{-1}\,\rm{cm}^{-2}$. The amplitude of the \alfven waves at the stellar surface is then $\delta B/B\approx10^{-4}F_{*,26}^{1/2}$. If we simply follow the dipole scaling, the \alfven wave amplitude at a radius $r$ is $\delta B/B\approx 10^{-4}F_{*,26}^{1/2} (r/r_*)^{3/2}$. Thus, $\delta B/B\sim0.3$ at $r=0.5R_{\rm LC}$ and $\delta B/B\sim0.86$ at the light cylinder. This can be marginally considered a small-amplitude \alfven wave, so the rotation effect is important in determining the efficiency of \alfven waves converting to fast modes. Applying the first branch of the scaling relation (\ref{eq:scaling}), we can see that if the \alfven wave is propagating on a flux tube that crosses the equator at half the light cylinder radius, the conversion efficiency after one pass is $\sim0.2$. If the \alfven wave is propagating closer to the separatrix, then the conversion efficiency can be higher, reaching $\sim0.3$. For waves with $(\delta B/B)_{\rm eq}\sim1$, using the second branch of the scaling relation (\ref{eq:scaling}), we get a conversion efficiency of $\sim0.2$ as well. In these scenarios, the \alfven wave will lose a fraction $\gtrsim20\%$ of its initial energy during the first passage through the equator. Afterward the conversion efficiency decreases due to the dephasing of the wave, so the \alfven wave may keep bouncing in the magnetosphere for some time, until it loses most of its energy through this and the other channels discussed in \S\ref{sec:intro}. \section{Discussion and conclusion}\label{sec:conclusion} In this paper we investigated the propagation of small-amplitude \alfven waves in the closed zone of a dipolar pulsar magnetosphere. In the force-free regime \alfven waves can convert to fast magnetosonic waves as they propagate along curved field lines.
We measured the conversion efficiency and obtained its scaling in different regimes (Equation \ref{eq:scaling}). The conversion efficiency is high for relatively large amplitude waves, and for waves propagating close to the separatrix/Y point, before the waves get significantly dephased. Typical \alfven waves launched by a quake in the Vela pulsar may convert to fast waves with an efficiency as high as 0.2 during the first passage, if the waves propagate to the outer region of the closed zone. However, the conversion efficiency decreases due to dephasing on subsequent passages. Therefore, during the $\sim 0.3$ seconds of quenched radio emission from Vela, the conversion to fast mode is not able to fully suppress the \alfven waves in the closed part of the magnetosphere. Thus we are currently unable to explain the short duration of the quenched radio emission during the glitch. This requires more detailed study of the quenching mechanism and other dissipation processes of the \alfven waves. Similar processes could also happen in magnetar magnetospheres. Recently \citet{2020ApJ...900L..21Y} studied the fate of large amplitude \alfven waves launched by a magnetar quake. If the \alfven wave packet propagating on a flux tube with a radial extent $R$ has an energy larger than the magnetospheric energy $B^2R^3$ at $R$, the \alfven wave packet could break out from the magnetosphere and launch relativistic ejecta. These may power X-ray bursts by particle acceleration in the current sheet behind the ejecta, and even produce fast radio bursts by masers at the shock \citep[e.g.,][]{1992ApJ...391...73G,2014MNRAS.442L...9L,2017ApJ...843L..26B,2020ApJ...896..142B,2019MNRAS.485.4091M,2019MNRAS.485.3816P,2020MNRAS.494.4627M,2020ApJ...899L..27M} or by colliding plasmoids in the current sheet \citep{2019MNRAS.483.1731L,2019ApJ...876L...6P,2020ApJ...897....1L}. For smaller amplitude \alfven waves, the picture we studied in this paper applies. A moderate fraction of the \alfven wave energy could escape as it converts to fast waves; the rest of the wave energy may be dissipated in the magnetosphere through the channels discussed in \S\ref{sec:intro}. In this paper we only studied axisymmetric modes. When the axisymmetry constraint is relaxed, more wave modes can participate in the interaction, which could change the conversion efficiency. Full 3D simulations are needed to quantify these effects. Furthermore, in the force-free fluid framework, we are essentially considering the low-frequency limit of the plasma modes. Kinetic effects may become important when the wavelength gets close to the plasma skin depth, or when the wave frequency becomes comparable to the plasma frequency. In this regime, the \alfven waves may experience cutoff and resonance, and may undergo conversion to other plasma modes. This needs to be studied using a kinetic framework. In our force-free simulations, we observe strong dephasing of \alfven waves, especially when the wave has passed through the equator and propagates back toward the star, consistent with \citet{2020ApJ...897..173B}. This leads to numerical dissipation. In reality, the strong shearing of the wave front could lead to a strong increase in the current density; this may trigger pair cascade or other types of plasma instability that dissipate away the \alfven wave energy. Kinetic simulations with physical dissipation mechanisms are required to study such processes and their influence on pulsar radio emission.
\acknowledgements We thank Alex Chen, Xinyu Li and Anatoly Spitkovsky for helpful discussions. Y.Y. is supported by a Flatiron Research Fellowship at the Flatiron Institute, Simons Foundation. Y. L. and A. B. are supported by NSF grant 2009453 and by Simons Foundation grant 727992. A. P. is supported by NSF grant 1909458. \software{{\it Coffee}, \url{https://github.com/fizban007/CoffeeGPU}, \citet{2020ApJ...893L..38C}}
\section{Introduction} \label{sec:intro} The Gas Electron Multiplier~(GEM) is one of the most advanced detectors of the Micro Pattern Gas Detector~(MPGD) group~\cite{sauli_GEM, sauli_GEM_overview}. GEMs are widely used in many High Energy Physics~(HEP) experiments as tracking devices because of the good position resolution afforded by their micro pattern structure~\cite{compass, TOTEM, ALICE, CMS}. The high rate handling capability of the GEM detector makes it a suitable candidate for experiments where a large particle flux is expected~\cite{CBM_detector}. A GEM is made up of a thin Kapton foil of thickness 50~$\mu$m with 5~$\mu$m copper cladding on both sides of the foil. A large number of holes are etched in the Kapton using the photolithographic technique~\cite{photolithography}. \begin{figure}[htbp] \centering \vspace*{-2.0cm} \includegraphics[scale=0.35]{fig1.pdf} \vspace*{-2.0cm} \caption{\label{fig:charging_up}Schematic representation of the charging-up effect inside a GEM hole. E$_{Polarised}$ indicates the electric field generated due to the dielectric polarisation. E$_{External}$ indicates the electric field generated due to the external high voltage and E$_{Internal}$ indicates the electric field generated due to the accumulation of the charges on the Kapton wall. } \end{figure} The holes in a standard GEM foil have an outer and inner diameter of 70~$\mu$m and 50~$\mu$m respectively. The distance between the centers of two neighboring holes, that is the pitch, is 140~$\mu$m. To create an electric field inside the holes, an external high voltage~(HV) is applied between the copper layers. The holes in the GEM foil act as the multiplication region for the incoming electrons. As shown in Fig.~\ref{fig:charging_up}, the voltage is usually applied in such a way that the top of the GEM foil is at negative potential compared to the bottom plane and the electrons move downwards. The electrode placed above the top layer of the GEM foil is called the drift electrode or drift plane, and the gap between the drift plane and the top of the GEM foil is called the drift region. An incoming charged particle produces primary electrons mainly in the drift region. These primary electrons are focussed towards the GEM holes by the electric field. The high electric field inside the holes forces the electrons to multiply through an avalanche. Several GEM layers can be used in cascade mode to attain high gain without increasing the biasing voltage and consequently the discharge probability of the chamber~\cite{bachmann, sbiswas_spark}. The presence of the Kapton foil inside the active part of the detector changes its behavior when exposed to external radiation. Due to the high electric field~($\sim$~kV/cm) inside the GEM holes, the incoming electrons get sufficient kinetic energy to start an avalanche of further ionization. Due to its dielectric properties, the polyimide~(Kapton) gets polarised by the external electric field. During this multiplication process, electrons and ions may diffuse towards the polyimide surface and, owing to the polarisation of the polyimide by the external HV, can be captured on the wall of the Kapton foil. This phenomenon is illustrated in Fig.~\ref{fig:charging_up}. Due to the high resistivity of the Kapton, the charges remain there for a rather long time. As a result of sufficient accumulation of charge on the wall, the electric field configuration inside the hole changes dynamically, and this phenomenon is known as the charging-up effect.
The accumulated charges on the surface of the Kapton foil increase the field inside the holes and, as a result, the gain of the chamber increases with time. Many studies have reported that the charging up effect is responsible for a time-dependent change in gain, which asymptotically reaches a constant value~\cite{charging_up_2, charging_up_3, charging_up_4, charging_up_5}. In this article, a systematic investigation of the charging up process with different irradiation rates in a triple GEM detector prototype built using double mask foils operated with an Ar/CO$_2$ gas mixture in the 70/30 volume ratio is reported. A strong Fe$^{55}$ source is used to irradiate the chamber as well as to record the 5.9 keV X-ray spectra from it. The details of the detector setup are described in Sec.~\ref{setup} and the results are discussed in Sec.~\ref{results}. \section{Detector description and experimental setup} \label{setup} In this study, a triple GEM detector prototype, consisting of 10~cm~$\times$~10~cm double mask GEM foils, obtained from CERN is used. The drift, transfer, and induction gaps of the chamber are kept at 3~mm, 2~mm, and 2~mm respectively (3-2-2-2 configuration). \begin{figure}[htbp] \centering \vspace*{-.3cm} \includegraphics[scale=0.35]{fig0.pdf} \vspace*{-.3cm} \caption{\label{fig:detector_setup} Schematic of the HV distribution of the triple GEM chamber. The drift gap, transfer gap and induction gaps are kept at 3~mm, 2~mm, and 2~mm respectively. } \end{figure} The HV to the drift plane and the individual GEM planes is applied through a voltage-dividing resistor chain. 10~M$\Omega$ protection resistors are connected to the drift plane and to the top of each GEM foil. A schematic of the resistor chain and the different gaps of the chamber is shown in Fig.~\ref{fig:detector_setup}. The readout of the chamber is made up of nine pads of dimension 9~mm~$\times$~9~mm each. The signals in this study are taken from all the pads, summed by a sum-up board, and fed as a single input to a charge-sensitive preamplifier (VV50-2)~\cite{preamp}. The gain of the preamplifier is 2~mV/fC with a shaping time of 300~ns. A NIM based data acquisition system is used to process the signals from the preamplifier. The output signal from the preamplifier is fed to a linear Fan-in-Fan-out (linear FIFO) module. One analog signal from the linear FIFO is fed to a Single Channel Analyser~(SCA) to measure the rate of the incident particles. The SCA is operated in integral mode and the lower level in the SCA is used as the threshold for the signal. The threshold is set at 0.1 V to reject the noise. The discriminated signal from the SCA, which is TTL in nature, is fed to a TTL-NIM adapter and the output NIM signal is counted using a NIM scaler. The count rate of the detector in Hz is then calculated. Another output of the linear FIFO is fed to a Multi-Channel Analyser (MCA) to obtain the energy spectra. A schematic representation of the electronics set-up is shown in Fig.~\ref{fig:electronic_setup}. Pre-mixed Ar/CO${_2}$ gas in a 70/30 volume ratio is used for the whole study. A constant gas flow rate of 3.5~l/hr is maintained using a V{\"o}gtlin gas flow meter. Perspex, aluminium and G-10 collimators having different hole diameters are used to irradiate the chamber with different X-ray fluxes from the Fe$^{55}$ source. The ambient temperature, pressure, and relative humidity are monitored continuously using a data logger built in-house~\cite{data logger}.
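As a simple bookkeeping sketch (ours), the incident particle flux for each collimator configuration is just the measured rate divided by the collimated area; the three configurations below are the ones used in the next section.
\begin{verbatim}
# Particle flux = measured rate / collimated area (values from the text)
configs = [
    {"rate_khz": 1.0,  "area_mm2": 13.0},
    {"rate_khz": 10.0, "area_mm2": 50.0},
    {"rate_khz": 90.0, "area_mm2": 28.0},
]
for c in configs:
    flux = c["rate_khz"] / c["area_mm2"]   # kHz / mm^2
    print(f'{c["rate_khz"]:5.0f} kHz on {c["area_mm2"]:4.0f} mm^2 '
          f'-> {flux:.2f} kHz/mm^2')
# -> 0.08, 0.20, 3.21 kHz/mm^2, matching the quoted fluxes
\end{verbatim}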
\begin{figure}[htbp] \centering \includegraphics[scale=0.6]{fig2.pdf} \caption{\label{fig:electronic_setup}Schematic representation of the electronics setup} \end{figure} \section{Results} \label{results} The 5.9~keV peak of the Fe$^{55}$ energy spectrum obtained from the MCA is fitted with a Gaussian distribution to obtain the gain of the chamber. A typical Fe$^{55}$ energy spectrum at -~4.2~kV is shown in Fig.~\ref{fig:fe55_sprctra}. The applied HV of -~4.2~kV corresponds to $\Delta V$ of $\sim$~390~V across each GEM foil and drift, transfer, and induction fields of 2.3~kV/cm, 3.5~kV/cm, and 3.5~kV/cm respectively. \begin{figure}[htbp] \centering \includegraphics[scale=0.4]{fig3.pdf} \vspace*{-1.0cm} \caption{\label{fig:fe55_sprctra}Typical Fe$^{55}$ spectrum obtained at an HV of -~4.2~kV with 10~kHz X-rays irradiating a 50~mm$^{2}$ area. The main peak is fitted with a Gaussian distribution~(red line) to calculate the gain of the chamber.} \end{figure} The amount of the input charge is calculated by assuming the full energy deposition of the 5.9~keV X-ray in the 3~mm drift gap of the chamber. The number of primary electrons for Ar/CO$_{2}$ in the 70/30 ratio is 212. The ratio of the output charge to the input charge gives the gain of the chamber. The details of the gain calculation and the long-term behavior of the chamber were reported earlier~\cite{s. roy,s. chatterjee JOP}. The variation of the gain as a function of time for three different rates of the incoming X-rays, 1~kHz, 10~kHz, and 90~kHz, along with the ratio of ambient temperature~(T) to pressure~(p), is shown in the top~(a), middle~(b) and bottom~(c) plots of Fig.~\ref{fig:gain_tp_time}. Using collimators, X-rays of rates 1~kHz, 10~kHz, and 90~kHz are made to fall on 13~mm$^{2}$, 50~mm$^{2}$ and 28~mm$^{2}$ areas of the chamber, which implies particle fluxes of 0.08~kHz/mm$^{2}$, 0.2~kHz/mm$^{2}$ and 3.2~kHz/mm$^{2}$ respectively. All the measurements are carried out at an HV of -~4.2 kV. \begin{figure}[htbp] \centering \includegraphics[scale=0.4]{fig4.pdf} \includegraphics[scale=0.4]{fig5.pdf} \includegraphics[scale=0.4]{fig6.pdf} \caption{\label{fig:gain_tp_time}Variation of gain and T/p as a function of time. The top~(a), middle~(b) and bottom~(c) plots are for 1~kHz, 10~kHz and 90~kHz X-ray irradiation rates falling on 13~mm$^2$~(0.08~kHz/mm$^{2}$), 50~mm$^2$~(0.2~kHz/mm$^{2}$) and 28~mm$^2$~(3.2~kHz/mm$^{2}$) areas of the GEM chamber respectively. All the measurements are carried out at an HV of -~4.2~kV and at three different positions on the active area of the chamber.} \end{figure} Since it is well known that the gain of any gaseous detector depends on temperature and pressure~\cite{tp_gas_detector}, the variation in T/p is plotted along with the gain as a function of time. The HV is kept OFF for 180 minutes, 60 minutes, and 8 minutes before the measurements are started with 1~kHz, 10~kHz, and 90~kHz X-ray rates respectively. In all three cases, the data taking starts immediately after the HV is switched ON and the source is placed on the active area of the detector. The energy spectra are stored at an interval of 10 minutes for the 1~kHz and 10~kHz rates and 3 minutes for the 90~kHz X-ray rate. The same Fe$^{55}$ source is used to irradiate the chamber as well as to obtain the spectra. From Fig.~\ref{fig:gain_tp_time}, it is evident that the gain decreases for the first few minutes, then increases over a few hours of operation and asymptotically reaches saturation.
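A minimal sketch (ours) of the gain determination described above; the preamplifier calibration (2~mV/fC) is the one quoted in Sec.~\ref{setup}, while the example peak voltage and the MCA channel-to-voltage conversion are hypothetical placeholders.
\begin{verbatim}
# Effective gain from the fitted 5.9 keV photopeak position
E_CHARGE_FC = 1.602e-4        # elementary charge in fC
N_PRIMARY = 212               # primaries for 5.9 keV in Ar/CO2 (70/30)
PREAMP_GAIN_MV_PER_FC = 2.0   # charge-sensitive preamp calibration

def detector_gain(peak_mV):
    """Gain = output charge / input charge.

    peak_mV : Gaussian-fit mean of the 5.9 keV peak converted to mV
              (the MCA channel -> mV calibration is assumed done upstream).
    """
    q_out_fC = peak_mV / PREAMP_GAIN_MV_PER_FC
    q_in_fC = N_PRIMARY * E_CHARGE_FC          # ~0.034 fC of primary charge
    return q_out_fC / q_in_fC

# Hypothetical example: a peak at 680 mV corresponds to a gain of ~10^4
print(detector_gain(680.0))
\end{verbatim}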
The initial decrease in gain may be due to the loss of primary electrons (or ions) that get stuck on the polarised dielectric~(Kapton) surface. Since the polarisation of the dielectric medium itself takes a finite time, an initial decrease in gain is observed whenever the HV and the irradiation are started simultaneously. Afterward, the gain increases sharply for the first few hours due to the lensing effect created by the charges accumulated on the wall of the Kapton foil, which increases the electric field strength inside the GEM holes. The absolute gain values after saturation are not the same for all the cases due to the different source positions. The variation in gain over the active area of this particular chamber was reported earlier~\cite{s. chatterjee} and was found to be $\sim$10\%~(RMS). For all the measurements, the gain shows an initial increase followed by saturation. The gain is further normalised to eliminate the T/p dependence. For the T/p normalisation, data obtained after $\sim$360 minutes of operation~(i.e. the saturated gain values), where only the T/p effect is dominant in the gain variation, are used. The method of normalisation is discussed in Ref.~\cite{s. roy}. The normalised gain is fitted with an exponential function of the form~\cite{P. hauer} \begin{equation} \label{eqn} G = p_0(1-p_1e^{(-t/p_2)})\tag{1} \end{equation} where $G$ is the normalised gain, $p_0$ and $p_1$ are constants, $t$ is the measurement time in hours, and $p_2$ is the time constant of the charging-up effect, by analogy with the charging up of an RC network~\cite{V. Tikhonov}. \begin{figure}[htbp] \centering \includegraphics[scale=0.4]{fig7.pdf} \includegraphics[scale=0.4]{fig8.pdf} \includegraphics[scale=0.4]{fig9.pdf} \caption{\label{fig:normalised_gain}Variation of the normalised gain as a function of time. The top~(a), middle~(b) and bottom~(c) plots are for 1~kHz, 10~kHz and 90~kHz X-ray irradiation rates falling on 13~mm$^2$~(0.08~kHz/mm$^{2}$), 50~mm$^2$~(0.2~kHz/mm$^{2}$), 28~mm$^2$~(3.2~kHz/mm$^{2}$) areas of the GEM chamber respectively.} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.40]{fig10.pdf} \includegraphics[scale=0.40]{fig11.pdf} \caption{\label{fig:example}Variation of gain, T/p~(a) and normalised gain~(b) as a function of time for 1~kHz X-rays irradiating a 13~mm$^2$~(0.08~kHz/mm$^{2}$) area of the GEM chamber. The measurement has been carried out at an HV of -~4.2~kV. The HV was kept OFF for $\sim$~60~minutes before taking the first measurement with the Fe$^{55}$ X-ray source.} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.40]{fig12.pdf} \includegraphics[scale=0.40]{fig13.pdf} \caption{\label{fig:polarisation}Variation of gain, T/p~(a) and normalised gain~(b) as a function of time for 1~kHz X-rays irradiating a 13~mm$^2$~(0.08~kHz/mm$^{2}$) area of the GEM chamber. The measurement is carried out at an HV of -~4.1~kV. The HV is kept ON for 24 hours before taking the first measurement with the Fe$^{55}$ X-ray source.} \end{figure} The fitted normalised gain is shown in Fig.~\ref{fig:normalised_gain} for the 1~kHz~(top), 10~kHz~(middle) and 90~kHz~(bottom) X-ray irradiation rates. For the fitting, the first $\sim$20 minutes are excluded because they include both the effect of dielectric polarisation and that of charging up.
After that, the charging-up effect is dominant, and the data are fitted with equation~\ref{eqn} to get an idea of the time constant of the charging-up effect. In Fig.~\ref{fig:gain_tp_time}~(b), a small change is visible in the trend of increasing gain from 1--2 hours along the time axis; this is due to two opposing effects, namely charging-up and T/p variation. The charging-up process tends to increase the gain, while a decrease in T/p tends to reduce the gain. As a result of these two competing processes, the slope of the curve changes, and that is also reflected in Fig.~\ref{fig:normalised_gain}~(b). This competition between the charging-up and the T/p variation is distinctly visible in Fig.~\ref{fig:example}, where 1~kHz X-rays irradiate a 13~mm$^2$ area of the GEM chamber. For this measurement, the high voltage is switched ON, the source is placed on the detector, and data-taking is started. Before that, the HV is kept OFF for $\sim$ 60 minutes. The first three points in Fig.~\ref{fig:example}~(a) show a decreasing trend in the gain, which is a combined effect of T/p and dielectric polarisation. After that, though the T/p value shows a decreasing trend, there is no visible decrease in the gain. That is because the effects of the decreasing T/p and of the charging-up on the gain are anti-correlated. Then, after $\sim$1.5~hr, the gain increases because of the charging-up and T/p variation. The corresponding normalised gain variation is shown in Fig.~\ref{fig:example}~(b). To identify whether the decrease in the gain during the first few minutes is due to dielectric polarisation or not, a different measurement is performed by keeping the HV ON for $\sim$24 hours before the first measurement. The HV is kept at -~4.1~kV, which corresponds to $\Delta V \sim$~382~V across each GEM foil and drift, transfer, and induction fields of 2.3~kV/cm, 3.4~kV/cm and 3.4~kV/cm respectively. The chamber is irradiated with 1~kHz X-rays falling on a 13~mm$^2$ area of the chamber. Once the source is placed, the measurement is started immediately. The variation of gain, T/p, and normalised gain is shown as a function of time in Fig.~\ref{fig:polarisation}~(a) and~(b) respectively. The data is stored at an interval of 10~minutes. It is evident from the plot that no decrease in gain is observed at the beginning. Since the charging-up process is due to the accumulation of charges in the GEM holes, it depends on the flux of the incident radiation. The higher the incident particle flux, the faster the charging-up; the same behavior is observed in this study. From Fig.~\ref{fig:normalised_gain}, for 1~kHz, 10~kHz, and 90~kHz operations, the time constant of the charging-up effect is found to be 2.376~$\pm$~0.020 hours, 1.524~$\pm$~0.008 hours and 1.395~$\pm$~0.004 hours respectively. The time constant of the charging-up effect obtained from Fig.~\ref{fig:example}~(b) agrees well with that from Fig.~\ref{fig:normalised_gain}~(a). From Fig.~\ref{fig:polarisation}~(b), the time constant of the charging-up effect is found to be 3.294~$\pm$~0.018 hours for 1~kHz X-rays. The time constant obtained from Fig.~\ref{fig:polarisation}~(b) cannot be compared with Fig.~\ref{fig:normalised_gain}~(a) and Fig.~\ref{fig:example}~(b) because the HV is different. A residual dependence of the charging-up effect on the applied voltage is also seen in Ref.~\cite{P. hauer}.
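For illustration, a minimal sketch (ours) of the fit with equation~\ref{eqn} using SciPy; the time series below is synthetic placeholder data standing in for a measured normalised-gain curve.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def charging_up(t, p0, p1, p2):
    """Eq. (1): G = p0 * (1 - p1 * exp(-t / p2)), with t in hours."""
    return p0 * (1.0 - p1 * np.exp(-t / p2))

# Placeholder data standing in for a normalised-gain time series;
# the first ~20 minutes (polarisation-dominated) are already excluded.
t = np.linspace(0.33, 12.0, 60)                        # hours
g = charging_up(t, 1.0, 0.15, 1.5) + np.random.normal(0, 0.003, t.size)

popt, pcov = curve_fit(charging_up, t, g, p0=[1.0, 0.1, 1.0])
perr = np.sqrt(np.diag(pcov))
print(f"time constant p2 = {popt[2]:.3f} +/- {perr[2]:.3f} hours")
\end{verbatim}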
\section{Summary and Outlook} The charging-up effect of a double mask triple GEM prototype is studied using different irradiation rates from a Fe$^{55}$ X-ray source. The chamber is operated with an Ar/CO$_2$ gas mixture in a 70/30 volume ratio. The HV is kept OFF for a few minutes to several hours before starting the respective measurements. The data is stored just after the HV is switched ON and the source is placed on the chamber, in order to see the effect of dielectric polarisation on the gain of the chamber. It is observed that the gain initially decreases and then increases to reach a saturation value. To ensure that the decrease in the gain of the chamber during the first few minutes is due to dielectric polarisation, a different set of measurements is performed in which the HV is kept ON for $\sim$24 hours before the data taking. No initial decrease in gain is observed in that case, as shown in Fig.~\ref{fig:polarisation}, because the dielectric (i.e. Kapton) is already polarised due to the application of HV beforehand. The time constant of the charging-up effect is investigated with different particle fluxes. It is found that the charging-up time decreases with increasing particle flux. Although the time constant decreases with increasing particle flux, an exact scaling of the time constant with particle flux cannot be extracted, because we observe an overall effect due to the three GEM foils and it is very difficult to disentangle the effect of each GEM foil on the final results. Also, since the charging-up time depends on the GEM hole geometry, the properties of the Kapton foil, the charge density in the GEM holes, etc., we can only conclude that the time constant of the charging-up effect decreases with increasing particle flux. The two competing effects of T/p variation and charging-up on the gain of the chamber are studied by recording the 5.9~keV Fe$^{55}$ X-ray spectra and, as expected, they are found to be anti-correlated, as shown in Fig.~\ref{fig:example}. The dependence of the charging-up process on the gain of the detector, the electric field strengths of the different layers, the gas mixture used, and also on the different kinds of GEM foils is under investigation. \section{Acknowledgments} The authors would like to thank the RD51 collaboration for the support in building and initial testing of the chamber in the RD51 laboratory at CERN. We would like to thank Dr. A. Sharma, Dr. L. Ropelewski, Dr. E. Oliveri and Dr. Chilo Garabatos of CERN and Dr.~C.~J.~Schmidt and Mr.~J{\"o}rg~Hehner of GSI Detector Laboratory and Prof. Sanjay K. Ghosh, Prof. Sibaji Raha, Prof. Rajarshi Ray and Dr. Sidharth K. Prasad of Bose Institute for valuable discussions and suggestions in the course of the study. This work is partially supported by the research grant SR/MF/PS-01/2014-BI from DST, Govt. of India, and the research grant of CBM-MuCh project from BI-IFCC, DST, Govt. of India. S. Chatterjee acknowledges his Institutional Fellowship research grant of Bose Institute. S. Biswas acknowledges the support of DST-SERB Ramanujan Fellowship (D.O. No. SR/S2/RJN-02/2012) and Intramural Research Grant provided by Bose Institute.
\section{Introduction} The analytic properties of the $S$-matrix are a central element of our understanding of Quantum Field Theory (QFT). Stemming from seminal works on partial wave unitarity in the 1970s \cite{Yndurain:1972ix,Pham:1985cr,Ananthanarayan:1994hf}, there has recently been a modern resurgence of interest in this topic, in connection with the study of Effective Field Theories (EFT) \cite{Adams:2006sv}. $S$-matrix properties can be used to formulate \emph{positivity bounds} for 2-to-2 scattering amplitudes within the physical region for scattered momenta in the forward limit. These are written in terms of dispersion relations which relate the scattering amplitude at a given kinematical point with the integral of its imaginary part along the whole physical region, which is strictly positive from the optical theorem. Standard applications then follow a top-down reasoning. One first promotes a given EFT to be the low energy expansion of an unknown ultraviolet (UV) complete theory satisfying the usual axioms of unitarity, locality, and Lorentz invariance. Positivity bounds are then valid and applicable to the UV complete theory. However, they can also be evaluated at small center-of-mass energy, in the infrared (IR) region. There, scattering amplitudes are well approximated by those computed in the EFT. As a consequence, positivity implies constraints on the Wilson coefficients accompanying those relevant operators that contribute to the $S$-matrix elements. This UV-IR connection has been thoroughly used in the literature to constrain, or assess the validity of, many different EFTs, see for example \cite{Cheung:2016wjt,Bonifacio:2016wcb,deRham:2017imi,deRham:2017xox,deRham:2018qqo,Afkhami-Jeddi:2018own,Zhang:2018shp,Bellazzini:2019bzh,Melville:2019wyy,Alberte:2019xfh,Alberte:2019zhd,Kim:2019wjo,Herrero-Valea:2019hde,Remmen:2019cyz,Remmen:2020uze,Wang:2020jxr,deRham:2021fpu,Traykova:2021hbr,Davighi:2021osh,Bern:2021ppb}. This approach, however, relies on the existence of a mass gap in the spectrum of the EFT, a property needed for the forward limit to be regular. In gapless theories, exchange of massless particles leads to forward limit divergences in the scattering amplitudes, which obstruct a direct application of positivity bounds. This divergence can be relaxed for both scalar and vector degrees of freedom by a proper regularization, but the fundamental problem remains for graviton exchange, which requires an alternative approach. An elegant way out of this is to isolate the divergence in the right hand side of the dispersion relation -- which is exact --, so that it can be cancelled against the one in the left hand side \cite{Tokuda:2020mlf}. By doing this, one is left with an approximate positivity bound, which can be mildly violated by terms that become important only at very high energies. Nevertheless, these approximate bounds are still very powerful in constraining many IR proposals for gravitational physics \cite{Aoki:2021ckh,Noumi:2021uuv,Herrero-Valea:2021dry}. This result can also be obtained in a different way, based on the impact parameter formulation \cite{Caron-Huot:2021rmr}. In order for this cancellation to be possible, an assumption about the high energy behavior of graviton scattering has to be made, though. In \cite{Tokuda:2020mlf} this is assumed to be of the Regge form, inspired by the Veneziano formula of string scattering \cite{Veneziano:1968yb}; but we could of course wonder if this result is unique, or if there are other possible UV behaviors that work.
This has been partially answered in \cite{Herrero-Valea:2020wxz}, where they show that subtraction of the tree-level divergence requires a linear Regge trajectory at leading order in the high energy region, but nothing has been established so far about sub-leading corrections. This is an important question because, for example, string scattering is not an exact linear Regge trajectory. Sub-leading corrections are always present. Thus, it is natural to ask ourselves: Are these unique? What kind of sub-leading terms allow for well-posed dispersion relations and positivity bounds? Is the scattering of strings the only possible UV behavior of graviton scattering that satisfies this condition? And moreover, are positivity bounds insensitive to this choice? In general, we ask what minimal piece of information about the UV behavior of gravitation is needed for dispersion relations to be well-posed. In this work we try to answer this question by reversing the usual direction of thought in the literature on positivity bounds. By looking at the structure of graviton exchange in the IR limit, and exploiting mathematical properties of the dispersion relation, we constrain the UV behavior of graviton-mediated scattering amplitudes. We arrive at an integral formula that relates the IR structure of forward limit divergences to properties of the UV completion, which is further constrained in the limit $t \log s\rightarrow 0$. We also show how this knowledge modifies the standard derivation of positivity bounds, even leading to indeterminate bounds unless extra assumptions about the UV completion are made. A recent development in a similar direction -- reverse bootstrapping UV amplitudes from IR properties -- was presented in \cite{Alberte:2021dnj}, where they consider the scattering of photons and gravitons in QED coupled to the Einstein-Hilbert action. Based on one-loop computations done in \cite{Alberte:2020bdz}, they show that parametrically large negative terms appear in the dispersion relation, naively contradicting positivity bounds unless new physics is introduced at relatively low energies. Instead, the authors of \cite{Alberte:2021dnj} argue that the presence of these large negative pieces can, and most likely should, have an origin in non-trivial properties of the UV completion, which might in principle be affected by the presence of light particles such as the electron \cite{Alberte:2020jsk}. In this work we show explicitly how this way of thinking, together with our results about the shape of the UV scattering amplitude, can lead to a resolution of the mentioned tension. This paper is organized as follows. First, we introduce dispersion relations for theories with graviton exchange in section \ref{sec:dispersion_rel}, and we show how cancellation of IR divergences determines several properties of scattering amplitudes in the UV in section \ref{sec:asympt_exp}. Later, in section \ref{sec:arc_int} we show how our findings imply a non-vanishing value for the arc integrals contributing to the dispersion relation, and we discuss how this can solve the conundrum that emerges when applying positivity bounds to gravitationally coupled QED in section \ref{sec:QED}. Section \ref{sec:fate_bounds} is devoted to showing how positivity bounds might be rendered useless by our results in the presence of gravitation, while in section \ref{sec:strings} we explore an explicit example of a UV completion in the form of string amplitudes, finding agreement with our result.
Finally, we draw our conclusions in section \ref{sec:conclusions}. \section{Dispersion relations with graviton exchange} \label{sec:dispersion_rel} Let us start by considering the $ab\rightarrow ab$ scattering between some identical but otherwise unspecified initial and final states with equal mass $m$, as described in an EFT with an unknown UV completion, which we demand to be causal, local, unitary and Lorentz invariant. We will also assume that this process includes the exchange of a massless graviton. Due to Lorentz invariance, the scattering amplitude ${\cal A}(s,t)$ can be uniquely described in terms of the Mandelstam variables $s$, $t$ and $u$, satisfying $s+t+u=4m^2$. The presence of massless gravitons in the scattering channel implies that the amplitude will diverge in the forward limit\footnote{It is important to remark here that the forward limit corresponds to taking $t\rightarrow 0$ from the negative side of the real axis, since the physical region corresponds to $t<0$.} $t\rightarrow 0^-$. The typical expansion of the amplitude in this limit, including tree-level graviton exchange and one graviton loop, has the form \begin{equation} \label{amplitude} {\cal A}(s,t)= A_0 \frac{s^2}{M_P^2}\frac{1}{t}+A_1 \frac{s^2}{M_P^4} \log{\left(\frac{-t}{\mu_R^2}\right)}+(\text{regular terms})+(\text{higher loops}), \end{equation} where $\mu_R$ is the renormalization scale. In the forward limit, this expression has explicit $1/t$ and $\log{t}$ divergences. The pole is inherited from the one in the graviton propagator, while loop corrections are responsible for generating the logarithmic branch cut, which represents production of soft gravitons of arbitrarily low energy. The values of $A_0$ and $A_1$ characterize the particular theory from which this amplitude is obtained. In perturbation theory, they are proportional to the residue in the pole of the propagator of the massless graviton, and to the $\beta$-function of the $a^2b^2$ coupling, respectively. Higher loops -- two and beyond -- will produce further logarithmic divergences, but we ignore them hereinafter, since they are further suppressed. From crossing symmetry, the amplitude in general contains equivalent divergences $s^{-1}$, $u^{-1}$, $\log(s)$, $\log(u)$. Due to the latter, the forward limit amplitude ${\cal A}(s,t\rightarrow 0^-)$ will also exhibit a branch cut along the whole real axis in the complex plane for $s$, which obstructs the standard derivation of positivity bounds \cite{Adams:2006sv}. This can be avoided, however, by following the derivation in \cite{Herrero-Valea:2020wxz}, which we adopt hereinafter. We thus consider the following quantity \begin{equation} \label{S} \Sigma(\mu,0^-)=\frac{1}{2\pi i}\oint_{\gamma}{\frac{{\cal A}(s,0^-)s^3 ds}{(s^2+\mu^4)^3}} \end{equation} where the value $0^-$ must be understood as $t\rightarrow 0^-$ at all times. Thus, we retain only divergent and finite terms, and get rid of those that vanish polynomially in $t$. Here the integral is taken over the contour $\gamma$, shown in figure \ref{fig:gamma_cont}, corresponding to the sum of two small circles surrounding the values $s=\pm i \mu^2$. The choice of $\mu \in \mathbb{R}$ is a matter of convenience, and later we will assume it to be much smaller than the cutoff scale of the low energy EFT.
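The reduction of \eqref{S} to residues, carried out below in \eqref{sigma}, can be checked mechanically. The following is a minimal sketch of ours -- purely illustrative, using Python's \texttt{sympy} and a hypothetical polynomial test amplitude, neither of which appears in the original derivation -- verifying that the residue of the integrand of \eqref{S} at the upper pole $s=+i\mu^2$ matches the closed form quoted later in the text; the pole at $s=-i\mu^2$ contributes analogously.
\begin{verbatim}
import sympy as sp

s = sp.Symbol('s')
mu = sp.Symbol('mu', positive=True)
A = 3*s**2 + 5*s + 7                      # hypothetical test amplitude, t suppressed

integrand = A * s**3 / (s**2 + mu**4)**3  # integrand of the contour integral above

# residue at the third-order pole s = +I*mu**2
res_up = sp.residue(integrand, s, sp.I*mu**2)

# closed form quoted below: A_ss/16 - 3*I*A_s/(16*mu**2), evaluated at s = I*mu**2
A_s = sp.diff(A, s).subs(s, sp.I*mu**2)
A_ss = sp.diff(A, s, 2).subs(s, sp.I*mu**2)

print(sp.simplify(res_up - (A_ss/16 - 3*sp.I*A_s/(16*mu**2))))  # -> 0
\end{verbatim}
For the quadratic piece of the test amplitude the residue is proportional to its coefficient, which is the mechanism by which $\Sigma$ isolates the $s^2$ part of the amplitude.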
\begin{figure} \centering \includegraphics[width=0.4\textwidth]{cont.pdf} \caption{Integration contours for the dispersion relation in \eqref{S} and \eqref{eq:final_disp}.} \label{fig:gamma_cont} \end{figure} Note that the analytic structure of the amplitude is completely determined by the assumptions above. It is fully analytic in the whole $s$-plane, except for the branch cut along the whole real axis. We can thus deform the integration contour to $\Gamma$, consisting of two lines ${\rm Re}(s)\pm i\epsilon$, with $\epsilon\ll 1$, together with two arcs of infinite radius. Thus \begin{equation} \label{eq:final_disp} \Sigma(\mu ,0^-)= \frac{1}{2\pi i}\oint_{\Gamma}{\frac{{\cal A}(s,0^-)s^3 ds}{(s^2+\mu^4)^3}} =\int_0^{\infty}\frac{ds}{\pi}\left({\frac{s^3\, {\rm Im}{\cal A}(s+i\epsilon,0^-)}{(s^2+\mu^4)^3}}+\frac{(s-4m^2)^3{\rm Im}{\cal A}^\times(s+i\epsilon,0^-)}{((s-4m^2)^2+\mu^4)^{3}}\right)+\Sigma_{\infty}. \end{equation} Here we have used the Schwarz reflection principle ${\cal A}^*(s)={\cal A}(s^*)$ to relate the integral in the lower part of the complex plane to that in the upper part, and introduced the crossing-symmetric process\footnote{We have also used the fact that $s+t+u=4m^2$. For a detailed derivation of this expression, cf. \cite{Herrero-Valea:2020wxz}.} ${\cal A}^{\times}(s,t)$, obtained by letting $s\rightarrow u$, together with a change of variables, to rewrite the whole expression as an integral over positive values of $s$. In the previous formula, $\Sigma_{\infty}$ stands for the sum of the integrals along the two infinite arcs $\Gamma_C=\Gamma^+ +\Gamma^-$ in the upper and lower parts of the complex plane, \begin{align} \label{sigma_inf} \Sigma_{\infty}=\frac{1}{2\pi i}\oint_{\Gamma_C}ds \frac{{\cal A}(s,0^-)}{s^{3}}, \end{align} where $\mu$ has been neglected, since for this integral $|s|\rightarrow \infty$. The contribution of $\Sigma_\infty$ is normally ignored by invoking the Froissart-Martin bound \cite{Martin:1965jj} in the case of exchange of massive particles. For graviton exchange, it is typically assumed that a certain version of this bound holds in the form \begin{align}\label{eq:froisart_martin} \lim_{|s|\rightarrow \infty}\left|\frac{{\cal A}(s,0^-)}{s^2}\right|=0, \end{align} which seems enough for the arc integrals to vanish. This bound has been rigorously derived for theories in $d>4$ space-time dimensions, but the realistic case of $d=4$ remains elusive, so eq. \eqref{eq:froisart_martin} has to be regarded as an extra assumption at this stage. In this work, we do not want to make such an assumption on the UV behavior of the scattering amplitude beforehand, and thus we keep the integral arbitrary hereinafter. Our only starting requirement will be that $\Sigma_{\infty}$ does not contain any forward limit singularities stronger than a pole. Namely, $\left(t\cdot \Sigma_{\infty}\right)_{t\rightarrow 0^-}\sim \text{constant}$. Note that \eqref{eq:final_disp} can be thought of as a formula connecting the IR and UV behaviors of a given theory. While the RHS is an explicit integral along the full range of $s$, and thus sensitive to the properties of the UV theory, the value of $\Sigma(\mu,0^-)$ can also be computed in the IR region by using \eqref{S}, provided that $\mu$ is sufficiently small. In this case, it can even be computed in an EFT approximation of the full theory, as long as $\mu\ll \Lambda$, with $\Lambda$ being the cut-off of the EFT.
This leads to a simple expression in terms of the residues of the integrand in \eqref{S} \begin{equation} \label{sigma} \Sigma(\mu,0^-)=\frac{{\cal A}_{ss}(i \mu^2, 0^-)}{16}-\frac{3 i {\cal A}_s(i \mu^2,0^-)}{16 \mu^2}, \end{equation} where ${\cal A}_s(x,0^-)=\left.\partial_s {\cal A}(s,0^-)\right|_{s=x}$, and analogously for ${\cal A}_{ss}(x,0^-)$ with the second derivative. For an amplitude of the form \eqref{amplitude}, which can be obtained from an EFT coupled to General Relativity\footnote{Or, in general, to any theory whose tree-level gravitational dynamics matches that of the Einstein-Hilbert action.}, one obtains \begin{equation} \Sigma(\mu,0^-)=\frac{1}{2}\left(\frac{A_0}{t}+A_1 \log{t}\right)+(\text{regular terms})+(\text{higher loops}). \end{equation} Thus, $\Sigma(\mu,0^-)$ defined as in eq. \eqref{S} captures, up to an overall factor, precisely the coefficient in front of the $s^2$ term in the amplitude. Remarkably, this expression still contains singularities in the forward scattering limit $t\rightarrow 0^-$, produced by the $s^2$ dependence of those terms coming from graviton exchange. In the next section we discuss what these singularities tell us about the UV theory. It is important to remark here that this problem is particular to graviton scattering. If one scatters any other massless species -- scalars or vector bosons -- which are instead described by a renormalizable theory, the scattering amplitude will still be divergent in the forward limit. However, these divergences come without a quadratic $s$-dependence, which means that they drop out when computing $\Sigma(\mu,0^-)$, leading to a regular dispersion relation. \section{Graviton scattering and cancellation of divergences} \label{sec:asympt_exp} Let us start by recalling that \eqref{eq:final_disp} is \emph{exact}. In its derivation there is no approximation or expansion whatsoever, besides taking an approximate forward limit. This means that, if the LHS is divergent, the RHS must be so as well. However, the latter depends only on the imaginary part of the scattering amplitude, which is regular within the physical region by the optical theorem \begin{align}\label{eq:optical_theorem} {\rm Im}{\cal A}(s,0)=s\sqrt{1-\frac{4m^2}{s}}\ \sigma(s), \end{align} and the requirement that the total cross-section is finite. As discussed in \cite{Alberte:2021dnj}, this is enough to conclude that the divergence in the RHS of \eqref{eq:final_disp} must come from the failure of the integral to converge when $t\rightarrow 0^-$ in some high-energy regime $s\gg M_*^2$, where $M_*$ is thus the scale above which the EFT fails to describe graviton scattering and must be replaced by its UV completion. Assuming the mildest possible analytic behavior of the amplitude leads to a linear Regge trajectory -- cf. \cite{Alberte:2021dnj} and the appendix in \cite{Herrero-Valea:2020wxz} -- \begin{align} \left. {\rm Im} {\cal A}(s,t)\right|_{s\gg M_*^2}=r_* s^{2+\alpha' t}. \end{align} Here $\alpha'\sim M_*^{-2}$ and $r_*$ are provided by the concrete UV completion leading to this form. By assuming this \emph{exact} high energy behavior, we can cancel the tree-level pole in the LHS of \eqref{eq:final_disp}. Such a linear Regge behavior for graviton scattering is typical in string theories. Indeed, it can be obtained from the famous Veneziano amplitude \cite{Veneziano:1968yb}, describing the scattering of four closed bosonic strings \cite{Gross:1987ar}. However, it is naive to assume that the Regge behavior at high energies is an exact linear trajectory.
Indeed, this would lead to two problems. First, it only provides cancellation of the tree-level pole. Second, plausible candidates for UV amplitudes, like the scattering of strings, are not exactly Regge; only their leading behavior is of this form. It is then natural to wonder what is the possible allowed form of these sub-leading corrections. One first mandatory requirement is that they must be able to cancel the logarithmic divergences as well as the pole. As shown in \cite{Herrero-Valea:2020wxz}, this requires the sub-leading terms to contain a piece \begin{align} \left. {\rm Im} {\cal A}(s,t)\right|_{s\gg M_*^2}=r_* s^{2+\alpha' t}\left(1+\frac{\zeta}{\log(\alpha's)}\right), \end{align} with $\zeta$ a dimensionless constant, but nothing else is known beyond this. We now show, however, that we can indeed obtain a good amount of extra information on the UV amplitude by simply requiring the cancellation of divergences, obtaining some results that go beyond the linear Regge trajectory. In order to proceed, we assume that the high energy form of the imaginary part of the amplitude reads \begin{align} \left. {\rm Im} {\cal A}(s,t)\right|_{s\gg M_*^2}=s^{2+\alpha' t} \phi(s,t), \end{align} where $\phi(s,t)$ is an arbitrary function. We now take \eqref{eq:final_disp} and split the integration regime in the RHS as \begin{align} \int_{0}^{\infty}=\int_{0}^{M_*^2}+\int_{M_*^2}^{\infty}. \end{align} As we previously discussed, all the divergences in this RHS come from the high-energy behavior of the integral, which means that the first piece in the previous expression is regular. Thus, we move it to the LHS and write \begin{align} \Sigma-\Sigma_{\infty}-\frac{1}{\pi}\int_{0}^{M_*^2} ds\ F(s)=\frac{1}{\pi}\int_{M_*^2}^\infty ds\ F(s), \end{align} where we have introduced \begin{align} F(s)=\frac{s^3{\rm Im}{\cal A}(s+i\epsilon,0^-)}{(s^2+\mu^4)^{3}}+\frac{(s-4m^2)^3{\rm Im}{\cal A}^\times(s+i\epsilon,0^-)}{((s-4m^2)^2+\mu^4)^{3}}. \end{align} The LHS thus contains divergences and regular pieces, which we separate as \begin{align} \lim_{t\rightarrow 0^-} \left(\Sigma-\Sigma_{\infty}-\frac{1}{\pi}\int_{0}^{M_*^2} ds F(s)\right)= \frac{\beta_0}{t}+\beta_1 \log(-t) + \bar{f}, \end{align} where $\bar{f}$ is a constant. On the other hand, we take the limit $\{m,\mu\} \ll M_*$ in the RHS, as well as the assumption that the external states are bosonic -- and thus ${\cal A}^{\times}(s,0^-)={\cal A}(s,0^-)$ -- arriving at \begin{align} \frac{\beta_0}{t}+\beta_1 \log(-t) + \bar{f}=\frac{1}{\pi}\int_{M_*^2}^\infty \frac{ds}{s}s^{\alpha't} \phi(s,t)=\frac{M_*^{2\alpha't}}{\alpha' \pi}\int_0^\infty d\sigma\ \phi(\sigma ,t) e^{\sigma t}, \end{align} where in the last step we have performed the change of variables $s=M_*^2\ e^{\sigma/\alpha'}$. Finally, taking $x=-t$, recalling that the physical region corresponds to $t<0$, and thus $x>0$, and absorbing proportionality coefficients into the definition of $\beta_0$, $\beta_1$ and $\bar f$, we arrive at the final expression \begin{align}\label{eq:laplace_formula} \frac{\beta_0}{x}+\beta_1 \log(x) +\bar{f}=\int_0^\infty d\sigma\ \phi(\sigma,x) e^{-\sigma x}, \end{align} where we recognize the Laplace measure in the integral on the RHS. Knowing the UV behavior of ${\rm Im}{\cal A}(s,t)$, \eqref{eq:laplace_formula} can easily be used to compute the coefficients $\beta_0$, $\beta_1$ and $\bar f$ in a standard way. However, the inverse problem, obtaining the form of $\phi(\sigma,x)$ from the coefficients of the IR amplitude, is not so simple.
Actually, this mathematical problem has, in general, infinitely many possible solutions, but not all of them will satisfy the analyticity properties that we must require for a physical amplitude\footnote{Indeed, a trivial solution is given by $\phi(\sigma,x)=e^{\sigma x}\left(\frac{\beta_0}{x}+\beta_1 \log(x) +\bar{f}\right)\delta(\sigma-x)$, which does not satisfy analyticity at $x=0$ for all values of $\sigma$.}. In order to find a proper solution, let us make use of Watson's lemma \cite{https://doi.org/10.1112/plms/s2-17.1.116} for the integral in \eqref{eq:laplace_formula}. We will thus assume that the function $\phi(\sigma,x)$ satisfies\footnote{Exponential boundedness is a softer behavior than the one required by the Froissart-Martin bound. The latter is actually problematic when confronted with the full Veneziano amplitude for string scattering, which does not satisfy it. Instead, Veneziano's amplitude is exponentially bounded, exactly as we require here.} \begin{align} \lim_{\sigma\rightarrow \infty}\frac{\phi(\sigma,x)}{e^{\gamma \sigma}}=0, \end{align} for some $\gamma>0$, and that it is a meromorphic function\footnote{The assumption of meromorphicity is true at one loop, as we show below. However, we might be forced to abandon it in order to account for higher loop divergences in the IR, such as $\log(\log(x))$. Nevertheless, all these terms will enter with an extra scale suppression and we thus ignore them hereinafter.} around $\sigma=0$. Thus, it can be expanded in a Laurent series around this point \begin{align} \phi(\sigma,x)=\sum_{n=0}^{\infty}\left(a_n(x)\sigma^n+\frac{b_n(x)}{\sigma^n}\right), \end{align} with $b_0(x)=0$. Any individual term of this sum, when plugged into \eqref{eq:laplace_formula}, corresponds to a Laplace transform of $\sigma^a$ for a certain power $a$. Let us focus first on the non-analytic pieces of the series. By performing the integral -- using the analytic continuation of the $\Gamma$-function -- we get \begin{align} b_n(x) \int_\epsilon^\infty d\sigma \ e^{-\sigma x} \sigma^{-n}= \frac{(-1)^n}{(n-1)!}\ b_n(x) x^{n-1}\log(x)+{\cal O}\left(x,\epsilon^{-1}\right), \end{align} where $\epsilon\ll 1$ is a regulator. For $n\geq 2$ we get terms which are not present in the LHS of \eqref{eq:laplace_formula}, unless $b_n(x)\sim x^{1-n}$. However, this violates the assumption of analyticity of $\phi(\sigma,x)$ when $x\rightarrow 0$. We conclude that all singular terms in the Laurent series with $n\geq 2$ must vanish. We thus have \begin{align}\label{eq:Laurent} \phi(\sigma,x)=\frac{b_1(x)}{\sigma}+\sum_{n=0}^{\infty}a_n(x)\sigma^n, \end{align} where $b_1(x)=\beta_1+{\cal O}(x)$, in order to account for the $\log(x)$ forward divergence in the LHS of \eqref{eq:laplace_formula}. This justifies the choice made in \cite{Herrero-Valea:2020wxz}. We now shift our focus to the Taylor series. Since $\phi(\sigma,x)$ is analytic around $x=0$, we have \begin{align}\label{eq:cond_cancel} \lim_{x\rightarrow 0} a_n(x)=a_n x^{\eta_n}, \end{align} where all the $\eta_n$ are constant and we assume them to be different. Later we will see that this is necessary to avoid double and higher poles in \eqref{eq:laplace_formula}. For now, let us take it as an assumption. We now invoke the partial wave expansion of the amplitude for a unitary theory, which implies -- see Appendix B of \cite{deRham:2017imi} for a proof -- \begin{align} \frac{d^k}{dt^k}\left. {\rm Im}{\cal A}(s,t)\right|_{t=0}\geq 0, \end{align} for all $k$ and all values of $s$ within the physical region.
Using this fact, we easily conclude by direct computation that \begin{align} a_n\geq 0, \quad \text{for all}\ n. \end{align} Knowing this, we now plug the Taylor series in \eqref{eq:Laurent} back into the integral, and by integrating term by term we get \begin{align}\label{eq:integral_sum} \sum_{n=0}^{\infty} \int_0^\infty d\sigma\ e^{-\sigma x}a_n(x)\sigma^n=\sum_{n=0}^\infty \frac{a_n(x) \Gamma(n+1)}{x^{n+1}}. \end{align} Since all $a_n\geq 0$, there cannot be cancellations between different values of $n$, which implies that each term in the RHS must, at most, diverge as a single pole. This implies \begin{align}\label{eq:cond_an} a_n(x)=a_n x^n+{\cal O}\left(x^{n+1}\right), \end{align} for some constant, perhaps vanishing, coefficient $a_n$. Note however that, in order to cancel the single pole $\beta_0/x$ in \eqref{eq:laplace_formula}, at least one of the $a_n$ coefficients must be non-vanishing. At this point one might be worried about the convergence of the sum in \eqref{eq:integral_sum}, since we are expanding around $\sigma=0$ and integrating over the whole half-line $\sigma\in[0,\infty)$. However, this leads to no problems in the setting discussed here. Let us show this explicitly by cutting the Taylor series at a finite order $N$ \begin{align} \phi(\sigma,x)=\frac{b_1(x)}{\sigma}+\sum_{n=0}^N a_n(x)\sigma^n+\mathfrak{R}_{N+1}(\sigma,x). \end{align} Since this is a Laurent series, there exists a function $K(x)$ such that \begin{align} |\mathfrak{R}_{N+1}(\sigma,x)|< K(x) \sigma^{N+1}, \end{align} at least in the limit $x\rightarrow 0$ of interest. Using this condition we can thus estimate the size of the remainder after cutting the series and exchanging the order of summation and integration. We have \begin{align} \left|\int_0^\infty d\sigma\ e^{-\sigma x}\mathfrak{R}_{N+1}(\sigma,x) \right|< \int_0^\infty d\sigma\ e^{-\sigma x}\left|\mathfrak{R}_{N+1}(\sigma,x)\right| <K(x) \int_0^\infty d\sigma\ e^{-\sigma x}\sigma^{N+1}. \end{align} The last integral is immediate and gives \begin{align} \left|\int_0^\infty d\sigma\ e^{-\sigma x}\mathfrak{R}_{N+1}(\sigma,x) \right|<{\cal O}\left(\frac{1}{x^{N+2}}\right). \end{align} Noting that the $N$-th term in the series contributes at order $a_N(x)x^{-(N+1)}$, this shows that \eqref{eq:integral_sum} is well-behaved as an asymptotic series, which is enough for our purposes here. Before moving forward, let us go back to the condition \eqref{eq:cond_cancel}. It is now obvious why all $\eta_n$ have to be different in the limit $x\rightarrow 0$: unless they satisfy \eqref{eq:cond_an}, there would be extra divergences after integration of $\phi(\sigma,x)$. A possible way out would be to have two terms giving rise to the same divergence and cancelling each other. However, since all $a_n\geq 0$, this is not possible. We thus conclude that the form of our asymptotic expansion is indeed unique and, collecting all of the above, in the forward limit it reads \begin{align}\label{eq:phi_forward_limit} \lim_{x\rightarrow 0} \phi(\sigma,x)=\frac{\beta_1}{\sigma}+\sum_{n=0}^\infty a_n (x\sigma)^n, \quad \sum_{n=0}^\infty a_n \Gamma(n+1)=\beta_0. \end{align} Note that the expansion of the function $\phi(\sigma,x)$ in the forward limit, which naively corresponds to $x\rightarrow 0$, has instead become an expansion in $x\sigma\rightarrow 0$, since it is only under this assumption that \eqref{eq:phi_forward_limit} is well-behaved as an asymptotic expansion.
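The term-by-term transforms used above can also be checked mechanically. A minimal \texttt{sympy} sketch of ours (illustrative only, with concrete small values of $n$) reproduces both the $\Gamma(n+1)/x^{n+1}$ single poles entering \eqref{eq:integral_sum} and the $\log(x)$ generated by the $1/\sigma$ term of \eqref{eq:Laurent}:
\begin{verbatim}
import sympy as sp

sigma, x = sp.symbols('sigma x', positive=True)

# term-by-term Laplace transforms entering eq. (integral_sum)
for n in range(4):
    L = sp.integrate(sigma**n * sp.exp(-sigma*x), (sigma, 0, sp.oo))
    print(n, L)  # -> Gamma(n+1)/x**(n+1), a single pole once a_n(x) ~ x**n

# the 1/sigma piece, regulated at sigma = epsilon, carries the log(x) divergence
eps = sp.Symbol('epsilon', positive=True)
L_log = sp.integrate(sp.exp(-sigma*x)/sigma, (sigma, eps, sp.oo))
print(L_log)  # an exponential-integral/incomplete-gamma function, whose
              # small-epsilon expansion is -EulerGamma - log(epsilon*x) + ...
\end{verbatim}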
This suggests that the proper forward limit to take in the presence of graviton exchange, at least at high energies, is actually \begin{align}\label{eq:limit} \tau=\sigma x \sim t\log(s)\rightarrow 0, \end{align} or $t\log|s|\rightarrow 0$ in the complex plane for $s$, which ensures perturbative control of the UV amplitude. As we will see in a moment, this has a strong impact on the derivation of positivity bounds. Since the asymptotic limit $|s|\rightarrow \infty$ and $t\rightarrow 0$ has to satisfy \eqref{eq:limit}, the computation of the integral along $\Gamma_C$ in \eqref{sigma_inf} has to be carried out carefully. \section{Arc integrals in the forward limit} \label{sec:arc_int} By means of analyticity and crossing symmetry, one can actually go beyond our results in the previous section and recover the asymptotics of the whole amplitude from its imaginary part. Indeed, for $|s|\gg M_*^2$ we can write a totally standard dispersion relation for the scattering amplitude by using Cauchy's integral theorem. We have \cite{deRham:2017avq,Herrero-Valea:2020wxz} \begin{equation} {\cal A}(s,t)=\frac{s^2}{2\pi i}\oint_{\gamma_s}\frac{{\cal A}(z,t)dz}{z^2(z-s)}=F(s,t)+F(-s-t,t). \end{equation} Here $\gamma_s$ is a small circle around $z=s$ and we have exploited crossing symmetry to obtain an explicit expression in terms of $s$ and $u=-s-t$. The function $F(s,t)$ reads\footnote{See \cite{deRham:2017avq} for a detailed derivation. Here we are taking the subtraction point $\mu_p^2\ll |s|$, and taking into account that all pathologies of the scattering amplitude, such as the pole, are contained in the IR.} \begin{equation} F(s,t)=\frac{ s^2}{\pi}\int_{0}^{\infty}{\frac{{\rm Im}{\cal A}(z,t)dz}{z^2(z-s)}}=\frac{ s^2}{\pi}\int_{0}^{M_*^2}{\frac{{\rm Im}{\cal A}(z,t)dz}{z^2(z-s)}}+\frac{ s^2}{\pi}\int_{M_*^2}^{\infty}{\frac{a_0 z^{\alpha' t}dz}{z-s}} +{\cal O}\left(t \log(s)\right), \end{equation} where we have taken into account only the leading term in the expansion \eqref{eq:phi_forward_limit}. In the region of validity, $|s|\gg M_*^2$, the first integral becomes proportional to $s/M_*^2$, so that it can be neglected. The asymptotics of the last one for large $|s|$ leads instead to \begin{equation} F(s,t)=-\frac{a_0 e^{-i\pi \alpha' t}}{\sin{(\pi \alpha't)}}s^{2+\alpha't}. \end{equation} Thus, we see that the total leading part of the amplitude, and not only its imaginary part, is actually completely determined. It reads \begin{equation} \label{eq:Regge} {\cal A}(s,t)=-\frac{a_0 e^{-i\pi \alpha' t}}{\sin{(\pi \alpha't)}}(s^{2+\alpha't}+(-s-t)^{2+\alpha't})+{\cal O}\left(t\log (s)\right). \end{equation} This result of course reproduces the imaginary part $a_0 s^{2+\alpha' t}$, while being at the same time its analytic and crossing symmetric continuation\footnote{Notice that for large $s>0$ and small $t<0$ the second term does not contribute to the discontinuity.}. After obtaining this asymptotic form for the UV scattering amplitude including the exchange of gravitons, we can now focus on understanding whether we can really set the contribution of the infinite arcs $\Sigma_{\infty}$ to the dispersion relation \eqref{eq:final_disp} to vanish or not. In the case of gapped theories, the Froissart-Martin bound $|{\cal A}(s,t)|<s\log^2{s}$ guarantees that $\Sigma_{\infty}=0$. For \eqref{eq:Regge} instead, we find a different result. Let us then compute the integral in \eqref{sigma_inf} explicitly.
Taking into account that the arc $\Gamma_R$ is described by $s=R e^{i\theta}$ with $R\rightarrow \infty$, and that \begin{align} \frac{1}{2\pi i}\oint_{\Gamma_R}ds \frac{s^{2+\alpha' t}}{s^{3}}=\frac{ R^{\alpha't}}{2\pi} \int_0^{2\pi} d\theta e^{i\alpha't \theta}=\frac{ R^{\alpha't }}{2\pi}\frac{e^{2\pi i\alpha't}-1}{i\alpha't }, \end{align} we get \begin{equation} \Sigma_{\infty}= -\frac{2 a_0 e^{-i\pi \alpha' t}}{\sin{(\pi \alpha't)}} \frac{ R^{\alpha't }}{2\pi}\frac{e^{2\pi i\alpha't}-1}{i\alpha't }=-\frac{2 a_0}{\pi\alpha' t}R^{\alpha' t}=-\frac{2 a_0}{\pi\alpha' t}, \end{equation} where we have taken into account that the asymptotic expansion is well-controlled only when $t\log R\rightarrow 0$. Since in this limit $R^{\alpha' t}\rightarrow 1$, we get a non-vanishing contribution, unlike in the case where one first takes the limit $R\rightarrow \infty$ with small but finite $t$ \cite{inpreparation}. For the sake of completeness, let us notice that if one takes the integral over the branch cut in \eqref{eq:final_disp} not from $M_*^2$ to infinity, but from $M_*^2$ to $R$, one obtains instead \begin{equation} \Sigma_{UV}=\frac{2}{\pi}\int_{M_*^2}^{R}\frac{ds\,{\rm Im}{\cal A}(s,t)}{s^3} =\frac{2}{\pi}\int_{M_*^2}^{R}\frac{ds}{s} \,a_0 s^{\alpha' t}=\frac{2a_0}{\pi \alpha' t}\left(R^{\alpha' t}-(M_*^2)^{\alpha' t}\right). \end{equation} In the limit $t \log{R}\rightarrow 0$ both terms go to unity and we get \begin{equation} \Sigma_{UV}=O(t)+O(M_*^{-4}), \end{equation} without the $1/t$ divergence, which appears instead to be captured in the arc contribution $\Sigma_{\infty}$. If instead we carefully take $R\rightarrow \infty$ first at finite negative $t$, the familiar result with $\Sigma_{\infty}=0$ is reproduced \cite{inpreparation}. Thus, it seems that the combination $\Sigma_{UV}+\Sigma_{\infty}$ does not depend on the way of taking the limits in $R$ and $t$, although the individual terms do. Hereinafter we use the limit $\tau\rightarrow 0$ motivated by our findings in section \ref{sec:asympt_exp}. This simplifies the computation by allowing us to use only the leading term in the $\tau$ expansion for the imaginary part of the amplitude. With this choice, and for the simple amplitude \eqref{eq:Regge}, the infinite arc brings a $1/t$ term. However, sub-leading terms might also lead to finite contributions. In particular, by noting that the new pole contribution comes from the real part of the amplitude, which is not constrained by unitarity or any of the other arguments used here, it seems that any finite value can be obtained from the UV, simply by modifying ${\rm Re}{\cal A}(s,t)$. For example, take the equally valid amplitude \begin{equation} \label{eq:A_UV_simple} {\cal A}(s,t)=-\frac{a_0 e^{-i\pi \alpha' t}}{\sin{(\pi \alpha't)}}(s^{2+\alpha't}+(-s-t)^{2+\alpha't})+\beta e^{-i\pi \alpha' t}(s^{2+\alpha't}+(-s-t)^{2+\alpha't}), \end{equation} which leads to \begin{equation} \Sigma_{\infty}=-\frac{2 a_0}{\pi\alpha' t} +\beta +O(t). \end{equation} This indeed contains a finite piece that must be matched by the infrared terms in the LHS of \eqref{eq:final_disp} in order for the dispersion relation to be valid. Notice, however, that $\beta$ is not constrained at all, and in particular it is not forced to be positive. It can be large ($|\beta|\gg M_*^{-4}$) and negative. Thus, its value will influence the applicability of positivity bounds based on the relation \eqref{eq:final_disp}. We will discuss this point later. Here we did not include the $\log\left(-t\right)$ divergences in the dispersion relation.
Given that they can be cancelled by the sub-leading contributions to ${\rm Im}{\cal A}(s,t)$ at large $s$, their impact on $\Sigma_{\infty}$ should also be sub-leading. An explicit computation shows that the term $s^{2+\alpha' t}/(\log{s})$ appearing in the imaginary part always leads to a vanishing contribution. Thus, only those terms needed to cancel the leading IR divergences induce a non-trivial value of the infinite arc integrals. However, let us also note that although sub-leading IR divergences are not sensitive to the real part of the amplitude in the UV, they are still relevant to recover its imaginary part, as previously discussed in this work. \section{QED with gravity}\label{sec:QED} Following the derivation of positivity bounds from twice-subtracted dispersion relations, several works examined their consequences for different physical theories of interest for model building. In particular, bounds in the presence of graviton exchange were closely studied in the recent works \cite{Alberte:2020bdz,Alberte:2021dnj}. There, by looking at photon scattering, the authors show that positivity bounds are easily violated by gravitational contributions. In this section we examine how our findings and, in particular, the contribution of the infinite arcs, alleviate this issue. Let us then consider Quantum Electrodynamics (QED) coupled to gravitation, with action \begin{equation}\label{eq:QED} S=\int\sqrt{-g}\,d^4x\left(-\frac{M_P^2}{2}R-\frac{1}{4}F_{\mu\nu}^2+\bar{\psi}(i\gamma^{\mu}D_{\mu}-m)\psi\right), \end{equation} where $F_{\mu\nu}$ is the photon field strength, and $\psi$ is a fermion field with charge $e$. Following \cite{Alberte:2020bdz} we look at $2\rightarrow 2$ photon scattering, including tree-level graviton exchange and one-loop fermion corrections -- thus retaining contributions up to ${\cal O}\left(e^6\right)$ and ${\cal O}\left(M_P^{-4}\right)$. In the forward limit, and taking $s\gg m^2$, this reads \begin{equation} {\cal A}(s,t)=-\frac{s^2}{M_P^2 t}+\frac{1}{M_P^2}\left(-\frac{11 e^2 s^2}{360 \pi^2 m^2}+\frac{e^2 s}{12\pi^2}\right)+\frac{11 e^4 s^2}{720 \pi^2 m^4}. \end{equation} The different topologies contributing to this scattering are shown in figure \ref{fig:diagrams_QED}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{diagrams.pdf} \caption{Topologies providing the leading contributions to the photon scattering amplitude at large $s$ in the forward limit.} \label{fig:diagrams_QED} \end{figure} The RHS of \eqref{eq:final_disp} contains four different pieces when computed for this amplitude. The first one is the integral of the imaginary part of the amplitude for those contributions that survive in the limit $M_P\rightarrow \infty$. These can be computed analytically, since the action \eqref{eq:QED} corresponds to a renormalizable theory in this limit, and were obtained in \cite{Alberte:2020bdz}; we borrow their result here. The second piece is the contribution given by pure gravitational terms in the IR. Again, these can be computed explicitly by using the amplitude above, as was done in \cite{Alberte:2020bdz}. We also have the pieces coming from the UV part of the integral, for $s\gg M_*^2$, which depend on the UV completion, as previously discussed. We name them $\Sigma_{\rm UV}$. Finally, we have the contribution of the infinite arcs, which, as we have learnt, cannot be taken to vanish a priori.
The total result for the right hand side -- in the limit of small $\mu$ -- is then \begin{equation} \Sigma(\mu,0^-)=\frac{11 e^4 }{360 \pi^2 m^4}-\frac{11 e^2}{360 \pi^2 m^2 M_P^2}+\Sigma_{\rm UV}+\Sigma_{\infty}. \end{equation} On the other hand, the LHS of \eqref{eq:final_disp} can be directly computed and in this case reads \begin{equation} \Sigma(\mu,0^-)={\cal A}_{ss}(s)=-\frac{2}{M_P^2 t}-\frac{11 e^2 }{180 \pi^2 m^2 M_P^2}+\frac{11 e^4}{360 \pi^2 m^4}. \end{equation} As we can see, both results agree in the decoupling limit of gravitational interactions. This is not a surprise because, as we have already pointed out, the action \eqref{eq:QED} is renormalizable in this limit. This means that \eqref{eq:final_disp} becomes trivial. The $1/t$ divergence, however, has a purely gravitational origin and, as we have discussed, will be cancelled by the interplay between $\Sigma_{UV}$ and $\Sigma_{\infty}$. However, the contributions of order $(m M_P)^{-2}$, showing up on both sides, do not cancel each other. If we were being naive, assuming that $\Sigma_\infty=0$ and simply cancelling the pole with $\Sigma_{\rm UV}$, then we would find a clash between the two approaches to deriving $\Sigma(\mu,0^-)$. One way out is to assume the existence of new physics turning on before gravitational interactions, such that the amplitude gets modified and leads to a cancellation of the undesired piece \begin{align} -\frac{11 e^2 }{360 \pi^2 m^2 M_P^2}+\frac{\Theta}{\Lambda^2}=0, \end{align} where $\Theta$ is a constant. From simple dimensional analysis we can see that this implies that the cut-off of the EFT -- in other words, the scale at which new physics is introduced -- would be $\Lambda\sim \sqrt{m M_P}$, which in QED leads to $\Lambda \sim 10^8 {\rm GeV}$, significantly lower than the Planck scale. This option was studied in \cite{Alberte:2021dnj}. However, there is another natural solution to the problem -- the possibility of having a non-zero infinite arc contribution $\Sigma_{\infty}$ with negative sign. As we have shown in the previous sections, the value of $\Sigma_\infty$ contains contributions from the real part of the gravitational amplitude at high energies -- the term $\beta$ in \eqref{eq:A_UV_simple} -- which are not constrained at all. Thus, they can potentially cancel the remaining negative contribution of order $(m M_P)^{-2}$. Moreover, by considering this possibility, we can obtain non-trivial information about the UV completion of gravitational interactions. In particular, in the case of QED, we can conclude that the UV amplitude in the Regge limit should `know' about the presence of the light particle (the electron), since it needs to contain a large negative contribution related to it. For instance, by borrowing expression \eqref{eq:A_UV_simple}, a possible consistent UV amplitude in the forward limit is \begin{equation} \label{A_QED} {\cal A}(s,t)=e^{-i\pi \alpha' t}\left(-\frac{11 e^2 }{360 \pi^2 m^2 M_P^2}-\frac{\pi \alpha' }{M_P^2\,\sin{(\pi \alpha't)}}\right)\left(s^{2+\alpha't}+(-s-t)^{2+\alpha't}\right), \end{equation} but this option is, of course, not unique, and other possible amplitudes could lead to similar physical results -- cf. \cite{Alberte:2020jsk}. Let us stress that the Regge slope $\alpha'$ cannot be fixed from IR considerations; it is instead connected to properties of the UV completion.
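As a quick arithmetic sanity check of the scale quoted above -- a sketch of ours, taking the electron mass and the (non-reduced) Planck mass as illustrative inputs:
\begin{verbatim}
from math import sqrt

m_e = 0.511e-3   # electron mass in GeV
M_P = 1.22e19    # (non-reduced) Planck mass in GeV

print(f"Lambda ~ {sqrt(m_e * M_P):.1e} GeV")  # ~ 8e7 GeV, i.e. of order 10^8 GeV
\end{verbatim}
That is, new physics would be required some eleven orders of magnitude below the Planck scale.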
By considering this solution to the conundrum unveiled in \cite{Alberte:2020bdz,Alberte:2021dnj}, QED can be `rescued' and trusted as a good EFT when coupled to gravitation up to the Planck scale. Of course, a realistic model of QED breaks down before $M_P$, since it needs to be embedded into the Electroweak model, but a similar reasoning can be made even in the general case of the Standard Model (SM) -- where the problem is even worse due to neutrino loops, which bring the cut-off down to values just slightly above the LHC reach. Finally, let us point out that the QED contribution to \eqref{A_QED} can be interpreted as the contribution of loops of light particles -- the fermion in this case -- to the Regge behaviour of the amplitude in the UV \cite{Alberte:2021dnj}. \section{The fate of gravitational positivity bounds} \label{sec:fate_bounds} The prototypical application of formula \eqref{eq:final_disp} is to derive positivity bounds -- constraints on the values of the Wilson coefficients of EFTs -- by computing explicitly the value of $\Sigma(\mu,0^-)$ in the IR. These are obtained by simply considering the following expression \begin{equation} \Sigma=\int_{s_{\rm th}}^{\infty}\frac{ds}{\pi}\left({\frac{s^3\, {\rm Im}{\cal A}(s,0)}{(s^2+\mu^4)^3}}+\frac{(s-4m^2)^3{\rm Im}{\cal A}^\times(s,0)}{((s-4m^2)^2+\mu^4)^{3}}\right), \end{equation} where we have assumed for the moment that $\Sigma_\infty =0$. Here $s_{\rm th}$ stands for the threshold of particle production, where the branch cut on the real axis starts. For scattering processes without massless particles in the exchange channel, this corresponds to $s_{\rm th}=4m_l^2$, with $m_l$ the mass of the lightest exchanged state, and the integral runs along the physical region of the Mandelstam variable $s$ \cite{Adams:2006sv}. In the case of a massless exchange, we have $s_{\rm th}=0$. Since the optical theorem \eqref{eq:optical_theorem}, together with unitarity of the UV completion, implies that the integrand in the RHS is always positive, we can conclude that \begin{align}\label{eq:positivity_b_simple} \Sigma>0, \end{align} which in turn implies conditions on the Wilson coefficients contributing to the scattering amplitudes and ultimately to $\Sigma$. The bounds \eqref{eq:positivity_b_simple} can be improved by noting that part of the RHS can actually be computed within an EFT. Splitting the integral in the RHS as $\int_{s_{\rm th}}^\infty=\int_{s_{\rm th}}^{\Lambda^2}+\int_{\Lambda^2}^{\infty}$, we can move the first piece to the left and conclude in the same fashion that \begin{equation} \Sigma-\int_{s_{\rm th}}^{\Lambda^2}\frac{ds}{\pi}\left({\frac{s^3\, {\rm Im}{\cal A}(s,0)}{(s^2+\mu^4)^3}}+\frac{(s-4m^2)^3{\rm Im}{\cal A}^\times(s,0)}{((s-4m^2)^2+\mu^4)^{3}}\right)>0. \end{equation} These \emph{improved} positivity bounds have also been referred to in the literature as \emph{beyond} positivity bounds \cite{Bellazzini:2017fep}. If $s_{\rm th}\ne 0$ there is a simpler way to bound the coefficient in front of $s^2$ in the amplitude. One can equivalently derive a bound for \begin{equation} \bar\Sigma=\frac{1}{2\pi i}\oint\frac{{\cal A}(s)\,ds}{(s-\mu^2)^3}=\frac{1}{2}{\cal A}_{ss}(\mu^2)>0, \end{equation} which is applicable for $\mu^2<s_{\rm th}$ \cite{Adams:2006sv,deRham:2017xox,Bellazzini:2017fep}. This approach, however, cannot be directly applied to the scattering of massless particles. The presence of a branch cut with $s_{\rm th}=0$ requires using the more complicated dispersion relation \eqref{eq:final_disp}.
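The identity $\bar\Sigma=\tfrac{1}{2}{\cal A}_{ss}(\mu^2)$ is just the residue of a third-order pole; a short \texttt{sympy} check of ours (with an arbitrary polynomial test amplitude, chosen purely for illustration) reads:
\begin{verbatim}
import sympy as sp

s = sp.Symbol('s')
mu = sp.Symbol('mu', positive=True)
A = (s - sp.Rational(1, 3))**5 + 2*s**2          # hypothetical test amplitude

res = sp.residue(A / (s - mu**2)**3, s, mu**2)   # the contour integral above
print(sp.simplify(res - sp.diff(A, s, 2).subs(s, mu**2)/2))  # -> 0
\end{verbatim}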
These types of bounds can be systematically obtained from many different dispersion relations, simply by taking more subtractions -- see \cite{deRham:2017avq, Bellazzini:2020cot}. Even amplitudes containing graviton exchange can provide rigorous bounds for the coefficients in front of higher powers of $s$ -- $s^4$ and beyond -- as well as for their $t$ derivatives \cite{Bern:2021ppb}, which are regular in the forward limit. This happens because the $1/t$ pole (as well as the loop IR singularities) is accompanied by at most an $s^2$ power\footnote{This may not be true in theories with higher spin states, which we are not considering here.}. For this reason, only the application of positivity bounds to the $s^2$ term gets obstructed in the presence of graviton exchange and graviton loops. Of course, one can proceed naively by regularizing the divergence in the same way as we have proceeded here, keeping small $t<0$, and simply bound the coefficient accompanying the divergence to be negative, since it dominates the bound. However, this is just the residue in the pole of the graviton propagator, whose sign is already constrained by trivial requirements of perturbative unitarity. Thus, no new information is obtained from positivity bounds in this case, unless one resolves the singularity. This can be done provided that the contribution of the infinite arc $\Sigma_\infty$ vanishes. In this case the divergence can simply be cancelled by the proper Regge behaviour in the UV. However, there are finite remainders whose sign cannot be determined a priori, and thus one arrives at an approximate positivity bound \begin{equation} \Sigma>-O(M_*^{-4}), \end{equation} which allows for a small amount of negativity \cite{Tokuda:2020mlf,Herrero-Valea:2020wxz}. As we have mentioned, this is only true under the extra assumption that the infinite arc contribution is either zero in the limit $t\rightarrow 0$ or shown to be parametrically smaller than $O(M_*^{-4})$. However, as we have discussed in section \ref{sec:QED} with the example of QED coupled to gravity, the contribution of the infinite arc can actually be negative and parametrically large, violating this assumption. This reflects the fact that loops of light particles can affect the amplitude in the forward limit even in the UV region of large $s$. If this happens, then no bounds can be set on the finite part of the $s^2$ term in the amplitude, since they are modified to \begin{align*} \Sigma>\Sigma_\infty -{\cal O}\left(M_*^{-4}\right), \end{align*} which is meaningless without a systematic way to determine the size of $\Sigma_\infty$. Any amount of negativity can always be explained by contributions to the UV amplitude which, to the best of our knowledge, do not contradict any of the basic principles of QFT. Although the identification of the undetermined term as part of the infinite arc integral is related to our choice of kinematics in the UV, controlled by $\tau\rightarrow 0$, let us stress that the previous conclusion is not tied to it. For other choices, $\Sigma_\infty$ might vanish, but a similar contribution would arise from the branch cut, leading to the same physical conclusion \cite{inpreparation}. As a final note, let us remark that a possible way out of this conundrum is the case when the IR amplitudes are parametrically larger than the UV contribution to the arcs.
This requires the existence of a cut-off scale $\Lambda$ in the IR, such that the amplitude in this region can be organized as an EFT expansion \begin{align} {\cal A}(s,t)=\sum_{n=0}^\infty {\cal A}_n(s,t) \Lambda^{-n}, \end{align} while gravitational dynamics contributes terms ordered by powers of $M_P$. In the case in which $\Lambda \ll M_P$, the contribution of $\Sigma_\infty$ can thus be safely neglected, so that we recover an approximate positivity bound \begin{align} \Sigma>-{\cal O}\left(M_P^{-4}\right). \end{align} This justifies the application of positivity bounds to the case of gapped theories much below the gravitational scale -- and the neglect of gravity, even though everything couples universally to gravitation -- but the problem survives if one wants to account for graviton exchange. Information about the UV completion is needed. \section{Do string theories provide bona-fide positivity bounds?}\label{sec:strings} We are not currently in a position to provide a collection of known non-perturbative amplitudes against which to test our findings above on the UV behavior of gravitational scattering. However, string theory gives us some hints on how such examples are constructed. In this section we use string amplitudes as a test ground to see if one indeed recognizes $\tau=t\log(s)$ as the expansion parameter of the forward limit, and to assess what happens to the large arc integral, i.e. whether, under some conditions, it can provide a constant contribution similar to the one that solves the QED conundrum in section \ref{sec:QED}. Let us consider the 4-graviton scattering amplitude derived in type II superstring theory. It can be written in the following form \cite{Schwarz:1982jn,Tokuda:2020mlf} \begin{equation} {\cal A}_{\rm string}(s,t)=-A(s^2t^2+s^2u^2+t^2u^2)\left.\frac{\Gamma(-s)\Gamma(-t)\Gamma(-u)}{\Gamma(1+s)\Gamma(1+t)\Gamma(1+u)}\right|_{u=-s-t} \label{ssa} \end{equation} where $A$ is positive and the string constant is set to $\alpha'=4$ for simplicity. This amplitude represents the tree-level 4-graviton scattering in the closed superstring NS-NS sector. The polynomial factor accounts for the polarisation states of the spin-2 particle. This result has some limitations, though. Neither R-NS nor open-closed string interactions are included here. Moreover, it takes into account only scattering of string states, without touching the question of how branes enter the game. Note that D-branes, which naturally accommodate the previously successful Chan-Paton factors, are responsible for the inclusion of SM particles in string phenomenology scenarios \cite{Polchinski:1996fm,Douglas:1996uz}. All this means that the amplitude above is far from describing our real world, but it is still of great interest for our purposes here, because it is an example of an extremely successful, theoretically justified, non-perturbative scattering description. The expansion of the amplitude (\ref{ssa}) for $s\to\infty$ and $t\to0$ can be performed straightforwardly. Arranging the terms in powers of $t$ and keeping the leading $s$ contribution we get \begin{equation} {\cal A}_{\rm string}(s,t)=A\frac{s^2}t\left(1+2t\log(s)+2t^2\log^2(s)+\frac43t^3\log^3(s)+\frac23t^4\log^4(s)+\dots\right), \end{equation} where we readily reveal the canonical $s^2/t$ pole of the massless spin-2 particle, and recognize the presence of $\tau=t\log(s)$ as the expansion parameter. We note that the appearance of $\tau$ is a very non-trivial property. It was not guaranteed a priori that we would observe it here.
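In fact, the coefficients displayed above are those of $e^{2\tau}$, so that to this order the series resums into $(s^2/t)\,s^{2t}$ -- an exact Regge form in these conventions. A minimal \texttt{sympy} check of this resummation (ours, for illustration only):
\begin{verbatim}
import sympy as sp

tau = sp.Symbol('tau')
claimed = (1 + 2*tau + 2*tau**2
           + sp.Rational(4, 3)*tau**3 + sp.Rational(2, 3)*tau**4)

# coefficients 2**k/k! of exp(2*tau) match the series printed above
print(sp.series(sp.exp(2*tau), tau, 0, 5).removeO() - claimed)  # -> 0
\end{verbatim}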
However, our considerations in the previous sections suggested its presence as a necessary condition for a healthy amplitude. Its emergence here thus serves as a very non-trivial sanity check of our results. Computing $\Sigma(\mu,0^-)$ from ${\cal A}_{\rm string}(s,t)$ one gets \begin{equation} \Sigma(\mu,0^-)=-\frac A{2t}+4At+O\left(t^2\right), \end{equation} where we notice that the first sub-leading contribution is linear in $t$, with no constant ${\cal O}(1)$ term. This result can be obtained by following the procedure outlined in \cite{Tokuda:2020mlf}. It requires summing over the residues of the poles arising at the non-positive integer points of the $\Gamma$-functions. Remarkably, since the constant term is absent, the amplitude \eqref{ssa} does not provide the large negative contribution that saves the day in the case of QED -- cf. section \ref{sec:QED}. Therefore, we conclude that pure NS-NS superstring amplitudes cannot heal the curious contribution observed in \cite{Alberte:2020bdz}. However, there is hope for this to happen once SM particles are included in the amplitude, through coupling to D-branes. This is definitely an ambitious open question to be understood in the string framework, as one needs generalizations of SM amplitudes computed from the string perspective. Alternatively, there may still be a window in the string framework on its own if one includes other contributions arising from NS-R interactions or from open-closed string interactions. This analysis is, however, clearly beyond the scope of the present work. \section{Conclusions} \label{sec:conclusions} In this paper we have shown that the requirement of cancellation of the IR forward divergences appearing in graviton-mediated scattering is enough to constrain the form of the imaginary part of the scattering amplitude at very high energies, above a scale $M_*$. In particular, we have proven that whatever the form of the UV completion of gravitation is, ${\rm Im}{\cal A}(s,t)$ must admit an asymptotic expansion of the form \eqref{eq:phi_forward_limit} in the limit $\tau\propto t \log s\rightarrow 0$. The appearance of the parameter $\tau$ is a highly non-trivial feature that is nevertheless reproduced in the known case of the Veneziano amplitude of string theory \cite{Veneziano:1968yb}, as we discuss in section \ref{sec:strings}. The determination of the form of ${\rm Im}{\cal A}(s,t)$ has an immediate impact on the construction of positivity bounds, which are widely used to constrain EFTs of matter coupled to gravitation. In their derivation, there appear integrals along arcs with radius $|s|\rightarrow \infty$, which are typically taken to vanish, either by invoking the Froissart-Martin bound for gapped theories, or with other arguments in the gapless case. By using our expansion we show, however, that this cancellation is not guaranteed and instead depends on the form of the \emph{real} part of the scattering amplitude, which, to the best of our knowledge, is not constrained at all. If this real contribution exists, then the predictive power of gravitational positivity bounds is doomed, since the previous simple expression $\Sigma>0$, which is computable within an EFT, gets modified into \begin{align} \Sigma-\Sigma_\infty>0, \end{align} which is meaningless unless some input about the contribution of the UV completion $\Sigma_\infty$ is given case by case. Although this situation is overall negative for the applicability of positivity bounds, it can also have a bright side.
The undetermined contribution from the UV completion could compensate for the presence of anomalously large negative terms appearing in $\Sigma$ in the case of QED coupled to Einstein-Hilbert gravity, which we have studied in section \ref{sec:QED}, and in the general case of the SM. A naive solution that both preserves the validity of positivity bounds and accommodates these terms is to assume the existence of new physics above a relatively low energy scale $\Lambda\sim \sqrt{m_l M_P}$, where $m_l$ is the mass of the lightest fermion. In the case of the SM this can be within reach of the LHC and thus puts the validity of the SM itself in tension. In contrast to this solution, the existence of a non-vanishing contribution $\Sigma_\infty$, coming from the real part of the scattering amplitude of the UV completion, can solve the issue and preserve the validity of the SM up to the Planck scale. However, this would imply that positivity bounds cannot give us any new information in these situations. Alternatively, we can look at this as an opportunity to perform a reverse bootstrap. We can use the theories in the IR to compute the value of $\Sigma$, and use it to determine contributions to ${\rm Re}{\cal A}(s,t)$ at high energies through $\Sigma_\infty$. This could give important insight into how light particles contribute to graviton scattering even beyond the Planck scale. Finally, we have tested our results by looking at the non-perturbative amplitude for graviton scattering obtained from the scattering of NS-NS closed superstrings. This amplitude indeed organizes itself as an asymptotic expansion in $\tau\rightarrow 0$ in the double limit $s\rightarrow \infty$ and $t\rightarrow 0$, satisfying our results. However, it does not provide the large negative term $\Sigma_\infty$ required to save QED and the SM from a low cut-off. Although this could be interpreted as a hint of the true existence of this cut-off, we believe instead that it points to the necessity of a better understanding of scattering amplitudes in the string framework. In particular, the problem at hand seems to require going beyond the simple Veneziano amplitude, accounting also for interactions with SM particles by attaching the strings to D-branes, or perhaps including NS-R sectors and open-closed string interactions. Additionally, it is also interesting to ask whether there exist other UV completions besides string scattering that satisfy the requirements discussed in this work. Even more, we wonder if it is possible to construct model-independent amplitudes that not only cancel the IR forward divergences in graviton scattering, but also render the SM safe up to the Planck scale, and what we can say about such amplitudes. One possible direction of research along these lines could be to consider triple-product amplitudes. Indeed, by looking at \eqref{ssa} we see that, apart from the polarization factor, the expression is a triple product of ${\cal B}(z)=\Gamma(-z)/\Gamma(1+z)$, such that ${\cal A}(s,t)\sim{\cal B}(s){\cal B}(t){\cal B}(u)$. Recently, a new set of amplitudes with a triple-product structure was considered in \cite{Huang:2022mdb}, where it is claimed that a wide class of functions ${\cal B}(z)$ leads to a unitary construction. It would be interesting to see whether those new amplitudes obey the constraints obtained in the present paper using a general model-independent approach. \section*{Acknowledgements} We are grateful to Claudia de Rham and Andrew J. Tolley for discussions. M. H-V.
is supported by the Spanish State Research Agency MCIN/AEI/10.13039/501100011033 and by the EU NextGenerationEU/PRTR funds, under grant IJC2020-045126-I. IFAE is partially funded by the CERCA program of the Generalitat de Catalunya. A. S. K. is supported in part by FCT Portugal investigator project IF/01607/2015. A. T. is supported by the European Union Horizon 2020 Research Council Grant No. 724659 MassiveCosmo ERC-2016-COG and by a Simons Foundation Award ID 555326 under the Simons Foundation Origins of the Universe initiative, Cosmology Beyond Einstein's Theory.
\section*{Introduction}\label{section:introduction} In the era of e-commerce, consumers typically buy products or services based on reviews. Reviews are therefore increasingly valuable for sellers and service providers, due to the benefits of positive reviews or the damage from negative ones. In light of this, fake reviews are flourishing and pose a real threat to the proper conduct of e-commerce platforms. Groups of reviewers (we will call them opinion spammers) post fake reviews to promote their products or demote their competitors' products. Fake reviews are often written by experienced professionals who are paid to write high-quality, believable reviews. Detecting opinion fraud is a non-trivial and challenging problem that has been extensively studied in the literature using various approaches. Some approaches are based solely on the review content \cite{jindal2008opinion,li-etal-2013-topicspam,ott-etal-2011-finding}, others on reviewer behavior \cite{lim2010detecting,mukherjee2013yelp}, and others on the tripartite relationships between reviewers, reviews, and products \cite{10.1145/2187836.2187863,rayana2015collective,rayana2016collective,wang2011review,10.1007/s10115-017-1068-7}. While each paper presented a method that is useful to some extent for detecting certain kinds of spamming activities, there is no one-size-fits-all solution. This is because spammers keep changing their strategies, many times in a manner adaptive to the spam detection policies. Therefore, there is a need to study and incorporate as many approaches as possible. Another challenge is the fact that most datasets are imbalanced, as are the three datasets that we used for evaluation, in which only about 20\% of the users are spammers (Table \ref{tab:modVSgnn}). Despite the vast commercial impact of opinion spam detection, most machine-learning-based solutions for this problem do not achieve very high performance, due to insufficient labeled data to properly train an ML model. In addition, standard ML models often treat each sample separately, disregarding the underlying graph structure of the spammer group. Indeed, this graph is often latent and should be derived from the data in an ad-hoc manner. One can use deep graph learning methodology to automatically embed the users, and thus infer the underlying graph structure, but such an approach requires a large training set, which is often not available or costly to obtain. The setting where only part of the data is labeled is often called ``semi-supervised learning". When very few examples are labeled, this setting is also termed ``few-shot learning". When in addition one is given the possibility to choose which set of users will be labeled (given a budget, one can decide for which users to invest the budget in order to acquire their labels), this is called the active-learning setting. In this paper we study the few-shot active learning setting for the opinion spam detection problem. The few-shot active learning setting is very reasonable for the opinion-spam detection problem because labeled data never comes for free, and operators of e-commerce websites de facto choose which users they want to check manually and label; typically only a small fraction of the users are labeled.
\begin{figure}[h] \includegraphics[width=0.55\textwidth]{cliques.pdf} \caption{Illustration of the user-user graph, inducing an interconnected set of cliques, one for each product.} \label{fig:cliques} \end{figure} \subsection*{Our Contribution} We propose a classification algorithm, {\em Clique Reviewer Spammer Detection Network}, {\textit{CRSDnet}}, for detecting fake reviews in the few-shot active learning setting. {\textit{CRSDnet}}~harnesses the power of both machine learning algorithms and classical graphical-model algorithms such as Belief Propagation. We show that this combination yields better performance than each approach separately. We evaluate our algorithm on a gold-standard dataset for the spammer detection task, the Yelp Challenge Data \cite{YelpData}. The performance of our algorithm is better than all previous work in almost every metric. We also outperform other methods that use the graph structure, such as graph embedding and graph neural network algorithms~\cite{liu2020alleviating,wang2019fdgars,9435380}. These algorithms use much more labeled data for training (30\% and over, compared to the at most 2.5\% that we use). We show that using both machine learning and the graph structure (via label propagation algorithms) improves over stand-alone machine learning by at least 10\% in the AUC measure. We attribute the success of our algorithm to the following key innovations in our approach: \begin{itemize} \item We derive from the raw data a user-user graph, where two users share an edge if they wrote a review for the same item. Figure \ref{fig:cliques} illustrates this graph, which is a clique graph by definition. In previous work a tripartite user-review-product graph was used. \item The user-user graph may be much denser than the tripartite user-review-product graph. To overcome the computational issues that such density entails, we design a careful edge sparsification procedure to speed up the algorithm without compromising performance much. The sparsification is guided by the rule that each node should end up with just enough edges connecting it to nodes both from its own class (spammer or not) and from the opposite class. \item We run a label-propagation algorithm (concretely, Belief Propagation), but some parameters of that algorithm (the node and edge potentials) are determined using a machine learning model. This is the first time that such a combination of approaches is undertaken and its usefulness demonstrated. \item We propose a new way of choosing the set of users whose labels will be obtained (active learning). Instead of randomly choosing a set of users up to the allowed budget, we choose random users from the largest clique of the user-user graph. The intuition behind this rule comes from the work of Wang et al. \cite{wang2020collueagle}, where it was shown that collusive spamming (or co-spamming) is a useful lens through which to identify spammers. \end{itemize} \section*{Related Work}\label{section:rw} Opinion spam detection has different nuances, such as fake review detection \cite{jindal2008opinion,ott-etal-2011-finding,10.1145/2187980.2188164,10.1145/1281192.1281280}, fake reviewer detection \cite{lim2010detecting,wang2011review,10.1145/2505515.2505700} and spammer group detection \cite{10.1145/2187836.2187863, 10.1007/978-3-319-23528-8_17, wang2020collueagle,10.1007/s10115-017-1068-7,10.1007/s10489-018-1142-1}. Two survey papers, \cite{Crawford2015SurveyOR,VivianiSurvey}, provide a broad perspective on the field.
Our method belongs to the family of graph-based models, which take into account the relationships among reviewers, comments, and products. The key algorithm in this approach is Belief Propagation (BP)~\cite{pearl2014probabilistic}, which is applied to a carefully designed graph and the Markov Random Field (MRF) associated with it. The first to use this approach were Akoglu et al. \cite{akoglu2013opinion}, who suggested FraudEagle, a BP-based algorithm that runs on the bipartite reviewer-product graph, where the edge potentials are based on the sentiment in the review. In later work, Rayana et al. \cite{rayana2015collective} introduced SpEagle, where node and edge potentials are derived from a richer set of metadata features, improving significantly over the performance of FraudEagle. Wang et al. \cite{wang2011review} consider the tripartite user-review-product network and define scores for the trustiness of users, the honesty of reviews, and the reliability of products. They use an ad-hoc iterative procedure to compute the scores, rather than BP. In \cite{7865975} an algorithm called NetSpam was introduced, which utilizes spam features for modeling review datasets as heterogeneous information networks. A graph-based approach was also suggested in \cite{fei2013exploiting}, but this time the graph contains edges for reviews that were written within a certain time difference from each other (a ``burst"). The authors use a different dataset, reviews from Amazon.com, to evaluate their method. \begin{figure*}[h] \centering \includegraphics[width=0.9\linewidth]{flow_chart.png} \caption{The flow chart of our pipeline: from raw data, through sampling users for labeling, using them to train an ML algorithm for predicting edge and node potentials, sparsification of the graph using trusted edges, running LBP, and completing the classification task.} \label{fig:flowChart} \end{figure*} The authors of \cite{wang2020collueagle} present ColluEagle, a graph-based algorithm to detect both review spam campaigns and {\em collusive} review spammers. To measure the collusiveness of pairs of reviewers, they identify reviewers that review the same product in a similar way. A different approach to the problem of spammer detection is via Graph Neural Networks (GNNs) and Graph Convolutional Networks (GCNs). These are deep learning architectures for graph-structured data. The core idea is to learn node representations through local neighborhoods~\cite{kipf2016semi}. In \cite{liu2020alleviating}, the authors design a new GNN framework, GraphConsis, to tackle the fraud detection task. The authors evaluated the method on four datasets, one of which is also used by us. GraphConsis is benchmarked on different training set sizes, from 40\% to 80\% of the data. GraphConsis with an $80\%$ training set achieves an AUC of $0.742$ on the Yelp Chicago dataset, while our method achieves $0.754$ with only $2.5\%$. Note that GraphConsis uses all the metadata that we use as well (the graph structure and the reviews). A GCN-based algorithm was designed in \cite{10.1145/3308560.3316586} and tested on reviews from Tencent Inc. The algorithm outperformed four baseline algorithms (Logistic Regression, Random Forest, DeepWalk~\cite{perozzi2014deepwalk}, LINE~\cite{tang2015line}). Our work departs from previous work in several ways. First, compared to the works where label-propagation algorithms were used, we use machine learning to predict the edge and node potentials rather than hand-crafted threshold functions.
Second, we consider the user-user graph and not a bi/tri-partite user-review-product graph. To overcome the computational challenge incurred by the density of the user-user graph, we apply a new rule for edge sparsification, based on the ML prediction. Finally, in the active learning setting, we introduce a new sampling rule. All these modifications have led to an improvement over FraudEagle \cite{akoglu2013opinion} and SPEagle \cite{rayana2015collective}. \noindent\textbf{Active Learning: } The active learning approach aims to achieve high accuracy using few queries, and therefore the ``most informative" points are natural candidates for label acquisition. Various heuristics were proposed to determine the ``most informative" nodes, e.g., uncertainty sampling \cite{lewis1994heterogeneous,culotta2005reducing,settles2008analysis} and variance reduction \cite{flaherty2006robust,schein2007active}. In our setting, we chose a rule that is native to the problem itself -- sampling from the largest clique, following the take-home message of \cite{wang2020collueagle} about collusive spamming. Many works on active learning choose the training set using adaptive rules, point by point. This, however, is infeasible in our case, as re-running the entire pipeline for every new example is computationally prohibitive for datasets as large as ours. Therefore we choose all users for labeling in bulk. \section*{Methodology} \label{sec:PropMetho} In this section we describe our pipeline, end to end. The flow chart is depicted in Figure \ref{fig:flowChart}. We formulate the spam detection problem as a classification task on the user network. The dataset consists of $n$ reviewers who write reviews on $m$ products from the set $P$. The vertex set of the graph $G=(V,E)$ is the set of users (reviewers); users $i$ and $j$ share an undirected edge if there exists some product $p \in P$ such that both $i$ and $j$ wrote a review for $p$. The resulting graph consists of interconnected cliques, each corresponding to a different product. Figure \ref{fig:cliques} illustrates such a network. Each node $i \in V$ has, in addition, a vector of features $F_i$ associated with it, and a binary class variable $v_i \in \{1,-1\}$, for spammer (1) or benign (-1). The classification task is: given the graph $G$ (with the node features), and possibly a set of labeled users $\{i_{1},\dots, i_{k}\}$ (the ``few shots" training set), predict the value of $v_i$ for the remaining nodes (the test set). Ideally, to solve the classification task, we would find an assignment $s: V \to \{-1,1\}$ that maximizes \begin{align}\label{eq:OrigProb} Pr\left[ v_1 = s_1,\ldots, v_n = s_n | v_{i_1}=s_{i_1}, \ldots, v_{i_{k}}=s_{i_{k}},G\right]. \end{align} This Maximum Likelihood Estimation (MLE) task is in general NP-hard. However, in practice, a useful solution $s$ (perhaps not the maximizer) may be obtained by using a Markov Random Field model for the probability space. \subsection*{Markov Random Field} A Markov Random Field (MRF) is often used to model a set of random variables having a Markov property described by an undirected dependency graph. A pairwise MRF (pMRF) is an MRF satisfying the pairwise Markov property: a random variable depends only on its neighbors and is independent of all other variables. A pMRF model involves two types of potentials: node potentials and edge potentials.
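Before specifying the potentials, the following minimal sketch illustrates how the user-user clique graph defined above can be derived from raw review records. The sketch is in Python with the \texttt{networkx} package; the \texttt{reviews} iterable of (user, product) pairs is an illustrative assumption and not our exact implementation.
\begin{verbatim}
import itertools
import networkx as nx

def build_user_graph(reviews):
    """reviews: iterable of (user_id, product_id) pairs.
    All reviewers of the same product form a clique."""
    by_product = {}
    for user, product in reviews:
        by_product.setdefault(product, set()).add(user)
    G = nx.Graph()
    for reviewers in by_product.values():
        G.add_nodes_from(reviewers)
        # every pair of co-reviewers of a product shares an edge
        G.add_edges_from(itertools.combinations(reviewers, 2))
    return G
\end{verbatim}
Since each product contributes a clique, the number of edges grows quadratically with the number of reviewers of the most-reviewed product, which is what motivates the sparsification procedure described later in this section.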
Our node potentials, $\phi_i(v_i)$, stand for the probability that reviewer $i$ belongs to either class (spam/benign): \begin{align}\label{eq:nodePotential} \phi_i(v_i)=\begin{cases} a_i & , v_i=1 \ \ (i \text{ is a spammer})\\ 1-a_i & ,v_i=-1 \ \ (i \text{ is benign}). \end{cases}\\ \notag \end{align} The edge potential $\psi_{ij}(v_{i},v_{j})$ signifies the affinity of reviewers $i$ and $j$, namely, the probability $p_{ij}$ that both belong to the same class. Formally, \begin{align}\label{eq:edgePotential} \psi_{ij}(v_i,v_j)=\begin{cases} p_{ij} & ,v_i=v_j\\ 1-p_{ij} & ,v_i \neq v_j. \end{cases}\\ \notag \end{align} The parameters $a_i$ and $p_{ij}$ satisfy $a_i,p_{ij} \in [0,1]$ for all $i,j$. To determine the values of these parameters we use machine learning applied to features that are extracted from the metadata of the reviews dataset. The pMRF model is used to approximate the expression for $Pr[s|G]$ in Eq.~\eqref{eq:OrigProb}: \begin{align}\label{eq:objectiveFunction} \Pr\left(s\right)=\frac{1}{Z}\prod_{v_i \in V}\phi_i(v_{i})\prod_{\left( i,j\right) \in E}\psi_{ij}(v_{i},v_{j}), \end{align} where $Z$ is a normalization factor, the sum over the energies of all possible $2^{|V|}$ assignments $s$. Finding the assignment $s$ that maximizes the probability in Eq.~\eqref{eq:objectiveFunction} is still intractable; loopy belief propagation (LBP) is the go-to heuristic for approximating this maximization problem. The LBP algorithm \cite{pearl2014probabilistic} is based on iterative message passing along the edges of the graph. Messages are initialized according to some user-defined rule. At iteration $t$, a message $m^{(t)}_{ij}$ is sent from node $i$ to each neighboring node $j$. The message represents the belief of $i$ about the label of $j$. If $G$ is a tree, then BP is guaranteed to converge; if $G$ contains cycles then convergence is not guaranteed (hence the name loopy), but in practice a cap on the number of iterations is set. We use the standard LBP messages, omitted for brevity; they can be found in \cite{pearl2014probabilistic}. Each iteration of LBP takes $O(|V|+|E|)$ time; hence the number of edges, which may be quadratic in $|V|$, plays a key role in the computational complexity. The more iterations one can perform within the same time budget, the better the performance. In the next section, we describe how to address the computational aspect using graph sparsification. \subsection*{Running LBP on a Sparse Graph}\label{sec:method:sparsification1} Recall that our graph is defined over users, and not as a tripartite user-product-review graph. This may result in a rather dense graph, which poses a computational impediment even for LBP when the number of nodes is large. For example, the graph created from the Yelp Chicago dataset is very dense (average degree 1193). Therefore our first step is to sparsify the graph by choosing a linear number of ``useful" edges (linear in the number of nodes). To gain intuition into a useful way of sparsification, we conducted the following experiment using the Chicago Yelp data \cite{rayana2015collective}. The initial graph contains $38{,}063$ nodes and $2.4 \cdot 10^7$ edges. The sparsification procedure is parameterized by two numbers $k_{1}$ and $k_{2}$. For every node $i$ we choose $k_{1}$ neighbors from $i$'s class (spammer or benign) and $k_{2}$ neighbors from the opposite class, and color these $k_1+k_2$ edges red. We then remove all edges that were not colored red.
The resulting graph has an average degree on the order of $k_1+k_2$ (multiple edges are merged). We set all node potentials to $\phi(v_{i})\gets \{0.5,0.5\}$; in other words, we do not provide any prior knowledge about the class of the node $v_i$. We set all edge potentials $\psi_{ij}$ as follows: $p_{ij}=\epsilon$ if $v_i \ne v_j$ and $p_{ij}=1-\epsilon$ if $v_i = v_j$ (that is, according to the true agreement relationship between users $i$ and $j$). We fix $\epsilon=0.001$. We run LBP on the resulting graph and label each node $v_i$ as a spammer if the probability that LBP assigns to it is larger than a pre-defined threshold $\tau$. \begin{figure}[t!] \centering \includegraphics[width=0.9\linewidth]{edgeMotivation.pdf} \caption{The AUC of the LBP algorithm for the Yelp Chicago dataset. LBP is run on a subgraph in which each node has $k_1$ neighbors from the similar class and $k_2$ neighbors from the other (dissimilar) class. Edge potentials are set according to the ground truth.} \label{fig:edgeMotivation} \end{figure} Figure \ref{fig:edgeMotivation} depicts the AUC of the LBP classification when varying $k_1$ and $k_2$ between 0 and 15. We see that the AUC approaches 1 when each node has ``enough" neighbors from each class. \subsection*{Sparsification and Edge Potentials}\label{section:method:edge_potentials} In practice, we have the true labels of a small set of nodes, and we use them to learn and predict the edge potentials between the remaining unlabeled nodes. We use this prediction for a sparsification procedure similar to the one just described. Our sparsification proceeds as follows: (1) The first step is to train a machine learning algorithm on the set of users whose labels we know, with the objective of predicting $p_{ij}$, the probability that a pair of reviewers $i,j$ belongs to the same class (either both spammers or both benign). The exact choice of ML algorithm, along with its parameters, is explained in the Experimental Setting section. (2) Compute $p_{ij}$ using the ML model for all the remaining edges of the graph (edges that do not connect two users from the training set). (3) Choose all edges for which $p_{ij} \in [0.95,1]$ or $p_{ij} \in [0,0.05]$ and set the potential in Eq.~\eqref{eq:edgePotential} accordingly. We call these edges the ``trusted" edges. LBP will be run on the graph containing the trusted edges. The input to the machine learning algorithm in steps (1) and (2) is a set of features that is extracted from the metadata of both the users and the reviews. Typical metadata includes the text of the review, the rating that the reviewer gave the product, the total number of reviews that the reviewer wrote, etc. The Yelp set of features is described in Tables \ref{tab:User_Features} and \ref{tab:Review_Features}. Additional sparsification of the graph can be obtained by removing edges that connect users whose reviews of the same product were written far apart in time. Namely, two users $i$ and $j$ share an edge only if they wrote a review for the same product, and these reviews were written within a period of $T$ days (following previous work, we fixed $T=7$). Such a graph with time-dependent edges is called a {\em bursty} graph and was introduced in \cite{fei2013exploiting}. We tested our pipeline with and without the bursty variant. The bursty sparsification, if applied, is done before the trusted edges are selected.
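For concreteness, a minimal sketch of steps (1)--(3) above is given below, using scikit-learn's random forest as a Python stand-in for the ML model (our actual implementation is described in the Experimental Setting section); \texttt{pair\_features} is an assumed helper that builds a feature vector for a pair of users from the metadata.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def select_trusted_edges(G, labels, pair_features, low=0.05, high=0.95):
    """labels: dict mapping the few labeled users to +1/-1.
    Returns the trusted edges as (i, j, p_ij) triples."""
    # (1) train on edges whose two endpoints are both labeled
    train = [(i, j) for i, j in G.edges() if i in labels and j in labels]
    X = np.array([pair_features(i, j) for i, j in train])
    y = np.array([int(labels[i] == labels[j]) for i, j in train])
    clf = RandomForestClassifier(n_estimators=950, max_depth=16,
                                 max_features=0.65).fit(X, y)
    # (2) predict p_ij for the remaining edges; (3) keep confident ones
    trusted = []
    for i, j in G.edges():
        if i in labels and j in labels:
            continue
        p_ij = clf.predict_proba([pair_features(i, j)])[0, 1]
        if p_ij <= low or p_ij >= high:
            trusted.append((i, j, p_ij))
    return trusted
\end{verbatim}
For clarity the sketch predicts edge by edge; at the scale of our graphs the predictions should of course be batched.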
\subsection*{Node Potential}\label{sec:method:node_potential} In the experiment just described, all node potentials were set to $\{0.5,0.5\}$, and only edge potentials played a role. However, there may be a gain in setting the node potentials according to the metadata features rather than ignoring them. Similar to the way we set the edge potentials, we use machine learning to predict $a_i$ in Eq.~\eqref{eq:nodePotential}. The machine learning algorithm is trained on the set of users chosen for labeling (the active learning setting) with the objective of predicting spam or benign. The value of $a_i$ is predicted for all the remaining users using the trained model (which outputs the ``probability" of being a spammer or benign, alongside the discrete label). \subsection*{Active learning: Sampling Users}\label{sec:method:sampling} The final component of our methodology is the way we choose the set of users for training. In this work we explore two sampling rules with the same budget of $k$ users: \begin{enumerate} \item \textit{Random Sampling:} Pick $k$ reviewers uniformly at random from $V$. \label{stra:RS} \item \textit{Sampling from the largest clique:} \label{stra:LargestClique} In this strategy, we sample $k$ users that belong to the largest clique. The largest clique corresponds to the product on which the largest number of reviews were written. If the budget is not consumed, we sample the remainder from the second-largest clique, and so on. \end{enumerate} \begin{table}[t!] \centering \begin{tabular}{ |p{1.0cm}|p{1.7cm}|p{2.0cm}|p{1.59 cm}| } \hline Dataset & \#Reviews \newline(fake \%) & \#Users \newline(spammer \%) & \#Products\\ \hline \hline Y'Chi & 67,395 \newline(13.23\%) & 38,063 \newline(20.33\%) & 201 \\ \hline Y'NYC & 359,052 (10.27\%) & 160,225 (17.79\%) & 923 \\ \hline Y'Zip & 608,598 (13.22\%) & 260,277 (23.91\%) & 5,044 \\ \hline \end{tabular} \caption{Summary statistics of the three Yelp datasets \cite{mukherjee2013yelp,rayana2015collective}.} \label{tab:modVSgnn} \end{table} \section*{Data Description}\label{section:Data} To evaluate our methodology we use three datasets that contain reviews from Yelp.com, summary statistics of which are presented in Table \ref{tab:modVSgnn}. The datasets contain reviews of restaurants and hotels and were collected by \cite{mukherjee2013yelp,rayana2015collective}. YelpChi covers the Chicago area, YelpNYC covers NYC, and YelpZip is the largest; it includes ratings and reviews for restaurants in a continuous region including NJ, VT, CT, PA, and NY. They differ in size (YelpChi is the smallest and YelpZip is the largest), as well as in the percentage of spammers out of the total number of users. Yelp has a filtering algorithm that identifies fake/suspicious reviews, and the three datasets contain these labels. We partition the users into spammers (authors of at least one filtered review) and benign users (authors with no filtered reviews). Alongside the text of the reviews, the datasets contain additional metadata such as ratings and timestamps. From the text and the additional data, various features are extracted, which were used in previous work that studied these datasets \cite{rayana2015collective,mukherjee2013yelp,lim2010detecting}. Tables \ref{tab:User_Features} and \ref{tab:Review_Features} include brief descriptions of these features. Most of them are self-explanatory, and hence we omit detailed explanations for brevity. Note that we used exactly the same set of features as \cite{rayana2015collective} to allow a fair comparison.
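To make the feature extraction concrete, here is a minimal sketch computing three of the user features of Table \ref{tab:User_Features} (MNR, PR and avgRD); the \texttt{reviews} argument, a list of (rating, date, product average rating) records for one user, is an assumed representation of the metadata.
\begin{verbatim}
from collections import Counter

def user_features(reviews):
    """reviews: list of (rating, date, product_avg) for one user."""
    ratings = [r for r, _, _ in reviews]
    # MNR: maximum number of reviews written in a single day
    mnr = max(Counter(day for _, day, _ in reviews).values())
    # PR: ratio of positive (4-5 star) reviews
    pr = sum(r >= 4 for r in ratings) / len(ratings)
    # avgRD: average absolute deviation from the product's avg rating
    avg_rd = sum(abs(r - avg) for r, _, avg in reviews) / len(reviews)
    return mnr, pr, avg_rd
\end{verbatim}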
The features are used to compute both the node and the edge potentials, as explained in the Methodology section. \begin{table}[h] \centering \begin{tabular}{ |p{0.7cm}|p{6.5cm}|} \hline \multicolumn{2}{|c|}{User Features} \\ \hline MNR & Max. number of reviews written in a day \cite{mukherjee2013spotting,mukherjee2013yelp} \\\hline PR & Ratio of positive reviews (4-5 star) \cite{mukherjee2013yelp} \\\hline NR & Ratio of negative reviews (1-2 star) \cite{mukherjee2013yelp} \\\hline avgRD & Avg. rating deviation of the user's reviews \cite{mukherjee2013yelp,lim2010detecting,fei2013exploiting} \\ \hline WRD & Weighted rating deviation \cite{lim2010detecting}\\\hline BST & Burstiness \cite{mukherjee2013yelp,fei2013exploiting} (spammers are often short-term members of the site) \\\hline RL & Avg. review length in number of words \cite{mukherjee2013yelp} \\\hline ACS & Avg. content similarity: pairwise cosine similarity among the user's (product's) reviews, where a review is represented as a bag of bigrams \cite{lim2010detecting,fei2013exploiting} \\\hline MCS & Max. content similarity: maximum cosine similarity among all review pairs \cite{mukherjee2013spotting} \\ \hline \end{tabular} \caption{User Features} \label{tab:User_Features} \end{table} \begin{table}[h] \centering \begin{tabular}{ |p{0.7cm}|p{6.5cm}|} \hline \multicolumn{2}{|c|}{Review Features} \\ \hline Rank & Rank order among all the reviews of the product \cite{jindal2008opinion} \\\hline RD & Absolute rating deviation from the product's average rating \cite{li2011learning} \\\hline EXT & Extremity of rating \cite{mukherjee2013spotting} \\\hline DEV & Thresholded rating deviation of the review \cite{mukherjee2013spotting} \\\hline ETF & Early time frame \cite{mukherjee2013spotting} (spammers often review early to increase impact) \\\hline ISR & If the review is the user's sole review, then $x_{ISR} =1$, otherwise 0 \cite{rayana2015collective} \\\hline PCW & Percentage of ALL-capitals words \cite{li2011learning,jindal2008opinion} \\\hline PC & Percentage of capital letters \cite{li2011learning} \\\hline L & Review length in words \cite{li2011learning} \\\hline PP1 & Ratio of 1st-person pronouns (`I', `my', etc.) \cite{li2011learning}\\\hline RES & Ratio of exclamation sentences containing `!' \cite{li2011learning}\\ \hline \end{tabular} \caption{Review Features} \label{tab:Review_Features} \end{table} \section*{Evaluation}\label{section:Simulation} In this section, we describe the results of the experiments we ran on the three Yelp datasets. We report our results alongside results obtained by previous work on the same datasets. \subsection*{Evaluation Metrics} We evaluated the performance of {\textit{CRSDnet}}~using four popular metrics, which were used by previous work as well: the average precision (AP), which is the area under the precision-recall curve; the ROC AUC; precision@k; and the discounted cumulative gain (DCG@k). To compute precision@k we rank the reviewers according to the probability that LBP assigned to each one of being a spammer, and compute the fraction of real spammers among the top $k$ places. We compute precision@k for $k = 100, 200,\dots, 1000$. Finally, DCG@k provides a weighted score that favors correct spammer predictions at the top indices. Formally, $DCG@k=\sum_{i=1}^{k}\frac{2^{l_i}-1}{\log_2(i+1)}$, where $l_i=1$ if the user at the $i^{th}$ place is a correctly identified spammer, and 0 otherwise.
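The ranking metrics admit a direct transcription; the following sketch (Python) computes precision@k and DCG@k as defined above, together with the normalized variant that we actually report, explained next.
\begin{verbatim}
import math

def precision_at_k(ranked_labels, k):
    # ranked_labels: 1 for spammer, 0 otherwise, sorted by the
    # probability that LBP assigns, in decreasing order
    return sum(ranked_labels[:k]) / k

def dcg_at_k(ranked_labels, k):
    # i is 0-based here, hence log2(i + 2) instead of log2(i + 1)
    return sum((2 ** l - 1) / math.log2(i + 2)
               for i, l in enumerate(ranked_labels[:k]))

def ndcg_at_k(ranked_labels, k):
    ideal = dcg_at_k([1] * k, k)  # all top-k are true spammers
    return dcg_at_k(ranked_labels, k) / ideal
\end{verbatim}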
For compatibility with other works, we actually report the normalized $DCG$ (NDCG), obtained by dividing $DCG@k$ by the ideal $DCG$, i.e., the $DCG@k$ in which all $l_i=1$ (all top-$k$ users are indeed spammers in the ideal ranking, for the $k$ that we choose). \begin{table}[ht!] \centering \begin{tabular}{ |c|c|c|c|c|} \hline Setting & Nodes & Edges & Sampling & Bursty \\ \hline\hline 1 & ML & None & Random & No \\ \hline 2 & ML & Threshold & Random & No \\ \hline 3 & Threshold & ML & Random & No \\ \hline 4 & ML & ML & Random & No \\ \hline 5 & ML & ML & Clique & No \\ \hline 6 & ML & ML & Random & Yes \\ \hline 7 & ML & ML & Clique & Yes \\ \hline \end{tabular} \caption{Various configurations of running {\textit{CRSDnet}}. The first two columns indicate how the potentials were computed: using ML or the threshold method of \cite{rayana2015collective}. The sampling rule corresponds to the two options mentioned at the end of the Methodology section. Bursty refers to the time-dependent sparsification. Setting \#1 consists of using only ML to predict the node classes, without LBP.} \label{tab:Methods} \end{table} \begin{table*}[ht!] \centering \begin{tabular}{|p{1.3cm} |p{0.75cm}|p{0.75cm}|p{0.75cm}|p{0.75cm}||p{0.75cm}|p{0.75cm}|p{0.75cm}|p{0.75cm}||p{0.75cm}|p{0.75cm}|p{0.75cm}|p{0.75cm}| } \hline Method & \multicolumn{4}{c||}{Y'Chi } & \multicolumn{4}{c||}{Y'NYC } & \multicolumn{4}{c|}{Y'Zip }\\ \hline & $0.25\%$ & $0.5\%$ & $1\%$ & $2.5\%$ & $0.25\%$ & $0.5\%$ & $1\%$ & $2.5\%$ & $0.25\%$ & $0.5\%$ & $1\%$ & $2.5\%$ \\ \hline SpEagle & & & 0.691 & & & & 0.657 & & & & 0.671 & \\ \hline SpEagle$^+$ & & & 0.708 & & & & 0.683 & & & & 0.691 & \\ \hline NetSPAM & & & & & & & 0.650 & 0.650 & & & & \\ \hline\hline Set. 1 & 0.519 & 0.602 & 0.620 & 0.632 & 0.561 & 0.578 & 0.558 & 0.585 & 0.566 & 0.593 & 0.672 & 0.693 \\ \hline Set. 2 & 0.669 & 0.701 & 0.711 & 0.729 & 0.664 & 0.663 & 0.687 & 0.692 & 0.685 & 0.700 & 0.784 & 0.794 \\ \hline Set. 3 & 0.688 & 0.699 & 0.702 & 0.702 & 0.659 & 0.677 & 0.691 & 0.692 & 0.562 & 0.708 & 0.779 & 0.831 \\ \hline Set. 4 & 0.689 & 0.712 & 0.723 & 0.731 & 0.673 & 0.681 & 0.685 & 0.665 & 0.684 & 0.707 & 0.706 & 0.828 \\ \hline Set. 5 & {\bf 0.718} & {\bf 0.724} & {\bf 0.735} & {\bf 0.754} & {\bf 0.669} & {\bf 0.688} & {\bf 0.720} & {\bf 0.766} & {\bf 0.703} & {\bf 0.729} & {\bf 0.790} & {\bf 0.848} \\ \hline Set. 6 & 0.673 & 0.730 & 0.719 & 0.741 & 0.668 & 0.671 & 0.682 & 0.696 & 0.628 & 0.707 & 0.783 & 0.835 \\ \hline Set. 7 & 0.672 & 0.694 & 0.662 & 0.668 & 0.666 & 0.666 & 0.645 & 0.645 & 0.631 & 0.726 & 0.763 & 0.824 \\ \hline \end{tabular} \caption{AUC performance of compared methods on all three Yelp datasets. The best results are in bold. Empty cells stand for results that were not reported in the paper or were not computed by us.} \label{tab:AUC} \end{table*} \begin{table*}[ht!] \centering \begin{tabular}{|p{1.3cm} |p{0.75cm}|p{0.75cm}|p{0.75cm}|p{0.75cm}||p{0.75cm}|p{0.75cm}|p{0.75cm}|p{0.75cm}||p{0.75cm}|p{0.75cm}|p{0.75cm}|p{0.75cm}| } \hline Method & \multicolumn{4}{c||}{Y'Chi } & \multicolumn{4}{c||}{Y'NYC } & \multicolumn{4}{c|}{Y'Zip }\\ \hline & $0.25\%$ & $0.5\%$ & $1\%$ & $2.5\%$ & $0.25\%$ & $0.5\%$ & $1\%$ & $2.5\%$ & $0.25\%$ & $0.5\%$ & $1\%$ & $2.5\%$ \\ \hline \hline SpEagle$^+$ & & & 0.396 & & & & 0.348 & & & & 0.424 & \\ \hline NetSPAM & & & & & & & 0.300 & 0.28 & & & & \\ \hline\hline Set. 1 & 0.874 & 0.802 & 0.896 & 0.852 & 0.902 & 0.913 & 0.917 & 0.916 & 0.885 & 0.896 & 0.901 & 0.886 \\ \hline
Set. 2 & 0.901 & 0.879 & 0.882 & 0.906 & 0.915 & 0.912 & 0.921 & 0.912 & 0.782 & 0.805 & 0.859 & 0.883 \\ \hline Set. 3 & 0.901 & 0.793 & 0.897 & 0.883 & 0.912 & 0.919 & 0.924 & 0.924 & 0.794 & 0.845 & 0.923 & 0.941 \\ \hline Set. 4 & 0.890 & 0.825 & 0.896 & 0.901 & 0.917 & {\bf0.922} & {\bf0.968} & 0.927 & 0.859 & 0.870 & 0.869 & 0.875 \\ \hline Set. 5 & {\bf0.909} & {\bf0.906} & {\bf0.900} & {\bf0.914} & 0.875 & 0.727 & 0.925 & {\bf0.926} & {\bf0.903} & {\bf0.906} & {\bf0.935} & 0.942 \\ \hline Set. 6 & 0.907 & 0.707 & 0.886 & 0.913 & {\bf0.914} & 0.920 & 0.926 & 0.921 & 0.864 & 0.891 & 0.927 & {\bf0.946} \\ \hline Set. 7 & 0.885 & 0.873 & 0.886 & 0.883 & 0.914 & 0.920 & 0.948 & 0.914 & 0.899 & 0.833 & 0.847 & 0.870 \\ \hline \end{tabular} \caption{AP performance of compared methods on all three datasets. The best results are in bold. Empty cells stand for results that were not reported in the paper or were not computed by us.} \label{tab:AP} \end{table*} \begin{center} \begin{table*}[] \centering \begin{tabular}{|l|l|l|l|l||l|l|l|l||l|l|l|l|} \hline & \multicolumn{4}{l||}{Y'Zip} & \multicolumn{4}{l||}{Y'NYC} & \multicolumn{4}{l|}{Y'Chi} \\ \hline $k$ & \rotatebox{90}{FraudEagle} & \rotatebox{90}{Wang} & \rotatebox{90}{SpEagle$^+$} & \rotatebox{90}{{\textit{CRSDnet}}} & \rotatebox{90}{FraudEagle} & \rotatebox{90}{Wang} & \rotatebox{90}{SpEagle$^+$} & \rotatebox{90}{{\textit{CRSDnet}}} & \rotatebox{90}{FraudEagle} & \rotatebox{90}{Wang} & \rotatebox{90}{SpEagle$^+$} & \rotatebox{90}{{\textit{CRSDnet}}} \\ \hline 100 & 0.30 & 0.21 & 0.93 & {\bf1} & 0.21 & 0.15 & 0.96 & {\bf0.98} & 0.55 & 0.18 & 0.90 & {\bf0.99} \\\hline 200 & 0.30 & 0.19 & 0.81 & {\bf1 } & 0.19 & 0.19 & {\bf0.96} & 0.91 & 0.52 & 0.18 & 0.91 & {\bf0.99} \\\hline 300 & 0.38 & 0.21 & 0.69 & {\bf0.93} & 0.17 & 0.18 & {\bf0.95} & 0.86 & 0.48 & 0.20 & 0.91 & {\bf0.99} \\\hline 400 & 0.33 & 0.26 & 0.61 & {\bf0.80} & 0.21 & 0.17 & {\bf0.95} & 0.86 & 0.49 & 0.20 & 0.92 & {\bf0.99} \\\hline 500 & 0.29 & 0.27 & 0.57 & {\bf0.75} & 0.22 & 0.17 & {\bf0.95} & 0.88 & 0.48 & 0.20 & 0.92 & {\bf0.93} \\\hline 600 & 0.28 & 0.27 & 0.56 & {\bf0.74} & 0.27 & 0.17 & {\bf0.96} & 0.89 & 0.47 & 0.21 & 0.92 & {\bf0.90} \\\hline 700 & 0.27 & 0.29 & 0.54 & {\bf0.76} & 0.37 & 0.16 & {\bf0.95} & 0.90 & 0.47 & 0.21 & {\bf0.92} & 0.91 \\\hline 800 & 0.26 & 0.30 & 0.51 & {\bf0.73} & 0.45 & 0.16 & 0.90 & {\bf0.91} & 0.49 & 0.22 & 0.91 & {\bf0.92} \\\hline 900 & 0.26 & 0.30 & 0.50 & {\bf0.69} & 0.5 & 0.15 & 0.85 & {\bf0.92} & 0.48 & 0.22 & 0.91 & {\bf0.92} \\\hline 1000 & 0.28 & 0.32 & 0.49 & {\bf0.67} & 0.45 & 0.16 & 0.82 & {\bf0.92} & 0.47 & 0.22 & 0.90 & {\bf0.93}\\ \hline \end{tabular} \caption{Precision@k of compared methods when using 1\% of the users for training. The best results are in bold. {\textit{CRSDnet}}~runs in setting 5.} \label{tab:KP} \end{table*} \end{center} \begin{table}[ht!] \centering \begin{tabular}{ |p{1.5cm} |p{3.5cm}|p{1.5cm}|} \hline Dataset & Method & AUC\\ \hline \hline Y'Zip & DFraud (80\%)~\cite{9435380} & 0.733 \\ \hline Y'Zip & RF (30\%) & 0.740 \\ \hline Y'Zip & {\textit{CRSDnet}}~(2.5\%) & {\bf 0.847} \\ \hline \hline Y'Chi & GraphConsis~\cite{liu2020alleviating} (80\%) & 0.742 \\ \hline Y'Chi & RF (30\%) & 0.735 \\ \hline Y'Chi & {\textit{CRSDnet}}~(2.5\%) & {\bf 0.754} \\ \hline \end{tabular} \caption{Comparison with NN-based algorithms and an RF baseline. The percentage of data used for training appears in parentheses.} \label{tab:statDS} \end{table}
\subsection*{The Experimental Setting}\label{sec:experiment} There are four choices that affect the performance of {\textit{CRSDnet}}: (a) the way the node potentials are computed, using ML or using a threshold function as in \cite{rayana2015collective}; (b) the way the edge potentials are computed, again using ML or using a threshold function~\cite{rayana2015collective}; (c) the active learning sampling rule, which specifies how to choose the users whose labels are revealed; (d) whether or not the time-dependent bursty sparsification is applied. In Table \ref{tab:Methods} we summarize the seven different configurations with which we tested {\textit{CRSDnet}}. Each configuration was tested with a labeled set of users of size $0.25\%, 0.5\%, 1\%, 2.5\%$ of the entire set of users. In total we have $7 \times 4= 28$ experiments, each of which was run 10 times with fresh randomness. The machine learning algorithm that we used to compute the edge and node potentials was a random forest, written in the Wolfram Language. The code is available on Github\footnote{\url{https://github.com/users/KirilDan/projects/1}}. We chose this implementation as Wolfram has good support for graph structures, on which LBP can easily be run. All the parameters of the random forest are the default ones besides the following: 950 trees, a maximum tree depth of 16, and a maximum of a $0.65$-fraction of the features considered per split. The features that we used are the ones in Tables \ref{tab:User_Features} and \ref{tab:Review_Features}. To measure the extent to which each new component in our pipeline is responsible for the improvement over previous results, we also ran our pipeline with the edge and node potentials computed as in \cite{rayana2015collective}. For completeness, we describe this method briefly. The method is completely unsupervised. A set of features $F_1,\ldots,F_r$ is computed for every user and review. Let $f_{u,i}$ be the value of feature $i$ for user $u$. For every feature $F_i$, the probability $p_{u,i}=Pr[F_i < f_{u,i}]$ is estimated from the data. Finally, a ``spam score" $S_u$ is computed for every user $u$ via \begin{align} S_u =1-\sqrt{\frac{\sum_{i=1}^{r} p_{u,i}}{r}}.\label{eq:girls} \end{align} The potential of reviewer $v_{i}$ is set to $\phi(v_{i}) \leftarrow \{1-S_u,S_u\}$. A similar procedure is carried out to determine the edge potentials.
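As an illustration, the unsupervised spam score of Eq.~\eqref{eq:girls} can be computed as follows; \texttt{p} is an assumed matrix whose entry \texttt{p[u, i]} holds the empirical estimate of $Pr[F_i < f_{u,i}]$.
\begin{verbatim}
import numpy as np

def spam_scores(p):
    """p: (num_users, r) matrix with p[u, i] = Pr[F_i < f_{u, i}].
    Returns the spam score S_u of every user."""
    return 1.0 - np.sqrt(p.mean(axis=1))
\end{verbatim}
The node potential of user $u$ is then set to $\{1-S_u, S_u\}$, as stated above.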
\subsection*{Results} Tables \ref{tab:AUC}, \ref{tab:AP} and \ref{tab:KP} present the results of running {\textit{CRSDnet}}~according to the aforementioned experimental setting, reporting the different evaluation metrics. We compared the performance of {\textit{CRSDnet}}\ to other algorithms that were evaluated on the same datasets: SpEagle$^+$ \cite{rayana2015collective}, FraudEagle \cite{akoglu2013opinion}, NetSpam \cite{7865975}, Wang et al. \cite{wang2011review} and ColluEagle \cite{wang2020collueagle}. The results that we report are taken from the relevant papers and are not reproductions that we carried out. Different papers report different metrics and different budgets; hence some cells in the tables are left empty, and some algorithms are missing entirely from a table/plot if the corresponding paper did not report that metric at all. Table \ref{tab:AUC} provides a comparison using the AUC measure. As evident from the table, our method is superior to all previous work. The most interesting comparison is with SpEagle+ \cite{rayana2015collective}. That work reports results only for a budget of 1\% of the users. For all three datasets, we already obtain a better result than SpEagle+ when using only 0.25\% of the users (6\% better for the Chicago dataset, 10\% better for the NYC dataset and 17\% for ZIP). Table \ref{tab:AP} reports the AP measure. Here the difference is even more dramatic: our results are between 2 and 3 times better than SpEagle+ on all three datasets. Table \ref{tab:KP} reports the precision@k measure when using 1\% of the users and our best configuration, \#5. For the Chicago and ZIP datasets, our algorithm has the upper hand; for NYC, SpEagle+ outperforms {\textit{CRSDnet}}~for $k=200$ up to $k=700$, but the difference is very small. When our algorithm outperforms the other competitors, it is in most cases by a very large margin. Tables \ref{tab:AUC} and \ref{tab:AP} suggest that the best way to run {\textit{CRSDnet}}~is according to configuration \#5 in Table \ref{tab:Methods}: both edge and node potentials are set using ML, and the budget is spent on users from the largest clique. Comparing settings 2 and 3 vs.\ setting 4, we see that using ML for both nodes and edges (setting 4) is preferable to using ML for only one of them (settings 2, 3). Settings 6 and 7 show that adding the time-dependent aspect, bursty edges, only harms the performance. Configuration \#1, using only ML applied to the users' features, gave the worst performance in the AUC measure, by a large gap. Figures \ref{fig:NDCGYelpChi}, \ref{fig:NDCGYelpNYC} and \ref{fig:NDCGYelpZip} plot the NDCG@k measure for $k$ between 0 and 1000. Here, all five competing algorithms are represented. Again, {\textit{CRSDnet}}~outperforms all algorithms for most values of $k$, on all three datasets. Table \ref{tab:statDS} shows a comparison with two GNN-based approaches, \cite{9435380} and \cite{liu2020alleviating}. These approaches use much more data for training (80\%). As an additional baseline, we trained our Random Forest classifier, this time on 30\% of the data, and predicted the reviewers' class. As evident from the table, more data is not a guarantee of better performance. One possible reason for the relatively poor performance of NN-based methods is that much more data (in absolute terms) is needed to successfully train the NN. \begin{figure}[htbp!] \centering \includegraphics[width=0.9\linewidth]{figRes1.pdf} \caption{NDCG@k of compared methods on YelpChi } \label{fig:NDCGYelpChi} \end{figure} \begin{figure}[htbp!] \centering \includegraphics[width=0.9\linewidth]{figRes2.pdf} \caption{NDCG@k of compared methods on YelpNYC } \label{fig:NDCGYelpNYC} \end{figure} \begin{figure}[htbp!] \centering \includegraphics[width=0.9\linewidth]{figRes3.pdf} \caption{NDCG@k of compared methods on YelpZip } \label{fig:NDCGYelpZip} \end{figure} \section*{Conclusion}\label{section:conclusion} In this work, we proposed a new holistic framework called {\textit{CRSDnet}}~for detecting review spammers. Our method combines machine learning and more classical algorithmic approaches (Belief Propagation) to better exploit the relational data (user--review--product) and the metadata (behavioral and text data) to detect such users. Adding to previous work in this line of research, we introduce two new components: using machine learning to predict the edge and node potentials, and a new sampling rule in the active learning setting -- sampling users from the largest clique.
Our results suggest that the two components improve performance cumulatively, and when combined, give the best result obtained so far for the Yelp datasets. Another point that our work highlights is that while in many settings NN-based methods give the best results, this is highly contingent on having sufficient data for training. The spammer detection problem is exactly one of those problems where obtaining a large amount of labeled data is expensive and non-trivial. Fake reviews are often written by professionals, and it takes an experienced person to identify them. Hence platforms like Amazon Mechanical Turk may not provide an easy solution to the data-shortage problem. In such cases, old-school algorithmic ideas become relevant again (Belief Propagation), and as we demonstrate in this paper, their performance may be boosted by incorporating ML in a suitable manner (computing potentials in our case) alongside domain expertise (sampling from the largest clique, following the insight about collusive spamming \cite{wang2020collueagle}). One limitation of our work is the fact that we only tested on one platform, Yelp. Future work should run our pipeline on other datasets, once they become publicly available (as far as we know, there is only one more publicly available dataset in English, from Amazon). Also, we only considered the problem of user classification. It would be interesting to extend our method to the task of review classification.
\section{Introduction} Recent advances in machine learning techniques have enabled predictive algorithms to perform remarkably well in certain domains. However, in real-world situations, a sufficiently effective model requires massive data for training. In some more sensitive scenarios, such as patient data from different hospitals or driving data from different vehicles, a single client may not have sufficient quantity and quality of data to learn a robust model, and joint training may cause privacy leakage problems.\par Federated learning (FL) is a new paradigm of distributed learning that aims to address the problem of communication efficiency in learning deep networks from decentralized data. Each client uses local data to learn local model parameters or parameter updates and then transmits only these parameters to the server, which aggregates all parameters in the cloud, thereby obtaining a federated model without data exchange. However, practical applications show that heterogeneity causes non-trivial performance degradation in FL, including up to a 9.2\% accuracy drop, 2.32$\times$ lengthened training time, and undermined fairness \cite{Yang2021CharacterizingIO, Horvath2021FjORDFA}. For example, if the data distribution of each client is different, then the performance of the model may vary greatly between, say, client A and client B, and the accuracy may even be lower than that of a locally trained model.\par \cite{Donahue2021ModelsharingGA, Blum2021OneFO} analyzed the willingness of clients to participate in the federated update in this case, and found that in some situations, clients with poor performance are more inclined to withdraw from the alliance. Based on this, and referring to fair methods for machine learning \cite{Cotter2019OptimizationWN, Dwork2012FairnessTA}, the entire network faces a choice: the server hopes to maximize the global accuracy as much as possible, while each client hopes that the final model behaves little differently for it than for the other members. This problem can be described as a game-theoretic problem of total cost allocation: the self-interested goals of all individual actors (fairness) versus the overall goal of reducing total cost (optimality).\par In addition, due to the different data distributions on the clients, some clients with higher data quality may have more important predictive capabilities than others. In other words, the global model may become overly dependent on the trained models of some clients. Therefore, when we start to pay attention to fairness and avoid this influence, intuitively, the convergence speed and prediction accuracy of the overall model may be affected \cite{Cui2021AddressingAD}.\par Due to the privacy guarantees that must be maintained, we cannot directly access the raw data on each client, so it is impossible to analyze the data distribution on the client. However, \cite{Wang2020OptimizingFL} demonstrates that there is an implicit connection between the distribution of training samples on a device and the parameters of the model trained on these samples. To get as close as possible to the optimal solution, in this paper we propose an extensible federated learning framework called Policy Gradient Fair Federated Learning (PG-FFL). PG-FFL can be regarded as an additional plug-in for FL algorithms.
Based on the policy gradient reinforcement learning algorithm, PG-FFL uses the locally trained model parameters of the clients as observations and aims to balance the objectives above by assigning different aggregation weights to the clients participating in each round of aggregation. The main contributions of this paper are as follows:\par \begin{itemize} \item We propose to utilize the Gini coefficient as the measure of fairness, which objectively and intuitively reflects the performance gap of the aggregated model among the clients participating in federated training, and prevents polarization of client performance. \item We propose a fairness adjustment plug-in. For the federated model, we add a fairness indicator and use a plug-in based on a deep reinforcement learning (DRL) algorithm that can be combined with any federated learning algorithm that does not itself adjust the aggregation weights. \item We port the policy gradient fair federated learning (PG-FFL) paradigm to two advanced FL optimization algorithms, namely FedAvg and FedProx. Experimental results show that PG-FFL can significantly improve fairness on multiple datasets. \end{itemize} \section{Related work} In this section, we introduce the main challenges federated learning faces and briefly describe the current state of research. \subsection{Federated Learning} Federated learning is a distributed learning framework under differential privacy, which aims to learn a global model on the server side using the model parameters learned by different local clients based on the clients' private data \cite{McMahan2017CommunicationEfficientLO}.\par In horizontal federated learning, the server trains a global model by iteratively aggregating local models from different clients using a specific aggregation rule. In each iteration, the server randomly selects a certain number of clients and transmits the global model parameters to them; the selected clients train using the downloaded global model and then upload their local model parameters, from which the new global model is aggregated on the server \cite{Wahab2021FederatedML, Li2020FederatedLC}.\par \subsection{Fairness Challenges of Non-IID Data Distribution} Classic federated learning algorithms aggregate the models of different participating clients by computing a weighted average based on the amount of training data \cite{McMahan2017CommunicationEfficientLO}. However, in practical applications, the data and label distributions on different clients cannot fully meet the IID assumption that distributed algorithms make for all local client data and labels. Therefore, the convergence and stability of federated learning are adversely affected \cite{Konecn2016FederatedLS, Karimireddy2019SCAFFOLDSC}.
\cite{Xiao2021ANS, Yang2021HFLAH} propose that part of the reason is the improper server-side aggregation method of traditional federated learning; the contributions of clients in federated learning can be distinguished by the validation accuracies of their trained models.\par Previous work has shown that non-IID data may bring parameter divergence \cite{Zhao2018FederatedLW}, data distribution biases \cite{Hsieh2020TheND}, and unguaranteed convergence \cite{Sahu2020FederatedOI}, which can be mitigated both on the client side \cite{Sahu2020FederatedOI} and on the server side \cite{Huang2021BehaviorMD,Hsu2019MeasuringTE,Reddi2021AdaptiveFO}.\par Due to the heterogeneity of data size and distribution across clients in federated learning, simply aiming to minimize the total loss in a large network may disproportionately advantage or disadvantage the model performance on some of the clients, for example resulting in a loss of uniformity of results across the clients \cite{Li2021DittoFA}. The accuracy of individual devices in the network, for instance, cannot be guaranteed despite a high federated average accuracy.\par There has been tremendous recent interest in developing fair methods for machine learning \cite{Cotter2019OptimizationWN, Dwork2012FairnessTA}; unfortunately, current methods cannot be applied directly to federated settings. Recent work has introduced fairness algorithms suitable for federated learning. \cite{Mohri2019AgnosticFL} uses a minimax optimization method to ensure that overall fairness is not improved at the expense of some clients' performance. \cite{Li2020FairRA} borrows the idea of resource allocation: fairness is allocated as a resource to achieve a uniform distribution of client performance. \cite{Wang2021FederatedLW} mitigates potential conflicts among clients before averaging their gradients. However, these algorithms have fairness as their only goal; roughly speaking, there is a trade-off between fairness and optimal performance (usually measured by average performance). Therefore, in real federated learning applications, one naturally wants to further guarantee fairness while a program can still guarantee optimal performance.\par Based on the above, we propose a fairness adjustment plug-in. Our algorithm can be combined with any federated learning algorithm that does not itself adjust the aggregation weights, and we add fairness considerations on top of the pursuit of the best performance.\par \section{Fair Federated Learning} In this section, we first formally define the problem in Section A, then propose a naive solution combining deep reinforcement learning methods in Section B, and propose a new general framework in Section C, which can effectively handle the fairness disaster caused by non-IID data distribution while preserving the performance of federated learning methods. \subsection{Problem Statement} Standard horizontal federated learning can be defined as minimizing\par \begin{equation} \mathop{\min}_{\omega} f(x,\omega) = \sum_{i=1}^{N}p_{i}f_{i}(x,\omega), \end{equation}\par where $f_{i}(x,\omega):=E_{x \sim P_i}[f_{i}(x,\omega)]$ is the local loss function of the $i$-th client, the vector $\omega$ denotes the model weights, and $(x,y)$ denotes a particular labeled sample.
The global objective aggregates the loss functions of the different clients. Assuming that the data is partitioned over $N$ clients, where $D_i$ is the number of data points on client $i$, the aggregation weight of the $i$-th client is defined as:\par \begin{equation} p_i = \frac{D_i}{\sum_{i=1}^{N}D_{i}}. \end{equation}\par That is, the influence of a client on the global model is simply determined by its sample size. Training then corresponds to a uniform distribution over the union of all samples, where all samples are uniformly weighted.\par The traditional federated learning method solves problem (1) by weighting the clients' contributions according to (2), but this may cause the final global model to be biased towards clients with a large number of data points. Because of the considerable limitations of this assumption in practical applications, we will not adopt it in this paper, as we illustrate through Fig.~\ref{fig: IID_and_Non}.\par Considering some special cases of federated learning, such as joint training across different hospitals, the clients hope to jointly train a better model while protecting privacy, so the performance on different clients should not vary too much. In order to reflect the fairness of a federated network, \cite{Wahab2021FederatedML, Li2020FederatedLC} proposed the uniformity and the standard deviation (STD) of the testing accuracy between clients as measures of network fairness. Unfortunately, indicators such as the STD are tied to the expectation of the testing accuracy. For example, assuming that the clients' testing accuracies for networks A and B are 0.07, 0.08, 0.09 and 0.7, 0.8, 0.9, respectively, then A has a smaller STD although the relative ``rich and poor" differences in test accuracy between the two networks are the same. In order to alleviate this problem and obtain an index that better measures the degree of network fairness, we put forward a new definition of fairness in Definition 1.\par \textbf{Definition 1} (Fairness) \emph{We say a model $\omega_{1}$ provides a more fair solution than $\omega_{2}$ if the test performance of $\omega_{1}$ on $N$ devices, $\{acc_1,...,acc_N\}$, is more fair than that of $\omega_{2}$, i.e., $Gini\{F_{n}(\omega_{1})\}<Gini\{F_{n}(\omega_{2})\}$, where $F_{n}(\cdot)$ denotes the test accuracy on $N$ devices, and $Gini\{\cdot\}$ denotes the Gini coefficient. Let $acc_{i}$ and $acc_{j}$ denote the accuracy of the model on the test set of any client, and let $\mu=\frac{1}{N}\sum_{n=1}^{N}acc_{n}$ denote the average accuracy over all clients.} \begin{equation} Gini = \frac{\sum_{i=1}^{N} \sum_{j=1}^{N}|acc_i-acc_j|}{2 N^2 \mu}. \end{equation}\par We define the fairness of the model over all clients based on the Lorenz curve, and the resulting fairness index is a ratio between 0 and 1. The maximum Gini coefficient is 1 and the minimum is 0. The former indicates that the performance of the global model on the clients is absolutely uneven (that is, it performs best on one client and the accuracy is 0 on all others), while the latter indicates that the performance of the global model on the clients is absolutely even. Our goal is to ensure the final model performance while keeping the Gini coefficient as small as possible.\par Different from existing FL fairness definitions such as uniformity and STD, our proposed Gini coefficient describes the degree of dispersion of a distribution and is scale-invariant.
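A minimal sketch of the computation in Definition 1 (Python; a direct transcription of Eq. (3)) makes the scale invariance explicit:
\begin{verbatim}
import numpy as np

def gini(acc):
    """acc: per-client test accuracies. Returns Eq. (3)."""
    acc = np.asarray(acc, dtype=float)
    n, mu = len(acc), acc.mean()
    total = np.abs(acc[:, None] - acc[None, :]).sum()
    return total / (2 * n ** 2 * mu)

# scaling all accuracies by a constant leaves the value unchanged:
# gini([0.07, 0.08, 0.09]) == gini([0.7, 0.8, 0.9])
\end{verbatim}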
Therefore, the degree of fairness of networks with different average performance can be compared with a single uniform index.\par Next, we demonstrate through experiments that a non-IID data distribution over clients not only reduces accuracy and convergence efficiency but also leads to a fairness disaster between clients. We trained a CNN model on the CIFAR-100 dataset using FedAvg, with 100 clients of which 10\% are selected to participate in the update in each round.\par \begin{figure}[ht] \centering \subfigure[CIFAR-100 FedAvg IID]{\includegraphics[width=4.3cm, height=3.5cm]{Gini-IID.png}} \subfigure[CIFAR-100 FedAvg Non-IID]{\includegraphics[width=4.3cm, height=3.5cm]{Gini-non-IID.png}} \caption{When the CIFAR-100 data on 100 clients follows the IID distribution (left) and the non-IID distribution (right), the test results of FedAvg after 1000 rounds of training are shown in the figure. The larger the proportion of the orange area, the more unfair the model is.} \label{fig: IID_and_Non} \end{figure} In the Lorenz curve, the smaller the proportion of orange, the higher the degree of fairness. As can be seen from Fig.~\ref{fig: IID_and_Non}, when the global model is updated according to (2), the non-IID distribution of client data increases the unfairness of the model's performance across clients.\par \subsection{DRL Settings}\label{AA} In federated learning training, since a certain percentage of clients is randomly selected to participate in each round, the optimization over aggregation weights is non-differentiable. There are various approaches to deal with non-differentiable optimization bottlenecks, such as Gumbel-softmax \cite{Jang2017CategoricalRW} or stochastic back-propagation \cite{JimenezRezende2014StochasticBA}. In this paper, we model the problem of distributing the weights of the different local models in the global model as a deep reinforcement learning problem and explore the optimal aggregation strategy \cite{Zhang2021DeepRL, Zhang2021AdaptiveCS}.\par Instead of relying on labeled data, reinforcement learning is a self-learning process in which an agent maximizes its reward through interaction with the environment. The state-action transitions and rewards of the training process are abstracted as a Markov Decision Process (MDP); the purpose of the DRL agent is then to find an optimal policy $\pi^{*}$ that maximizes the expected long-term reward:\par \begin{equation} \pi^{*} = \mathop{\arg\max}_{\pi}E_{\tau \sim \pi(\tau)}[r(\tau)], \end{equation}\par where $\pi$ represents the policy, $\tau$ represents a trajectory obtained by using the policy to interact with the environment, and $r(\tau)$ represents the overall reward of this trajectory. Next, we expand the formula to obtain the objective function and its gradient based on the Monte Carlo approximation as:\par \begin{equation} \ J(\theta) = E_{\tau \sim \pi_{\theta}(\tau)}[r(\tau)] =\int \pi_{\theta}(\tau)r(\tau)\,d\tau, \end{equation}\par \begin{equation} \ \nabla_{\theta} J(\theta) = E_{\tau \sim \pi_{\theta}(\tau)}[\nabla_{\theta}\log\pi_{\theta}(\tau)r(\tau)]. \end{equation}\par Since our goal is to maximize the expected long-term return, we use gradient ascent to find the optimal policy \cite{Sutton1999PolicyGM}.\par
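As an illustration of Eq. (6), a minimal sketch of one Monte Carlo policy-gradient (REINFORCE) step is given below in PyTorch; \texttt{policy\_net}, which maps the observed state to the Gaussian means of the actions, is an assumed stand-in for our agent.
\begin{verbatim}
import torch

def reinforce_step(policy_net, optimizer, state, reward):
    """One gradient-ascent step on J(theta) = E[log pi(a|s) r]."""
    means = policy_net(state)                      # learnable means a_k
    dist = torch.distributions.Normal(means, 1.0)  # unit variance
    actions = dist.sample()
    # minimizing the negated objective performs gradient ascent
    loss = -(dist.log_prob(actions).sum() * reward)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return actions.detach()
\end{verbatim}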
\begin{algorithm} \caption{PGF-FedAvg} \KwIn{Number of communication rounds $T$, number of clients $N$, fraction of clients updating in each round $C$, number of local epochs $E$, learning rates $\alpha$ and $\beta$.} \textbf{Initialize:} Parameters $\phi^0$, $\omega^0$. \par \For{t=0,1,...,T-1} { \textbf{Server} randomly selects a subset of $K=C \cdot N$ clients and sends $\omega^{t}$ to them;\par \For{client $k \in [K]$ \textbf{in parallel}} { Client $k$ copies $\omega^{t}$ as its local model parameters $\omega_{k}^{t}$;\par \For{j=0,1,...,E-1} { Calculate the gradient $g_k^j$\par $\omega_{k}^{j+1} \gets \omega_{k}^{j} - \alpha g_k^j $} $acc_k^t \gets$ \textbf{Accuracy}\{$\omega_{k}^{t}$ evaluated on the validation dataset $\mathcal{D}_k^V$\} } Calculate $\mu^t=\frac{1}{K}\sum^{K}_{k=1}acc_k^t$ and $Gini^t$ using (3)\par \textbf{DRL agent do:}\par Get reward $r^{t}=-\mu^t\log(Gini^t)$\par Update the DRL policy parameters $\phi$ \par \[\phi^{t+1} \gets \phi^{t}+\beta r^{t} \nabla_{\phi}\log\pi(s_t,a_t)\] Get state $S^{t}=\{\omega_{1}^{t},...,\omega_{K}^{t}\}$\par Calculate the Gaussian distribution means $\{a_1^t,...,a_K^t\}$\par Calculate the aggregation weights $\{p_1^t,...,p_K^t\}$\par \textbf{DRL end}\par \textbf{Server} aggregates the global model as \[\omega^{t+1} \gets \sum^{K}_{k=1} p_k^t \omega_{k}^{t}\] } \end{algorithm} Therefore, the federated learning process can be modeled as a Markov Decision Process (MDP), where the state in each round is represented by the model parameters of the participating clients. Given the current state, the reinforcement learning agent learns a policy distribution and, according to this policy, calculates the aggregation weights corresponding to each client, thereby updating the global model. After that, the updated global model parameters are transmitted to the clients. The local validation accuracy of the global model is observed on each client's local validation set, and the reward of the DRL agent is obtained from the average validation accuracy and the Gini coefficient over the clients. The objective is to train the DRL agent so that federated learning converges to the target accuracy and fairness level as quickly as possible.\par In addition, in our algorithm, the DRL agent only needs to obtain the local model parameters and validation accuracies of the clients, neither introducing additional communication overhead nor collecting and inspecting any private information, which serves the purpose of privacy protection.\par \textbf{State:} The state of the $t$-th round is represented by a vector $\{\omega_1^t,...,\omega_K^t\}$, consisting of the model parameters of the $K$ clients participating in the update. During the training process, the clients and the server jointly maintain a list of model parameters $\{\omega_k^t {\mid}k{\in}K\}$. In each round of FL, a client updates the list after uploading its trained local model to the server.\par \textbf{Action:} Each time the state list is updated, we use the client model parameters participating in the aggregation to train the DRL agent. The action space is a vector $\{a_1^t,...,a_K^t\}$; the aggregation weight $p^t_k$ of the $k$-th client, where $p^t_k \in (0, 1)$, is drawn from a Gaussian distribution with a learnable mean $a_k^t$ and unit variance, and is used in the aggregation of the global model.\par \textbf{Reward:} The reward comes from a small verification set local to each client, defined as $r^t=-\mu^t \log(Gini^t)$, where $\mu^t$ denotes the average validation accuracy over the clients and $Gini^t$ denotes the Gini fairness coefficient of the accuracies (see Definition 1). Such a setting encourages the federated model to achieve both optimal and fair performance.
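Concretely, one interaction of the agent with the federated environment could look as follows; this is a sketch under the assumptions above, in which the sampled actions are clipped to $(0,1)$ and normalized to obtain valid aggregation weights (one of several possible design choices).
\begin{verbatim}
import numpy as np

def reward(accs, eps=1e-8):
    """r = -mu * log(Gini), from the clients' validation accuracies;
    gini() is the function sketched after Definition 1."""
    mu = float(np.mean(accs))
    return -mu * np.log(max(gini(accs), eps))

def aggregation_weights(actions, eps=1e-6):
    """Map sampled actions to weights p_k in (0, 1) summing to one."""
    p = np.clip(np.asarray(actions, dtype=float), eps, 1.0 - eps)
    return p / p.sum()
\end{verbatim}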
The DRL agent is trained to maximize the long-term $\gamma$-discounted reward:\par \begin{equation} \ R = \sum_{t}\gamma^t r_t. \end{equation}\par Next, we take FedAvg as an example to introduce our fairness optimization for federated learning algorithms; the pseudo-code is shown in Algorithm 1.\par \subsection{PG-FFL Workflow} We define federated learning on a classification problem. If there are $K$ clients in total, the training set, validation set and test set on the $k$-th client are $D_k^{Train} = \{(x_i,y_i)\}_{i=1}^{N}{\sim}P_k$, $D_k^{Test} = \{(x_i,y_i)\}_{i=1}^{M}{\sim}P_k$ and $D_k^V = \{(x_i,y_i)\}_{i=1}^{L}{\sim}P_k$, respectively, where $x_i{\in}X_k$ is the $d$-dimensional feature vector of the feature space $X_k$, and $y_i$ is its corresponding label. The training set, validation set, and test set on each client are independent and identically distributed (IID), but the data sizes and distributions on different clients are not required to be the same. The goal of our training is to learn a global model that performs well and fairly on the test set of each client.\par \begin{figure}[ht] \centering \includegraphics[width=9cm, height=5.5cm]{workflow.png} \caption{The framework of PG-FFL.} \label{fig: framework} \end{figure} Unlike traditional federated learning, our setup neither requires the data on different clients to be independently and identically distributed, nor is it designed to train a model that performs well on a server-side test set, which is better adapted to the requirements of the real world.\par Fig.~\ref{fig: framework} shows how our algorithm, PG-FFL, uses a reinforcement learning agent to assign the aggregation weights of the clients participating in each round of updates, following the steps below:\par \begin{itemize} \item \textbf{Step 1 (initialization)}: All $N$ available devices with non-identical data sizes and distributions check in to the server as clients. The server selects $K=N \cdot C$ clients to participate in the update according to a certain proportion $C$, initializes the model parameters $\omega^{init}$ and transmits them to the selected clients. Each client evaluates the global model parameters on its validation set, then trains on its local data, and returns the local model parameters $\{\omega_k^1, k\in K\}$ and validation accuracies $\{acc_k^1, k\in K\}$. \item \textbf{Step 2}: In the $t$-th iteration, the server calculates the average accuracy $\mu^t$ and the Gini coefficient $Gini^t$ from the returned $acc_k^t$, then calculates the weight $p_{k}^t$ with which client $k$ participates in the global update, and updates the global model parameters according to $\{\omega_k^t, k\in K\}$ and $\{p_k^t, k\in K\}$. \item \textbf{Step 3}: The server randomly selects a certain percentage of clients to participate in the next update. The selected clients perform local training starting from the last round's global model $\omega^{t-1}$, then upload their locally updated model parameters and validation accuracies.
\end{itemize}
\hspace{-1cm}
\begin{figure*}[bp]
\centering
\subfigure[Case \uppercase\expandafter{\romannumeral1}]{\includegraphics[width=6cm, height=4.3cm]{CIFAR10-20.pdf}} \label{fig: dataset_case1}
\hspace{-0.5cm}
\subfigure[Case \uppercase\expandafter{\romannumeral2}]{\includegraphics[width=6cm, height=4.3cm]{CIFAR10-5.pdf}} \label{fig: dataset_case2}
\hspace{-0.5cm}
\subfigure[Case \uppercase\expandafter{\romannumeral3}]{\includegraphics[width=6cm, height=4.3cm]{CIFAR10-2.pdf}} \label{fig: dataset_case3}
\caption{The data distribution of each client under the non-IID data partition. The color bar denotes the number of data samples. Each rectangle represents the number of data samples of a specific class in a client.}
\end{figure*}
\vspace{-0.5cm}
\section{Experiment}
In this section, we present our empirical setup and results. We first describe the experimental setup (Section A). We then demonstrate the motivation for adding a fairness adjustment module based on an RL algorithm, showing that our algorithm can effectively reduce the performance differences of classical federated models across clients under varying degrees of non-IID data distribution while maintaining good overall performance; we also consider an extreme setting in which the data classes of different clients do not overlap at all, and compare the classification accuracy and fairness of the algorithms (Section B). Next, we compare our algorithm with several baselines that share its fairness goals (Section C). Finally, we discuss the limitations of the algorithm (Section D).\par
\subsection{Experimental Setup}
\textbf{Federated datasets.} We explore a suite of federated datasets based on classification tasks: CIFAR-10 \cite{Krizhevsky2009LearningML}, CIFAR-100 \cite{Krizhevsky2009LearningML} and Fashion-MNIST \cite{Xiao2017FashionMNISTAN}. For the comparison with q-FFL and AFL, we use a small benchmark dataset studied by \cite{Li2020FairRA} based on Fashion-MNIST.\par
\textbf{Data partitions.} To construct the non-IID datasets, we divide the data of each class of an $N$-class classification dataset into equal-sized partitions, and each client randomly selects a number of these partitions, so that different clients hold local data with inconsistent quantities and categories.
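A minimal sketch of this partitioning scheme (our own illustration; the exact shard bookkeeping in our code may differ) is:
\begin{verbatim}
import numpy as np

def partition_non_iid(labels, num_clients, shards_per_client, seed=0):
    # Split each class into equal-sized shards, then let every client
    # draw `shards_per_client` shards at random (Cases I-III below vary
    # the number of clients and shards per client).
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    shards_per_class = num_clients * shards_per_client // num_classes
    shards = []
    for c in range(num_classes):
        idx = rng.permutation(np.flatnonzero(labels == c))
        shards.extend(np.array_split(idx, shards_per_class))
    order = rng.permutation(len(shards))
    return [np.concatenate([shards[j] for j in
                            order[i * shards_per_client:
                                  (i + 1) * shards_per_client]])
            for i in range(num_clients)]
\end{verbatim}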
As shown in Fig. 3: i) Case \uppercase\expandafter{\romannumeral1}, as in (a), sets 100 clients, each randomly selecting 20 data blocks, taking CIFAR-100 as an example; ii) Case \uppercase\expandafter{\romannumeral2}, as in (b), sets 5 clients, each with 4 data blocks, taking CIFAR-10 as an example; here the data categories on the clients overlap; iii) Case \uppercase\expandafter{\romannumeral3}, as in (c), sets 5 clients, each with 2 data blocks, taking CIFAR-10 as an example; here the data categories on the clients do not overlap at all.\par
\begin{figure*}
\centering
\subfigure[CIFAR-10]{\includegraphics[width=5.8cm, height=3cm]{CIFAR10-fedavg-acc.pdf}}
\subfigure[CIFAR-100]{\includegraphics[width=5.8cm, height=3cm]{CIFAR100-fedavg-acc.pdf}}
\subfigure[Fashion-MNIST]{\includegraphics[width=5.8cm, height=3cm]{fmnist-fedavg-acc.pdf}} \\
\vspace{-0.2cm}
\centering
\subfigure[CIFAR-10]{\includegraphics[width=5.8cm, height=3cm]{CIFAR10-fedavg-gini.pdf}}
\subfigure[CIFAR-100]{\includegraphics[width=5.8cm, height=3cm]{CIFAR100-fedavg-gini.pdf}}
\subfigure[Fashion-MNIST]{\includegraphics[width=5.8cm, height=3cm]{fmnist-fedavg-gini.pdf}} \\
\vspace{-0.2cm}
\centering
\subfigure[CIFAR-10]{\includegraphics[width=5.8cm, height=3cm]{CIFAR10-fedprox-acc.pdf}}
\subfigure[CIFAR-100]{\includegraphics[width=5.8cm, height=3cm]{CIFAR100-fedprox-acc.pdf}}
\subfigure[Fashion-MNIST]{\includegraphics[width=5.8cm, height=3cm]{fmnist-fedprox-acc.pdf}} \\
\vspace{-0.2cm}
\centering
\subfigure[CIFAR-10]{\includegraphics[width=5.8cm, height=3cm]{CIFAR10-fedprox-gini.pdf}}
\subfigure[CIFAR-100]{\includegraphics[width=5.8cm, height=3cm]{CIFAR100-fedprox-gini.pdf}}
\subfigure[Fashion-MNIST]{\includegraphics[width=5.8cm, height=3cm]{fmnist-fedprox-gini.pdf}}
\caption{From left to right are the experimental results on the CIFAR-10, CIFAR-100 and Fashion-MNIST datasets, respectively. The data distribution follows Case \uppercase\expandafter{\romannumeral1} of Fig. 3. The first and second rows show the performance (average test accuracy on clients) and fairness (Definition 1) of our algorithm compared with FedAvg. The third and fourth rows show the performance and fairness of our algorithm compared with FedProx. Note that our fairness plug-in uses the same parameter settings as the original algorithm it is added to.}
\end{figure*}
\textbf{Implementation.} We implemented all code in PyTorch, using one server and $N$ clients to simulate a federated network, where $N$ is the total number of clients.\par
\hspace{-0.5cm}
\subsection{Fairness of PG-FFL}
In this section, we show the efficiency of our fairness adjustment plug-in combined with FedAvg and FedProx, which are both classical and effective FL algorithms. We set up 100 clients to train on CIFAR-10, CIFAR-100 and Fashion-MNIST, respectively; the data distribution of the clients follows Case \uppercase\expandafter{\romannumeral1} shown in Fig. 3(a), so the local data distribution is highly heterogeneous. Regarding model selection, we train CIFAR-10 and CIFAR-100 with a CNN and Fashion-MNIST with a four-layer MLP. The fairness adjustment plug-in, based on the policy gradient reinforcement learning algorithm, uses a four-layer multi-layer perceptron to learn the optimal aggregation strategy.\par
In each communication round, FedProx and PGF-FedProx are set to train locally for 5 epochs, while FedAvg and PGF-FedAvg execute one epoch locally.
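As a rough sketch of this plug-in (our own illustration, not the exact implementation; the construction of the state vector and the normalization of the sampled weights are assumptions), one server-side DRL round may look like:
\begin{verbatim}
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    # Four-layer MLP mapping the state (a summary of the K client
    # models) to the Gaussian means {a_1,...,a_K}.
    def __init__(self, state_dim, num_clients, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_clients))

    def forward(self, state):
        return self.net(state)

def drl_round(policy, optimizer, state, reward):
    # Sample a_k ~ N(mean_k, 1), update phi by ascending
    # r * grad log pi (the optimizer's learning rate plays the
    # role of beta), and return the aggregation weights.
    means = policy(state)
    dist = torch.distributions.Normal(means, torch.ones_like(means))
    actions = dist.sample()
    loss = -reward * dist.log_prob(actions).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return torch.softmax(actions, dim=-1)  # assumed normalization
\end{verbatim}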
We can observe in Fig. 4 that our algorithm basically maintains the same convergence speed as the baselines, that the average accuracy is slightly improved or more stable, and that the fairness is significantly enhanced.\par
Next, we further exacerbate the inhomogeneity of the data distribution, verifying that our proposed PG-FFL algorithm provides a fairer solution for federated data. We validate on a 10-class classification problem, reduce the number of clients trained on CIFAR-10 and Fashion-MNIST to 5, and have all of them participate in each round of federated updates. In Table 1, we compare the final test accuracy and fairness of our proposed fairness adjustment plug-in combined with FedAvg and FedProx, respectively, for IID data and for non-IID distributions as in Case \uppercase\expandafter{\romannumeral2} and Case \uppercase\expandafter{\romannumeral3}. Note that when each client has only two classes, the clients' data classes do not overlap at all.\par
We can observe that as the non-IID degree of the client data distribution deteriorates, the baseline algorithms attain significantly improved fairness after adding the fairness adjustment plug-in, and the average accuracy of the model on the clients also improves. When the client data is IID, our method can sometimes still improve the fairness of the model, but at a certain loss of accuracy. We speculate that this is because the reward of our RL model considers both fairness and average accuracy; to ensure a high degree of fairness across the clients' test accuracies, the algorithm is prevented from pursuing a higher average accuracy, causing some accuracy loss.\par
\begin{table}[ht]
\caption{The accuracy and fairness comparison of PG-FFL and the baselines, tested on datasets with varying degrees of non-IID distribution.}
\setlength{\tabcolsep}{1.2mm}{
\renewcommand\arraystretch{1.5}
\begin{tabular}{ccccccc}
\multicolumn{7}{c}{CIFAR-10} \\ \hline
\multicolumn{1}{c}{Non-IID level} & \multicolumn{2}{c}{IID} & \multicolumn{2}{c}{Case 2} & \multicolumn{2}{c}{Case 3} \\
\multicolumn{1}{c}{} & acc(↑) & Gini(↓) & acc(↑) & Gini(↓) & acc(↑) & Gini(↓) \\ \hline
\multicolumn{1}{c}{FedAvg} & 0.675 & 0.098 & 0.612 & 0.102 & 0.454 & 0.161 \\
\multicolumn{1}{c}{PGF-FedAvg} & \textbf{0.683} & \textbf{0.042} & \textbf{0.665} & \textbf{0.049} & \textbf{0.488} & \textbf{0.053} \\
\multicolumn{1}{c}{FedProx} & \textbf{0.708} & 0.107 & 0.595 & 0.110 & 0.422 & 0.154 \\
\multicolumn{1}{c}{PGF-FedProx} & 0.689 & \textbf{0.073} & \textbf{0.608} & \textbf{0.037} & \textbf{0.453} & \textbf{0.031} \\ \hline
\multicolumn{7}{c}{CIFAR-100} \\ \hline
\multicolumn{1}{c}{Non-IID level} & \multicolumn{2}{c}{IID} & \multicolumn{2}{c}{Case 2} & \multicolumn{2}{c}{Case 3} \\
\multicolumn{1}{c}{} & acc(↑) & Gini(↓) & acc(↑) & Gini(↓) & acc(↑) & Gini(↓) \\ \hline
\multicolumn{1}{c}{FedAvg} & 0.493 & 0.130 & 0.468 & 0.133 & 0.316 & 0.151 \\
\multicolumn{1}{c}{PGF-FedAvg} & \textbf{0.502} & \textbf{0.093} & \textbf{0.491} & \textbf{0.068} & \textbf{0.337} & \textbf{0.093} \\
\multicolumn{1}{c}{FedProx} & 0.509 & 0.133 & 0.467 & 0.143 & 0.299 & 0.164 \\
\multicolumn{1}{c}{PGF-FedProx} & \textbf{0.514} & \textbf{0.084} & \textbf{0.483} & \textbf{0.079} & \textbf{0.331} & \textbf{0.085} \\ \hline
\multicolumn{7}{c}{Fashion-MNIST} \\ \hline
\multicolumn{1}{c}{Non-IID level} & \multicolumn{2}{c}{IID} & \multicolumn{2}{c}{Case 2} & \multicolumn{2}{c}{Case 3} \\
\multicolumn{1}{c}{} & acc(↑) & Gini(↓) & acc(↑) & Gini(↓) & acc(↑) & Gini(↓) \\ \hline
\multicolumn{1}{c}{FedAvg} & 0.869 & 0.032 & 0.850
& 0.063 & 0.737 & 0.074 \\
\multicolumn{1}{c}{PGF-FedAvg} & \textbf{0.884} & \textbf{0.021} & \textbf{0.874} & \textbf{0.024} & \textbf{0.796} & \textbf{0.033} \\
\multicolumn{1}{c}{FedProx} & 0.879 & 0.021 & 0.851 & 0.045 & 0.828 & 0.037 \\
\multicolumn{1}{c}{PGF-FedProx} & \textbf{0.883} & \textbf{0.017} & \textbf{0.854} & \textbf{0.031} & \textbf{0.836} & \textbf{0.020} \\ \hline
\end{tabular}}
\end{table}
\subsection{Comparison With Other Fair Federated Learning Algorithms}
Next, we compare with two other algorithms that also aim to address fairness issues in federated networks.\par
In the experiments in this section, we implement a very extreme case where each client has only a completely disjoint class of data. We use the same experimental setup as \cite{Mohri2019AgnosticFL}: the Fashion-MNIST dataset \cite{Xiao2017FashionMNISTAN} is an MNIST-like dataset where images are classified into 10 categories of clothing instead of handwritten digits. We extract a subset of the data labeled with three categories (T-shirts/tops, pullovers, and shirts) and divide this subset into three clients, each containing one category of clothing. We then train a classifier for these three classes using logistic regression and the Adam optimizer. Since the clients here uniquely determine the labels, in this experiment we did not compare with models trained on specific baselines.\par
We observe in Table 2 that our algorithm performs better in both the final average accuracy and the fairness between clients.\par
\vspace{-0.3cm}
\begin{table}[ht]
\caption{The accuracy and fairness comparison of PG-FFL and other fairness algorithms in the case of an extreme non-IID data distribution.}
\centering
\renewcommand\arraystretch{1.5}
\begin{tabular}{lccccc}
\hline
 & \multicolumn{2}{c}{All Clients} & Shirts & Pullovers & T-shirts \\
 & acc(↑) & Gini(↓) & acc(↑) & acc(↑) & acc(↑) \\ \hline
q-FFL(q=0) & 78.8 & 0.084 & 66.0 & \textbf{84.5} & \textbf{85.9} \\
AFL & 78.2 & 0.046 & 71.4 & 81.0 & 82.1 \\
PGF-FedAvg & \textbf{79.1} & \textbf{0.027} & \textbf{74.2} & 80.5 & 82.6 \\ \hline
\end{tabular}
\end{table}
\vspace{-0.5cm}
\subsection{Scalability Analysis}
In this section, we continue the experimental setup of Section B by combining our proposed fairness adjustment plug-in with FedAvg and FedProx, respectively, and modifying the percentage of clients participating in each round's update while keeping the total number of clients unchanged. It can be observed from Fig. 5 that there is an upper limit on the scalability of the algorithm: the greater the proportion of clients participating in the global update, the better the effect. We conjecture that this is because the RL algorithm outputs aggregation weights only for the clients participating in the update; when the proportion of participating clients is small, the non-participating clients cannot obtain updated aggregation weights in time, which affects the experimental results. We use the standard deviation (STD) as the fairness indicator here because it has the same dimension as the test accuracy and more readily shows volatility.
It is easy to see that PG-FFL can still improve the fairness of the federated network under this alternative fairness definition.\par
\begin{figure}[ht]
\centering
\subfigure[10\% from 100 clients]{\includegraphics[width=4cm]{0.1.pdf}}
\subfigure[25\% from 100 clients]{\includegraphics[width=4cm]{0.25.pdf}} \\
\centering
\subfigure[50\% from 100 clients]{\includegraphics[width=4cm]{0.5.pdf}}
\subfigure[75\% from 100 clients]{\includegraphics[width=4cm]{0.75.pdf}}
\caption{The solid line is the clients' average validation accuracy, and the green band is the accuracy STD across clients; the smaller its area, the fairer the result.}
\end{figure}
\section{Conclusion}
In this paper, we propose fairness as a new optimization objective, defined by the Gini coefficient of the clients' validation accuracies, based on the realistic consideration of encouraging a fairer accuracy distribution across clients in federated learning. We design a reinforcement learning plug-in that can be applied to federated algorithms to solve this problem in large-scale networks, and our experiments demonstrate that PG-FFL can be regarded as a fairness add-on for any global objective. We illustrate the fairness and superiority of PG-FFL on a set of federated datasets, and the experimental results show that our framework outperforms the baseline methods in terms of overall performance, fairness, and convergence speed.\par
\section*{Acknowledgement}
This paper is supported by the Key Research and Development Program of Guangdong Province under grant No. 2021B0101400003 and the Shenzhen Basic Research Program (Natural Science Foundation) Key Program of Fundamental Research (No. JCYJ20200109143016563). The corresponding authors are Jianzong Wang from Ping An Technology (Shenzhen) Co., Ltd (jzwang@188.com) and Yuhan Dong from Tsinghua University (dongyuhan@sz.tsinghua.edu.cn).
\section{Introduction} Since ACL2 provides only limited support for quantification, modeling group theory in its logic is a challenging problem. A 1990 paper of Yuan Yu \cite{yu} presents a formal development of finite group theory in Nqthm based on the {\tt defn-sk} macro (surviving in ACL2 as {\tt defun-sk}), which he uses to define a predicate that characterizes a list whose members satisfy the group axioms with respect to a fixed operation. He also defines the notion of group homomorphism and proves that the kernel of any homomorphism is a normal subgroup, but this requires an additional {\tt defn-sk} form in order to introduce a group with a different operation. The culmination of Yu's work is Lagrange's Theorem: {\it The order of a finite group is divisible by the order of any subgroup.} Any significant further development of the theory would require induction on the order of a group, which seems to be inaccessible through this method. We are unaware of any ACL2 results in this domain that duplicate Yu's achievement. A similar approach based on {\tt encapsulate} has been suggested \cite{eric}, but while this provides a generalization to infinite groups, it otherwise shares the same limitations as the {\tt defun-sk} method. Heras et al.~\cite{heras} describe "a guideline to develop tools that simplify the formalizations related to algebraic structures in ACL2", but do not mention any results that are directly relevant to the theory of groups. We shall present an ACL2 formalization of finite groups that provides for inductive proofs as well as computations on concrete groups. Our scheme is based on the definition of a group as an explicit operation table, i.e., a matrix of group elements. We define a {\tt defgroup} macro that provides definitions of parametrized families of groups, which we apply to the additive and multiplicative groups of integers modulo $n$, the symmetric and alternating groups, arbitrary quotient groups, and cyclic subgroups. Our proof of Lagrange's theorem shares some features with Yu's, but we prove a stronger version stating that the order of a group is the product of that of a subgroup and its index. This leads naturally into an analysis of quotient groups, which lays the groundwork for a theorem of Cauchy: {\it If the order of a group $G$ is divisible by a prime $p$, then $G$ has an element of order $p$.} We present an inductive proof of this result, which illustrates the effectuality of our scheme. 
The proof, which resides in {\tt books/workshops/2022/russinoff-groups/}, uses several number-theoretic results from {\tt books/projects/quadratic-reciprocity/euclid.lisp}, including the following basic theorem of Euclid: {\it If a product of integers $ab$ is divisible by a prime $p$, then $p$ divides either $a$ or $b$.}
\section{Groups and Subgroups}\label{grps}
In our formalization, a group is a square matrix, the first row of which is a list of the group elements:
\begin{small}
\begin{verbatim}
(defmacro elts (g) `(car ,g))

(defmacro in (x g) `(member-equal ,x (elts ,g)))

(defmacro order (g) `(len (elts ,g)))
\end{verbatim}
\end{small}
The {\it index} of a group element is its position in the list {\tt (elts g)}:
\begin{small}
\begin{verbatim}
(defun index (x l)
  (if (consp l)
      (if (equal x (car l))
          0
        (1+ (index x (cdr l))))
    ()))

(defmacro ind (x g) `(index ,x (elts ,g)))
\end{verbatim}
\end{small}
The group operation is defined as a table access:
\begin{small}
\begin{verbatim}
(defun op (x y g) (nth (ind y g) (nth (ind x g) g)))
\end{verbatim}
\end{small}
We also define
\begin{small}
\begin{verbatim}
(defmacro e (g) `(car (elts ,g)))
\end{verbatim}
\end{small}
Note that {\tt (nth (ind (e g) g) g)} = {\tt (nth 0 g)} = {\tt (elts g)} and therefore,
\begin{center}
{\tt (op (e g) y g)} = {\tt (nth (ind y g) (elts g))} = {\tt y}
\end{center}
i.e., {\tt (e g)} is a left identity:
\begin{small}
\begin{verbatim}
(defthm group-left-identity
  (implies (in x g)
           (equal (op (e g) x g) x)))
\end{verbatim}
\end{small}
The left inverse {\tt (inv x g)}, if it exists, is the group element of least index satisfying
\begin{center}
{\tt (op (inv x g) x g)} = {\tt (e g)}
\end{center}
defined as follows:
\begin{small}
\begin{verbatim}
(defun inv-aux (x l g)
  (if (consp l)
      (if (equal (op (car l) x g) (e g))
          (car l)
        (inv-aux x (cdr l) g))
    ()))

(defun inv (x g) (inv-aux x (elts g) g))
\end{verbatim}
\end{small}
The definition of a group is based on a set of predicates, including those representing the group axioms:
\begin{small}
\begin{verbatim}
(defund groupp (g)
  (and (posp (order g))
       (matrixp g (order g) (order g))
       (dlistp (elts g))
       (not (in () g))
       (closedp g)
       (assocp g)
       (inversesp g)))
\end{verbatim}
\end{small}
The predicate {\tt matrixp} is a straightforward characterization of a matrix of given dimensions, and {\tt dlistp} recognizes lists of distinct members. The condition that {\tt NIL} is not a group element is unnecessary but avoids certain technical difficulties. The definition of {\tt closedp} recursively searches for a pair of group elements {\tt x} and {\tt y} such that {\tt (op x y g)} is not in the group. This allows us to prove the theorem
\begin{small}
\begin{verbatim}
(defthm group-closure
  (implies (and (groupp g)
                (in x g)
                (in y g))
           (in (op x y g) g)))
\end{verbatim}
\end{small}
and also provides a counterexample if the search succeeds:
\begin{small}
\begin{verbatim}
(defthm not-closedp-cex
  (implies (and (dlistp (elts g))
                (not (closedp g)))
           (let* ((cex (closedp-cex g))
                  (x (car cex))
                  (y (cadr cex)))
             (and (in x g)
                  (in y g)
                  (not (in (op x y g) g))))))
\end{verbatim}
\end{small}
The latter result is useful in verifying {\tt (closedp g)} for a conjectured group {\tt g}. An analogous pair of theorems is derived for {\tt assocp}, which searches for a triple that violates associativity, and {\tt inversesp}, which searches for an element without a left inverse. Other basic properties, e.g., right identity, right inverse, and cancellation laws, follow easily.
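For concreteness (an illustration of our own, using the definitions above), the additive group of integers modulo 4 may be written directly as such a matrix:
\begin{small}
\begin{verbatim}
(defconst *z4*
  '((0 1 2 3)
    (1 2 3 0)
    (2 3 0 1)
    (3 0 1 2)))
\end{verbatim}
\end{small}
Here {\tt (elts *z4*)} = {\tt (0 1 2 3)}, {\tt (e *z4*)} = 0, and, for example, {\tt (op 3 2 *z4*)} = 1, the entry in row {\tt (ind 3 *z4*)} = 3 and column {\tt (ind 2 *z4*)} = 2; likewise {\tt (inv 3 *z4*)} = 1, the first element whose product with 3 is the identity.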
Similarly, subgroups are defined by the predicate
\begin{small}
\begin{verbatim}
(defun subgroupp (h g)
  (and (groupp g)
       (groupp h)
       (sublistp (elts h) (elts g))
       (not (subgroupp-cex h g))))
\end{verbatim}
\end{small}
where {\tt subgroupp-cex} exhaustively searches for a pair of elements of {\tt h} on which the two group operations disagree. Thus,
\begin{small}
\begin{verbatim}
(defthm subgroup-op
  (implies (and (subgroupp h g)
                (in x h)
                (in y h))
           (equal (op x y h) (op x y g))))
\end{verbatim}
\end{small}
Again, the search produces a counterexample if it exists:
\begin{small}
\begin{verbatim}
(defthm not-subgroupp-cex
  (implies (and (groupp g)
                (groupp h)
                (sublistp (elts h) (elts g))
                (not (subgroupp h g)))
           (let* ((cex (subgroupp-cex h g))
                  (x (car cex))
                  (y (cadr cex)))
             (and (in x h)
                  (in y h)
                  (not (equal (op x y h) (op x y g)))))))
\end{verbatim}
\end{small}
The following results are also readily derived from the definition:
\begin{small}
\begin{verbatim}
(defthm subgroup-e
  (implies (subgroupp h g)
           (equal (e h) (e g))))

(defthm subgroup-inv
  (implies (and (subgroupp h g)
                (in x h))
           (equal (inv x h) (inv x g))))
\end{verbatim}
\end{small}
We also define a function {\tt subgroup}, which constructs the subgroup of a given group with a given element list if such a group exists. An illustration will be provided in the next section.
\section{Parametrized Groups}
The macro {\tt defgroup} is based on the following encapsulation, which constrains three functions representing the list of elements of a group, the group operation, and its inverse operator:
\begin{small}
\begin{verbatim}
(encapsulate (((glist) => *) ((gop * *) => *) ((ginv *) => *))
  (local (defun glist () (list 0)))
  (local (defun gop (x y) (+ x y)))
  (local (defun ginv (x) x))
  (defthm consp-glist (consp (glist)))
  (defthm dlistp-glist (dlistp (glist)))
  (defthm g-non-nil (not (member-equal () (glist))))
  (defthm g-identity
    (implies (member-equal x (glist))
             (equal (gop (car (glist)) x) x)))
  (defthm g-closed
    (implies (and (member-equal x (glist))
                  (member-equal y (glist)))
             (member-equal (gop x y) (glist))))
  (defthm g-assoc
    (implies (and (member-equal x (glist))
                  (member-equal y (glist))
                  (member-equal z (glist)))
             (equal (gop x (gop y z)) (gop (gop x y) z))))
  (defthm g-inverse
    (implies (member-equal x (glist))
             (and (member-equal (ginv x) (glist))
                  (equal (gop (ginv x) x) (car (glist)))))))
\end{verbatim}
\end{small}
Our definition of the group {\tt (g)} is based on the constrained functions {\tt gop} and {\tt glist}:
\begin{small}
\begin{verbatim}
(defun g-row (x m)
  (if (consp m)
      (cons (gop x (car m)) (g-row x (cdr m)))
    ()))

(defun g-aux (l m)
  (if (consp l)
      (cons (g-row (car l) m) (g-aux (cdr l) m))
    ()))

(defun g ()
  (let ((l (glist)))
    (g-aux l l)))
\end{verbatim}
\end{small}
Using the results discussed in Section~\ref{grps}, we prove the theorem
\begin{small}
\begin{verbatim}
(defthm groupp-g (groupp (g)))
\end{verbatim}
\end{small}
along with three results characterizing the group structure:
\begin{small}
\begin{verbatim}
(defthm glist-elts
  (equal (elts (g)) (glist)))

(defthm op-g-rewrite
  (implies (and (in x (g)) (in y (g)))
           (equal (op x y (g)) (gop x y))))

(defthmd inv-g-rewrite
  (implies (in x (g))
           (equal (inv x (g)) (ginv x))))
\end{verbatim}
\end{small}
The macro defines a parametrized family of groups given a parameter list, a predicate that the parameters must satisfy, and three terms corresponding to the above constrained functions.
For example, the additive group of integers modulo {\tt n} is generated by the following:
\begin{small}
\begin{verbatim}
(defun ninit (n)
  (if (zp n)
      ()
    (append (ninit (1- n)) (list (1- n)))))

(defun z+-op (x y n) (mod (+ x y) n))

(defun z+-inv (x n) (mod (- x) n))

(defgroup z+ (n)
  (posp n)
  (ninit n)
  (z+-op x y n)
  (z+-inv x n))
\end{verbatim}
\end{small}
Prior to the evaluation of a {\tt defgroup} form, a set of preliminary rewrite rules corresponding to the seven exported theorems of the encapsulation must be proved. The first three, which state that the specified list of elements is a non-{\tt NIL} list of distinct non-{\tt NIL} members, are generally trivial. In this case, the remaining four are also easy to prove:
\begin{small}
\begin{verbatim}
(defthm z+-identity
  (implies (and (posp n) (member-equal x (ninit n)))
           (equal (z+-op 0 x n) x)))

(defthm z+-closed
  (implies (and (posp n)
                (member-equal x (ninit n))
                (member-equal y (ninit n)))
           (member-equal (z+-op x y n) (ninit n))))

(defthm z+-assoc
  (implies (and (posp n)
                (member-equal x (ninit n))
                (member-equal y (ninit n))
                (member-equal z (ninit n)))
           (equal (z+-op x (z+-op y z n) n)
                  (z+-op (z+-op x y n) z n))))

(defthm z+-inverse
  (implies (and (posp n) (member-equal x (ninit n)))
           (and (member-equal (z+-inv x n) (ninit n))
                (equal (z+-op (z+-inv x n) x n) 0))))
\end{verbatim}
\end{small}
The family {\tt (z+ n)} is then defined by the above {\tt defgroup} form, which also automatically proves four theorems:
\begin{small}
\begin{verbatim}
(DEFTHM GROUPP-Z+
  (IMPLIES (POSP N) (GROUPP (Z+ N))))

(DEFTHM Z+-ELTS
  (IMPLIES (POSP N)
           (EQUAL (ELTS (Z+ N)) (NINIT N))))

(DEFTHM Z+-OP-REWRITE
  (IMPLIES (AND (POSP N) (IN X (Z+ N)) (IN Y (Z+ N)))
           (EQUAL (OP X Y (Z+ N)) (Z+-OP X Y N))))

(DEFTHM Z+-INV-REWRITE
  (IMPLIES (AND (POSP N) (IN X (Z+ N)))
           (EQUAL (INV X (Z+ N)) (Z+-INV X N))))
\end{verbatim}
\end{small}
Each of these results is derived by the same functional instantiation of the corresponding lemma pertaining to the constrained constant {\tt g}. For example,
\begin{small}
\begin{verbatim}
(DEFTHM GROUPP-Z+
  (IMPLIES (POSP N) (GROUPP (Z+ N)))
  :HINTS (("Goal" :USE ((:FUNCTIONAL-INSTANCE GROUPP-G
    (GLIST (LAMBDA NIL (IF (POSP N) (NINIT N) (GLIST))))
    (GOP (LAMBDA (X Y) (IF (POSP N) (Z+-OP X Y N) (GOP X Y))))
    (GINV (LAMBDA (X) (IF (POSP N) (Z+-INV X N) (GINV X))))
    (G-ROW (LAMBDA (X M) (IF (POSP N) (Z+-ROW X M N) (G-ROW X M))))
    (G-AUX (LAMBDA (L M) (IF (POSP N) (Z+-AUX L M N) (G-AUX L M))))
    (G (LAMBDA NIL (IF (POSP N) (Z+ N) (G)))))))))
\end{verbatim}
\end{small}
Note that {\tt Z+-ROW} and {\tt Z+-AUX} are auxiliary functions that are generated by {\tt defgroup} along with {\tt Z+}.
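For instance (an evaluation we performed, not shown in the original text), {\tt (z+ 4)} evaluates to precisely the matrix {\tt *z4*} displayed in Section~\ref{grps}:
\begin{small}
\begin{verbatim}
((0 1 2 3)
 (1 2 3 0)
 (2 3 0 1)
 (3 0 1 2))
\end{verbatim}
\end{small}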
The multiplicative group {\tt (z* n)} of integers modulo {\tt n} is similarly generated, replacing addition with multiplication:
\begin{small}
\begin{verbatim}
(defun z*-op (x y n) (mod (* x y) n))
\end{verbatim}
\end{small}
where we assume {\tt (and (natp n) (> n 1))}. The element list is the sublist {\tt (rel-primes n)} of {\tt (ninit n)} consisting of the integers relatively prime to {\tt n}. This list is computed using the greatest common divisor function, {\tt g-c-d}, which is treated in {\tt euclid.lisp}. It is clear that {\tt (car (rel-primes n))} = 1 satisfies the identity property. Closure and associativity follow from the established properties of {\tt g-c-d} and {\tt mod}. For the definition of the inverse operator, we appeal to the following property of {\tt g-c-d}:
\begin{small}
\begin{verbatim}
(defthm g-c-d-linear-combination
  (implies (and (integerp x) (integerp y))
           (= (+ (* (r-int x y) x) (* (s-int x y) y))
              (g-c-d x y))))
\end{verbatim}
\end{small}
Thus, {\tt (g-c-d x y)} is a linear combination of {\tt x} and {\tt y} with integer coefficients {\tt (r-int x y)} and {\tt (s-int x y)}. We define
\begin{small}
\begin{verbatim}
(defun z*-inv (x n) (mod (r-int x n) n))
\end{verbatim}
\end{small}
and the required property follows from {\tt g-c-d-linear-combination}:
\begin{small}
\begin{verbatim}
(defthm z*-inverse
  (implies (and (natp n) (> n 1) (member-equal x (rel-primes n)))
           (and (member-equal (z*-inv x n) (rel-primes n))
                (equal (z*-op (z*-inv x n) x n) 1))))
\end{verbatim}
\end{small}
Thus, we have
\begin{small}
\begin{verbatim}
(defgroup z* (n)
  (and (natp n) (> n 1))
  (rel-primes n)
  (z*-op x y n)
  (z*-inv x n))
\end{verbatim}
\end{small}
and the usual four generated theorems, including
\begin{small}
\begin{verbatim}
(DEFTHM GROUPP-Z*
  (IMPLIES (AND (NATP N) (> N 1)) (GROUPP (Z* N))))
\end{verbatim}
\end{small}
For example, evaluation of {\tt (z* 15)} yields a group of order 8:
\begin{small}
\begin{verbatim}
((1 2 4 7 8 11 13 14)
 (2 4 8 14 1 7 11 13)
 (4 8 1 13 2 14 7 11)
 (7 14 13 4 11 2 1 8)
 (8 1 2 11 4 13 14 7)
 (11 7 14 2 13 1 8 4)
 (13 11 7 1 14 8 4 2)
 (14 13 11 8 7 4 2 1))
\end{verbatim}
\end{small}
As an illustration of the {\tt subgroup} function mentioned at the end of Section~\ref{grps}, we observe that
\begin{small}
\begin{verbatim}
(subgroup '(1 4 7 13) (z* 15))
\end{verbatim}
\end{small}
is
\begin{small}
\begin{verbatim}
((1 4 7 13)
 (4 1 13 7)
 (7 13 4 1)
 (13 7 1 4))
\end{verbatim}
\end{small}
and {\tt (subgroupp (subgroup '(1 4 7 13) (z* 15)) (z* 15))} = {\tt T}. Note that in order for {\tt subgroup} to succeed in generating a group, the first member of the supplied list must be the identity element.\medskip

The element list of the symmetric group {\tt (sym n)} is given by
\begin{center}
{\tt (defund slist (n) (perms (ninit n)))}
\end{center}
where {\tt (perms l)} returns a list of all permutations of a list {\tt l}. The group operation is composition, defined by
\begin{small}
\begin{verbatim}
(defun comp-perm-aux (p r l)
  (if (consp l)
      (cons (nth (nth (car l) r) p)
            (comp-perm-aux p r (cdr l)))
    ()))

(defun comp-perm (p r n) (comp-perm-aux p r (ninit n)))
\end{verbatim}
\end{small}
and the inverse operator is
\begin{small}
\begin{verbatim}
(defun inv-perm-aux (p l)
  (if (consp l)
      (cons (index (car l) p) (inv-perm-aux p (cdr l)))
    ()))

(defun inv-perm (p n) (inv-perm-aux p (ninit n)))
\end{verbatim}
\end{small}
Once we establish the required preliminary lemmas (which has not yet been done at the time of writing), we shall invoke
\begin{small}
\begin{verbatim}
(defgroup sym (n)
  (posp n)
  (slist n)
  (comp-perm x y n)
  (inv-perm x n))
\end{verbatim}
\end{small}
In the meantime, we have defined a weaker version of {\tt defgroup} that defines a family of groups without proving any theorems, and does not require either the parameter constraint or the inverse operator:
\begin{small}
\begin{verbatim}
(defgroup-light sym (n)
  (slist n)
  (comp-perm x y n))
\end{verbatim}
\end{small}
This allows us to analyze concrete groups of the family.
For example, {\tt (sym 3)} is a group of order 6,
\begin{small}
\begin{verbatim}
(((0 1 2) (0 2 1) (1 0 2) (1 2 0) (2 0 1) (2 1 0))
 ((0 2 1) (0 1 2) (2 0 1) (2 1 0) (1 0 2) (1 2 0))
 ((1 0 2) (1 2 0) (0 1 2) (0 2 1) (2 1 0) (2 0 1))
 ((1 2 0) (1 0 2) (2 1 0) (2 0 1) (0 1 2) (0 2 1))
 ((2 0 1) (2 1 0) (0 2 1) (0 1 2) (1 2 0) (1 0 2))
 ((2 1 0) (2 0 1) (1 2 0) (1 0 2) (0 2 1) (0 1 2)))
\end{verbatim}
\end{small}
and we can prove
\begin{small}
\begin{verbatim}
(defthm groupp-sym-3 (groupp (sym 3)))
\end{verbatim}
\end{small}
by direct computation. Note that the identity element of {\tt (sym n)} is the trivial permutation {\tt (ninit n)}. To construct the alternating groups, we define a function {\tt cyc} that converts an element of {\tt (sym n)} to an alternative representation as a product of cycles. For example, in {\tt (sym 5)},
\begin{center}
{\tt (cyc '(2 3 4 1 0))} = {\tt ((0 2 4) (1 3))}.
\end{center}
We can derive the parity of a permutation from the observation that a cycle of odd (resp., even) length is a product of an even (resp., odd) number of transpositions. Thus, we define an even permutation as follows:
\begin{small}
\begin{verbatim}
(defun even-cyc (cyc)
  (if (consp cyc)
      (if (evenp (len (car cyc)))
          (not (even-cyc (cdr cyc)))
        (even-cyc (cdr cyc)))
    t))

(defun even-perm (perm) (even-cyc (cyc perm)))
\end{verbatim}
\end{small}
The function {\tt even-perms} extracts the sublist of even permutations from a list. The alternating group {\tt (alt n)} is the subgroup of {\tt (sym n)} consisting of the even permutations:
\begin{small}
\begin{verbatim}
(defgroup-light alt (n)
  (even-perms (slist n))
  (comp-perm x y n))
\end{verbatim}
\end{small}
For example, {\tt (alt 3)} is the group
\begin{small}
\begin{verbatim}
(((0 1 2) (1 2 0) (2 0 1))
 ((1 2 0) (2 0 1) (0 1 2))
 ((2 0 1) (0 1 2) (1 2 0)))
\end{verbatim}
\end{small}
and we can prove theorems such as {\tt (subgroupp (alt 5) (sym 5))}.
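As a quick check of the parity computation (evaluations of our own):
\begin{center}
{\tt (even-perm '(1 2 3 4 0))} = {\tt T},
\end{center}
since this permutation is a single cycle of odd length 5, while
\begin{center}
{\tt (even-perm '(0 2 1 3 4))} = {\tt NIL},
\end{center}
since its cycle decomposition consists of the single transposition exchanging 1 and 2.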
\section{Cosets and Lagrange's Theorem}
For our purposes, we need only consider left cosets. Given an element {\tt x} and a subgroup {\tt h} of a group {\tt g}, {\tt (lcoset x h g)} is defined to be a list of all elements of {\tt g} of the form {\tt (op x y g)} that satisfy {\tt (in y h)}. In particular, {\tt (lcoset (e g) h g)} is a permutation of {\tt (elts h)}. Our definition of {\tt lcoset} ensures that this list is ordered by indices with respect to {\tt g}. It follows that its members are distinct:
\begin{small}
\begin{verbatim}
(defthm dlistp-lcosets
  (implies (and (subgroupp h g) (in x g))
           (dlistp (lcoset x h g))))
\end{verbatim}
\end{small}
It is also easily shown that the length of each coset is the order of the subgroup:
\begin{small}
\begin{verbatim}
(defthm len-lcoset
  (implies (and (subgroupp h g) (in x g))
           (equal (len (lcoset x h g)) (order h))))
\end{verbatim}
\end{small}
The following is a useful criterion for coset membership:
\begin{small}
\begin{verbatim}
(defthmd member-lcoset-iff
  (implies (and (subgroupp h g) (in x g) (in y g))
           (iff (member-equal y (lcoset x h g))
                (in (op (inv x g) y g) h))))
\end{verbatim}
\end{small}
As a consequence of this result, intersecting cosets have the same members, and the following may be derived from the ordering property:
\begin{small}
\begin{verbatim}
(defthmd equal-lcoset
  (implies (and (subgroupp h g) (in x g) (in y g)
                (member-equal y (lcoset x h g)))
           (equal (lcoset y h g) (lcoset x h g))))
\end{verbatim}
\end{small}
The list {\tt (lcosets h g)} is constructed by traversing {\tt (elts g)} and adding a coset to the list whenever an element is encountered that does not already appear in the list. By definition, the length of the list is the index of {\tt h} in {\tt g}:
\begin{small}
\begin{verbatim}
(defun subgroup-index (h g) (len (lcosets h g)))
\end{verbatim}
\end{small}
Our proof of Lagrange's Theorem is based on the list {\tt (append-list (lcosets h g))}, produced by appending all members of {\tt (lcosets h g)}. The above results lead to the following properties of this list:
\begin{small}
\begin{verbatim}
(defthm dlistp-append-list-lcosets
  (implies (subgroupp h g)
           (dlistp (append-list (lcosets h g)))))

(defthm len-lcosets
  (implies (subgroupp h g)
           (equal (len (append-list (lcosets h g)))
                  (* (order h) (subgroup-index h g)))))
\end{verbatim}
\end{small}
The proof of Lagrange's Theorem depends on the observation that if each of two lists of distinct members is a sublist of the other, then the lists have the same length:
\begin{small}
\begin{verbatim}
(defthmd sublistp-equal-len
  (implies (and (dlistp l) (dlistp m)
                (sublistp l m) (sublistp m l))
           (equal (len l) (len m))))
\end{verbatim}
\end{small}
The hypotheses of this lemma are readily established for the lists {\tt (append-list (lcosets h g))} and {\tt (elts g)}, and the theorem follows:
\begin{small}
\begin{verbatim}
(defthm lagrange
  (implies (and (groupp g) (subgroupp h g))
           (equal (* (order h) (subgroup-index h g))
                  (order g))))
\end{verbatim}
\end{small}
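In conventional notation, the counting argument behind this proof reads
\[
|G| \;=\; \sum_{C \in \mathcal{C}} |C| \;=\; [G:H]\cdot |H|,
\]
where $\mathcal{C}$ denotes the set of left cosets of $H$ in $G$: by {\tt len-lcoset} each coset has exactly $|H|$ members, and by definition there are {\tt (subgroup-index h g)} $= [G:H]$ of them.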
\section{Normal Subgroups and Quotient Groups}\label{normal}
For elements {\tt x} and {\tt y} of a group {\tt g}, the conjugate of {\tt x} by {\tt y} is defined by
\begin{small}
\begin{verbatim}
(defund conj (x y g) (op (op (inv y g) x g) y g))
\end{verbatim}
\end{small}
Note that this computation returns {\tt x} iff {\tt x} and {\tt y} commute. A normal subgroup {\tt h} of {\tt g} is recognized by the predicate {\tt (normalp h g)}, which first requires that {\tt h} be a subgroup of {\tt g} and then exhaustively checks that every conjugate of every element of {\tt h} is an element of {\tt h}. As usual, we have the following two results:
\begin{small}
\begin{verbatim}
(defthm normalp-conj
  (implies (and (normalp h g) (in x h) (in y g))
           (in (conj x y g) h)))

(defthmd not-normalp-cex
  (let* ((cex (normalp-cex h g))
         (x (car cex))
         (y (cadr cex)))
    (implies (and (subgroupp h g) (not (normalp h g)))
             (and (in x h)
                  (in y g)
                  (not (in (conj x y g) h))))))
\end{verbatim}
\end{small}
We shall apply {\tt defgroup} to define the group {\tt (quotient g h)} when {\tt h} is a normal subgroup of {\tt g}. The elements of this group are the members of {\tt (lcosets h g)}, and the identity element is the coset of {\tt (e g)}:
\begin{small}
\begin{verbatim}
(defun qe (h g) (lcoset (e g) h g))
\end{verbatim}
\end{small}
Thus, we must rearrange {\tt (lcosets h g)}, moving this element to the front of the list:
\begin{small}
\begin{verbatim}
(defun qlist (h g)
  (cons (qe h g) (remove1-equal (qe h g) (lcosets h g))))
\end{verbatim}
\end{small}
The group operation is
\begin{small}
\begin{verbatim}
(defun qop (x y h g)
  (lcoset (op (car x) (car y) g) h g))
\end{verbatim}
\end{small}
and the inverse operator is
\begin{small}
\begin{verbatim}
(defun qinv (x h g)
  (lcoset (inv (car x) g) h g))
\end{verbatim}
\end{small}
The closure property is trivial. The remaining properties required by {\tt defgroup} (identity, associativity, and inverse) may be derived from the following result, which is a consequence of {\tt normalp-conj} and {\tt member-lcoset-iff}:
\begin{small}
\begin{verbatim}
(defthm op-qop
  (implies (and (normalp h g)
                (member-equal x (qlist h g))
                (member-equal y (qlist h g))
                (member-equal a x)
                (member-equal b y))
           (member-equal (op a b g) (qop x y h g))))
\end{verbatim}
\end{small}
We may now invoke
\begin{small}
\begin{verbatim}
(defgroup quotient (g h)
  (normalp h g)
  (qlist h g)
  (qop x y h g)
  (qinv x h g))
\end{verbatim}
\end{small}
which generates the usual four results, including
\begin{small}
\begin{verbatim}
(DEFTHM GROUPP-QUOTIENT
  (IMPLIES (NORMALP H G) (GROUPP (QUOTIENT G H))))
\end{verbatim}
\end{small}
It is easily shown that any subgroup of index 2 is normal. For example, by direct computation,
\begin{center}
{\tt (normalp (alt 5) (sym 5))} = {\tt T}.
\end{center}
As another example, the element {\tt (1 2 0)} of {\tt (sym 3)} generates a subgroup of order 3 in a group of order 6, and therefore,
\begin{center}
{\tt (normalp (subgroup '((0 1 2) (1 2 0) (2 0 1)) (sym 3)) (sym 3))} = {\tt T}.
\end{center}
According to {\tt GROUPP-QUOTIENT}, its quotient group
\begin{small}
\begin{verbatim}
(quotient (sym 3) (subgroup '((0 1 2) (1 2 0) (2 0 1)) (sym 3)))
\end{verbatim}
\end{small}
is a group of order 2:
\begin{small}
\begin{verbatim}
((((0 1 2) (1 2 0) (2 0 1)) ((0 2 1) (1 0 2) (2 1 0)))
 (((0 2 1) (1 0 2) (2 1 0)) ((0 1 2) (1 2 0) (2 0 1))))
\end{verbatim}
\end{small}
Of course, any subgroup of an abelian group is normal. For example, {\tt (subgroup '(1 3 9) (z* 13))} is a normal subgroup of {\tt (z* 13)} of index 4. Its quotient group is
\begin{small}
\begin{verbatim}
(((1 3 9) (2 5 6) (7 8 11) (4 10 12))
 ((2 5 6) (4 10 12) (1 3 9) (7 8 11))
 ((7 8 11) (1 3 9) (4 10 12) (2 5 6))
 ((4 10 12) (7 8 11) (2 5 6) (1 3 9)))
\end{verbatim}
\end{small}
\section{Parametrized Subgroups}
The macro {\tt defsubgroup} calls {\tt defgroup} to define a subgroup of a given group {\tt g}. The last two arguments of {\tt defgroup} are not supplied to {\tt defsubgroup}, since they are always {\tt (op x y g)} and {\tt (inv x g)}.
As an illustration, the {\it centralizer} of an element {\tt a} of {\tt g} is the subgroup consisting of all elements that commute with {\tt a}. The definition of its element list, {\tt (centizer-elts a g)}, is straightforward. Several of the rewrite rules required by {\tt defgroup} are generated by {\tt defsubgroup}, but the following must be proved by the user:
\begin{small}
\begin{verbatim}
(defthm dlistp-centizer-elts
  (implies (and (groupp g) (in a g))
           (dlistp (centizer-elts a g))))

(defthm sublistp-centizer-elts
  (implies (and (groupp g) (in a g))
           (sublistp (centizer-elts a g) (elts g))))

(defthm centizer-elts-identity
  (implies (and (groupp g) (in a g))
           (equal (car (centizer-elts a g)) (e g))))

(defthm consp-centizer-elts
  (implies (and (groupp g) (in a g))
           (consp (centizer-elts a g))))

(defthm centizer-elts-closed
  (implies (and (groupp g) (in a g)
                (member-equal x (centizer-elts a g))
                (member-equal y (centizer-elts a g)))
           (member-equal (op x y g) (centizer-elts a g))))

(defthm centizer-elts-inverse
  (implies (and (groupp g) (in a g)
                (member-equal x (centizer-elts a g)))
           (member-equal (inv x g) (centizer-elts a g))))
\end{verbatim}
\end{small}
We may then invoke
\begin{small}
\begin{verbatim}
(defsubgroup centralizer (a g)
  (and (groupp g) (in a g))
  (centizer-elts a g))
\end{verbatim}
\end{small}
In addition to the lemmas generated by {\tt defsubgroup}, this produces
\begin{small}
\begin{verbatim}
(DEFTHM SUBGROUPP-CENTRALIZER
  (IMPLIES (AND (GROUPP G) (IN A G))
           (SUBGROUPP (CENTRALIZER A G) G)))
\end{verbatim}
\end{small}
The {\it center} of {\tt g} consists of all elements that commute with every element of {\tt g}. The list of such elements, {\tt (cent-elts g)}, is again easily defined, and after proving the requisite rewrite rules, we have
\begin{small}
\begin{verbatim}
(defsubgroup center (g) (groupp g) (cent-elts g))
\end{verbatim}
\end{small}
Our final example is the cyclic subgroup generated by an element {\tt a} of {\tt g}. First we define the powers of {\tt a}:
\begin{small}
\begin{verbatim}
(defun power (a n g)
  (if (zp n)
      (e g)
    (op a (power a (1- n) g) g)))
\end{verbatim}
\end{small}
The usual formulas for a product of powers and a power of a power are derived by induction:
\begin{small}
\begin{verbatim}
(defthm power+
  (implies (and (groupp g) (in a g) (natp n) (natp m))
           (equal (op (power a n g) (power a m g) g)
                  (power a (+ n m) g))))

(defthm power*
  (implies (and (groupp g) (in a g) (natp n) (natp m))
           (equal (power (power a n g) m g)
                  (power a (* n m) g))))
\end{verbatim}
\end{small}
Next, we define the order of {\tt a} in {\tt g}:
\begin{small}
\begin{verbatim}
(defun ord-aux (a n g)
  (declare (xargs :measure (nfix (- (order g) n))))
  (if (equal (power a n g) (e g))
      n
    (if (and (natp n) (< n (order g)))
        (ord-aux a (1+ n) g)
      ())))

(defun ord (a g) (ord-aux a 1 g))
\end{verbatim}
\end{small}
We cannot have {\tt (ord a g)} = {\tt NIL}, for it would then follow from {\tt power+} that the powers of {\tt a} include $\mbox{\tt (order g)} + 1$ distinct elements.
This observation has the following consequences:
\begin{small}
\begin{verbatim}
(defthm ord<=order
  (implies (and (groupp g) (in a g))
           (and (posp (ord a g))
                (<= (ord a g) (order g)))))

(defthm divides-ord
  (implies (and (groupp g) (in a g) (natp n))
           (iff (equal (power a n g) (e g))
                (divides (ord a g) n))))

(defthm power-mod
  (implies (and (groupp g) (in a g) (natp n))
           (equal (power a n g)
                  (power a (mod n (ord a g)) g))))

(defthm ord-power-div
  (implies (and (groupp g) (in a g) (posp n)
                (divides n (ord a g)))
           (equal (ord (power a n g) g)
                  (/ (ord a g) n))))
\end{verbatim}
\end{small}
Thus, there are {\tt (ord a g)} distinct powers of {\tt a}. A list of these elements is computed by {\tt powers}:
\begin{small}
\begin{verbatim}
(defun powers-aux (a n g)
  (if (zp n)
      ()
    (append (powers-aux a (1- n) g)
            (list (power a (1- n) g)))))

(defun powers (a g) (powers-aux a (ord a g) g))
\end{verbatim}
\end{small}
The following are readily derived from the definition:
\begin{small}
\begin{verbatim}
(defthm member-powers
  (implies (and (groupp g) (in a g)
                (natp n) (< n (ord a g)))
           (equal (nth n (powers a g))
                  (power a n g))))

(defthm power-index
  (implies (and (groupp g) (in a g)
                (member-equal x (powers a g)))
           (equal (power a (index x (powers a g)) g) x)))
\end{verbatim}
\end{small}
It follows from {\tt power-mod} that {\tt (powers a g)} is closed under the group operation, and it follows from {\tt power-index} and {\tt power+} that for {\tt x} in {\tt (powers a g)},
\begin{small}
\begin{verbatim}
(inv x g) = (power a (- (ord a g) (index x (powers a g))) g)
\end{verbatim}
\end{small}
and hence, {\tt (inv x g)} belongs to {\tt (powers a g)}. The remaining prerequisites are trivial, and we have
\begin{small}
\begin{verbatim}
(defsubgroup cyclic (a g)
  (and (groupp g) (in a g))
  (powers a g))
\end{verbatim}
\end{small}
Note that the two example subgroups at the end of Section~\ref{normal} can be computed as cyclic subgroups:
\begin{center}
{\tt (subgroup '((0 1 2) (1 2 0) (2 0 1)) (sym 3))} = {\tt (cyclic '(1 2 0) (sym 3))}
\end{center}
and
\begin{center}
{\tt (subgroup '(1 3 9) (z* 13))} = {\tt (cyclic 3 (z* 13))}.
\end{center}
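As a further example (computed by us from the definitions above), the element 2 of {\tt (z* 15)} has order 4, since $2^4 = 16 \equiv 1 \pmod{15}$. Thus {\tt (powers 2 (z* 15))} = {\tt (1 2 4 8)}, and {\tt (cyclic 2 (z* 15))} is
\begin{small}
\begin{verbatim}
((1 2 4 8)
 (2 4 8 1)
 (4 8 1 2)
 (8 1 2 4))
\end{verbatim}
\end{small}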
As another example, the permutation {\tt (1 2 3 4 0)} of {\tt (sym 5)} is of order 5, and its cyclic subgroup
\begin{center}
{\tt (cyclic '(1 2 3 4 0) (sym 5))}
\end{center}
is
\begin{small}
\begin{verbatim}
(((0 1 2 3 4) (1 2 3 4 0) (2 3 4 0 1) (3 4 0 1 2) (4 0 1 2 3))
 ((1 2 3 4 0) (2 3 4 0 1) (3 4 0 1 2) (4 0 1 2 3) (0 1 2 3 4))
 ((2 3 4 0 1) (3 4 0 1 2) (4 0 1 2 3) (0 1 2 3 4) (1 2 3 4 0))
 ((3 4 0 1 2) (4 0 1 2 3) (0 1 2 3 4) (1 2 3 4 0) (2 3 4 0 1))
 ((4 0 1 2 3) (0 1 2 3 4) (1 2 3 4 0) (2 3 4 0 1) (3 4 0 1 2)))
\end{verbatim}
\end{small}
\section{Abelian Case of Cauchy's Theorem}
The formulation of Cauchy's Theorem requires a witness function, which searches a group for an element of a given order:
\begin{small}
\begin{verbatim}
(defun elt-of-ord-aux (l n g)
  (if (consp l)
      (if (= (ord (car l) g) n)
          (car l)
        (elt-of-ord-aux (cdr l) n g))
    ()))

(defun elt-of-ord (n g) (elt-of-ord-aux (elts g) n g))
\end{verbatim}
\end{small}
Thus, {\tt (elt-of-ord n g)} selects an element of {\tt g} of order {\tt n}, or returns {\tt NIL} if none exists:
\begin{small}
\begin{verbatim}
(defthm elt-of-ord-ord
  (implies (and (groupp g) (natp n) (elt-of-ord n g))
           (and (in (elt-of-ord n g) g)
                (equal (ord (elt-of-ord n g) g) n))))

(defthm not-elt-of-ord
  (implies (and (groupp g) (natp n)
                (null (elt-of-ord n g)) (in a g))
           (not (= (ord a g) n))))
\end{verbatim}
\end{small}
Here are some simple examples:
\begin{itemize}
\item {\tt (elt-of-ord 5 (sym 5))} = {\tt (1 2 3 4 0)};
\item {\tt (elt-of-ord 22 (z* 23))} = 5, the least primitive root of 23;
\item {\tt (elt-of-ord (order (z* 35)) (z* 35))} = {\tt NIL}, since {\tt (z* 35)} is not cyclic.
\end{itemize}
The theorem may be stated as follows:
\begin{small}
\begin{verbatim}
(defthm cauchy
  (implies (and (groupp g) (primep p) (divides p (order g)))
           (and (in (elt-of-ord p g) g)
                (equal (ord (elt-of-ord p g) g) p))))
\end{verbatim}
\end{small}
Our proof, which closely adheres to the informal proof presented in \cite{rotman}, consists of two steps, both involving induction on the order of {\tt g}. First, it is proved for abelian {\tt g}, and then it is shown that if {\tt g} is nonabelian, then it must have a proper subgroup with order divisible by {\tt p}. Both steps depend critically on a result from {\tt euclid.lisp}:
\begin{small}
\begin{verbatim}
(defthm euclid
  (implies (and (primep p)
                (integerp a) (integerp b)
                (not (divides p a))
                (not (divides p b)))
           (not (divides p (* a b)))))
\end{verbatim}
\end{small}
If {\tt g} has no element of order {\tt p}, then by {\tt ord-power-div}, it has no element of order divisible by {\tt p}, and hence no cyclic subgroup of order divisible by {\tt p}. Combining {\tt lagrange} and {\tt euclid}, we have
\begin{small}
\begin{verbatim}
(defthm divides-order-quotient
  (implies (and (groupp g) (primep p)
                (divides p (order g))
                (not (elt-of-ord p g))
                (in a g))
           (divides p (order (quotient g (cyclic a g))))))
\end{verbatim}
\end{small}
By {\tt QUOTIENT-OP-REWRITE} and induction,
\begin{small}
\begin{verbatim}
(defthm lcoset-power
  (implies (and (normalp h g) (in x g) (natp n))
           (equal (power (lcoset x h g) n (quotient g h))
                  (lcoset (power x n g) h g))))
\end{verbatim}
\end{small}
It follows that
\begin{center}
{\tt (power (lcoset x h g) (ord x g) (quotient g h))} = {\tt (lcoset (e g) h g)}
\end{center}
where
\begin{center}
{\tt (lcoset (e g) h g)} = {\tt (e (quotient g h))}
\end{center}
By {\tt divides-ord}, {\tt (ord x g)} is divisible by {\tt (ord (lcoset x h g) (quotient g h))}.
Therefore, if {\tt g} has no element of order divisible by an integer {\tt m}, then neither does {\tt (quotient g h)}:
\begin{small}
\begin{verbatim}
(defthm lift-elt-of-ord
  (implies (and (normalp h g) (posp m)
                (elt-of-ord m (quotient g h)))
           (elt-of-ord m g)))
\end{verbatim}
\end{small}
Now assume that {\tt g} is abelian. Then every subgroup of {\tt g} is abelian and normal. Since the quotient group of any nontrivial cyclic subgroup of {\tt g} has a smaller order than {\tt g}, we have our induction scheme:
\begin{small}
\begin{verbatim}
(defun cauchy-induction (g)
  (declare (xargs :measure (order g)))
  (if (and (groupp g) (abelianp g) (> (order g) 1))
      (cauchy-induction (quotient g (cyclic (cadr (elts g)) g)))
    ()))

(defthm cauchy-abelian-lemma
  (implies (and (groupp g) (abelianp g)
                (primep p) (divides p (order g)))
           (elt-of-ord p g))
  :hints (("Goal" :induct (cauchy-induction g))))
\end{verbatim}
\end{small}
In the proof of the above lemma, ACL2 generates a single nontrivial subgoal, the hypothesis of which is the instantiation of the goal with
\begin{center}
{\tt (quotient g (cyclic (cadr (elts g)) g))}
\end{center}
substituted for {\tt g}. By {\tt divides-order-quotient}, the order of this quotient group is divisible by {\tt p}, and therefore, by hypothesis, it has an element of order {\tt p}. The subgoal follows from {\tt lift-elt-of-ord}. The final theorem is an immediate consequence of {\tt cauchy-abelian-lemma} and {\tt elt-of-ord-ord}:
\begin{small}
\begin{verbatim}
(defthm cauchy-abelian
  (implies (and (groupp g) (abelianp g)
                (primep p) (divides p (order g)))
           (and (in (elt-of-ord p g) g)
                (equal (ord (elt-of-ord p g) g) p))))
\end{verbatim}
\end{small}
\section{Conjugacy Classes and the Class Equation}
The general case of Cauchy's Theorem is based on an expression for the order of a group derived from a partition of its elements into {\it conjugacy classes}.
The ordered list of conjugates of {\tt x} is {\tt (conjs x g)}, computed by
\begin{small}
\begin{verbatim}
(defun conjs-aux (x l g)
  (if (consp l)
      (if (member-equal (conj x (car l) g)
                        (conjs-aux x (cdr l) g))
          (conjs-aux x (cdr l) g)
        (insert (conj x (car l) g)
                (conjs-aux x (cdr l) g)
                g))
    ()))

(defund conjs (x g) (conjs-aux x (elts g) g))
\end{verbatim}
\end{small}
Conjugacy is easily shown to be an equivalence relation, and it follows that intersecting classes are equal:
\begin{small}
\begin{verbatim}
(defthmd equal-conjs
  (implies (and (groupp g) (in x g) (in y g)
                (member-equal y (conjs x g)))
           (equal (conjs y g) (conjs x g))))
\end{verbatim}
\end{small}
We define a bijection between the conjugates of {\tt x} and the cosets of its centralizer:
\begin{small}
\begin{verbatim}
(defund conj2coset (y x g)
  (lcoset (inv (conjer y x g) g) (centralizer x g) g))

(defund coset2conj (c x g)
  (conj x (inv (car c) g) g))

(defthm coset2conj-conj2coset
  (implies (and (groupp g) (in x g)
                (member-equal y (conjs x g)))
           (equal (coset2conj (conj2coset y x g) x g) y)))

(defthm conj2coset-coset2conj
  (implies (and (groupp g) (in x g)
                (member-equal c (lcosets (centralizer x g) g)))
           (equal (conj2coset (coset2conj c x g) x g) c)))
\end{verbatim}
\end{small}
It follows that the size of the conjugacy class is the index of the centralizer:
\begin{small}
\begin{verbatim}
(defthm len-conjs-cosets
  (implies (and (groupp g) (in x g))
           (equal (len (conjs x g))
                  (subgroup-index (centralizer x g) g))))
\end{verbatim}
\end{small}
Since {\tt (len (conjs x g))} = 1 iff {\tt (in x (center g))}, a list of the nontrivial conjugacy classes is computed by
\begin{small}
\begin{verbatim}
(defun conjs-list-aux (l g)
  (if (consp l)
      (let ((conjs (conjs-list-aux (cdr l) g)))
        (if (or (in (car l) (center g))
                (member-list (car l) conjs))
            conjs
          (cons (conjs (car l) g) conjs)))
    ()))

(defund conjs-list (g) (conjs-list-aux (elts g) g))
\end{verbatim}
\end{small}
Thus, we can show that the following is a list of distinct elements that contains every element of {\tt g}:
\begin{small}
\begin{verbatim}
(append (elts (center g)) (append-list (conjs-list g)))
\end{verbatim}
\end{small}
As a consequence, we have the {\it class equation}:
\begin{small}
\begin{verbatim}
(defthmd class-equation
  (implies (groupp g)
           (equal (len (append (elts (center g))
                               (append-list (conjs-list g))))
                  (order g))))
\end{verbatim}
\end{small}
\section{General Case of Cauchy's Theorem}
Assume that the order of {\tt g} is divisible by a prime {\tt p}. The function {\tt find-elt} searches for an element outside the center of {\tt g} that has a centralizer with order divisible by {\tt p}:
\begin{small}
\begin{verbatim}
(defun find-elt-aux (l g p)
  (if (consp l)
      (if (and (not (in (car l) (center g)))
               (divides p (order (centralizer (car l) g))))
          (car l)
        (find-elt-aux (cdr l) g p))
    ()))

(defund find-elt (g p) (find-elt-aux (elts g) g p))
\end{verbatim}
\end{small}
If such an element exists, then since it is not in the center, the order of its centralizer is less than that of {\tt g}.
This observation provides our induction scheme:
\begin{small}
\begin{verbatim}
(defun cauchy-induction (g p)
  (declare (xargs :measure (order g)))
  (if (and (groupp g) (primep p) (find-elt g p))
      (cauchy-induction (centralizer (find-elt g p) g) p)
    t))
\end{verbatim}
\end{small}
On the other hand, if no such element exists, then for every non-central element {\tt x}, {\tt lagrange} implies that the index of the centralizer of {\tt x} is divisible by {\tt p}, and by {\tt len-conjs-cosets}, so is {\tt (len (conjs x g))}. According to the class equation, the same is true of the order of {\tt (center g)}. Since {\tt (center g)} is abelian, we may apply {\tt cauchy-abelian} to complete the induction, and we have
\begin{small}
\begin{verbatim}
(defthmd cauchy
  (implies (and (groupp g) (primep p) (divides p (order g)))
           (and (in (elt-of-ord p g) g)
                (equal (ord (elt-of-ord p g) g) p))))
\end{verbatim}
\end{small}
\section{Conclusion}
In 2007, Georges Gonthier et al.~\cite{georges} embarked on a formalization of finite group theory in Coq, with the objective of a machine-checked proof of the Feit-Thompson Theorem: {\it All groups of odd order are solvable.} Six years later, the ultimate success of this undertaking was announced in an Inria Technical Report \cite{gonthier} listing fifteen coauthors. In light of the experience of our present project, it would be unsurprising to find that the Inria result did indeed involve some ninety man-years of effort. The leap in complexity from Cauchy's Theorem to Feit-Thompson is daunting, but as C.~S.~Lewis reminds us, “With the possible exception of the equator, everything begins somewhere.” We may find further solace in a plan to pursue a direction orthogonal to Inria's objective of the classification of finite groups, and better suited to ACL2's strengths. While not specifically designed for the formalization of higher mathematics, ACL2 is equipped with sophisticated procedures for managing rational arithmetic and polynomials~\cite{krug}. Algebraic number theory, the study of finite extension fields of the rationals and their Galois groups, could be a fruitful application area founded on a formalization of elementary group theory. We have already demonstrated that our approach provides group computations in a straightforward manner, and we may anticipate that the computational power and proof automation of ACL2 can be brought to bear on the analysis and verification of a variety of number-theoretic algorithms of practical significance. Clearly, there is much work to be done before such a plan can be more than a fanciful dream.
\nocite{*}
\bibliographystyle{eptcs}
\titlespacing*\section{0pt}{9pt plus 4pt minus 2pt}{5pt plus 2pt minus 2pt}
\titlespacing*\subsection{0pt}{9pt plus 4pt minus 2pt}{5pt plus 2pt minus 2pt}
\titlespacing*\subsubsection{0pt}{9pt plus 4pt minus 2pt}{5pt plus 2pt minus 2pt}
\title{{QUIC-FL}\xspace: Quick Unbiased Compression for Federated Learning}
\newcommand{{QUIC-FL}\xspace}{{QUIC-FL}\xspace}
\newcommand*\samethanks[1][\value{footnote}]{\footnotemark[#1]}
\author{%
Ran Ben Basat \samethanks[1]\\ University College London\\ \texttt{r.benbasat@ucl.ac.uk}
\And Shay Vargaftik \thanks{Equal Contribution.} \ \\ VMware Research \\ \texttt{shayv@vmware.com}
\And Amit Portnoy \samethanks[1]\\ Ben-Gurion University \\ \texttt{amitport@post.bgu.ac.il}
\And \hspace{5mm}Gil Einziger \\ \hspace{5mm}Ben-Gurion University\\ \hspace{5mm}\texttt{gilein@bgu.ac.il}
\And \hspace{4mm}Yaniv Ben-Itzhak \\ \hspace{4mm}VMware Research \\ \hspace{4mm}\texttt{ybenitzhak@vmware.com}
\And \hspace{-2mm}Michael Mitzenmacher \\ \hspace{-2mm}Harvard University \\ \hspace{-2mm}\texttt{michaelm@eecs.harvard.edu} }
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
\newcommand{\floor}[1]{\left\lfloor#1\right\rfloor}
\newcommand{\ceil}[1]{\left\lceil#1\right\rceil}
\newcommand{\parentheses}[1]{\left(#1\right)}
\newcommand{\angles}[1]{\left\langle#1\right\rangle}
\newcommand{\brackets}[1]{\left[#1\right]}
\newcommand{\set}[1]{\left\{#1\right\}}
\newcommand{\abs}[1]{\left|#1\right|}
\newtheorem{theorem}{Theorem}
\begin{document}
\maketitle
\begin{abstract}
Distributed Mean Estimation (DME) is a fundamental building block in communication efficient federated learning. In DME, clients communicate their lossily compressed gradients to the parameter server, which estimates the average and updates the model. State-of-the-art DME techniques apply either unbiased quantization methods, resulting in large estimation errors, or biased quantization methods, where unbiasing the result requires that the server decode each gradient individually, which markedly slows the aggregation time. In this paper, we propose {QUIC-FL}\xspace, a DME algorithm that achieves the best of all worlds. {QUIC-FL}\xspace is unbiased, offers fast aggregation time, and is competitive with the most accurate (slow aggregation) DME techniques. To achieve this, we formalize the problem in a novel way that allows \mbox{us to use standard solvers to design near-optimal unbiased quantization schemes.}
\end{abstract}
\section{Introduction}
In federated learning~\cite{mcmahan2017communication,kairouz2019advances}, clients periodically send their gradients to the parameter server, which computes their mean. This communication is often a network bottleneck, and methods to approximate the mean using small communication are desirable. The Distributed Mean Estimation problem (DME)~\cite{pmlr-v70-suresh17a} formalizes this fundamental building block as follows: each of $n$ \emph{clients} communicates a representation of a $d$-dimensional vector to a \emph{parameter server}, which estimates the vectors' mean. Various DME methods have been studied (e.g.,~\cite{pmlr-v70-suresh17a,konevcny2018randomized,vargaftik2021drive,davies2021new,EDEN}), examining tradeoffs between the required bandwidth and performance metrics such as the estimation accuracy, learning speed, and the eventual accuracy of the model.
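In symbols, each client $i$ holds a vector $x_i \in \mathbb{R}^d$, and the server outputs an estimate $\widehat{\overline{x}}$ of
\[
\overline{x} = \frac{1}{n}\sum_{i=1}^{n} x_i\ .
\]
The accuracy measure referred to throughout the discussion below is the normalized mean squared error; one common normalization (stated here as an assumption, ahead of the formal definitions) is
\[
\ensuremath{\mathit{NMSE}}\xspace = \frac{\mathbb{E}\brackets{\norm{\widehat{\overline{x}} - \overline{x}}_2^2}}{\frac{1}{n}\sum_{i=1}^{n}\norm{x_i}_2^2}\ .
\]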
These works utilize lossy compression techniques, using only a small number of bits per coordinate, which is shown to accelerate the training process~\cite{bai2021gradient,zhong2021compressed}. For example, in~\cite{pmlr-v70-suresh17a}, each client randomly rotates its vector before applying stochastic quantization. When receiving the messages from the clients, the server sums up the estimates of the rotated vectors and applies the inverse rotation. As the largest rotated coordinates are asymptotically larger than the average coordinate magnitude, the resulting Normalized Mean Squared Error (\ensuremath{\mathit{NMSE}}\xspace) is bounded by $O\parentheses{\log d /n}$. They also propose an entropy encoding method that reduces the \ensuremath{\mathit{NMSE}}\xspace to $O(1/n)$ but is slow and not GPU-friendly.

A different approach to DME computes Kashin's representation~\cite{lyubarskii2010uncertainty} of a client's vector before applying quantization~\cite{caldas2018expanding,safaryan2020uncertainty}. Intuitively, this replaces the input $d$-dimensional vector by $\lambda\cdot d$ coefficients, for some $\lambda > 1$, each bounded by $O\parentheses{\sqrt{{\norm{x}_2^2}/{d}}}$. Applying quantization to the coefficients instead of the original vectors allows the server to estimate the mean using $\lambda>1$ bits per coordinate with an \ensuremath{\mathit{NMSE}}\xspace of $O\Big({\frac{\lambda^2}{(\sqrt\lambda -1)^4\cdot n}}\Big)$. However, it \mbox{requires applying multiple randomized Hadamard transforms, slowing down its encoding.}

The recently introduced DRIVE~\cite{vargaftik2021drive} (which uses $b=1$ bit per coordinate) and its generalization EDEN~\cite{EDEN} (which can be used with any $b>0$) also randomly rotate the input vector, but unlike~\cite{pmlr-v70-suresh17a} use \emph{biased} and deterministic quantization on the rotated coordinates. Interestingly, both yield \emph{unbiased} estimates of the original vector after multiplying the estimated vector by a real-valued ``scale'' that is sent by each client together with the quantization. Both solutions have an \ensuremath{\mathit{NMSE}}\xspace of $O(1/n)$ and are empirically more accurate than Kashin's representation. However, to achieve unbiasedness, each client must generate a distinct rotation matrix independently from the other clients. In turn, the server must invert the rotation for each vector before aggregating them, resulting in $O(n)$ rotations instead of one, asymptotically increasing the decoding time.

In this work we present \textbf{Q}uick \textbf{U}nb\textbf{i}ased \textbf{C}ompression for \textbf{F}ederated \textbf{L}earning ({QUIC-FL}\xspace): a DME algorithm that produces unbiased estimates, with a fast estimation procedure and an \ensuremath{\mathit{NMSE}}\xspace of $O\parentheses{{1}/{n}}$. {QUIC-FL}\xspace also leverages random rotations, and uses the observation that after rotation the coordinates' distribution approaches that of $d$ i.i.d. normal variables, $\mathcal N\parentheses{0,{\norm{x}_2^2}/{d}}$~\cite{vargaftik2021drive}. The goal of {QUIC-FL}\xspace is to unbiasedly quantize each coordinate while minimizing the error. Compared with~\cite{pmlr-v70-suresh17a}, we present two key improvements: (1) Instead of quantizing all coordinates, we allow the algorithm to send an expected $p$-fraction of the \emph{rotated} coordinates exactly (up to floating-point precision) for some small $p$ (e.g., $p={1}/{512}$). This limits the range of the other coordinates to $[-T_p, T_p]$, where $T_p=O(1)$ for any constant $p>0$, thus reducing the possible quantization error significantly (see the sketch below).
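A minimal sketch of this first ingredient (our own illustrative code; it uses plain 1-bit stochastic quantization in place of the optimized scheme developed later): coordinates of the rotated, scaled vector falling outside $[-T_p,T_p]$ are sent exactly, and the remaining ones are quantized unbiasedly within the bounded range.
{\small
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def split_and_quantize(z, p, rng):
    # z: rotated and scaled coordinates, approximately i.i.d. N(0,1)
    T = norm.ppf(1 - p / 2)                 # threshold with Pr[|Z| > T] = p
    out = np.abs(z) > T                     # outliers: sent exactly (index+value)
    zc = z[~out]
    up = rng.random(zc.size) < (zc + T) / (2 * T)  # stochastic rounding
    return out, z[out], np.where(up, T, -T)        # unbiased on [-T, T]

rng = np.random.default_rng(1)
z = rng.normal(size=2**16)
out, exact, zhat = split_and_quantize(z, p=1 / 512, rng=rng)
print(f"sent exactly: {out.mean():.4%} of the coordinates")   # ~ p
\end{verbatim}
}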
(2) We study how to leverage \emph{client-specific shared randomness}~\cite{ben2020send} to reduce the error further. Specifically, we model the problem of transmitting a ``truncated'' normal random variable $Z\sim\mathcal N(0,1)\ |\ Z\in[-T_p, T_p]$, using $b\in\mathbb N^+$ bits, with the goal of obtaining an unbiased estimate at the server. Our model considers both a client's private randomness and shared randomness, allowing us to derive an input to an optimization \mbox{problem solver, whose output yields algorithms with a near-optimal accuracy-to-bandwidth tradeoff.}

\begin{table}[t]
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\textbf{Algorithm} & \textbf{QSGD~\cite{NIPS2017_6c340f25}} & \textbf{Hadamard~\cite{pmlr-v70-suresh17a}} & \textbf{Kashin~\cite{caldas2018expanding,safaryan2020uncertainty}} & \textbf{EDEN~\cite{EDEN}} & \textbf{QUIC-FL (Ours)} \\ \hline
\textbf{Encoding complexity} & $O(d)$ & $O(d\cdot \log d)$ & $O(d\cdot \log d\cdot \log (n\cdot d))$ & $O(d\cdot \log d)$ & $O(d\cdot \log d)$ \\ \hline
\textbf{Decoding complexity} & $O(n\cdot d)$ & $O(n\cdot d+d\cdot \log d)$ & $O(n\cdot d+d\cdot \log d)$ & $O(n\cdot d \cdot \log d)$ & $O(n\cdot d+d\cdot \log d)$ \\ \hline
\textbf{NMSE} & $O(d/n)$ & $O(\log d /n)$ & $O(1 /n)$ & $O(1 /n)$ & $O(1 /n)$ \\ \hline
\end{tabular}%
}
\vspace*{1mm}
\caption{The asymptotic guarantees of the algorithms with $b=O(1)$ bits per coordinate, using the Hadamard transform for the rotation-based algorithms. The table does not consider variable-length encodings (see Appendix~\ref{app:extended_RW}).}
\label{tbl:asymptotics}
\end{table}

\begin{wrapfigure}{r}{0.262\textwidth}
\begin{center}
\vspace*{-5.5mm}\hspace{-3.4mm}
\includegraphics[width=.278\textwidth]{Figures/scatter.pdf}
\end{center}
\vspace*{-8mm}
\end{wrapfigure}
We implement {QUIC-FL}\xspace in PyTorch~\cite{NIPS2019_9015} and TensorFlow~\cite{tensorflow2015-whitepaper}, showing that it can compress vectors with over 33 million coordinates within 44 milliseconds and is markedly more accurate than existing fast-estimate \mbox{approaches} such as QSGD~\cite{NIPS2017_6c340f25}, Hadamard~\cite{pmlr-v70-suresh17a}, and Kashin~\cite{caldas2018expanding,safaryan2020uncertainty}. \mbox{Compared} with DRIVE~\cite{vargaftik2021drive} and EDEN~\cite{EDEN}, {QUIC-FL}\xspace has an only slightly worse NMSE (e.g., less than 1\% higher for $b=4$ bits per dimension) while asymptotically improving the estimation time, as shown on the right. The figure illustrates the cycle (encode plus decode) times vs.\ NMSE for $b=4$ bits per coordinate, $d=2^{20}$ dimensions, and $n=256$ clients. (See \S\ref{sec:evaluation} for the algorithms' descriptions.) We summarize the asymptotic guarantees of the discussed DME techniques in Table~\ref{tbl:asymptotics}.

\ifarxiv
While we have surveyed the most relevant related work, we review other techniques in Appendix~\ref{app:extended_RW}.
\else
While we have surveyed the most relevant related work above, we review other techniques in Appendix~\ref{app:extended_RW}. (All appendices appear in the supplementary material.)
\fi

\vspace*{-1mm}
\section{Preliminaries}
\T{Problems and Metrics.} Given a non-zero vector $x\in\mathbb R^d$, a vector compression protocol consists of a client (the sender) that computes a message $X$ and a server (the receiver) that, given the message, estimates $\widehat x\in\mathbb R^d$.
The \emph{vector Normalized Mean Squared Error} (\ensuremath{\mathit{vNMSE}}\xspace) of the protocol is defined as $\frac{\mathbb{E}\brackets{\norm{x-\widehat x}_2^2}}{\norm{x}_2^2}$~\cite{vargaftik2021drive,EDEN}. This problem generalizes to the Distributed Mean Estimation (DME) problem, where $n$ clients have vectors $\set{x_c\in\mathbb R^d}$ that they communicate to a centralized server. We are then interested in minimizing {the \emph{Normalized Mean Squared Error} (\ensuremath{\mathit{NMSE}}\xspace), defined as $ \frac{\mathbb{E}\brackets{\norm{\frac{1}{n}\sum_{c=1}^n \widehat x_c - \frac{1}{n}\sum_{c=1}^n x_c}_2^2}}{\frac{1}{n}\cdot\sum_{c=1}^n\norm{x_c}_2^2}~ $~\cite{pmlr-v70-suresh17a,vargaftik2021drive,EDEN}.} Note that for unbiased algorithms and independent estimates, we have that $\ensuremath{\mathit{NMSE}}\xspace=\ensuremath{\mathit{vNMSE}}\xspace/n$~\cite{vargaftik2021drive}.

\T{Shared randomness.} We allow both global shared randomness (common to all clients and the server) and client-specific shared randomness (common to a single client and the server).

\section{The {QUIC-FL}\xspace Algorithm}
\subsection{Rotation-based Truncated Quantization}\label{sec:truncation}
Similarly to previous works~\cite{pmlr-v70-suresh17a,vargaftik2021drive,EDEN}, our algorithm uses random rotations, after which the coordinates' distribution approaches that of independent normal random variables in high dimensions~\cite{vargaftik2021drive}. {QUIC-FL}\xspace features near-optimal \emph{unbiased} quantization for the normal distribution. We emphasize that {QUIC-FL}\xspace is unbiased for any input; its quantization is merely tuned for normally distributed inputs.

The most closely related previous works, DRIVE~\cite{vargaftik2021drive} and EDEN~\cite{EDEN}, can achieve unbiased results, but only by using a distinct rotation matrix for each client. {QUIC-FL}\xspace instead allows all clients to use \emph{the same} rotation matrix (generated with global shared randomness). {QUIC-FL}\xspace's unbiasedness is guaranteed by its quantization technique, which uses both private randomness and client-specific shared randomness (shared between the client and the server). As all clients apply the same rotation matrix, the \mbox{server can sum the \emph{rotated} vectors and apply a single inverse rotation, speeding up the aggregation.}

As another comparison point, \cite{pmlr-v70-suresh17a}, given a bit budget of $b(1+o(1))$ bits per coordinate, stochastically quantizes each rotated coordinate into one of $2^b$ levels. The algorithm uses a max-min normalization, where the levels are uniformly spaced between the minimal and maximal coordinates. Their algorithm then communicates the max and min, together with $b$ bits per coordinate indicating its quantized level, and is shown to have an \ensuremath{\mathit{NMSE}}\xspace of $O\parentheses{{\log d}/{n}}$ for any $b=O(1)$.

{QUIC-FL}\xspace has two main improvements over~\cite{pmlr-v70-suresh17a}. The first is a conceptually simple modification that truncates the rotated coordinates' distribution. This is achieved by allowing the algorithm to send an expected $p$-fraction (e.g., $p=\frac{1}{512}$) of the coordinates precisely, thereby reducing the \ensuremath{\mathit{NMSE}}\xspace{} \mbox{to $O\parentheses{1/n}$. The second leverages client-specific shared randomness to lower the \ensuremath{\mathit{NMSE}}\xspace further.}

We begin by analyzing the value of the truncation. Let $Z\sim\mathcal N(0,1)$ be a normal random variable, modeling a rotated (and scaled) coordinate.
Given a user-defined parameter $p$, we can compute a threshold $T_p$ such that $\Pr\brackets{Z\not\in[-T_p,T_p]}=p$. For example, by picking $p=2^{-9}$ (i.e., less than 0.2\%), we get a threshold of $T_p\approx 3.097$.\footnote{Note that, because we begin with a normal random variable $Z$, truncation is effective at removing the long, small-probability tails. Additionally, as the learning process typically uses 16--64 bit floats, and we further need to send the coordinate indices, sending each coordinate exactly is expensive, and thus we focus on small $p$ values.} In general, for any \emph{constant} $p>0$, we have $T_p=O(1)$, and using $b$ bits for each coordinate in $[-T_p,T_p]$ we get an \ensuremath{\mathit{NMSE}}\xspace of $O\parentheses{{1}/n}$ for any constant $b$ (due to the unbiased and independent quantization among clients).

For example, consider sending each coordinate in $[-T_p,T_p]$ using $b=1$ bit per coordinate. One solution would be to use stochastic quantization, i.e., given a coordinate $Z\in[-T_p,T_p]$, send $T_p$ with probability $\frac{Z+T_p}{2T_p}$ and $-T_p$ otherwise. This quantization results in an expected squared error of
{\small
\begin{equation*}
\mathbb E\brackets{(Z-\widehat Z)^2} = \frac{1}{\sqrt{2\pi}}{\int_{-T_p}^{T_p}\parentheses{ \frac{z+T_p}{2T_p}\cdot(z-T_p)^2 + \frac{T_p-z}{2T_p}\cdot(z+T_p)^2 }\cdot e^{-\frac{z^2}{2}} dz}.
\end{equation*}
}
\looseness=-1 With $p{=}2^{-9}$ as above, we get {\small $\mathbb E\brackets{(Z{-}\widehat Z)^2}{\approx} 8.58$. }As shown in Appendix~\ref{app:vnmse_proof}, for {QUIC-FL}\xspace{} $\ensuremath{\mathit{vNMSE}}\xspace=$ {\small $\mathbb{E}\brackets{\parentheses{Z {-} \widehat{Z}}^2} {+} O\parentheses{\sqrt{\frac{\log d}{d}}}$}. Thus, using the above scheme for each coordinate of a large gradient results in $\ensuremath{\mathit{NMSE}}\xspace \approx 8.58{/}n$.
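Both quantities are easy to reproduce numerically; the short computation below (ours) obtains $T_p$ from the normal quantile function and evaluates the integral above by quadrature, recovering the quoted error up to rounding.
{\small
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

p = 2.0**-9
T = norm.ppf(1 - p / 2)                     # Pr[|Z| > T] = p  =>  T ~ 3.097

def integrand(z):
    # expected squared error of 1-bit stochastic quantization to {-T, +T},
    # weighted by the standard normal density
    err = (z + T) / (2 * T) * (z - T)**2 + (T - z) / (2 * T) * (z + T)**2
    return err * norm.pdf(z)

mse, _ = quad(integrand, -T, T)
print(f"T_p = {T:.3f}, E[(Z - Zhat)^2] = {mse:.2f}")   # ~ 3.097 and ~ 8.6
\end{verbatim}
}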
We next show that shared randomness decreases $\mathbb E[(Z-\widehat Z)^2]$ and thus the \ensuremath{\mathit{NMSE}}\xspace.

\subsection{Client-specific Shared Randomness: Intuition and Examples}
\looseness=-1 We now provide an example to show how shared randomness can improve the \ensuremath{\mathit{vNMSE}}\xspace, leading to \S\ref{sec:generalUnbiasedSR}, where we formalize our approach to finding near-optimal unbiased compression schemes for truncated $\mathcal N(0,1)$ variables. Using a single shared random bit (i.e., $H\in\set{0,1}$), we can use the following algorithm, where $X$ is the sent message and $\alpha=0.8, \beta=5.4$ are constants:
{\small
\begin{equation*}
\hspace*{-6mm}
X = \begin{cases}
1 & \mbox{if $H = 0$ and $Z\ge0$}\\
0 & \mbox{if $H = 1$ and $Z<0$}\\
Bernoulli(\frac{2Z}{\alpha+\beta})& \mbox{if $H=1$ and $Z\ge0$}\\
1-Bernoulli(\frac{-2Z}{\alpha+\beta})& \mbox{if $H=0$ and $Z<0$}\\
\end{cases}
\qquad
\widehat Z = \begin{cases}
-\beta & \mbox{if $H=X=0$}\\
-\alpha & \mbox{if $H = 1$ and $X=0$}\\
\alpha & \mbox{if $H=0$ and $X=1$}\\
\beta & \mbox{if $H=X=1$}\\
\end{cases}.
\end{equation*}
}
For example, if $Z=1$, then with probability $1/2$ we have that $H=0$ and thus $X=1$, and otherwise the client sends $X=1$ with probability $\frac{2}{\alpha+\beta}$ (and otherwise $X=0$). Similarly, the reconstruction would be $\widehat Z = \alpha$ with probability $1/2$ (when $H=0$), $\widehat Z = \beta$ with probability $1/2\cdot\frac{2}{\alpha+\beta}\approx 0.16$, and $\widehat Z = -\alpha$ with probability $1/2\cdot\frac{\alpha+\beta-2}{\alpha+\beta}\approx 0.34$. Indeed, we have that the estimate is unbiased since:
{\small
\begin{equation*}
\mathbb E[\widehat Z \mid Z=1] = \alpha\cdot 1/2 + \beta\cdot 1/2\cdot\frac{2}{\alpha+\beta} + (-\alpha)\cdot 1/2\cdot\frac{\alpha+\beta-2}{\alpha+\beta} = 1.
\end{equation*}
}
\vspace{-0.1in}

We calculate the quantization's expected squared error, conditioned on $Z\in[-T_p,T_p]$. (From symmetry, we integrate over positive $z$.)
{\small
\begin{multline*}\hspace*{-4mm}
\mathbb E\brackets{(Z-\widehat Z)^2} = \sqrt{\frac{2}{\pi}}\parentheses{\int_0^{T_p} \frac{1}{2}\cdot\parentheses{(z-\alpha)^2 + \frac{2z}{\alpha+\beta}\cdot (z-\beta)^2 + \frac{\alpha+\beta-2z}{\alpha+\beta}\cdot (z+\alpha)^2}\cdot e^{-z^2/2}dz}
\end{multline*}
}
Using the same $p=2^{-9}$ parameter ($T_p\approx 3.097$), we get an error of $\mathbb E\Big[(Z-\widehat Z)^2\Big]\approx 3.29$, i.e., 61\% lower than without shared randomness. This algorithm is derived from the solver, which numerically approximates the optimal unbiased algorithm with a single shared random bit, in terms of expected squared {error, for this $p$. We present our general approach for using the solver in the following sections.}\looseness=-1

\subsection{Designing Near-optimal Unbiased Compression Schemes}\label{sec:generalUnbiasedSR}
In order to design our compression scheme, we first model the problem as follows:
\begin{itemize}[align=left, leftmargin=3mm, labelindent=.5\parindent, listparindent=.5\parindent, labelwidth=3mm, itemindent=!,itemsep=2pt,parsep=0pt,topsep=0pt]
\item We first choose a parameter $p>0$, the expected fraction of coordinates allowed to be sent exactly.
\item The input, known to the client, is a coordinate $Z\sim\mathcal N(0,1)$. The $p$ parameter further restricts the distribution to $Z\in[-T_p,T_p]$.
\item The shared randomness $H$ is known to both the client and the server, and without loss of generality, we assume that $H\sim U[0,1]$. We denote by $\mathcal H=[0,1]$ the domain of $H$.
\item We use a bit budget of $b\in\mathbb N^+$ bits per coordinate, and accordingly assume that the messages are in the set $\mathcal X_b=\set{0,\ldots,2^b-1}$.\footnote{We note that using entropy encoding, one may use more than $2^b$ messages (and thereby reduce the error) if the resulting entropy is bounded by $b$ (e.g.,~\cite{pmlr-v70-suresh17a,EDEN,NIPS2017_6c340f25}). As our goal is to design a quick and GPU-friendly compression scheme, we do not investigate entropy encoding further.} Again, coordinates outside the range $[-T_p,T_p]$ are sent exactly.
\item The client is modeled as $S:\mathcal H\times\mathbb R\to \Delta(\mathcal X_b)$. That is, the client observes the shared randomness $H$ and the input $Z$, and chooses a distribution over the messages. We further denote by $S_x(h,z)$ the probability that the client sends $x\in\mathcal X_b$ given $h$ and $z$ (i.e., $\forall h,z:\sum_x S_x(h,z)=1$). For example, it may choose $S_x(0,0)=\begin{cases} 1/2 & \mbox{if $x\in\set{0,1}$}\\ 0 & \mbox{otherwise} \end{cases}.$ That is, the client uses private randomness to decide whether to send $x=0$ or $x=1$, each with probability $1/2$.
\item The server is modeled as a function $R:\mathcal H\times\mathcal X_b\to\mathbb R$, such that if the shared randomness is $H\in\mathcal H$ and the server receives the message $X\in\mathcal X_b$, it produces an estimate $\widehat Z = R(H,X)$.
\item We require that the estimates are unbiased, i.e., $\mathbb E[\widehat Z\ |\ Z]=Z$, where the expectation is taken over both $H\in\mathcal H$ and the private randomness of the client.
\end{itemize}
We are now ready to formally define the problem.
\vspace{-0.1in}
\begin{equation*}
\everymath{\displaystyle}
\begin{array}{ll@{}l}
\displaystyle{\minimize_{S,R}} & \displaystyle \frac{1}{\sqrt{2\pi}}\int_{-T_p}^{T_p}\int_{0}^1 \sum_x S_x(h,z)\cdot \parentheses{z-R(h,x)}^2 \cdot e^{-z^2/2}\ dh\ dz\vspace*{2mm}\\
\text{subject to}& \displaystyle \int_{0}^1 \sum_x S_x(h,z)\cdot R(h,x)\ dh = z, & \hspace {-1.0in} \forall z\in[-T_p,T_p].
\end{array}
\end{equation*}
\vspace{-0.1in}

We are unaware of methods for solving the above problem analytically. Instead, we propose a discrete relaxation of the problem, allowing us to approach it with a \emph{solver}.\footnote{We used the Gekko~\cite{beal2018gekko} software package, which provides a Python wrapper to the APMonitor~\cite{Hedengren2014} environment, running the \mbox{solvers Interior Point OPTimizer (IPOPT)~\cite{IPOPT} and Advanced Process OPTimizer (APOPT)~\cite{APOPT}.}} Namely, we model the algorithm as an optimization problem and let the solver output the optimal algorithm. To that end, we need to discretize the problem. Specifically, we make the following relaxations:
\begin{itemize}[align=left, leftmargin=3mm, labelindent=.5\parindent, listparindent=.5\parindent, labelwidth=3mm, itemindent=!,itemsep=2pt,parsep=0pt,topsep=0pt]
\item The shared randomness $H$ is selected uniformly at random from a finite set of values $\mathcal H_\ell \triangleq \set{0,\ldots,2^\ell-1}$, i.e., using $\ell$ shared random bits.
\item The truncated distribution of a rotated and scaled $Z\sim \mathcal N(0,1)$ coordinate is approximated using a finite set of \emph{quantiles} $\mathcal Q_{m}=\set{q_0,\ldots,q_{m-1}}$, for a parameter $m\in\mathbb N^+$. In particular, the $i$-th quantile $q_i$ is the point on the CDF of the truncated normal distribution (restricted to $[-T_p,T_p]$) such that $\Pr[Z \le q_i\ |\ Z\in[-T_p,T_p]] = \frac{i}{m-1}$. Notice that we have $m$ such quantiles, corresponding to the probabilities $\set{0,\frac{1}{m-1},\frac{2}{m-1},\ldots,1}$. For example, for $p=2^{-9}$ and $m=4$, we get the quantile set $\mathcal Q_4 \approx \set{-3.097, -0.4298, 0.4298, 3.097}$.
\item The client is now modeled as $S:\mathcal H_\ell\times \mathcal Q_m\to \Delta(\mathcal X_b)$. That is, for each pair of shared randomness $h\in\mathcal H_\ell$ and quantile $q\in \mathcal Q_m$ values, the client has a \emph{probability distribution} over the messages from which it samples, using private randomness, at encoding time.
\item The server is modeled as a function $R:\mathcal H_\ell\times\mathcal X_b\to\mathbb R$, such that if the shared randomness is $H$ and the server receives the message $X$, it produces an estimate $\widehat Z = R(H,X)$.
\end{itemize}
Given this modeling, we use the following variables:
\begin{itemize}[align=left, leftmargin=3mm, labelindent=.5\parindent, listparindent=.5\parindent, labelwidth=3mm, itemindent=!,itemsep=2pt,parsep=0pt,topsep=0pt]
\item $s=\set{s_{h,q,x}\ \mid \ h\in\mathcal H_\ell, \ q\in\mathcal Q_m,\ x\in \mathcal X_b}$, where $s_{h,q,x}$ denotes the probability of sending a message $x$, given the quantile $q$ and shared randomness value $h$. We note that the solver's solution only instructs us what to do if all our coordinates were quantiles in $\mathcal Q_m$. In what follows, we show how to interpolate the result and get a practical algorithm for any $Z\in[-T_p, T_p]$.
\item $r=\set{r_{h,x}\ \mid \ h\in\mathcal H_\ell,\ x\in \mathcal X_b}$, where $r_{h,x}$ denotes the server's estimate given the shared randomness $h$ and the received message $x$.
\end{itemize}
Accordingly, the discretized optimization problem is defined as:
{\small
\begin{equation*}
\begin{array}{ll@{}l}
\displaystyle{\minimize_{s,r}} & \displaystyle \frac{1}{m}\cdot\frac{1}{2^{\ell}}\cdot \sum_{h,q,x} s_{h,q,x}\cdot \parentheses{q-r_{h,x}}^2\vspace*{-0mm}\\
\text{subject to}\\
{\small (\textit{\textcolor{gray}{Unbiasedness}})}& \displaystyle \frac{1}{2^{\ell}}\cdot \sum_{h,x} s_{h,q,x}\cdot {r_{h,x}} = q, &\forall q\\
{\small (\textit{\textcolor{gray}{Probability}})}&\displaystyle \sum_{x}s_{h,q,x}=1,\qquad &\forall h, q \\
&s_{h,q,x}\ge0,\qquad &\forall h, q, x
\end{array}
\end{equation*}
}
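As noted in the footnote above, our implementation uses Gekko/APMonitor. Purely as an illustration of the same program, the sketch below (ours; a tiny instance with $\ell=2$, $m=8$, $b=1$, handed to SciPy's general-purpose SLSQP routine instead) encodes the objective and the two constraint families directly; convergence and solution quality for larger instances are not guaranteed by this toy setup.
{\small
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

ell, m, b = 2, 8, 1                  # shared random bits, quantiles, message bits
H, X = 2**ell, 2**b
p = 2.0**-9
T = norm.ppf(1 - p / 2)
# quantiles of N(0,1) truncated to [-T, T], at CDF levels i/(m-1)
q = norm.ppf(norm.cdf(-T) + (1 - p) * np.linspace(0, 1, m))

def unpack(v):
    return v[:H * m * X].reshape(H, m, X), v[H * m * X:].reshape(H, X)

def objective(v):
    s, r = unpack(v)
    sq = (q[None, :, None] - r[:, None, :])**2       # (q - r_{h,x})^2
    return np.einsum("hqx,hqx->", s, sq) / (m * H)

cons = [  # unbiasedness: (1/2^ell) sum_{h,x} s_{h,q,x} r_{h,x} = q for all q
    {"type": "eq", "fun": lambda v: np.einsum("hqx,hx->q", *unpack(v)) / H - q},
    # probability: each (h, q) row of s sums to one
    {"type": "eq", "fun": lambda v: unpack(v)[0].sum(axis=2).ravel() - 1.0},
]
v0 = np.concatenate([np.full(H * m * X, 1 / X), np.tile(np.linspace(-T, T, X), H)])
bnd = [(0, 1)] * (H * m * X) + [(None, None)] * (H * X)
res = minimize(objective, v0, bounds=bnd, constraints=cons, method="SLSQP")
print(res.fun)             # expected squared error of the optimized scheme
\end{verbatim}
}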
\looseness=-1 As mentioned, the solver's output does not directly yield an implementable algorithm, as it only associates probabilities with each $\angles{h,q,x}$ tuple. A natural option is to first stochastically quantize $Z$ to a quantile. For example, when $Z=1$ and using the $\mathcal Q_4$ described above, before applying the algorithm, we quantize it to $q^-=0.4298$ with probability $\approx 0.786$ or to $q^+=3.097$ with probability $\approx 0.214$.

\looseness=-1 This approach gives an algorithm whose pseudo-code is given in Algorithm~\ref{alg:quickfl_initial}. The resulting algorithm is near-optimal in the sense that as the number of quantiles and shared random bits tends to infinity, we converge to an optimal algorithm. In practice, the solver we use is only able to produce an output for finite $m,\ell$ values; this means that the algorithm would be optimal if the coordinates were uniformly distributed over $\mathcal Q_m$, rather than $\mathcal N(0,1)$-distributed. Further, we observed that the solver had not fully converged for the highest $\ell$ values for which we were able to obtain an output. While the solver's result is still guaranteed to yield an unbiased algorithm, its error may be suboptimal. We explore these issues further in \S\ref{sec:evaluation}.

In words, in Algorithm~\ref{alg:quickfl_initial} each client $c$ uses shared randomness to compute a global random rotation $\mathcal R$ (note that all clients use the same rotation). Next, it computes the rotated vector $\mathcal R\parentheses{x_c}$; for sufficiently large dimensions, the distribution of each entry of $\mathcal R\parentheses{x_c}$ converges to $\mathcal N\parentheses{0,\frac{\norm{x_c}_2^2}{d}}$. The client then normalizes it, $\overline Z_c= \frac{\sqrt d}{\norm {x_c}_2}\cdot \mathcal R\parentheses{x_c}$, to have the coordinates roughly distributed as $\mathcal N(0,1)$. Next, it stochastically quantizes the vector to $\mathcal Q_m$. Namely, for a given coordinate $Z$, let $q^-, q^+\in \mathcal Q_m$ denote the largest quantile smaller than or equal to $Z$, and the smallest quantile larger than $Z$, respectively. Then we denote by $\mathcal Q_m(Z)$ the stochastic quantization operation that returns $q^+$ with probability $\frac{Z-q^-}{q^+-q^-}$ and $q^-$ otherwise. The stochastic quantization of the vector applies coordinate-wise, i.e., $\mathcal Q_m(\overline Z_c) = (\mathcal Q_m(\overline Z_c[0]),\ldots,\mathcal Q_m(\overline Z_c[d-1]))$. The next step is to generate a client-specific shared randomness vector $\overline{H}_c$ in which each entry is drawn uniformly and independently from $\mathcal H_\ell$.
Finally, the client follows the encoding algorithm produced by the solver. That is, for each coordinate $Z$, the client takes the mapped quantile $q=\mathcal Q_m(Z)\in\mathcal Q_m$, considers the set of probabilities $\set{s_{h,q,x}\mid x\in\mathcal X_b}$, and samples a message accordingly. We denote applying this operation coordinate-wise by $\overline X_c\sim \set{x\ \text{with prob.}\ s_{\overline{H}_c,\widetilde Z_c,x} \mid x\in\mathcal X_b}$. It then sends the resulting vector $\overline X_c$ to the server, together with the norm $\norm {x_c}_2$.

In turn, for each client $c$, the server estimates its rotated vector by looking up the shared randomness and message for each coordinate. That is, given $\overline{H}_c=(\overline{H}_c[0],\ldots,\overline{H}_c[d-1])$ and $\overline X_c=(\overline X_c[0],\ldots,\overline X_c[d-1])$ we denote $r_{\overline{H}_c,\overline X_c}=(r_{\overline{H}_c[0],\overline X_c[0]},\ldots)$. The server then estimates $\mathcal R(x_c)$ as $\parentheses{\norm{x_c}_2/\sqrt d}\cdot r_{\overline{H}_c,\overline X_c}$ and averages across all clients before performing the inverse rotation. {In the next section, we analyze the solver's output and show how to improve this method.}

\begin{algorithm}[t]
\small
\caption{}
\label{code:alg1}
\begin{multicols}{2}
\begin{algorithmic}[1]\vspace*{-8mm}
\Statex \hspace*{-4mm}\textbf{Client\xspace{} $c$:}
\State Compute $\overline Z_c = \frac{\sqrt d}{\norm {x_c}_2}\cdot \mathcal R\parentheses{x_c}$.\textcolor{white}{$\big($}
\State Stochastically quantize $\widetilde Z_c = \mathcal Q_m(\overline Z_c)$\label{line:to_quantiles}
\State Sample {\small$\overline X_c\sim \set{x\ \text{with prob.}\ s_{\overline{H}_c,\widetilde Z_c,x} \mid x\in\mathcal X_b}$}
\State Send $\parentheses{\norm {x_c}_2,\overline X_c}$ to server\textcolor{white}{$\widehat {\mathcal R(x)}$}
\vspace*{-3mm}
\end{algorithmic}
\columnbreak
\begin{algorithmic}[1]\vspace*{-8mm}
\Statex \hspace*{-4mm}\textbf{Server\xspace:}
\State $\forall c:$ Compute ${\widehat {\overline Z}_c}= r_{\overline{H}_c,\overline X_c}$
\State Compute $\widehat{\overline Z}_{\mathit{avg}} = \frac{1}{n}\cdot \frac{1}{\sqrt d}\cdot \sum_{c=1}^n {\norm {x_c}_2} \cdot\widehat{\overline Z}_c$
\State Estimate $\widehat x_{\mathit{avg}} = \mathcal R^{-1}\parentheses{\widehat{\overline Z}_{\mathit{avg}}}$
\end{algorithmic}
\vspace*{-3mm}
\end{multicols}
\vspace*{-1mm}
\label{alg:quickfl_initial}
\end{algorithm}

\vspace*{-2.5mm}
\subsection{Interpolating the Solver's Solution}\label{sec:interpolation}
\vspace*{-1mm}
Based on our examination of the solver's outputs, we determined an alternative approach that does not stochastically quantize each coordinate to a quantile as above and empirically performs better. We explain the process by first considering an example. We consider the setting of $p=\frac{1}{512}$ ($T_p\approx3.097$), $m=512$ quantiles, $b=2$ bits per coordinate, and $\ell=2$ bits of shared randomness. One optimal solution for the server is given below:\footnote{A crucial ingredient in getting a human-readable solution from the solver is that we, without loss of generality, force monotonicity in both $h$ and $x$, i.e., $(x\ge x')\wedge(h\ge h')\implies r_{h,x}\ge r_{h',x'}.$ Further, note that Table~\ref{tbl:receiver} is symmetric. We found that the tables were symmetric for small $\ell,m$, and then forced symmetry in order to reduce the model size for larger values. We use this symmetry in our interpolation.
\label{footnote:monotonicity}\vspace*{-1mm}}
{\small
\begin{table}[h]
\centering\vspace*{-2.5mm}
\begin{tabular}{|l|l|l|l|l|}
\hline
 & $x=0$ & $x=1$ & $x=2$ & $x=3$ \\ \hline
$h=0$ & -5.48 & -1.23 & \textbf{0.164} & 1.68 \\ \hline
$h=1$ & -3.04 & -0.831 & \textbf{0.490} & 2.18 \\ \hline
$h=2$ & -2.18 & \textbf{-0.490} & 0.831 & 3.04 \\ \hline
$h=3$ & -1.68 & \textbf{-0.164} & 1.23 & 5.48 \\ \hline
\end{tabular}\vspace*{0.5mm}
\caption{Optimal server values ($r_{h,x}$) for $x\in\mathcal X_2, H\in\mathcal H_2$ when $p=1/512$ and $m=512$, rounded to $3$ significant digits. For example, when $Z=0$, the server will estimate one of the values in bold, based on the shared randomness and the message received from the client.}
\label{tbl:receiver}
\vspace*{-6mm}
\end{table}
}
Given this table, by symmetry, if $Z=0$ we can send {\small$X=\begin{cases} 2 & \mbox{if $H\le 1$}\\ 1 & \mbox{otherwise} \end{cases}$ }, which is formally written as {\small$S_x(H,0)=\begin{cases} 1 & \mbox{if $(x=2\wedge H\le 1)\vee (x=1\wedge H> 1)$}\\ 0 & \mbox{otherwise} \end{cases}$ }. Indeed, we have that $\mathbb E\brackets{\widehat Z} = \frac{1}{4}\sum_{h}r_{h,X}=0$. Now, suppose that $Z>0$ (the negative case is symmetric); the client can increase the expected value of the server's estimate (compared with the above choice of $X$) by moving probability mass to larger $x$ values for some (or all) of the options for $H$. For any $Z\in(-T_p, T_p)$, there are infinitely many client strategies that would yield an unbiased estimate. For example, if $Z=0.1$, below are two options (rounded to one significant digit):
{\footnotesize
\vspace{-1mm}
\begin{equation*}
S'_x(H,0.1)\approx\begin{cases}
1 & \mbox{if $(x=1\wedge H\le 2)$}\\
0.6 & \mbox{if $(x=2\wedge H=3)$}\\
0.4 & \mbox{if $(x=3\wedge H=3)$}\\
0 & \mbox{otherwise}
\end{cases}
\ ,\ \
S''_x(H,0.1)\approx\begin{cases}
1 & \mbox{if $(x=2\wedge H\le 1) \vee (x=1\wedge H=3)$}\\
0.7 & \mbox{if $(x=1\wedge H=2)$}\\
0.3 & \mbox{if $(x=2\wedge H=2)$}\\
0 & \mbox{otherwise}
\end{cases}
\end{equation*}
\vspace{-0mm}
}
\looseness=-1 Note that while both $S'$ and $S''$ produce unbiased estimates, their expected squared errors differ. Further, since $0.1\not\in\mathcal Q_m$, the solver's output does not directly indicate what the optimal client algorithm is, even if the server table is fixed.

Unlike Algorithm~\ref{code:alg1}, which stochastically quantizes $Z$ to either $q^-$ or $q^+$, we studied the solver's output $\set{s_{h,q,x}}_{h,q,x}$ to interpolate the sender's behavior to non-quantile values. The approach we take corresponds to the following process. We move probability mass from the \emph{leftmost, then uppermost} entry with mass to its right neighbor in the server table. So, for example, in Table~\ref{tbl:receiver}, as $Z$ increases from 0 we first move mass from the entry $x=1, h=2$ to the entry $x=2, h=2$. That is, the client, based on its private randomness, increases the probability of the message $x=2$ and decreases the probability of the message $x=1$ when $h=2$. The amount of mass moved is always chosen to maintain unbiasedness. At some point, as $Z$ increases, all of the probability mass will have moved, and then we start moving mass from $x=1, h=3$ similarly. (And subsequently, from $x=2, h=0$, and so on.) This process is visualized in Figure~\ref{fig:solvers_alg}.
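The numbers above are straightforward to verify; the script below (ours) checks that both sender options are (up to the rounding of Table~\ref{tbl:receiver}) unbiased at $Z=0.1$:
{\small
\begin{verbatim}
import numpy as np

# r[h][x]: the decoder values of Table 2 (rounded to 3 significant digits)
r = np.array([[-5.48, -1.23,  0.164, 1.68],
              [-3.04, -0.831, 0.490, 2.18],
              [-2.18, -0.490, 0.831, 3.04],
              [-1.68, -0.164, 1.23,  5.48]])

# message probabilities S'[h][x] and S''[h][x] for Z = 0.1, as in the text
S1 = np.array([[0, 1, 0, 0], [0, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0.6, 0.4]])
S2 = np.array([[0, 0, 1, 0], [0, 0, 1, 0], [0, 0.7, 0.3, 0], [0, 1, 0, 0]])

for S in (S1, S2):
    # E[Zhat] = (1/4) * sum_{h,x} S[h,x] * r[h,x]; both print ~ 0.1
    print((S * r).sum() / 4)
\end{verbatim}
}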
Note that the $S_x(h,z)$ values are piecewise linear functions of $z$, and further, these values either go from 0 to 1, from 1 to 0, or from 0 to 1 and back again (all of which follows from our description). We can turn this description into formulae; we defer this mathematical interpretation to Appendix~\ref{app:alg2}. The final algorithm, named {QUIC-FL}\xspace, is given by Algorithm~\ref{alg:final} (based on the formula given in the appendix).

\begin{algorithm}[t]
\small
\caption{{QUIC-FL}\xspace}
\label{alg:final}
\vspace*{-5mm}
\begin{multicols}{2}
\begin{algorithmic}[1]\vspace*{-3mm}
\Statex \hspace*{-4mm}\textbf{Client\xspace{} $c$:}
\State Compute $\overline Z_c = \frac{\sqrt d}{\norm {x_c}_2}\cdot \mathcal R\parentheses{x_c}$.\textcolor{white}{$\big($}
\State Compute $S(\overline H_c, \overline Z_c)$ as in~\eqref{Eq:SQ_expectation} given in Appendix~\ref{app:alg2}
\State Sample $\overline X_c\sim S(\overline H_c,\overline Z_c)$
\State Send $\parentheses{\norm {x_c}_2,\overline X_c}$ to server\textcolor{white}{$\widehat {\mathcal R(x)}$}
\end{algorithmic}
\columnbreak
\begin{algorithmic}[1]\vspace*{-3mm}
\Statex \hspace*{-4mm}\textbf{Server\xspace:}
\State $\forall c:$ Compute ${\widehat {\overline Z}_c}= r_{\overline{H}_c,\overline X_c}$
\State Compute $\widehat{\overline Z}_{\mathit{avg}} = \frac{1}{n}\cdot \frac{1}{\sqrt d}\cdot \sum_{c=1}^n {\norm {x_c}_2} \cdot\widehat{\overline Z}_c$
\State Estimate $\widehat x_{\mathit{avg}} = \mathcal R^{-1}\parentheses{\widehat{\overline Z}_{\mathit{avg}}}$
\end{algorithmic}
\end{multicols}
\vspace*{-2mm}
\end{algorithm}

\begin{figure*}[h]
\centering
\vspace*{-2.mm}
\hspace*{-8mm}\includegraphics[width=0.258\linewidth]{Figures/Solver_X_=_0.pdf}
\hspace*{-1mm}\includegraphics[width=0.258\linewidth]{Figures/Solver_X_=_1.pdf}
\hspace*{-1mm}\includegraphics[width=0.258\linewidth]{Figures/Solver_X_=_2.pdf}
\hspace*{-1mm}\includegraphics[width=0.258\linewidth]{Figures/Solver_X_=_3.pdf}
\includegraphics[width=0.5\linewidth]{Figures/solver_legend.pdf}
\vspace*{-2mm}
\caption{The solver's sender algorithm $\set{s_{h,q,x}}_{h,q,x}$ (for $b=\ell=2, m=512, p=\frac{1}{512}$) \emph{at the quantiles}. {Markers correspond to quantiles in $\mathcal Q_m$, and the lines illustrate our interpolation.}\label{fig:solvers_alg}}
\vspace*{-5.mm}
\end{figure*}

\subsection{Hadamard}\label{sec:hadamard}
\vspace*{-2mm}
Similarly to previous rotation-based compression algorithms~\cite{pmlr-v70-suresh17a,vargaftik2021drive,EDEN}, we propose to use the Randomized Hadamard Transform (RHT) instead of uniform random rotations. \looseness=-1 Although the RHT does not induce a uniform distribution on the sphere (and the coordinates are not exactly normally distributed), under mild assumptions, the resulting distribution is sufficiently close to the normal distribution~\cite{vargaftik2021drive}. We analyze how using the RHT affects our guarantees, starting by noting that our algorithm remains unbiased \emph{for any input vector}. However, adversarial inputs may (1) increase the probability that a rotated coordinate falls outside $[-T_p, T_p]$ and (2) increase the \ensuremath{\mathit{vNMSE}}\xspace, as the coordinates' distribution deviates from the normal distribution. We show in Appendix~\ref{app:hadamard} that {QUIC-FL}\xspace with RHT has guarantees similar to those with uniform random rotations, albeit somewhat weaker (constant-factor increases in the fraction of exactly sent coordinates and in the \ensuremath{\mathit{vNMSE}}\xspace).
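For concreteness, a minimal RHT sketch (ours; it assumes $d$ is a power of two and uses an unnormalized fast Walsh--Hadamard pass, so the rotation is applied as $\frac{1}{\sqrt d}\mathbf{H}\mathbf{D}x$ with $\mathbf D$ a shared-randomness diagonal of signs):
{\small
\begin{verbatim}
import numpy as np

def fwht(x):
    # iterative fast Walsh-Hadamard transform, O(d log d); d a power of two
    h, d = 1, x.size
    while h < d:
        y = x.reshape(-1, 2, h)
        x = np.concatenate([y[:, 0] + y[:, 1], y[:, 0] - y[:, 1]], axis=1).ravel()
        h *= 2
    return x

def rht(x, signs):
    return fwht(signs * x) / np.sqrt(x.size)    # orthogonal, norm-preserving

rng = np.random.default_rng(0)
d = 2**14
x = rng.lognormal(size=d)                       # a skewed, far-from-normal input
signs = rng.choice([-1.0, 1.0], size=d)         # derived from shared randomness
z = np.sqrt(d) / np.linalg.norm(x) * rht(x, signs)
print(z.mean(), z.var())                        # ~ 0 and ~ 1: roughly N(0,1)
\end{verbatim}
}
Since $\mathbf H$ is symmetric and $\mathbf H\mathbf H = d\cdot\mathbf I$, the server can invert the (single, shared) rotation with one more such pass.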
We note that these guarantees are still stronger than those of DRIVE~\cite{vargaftik2021drive} and EDEN~\cite{EDEN}, which only prove RHT bounds for input vectors whose coordinates are sampled i.i.d. from a distribution with finite moments, and which are not applicable to adversarial vectors. In practice, as shown in the evaluation, the actual performance is close to the theoretical results for uniform rotations; improving the bounds is left as future work. In our evaluation, we use {QUIC-FL}\xspace (Algorithm~\ref{alg:final}) with RHT-based vector rotation.

\vspace*{-4mm}
\section{Evaluation}
\label{sec:evaluation}
\vspace*{-2mm}
\subsection{Theoretical Evaluation: \ensuremath{\mathit{NMSE}}\xspace and Speed Measurements}
\vspace*{-2mm}
\begin{figure*}[t]
\centering
\hspace*{-7mm}\includegraphics[width=0.35\linewidth]{Figures/error_vs_sender_bits.pdf}
\hspace*{-1.1mm}\includegraphics[width=0.35\linewidth]{Figures/vmnse_vs_quantiles.pdf}
\hspace*{-1.1mm}\includegraphics[width=0.35\linewidth]{Figures/error_vs_shared_random_bits.pdf}
\vspace*{-3mm}
\caption{The \ensuremath{\mathit{vNMSE}}\xspace of {QUIC-FL}\xspace as a function of the bit budget, fraction $p$, and shared random bits $\ell$.}\label{fig:sensitivity}
\vspace*{-0.mm}
\end{figure*}
\T{Parameter Selection.} We experiment with how the different parameters (the number of quantiles $m$, the fraction of coordinates sent exactly $p$, the number of shared random bits $\ell$, etc.) affect the performance of our algorithm. As shown in Figure~\ref{fig:sensitivity}, introducing shared randomness decreases the \ensuremath{\mathit{vNMSE}}\xspace significantly compared with $\ell=0$. Additionally, the benefit of each additional shared random bit diminishes, and the gain beyond $\ell=4$ is negligible, especially for large $b$. Accordingly, we hereafter use $\ell=6$ for $b=1$, $\ell=5$ for $b=2$, and $\ell=4$ for $b\in\set{3,4}$. With respect to $p$, we determined $\frac{1}{512}$ to be a good balance between the \ensuremath{\mathit{vNMSE}}\xspace and the bandwidth overhead.
\begin{figure*}[t]
\centering
\hspace*{-7mm}\includegraphics[width=0.201\linewidth]{Figures/vnmse_vs_b.pdf}
\hspace*{-0.5mm}\includegraphics[width=0.201\linewidth]{Figures/vnmse_vs_n.pdf}
\hspace*{-0.5mm}\includegraphics[width=0.201\linewidth]{Figures/decode_time_vs_n.pdf}
\hspace*{-0.5mm}\includegraphics[width=0.201\linewidth]{Figures/decode_time_vs_d.pdf}
\hspace*{-0.5mm}\includegraphics[width=0.201\linewidth]{Figures/encode_time.pdf}
\includegraphics[width=0.75\linewidth]{Figures/encoding_legend.pdf}\\\vspace{-0mm}
\vspace*{-0mm}
\caption{Comparison to alternative works with $n$ clients that have the same $LogNormal(0,1)$ input vector~\cite{vargaftik2021drive,EDEN}. The default values are $n=256$ clients, $b=4$ bit budget, and $d=2^{20}$ dimensions. }\label{fig:NMSE}
\vspace*{1mm}
\end{figure*}
\looseness=-1 \T{Comparison to State-of-the-Art DME techniques.} Next, we compare the performance of {QUIC-FL}\xspace to the baseline algorithms in terms of \ensuremath{\mathit{NMSE}}\xspace, encoding speed, and decoding speed, using a machine with an NVIDIA RTX 3080 GPU, 32GB of RAM, and an i7-10700K CPU @ 3.80GHz. Specifically, we compare with Hadamard~\cite{pmlr-v70-suresh17a}, Kashin's representation~\cite{caldas2018expanding,safaryan2020uncertainty}, QSGD~\cite{NIPS2017_6c340f25}, and EDEN~\cite{EDEN}.
We evaluate two variants of Kashin's representation: (1) the TensorFlow (TF) implementation~\cite{tensorflowfedkashincode} that, by default, limits the decomposition to three iterations, and (2) the theoretical algorithm that requires $O(\log (nd))$ iterations. As shown in Figure~\ref{fig:NMSE}, {QUIC-FL}\xspace has the second-lowest \ensuremath{\mathit{NMSE}}\xspace, slightly higher than EDEN's, which has a far slower decode time. Further, {QUIC-FL}\xspace is significantly more accurate than the approaches with similar speeds. We observed that the default TF configuration of Kashin's representation suffers from a bias, and therefore its \ensuremath{\mathit{NMSE}}\xspace does not decrease in inverse proportion to $n$. In contrast, the theoretical algorithm is unbiased but has a markedly higher encoding time. We observed similar trends for other $n,b$, and $d$ values. We consider the algorithms' bandwidth over all coordinates (e.g., $b+\frac{64}{512}$ bits per coordinate for {QUIC-FL}\xspace). Overall, the empirical measurements fall in line with the bounds in Table~\ref{tbl:asymptotics}.

\begin{figure*}[h!]
\centering
\vspace{-1.5mm}
\includegraphics[clip,width=1\linewidth]{Figures/shakespeare2.pdf}
\vspace{-4mm}
\caption{\emph{FedAvg} over the Shakespeare next-word prediction task at various bit budgets (rows). We report training accuracy per round with a rolling mean window of 200 rounds. The second row zooms in on the last 100 rounds (QSGD is not included in the zoom since it performed poorly).}
\label{fig:shakespeare}
\vspace{-2mm}
\end{figure*}

\vspace{-3mm}
\subsection{Federated Learning Experiments}
\vspace{-2mm}
\T{Next-word prediction.} \looseness=-1 We evaluate {QUIC-FL}\xspace over the Shakespeare next-word prediction task \cite{shakespeare, mcmahan2017communication} using an LSTM recurrent model. We run \emph{FedAvg}~\cite{mcmahan2017communication} with the Adam server optimizer~\cite{KingmaB14} and sample $n=10$ clients per round. We use the setup from the federated learning benchmark of \cite{reddi2021adaptive}, restated for convenience in Appendix~\ref{app:expr-details}. Figure~\ref{fig:shakespeare} shows how {QUIC-FL}\xspace compares with other compression schemes at various bit budgets. As shown, {QUIC-FL}\xspace is competitive with EDEN and nearly matches the accuracy of the uncompressed baseline for $b\ge 3$.

\begin{figure*}[h!]
\centering
\includegraphics[trim={0 2.2cm 0 0},clip,width=1\linewidth]{Figures/cross-silo-fl.pdf}
\vspace{-3mm}
\caption{Train and test accuracy for CIFAR-10 and CIFAR-100 with 10 persistent clients (i.e., silos) and $b=1$.}
\vspace{1mm}
\label{fig:cross_silo}
\end{figure*}

\vspace{-1mm}
\looseness=-1 \T{Image classification.}\label{subsec:csfl} We evaluate {QUIC-FL}\xspace against other schemes with $10$ persistent clients over uniformly distributed CIFAR-10 and CIFAR-100 datasets~\cite{krizhevsky2009learning}. We also evaluate \emph{Count-Sketch}~\cite{charikar2002finding} (CS), often used in federated compression schemes (e.g.,~\cite{ivkin2019communication}). For CIFAR-10 and CIFAR-100, we use ResNet-9~\cite{he2016deep} and ResNet-18~\cite{he2016deep}, with learning rates of $0.1$ and $0.05$, respectively. For both datasets, the clients perform a single optimization step at each round. Our setting includes an SGD optimizer with a cross-entropy loss criterion, a batch size of 128, and a bit budget of $b=1$. \looseness=-1 The results are shown in Figure~\ref{fig:cross_silo}, with a rolling mean window of 500 rounds.
As shown, {QUIC-FL}\xspace is competitive with EDEN and the Float32 baseline, and is more accurate than the other methods.

\begin{figure*}[h!]
\centering
\vspace{-1mm}
\includegraphics[trim={0 2.2cm 0 0},clip,width=1\linewidth]{Figures/cross-device-fl.pdf}
\vspace{-3mm}
\caption{Cross-device federated learning of MNIST and CIFAR-10 with 50 clients ($b=1$).}
\label{fig:cross_device}
\vspace{-1mm}
\end{figure*}

Next, we consider a highly heterogeneous cross-device setup with $50$ clients over the MNIST and CIFAR-10 datasets~\cite{krizhevsky2009learning, lecun1998gradient, lecun2010mnist}. For MNIST, each client stores only a single class of the dataset and trains LeNet-5~\cite{lecun1998gradient} with a learning rate of 0.05. For CIFAR-10, all clients have the same data distribution, and each trains ResNet-9~\cite{he2016deep} with a learning rate of 0.1. At each training round, $10$ clients are randomly selected and perform training over $5$ local steps. We use an SGD optimizer with a cross-entropy loss criterion, a batch size of 128, and a bit budget of $b=1$. Figure~\ref{fig:cross_device} shows the results with a rolling mean window of 200 rounds. Again, {QUIC-FL}\xspace is competitive with EDEN and the uncompressed baseline; Kashin-TF is less accurate, followed by Hadamard.

\T{Additional Evaluation.} Due to lack of space, we defer additional evaluation results to Appendix~\ref{app:eval}.

\vspace{-2mm}
\section{Discussion}
\vspace{-2mm}
In this work, we presented {QUIC-FL}\xspace, a quick unbiased compression algorithm for federated learning. Both theoretically and empirically, {QUIC-FL}\xspace achieves an \ensuremath{\mathit{NMSE}}\xspace that is comparable with the most accurate DME techniques, while allowing an asymptotically faster decode time.

We point out a few challenging directions for future work. {QUIC-FL}\xspace optimizes the worst-case error, and while it is compatible with orthogonal directions such as sparsification~\cite{konevcny2018randomized,EDEN,konecy2017federated,fei2021efficient}, it is unclear how it would leverage potential correlations between coordinates~\cite{mitchell2022optimizing} or client vectors~\cite{davies2021new}. Another direction for future research is understanding how to incorporate non-linear aggregation functions, such as the approximate geometric median, which have been shown to improve training robustness~\cite{pillutla2022robust}.

\begin{ack}
Gil Einziger was funded in part by the Data Science Research Center at Ben-Gurion University. Michael Mitzenmacher was supported in part by NSF grants CCF-2101140, CNS-2107078, and DMS-2023528, and by a gift to the Center for Research on Computation and \mbox{Society at Harvard University.}
\end{ack}
\newpage
\bibliographystyle{unsrt}
\section{Introduction}
Ultrafast internal conversion between excited states of photosynthetic proteins has been a subject of intense spectroscopic interest\cite{JonasARPC2018,FlemingARPC2009} owing to its near unity quantum yield. A number of studies on photosynthetic proteins from different origins have reported oscillatory experimental transients arising from quantum mechanical superpositions or coherences\cite{Engel2007,Panit2010,Fleming2014,Fuller2014,Grondelle2014,ThyrhaugFMO2017,Dean2017,Palecek2017,Scholes2018}. Theoretical studies have suggested multiple interpretations for such experimental signatures. Of particular interest is the possibility of strong mixing between vibrational and electronic degrees of freedom caused by likely coincidences between exciton energy gaps and the dense low-frequency vibrational spectrum of photosynthetic pigments \cite{Womick2011,Tiwari2013,Mancal2012,Chin2014}. Following the experimentally consistent explanation \cite{Tiwari2013} of reported spectroscopic signatures arising from vibronically coupled excitons, several experimental studies have reported\cite{Fuller2014,Fleming2014,Grondelle2014,ThyrhaugFMO2017,Dean2017,Palecek2017,Scholes2018} vibronic coherences between excited states of proteins, as well as persistent ground state vibrational coherences. Simulations of vibronic exciton models of extended multi-pigment proteins \cite{Fuller2014,Grondelle2014,Thorwart2015,Lovett2014,Fleming2020,Valete2020} have further suggested a functional role\cite{Scholes2017} for excited state vibronic coherences in enhancing the rates of energy and charge delocalization. However, computationally expensive simulations on extended systems with explicit treatment of certain intramolecular vibrations often necessitate the use of reduced basis set descriptions\cite{Rashba1965,Philpott1969,Briggs1970}. A key question that arises in this context is -- what are the distinguishing properties of excitons coupled through resonant vibronic coupling, and can these properties be well approximated in basis sets of reduced vibrational dimensionality without oversimplifying the expected excited state dynamics and relaxation processes? This question is the main theme of this paper.\\
The strength of the mutual electronic coupling between the pigments relative to their coupling to the vibrational bath places photosynthetic proteins in between the strong and weak coupling regimes classified by Simpson and Peterson\cite{Peterson1957} in the context of vibronic excitons in molecular crystals. The intermediate coupling regime has been challenging to treat analytically, with initial perturbative\cite{McRae1963} and variational\cite{Siebrand1964} approaches developed by McRae and Siebrand starting from zero-order strong-coupling or weak-coupling type wavefunctions. Energy transfer under weak coupling is described as a site excitation and its accompanying vibrational distortion at the site of electronic excitation \textit{both} hopping to another site. Thus, vibrational excitations accompany electronic excitations. The exciton is said to be `trapped'\cite{Holstein1959,McRae1963,Rashba1965,Philpott1969} in the potential well created by the vibrational distortion at the site of excitation. Energy transfer under strong coupling is interpreted as a delocalized excitonic wavefunction along with collective `lattice' vibrational modes. The potential well created by the vibrational distortion at a site is too shallow to trap the exciton.
In this case, the electronic and vibrational excitations do not follow each other. Crucially, the coupling strength criterion, as well as the above analytical approximation approaches, either assumed Born-Oppenheimer separability of the electronic and nuclear wavefunctions, or discarded the scalar or derivative non-adiabatic coupling terms\cite{Pullerits2002} driving energy transfer. \\
Owing to the above complexity in treating vibronic excitons across coupling regimes, several approaches\cite{Strunz1997,Marcus2002,Kuhn2012} to calculating spectroscopic properties of vibronic excitons have been employed. Among these, the direct numerical diagonalization approach will be the subject of this paper. For one vibrational mode per molecule, the size of the Hamiltonian matrix in the truncated Hilbert space grows as $N\, n_{e,vib}\, n_{g,vib}^{N-1} \times N\, n_{e,vib}\, n_{g,vib}^{N-1}$, where $N$ is the number of molecules in the aggregate and $n_{g(e),vib}$ is the number of vibrational quanta allowed on the ground (excited) electronic state. Several approximations have been developed to scale down the number of basis states. Rashba\cite{Rashba1965} and Philpott\cite{Philpott1967,Philpott1969} developed approximation approaches for treating vibronic excitons with vibrational and electronic excitations distributed over different sites. The $n$-particle approximation assumes that electronic and vibrational excitations are restricted to be not more than $n-1$ sites apart, and allows one to treat larger aggregates without a significant increase in the computational cost of diagonalizing a large Hamiltonian matrix. In the same context, Briggs\cite{Briggs1970,Briggs1971,Briggs1972} developed the coherent exciton scattering (CES) approximation, which assumes a `frozen' ground state, although extensions to higher temperatures are possible\cite{Briggs2005}. The validity of the CES approximation, numerically similar to the one-particle approximation (1PA), has been extensively tested in the context of the linear absorption and emission spectra of molecular dimers and larger aggregates. Briggs et al. reported \cite{Briggs2005,Briggs2008} that, in comparison to numerically exact results, one-particle basis sets describe the absorption properties well across the weak and strong coupling regimes, with the exception of $H$-aggregates in the intermediate coupling regime. In the context of molecular dimers with multiple vibrational modes, Schulze et al. have also reported\cite{Schulze2014} good agreement of linear absorption and emission spectra calculated under the one-particle description versus numerically exact results calculated using the multi-configurational time-dependent Hartree approach. The 1PA was also found to be in agreement with the experimental cryogenic absorption spectrum of tubular aggregates\cite{Megow2016}. Spano\cite{Spano2018} and Petelenz\cite{Petelenz2007} have conducted extensive theoretical investigations of the linear absorption and emission properties of $\pi$-conjugated oligomeric aggregates. They have shown that interference\cite{Spano2003,Spano2006,Petelenz2009_2} between one-particle and $n$-particle states can strongly influence the linear spectra. \\
The above studies highlight two key points -- 1. For certain combinations of electronic couplings, vibrational stabilization energy and frequency, optically dark basis states with `dissociated' electronic and vibrational excitations can influence optical properties. 2. Low-temperature linear optical lineshapes may not reveal the true extent of vibronic mixing in such situations.
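To make the basis-set savings that motivate such reduced descriptions concrete, the short count below (a minimal sketch of ours, assuming $m$ identical modes per pigment with $n_e$ ($n_g$) vibrational levels retained on the excited (ground) electronic state) compares the number of singly-excited basis states in the exact and one-particle descriptions:
\begin{small}
\begin{verbatim}
def exact_size(N, m, n_e, n_g):
    # N choices for the excited site, n_e**m vibrational states on it,
    # n_g**m on each of the N - 1 electronically unexcited pigments
    return N * n_e**m * n_g**(m * (N - 1))

def one_particle_size(N, m, n_e):
    # vibrational quanta on electronically unexcited pigments fixed to zero
    return N * n_e**m

for n in (5, 10, 20):
    print(n, exact_size(2, 1, n, n), one_particle_size(2, 1, n))
# dimer, one mode per pigment: 2 n^2 (exact) versus 2 n (one-particle)
\end{verbatim}
\end{small}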
The goal of this paper is to further expand upon these two points using the simplest situation of an excitonically coupled dimer with one vibrational mode per pigment. We model the dimer based on the reported \cite{Herek2010, Aartsma1998} low-temperature excitonic splitting in the FMO antenna complex, with explicit quantum treatment of an underdamped vibrational mode\cite{Bocian1995,Freiberg2011,Wendling2000} of the Bacteriochlorophyll a (\textit{BChl a}) pigment, which is experimentally established\cite{Tiwari2017} to be resonant with the excitonic energy gap. Until recently, energy and charge delocalization in photosynthetic proteins was studied within the adiabatic framework \cite{Sinanoglu}, assuming separability of electronic and nuclear motions. In a notable departure, Jonas and co-workers showed\cite{Tiwari2013,Tiwari2017,Peters2017} that vibronic resonance results in non-adiabatic radial and derivative couplings operating over a nuclear coordinate range dictated by the width of the vibrational wavepacket. These couplings drive strong mixing between electronic and vibrational degrees of freedom, leading to non-separable vibrational-electronic wavefunctions. Using a reduced analytical treatment of the entire non-adiabatic Hamiltonian, here we show that contributions from optically dark two-particle basis states lead to increasing strength of vibronic coupling between resonant manifolds with successively higher vibrational quanta. We show that three resulting fundamental properties unique to excitons coupled through vibronic resonance -- the physically relevant width of the vibronic resonance, the extent of vibronic exciton delocalization, and the vibrational distortions associated with such excitons -- are not captured under the 1PA. These effects manifest as significant differences in peak intensities, positions and vibronic splittings in the cryogenic linear spectra, but may be overwhelmed in the presence of line broadening. The severely underestimated vibrational distortions and vibronic exciton delocalization in the 1PA ultimately affect the wavepacket motions and quantum relaxation processes on the excited state vibronic manifolds. This is shown in the coherent quantum dynamics of vibronic eigenstates, where population transfer rates become substantially slower under the one-particle description. Reduced basis set approaches to treating extended protein systems may thus lead to a grossly inadequate description of vibronic resonances in photosynthetic proteins, motivating new approaches where such effects could still be captured adequately through effective mode approaches\cite{Burghardt2005,Tiwari2017}. The paper is organized as follows -- Section 2 describes the Hamiltonian and the associated basis sets, and derives reduced analytical forms of the vibronic eigenvectors for both the exact and one-particle descriptions. Section 3 presents numerical simulations of linear spectra and uses the reduced analytic approach to highlight the spectroscopic features not captured under a reduced basis set description. Limitations of the one-particle basis set in capturing the vibronic resonance width, wavepacket dynamics, exciton delocalization and vibrational distortions are also rationalized. Section 4 presents the conclusions.
\section{Theory}
The formalism presented in this paper is based on the framework developed in refs.\cite{Tiwari2017, Tiwari2018}. We will work in the diabatic basis, and in the localized \textit{undisplaced} vibrational basis of the overall ground electronic state of the system.
This vibrational basis set is in contrast to the Lang-Firsov basis set, which is often adopted\cite{Alexandrov2007} to separate the electronic and vibrational parts of the Holstein Hamiltonian by transforming to the nuclear coordinates of the displaced excited electronic state potentials. As highlighted in the supplementary text of ref.\cite{Tiwari2018}, in the undisplaced basis set the vibronic dimer problem has a smaller number of electronically off-diagonal vibrational matrix elements, and allows one to see vibronic states that are \textit{directly} coupled through Coulomb coupling, with no change in the vibrational quantum numbers in the associated Franck-Condon (FC) factors. This results in simpler matrix transformations, better suited to the analysis conducted in this paper. For the features of excitons coupled through vibronic resonance which we wish to illustrate, a dimer with Coulombically coupled pigments and one harmonic FC-active vibrational mode per pigment serves the purpose with minimum added complexity. We will start by describing the dimer using an exact basis set description, and then simplify the basis to include only one-particle type states. The analysis of the dimer focuses on the vibronic resonance scenario, that is, resonance between the donor-acceptor excitonic energy gap and a quantum of vibrational excitation on the acceptor exciton. Such a resonance is likely in photosynthetic systems due to coincidences between exciton energy gaps and the dense low-frequency vibrational spectrum\cite{Bocian1995,Freiberg2011} with weak FC displacements, and has recently been experimentally reported in several photosynthetic proteins\cite{Fuller2014,Fleming2014,Grondelle2014,ThyrhaugFMO2017,Dean2017,Palecek2017,Scholes2018}. The analysis presented here is in general valid for vibrations with weak FC displacements ($d\ll1$), but the parameters chosen for the purpose of illustration are experimentally established\cite{Tiwari2013} for the case of the FMO protein, and describe a vibronic resonance between the energy gap of the 2nd and 5th excitons and an intramolecular FC-active vibrational mode of frequency 200 cm$^{-1}$. The parameters are described in Section 3. Similar parameters have also been used\cite{Tiwari2013,Tiwari2018,Mancal2012,Ishizaki2015} in several vibronic exciton models of the FMO protein.
\subsection{Dimer with One Intramolecular Vibrational Mode per Pigment -- Exact versus Reduced Basis Sets}
We consider two identical pigments, labeled $A$ and $B$, with their nuclear motions restricted to one intramolecular vibrational degree of freedom. Isolated pigments are assumed to follow Born-Oppenheimer separability of electronic and nuclear wavefunctions on all electronic states. It is also assumed that the ground and singly-excited electronic states of the isolated pigments are sufficiently energetically separated from all other electronic states such that any perturbative effect due to vibronic coupling to other channels can be ignored, effectively resulting in a two-electronic-level description for the isolated pigments. The electronic potential energies of the isolated pigments are assumed to be harmonic with respect to the vibrational mode, with vibrational frequency $\omega$. Upon electronic excitation of an isolated pigment, the electronic potential energy is assumed to shift linearly with respect to the vibrational coordinate, such that the shape of the ground electronic state potential is preserved on the excited state.
Further, the dimensionless Franck-Condon displacement in the excited state potential energy surface upon electronic excitation of either pigment is equal to $d$. The ground to singly-excited electronic state transition in isolated pigments is dipole-allowed, and the transition dipoles are assumed to follow the Condon approximation. \\ The electronic basis for the dimer system is constructed from a tensor product of the site basis of the respective pigments, resulting in four electronic basis states -- an overall ground electronic state of the dimer $\ket{0_A}\ket{0_B}$, where both pigments are in their ground electronic state, singly-excited states $\ket{A}\ket{0_B}$ and $\ket{0_A}\ket{B}$, and a doubly-excited state $\ket{A}\ket{B}$ where both pigments are excited. Thus, the total diabatic Hamiltonian for the dimer system can be written in terms of dimensionless position and momentum operators for each pigment as -- \begin{eqnarray} {\hat{H}_{dimer}}&=&\sum_{i=A,B}{\frac{1}{2}\omega{(\hat{p}_i^2+\hat{q}_i^2)}\hat{{I}}_{4\text{x}4}}\nonumber \\ &+&(-\Delta/2-\omega{d}\hat{q}_A)\ket{A}\ket{0_B}\bra{A}\bra{0_B}\nonumber \\ &+&(+\Delta/2-\omega{d}\hat{q}_B)\ket{0_A}\ket{B}\bra{0_A}\bra{B}\nonumber \\ &+&(2\omega_{eg}-\omega{d}\hat{q}_A - \omega{d}\hat{q}_B)\ket{A}\ket{B}\bra{A}\bra{B}\nonumber \\ &+&{\hat{H}_{coupling}+\omega_{eg}\hat{{I}}_{4\text{x}4}} \label{eq1} \end{eqnarray} Here $\hat{I}_{4\text{x}4}$ is defined as the identity operator in the Hilbert space spanned by the four electronic basis states of the dimer, such that $\hat{I}_{4\text{x}4}=\ket{0_A}\ket{0_B}\bra{0_A}\bra{0_B}+\ket{A}\ket{0_B}\bra{A}\bra{0_B}+\ket{0_A}\ket{B}\bra{0_A}\bra{B}+\ket{A}\ket{B}\bra{A}\bra{B}$. The energy is defined in frequency units. $\Delta$ is the difference between the ground to excited electronic state energy gaps of the two pigments. The zero of energy is the zero-point vibrational level on the ground electronic state, such that $\omega_{eg}$ is the average of the ground to excited electronic energy gaps of the two pigments. The electronic coupling Hamiltonian $\hat{H}_{coupling}$ couples different electronic states. It is assumed that only the Coulomb integrals contribute to electronic coupling, and no electron exchange occurs. Under the Heitler-London approximation\cite{London1927}, electronic couplings between states differing by one or more quanta of electronic excitation are ignored, such that $\hat{H}_{coupling} = J[\ket{A}\ket{0_B}\bra{0_A}\bra{B}+\ket{0_A}\ket{B}\bra{A}\bra{0_B}]$. Note that $J$ is assumed to be coordinate-independent. The bi-exciton binding energy arising due to the difference in Coulomb interactions on the ground and doubly-excited states is assumed to be zero. It is also assumed that the Franck-Condon displacements on individual pigments are additive on the doubly-excited electronic state. Including the vibrational states localized on each pigment site, the vibronic basis states on a given electronic state of the dimer are $\ket{0_A}\ket{0_B}\ket{\nu_A}\ket{\nu_B}$, $\ket{A}\ket{0_B}\ket{\nu_A}\ket{\nu_B}$, etc., where $\nu_A$ and $\nu_B$ are whole numbers representing the number of vibrational quanta on the respective pigment electronic state. Note that the vibrational basis states are eigenstates of the undisplaced ground electronic state potential, but not the displaced excited state potentials.
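Since the undisplaced levels are not eigenstates of the displaced excited state potentials, FC overlap factors between the two appear throughout the line-strength analysis below. A minimal numerical sketch of these factors (assuming the usual convention that the Huang-Rhys factor is $S=d^2/2$ for a dimensionless displacement $d$; the function name is illustrative) is --
\begin{verbatim}
import numpy as np
from math import factorial

def fc_factor_sq(n, d):
    # |<nu=n|0'>|^2 : overlap of the undisplaced level |n> with the lowest
    # level |0'> of the same oscillator displaced by d (dimensionless);
    # a Poisson distribution in the Huang-Rhys factor S = d**2/2
    S = d**2 / 2
    return np.exp(-S) * S**n / factorial(n)

d = np.sqrt(2 * 0.025)   # S = 0.025, the value adopted in Section 3
print([round(fc_factor_sq(n, d), 4) for n in range(3)])
# -> [0.9753, 0.0244, 0.0003]; a weak displacement keeps the 0-0 line dominant
\end{verbatim}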
In the undisplaced vibrational basis, the matrix elements of the Hamiltonian $\hat{H}_{coupling}$, such as $\bra{\nu_A'}\bra{\nu_B'}\bra{A}\bra{0_B} J \ket{0_A}\ket{B}\ket{\nu_A}\ket{\nu_B}$, simplify to $J\,\delta_{\nu_A',\nu_A}\delta_{\nu_B',\nu_B}$. Thus, only direct electronically off-diagonal couplings between states with no change in the vibrational quanta are seen in this basis.\\ We introduce a shorthand notation for the above basis states -- $\ket{0_A}\ket{0_B}$ is represented as $0$, such that the vibronic basis state $\ket{0_A}\ket{0_B}\ket{\nu_A}\ket{\nu_B}$ becomes $0_{\nu_A\nu_B}$. Likewise, the basis state $\ket{A}\ket{0_B}$ will be represented as $A$, such that the vibronic basis state $\ket{A}\ket{0_B}\ket{\nu_A}\ket{\nu_B}$ becomes $A_{\nu_A\nu_B}$. The doubly excited electronic basis state is represented as $AB$, such that the vibronic basis states $\ket{A}\ket{B}\ket{\nu_A}\ket{\nu_B}$ become $AB_{\nu_A\nu_B}$. This basis set, where the allowed vibrational quanta on both the ground and singly-excited pigments are unrestricted, comprises an exact basis set description for the dimer. Under this description, basis states such as $A_{\nu_A,\nu_B\ne 0}$, with non-zero vibrational excitation on the ground electronic state of a pigment, are referred to as two-particle basis states, whereas basis states such as $A_{\nu_A,\nu_B=0}$, where vibrational quanta on an electronically unexcited pigment are restricted to zero, are referred to as one-particle basis states.\\ Based on the above basis set description, the number of singly-excited vibronic basis states for a system with $N$ pigments and $m$ intramolecular vibrational modes per pigment, each having a maximum of $n_{g,vib}$ ($n_{e,vib}$) vibrational quanta on the ground (excited) electronic state, scales as $N\,(n_{e,vib})^{m}\,(n_{g,vib})^{m(N-1)}$. Thus, for the dimer system considered here, with $n_{vib}$ vibrational quanta on the ground and singly-excited electronic states of the pigments, the number of basis states scales as $2n_{vib}^2$. This exact basis set description for the dimer comes at a computational cost which scales rapidly with the complexity of the spectroscopic signature being computed. For instance, calculations of 3$^{rd}$ order non-linear time-dependent response functions for simulating four-wave mixing spectroscopic signatures for the dimer will scale\cite{Jonas2014} as $4n_{vib}^8$. In the CES approximation, numerically equivalent to 1PA, the vibrational quanta $\nu$ on the ground electronic state of each pigment are restricted to $0$, that is, $n_{g,vib}=1$. Under this approximation the number of basis states scales as $N\,(n_{e,vib})^{m}$, which for the dimer system considered here reduces to $2n_{vib}$, such that a four-wave mixing calculation for a dimer will scale far more favorably, as $4n_{vib}^4$, thus motivating the use of reduced basis sets for describing extended systems. In the notation introduced above, one-particle basis states will be denoted as $A_{\nu_A 0}$ and $B_{0 \nu_B}$.
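As a quick illustration of this counting, a minimal sketch (the function name is illustrative; $n_{vib}=9$ is the truncation adopted in Section 3) is --
\begin{verbatim}
def n_singly_excited(N, m, n_e, n_g):
    # number of singly-excited vibronic basis states for N pigments with m
    # FC active modes each; n_e (n_g) vibrational levels per mode on the
    # excited (ground) electronic state of a pigment
    return N * n_e**m * n_g**(m * (N - 1))

n_vib = 9   # truncation used for the calculations in Section 3
print(n_singly_excited(2, 1, n_vib, n_vib))   # exact: 2*n_vib**2 = 162
print(n_singly_excited(2, 1, n_vib, 1))       # 1PA (n_g = 1): 2*n_vib = 18
\end{verbatim}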
\subsection{Matrix Representation for the Singly-Excited Hamiltonian} In Eqn.~\ref{eq1}, the dimer Hamiltonian for the singly-excited electronic sub-space, $\hat{H}_{1}$, is given by -- \begin{eqnarray} {\hat{H}_{1}}&=&\big[\omega_{eg}+\sum_{i=A,B}{\frac{1}{2}\omega{(\hat{p}^2_i+\hat{q}^2_i)}}\big]\hat{\text{I}}_{2\text{x}2}\nonumber \\ &+&\left[% \begin{array}{cc} -\Delta/2 & J \\ J & +\Delta/2 \\ \end{array}% \right] +\left[% \begin{array}{cc} -\omega{d}\hat{q}_A & 0 \\ 0 & -\omega{d}\hat{q}_B \\ \end{array}% \right] \label{eq2} \end{eqnarray}\\ Here ${\hat{\text{I}}}_{2\text{x}2}$ is the identity operator for the 2x2 singly-excited electronic sub-space. The second term corresponds to the purely electronic part, while the third term represents the electronically diagonal but vibrationally off-diagonal part of the Hamiltonian. In an exact basis set description, the matrix elements of $\hat{H}_{1}$, \textit{excluding} the $\omega_{eg}$ and zero-point energy offsets, are --\\ \begin{eqnarray} {\hat{H}_{1}} = \left[ \scalemath{0.8}{ \begin{array}{*{15}c} \epsilon_{A_{00}} & q_{A01} & 0 & 0 &\ldots & J & 0 & 0 &0 &\ldots \\ q_{A10} & \epsilon_{A_{10}} & 0 & 0 &\ldots & 0 & J & 0 &0 &\ldots \\ 0 & 0 &\epsilon_{A_{01}} & q_{A01} &\ldots & 0 & 0 & J &0 &\ldots \\ 0 & 0 &q_{A10} & \epsilon_{A_{11}} &\ldots & 0 & 0 & 0 &J &\ldots \\ \vdots &\vdots & \vdots &\vdots &\ddots &\vdots &\vdots &\vdots &\vdots &\vdots \\ J & 0 & 0 & 0 &\ldots & \epsilon_{B_{00}} & 0 &q_{B01} &0 &\ldots \\ 0 & J & 0 & 0 &\ldots &0 &\epsilon_{B_{10}} &0 &q_{B01} &\ldots \\ 0 & 0 & J & 0 &\ldots &q_{B10} &0 &\epsilon_{B_{01}} &0 &\ldots \\ 0 & 0 & 0 & J &\ldots &0 &q_{B10} &0 &\epsilon_{B_{11}} &\ldots \\ \vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\ddots \\ \end{array} } \right] \label{eq3} \end{eqnarray}\\ The matrix elements such as $\epsilon_{A_{ij}}$ denote the respective site energies; from Eqn.~\ref{eq2}, $\epsilon_{A_{ij}} = -\Delta/2 + (i+j)\omega$ and $\epsilon_{B_{ij}} = +\Delta/2 + (i+j)\omega$. The elements $q_{Pij}$ denote the matrix elements $-\omega d \bra{\nu_P = i}\hat{q}_P\ket{\nu_P = j}$, where $P$ denotes pigment $A$ or $B$. The matrix elements of the position operator $\hat{q}_P$ are such that $\bra{\nu_{P}+1}\hat{q}_P\ket{\nu_P} = \sqrt{\frac{\nu_P +1}{2}}$; the matrix elements vanish when $i$ and $j$ differ by more than one vibrational quantum. In the Hamiltonian in Eqn.~\ref{eq3}, the upper left and lower right domains correspond to the $\ket{A}\ket{0_B}$ and $\ket{0_A}\ket{B}$ electronic sub-spaces, respectively. The vibrational basis states in the $\ket{A}\ket{0_B}$ electronic sub-space are arranged as $A_{00}, A_{10}, A_{01}, A_{11}$, etc., and correspondingly for the $\ket{0_A}\ket{B}$ electronic sub-space. In contrast to the exact Hamiltonian description, the equivalent one-particle Hamiltonian $\hat{H}_1^{1pa}$ becomes -- \begin{eqnarray} {\hat{H}_{1}^{1pa}} = \left[ \scalemath{0.8}{ \begin{array}{*{15}c} \epsilon_{A_{00}} & q_{A01} &\ldots & J &0 &\ldots \\ q_{A10} & \epsilon_{A_{10}} &\ldots & 0 &0 &\ldots \\ \vdots &\vdots & \ddots &\vdots &\vdots &\vdots \\ J & 0 &\ldots &\epsilon_{B_{00}} &q_{B01} &\ldots \\ 0 &0 &\ldots &q_{B10} &\epsilon_{B_{01}} &\ldots \\ \vdots &\vdots &\vdots &\vdots &\vdots &\ddots \\ \end{array} } \right] \label{eq4} \end{eqnarray}\\ Under the one-particle description, only basis states $A_{\nu_A,0}$ and $B_{0,\nu_B}$ are allowed, such that the matrix elements of the coupling Hamiltonian only survive for $\nu_A = \nu_B = 0$. Thus, only the states $A_{00}$ and $B_{00}$ are directly coupled through Coulomb coupling.
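For concreteness, a minimal numpy sketch of how the matrices in Eqns.~\ref{eq3} and \ref{eq4} can be assembled is given below; the function and variable names are illustrative, energies are in cm$^{-1}$, and the $\omega_{eg}$ and zero-point offsets are omitted as above --
\begin{verbatim}
import numpy as np

def q(m, n):
    # <m|q|n> for the dimensionless harmonic-oscillator position operator
    return np.sqrt(max(m, n) / 2) if abs(m - n) == 1 else 0.0

def build_H1(n_vib, Delta, J, w, d, one_particle=False):
    if one_particle:
        vib_A = [(a, 0) for a in range(n_vib)]   # A_{nu_A, 0} only
        vib_B = [(0, b) for b in range(n_vib)]   # B_{0, nu_B} only
    else:
        pairs = [(a, b) for b in range(n_vib) for a in range(n_vib)]
        vib_A = vib_B = pairs                    # A_{00}, A_{10}, A_{01}, ...
    nA = len(vib_A)
    H = np.zeros((nA + len(vib_B), nA + len(vib_B)))
    for i, (a, b) in enumerate(vib_A):           # |A>|0_B> sub-space
        H[i, i] = -Delta / 2 + (a + b) * w
        for j, (a2, b2) in enumerate(vib_A):
            if b == b2:
                H[i, j] += -w * d * q(a, a2)     # -w d q_A elements
    for i, (a, b) in enumerate(vib_B):           # |0_A>|B> sub-space
        H[nA + i, nA + i] = +Delta / 2 + (a + b) * w
        for j, (a2, b2) in enumerate(vib_B):
            if a == a2:
                H[nA + i, nA + j] += -w * d * q(b, b2)   # -w d q_B elements
    for i, (a, b) in enumerate(vib_A):           # Coulomb coupling J
        for j, (a2, b2) in enumerate(vib_B):
            if (a, b) == (a2, b2):               # no change in vib. quanta
                H[i, nA + j] = H[nA + j, i] = J
    return H

H_exact = build_H1(9, 150.0, 66.14, 200.0, np.sqrt(0.05))
H_1pa   = build_H1(9, 150.0, 66.14, 200.0, np.sqrt(0.05), one_particle=True)
\end{verbatim}
Diagonalizing \texttt{H\_exact} and \texttt{H\_1pa} with \texttt{np.linalg.eigh} should then reproduce the numerical comparisons discussed in Section \ref{params}.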
These differences between the exact dimer Hamiltonian and the one-particle description have been highlighted in ref.~\cite{Tiwari2018}. Note that the Hamiltonian in Eqn.~\ref{eq4} is equivalent to the modified strong coupling approach discussed\cite{Petelenz2007} by Petelenz et al. in a vibrationally displaced basis set. The rest of this paper treats the above Hamiltonians analytically to elucidate the effects not captured in reduced basis set descriptions of resonantly coupled vibronic manifolds. \subsection{Resonant Manifolds in Exact versus One Particle Description}\label{theory} Following ref.~\cite{Tiwari2017}, diagonalizing the purely electronic part of the Hamiltonian and applying the diagonalizing transformation on the total Hamiltonian in Eqn.~\ref{eq2} yields $\hat{H}_{1}^{'}$ -- \begin{eqnarray} {\hat{H}_{1}^{'}}&=&\big[\omega_{eg}+\sum_{i=A,B}{\frac{1}{2}\omega{(\hat{p}_i^2+\hat{q}_i^2)}}\big]\hat{\text{I}}_{2\text{x}2}\nonumber \\ &+&\left[% \scalemath{0.8}{ \begin{array}{cc} -\frac{\Delta_{\text{ex}}}{2}-\omega{d}\cos[2](\theta_d)\hat{q}_A-\omega{d}\sin[2](\theta_d)\hat{q}_B & -\frac{\omega{d}\sin(2\theta_d)}{2}(\hat{q}_A-\hat{q}_B) \\ -\frac{\omega{d}\sin(2\theta_d)}{2}(\hat{q}_A-\hat{q}_B) & +\frac{\Delta_{\text{ex}}}{2}-\omega{d}\sin[2](\theta_d)\hat{q}_A-\omega{d}\cos[2](\theta_d)\hat{q}_B\\ \end{array}% } \right] \label{eq5} \end{eqnarray} with the diabatic mixing angle given by $2\theta_{d}=\arctan(2J/\Delta)$, and the excitonic splitting $\Delta_{\text{ex}}=2\sqrt{(\Delta/2)^2+J^2}$, which is resonant with the FC active vibrational frequency $\omega$. The diabatic excitonic basis states $\ket{\alpha}$ and $\ket{\beta}$ are -- \begin{eqnarray} \ket{\alpha} &=& \cos(\theta_{d})\ket{A}\ket{0_B} -\sin(\theta_{d})\ket{0_A}\ket{B},\nonumber\\ \ket{\beta} &=& \sin(\theta_{d})\ket{A}\ket{0_B} + \cos(\theta_{d})\ket{0_A}\ket{B} \label{eq6} \end{eqnarray} In the second term in Eqn.~\ref{eq5}, the vibrational coordinate dependent, electronically off-diagonal matrix elements are responsible for vibronic mixing between singly-excited electronic states. Note that the vibrational basis states in the diabatic excitonic basis are still the localized undisplaced vibrational basis of the ground electronic state, such that the vibronic basis states in the diabatic excitonic basis become $\ket{\alpha}\ket{\nu_A}\ket{\nu_B}$ and $\ket{\beta}\ket{\nu_A}\ket{\nu_B}$, represented as $\alpha_{\nu_A \nu_B}$ and $\beta_{\nu_A \nu_B}$, respectively. The electronically diagonal but vibrationally off-diagonal term in the diabatic excitonic Hamiltonian in Eqn.~\ref{eq5} describes the effective FC displacement on excitons $\alpha$ and $\beta$. For instance, on exciton $\alpha$, the effective FC displacements $d^{\alpha}_A$ and $d^{\alpha}_B$ along $\hat{q}_A$ and $\hat{q}_B$ become $d\cos[2](\theta_d)$ and $d\sin[2](\theta_d)$, respectively. Under vibronic resonance, a quantum of vibrational excitation on the lowest acceptor exciton brings it into resonance with the lowest donor exciton, resulting in three isoenergetic basis states -- $\alpha_{10},\alpha_{01},\beta_{00}$, with energies $\epsilon_{\alpha_{10}}$, $\epsilon_{\alpha_{01}}$ and $\epsilon_{\beta_{00}}$ denoted as $\epsilon_1$, where the subscript $1$ denotes the total vibrational quantum on the acceptor.
Using Eqn.~\ref{eq5}, this $3\times3$ resonant manifold can be explicitly expressed as $\hat{H}_{1,3\times3}^{'}$ -- \begin{equation} {\hat{H}_{1,3\times3}^{'}}=\left[% \scalemath{0.8}{ \begin{array}{*{25}c} \epsilon_{1} & 0 & -\omega{d}\sin(2\theta_{d})/2\sqrt{2} \\ 0 & \epsilon_{1} & \omega{d}\sin(2\theta_{d})/2\sqrt{2} \\ -\omega{d}\sin(2\theta_{d})/2\sqrt{2} & \omega{d}\sin(2\theta_{d})/2\sqrt{2} & \epsilon_{1} \end{array}% } \right] \label{eq7} \end{equation} Under the unitary transformation $U_{3\times3}$ -- \begin{equation} {U_{3\times3}}=\left[ \nonumber \scalemath{0.8}{ \begin{array}{*{25}c} 1/\sqrt{2} & 1/\sqrt{2} & 0 \\ -1/\sqrt{2} & 1/\sqrt{2} & 0 \\ 0 & 0 & 1 \end{array}% } \right], \end{equation} $\hat{H}_{1,3\times3}^{'}$ can be transformed to $\hat{H}_{1,3\times3}^{''}$ -- \begin{equation} {\hat{H}_{1,3\times3}^{''}}=\left[% \scalemath{0.8}{ \begin{array}{*{25}c} \epsilon_{1} & 0 & -\omega{d}\sin(2\theta_{d})\sqrt{1/4} \\ 0 & \epsilon_{1} & 0 \\ -\omega{d}\sin(2\theta_{d})\sqrt{1/4} & 0 & \epsilon_{1} \end{array}% } \right] \label{eq8} \end{equation} The transformed Hamiltonian in Eqn.~\ref{eq8} shows that only one pair of states in the resonant manifold corresponding to a total of one quantum of vibrational excitation on the acceptor exciton is coupled. Ref.~\cite{Tiwari2018} has shown that the rotated basis set resulting from the above transformation is the delocalized vibrational basis set for the $3\times3$ resonant manifold. Under the above transformation, the resulting eigenvectors of the Hamiltonian in Eqn.~\ref{eq8}, in increasing order of energy, are -- \begin{align} &[(\ket{\alpha_{10}} - \ket{\alpha_{01}})/\sqrt{2} + \ket{\beta_{00}}]/\sqrt{2} \nonumber \\ &[\ket{\alpha_{10}} + \ket{\alpha_{01}}]/\sqrt{2} \nonumber \\ &[(\ket{\alpha_{10}} - \ket{\alpha_{01}})/\sqrt{2} - \ket{\beta_{00}}]/\sqrt{2} \nonumber \end{align}
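These eigenvectors, and the associated $\mp\omega d \sin(2\theta_{d})/2$ shifts of the outer pair about $\epsilon_1$, can be verified with a few lines of numerics; a minimal sketch (energies relative to $\epsilon_1$, in cm$^{-1}$, using the parameters of Section 3) is --
\begin{verbatim}
import numpy as np

w, d  = 200.0, np.sqrt(0.05)                   # omega and FC displacement
theta = 0.5 * np.arctan2(2 * 66.14, 150.0)     # diabatic mixing angle
c = w * d * np.sin(2 * theta) / (2 * np.sqrt(2))   # off-diagonal element, Eq 7

H33 = np.array([[0.0, 0.0,  -c],               # basis: alpha_10, alpha_01, beta_00
                [0.0, 0.0,   c],
                [ -c,   c, 0.0]])
vals, vecs = np.linalg.eigh(H33)
print(vals)   # -> approx [-14.8, 0.0, +14.8]; splitting w d sin(2 theta) ~ 29.6
print(vecs)   # columns match the three eigenvectors listed above (up to sign)
\end{verbatim}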
Similar to the above, the resonant manifold corresponding to a total of two quanta of vibrational excitation on the acceptor exciton has five isoenergetic basis states -- $\alpha_{20},\alpha_{02},\alpha_{11},\beta_{10},\beta_{01}$, with energies denoted by $\epsilon_2$. Following the same procedure, Eqn.~\ref{eq5} can be explicitly written for this manifold as $\hat{H}_{1,5\times5}^{'}$ -- \begin{equation} {\hat{H}_{1,5\times5}^{'}}=\left[% \scalemath{0.8}{ \begin{array}{*{25}c} \epsilon_{2} & 0 & 0 & 0 & -\omega{d}\sin(2\theta_{d})/2 \\ 0 & \epsilon_{2} & 0 & 0 & \omega{d}\sin(2\theta_{d})/2 \\ 0 & 0 & \epsilon_{2} & \omega{d}\sin(2\theta_{d})/2\sqrt{2} & -\omega{d}\sin(2\theta_{d})/2\sqrt{2} \\ 0 & 0 & \omega{d}\sin(2\theta_{d})/2\sqrt{2} & \epsilon_{2} & 0 \\ -\omega{d}\sin(2\theta_{d})/2 & \omega{d}\sin(2\theta_{d})/2 & -\omega{d}\sin(2\theta_{d})/2\sqrt{2} & 0 & \epsilon_{2} \end{array}% } \right] \label{eq9} \end{equation} Using a unitary transformation $U_{5\times5}$, which converts the localized vibrational basis states to delocalized vibrational basis states for the $5\times5$ manifold, $\hat{H}_{1,5\times5}^{'}$ in Eqn.~\ref{eq9} transforms to $\hat{H}_{1,5\times5}^{''}$ -- \begin{equation} {\hat{H}_{1,5\times5}^{''}}=\left[% \scalemath{0.8}{ \begin{array}{*{25}c} \epsilon_{2} & 0 & 0 & -\omega{d}\sin(2\theta_{d})\sqrt{2/4} & 0 \\ 0 & \epsilon_{2} & 0 & 0 & 0 \\ 0 & 0 & \epsilon_{2} & 0 & -\omega{d}\sin(2\theta_{d})\sqrt{1/4} \\ -\omega{d}\sin(2\theta_{d})\sqrt{2/4} & 0 & 0 & \epsilon_{2} & 0 \\ 0 & 0 & -\omega{d}\sin(2\theta_{d})\sqrt{1/4} & 0 & \epsilon_{2} \end{array}% } \right] \label{eq10} \end{equation} The primes on the Hamiltonians denote the number of transformations made to the original Hamiltonian in Eqn.~\ref{eq3}. The analytical eigenvectors of the 3$\times$3 and 5$\times$5 Hamiltonians in Eqns.~\ref{eq8} and \ref{eq10}, respectively, can be used to calculate the absorption and emission line strengths expected from a reduced analytical description of the full Hamiltonian. These calculations are shown in Section \ref{params}. As highlighted by Eqn.~\ref{eq10}, in the resonant manifold corresponding to two quanta of vibrational excitation on the acceptor exciton, that is, the $5\times5$ resonant manifold, two pairs of states are vibronically coupled -- one pair with $-\omega d \sin(2\theta_d) \sqrt{1/4}$ electronically off-diagonal coupling, and another with $-\omega d \sin(2\theta_d) \sqrt{2/4}$ electronically off-diagonal coupling. \\ Using unitary transformations similar to the above, we find empirically that resonant manifolds corresponding to a total of $n$ quanta of vibrational excitation on the acceptor exciton have $n$ pairs of vibronically coupled excitons, with couplings $-\omega d \sin(2\theta_d) \sqrt{n_i/4}$, where $n_i$ ranges from 1 to $n$. \textit{Thus, in an exact description of vibronic resonance, higher manifolds lead to progressively denser and stronger vibronic coupling between excitons.} \\
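This empirical pattern can be checked numerically by building the off-diagonal block of Eqn.~\ref{eq5} between the degenerate $\alpha$ states with $\nu_A+\nu_B=n$ and the $\beta$ states with $\nu_A+\nu_B=n-1$, and reading off its singular values; a minimal sketch (in units of $\omega d \sin(2\theta_{d})/2$, with the overall sign dropped since it does not affect singular values) is --
\begin{verbatim}
import numpy as np

def q(m, n):
    # <m|q|n> for the dimensionless harmonic-oscillator position operator
    return np.sqrt(max(m, n) / 2) if abs(m - n) == 1 else 0.0

def manifold_couplings(n):
    # alpha states with nu_A + nu_B = n; beta states with nu_A + nu_B = n - 1
    alphas = [(a, n - a) for a in range(n + 1)]
    betas  = [(a, n - 1 - a) for a in range(n)]
    V = np.zeros((len(betas), len(alphas)))
    for i, (ma, mb) in enumerate(betas):
        for j, (na, nb) in enumerate(alphas):
            # matrix elements of (q_A - q_B); prefactor -w d sin(2 theta)/2
            V[i, j] = q(ma, na) * (mb == nb) - (ma == na) * q(mb, nb)
    return np.linalg.svd(V, compute_uv=False)

for n in (1, 2, 3):
    print(n, np.round(manifold_couplings(n)**2, 6))
# -> 1 [1.]   2 [2. 1.]   3 [3. 2. 1.]
\end{verbatim}
The squared singular values come out as $n_i = 1,\dots,n$, consistent with the couplings $-\omega d \sin(2\theta_d)\sqrt{n_i/4}$ quoted above.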
In order to contrast the above scenario with a one-particle description of vibronic resonance, we start with the one-particle Hamiltonian $\hat{H}_{1}^{1pa}$ in Eqn.~\ref{eq4} and, as before, apply the diagonalizing transformation to absorb the Coulomb coupling $J$. This transforms $\hat{H}_{1}^{1pa}$ to $\hat{H}_{1}^{1pa'}$ -- \begin{eqnarray} {\hat{H}_{1}^{1pa'}}&=&\left[ \nonumber \scalemath{0.8}{ \begin{array}{*{25}c} \epsilon_{\alpha_{00}} & 0 &\ldots& 0 & 0 &\ldots \\ 0 & \epsilon_{A_{10}}&\ldots & 0 & 0 &\ldots\\ \vdots &\vdots &\ddots &\vdots &\vdots &\vdots\\ 0 & 0 &\ldots & \epsilon_{\beta_{00}} & 0 &\ldots\\ 0 & 0 &\ldots& 0 & \epsilon_{B_{01}}&\ldots\\ \vdots &\vdots &\vdots &\vdots &\vdots &\ddots \end{array}% } \right] \\ &+&\left[% \scalemath{0.8}{ \begin{array}{*{25}c} 0 & -\omega{d}\cos(\theta_{d})/\sqrt{2} &\ldots & 0 & \omega{d}\sin(\theta_{d})/\sqrt{2} &\ldots\\ -\omega{d}\cos(\theta_{d})/\sqrt{2} & 0 &\ldots & -\omega{d}\sin(\theta_{d})/\sqrt{2} & 0 &\ldots \\ \vdots &\vdots &\ddots &\vdots &\vdots &\vdots\\ 0 & -\omega{d}\sin(\theta_{d})/\sqrt{2} &\ldots & 0 & -\omega{d}\cos(\theta_{d})/\sqrt{2} &\ldots \\ \omega{d}\sin(\theta_{d})/\sqrt{2} & 0 &\ldots & -\omega{d}\cos(\theta_{d})/\sqrt{2} & 0 &\ldots \\ \vdots &\vdots &\vdots &\vdots &\vdots &\ddots \end{array}% } \right], \label{eq11} \end{eqnarray} where the electronic and vibrational parts have been written separately. In contrast to the exact Hamiltonian, in the one-particle description in Eqn.~\ref{eq4} only $A_{00}$ and $B_{00}$ are directly coupled through Coulomb coupling, transforming to the diabatic excitons $\alpha_{00}$ and $\beta_{00}$. In the second term in Eqn.~\ref{eq11}, only those FC displacement dependent terms are shown which change under this transformation. The near-resonant manifold in $\hat{H}_{1}^{1pa'}$ with one quantum of vibrational excitation on the acceptor is described by $\hat{H}_{1, 2\times2}^{1pa'}$ -- \begin{equation} {\hat{H}_{1, 2\times2}^{1pa'}}=\left[% \scalemath{0.8}{ \begin{array}{*{25}c} \epsilon_{A_{10}} & -\sqrt{1/4}\omega{d}\sin(2\theta_{d})/(\sqrt{2}\cos(\theta_{d})) \\ -\sqrt{1/4}\omega{d}\sin(2\theta_{d})/(\sqrt{2}\cos(\theta_{d})) & \epsilon_{\beta_{00}} \end{array}% } \right]. \label{eq12} \end{equation} \\ Comparing Eqn.~\ref{eq8} and Eqn.~\ref{eq12}, the following contrasts against an exact description of the Hamiltonian are seen -- \textit{A.} Under the one-particle approximation, the absence of the $A_{01}$ two-particle basis state leads to a reduced, $2\times2$ near-resonant manifold. \textit{B.} The electronically off-diagonal vibronic coupling within the reduced manifold is weaker by a factor of $\sqrt{2}\cos(\theta_d)$. \textit{C.} The vibronic resonance condition dictated by experimental parameters no longer ensures resonance, that is, the states $A_{10}$ and $\beta_{00}$ are not resonant. In order to artificially bring the states into vibronic resonance, the resonance criterion $\Delta_{ex} = \omega$ can be modified to $(\Delta_{ex} + \Delta)/2 = \omega$. \textit{D.} Higher manifolds with basis states such as $A_{20}$ and $B_{01}$ are not coupled through electronically off-diagonal vibronic coupling. Note that the reduction of $\sqrt{2}\cos(\theta_{d})$ in the electronically off-diagonal vibronic coupling is only valid as long as the effect of $B_{01}$ on the 2$\times$2 manifold can be considered perturbatively. For example, for $\Delta = 0$ cm$^{-1}$ and $\omega = J$, no reduction in vibronic coupling is expected. However, $\beta_{00}$ then becomes resonant with $B_{01}$, such that the above treatment should be modified to include the basis state $B_{01}$ in a 3$\times$3 manifold. \\ \begin{figure*}[h!]
\centering \includegraphics[width=6 in]{fig1_manifold.png} \caption{A comparison of diabatic excitonic basis states expected in an exact (left) versus one-particle (right) description of the dimer Hamiltonian. The parameters dictating the energetic spacings are described in Section 3, and are modeled based on the experimentally established resonance between the 2$^{nd}$ and 5$^{th}$ excitonic energy gap and an intramolecular FC active vibrational frequency of 200 cm$^{-1}$ on the \textit{BChl a} pigments in the FMO protein. The zero of energy has been chosen to be the lowest acceptor exciton $\alpha_{00}$. \textbf{(Left)} $n_i$ denotes the number of pairs of vibronically coupled excitons $\alpha$ and $\beta$, with the corresponding vibronic coupling. For instance, $n_i=1,2$ implies two pairs of vibronically coupled excitons, as dictated by Eqn.~\ref{eq10} -- one pair coupled through $\sqrt{1/4}\,\omega d \sin(2\theta_d)$, and another pair coupled through $\sqrt{2/4}\,\omega d \sin(2\theta_d)$. The isoenergetic levels in the resonant manifolds have been vertically offset for clarity, with the energy denoted on top of the corresponding levels. \textbf{(Right)} In the one-particle description, only the first near-resonant manifold is coupled, through a coupling matrix element which is weaker by a factor of $\sqrt{2}\cos(\theta_d)$ (Eqn.~\ref{eq12}). Higher manifolds are no longer in resonance, with the respective energies shown on top of the corresponding levels. The horizontal dashed lines across the figure are drawn for comparing the relative energies of the levels in the two cases. } \label{fig:fig1} \end{figure*} Figure \ref{fig:fig1} summarizes the findings of this section using the vibronic resonance parameters for the FMO photosynthetic protein, described in detail in Section \ref{params}. The following sections use the above formalism to illustrate the spectroscopic features, as well as fundamental aspects of resonantly coupled excitons, which are not captured in a reduced basis set description of vibronic resonance. \section{Results and Discussion} \label{params} In the following sections, we use the above formalism to compare properties of excitons coupled through a vibronic resonance expected from exact versus one-particle descriptions. Several studies\cite{Briggs1972,Briggs2009,Petelenz2007,Schulze2014,Schroter2015,Megow2016,Painelli2019} have compared exact versus one-particle descriptions, although primarily focusing on the linear absorption and emission lineshapes. The purpose of this paper is to illustrate, analytically and numerically, the above differences in one-particle versus exact descriptions of resonant vibronic manifolds, in terms of spectroscopic or fundamental properties which serve as good indicators of the wavepacket dynamics and quantum relaxation processes expected from resonantly coupled vibronic manifolds. \\ The analysis laid out in the previous section is valid in general for vibronic resonances between pigments in photosynthetic proteins, which have weak FC displacements ($d\ll1$) such that the perturbations on the pigment energies caused by neighboring vibrational manifolds can be considered small. In the case of coupled pigments, the excitonic energy \textit{gaps} will not be perturbed up to second order\cite{Sinanoglu}.
Fluorescence-line narrowing\cite{Freiberg2007}, resonance Raman, and spectral hole-burning\cite{Small2001} studies have shown a densely packed low-frequency FC vibrational spectrum of \textit{BChl a} pigments, with Huang-Rhys factors of the order of 0.03 or smaller. Several such vibrations have also been recently reported in two-dimensional spectroscopic studies\cite{Ogilvie2018} on isolated \textit{BChl a} pigments. For the purpose of illustration, we have chosen to analyze the case of vibronic resonance in the FMO protein complex comprising \textit{BChl a} pigments, using experimentally established parameters described previously\cite{Tiwari2013,Tiwari2018}. Briefly, we focus on the $\sim$200 cm$^{-1}$ \textit{BChl a} vibrational frequency. A Huang-Rhys factor of 0.025, typical for the \textit{BChl a} pigment, is used to describe this FC active vibration. The 200 cm$^{-1}$ vibrational frequency is resonant with the exciton energy gaps in the FMO protein. Exciton energy gaps 1-3 and 2-5 are approximately resonant with the above vibrational frequency (see Table 7 of ref.~\cite{Aartsma1998}). Vibronic resonances at other vibrational frequencies are also likely. For instance, as noted in ref.\cite{Tiwari2018}, the low temperature excitonic splitting seen in the linear absorption spectrum of the FMO protein\cite{Aartsma1998,Wendling2000} is resonant with a $\sim$160 cm$^{-1}$ FC active vibration of \textit{BChl a}.\\ We choose a site energy gap $\Delta$ = 150 cm$^{-1}$, and a Coulomb coupling $J$ = 66.14 cm$^{-1}$, typical\cite{Aartsma1998,Klein2011} for the FMO protein but not directly accessible experimentally, to reproduce the expected excitonic energy gap $\Delta_{ex}$ = 200 cm$^{-1}$. The energy gap for the ground to singly-excited electronic transition, $\omega_{eg}$, is 11574 cm$^{-1}$. The $Q_y$ transition dipoles of the two \textit{BChl a} pigments in the dimer are assumed to be of equal magnitude and mutually perpendicular. Note that the vibronic intensity borrowing and exciton delocalization effects discussed here are expected to be enhanced when constructive interference between pigment transition dipoles is possible. Similar parameters for FMO vibronic exciton models have been used in previous reports\cite{Tiwari2013,Tiwari2018,Mancal2012,Ishizaki2015}. The absorption and emission intensities have been calculated at 4K. While the differences in calculated line strengths between exact and one-particle descriptions are expected, it is essential to weigh the obscuring effects of line-broadening caused by the low-frequency protein phonon sideband. A critically damped Brownian oscillator lineshape with frequency 70 cm$^{-1}$ and stabilization energy 15 cm$^{-1}$ is modeled to reproduce the total reorganization energy of $\sim$20 cm$^{-1}$ reported\cite{Freiberg2007} for the FMO protein at 5K. The lineshape is plotted on top of the calculated intensities to show that even at cryogenic temperatures several key spectroscopic differences in the one-particle description may be obscured. For all calculations, a total of 9 vibrational states are allowed on each electronic state for an exact description. For the one-particle description of the excited states $\ket{A}\ket{0_B}$ and $\ket{0_A}\ket{B}$, only one vibrational state is allowed on the ground electronic state of the unexcited pigment.
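A minimal sketch collecting these parameters and the derived quantities used repeatedly below (assuming the convention that the Huang-Rhys factor is $S=d^2/2$ for the dimensionless displacement $d$; the last two lines evaluate the second-order-corrected position and FC strength of the lowest exciton derived in the next subsection) is --
\begin{verbatim}
import numpy as np

Delta, J = 150.0, 66.14    # site energy gap and Coulomb coupling, cm^-1
w, S     = 200.0, 0.025    # FC active mode frequency and Huang-Rhys factor
w_eg     = 11574.0         # average electronic energy gap, cm^-1

D_ex    = 2 * np.sqrt((Delta / 2)**2 + J**2)  # excitonic splitting ~ 200 cm^-1
theta_d = 0.5 * np.arctan2(2 * J, Delta)      # diabatic mixing angle ~ 20.7 deg
d       = np.sqrt(2 * S)                      # FC displacement ~ 0.224

# second-order-corrected 0-0 position and line strength of the lowest exciton
c2, s2 = np.cos(theta_d)**2, np.sin(theta_d)**2
E_00 = w_eg - D_ex / 2 - 0.5 * w * d**2 * (1 - c2 * s2)
I_00 = np.exp(-(d * c2)**2 / 2) * np.exp(-(d * s2)**2 / 2)
print(D_ex, np.degrees(theta_d), E_00, I_00)  # ~200, ~20.7, ~11469.5, ~0.981
\end{verbatim}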
The chosen number of vibrational quanta ensured convergence of the eigenvalues to less than $1\times10^{-4}$ for the first 28 eigenvectors in the exact basis set description, and the first 11 eigenvectors in the one-particle basis set description. \subsection{Numerically Exact Linear Spectra -- Reduced Analytical Description} Figure \ref{fig:fig2} shows the calculated linear absorption and emission transition strengths, along with lineshapes, for the exact (top panel) versus one-particle (middle panel) description of the dimer. Figure \ref{fig:fig1} shows that the resonance condition of the exact description is no longer valid in the case of the one-particle description. The modified criterion, $(\Delta_{ex} + \Delta)/2 = \omega$, suggests that the vibrational frequency of the dimer can be explicitly adjusted to 175 cm$^{-1}$ to achieve resonance. It is informative to analyze the line strengths resulting from this modified criterion in order to gauge the usefulness of the 1PA. These are shown in the bottom panel of Figure \ref{fig:fig2}. Below we compare the numerically exact results in Figure \ref{fig:fig2} against peak strengths and positions calculated using the analytical forms of the resonant manifold Hamiltonians in Eqns.~\ref{eq8} and \ref{eq12}. These comparisons are summarized in Table 1. Analytical comparisons serve a dual purpose. Firstly, since an exact analytical solution of the full Hamiltonian in the intermediate coupling regime is not possible, a comparison of the exact results to those expected from \textit{only} considering the resonant manifold, while ignoring the remaining manifolds, can help estimate the perturbative effect of the remaining vibronically coupled manifolds. Secondly, an analytical approach which can well approximate the exact numerical results can be useful to elucidate the differences which arise between the exact and one-particle treatments. \\ \begin{figure*}[h!] \centering \includegraphics[width=2.5 in]{fig2_abs_ems.png} \caption{\footnotesize A comparison of absorption and emission intensities and cross-sections, calculated using an exact (top), one-particle (middle), and one-particle description with modified resonance criterion (bottom). The spectra correspond to the dimer model with vibronic resonance presented in Section 2. The `sticks' correspond to the transition strengths, and the lineshapes are the corresponding cross-sections. The cross-sections are normalized to the transition strength of the lowest exciton. The spectra are calculated at 4K, such that all transitions start from the ground electronic and vibrational state. Blue and red curves denote absorption and emission cross-sections, respectively. The peak positions and strengths are listed in Table 1. (\textbf{Top}) Spectra calculated using an exact basis set description of the dimer. (\textbf{Middle}) Dimer spectra calculated using the one-particle basis set.
(\textbf{Bottom}) Dimer spectra calculated under the one-particle basis set, but with the vibrational frequency lowered to 175 cm$^{-1}$ in order to artificially bring the donor exciton state into resonance with the quantum of excitation on the acceptor pigment.} \label{fig:fig2} \end{figure*} Starting from Eqn.~\ref{eq5}, the electronically off-diagonal vibronic coupling elements couple the isoenergetic states comprising the \textit{resonant} manifold, while the electronically diagonal but vibrationally off-diagonal elements only couple the vibrational manifolds on the exciton differing by a quantum of vibrational excitation, for instance, the $3\times3$ manifold with the $5\times5$ manifold in Figure \ref{fig:fig1}. For the small FC displacements considered here, the perturbative effect from the latter coupling elements on the energy \textit{gaps} can be ignored up to second order in perturbation theory\cite{Sinanoglu}. The position of the lowest exciton is thus predicted to be $\omega_{eg}-\frac{\Delta_{ex}}{2} - \frac{1}{2}\omega d^2(1-\cos[2](\theta_{d})\sin[2](\theta_{d})) = 11469.5$ cm$^{-1}$, where the last term is the \textit{energetic offset} to the entire spectrum arising from second-order perturbations. This is within 0.02 cm$^{-1}$ of the numerically exact result. The strength of the ground to lowest exciton transition is $0.979$, compared to the expected FC intensity $e^{-(d^{\alpha}_{A})^2/2}\,e^{-(d^{\alpha}_{B})^2/2} = 0.981$. The intensity of the 0-1 vibrational satellite in the emission spectrum arising from the lowest exciton, located at 11269.5 cm$^{-1}$, also conforms to the analytical value of $0.0192$ calculated using the FC factors arising from $d^{\alpha}_{A}$ and $d^{\alpha}_{B}$. As discussed in previous reports\cite{Schulze2014,Tiwari2018,Spano2009}, the reduced intensity of the vibrational satellite in the emission spectrum, compared to what is expected from an isolated monomer, is indicative of exciton delocalization. Features in the upper exciton region arising from considering only the $3\times3$ resonant manifold in Eqn.~\ref{eq8} -- such as the intensity of the 0-1 vibrational progression of the lowest exciton, the vibronic splitting of $\omega d \sin(2\theta_d)$ = 29.6 cm$^{-1}$, along with the predicted intensities of the split peaks -- are in good agreement with the numerically exact results, and are summarized in Table 1. A similar analysis has been presented in ref.~\cite{Tiwari2018} using a delocalized vibrational basis. The analysis presented here is in the localized vibrational basis, and also considers the energetic offsets coming from second order perturbations.\\ \begin{table} \begin{tabularx}{\linewidth}{ c *{5}{C} } \toprule Transition & \multicolumn{2}{c}{Peak Positions (in cm$^{-1}$)} & \multicolumn{3}{c}{Line Strengths} \\ \midrule No.
& Numerical & Analytical & Numerical & Analytical & \%$|$error$|$ \\ \hline \\ 0 & 11469.5 & 11469.5 & 0.9792 & 0.9806 & 0.2 \\ 1$_{ems}$ & 11269.5 & 11269.5 & 0.0195 & 0.0192 & $<$1.5 \\ \hline \\ 1 & 11654.9 & 11654.7 & 0.4767 & 0.4939 & 3.4 \\ 2 & 11669.5 & 11669.5 & 0.0122 & 0.0123 & $<$0.2 \\ 3 & 11684.1 & 11684.3 & 0.5104 & 0.4939 & 3.3 \\ \hline \\ 4 & 11849.2 & 11848.6 & 0.0056 & 0.0054 & 3.6 \\ 5 & 11854.9 & 11854.7 & 0.0060 & 0.0062 & 3.3 \\ 6 & 11869.5 & 11869.5 & $<10^{-4}$ & $<10^{-4}$ & NA \\ 7 & 11884.1 & 11884.3 & 0.0064 & 0.0062 & 3.1 \\ 8 & 11889.8 & 11890.4 & 0.0032 & 0.0054 & 41 \\ \bottomrule \end{tabularx} \caption{Comparison of exact (numerical) versus analytically calculated peak positions and line strengths in the linear spectra of the dimer described in an exact basis set and plotted in Figure \ref{fig:fig2}(top). 1$_{ems}$ denotes the transition corresponding to the 0-1 emission vibrational satellite.}\label{table1} \end{table} Table 1 also compares the numerical versus analytical peak positions and strengths for transitions to the $5\times5$ resonant manifold. Eqn.~\ref{eq10} predicts two pairs of vibronic splittings, of 29.6 cm$^{-1}$ and 41.8 cm$^{-1}$, compared to 29.2 cm$^{-1}$ and 40.6 cm$^{-1}$ obtained numerically by considering all manifolds. The peak intensity of the 0-2 FC transition, located at 11869.54 cm$^{-1}$, is less than 10$^{-4}$, as expected\cite{Tiwari2018} analytically as well. The transition strengths in the 5$\times$5 manifold can be calculated by evaluating the FC factors between the ground vibrational state and the analytical eigenvectors expected from the Hamiltonian in Eqn.~\ref{eq10}. Such an analysis yields analytical intensities of 0.0062 for the peaks split by $\sim$29 cm$^{-1}$, compared to 0.0060 and 0.0064 obtained numerically for the lower and upper split peak, respectively. For the peaks split by $\sim$41 cm$^{-1}$, analytical transition strengths of 0.0054 are expected. Numerically, these peaks have strengths of 0.0056 and 0.0032, for the lower and upper split peak, respectively.\\ Note that only the 2$^{nd}$ order perturbations to the eigenvalues of the resonant manifolds from the non-resonant neighboring manifolds are considered in the above analysis, while the perturbations to the eigenfunctions of the 3$\times$3 or 5$\times$5 Hamiltonians, which can contribute to the expected line strengths, are ignored. Despite that, transition strengths calculated from the reduced 5$\times$5 Hamiltonian in Eqn.~\ref{eq10} are generally in agreement with the numerically exact results to within 4$\%$, and peak splittings to within $\sim$1 cm$^{-1}$, indicating negligible perturbations from non-resonant states to the resonant manifolds. Note that the transition strength of the highest peak in the 5$\times$5 manifold is almost 1.7x smaller than expected analytically. This reflects the perturbative effect of the set of 7 resonant states of the higher manifold on the energetically closest state of the 5$\times$5 manifold.\\ \textit{The above comparisons underscore the point that for weak FC displacements, the reduced 3$\times$3 and 5$\times$5 Hamiltonians in the diabatic excitonic basis can \textit{analytically} describe the effects of resonant vibronic coupling to within 4$\%$ of the exact result obtained by numerical diagonalization of the full Hamiltonian.
However, as vibronically split states between neighboring manifolds become energetically closer, a reduced analytical treatment of only the resonant manifolds in the Hamiltonian is expected to break down.} \subsection{One-Particle Linear Spectra -- Reduced Analytical Description}\label{1pa} The dimer spectrum calculated under the 1PA description is shown in the middle panel of Figure \ref{fig:fig2}. As expected from the modified near-resonant manifold and the reduced vibronic coupling in the 1PA description, the peak positions and intensities in the upper exciton region are significantly different from the exact description. The lineshapes suggest that several such differences can be easily obscured even at cryogenic temperatures. Below we rationalize the observed differences using a reduced analytical description for the 1PA case.\\ The 2$\times$2 near-resonant manifold is shaded in grey in the right panel of Figure \ref{fig:fig1}. Based on the one-particle Hamiltonian in Eqn.~\ref{eq11}, the near-resonant manifold is coupled to the basis states $\alpha_{00}$ and $B_{01}$ through electronically diagonal but vibrationally off-diagonal coupling terms $-\omega d \cos(\theta_{d})/\sqrt{2}$. Coupling matrix elements between higher vibrational states, such as $A_{10}$, $A_{20}$, or $B_{01}$, $B_{02}$, remain unchanged at their undisplaced-basis values $-\omega d \bra{\nu+1}\hat{q}\ket{\nu}$. Thus, within a given electronic sub-space, the concept of an effective FC displacement in going from the site diabatic to the excitonic basis is not well defined in a 1PA description, and FC factors with effective displacements in the site exciton basis cannot be used to predict line strengths. Therefore, estimation of the line strengths of the lowest exciton and the vibrational satellites in the emission spectrum will be done by perturbatively correcting the wavefunction up to 1$^{st}$ order. In order to analytically describe the peak positions in the 1PA spectra, we will follow the same approach as for the exact description, and consider the 2$^{nd}$ order perturbative effect of the upper states on the lowest exciton peak position. The near-resonant manifold is still described only by the 2$\times$2 reduced manifold. All the comparisons of numerical versus analytical results for transitions between $G_{00}$ and the 1PA manifolds in the right panel of Figure \ref{fig:fig1} are summarized in Table 2. The corresponding transitions are shown in the middle panel of Figure \ref{fig:fig2}. \\ \begin{table} \begin{tabularx}{\linewidth}{ c *{5}{C} } \toprule Transition & \multicolumn{2}{c}{Peak Position (in cm$^{-1}$)} & \multicolumn{3}{c}{Line Strengths} \\ \midrule No. & Numerical & Analytical & Numerical & Analytical & \%$|$error$|$ \\ \hline \\ 0 & 11469.8 & 11469.8 & 0.9819 & 0.9821 & 0.02 \\ 1$_{ems}$ & 11269.8 & 11269.8 & 0.0171 & 0.0173 & 1.2 \\ \hline \\ 1 & 11664.7 & 11665.6 & 0.8475 & 0.8727 & 2.9 \\ 2 & 11697.1 & 11699.1 & 0.1411 & 0.1273 & 7.8 \\ \hline \\ 3 & 11844.3 & 11844.0 & 0.0288 & 0.0286 & 0.7 \\ 4 & 11894.0 & 11894.0 & 0.0003 & 0.0003 & NA \\ \bottomrule \end{tabularx} \caption{Comparison of exact (numerical) versus analytically calculated peak positions and line strengths in the linear spectra of the dimer described in a one-particle basis set and plotted in Figure \ref{fig:fig2}(middle).
1$_{ems}$ denotes the transition corresponding to the 0-1 emission vibrational satellite.}\label{table2} \end{table} The analytically expected position of the lowest exciton is calculated as -- \begin{equation} \omega_{eg}-\frac{\Delta_{ex}}{2} - \frac{1}{2}\omega d^2\left(\frac{\omega \cos[2](\theta_{d})}{\omega + (\frac{\Delta_{ex} - \Delta}{2})} + \frac{\omega \sin[2](\theta_{d})}{\omega + (\frac{\Delta_{ex} + \Delta}{2})}\right), \label{eq13} \end{equation} where the last term arises due to second order perturbation from the FC displacement dependent second term in Eqn.~\ref{eq11}. In order to calculate the line strength for the lowest exciton and its vibrational satellite in the emission spectrum, we consider the 1$^{st}$ order perturbative corrections to $\alpha_{00}$ from the states $A_{10}$ and $B_{01}$, given by -- \begin{eqnarray} \ket{0} &=&\ket{\alpha_{00}} +\big(\frac{-\omega{d}\cos(\theta_{d})/\sqrt{2}}{\epsilon_{\alpha_{00}}-\epsilon_{A_{10}}}\big)\ket{A_{10}} +\big(\frac{\omega{d}\sin(\theta_{d})/\sqrt{2}}{\epsilon_{\alpha_{00}}-\epsilon_{B_{01}}}\big)\ket{B_{01}}. \label{eq14} \end{eqnarray} Note that the above state is not normalized. From Eqn.~\ref{eq14}, the transition strength from the ground state $G_{00}$ to the normalized lowest exciton is estimated to be 0.9821, which is within 0.02$\%$ of that calculated by numerical diagonalization. Similarly, the transition strength for the 0-1 emission satellite, located at 11269.8 cm$^{-1}$, is 0.0173, which is within 1.2$\%$ of the numerical result. \textit{Note that the 1PA description predicts $\sim$11$\%$ lower intensity of the 0-1 vibrational satellite in the emission spectrum compared to the exact description} -- 0.0171 versus 0.0195.\\ For the 2$\times$2 near-resonant manifold under the upper exciton, due to the modification of the resonance condition, the diagonal energy difference between the $\beta_{00}$ and $A_{10}$ states leads to a vibronic mixing angle of -- \begin{eqnarray} \theta_{VE}^{1pa} = \frac{1}{2} \tan^{-1} \left(\frac{\omega d \sin(2\theta_{d})/(\sqrt{2}\cos(\theta_{d}))}{\omega - \frac{\Delta_{ex} + \Delta}{2}}\right), \label{eq15} \end{eqnarray} which yields $\theta_{VE}^{1pa}$ = 20.9$^o$. \textit{Note that the vibronic mixing angle in the exact description is 45$^o$. The mixing angle is substantially reduced because the 1PA description does not capture the vibronic resonance condition.} As a result, the vibronic splitting predicted by Eqn.~\ref{eq12} is $\left[(\omega - \frac{\Delta_{ex} + \Delta}{2})^2 + (\omega d \sin(2\theta_{d})/(\sqrt{2}\cos(\theta_{d})))^2\right]^{1/2} = 33.5$ cm$^{-1}$, compared to 32.4 cm$^{-1}$ obtained by numerical diagonalization of the full 1PA Hamiltonian.\\ Based on the above calculated vibronic splitting and the peak position of the lowest exciton, the approximate peak positions for the two peaks from Eqn.~\ref{eq12} are 11665.6 cm$^{-1}$ and 11699.1 cm$^{-1}$. Note that the above calculation of the peak positions assumes that the energetic offset imparted by the second order perturbative correction to the lowest exciton position also holds for the upper exciton states. However, the fact that the FC displacement dependent coupling terms are different for the 4$\times$4 manifold versus higher vibrational states implies that the above assumption, which was exact under the two-particle basis set description of the dimer, will have limitations in the 1PA case.
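These 1PA estimates follow directly from Eqns.~\ref{eq12} and \ref{eq15}; a minimal numerical sketch (energies in cm$^{-1}$) is --
\begin{verbatim}
import numpy as np

w, Delta, J = 200.0, 150.0, 66.14
D_ex    = 2 * np.sqrt((Delta / 2)**2 + J**2)
theta_d = 0.5 * np.arctan2(2 * J, Delta)
d       = np.sqrt(2 * 0.025)

detune = w - (D_ex + Delta) / 2             # 1PA off-resonance, 25 cm^-1
# numerator of Eq 15 (= twice the off-diagonal element of Eq 12)
coup2  = w * d * np.sin(2 * theta_d) / (np.sqrt(2) * np.cos(theta_d))
theta_VE = 0.5 * np.arctan2(coup2, detune)  # Eq 15 -> ~20.9 deg
split    = np.hypot(detune, coup2)          # -> ~33.5 cm^-1
print(np.degrees(theta_VE), split)
\end{verbatim}
Both printed values match the analytical estimates quoted above.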
The transition strengths from the ground state ($\ket{G_{00}}$) to these two vibronically mixed states are approximately $\cos[2](\theta_{VE}^{1pa}) = 0.8727$ and $\sin[2](\theta_{VE}^{1pa}) = 0.1273$, compared to $0.8475$ and $0.1411$ calculated numerically. \textit{Interestingly, unlike the exact description of the dimer, a 1PA description does not predict an FC vibrational progression for the lowest exciton under the upper exciton region.}\\ In order to analytically estimate the peak positions for transitions arising from $G_{00}$ to manifolds above the 2$\times$2 manifold in Fig.~\ref{fig:fig1}, we again assume the validity of the second order perturbative correction to the lowest exciton position. The expected positions of the next two peaks are estimated to be at $ 11469.8 + 375 = 11844.8$ cm$^{-1}$ and $ 11469.8 + 425 = 11894.8$ cm$^{-1}$. From the vibrational Hamiltonian in Eqn.~\ref{eq11}, the line strength for the $G_{00}$ to $B_{01}$ transition can be well estimated by considering the perturbative mixing of $B_{01}$ with $\beta_{00}$ up to 1$^{st}$ order, which is calculated to be $\left(\frac{\omega d \cos(\theta_d)/\sqrt{2}}{(\Delta_{ex} + \Delta)/2}\right)^2$. \textit{Note that the expected transitions in the 5$\times$5 exciton manifold, although very weak, show substantial deviations in the 1PA description.} \\ \subsection{Comparison of Exact versus One-Particle Linear Spectra} Based on the above discussion of the dimer linear spectra, it is seen that for the exact basis set description, analytically treating only the states in the resonant manifold, along with second order perturbative corrections to the energetic offsets of the analytic eigenstates, can reproduce the absolute peak positions and vibronic splittings typically to within 0.5 cm$^{-1}$ of those obtained from numerical diagonalization of the entire Hamiltonian. The line strengths obtained using this analytical description are typically within 4\% of the numerical results. Using a similar analytical approach for the 1PA description of the dimer shows that peak positions and vibronic splittings can be reproduced typically to within 1 cm$^{-1}$. Following the same approach, the line strengths in the 2$\times2$ near-resonant manifold come out to be within 8\% of the numerical result. Since there are no effective FC factors in the 1PA description, a first order perturbative treatment allows for estimation of the line strengths of the lowest exciton, and of its 0-1 vibrational satellite in the emission spectrum, to within 1\% of the numerical result. \\ Due to resonantly coupled manifolds maximizing the contributions from two-particle basis states, the absorption spectra from the two descriptions of the dimer show pronounced differences in peak positions, vibronic splittings and intensities. Below we discuss some of the features which may be observable in linear spectroscopic experiments at cryogenic temperatures, but are not reproduced by a 1PA description of vibronic resonance. \subsubsection{FC Progression of the Lowest Exciton}\label{fc} Ref.\cite{Tiwari2018} has discussed the effects of delocalized vibrations on hole-burning spectra. The holes created by anti-correlated vibrations are expected to be washed out because of energetic inhomogeneity in the vibronic splittings under the upper exciton. However, the position of the vibrational satellite peak, exactly 200 cm$^{-1}$ away from the lowest exciton, is not dependent on the anti-correlated inhomogeneity.
With a sufficiently high signal-to-noise ratio, this FC vibrational progression is expected to show up as a sharp satellite upon hole-burning the lowest exciton. However, as ref.\cite{Tiwari2018} points out, exciton delocalization along anti-correlated vibrational coordinates leads to a twofold reduction in the intensity of this feature in a dimer. A similar reduction is calculated here analytically using the effective FC factors associated with the $G_{00}$ to $(\alpha_{10} + \alpha_{01})/\sqrt{2}$ transition. The current analysis assumes perpendicularly arranged pigment transition dipoles, and hence constructive interference effects between transition dipoles are not considered. For the case when pigment transition dipoles are arranged as a J-aggregate, the lower exciton $\alpha$ gains maximum intensity while the upper exciton loses intensity due to destructive interference between the dipoles. The additional transition dipole strength gained by $\alpha$ counters the reduction in effective FC factors, such that the vibrational satellite feature in the hole-burning spectra of J-type dimeric aggregates can be up to 2x stronger, and more likely to be above the experimental noise floor. Further, the vibrational satellite is expected to increase as the size of the J-aggregate becomes larger. From Figure \ref{fig:fig2}, it is seen that the vibrational satellite feature of the lowest exciton is completely missed by a 1PA description of the dimer. Instead, an artifact feature, arising due to a transition from $G_{00}$ to a state of predominantly $B_{01}$ character and labeled as transition 3 in Table 2, attains an intensity as high as that expected from a true FC vibrational satellite. In the undisplaced vibrational basis, it can be seen that this transition is made possible due to mixing of $B_{01}$ with the upper exciton $\beta_{00}$, as seen in Eqn.~\ref{eq11}. It is therefore interesting to note that the strength of this feature is expected to decrease for J-aggregates because of a dark upper exciton, leading to a false suppression of the artifact. An opposite effect, that is, an increasing artifact intensity, is expected for H-aggregates. The missing FC satellite and the artifact indicated above may not be conspicuous under broad J- or H-bands in tubular aggregates at room temperature, where a 1PA description can provide qualitative experimental agreement\cite{Megow2016} with linear spectra. \\ Briggs and co-workers have investigated\cite{Briggs2008} the performance of the CES approximation, numerically equivalent to a 1PA description, in reproducing the linear absorption spectra of molecular aggregates with an intramolecular vibration, across the various coupling regimes classified by Simpson and Petersen\cite{Peterson1957}. As seen in Figures 7 and 8 of ref.~\cite{Briggs2008}, good agreement of the FC progressions between exact numerical diagonalization and the 1PA description is achieved for weakly coupled dimeric or larger J-aggregates. However, for J-aggregates in strong or intermediate coupling regimes, shown in Figures 5 and 6 of ref.~\cite{Briggs2008}, the FC progressions are not reproduced by the 1PA description for any aggregate size. For the photosynthetic excitons discussed here, intermediate coupling regimes, where electronic and vibrational-electronic couplings become comparable, are typical.
For the particular case of vibronic resonance discussed here, the contributions from two-particle states are maximized even in the weak coupling regime, due to resonant intensity borrowing from the upper exciton, suggesting that judging the efficacy of 1PA based on standard coupling criteria may not hold for the case of vibronic resonance. Petelenz et al. have analyzed\cite{Petelenz2007} a modified approach to 1PA, akin to the one adopted here, where couplings between 1PA basis states with different vibrational quanta are allowed, as opposed to a conventional 1PA approach where such coupling elements are not allowed. The modified 1PA approach accounts for a larger number of intermolecular interactions for the same reduced basis set description. They report that the modified 1PA description reproduces well the FC progressions in the polarized absorption spectrum for weak couplings, although higher FC progressions, such as the 5$\times$5 manifold discussed here, are not reproduced. For intermediate to strong coupling cases, the 1PA approach causes FC artifacts near the upper exciton, and intensity borrowing effects between the upper and lower excitons are not captured. For example, see the lowest panel of Figure 1 of ref.~\cite{Petelenz2007}. They have cautioned against relying on phenomenological line shapes to fit low resolution experimental absorption spectra. \subsubsection{0-1 Emission Vibrational Satellite of the Lowest Exciton}\label{ems} The 0-1 vibrational satellite in the low-temperature emission spectrum of molecular aggregates is an indicator\cite{Spano2009,Spano2011,Schulze2014,Tiwari2018} of exciton delocalization and coherence length. Taking into account the role of vibronic coupling when measuring enhanced radiative rates in molecular aggregates, Spano and co-workers have provided a direct determination\cite{Spano2011} of the exciton coherence length through the observed ratio of photoluminescence intensities in the 0$^{th}$ and 1$^{st}$ emission bands. Similar effects of exciton delocalization on the emission line strengths have been investigated\cite{Schulze2014,Tiwari2018} in the context of photosynthetic excitons. From the analysis of the emission intensity captured by the exact versus 1PA descriptions, shown in Tables 1 and 2 respectively, it is seen that the analytical approach presented here reproduces the expected intensity to within 1\%. For the exact description, the effective FC displacements in the diabatic exciton basis, $d_{A}^{\alpha}$ and $d_{B}^{\alpha}$, yield a 0-1 FC emission intensity $\frac{d^2}{2}\left({\cos[4](\theta_{d})+\sin[4](\theta_{d})}\right) e^{-\frac{d^2}{2}\left({\cos[4](\theta_{d})+\sin[4](\theta_{d})}\right)}$, which simplifies to the result of ref.~\cite{Tiwari2018}. This 0-1 emission intensity carries contributions from the two-particle basis states $A_{01}$ and $B_{10}$, which are neglected in the 1PA approximation of the lowest exciton derived in Eqn.~\ref{eq14}. Owing to these missing contributions from two-particle states, the 0-1 emission intensity in the 1PA description is 10\% lower than what is expected. Thus, estimating exciton coherence lengths through photoluminescence under a 1PA description overestimates exciton delocalization. Note that for the case of J or H aggregates, interference between one- and two-particle states, for example, constructive interference between the $A_{10}$ and $B_{10}$ states for the case of a J aggregate, can further exacerbate the effect of missing two-particle contributions in the emission intensities as well as in the polarized linear spectra.
Similar interference effects have been reported\cite{Spano2003} by Spano et al. in the context of $\pi$-conjugated oligomeric aggregates. \subsubsection{Intensity Borrowing in the Upper Exciton, and Width of Vibronic Resonance}\label{resonance} From Figure \ref{fig:fig1}, it is clear that the largest changes in the linear spectra caused by vibronic resonance occur under the upper exciton. The Hamiltonian in Eqn.~\ref{eq8}, obtained after the transformation $U_{3\times3}$, indicates that the vibronic splitting seen under the upper exciton occurs due to resonant intensity borrowing from the upper exciton $\beta_{00}$ to the lower exciton state $\frac{1}{\sqrt{2}}(\alpha_{10}-\alpha_{01})$. This is also discussed in ref.\cite{Tiwari2018} using a delocalized vibrational basis. Resonant intensity borrowing maximizes the contribution of optically dark two-particle states, such as the $B_{10}$ and $A_{01}$ basis states which participate in the exciton states $\alpha_{10}$ and $\alpha_{01}$, respectively. As seen in the right panel of Figure \ref{fig:fig1}, missing two-particle contributions lead to a modification of the resonance condition, such that the states $\beta_{00}$ and $A_{10}$ are off-resonant by $\left(\frac{\Delta_{ex}-\Delta}{2}\right)$, with their vibronic coupling reduced by a factor of $\sqrt{2}\cos(\theta_{d})$. Consequently, the intensity borrowing between the upper exciton and the vibrational quantum on the lower exciton is incomplete, resulting in only $\sim$14\% of the intensity redistributed to $A_{10}$ (compare the upper panel to the middle panel of Figure \ref{fig:fig2}). The vibronic splitting becomes 32.4 cm$^{-1}$, compared to 29.2 cm$^{-1}$ from an exact calculation, and the resulting peak positions under the upper exciton differ from the exact peak positions by as much as $\sim$13 cm$^{-1}$ (compare Tables 1 and 2).\\ The incomplete intensity borrowing is reflected by the vibronic mixing angle $\theta_{VE}$ in Eqn.~\ref{eq15}, which reduces from perfect mixing (45$^o$) to incomplete mixing (20.9$^o$). As discussed earlier, this reduction can be artificially compensated for by adjusting the vibrational frequency to $\omega - \left(\frac{\Delta_{ex}-\Delta}{2}\right) = 175$ cm$^{-1}$. The resulting spectrum is shown in the lower panel of Figure \ref{fig:fig2}. With this adjustment, the linear absorption spectrum shows approximately equal intensity vibronic splittings under the upper exciton, thus providing a qualitatively similar low-temperature absorption lineshape compared to the exact description. Features of the resulting spectrum, namely the intensities and positions of the vibronically split peaks, the FC vibrational progression artifact in absorption, and the reduced intensity of the 0-1 emission peak, are all consistent with those estimated from the reduced analytical approach discussed in Section \ref{1pa}. Note that adjustments to experimentally established resonance parameters cannot remedy the 0-1 FC artifact in absorption, and the reduced 0-1 emission intensity, in the one-particle description. In addition, adjusting the vibrational frequency to establish resonance leads to differences in the 0-1 emission peak position by as much as $\left(\frac{\Delta_{ex}-\Delta}{2}\right)$. \\ The vibronic splitting obtained after adjusting the vibrational frequency to achieve resonance is still reduced by a factor of $\sqrt{2}\cos(\theta_{d})$ compared to the exact description (in addition to the reduction in splitting due to the reduced vibrational frequency).
However, the vibronically split lineshapes under the upper exciton may obscure such differences of the 1PA description even at cryogenic temperatures. The B-term in asymmetric Raman scattering can lead\cite{LongBook} to anomalous depolarization ratios indicative of vibronic mixing. However, for the vibronically mixed pair of states considered here, both the exact and 1PA descriptions predict B-terms of opposite signs for the two states in the pair. Hence, asymmetric Raman scattering measurements under the upper exciton may not be able to resolve the vibronic mixing. Ref.~\cite{Tiwari2018} has discussed the physical significance of the width of vibronic resonance for photosynthetic pigments with a dense low-frequency vibrational spectrum\cite{Bocian1995,Freiberg2011}, where multiple near-resonant modes can contribute to vibronic mixing. However, without explicit adjustment of multiple experimental parameters, \textit{a 1PA description is expected to significantly underestimate the role of near-resonant vibrations in photosynthesis}. \subsection{Vibronic Resonance Enhances Population Transfer} Several previous studies\cite{Briggs1972,Briggs2005,Briggs2008,Schulze2014,Painelli2019} on comparisons between reduced and exact descriptions of molecular aggregates have relied on absorption and emission lineshapes in order to assess the quality of the 1PA approximation. As pointed out earlier\cite{Petelenz2007}, phenomenological fits to linear spectra using a reduced basis set description may yield qualitative agreement by obscuring the changes in transition strengths and vibronic splittings discussed above. Below we argue that such changes become apparent when the quantum dynamics expected from a 1PA description of vibronic resonance is compared with the exact description. \\ Following an earlier\cite{Peters2017} approach, in order to visualize the dynamics of vibronic excitons without the influence of the bath, we create a time-dependent superposition of excited state eigenvectors using an impulsive laser excitation. Since the bath vibration which couples strongly to the electronic Hamiltonian through resonant vibronic coupling is treated explicitly, the short-time dynamics will be dictated by this vibrational mode, while system-bath couplings, which couple weakly to this vibrational-electronic system, manifest on longer timescales. By ignoring the system-bath couplings, quantum relaxation processes, such as quantum decoherence and electronic population and vibrational relaxation, are not considered, such that the resulting wavepacket motions are purely dictated by the explicit vibrational-electronic system Hamiltonian. Differences in the dynamics between the 1PA and exact descriptions will then solely arise from contributions of two-particle states. Any differences in the wavepacket dynamics will ultimately reflect the changes seen in the vibrational-electronic manifold in the 1PA description (right panel of Figure \ref{fig:fig1}). \\ Under first-order time-dependent perturbation theory, the light-matter interaction connects the initial state $\ket{G_{\nu_A,\nu_B}}$ to a set of final states $\ket{\psi_n}$ with energies $E_n$.
This interaction can be expressed\cite{Jonas2003,Peters2017} by an operator $\mathbf{\hat{{I}}} + (1/i\hbar)\sum_{n}\ket{\psi_n}\bra{\psi_n}(-{\hat{\bm{\mu}}\cdot\vec{\bm{\epsilon}}})\ket{G_{\nu_A,\nu_B}}\bra{G_{\nu_A,\nu_B}}$, where $\hat{\bm{\mu}}$ denotes the operator for the transition dipole vector expressed in molecular coordinates, and $\vec{\bm{\epsilon}}$ is the unit vector for the electric field polarization in the laboratory coordinate frame. The electric field is assumed to be a spectrally constant delta function pulse with unit magnitude. The time-dependent wavepacket resulting from a linear combination of the projections of the Boltzmann-factor-weighted initial state on the excited state eigenvectors can be expressed as -- \begin{eqnarray} \ket{\psi(t)}&=& \frac{i}{\hbar}\sum_{n}\ket{\psi_n}\bra{\psi_n}{\hat{\bm{\mu}}\cdot\vec{\bm{\epsilon}}}\ket{G_{{\nu_A},{\nu_B}}}\exp(-iE_nt/\hbar)\rho_{{\nu_A},{\nu_B}}, \end{eqnarray} where $\rho_{{\nu_A},{\nu_B}}$ is the Boltzmann occupation probability for the initial state $\ket{G_{{\nu_A},{\nu_B}}}$. Note that the contributions to the wavepacket starting from all Boltzmann weighted ground electronic states are allowed to interfere, whereas transition amplitudes to pigments $A$ and $B$ do not interfere because of perpendicular pigment transition dipoles. For the case of a 1PA description of the dimer, only one-particle states such as $A_{\nu_A0}$ and $B_{0\nu_B}$ are considered. An electric field with polarization parallel to the donor pigment $B$ is used to excite the system and the resulting wavepacket is projected on the lowest acceptor state $A_{00}$, such that the square of this complex amplitude yields the time-dependent acceptor probability density, or population. The analytic forms of the eigenvectors $\psi_n$, derived using a reduced analytical treatment of the Hamiltonians in Eqns.~\ref{eq3} and \ref{eq4}, can be used to derive analytic expressions for the time-dependent probability density. \begin{figure*}[h!] \centering \includegraphics[width=3 in]{fig3_dynamics.png} \caption{A comparison of quantum coherent dynamics expected from a superposition of vibronic eigenvectors. The plots show the time-dependent population on the acceptor pigment following an impulsive excitation with a laser polarized parallel to the donor pigment. The `2PA', `1PA' and `1PA$_{-}$reso' legends correspond to the eigenvectors which give rise to the spectra in Figure \ref{fig:fig2}, top, middle and bottom panels, respectively. The parameters are described in Section \ref{params}. (\textbf{top}) Comparison of the exact versus 1PA description of the dynamics at 4K. The `1PA$_{-}$reso' dynamics corresponds to the case where the modified resonance condition in the 1PA description is compensated by artificially adjusting the vibrational frequency to bring it into resonance with the upper exciton. Vibronic resonance enhances population transfer such that $\sim$85\% of the population is transferred to the acceptor within 2.5 vibrational periods. (\textbf{bottom}) The above calculation at 300K. The 1PA description does not capture the contributions to population transfer arising from the 5$\times$5 and higher manifolds, shown in Figure \ref{fig:fig1}, because the absence of two-particle states causes the corresponding 1PA manifolds to be uncoupled.
In contrast, in the exact case (`2PA'), contributions from the 5$\times$5 manifold interfere constructively with those from the 3$\times$3 manifold, and lead to $\sim$88\% population transfer at 300K.} \label{fig:fig3} \end{figure*} Figure \ref{fig:fig3} (upper panel) shows the time-dependent population on the acceptor pigment after excitation of the donor pigment at 4K. Under the exact description for a dimer, $\sim$90\% of the population is transferred to the acceptor on timescales dictated by the 29.2 cm$^{-1}$ electronically off-diagonal vibronic coupling in Eqn.~\ref{eq8}. The faster oscillations correspond to coherent superpositions of purely electronic character, and oscillate at the exciton energy gap of 200 cm$^{-1}$. Due to only partial electronic mixing between the pigments, given by the diabatic mixing angle $\theta_d$, only $\sim$40\% of the population is transferred without vibronically assisted energy transfer. In comparison, the 1PA description, which does not capture the resonance condition correctly (Figure \ref{fig:fig1}), predicts only $\sim$60\% population transfer. As discussed in Section \ref{resonance}, this is also reflected by the lower vibronic mixing angle $\theta_{VE}^{1pa}$ in the 1PA description. When the vibronic resonance condition is modified by lowering the vibrational frequency to achieve resonance with the upper exciton (Section \ref{resonance}), the 1PA description predicts $\sim$95\% population transfer, although on a noticeably slower timescale (slower by a factor of $\sim \sqrt{2}\cos(\theta_{d})$). Thus, limitations of reduced basis sets in describing resonantly coupled vibronic excitons, which may not be apparent under phenomenological fits of linear spectral lineshapes even at cryogenic temperatures, are obvious when considering quantum dynamics. If the FC displacement is also adjusted to compensate for the reduction in the vibronic coupling matrix element, the 1PA description predicts dynamics similar to the exact description at 4K. Similar to the context of linear spectra in Section \ref{resonance}, adjustments to the experimental parameters dictating vibronic resonance can compensate for the missing two-particle basis states to reproduce the low-temperature quantum dynamics, although at the expense of large errors in the position of the 0-1 vibrational satellites in the emission spectrum (Section \ref{ems}). \\ The loss of vibronic coupling between higher manifolds in the 1PA description (Figure \ref{fig:fig1}) becomes apparent at higher temperatures. Figure \ref{fig:fig3} (lower panel) compares the above dynamics at 300 K, physiologically relevant for photosynthetic excitons. Based on the Boltzmann occupation probability for a dimer with a 200 cm$^{-1}$ vibration on each pigment at 300K, only $\sim$38\% of the contribution to the dynamics is expected to arise from transitions from $G_{00}$ to the 3$\times$3 manifold. Transitions from the ground states with one quantum of vibrational excitation, that is, $G_{10}$ and $G_{01}$, to the two pairs of vibronically coupled states in the 5$\times$5 manifold each contribute $\sim$14.6\%. Similarly, transitions from $G_{20}$, $G_{02}$ and $G_{11}$ to the three pairs of states in the 7$\times$7 manifold each contribute $\sim$5.5\%. Note that, as summarized in Figure \ref{fig:fig1}, the vibronic couplings between the extra pairs of vibronically coupled states in the 5$\times$5 and 7$\times$7 manifolds become stronger by a factor of $\sqrt{2}$, $\sqrt{3}$, etc. Thus, population transfer rates between these states will be proportionally faster. A minimal numerical sketch of this wavepacket construction, together with a check of the Boltzmann weights quoted above, is given below.
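The Python sketch below is an illustration under stated assumptions, not the code used to produce Figure \ref{fig:fig3}: the excited-state vibronic Hamiltonian, the dipole projection vectors and the basis ordering are placeholders to be supplied from the full two-particle Hamiltonian, while the Boltzmann-weight check assumes two uncoupled 200 cm$^{-1}$ modes at 300K, as in the text.
\begin{verbatim}
import numpy as np

# Boltzmann weights for two uncoupled 200 cm^-1 modes at 300 K:
kT = 0.695 * 300.0                        # cm^-1 (k_B ~ 0.695 cm^-1/K)
p = np.exp(-200.0 * np.arange(6) / kT)    # per-mode occupation factors
Z = p.sum()**2
print(p[0]**2/Z, p[0]*p[1]/Z, p[1]**2/Z)  # ~0.38, ~0.146, ~0.055 (see text)

def acceptor_population(H, mu_list, rho, a00, t_fs):
    """|<A_00|psi(t)>|^2 for a Boltzmann-weighted impulsive excitation.
    H: (N,N) excited-state vibronic Hamiltonian in cm^-1 (site basis);
    mu_list: per initial state G_v, the (N,) vector <basis|mu.eps|G_v>;
    rho: Boltzmann weights of the initial states;
    a00: (N,) unit vector selecting |A_00>; t_fs: times in fs."""
    E, V = np.linalg.eigh(H)              # eigenvalues E_n, eigenvectors psi_n
    phases = np.exp(-1j * np.outer(t_fs, E) / 5308.8)  # E_n t/hbar; 1/(2 pi c)
    proj = V.T @ a00                      # <psi_n|A_00>
    amp = np.zeros(len(t_fs), dtype=complex)
    for mu, w in zip(mu_list, rho):       # coherent, weighted sum over G_v
        amp += w * (phases @ (proj * (V.T @ mu)))
    return np.abs(amp)**2
\end{verbatim}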
The overall effect of the interference between the above contributions is shown in the lower panel of Figure \ref{fig:fig3}, and indicates constructive interference between the individual contributions at 300K, leading to $\sim$88\% population transfer. For both 4K and 300K, more than 85\% of the population is transferred within 2.5 vibrational periods. In contrast, the 1PA calculations, with and without the modified resonance condition, do not transfer population beyond $\sim$39\%. Incomplete population transfer is a direct manifestation of the uncoupled higher manifolds in the 1PA description, as shown in the right panel of Figure \ref{fig:fig1}. In the case of linear spectra, broad lineshapes at room temperature will completely obscure any features of vibronic resonance missed by a 1PA description, whereas the limitations of 1PA become evident in room-temperature quantum dynamics arising from vibronic resonance. Note that in the case of vibrations with larger Huang-Rhys factors, such as those in organic polymers, the limitations of reduced basis set descriptions in capturing the dynamics may become apparent even at lower temperatures. \\ Roden et al. have analyzed\cite{Briggs2009} the dynamics of molecular aggregates coupled to an effective intramolecular vibrational mode, where the CES approximation, or a 1PA description, was found to be a good approximation for describing the exact quantum dynamics across coupling regimes. They have also reported that an intramolecular vibration can impede exciton propagation. Here we have shown that in the case of vibronic resonances in the system, not necessarily limited to a dimer, a reduced basis set description is not adequate to describe the dynamics. Further, vibronic resonance \textit{enhances} population transfer, and this effect can be described analytically with reasonably good accuracy, using the reduced forms of the vibronic eigenvectors derived in this paper. Vibronic resonance assisted population transfer is fundamental to the nature of resonant vibronic coupling itself, and is further discussed in the following section. \subsection{Vibronic Resonance Enhances Exciton Delocalization} The above calculations of linear spectra and quantum dynamics arising from vibronic excitons highlight several spectroscopic differences between an exact versus 1PA description, which may lead to incorrect estimation of physically relevant quantities such as exciton coherence length, energy transfer timescales, and the role of near-resonant vibrations, especially for systems with larger Huang-Rhys factors, or at higher temperatures. The remaining discussion in the paper summarizes the above expected differences in terms of two fundamental properties of excitons coupled through vibronic resonance, without resorting to calculations of temperature and dipole-orientation dependent spectroscopic signatures -- vibronic exciton delocalization, and the vibrational distortion associated with a delocalized excitation. \\ As mentioned in Section \ref{ems}, an experimental measure\cite{Spano2011} of exciton coherence length, that is, the number of aggregate sites over which the exciton is coherently delocalized, is the ratio of intensity in the low-temperature 0$^{th}$ and 1$^{st}$ emission bands. At higher temperatures, experimental estimations can become challenging due to broad lineshapes. Moreover, the reduced 0-1 emission intensity is a general feature of exciton delocalization, not specific to vibronic resonance.
The exciton coherence function is often used to theoretically estimate the extent of delocalization in the presence of energetic disorder and vibronic coupling. K\"{u}hn and Sundstr\"{o}m have shown\cite{Kuhn1997} that the initial exciton delocalization is reduced by coupling to vibrations (compare Figure 8 lower and middle panels in ref.~\cite{Kuhn1997}). Spano and co-workers\cite{Spano2011} have related the exciton coherence function to the experimentally measured 0-0 emission intensity and the exciton coherence length. The coherence function is sensitive to one-particle states and cannot capture the exciton delocalization in higher resonant manifolds caused by maximized contributions of two-particle states. Instead, we use another widely used metric to gauge exciton delocalization, the inverse participation ratio (IPR). The participation ratio was originally defined by Bell et al.\cite{Bell1970} in the context of delocalized normal modes in a glass lattice, and later extended by Thouless\cite{Thouless1974} to study extended and localized states of non-interacting electrons in a disordered lattice. For a purely electronic system of $N$ sites, the IPR is defined to vary between 1 and $1/N$, for a completely localized system, that is, zero electronic coupling between sites, and a perfectly delocalized wavefunction, respectively. Womick and Moran have defined the IPR for vibronic exciton models where certain vibrations are explicitly treated in the system. The eigenvectors $\psi_n$ of the vibronic Hamiltonian can be expanded in the site diabatic basis as -- \begin{equation} \ket{\psi_n} = \sum_{S=A,B} \sum_{\nu_A,\nu_B} c_{S_{{\nu_A},{\nu_B}}}^n\ket{S_{{\nu_A},{\nu_B}}} \label{eq17} \end{equation} where $S$ denotes sites $A$ or $B$, with basis states $\ket{S_{{\nu_A},{\nu_B}}}$. The IPR is then defined as -- \begin{equation} \mbox{IPR}_n = \sum_{S}\left(\sum_{{\nu_A},{\nu_B}}(c_{S_{{\nu_A},{\nu_B}}}^n)^2\right)^2 \end{equation} With the above definition, we can analytically calculate the IPR using the reduced analytical description for the eigenvectors. Due to the 4$^{th}$ power on the coefficients, the 1$^{st}$ order perturbative effect of the neighboring manifolds on the IPR will be of the order of $(d/\sqrt{2})^4$ and can be ignored. The IPR for the lowest exciton $\alpha_{00}$, denoted by IPR$_0$, can then be calculated as -- \begin{equation} \mbox{IPR}_0 = \cos[4](\theta_{d}) + \sin[4](\theta_{d}) = 0.78 \end{equation} From the above definition, the minimum possible IPR of 0.5 for a dimer, corresponding to maximal delocalization, is obtained for the perfect mixing angle $\theta_{d} = 45^o$. Since the lowest exciton does not have contributions from two-particle states, IPR$_0$ remains the same under the 1PA description as well. The reduced analytical forms of the 3$\times$3 manifold eigenvectors mentioned below Eqn.~\ref{eq8}, and labeled here as $\ket{\psi_1}$, $\ket{\psi_2}$ and $\ket{\psi_3}$, can be used to analytically estimate the IPRs -- IPR$_{1,3}$ = 0.5, whereas IPR$_2$ = 0.78. It is seen that resonant vibronic mixing enhances the imperfect electronic mixing between the pigments $A$ and $B$ into perfectly delocalized vibronic excitons. In the linear absorption spectrum in Figure \ref{fig:fig2} (upper panel), this effect manifests as near perfect intensity borrowing under the upper exciton. State $\ket{\psi_2}$, which, according to the Hamiltonian in Eqn.~\ref{eq8}, does not participate in vibronic mixing, continues to be only partially delocalized, and appears only as a FC vibrational satellite of the lower exciton. A minimal numerical sketch of the IPR evaluation is given below.
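In the Python sketch below, the general function applies to any eigenvector in the site diabatic basis; the check uses the hedged two-coefficient form of the lowest exciton, $\cos(\theta_{d})\ket{A_{00}} + \sin(\theta_{d})\ket{B_{00}}$, with an illustrative $\theta_{d} \approx 20.8^o$ (an assumption chosen to reproduce IPR$_0 \approx 0.78$).
\begin{verbatim}
import numpy as np

def ipr(c, site):
    """IPR of a vibronic eigenvector: c[k] are real coefficients in the
    site diabatic basis |S_{vA,vB}>; site[k] in {0,1} labels pigment A or B."""
    p = np.array([np.sum(c[site == s]**2) for s in (0, 1)])  # per-site weight
    return np.sum(p**2)

# Check: lowest exciton alpha_00 ~ cos(theta_d)|A_00> + sin(theta_d)|B_00>
theta_d = np.deg2rad(20.8)                 # illustrative mixing angle
c0 = np.array([np.cos(theta_d), np.sin(theta_d)])
print(ipr(c0, np.array([0, 1])))           # ~0.78, matching IPR_0 above
\end{verbatim}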
It is instructive to see how the loss of two-particle basis states affects the IPR. Expressing the eigenvectors $\ket{\psi_1}$ and $\ket{\psi_2}$ of the 2$\times$2 1PA Hamiltonian (Eqn.~\ref{eq12}) in terms of the 1PA vibronic mixing angle $\theta_{VE}^{1pa}$ (Eqn.~\ref{eq15}) -- \begin{align} &\ket{\psi_1} = \sin(\theta_{VE}^{1pa})\ket{A_{10}} + \cos(\theta_{VE}^{1pa})\ket{\beta_{00}} \nonumber \\ &\ket{\psi_2} = \cos(\theta_{VE}^{1pa})\ket{A_{10}} - \sin(\theta_{VE}^{1pa})\ket{\beta_{00}}, \nonumber \end{align} the IPR can be calculated as -- \begin{eqnarray} \mbox{IPR}_{1}^{1pa} &=& \left(\sin[2](\theta_{d})\cos[2](\theta_{VE}^{1pa}) + \sin[2](\theta_{VE}^{1pa})\right)^2 + \cos[4](\theta_{d})\cos[4](\theta_{VE}^{1pa}) \nonumber\\ \mbox{IPR}_{2}^{1pa} &=& \left(\sin[2](\theta_{d})\sin[2](\theta_{VE}^{1pa}) + \cos[2](\theta_{VE}^{1pa})\right)^2 + \cos[4](\theta_{d})\sin[4](\theta_{VE}^{1pa}). \label{eq20} \end{eqnarray} From Eqn.~\ref{eq20}, IPR$_{1}^{1pa}$ and IPR$_{2}^{1pa}$ are calculated to be 0.64 and 0.80, respectively, which are both within 5\% of the values obtained by numerical diagonalization. Compared to the exact description, a modified resonance condition results in $\ket{\psi_1}$ and $\ket{\psi_2}$ not being perfectly delocalized excitons. On average, the exciton delocalization captured under 1PA is smaller by a factor of $\sim$2. When the vibrational frequency is adjusted to compensate for the modified resonance condition in the 1PA description, the vibronic mixing angle $\theta_{VE}^{1pa}$ within the 2$\times$2 manifold increases back to 45$^o$. Correspondingly, the analytically calculated IPRs become 0.51, both within 2\% of the values obtained by numerical diagonalization of the full 1PA Hamiltonian. \\ \begin{figure*}[h!] \centering \includegraphics[width=3.5 in]{fig4_ipr.png} \caption{Inverse Participation Ratio (IPR) for different vibronic exciton eigenvectors of increasing energy, denoted by the vibronic state index. The vibronic state index corresponds to the states shown in Figure \ref{fig:fig1}, in increasing order of energy. The `2PA', `1PA' and `1PA$_{-}$reso' cases correspond to the linear spectra plotted in Figure \ref{fig:fig2}, top, middle and bottom panels, respectively. The parameters are described in Section \ref{params}. Only the first 16 vibronic eigenvectors are shown for each case.} \label{fig:fig4} \end{figure*} The IPR calculations using the full Hamiltonian are shown in Figure \ref{fig:fig4}, and contrast the exciton delocalization effects not described under 1PA. In line with the analytical calculations, the lowest exciton is well described under the 1PA description. For the exact description, it is seen that one of the states in the resonant manifolds 3$\times$3, 5$\times$5, 7$\times$7, etc. does not contribute to the vibronic mixing of the $\alpha$ and $\beta$ excitons and remains only partially delocalized due to the disorder $\Delta$ between the pigment sites. Ref.~\cite{Tiwari2018} has described this state as having no vibrational excitation along the anti-correlated delocalized vibrational mode. Despite the site energetic disorder, all the remaining vibronically mixed excitons are perfectly delocalized due to vibronic resonance. This is counterintuitive to the idea that energetic disorder and scattering with phonons slow down exciton propagation, causing localization\cite{Briggs2009}. In the case of \textit{resonant} vibronic mixing, it is seen here that \textit{energetic disorder and vibrational excitations can synergistically overcome the effect of disorder}.
In contrast to the above, the 1PA description captures exciton delocalization only when the resonance condition is artificially adjusted, at the expense of substantially modifying linear spectroscopic features, such as the 0-1 emission peak position. Note that a 1PA description, even with modified parameters, does not describe exciton delocalization in the 5$\times$5 and higher resonant manifolds, and may not be suitable when describing extended systems with multiple pigments and site energetic disorder, or excited state relaxation mechanisms in a vibronic dimer. \FloatBarrier \subsection{Vibrational Distortion Radius} In order to analytically treat the intermediate coupling regime, McRae developed\cite{McRae1963} an approximation scheme where the effect of two-particle states, which become crucial in the intermediate coupling regime, is treated as a 1$^{st}$ order perturbative correction to the 0$^{th}$ order `m-m' wavefunctions, or one-particle basis states, of the weak electronic coupling regime. Since two-particle states, and in general $n$-particle states, allow for a system to be vibrationally distorted out of the equilibrium geometry away from the site of electronic excitation, McRae has defined a vibrational distortion radius to quantify the region of molecular distortion around the electronically excited site. Similar definitions have also been provided by Soos et al.\cite{Soos2002}, and more recently by Spano and co-workers\cite{Spano2018} in the context of J- or H-aggregates of organic polymers. Following the earlier definitions, the dimensionless nuclear distortion associated with the vibronic eigenvector $\ket{\psi_n}$ in Eqn.~\ref{eq17} can be written as -- \begin{equation} D_n(i) = \bra{\psi_n}\sum_{S=A,B}\ket{S}\bra{S}\frac{\hat{q}_{S+i}}{\sqrt{2}}\ket{\psi_n}, \label{eq21} \end{equation} where $D_n(i)$ measures the dimensionless nuclear displacement from the ground state equilibrium nuclear geometry, $i$ sites away from the site of electronic excitation $S$. For a dimer, $i$ is either 0 or 1, such that $\hat{q}_{A+1} \equiv \hat{q}_B$, and vice versa. For a system of isolated molecules $A$ and $B$, each with a FC displacement $d$, $D(i=0) = d/\sqrt{2}$ and $D(i=1) = 0$, for either molecule. Substituting the eigenvectors defined in Eqn.~\ref{eq17} into Eqn.~\ref{eq21} leads to -- \begin{eqnarray}\label{eq22} D_n(0) =\frac{1}{2}\sum_{\nu_A,\nu_A'}\sum_{\nu_B,\nu_B'}\bigg({\sqrt{\mbox{max}(\nu_A,\nu_{A'})}} {c_{A\nu_A,\nu_B}^n} {c_{A\nu_A'\nu_B'}^n}\delta_{\nu_A\pm1,\nu_A'}\delta_{\nu_B,\nu_B'} \\ \nonumber + \sqrt{\mbox{max}(\nu_B,\nu_{B'})} {c_{B\nu_A\nu_B}^n}{c_{B\nu_A'\nu_B'}^n}\delta_{\nu_A,\nu_A'}\delta_{\nu_B\pm1,\nu_B'}\bigg) \end{eqnarray} and \begin{eqnarray}\label{eq23} D_n(1) =\frac{1}{2}\sum_{\nu_A,\nu_A'}\sum_{\nu_B,\nu_B'}\bigg({\sqrt{\mbox{max}(\nu_B,\nu_{B'})}} {c_{A\nu_A,\nu_B}^n} {c_{A\nu_A'\nu_B'}^n}\delta_{\nu_A,\nu_A'}\delta_{\nu_B\pm1,\nu_B'} \\ \nonumber + \sqrt{\mbox{max}(\nu_A,\nu_{A'})} {c_{B\nu_A\nu_B}^n}{c_{B\nu_A'\nu_B'}^n}\delta_{\nu_A\pm1,\nu_A'}\delta_{\nu_B,\nu_B'}\bigg) \end{eqnarray} \begin{figure*}[h!] \centering \includegraphics[width=3.5 in]{fig5_vdf.png} \caption{Vibrational distortion radius around the electronically excited site, $D(0)$, and the electronically unexcited site, $D(1)$, calculated for the dimer model considered here. The distortion radius is calculated in dimensionless displacement units using Eqns.~\ref{eq22} and \ref{eq23}.
The `2PA', `1PA' and `1PA$_{-}$reso' cases correspond to the exact description using two-particle states, the one-particle description, and the one-particle description with an adjusted resonance condition. The linear spectra corresponding to these cases are shown in the top, middle and lower panels of Figure \ref{fig:fig2}, respectively. The vibronic state index corresponds to the states shown in Figure \ref{fig:fig1}, in increasing order of energy. Note that for a 1PA description, vibrational distortions on electronically unexcited sites are zero, and are not plotted here. The dashed line shows the total distortion for the `2PA' case, that is, the sum of the `2PA $D(0)$' and `2PA $D(1)$' cases, and is constant at $d/\sqrt{2}$, where $d$ is the dimensionless FC displacement of an isolated monomer. Only the first 16 vibronic eigenvectors are shown for each case.} \label{fig:fig5} \end{figure*} \FloatBarrier Note that the above expression for the vibrational distortion radius is written in the undisplaced vibrational basis. Figure \ref{fig:fig5} shows the vibrational distortion radii around the electronically excited and unexcited sites, $D_n(0)$ and $D_n(1)$, respectively, calculated for different vibronic eigenvectors. The vibronic state index corresponds to the manifolds shown in Figure \ref{fig:fig1}. For the lowest exciton $\alpha_{00}$, the vibrational distortions on the site of excitation for all cases are within $\sim$6\% of each other. Under the exact description, the perturbative effect of two-particle states on the lowest exciton leads to a non-zero vibrational distortion away from the site of excitation as well. However, such distortions are restricted to zero in the 1PA description. \\ For states in the higher manifolds, exact calculations show increasingly larger vibrational distortions. For example, as discussed in Section \ref{theory}, in the 3$\times$3 resonant manifold, one pair of states is mixed by electronically off-diagonal vibronic coupling, while one of the vibronic eigenvectors does not mix with exciton $\beta$. Figure \ref{fig:fig5} shows that the vibrational distortion experienced by the dimer system for this unmixed eigenvector is the same as that of the lowest exciton, whereas the pair of mixed eigenvectors is equally distorted away from the unmixed eigenvector (compare the 2$^{nd}$ and 4$^{th}$ red circles with the 1$^{st}$ and 3$^{rd}$ red circles). The vibronically unmixed eigenvectors in all the higher manifolds experience the same distortion as the lowest exciton, while the vibrational distortions in pairwise mixed excitons are successively larger. In contrast, for a 1PA description with no explicit modification to the resonance condition (`1PA $D_0$'), two major differences, apart from the vibrational distortion $D_1$ being restricted to zero, are seen -- 1. The pair of mixed eigenvectors (shown above Eqn.~\ref{eq20}), resulting from the 2$\times$2 Hamiltonian in Eqn.~\ref{eq12}, experiences significantly different vibrational distortions compared to exact calculations. The 2$^{nd}$ state overestimates the actual vibrational distortion, while the 3$^{rd}$ state underestimates the distortion on the site of electronic excitation. When the resonance condition is adjusted, both states overcompensate the actual vibrational distortion (compare the 2$^{nd}$ and 4$^{th}$ red points with the 2$^{nd}$ and 3$^{rd}$ blue and green points). 2.
In contrast to the increasing vibrational distortions in higher manifolds, the 1PA description predicts no distortions.\\ In order to analytically compare $D_n$ to the values calculated in Figure \ref{fig:fig5}, the perturbative effects of the neighboring vibrational manifolds have to be considered as well. For example, for the lowest exciton, a 1$^{st}$ order mixing of $\alpha_{00}$ with states separated by one vibrational quantum, such as $\alpha_{10}$ and $\beta_{10}$, as dictated by the Hamiltonian in Eqn.~\ref{eq5}, has to be considered. Taking all the perturbative interactions into account, Eqns.~\ref{eq22} and \ref{eq23} yield distortions which are delocalized over both sites as dictated by the diabatic mixing angle -- $D_0(0) = \frac{d}{\sqrt{2}}\cos[2](\theta_{d})$ and $D_0(1) = \frac{d}{\sqrt{2}}\sin[2](\theta_{d})$. Note that the total distortion stays the same as expected for an isolated molecule. Similar analytical considerations for the 3$\times$3 manifold eigenvectors require considering basis states in the 5$\times$5 manifold as well, and become increasingly cumbersome. Note that a similar calculation in the displaced vibrational basis avoids matrix elements resulting from interactions between manifolds, as those are already accounted for by the choice of basis. However, as mentioned earlier, an undisplaced vibrational basis allows one to visualize vibronic basis states coupled through \textit{direct} off-diagonal electronic couplings only, with no change in the initial and final vibrational quanta in the associated FC factors (compare the 8$\times$8 Hamiltonians in Sections S2 and S3 of the Supporting Information of ref.~\cite{Tiwari2018}). As a consequence, the analytic forms of the vibronic eigenvectors are considerably simpler in the undisplaced vibrational basis, and allow for comparisons to exact numerical diagonalization as discussed in Sections \ref{fc} and \ref{ems}. \\ For the same reason, the choice of an undisplaced vibrational basis also allows one to clearly rationalize the effect of electronically off-diagonal vibronic coupling on the vibrational distortion radius, without having to consider purely vibrational interactions with neighboring manifolds. For the vibronically mixed states $\psi_1$ and $\psi_3$ in the 3$\times$3 manifold, substituting the analytic eigenvectors below Eqn.~\ref{eq8} into Eqns.~\ref{eq22} and \ref{eq23} yields -- \begin{eqnarray} D_1(0) &=& \frac{\sin(2\theta_{d})}{\sqrt{2}}\sqrt{\frac{1}{4}} \\ \nonumber D_1(1) &=& -\frac{\sin(2\theta_{d})}{\sqrt{2}}\sqrt{\frac{1}{4}} \\ \nonumber D_3(0) &=& -\frac{\sin(2\theta_{d})}{\sqrt{2}}\sqrt{\frac{1}{4}} \\ \nonumber D_3(1) &=& \frac{\sin(2\theta_{d})}{\sqrt{2}}\sqrt{\frac{1}{4}} \end{eqnarray} Vibrational distortions in pairwise mixed vibronic eigenvectors are equal and opposite. A similar calculation for $\psi_2$ yields zero distortion, as expected in the absence of resonant vibronic mixing. In general, for higher resonant manifolds, the vibrational distortion in resonantly mixed eigenvectors increases as $\frac{\sin(2\theta_{d})}{\sqrt{2}}\sqrt{\frac{n_i}{4}}$, where $n_i$ ranges from 1 to the total number of vibrational quanta on the acceptor exciton in the respective manifolds. Thus, \textit{vibrational distortion is directly proportional to the strength of vibronic coupling.
Since vibronic coupling gets successively stronger in higher vibrational manifolds (Section \ref{theory} and Figure \ref{fig:fig1}), vibrational distortion in higher manifolds increases proportionally, as seen in Figure \ref{fig:fig5}.}\\ In the 1PA description of the dimer, $D(1) = 0$ due to the absence of two-particle states. For the 2$\times$2 1PA manifold (Eqn.~\ref{eq12}), $D_{1,2}(0)$ is calculated by substituting the corresponding eigenvectors (above Eqn.~\ref{eq20}) into Eqn.~\ref{eq22} -- \begin{eqnarray} D_1(0) &=& \sin(2\theta_{VE}^{1pa}) \sin(\theta_{d})\sqrt{\frac{1}{4}} \nonumber \\ D_2(0) &=& -\sin(2\theta_{VE}^{1pa}) \sin(\theta_{d})\sqrt{\frac{1}{4}} \end{eqnarray} It is seen that the vibrational distortion for the pairwise mixed states is reduced by a factor of $\sqrt{2}\cos(\theta_d)$. The same reduction in vibronic coupling was seen for the 1PA manifolds in Figure \ref{fig:fig1}. The additional reduction to $D(0)$ caused by the imperfect vibronic mixing angle $\theta_{VE}$ can be compensated by explicitly adjusting the vibrational frequency to achieve resonance between the $A_{10}$ and $\beta_{00}$ basis states. For a related dimer Hamiltonian, ref.~\cite{Peters2017} has calculated the time-dependent variance of a wavepacket created by a superposition of resonantly coupled non-adiabatic vibronic eigenvectors. Resonant non-adiabatic coupling drives the wavepacket to become significantly wider, up to $\sim$3x within 200 fs, than what is nominally expected from a ground state $\beta_{00}$ wavepacket (see Figure 8a of ref.~\cite{Peters2017}). Here we have calculated the underlying molecular distortions resulting from resonant non-adiabatic coupling, which ultimately reflect in the wavepacket motions. Under a 1PA description, even if explicit adjustment of the resonance conditions can allow for qualitative agreement of linear spectral lineshapes and population transfer dynamics with the exact description, the underlying molecular vibrational distortions are in significant disagreement with exact calculations. Biggs and Cina\cite{Cina2009} have discussed the influence of impulsive vibrational pre-excitation on the ground electronic state as a way to control excited state energy transfer in a dimer, where the excited state wavepacket amplitudes, not just population transfer rates, could be directly monitored through non-linear wavepacket interferometry. Based on the above considerations, the \textit{wavepacket motions and vibrational-electronic dynamics described under reduced basis set descriptions are expected to be fundamentally different from those expected from an exact description of vibronic resonance.} \\ \section{Conclusions} We have analyzed the validity of reduced basis set descriptions of a dimer with vibrational-electronic resonance, using experimentally dictated parameters typical for photosynthetic excitons. Using an analytical approach, valid as long as the effect of manifolds separated by a quantum of vibration can be treated perturbatively, we have shown that under vibronic resonance the contributions of two-particle states are maximized. Further, constructive interference between two-particle states leads to stronger vibronic couplings and a larger number of vibronically mixed states in successively higher resonant manifolds. In contrast, the absence of two-particle states in one-particle descriptions does not capture the above effects, such that a reduced basis set description is only suitable to partially describe the lowest near-resonant vibrational manifold.
Additionally, we have shown that a one-particle description significantly modifies the experimentally dictated vibronic resonance condition, and underestimates the physically significant width of vibronic resonance.\\ Comparisons of linear spectra calculated using numerical diagonalization of the full Hamiltonian show good agreement with analytically calculated transition intensities, peak positions and vibronic splittings, for both the exact and one-particle descriptions. We further show that subtle features, such as the FC progression of the lowest exciton and the 0-1 emission intensity from the lowest exciton, are incorrectly described by the 1PA description, leading to FC artifacts and incorrect estimations of exciton coherence length. For instance, a 10\% smaller 0-1 emission intensity as calculated with a one-particle basis set implies a proportional overestimation of exciton coherence length. Larger Franck-Condon vibrational displacements, and interference effects between pigment transition dipoles for the case of J- or H-aggregates, or between one- and two-particle states, are expected to cause larger deviations between one-particle and exact descriptions.\\ Features in the linear spectra which directly depend on vibronic resonance, such as the vibronic splittings and the strength of intensity borrowing under the upper exciton, are significantly different between exact and one-particle descriptions, with vibronic splittings and peak strengths differing by as much as 50\%. Further, the analytical form of the eigenvectors suggests that explicit adjustment of experimental parameters to compensate for the modified resonance condition can lead to qualitative agreement between exact and one-particle descriptions of absorption lineshapes and vibronic splittings. However, such adjustments lead to large deviations in the 0-1 emission peak positions, and do not remedy the FC artifacts and incorrect 0-1 emission intensities.\\ By comparing the exact versus one-particle wavepacket dynamics, we show that energetic disorder and vibrational-electronic coupling can synergistically maximize population transfer at vibronic resonance. A one-particle description of population transfer predicts a rate slower by a factor of $\sqrt{2}\cos(\theta_{d})$. Even though broad spectral lineshapes at room temperature completely obscure the expected differences in peak positions and intensities, we show that the effect of missing two-particle contributions in a reduced basis set description becomes evident in room-temperature wavepacket dynamics, where vibronic enhancement of population transfer can only occur in the presence of two-particle contributions. \\ We also show that the above spectral and dynamical differences seen in reduced basis set descriptions can be summarized by two fundamental properties unique to vibronic resonance -- the inverse participation ratio, and the molecular distortion radius. Using the inverse participation ratio as a metric for exciton delocalization, we show that vibronic resonance overcomes energetic disorder to cause all the resonantly mixed excitons to be perfectly delocalized over both pigments, while only partial delocalization is predicted by a reduced basis set description.
Using a vibrational distortion radius to quantify the molecular distortion experienced on different sites upon electronic excitation, we show that the distortion increases proportionally with the strength of resonant vibronic coupling, such that excitation in higher vibronic manifolds leads to successively larger vibrational distortions on the unexcited pigment sites. Vibrational distortions are significantly underestimated in reduced basis set descriptions and are not corrected even after adjustments to the experimental parameters which dictate vibronic resonance. Due to the significantly underestimated vibrational distortions in a one-particle description of vibronic resonance, reduced basis set schemes are fundamentally not expected to correctly describe the resulting wavepacket motions and vibrational-electronic relaxation processes, motivating effective-mode approaches\cite{Tiwari2017,Burghardt2005} for extended aggregates, which can reduce Hamiltonian dimensionality without oversimplification of spectra and dynamics. \section{Acknowledgments} VT would like to thank Prof. David M. Jonas for helpful discussions. AS would like to acknowledge a Junior Research Fellowship from the Indian Institute of Science (IISc). JSK would like to acknowledge an Inspire Fellowship from the Department of Science and Technology, India. VT would like to acknowledge IISc startup grant number SG/MHRD-18-0020. This project is supported by the Department of Atomic Energy, India under grant sanction number 58/20/31/2019-BRNS, and by the Science and Engineering Research Board, India under grant sanction number CRG/2019/003691.
\section{Introduction}\label{sec:introduction} As one of the most popular algorithms in computer vision to extract and encode local features, Scale Invariant Feature Transform (SIFT) \cite{lowe2004distinctive} has been proven to be very robust against various distortions \cite{mikolajczyk2005performance, qin2014towards} and has been widely employed in many practical scenarios, e.g., content based image retrieval (CBIR) \cite{amsaleg2001cbir, torres2006cbir2, torres2009cbir3}, object recognition \cite{rothganger2006object}, visual tracking \cite{gauglitz2011evaluation}, and image matching \cite{li2019fast}. Due to its extreme popularity, the privacy and security issues regarding the SIFT features have been attracting increasing attention. For instance, in our recent studies, it was demonstrated that SIFT keypoints can be maliciously removed and forged with negligible distortions on the original image, making the decisions from SIFT-based systems untrustworthy \cite{li2019fast, li2017removal}. \begin{figure}[t!] \begin{subfigure}{.115\textwidth} \centering \includegraphics[width=\textwidth]{imgs/celebahq_input_000009.jpg} \end{subfigure} \begin{subfigure}{.115\textwidth} \centering \includegraphics[width=\textwidth]{imgs/celebahq_our_000009.png} \end{subfigure} \begin{subfigure}{.115\textwidth} \centering \includegraphics[width=\textwidth]{imgs/imagenet_input_000100.jpg} \end{subfigure} \begin{subfigure}{.115\textwidth} \centering \includegraphics[width=\textwidth]{imgs/imagenet_our_000100.png} \end{subfigure} \newline \newline \newline \begin{subfigure}{.115\textwidth} \centering \includegraphics[width=\textwidth]{imgs/5k_input_000079.jpg} \end{subfigure} \begin{subfigure}{.115\textwidth} \centering \includegraphics[width=\textwidth]{imgs/5k_our_000079.png} \end{subfigure} \begin{subfigure}{.115\textwidth} \centering \includegraphics[width=\textwidth]{imgs/imagenet_input_000038.jpg} \end{subfigure} \begin{subfigure}{.115\textwidth} \centering \includegraphics[width=\textwidth]{imgs/imagenet_our_000038.png} \end{subfigure} \caption{Reconstruction results of our proposed model on images of face, wheel, indoor and outdoor scenes. In each pair, the left is the input SIFT features and the right is the reconstructed image.} \label{fig:heading_figure} \end{figure} Noticing that SIFT features are often exposed to untrustworthy parties, we in this work thoroughly evaluate the privacy leakage problem of SIFT features. More specifically, we consider the following two scenarios, where full or partial SIFT features can be accessed by an adversary: \begin{itemize} \item \textbf{Scenario I}: Both the SIFT descriptors and their coordinates are accessible to the adversary. For instance, in 3D point clouds based systems \cite{irschara2009sfm, sattler2011fast, li2012worldwide, lim2015real}, 3D object recognition \cite{lowe2001object_recog} and panoramic image stitching \cite{brown2007automatic}, users need to provide both the SIFT descriptors \emph{and} the coordinates, potentially leaking them out. \item \textbf{Scenario II}: Only SIFT descriptors \emph{or} their coordinates are accessible to the adversary. For instance, in many CBIR systems \cite{amsaleg2001cbir, torres2006cbir2, torres2009cbir3} and copy-move forgery detection systems \cite{pan2010copymove, amerini2011copymove, li2019fast}, it is sufficient to only provide the SIFT descriptors, while not their coordinates. 
\end{itemize} In order to evaluate the risk of information leakage from SIFT features, we need to know how much information is carried by them. A feasible solution to this question is to investigate to what extent the latent image can be recovered from these SIFT features, or local features in general. Along this line, several approaches \cite{weinzaepfel2011reconstructing, angelo2012beyond, vondrick2013hoggles, kato2014image, desolneux2017stochastic, mahendran2015understanding} have been devised to reconstruct the images from local features, mainly under the assumption of \textbf{Scenario I}, i.e., full features can be accessed. The pioneering study was conducted by Weinzaepfel \textit{et al.} \cite{weinzaepfel2011reconstructing}, who attempted to reconstruct the image from SIFT features through patch searching, pasting and smoothing. However, due to the sparse nature of local descriptors, only some rough contours can be recovered, while the fine textures are missing. Angelo \textit{et al.} \cite{angelo2012beyond} later proposed an inverse optimization framework for recovering the latent image from local binary descriptors, without relying on any external databases. Vondrick \textit{et al.} \cite{vondrick2013hoggles} addressed the problem of image reconstruction from histogram of oriented gradients (HOG) descriptors by using a dictionary representation. Through estimating the spatial arrangement of local descriptors over a large-scale image database, Kato \textit{et al.} \cite{kato2014image} presented a method to reconstruct the image from its Bag-of-Visual-Words (BoVW) feature. More recently, Desolneux \textit{et al.} \cite{desolneux2017stochastic} devised two reconstruction models for HOG features by adopting Poisson editing, capable of recovering global shapes and many geometric details. To further improve the reconstruction performance, there is a recent trend of using deep convolutional neural networks (CNNs) and generative adversarial networks (GANs) \cite{dosovitskiy2016inverting, mahendran2015understanding, wu2019image, pittaluga2019revealing}. Dosovitskiy and Brox \cite{dosovitskiy2016inverting} proposed a reconstruction approach from local features through an encoder-decoder neural network. Wu \textit{et al.} \cite{wu2019image} then improved it by introducing a GAN architecture and multi-scale feature generation. Further, Pittaluga \textit{et al.} \cite{pittaluga2019revealing} trained a cascade of U-Nets with extra convolutional layers to reveal scenes from the local features. Unfortunately, these methods tend to generate severe boundary artifacts and distorted structures. In this work, we first consider the case that the adversary can fully access the SIFT features (both descriptors and coordinates), i.e., under \textbf{Scenario I}. We thoroughly evaluate the privacy leakage of SIFT features by constructing a novel end-to-end, coarse-to-fine image reconstruction model, SIFT-LBP-Image (SLI), that consists of two networks. The first network, called the LBP reconstruction network, attempts to learn the structural information of the latent image by transforming SIFT features to LBP features, while the second one aims to reconstruct the pixel values guided by the learned LBP.
Extensive experiments on three publicly available datasets \texttt{CelebA-HQ} \cite{karras2017progressive}, \texttt{MD-NYU} \cite{pittaluga2019revealing} and \texttt{ImageNet} \cite{deng2009imagenet} demonstrate that our proposed model can generate better results than the state-of-the-art competitors, both quantitatively and qualitatively (see Fig. \ref{fig:heading_figure} for some examples). Furthermore, we address the more challenging cases where only partial SIFT features are available, i.e., under \textbf{Scenario II}. In the case that the SIFT coordinates are not accessible, we design two methods for predicting the missing coordinate information, which achieve modest success for highly-structured images (e.g., faces), while failing for general settings (e.g., buildings). The challenge mainly comes from the fact that, for general cases, there is no strong correlation between the descriptor and its absolute coordinate, i.e., the extracted descriptor could be the same regardless of the location of the keypoint. We also evaluate the possibility of reconstructing the latent image solely from the coordinates. It is found that the rough contour of the latent image can still be reconstructed, though the fine textures are missing. Our results suggest that the coordinates play a more critical role in ensuring the privacy of the SIFT features. In other words, if the coordinates of the SIFT features can be well protected, the sensitive information leakage can be largely avoided. Our major contributions can be summarized as follows: \begin{itemize} \item We propose SLI, an end-to-end, coarse-to-fine deep generative model to recover the latent image from its SIFT features. \item Our model SLI achieves better reconstruction performance in comparison with several state-of-the-art methods \cite{desolneux2017stochastic, dosovitskiy2016inverting, pittaluga2019revealing} over a variety of challenging datasets including \texttt{CelebA-HQ} \cite{karras2017progressive}, \texttt{MD-NYU} \cite{pittaluga2019revealing} and \texttt{ImageNet} \cite{deng2009imagenet}. \item We investigate the challenging cases where the adversary can only access partial SIFT features (either descriptors or coordinates). To the best of our knowledge, this is the first work to specifically address the problem of reconstructing the latent image from incomplete SIFT features. We demonstrate that the reconstruction performance is greatly degraded when coordinates are missing, especially for images without regular structures. \end{itemize} The rest of this paper is organized as follows. Section \ref{sec:related_works} briefly reviews the SIFT and LBP algorithms. Section \ref{sec:methods} presents our proposed model SLI under \textbf{Scenario I} and Section \ref{sec:advanced_methods} introduces the reconstruction approaches under \textbf{Scenario II}. Extensive experiments are then given in Section \ref{sec:experiments}, and finally Section \ref{sec:conclusion} concludes. \section{Introduction of SIFT and LBP}\label{sec:related_works} In this section, we provide a brief introduction to the SIFT and LBP algorithms. \subsection{SIFT Features Generation and Matching} The detection of SIFT keypoints and the generation of their corresponding descriptors can be roughly divided into four steps: i) establishment of the scale space; ii) detection and refinement of extreme points; iii) assignment of the dominant orientation; and iv) generation of descriptors.
At step i), by repeatedly convolving an input image $\mathbf{I}$ with Gaussian filters at different scales, the Gaussian-blurred image $L(x,y, \sigma)$ can be computed as \begin{equation} L(x, y, \sigma) = \mathbf{I}(x, y) \otimes G(x, y, \sigma). \end{equation} Here $G(x, y, \sigma)$ is the Gaussian kernel at scale $\sigma$, i.e., \begin{equation} G(x, y, \sigma) = \frac{1}{2\pi\sigma^2}\mathrm{e}^{-(x^2+y^2)/2\sigma^2}. \end{equation} At step ii), a series of candidate SIFT keypoints are detected from the local extrema within a $3 \times 3 \times 3$ cube of the Difference of Gaussians (DoG) domain, where the DoG image at scale $\sigma$ is calculated as the difference of adjacent Gaussian-blurred images \begin{equation} D(x, y, \sigma) = L(x, y, k\sigma) - L(x, y, \sigma), \end{equation} where $k$ is a predefined constant. In order to reject unstable extreme points in the DoG domain, a contrast threshold and an edge threshold are used for keypoint refinement. At step iii), the orientation of each point $(x, y, \sigma)$ is defined as \begin{equation} \theta(x, y, \sigma) = {\rm tan}^{-1}(\frac{d_y}{d_x}), \end{equation} where $d_x$ and $d_y$ are the horizontal and vertical gradients at $(x, y, \sigma)$. An orientation histogram is constructed by gathering the orientations of points in a local window centered at the SIFT keypoint. The maximum value in the orientation histogram is assigned as the dominant orientation to guarantee rotation invariance. At step iv), a 128-dimensional descriptor $\mathbf{f}$ is calculated from the gradient information of 8 directions in a $16 \times 16$ local area centered at the SIFT keypoint. Through the above four steps, for the image $\mathbf{I}$, we can generate a list of $n$ keypoints $\mathcal{K} = \{\mathbf{k}_1, \mathbf{k}_2, \cdots, \mathbf{k}_n\}$ and their corresponding descriptors $\mathcal{F} = \{\mathbf{f}_1, \mathbf{f}_2, \cdots, \mathbf{f}_n\}$. Specifically, each SIFT keypoint $\mathbf{k}$ is a four-dimensional vector \begin{equation} \mathbf{k} = (x, y, \sigma, \theta), \end{equation} where $(x, y)$ denotes the coordinate of the SIFT keypoint in the image plane, and $\sigma$ and $\theta$ represent the scale and dominant orientation, respectively. For a given image, its SIFT features are composed of two parts: $\mathcal{K}$ and $\mathcal{F}$. Given the SIFT features, a SIFT keypoint matching algorithm was also suggested in \cite{lowe2004distinctive}. Specifically, let $\mathbf{d}=\{d_1, d_2, \cdots, d_{n-1}\}$ record the Euclidean distances between the descriptor $\mathbf{f}_i$ and the remaining descriptors $\{\mathbf{f}_j \}$ ($j \neq i$) in increasing order, i.e., $d_1 \leq d_2 \leq \cdots \leq d_{n-1}$. Then a reliable SIFT match exists if and only if \begin{equation} d_1 / d_2 < t, \end{equation} where $t \in (0, 1)$ is a predefined parameter commonly set to 0.8. \begin{figure}[t!] \centering \includegraphics[width=.485\textwidth]{imgs/LBP_extraction.jpg} \caption{An example of the LBP extraction. Left is the original $3 \times 3$ neighborhood. Right is the thresholded neighborhood, and the LBP feature of the center pixel $P$ is $\mathbf{b} = 10011011$.} \label{fig:LBP_extraction} \end{figure} \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{imgs/framework.jpg} \caption{Overview of our proposed SIFT-LBP-Image (SLI) reconstruction model.
The number above each layer represents the size of the resolution, while the number below denotes the feature dimension.} \label{fig:framework} \end{figure*} \subsection{Local Binary Pattern (LBP)} LBP is a widely used texture descriptor originally proposed by Ojala \cite{ojala1996comparative}. The LBP feature extraction process labels each pixel of an image by thresholding its spatial neighborhood. Specifically, to extract the LBP features associated with the pixel $P$, we first obtain its $M\times N$ neighborhood denoted by $P_1, P_2, \cdots, P_{MN-1}$. Then the LBP feature associated with $P$ is a string of binary bits $\mathbf{b} = b_1, b_2, \cdots, b_{MN-1}$, where \begin{equation} b_i= \begin{cases} 0& \mathrm{if}~ P_i \leq P\\ 1& \mathrm{otherwise} \end{cases}, \mathrm{for}~i = 1, \cdots, MN-1. \end{equation} An example of the LBP feature extraction is illustrated in Fig. \ref{fig:LBP_extraction}, where $M=N=3$. LBP features essentially record the relative ordering within a block of pixels, capturing information about edges, spots and other local structures \cite{zhang2010local}. LBP shows very good performance in many vision tasks, e.g., unsupervised texture segmentation \cite{ojala1999unsupervised}, face recognition \cite{ahonen2006face}, and image reconstruction \cite{waller2013image}. \section{Image Reconstruction from Full SIFT Features}\label{sec:methods} In this section, we consider the problem of reconstructing the latent image under \textbf{Scenario I}, i.e., full SIFT features are accessible to the adversary. We first present the architecture of the proposed SIFT-LBP-Image (SLI) deep generative model, and then give the details on the model optimization. We experimentally find that the scale $\sigma$ and the dominant orientation $\theta$ only bring negligible reconstruction performance gains, and hence they are discarded. In other words, the SIFT descriptors $\mathcal{F}$ and the associated coordinates $(x, y)$ are used as the feature map to be injected into SLI. \subsection{SIFT-LBP-Image (SLI) Model} The architecture of the proposed SIFT-LBP-Image (SLI) model is illustrated in Fig. \ref{fig:framework}. As can be seen, SLI is an end-to-end, coarse-to-fine deep generative model, consisting of two networks. The first one, called the LBP reconstruction network, transforms the SIFT features into LBP features, providing structural information to assist the subsequent image reconstruction network, which completes the actual image reconstruction task. One of the reasons why we select LBP features under this circumstance is that they contain a great amount of structural information, capable of guiding the image reconstruction task well. As verified in \cite{waller2013image}, an image visually close to the original one could be reconstructed solely from its LBP features. Also, from the perspective of practical implementation, LBP is easy to compute and involves very few parameters. More importantly, as expected and as will be verified experimentally, the conversion from SIFT features to LBP, and eventually to the image, significantly improves the reconstruction performance, compared with the challenging task of reconstructing the latent image directly from its SIFT features. Both networks follow an adversarial model \cite{goodfellow2014generative}, i.e., each network contains a generator based on the U-Net architecture \cite{ronneberger2015u}, and a discriminator based on the PatchGAN \cite{isola2017image}. A minimal sketch of the input feature map construction and the resulting two-stage forward pass is given below.
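To make the data flow concrete, the following Python sketch illustrates how the sparse SIFT feature map can be assembled and passed through the two generators. This is an illustration under stated assumptions rather than our released implementation: \texttt{G1} and \texttt{G2} stand for the pruned U-Net generators detailed in the following paragraphs, and all names and shapes are placeholders.
\begin{verbatim}
import numpy as np
import torch

def build_sift_map(descriptors, coords, H, W):
    """Scatter each 128-d SIFT descriptor at its keypoint coordinate (x, y);
    all other spatial locations remain zero vectors."""
    s = np.zeros((128, H, W), dtype=np.float32)
    for f, (x, y) in zip(descriptors, coords):
        s[:, int(y), int(x)] = f
    return torch.from_numpy(s).unsqueeze(0)        # (1, 128, H, W)

def sli_forward(G1, G2, sift_map):
    """Two-stage pass: SIFT map -> estimated LBP -> reconstructed image."""
    lbp = G1(sift_map)                             # (1, 1, H, W) estimated LBP
    img = G2(torch.cat([sift_map, lbp], dim=1))    # (1, 3, H, W) reconstruction
    return lbp, img
\end{verbatim}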
Let $\mathcal{K}$ and $\mathcal{F}$ be the SIFT keypoints and descriptors extracted in the grayscale channel from an input image $\mathbf{I} \in \mathbb{R}^{H \times W \times 3}$. Denote $\mathbf{S} \in \mathbb{R}^{H \times W \times 128}$ as the input SIFT features map, where the descriptors $\mathcal{F}$ are assigned to their corresponding coordinates and zero vectors are placed elsewhere. At the training stage, the generator of the LBP reconstruction network $G_1 : \mathbb{R}^{H \times W \times 128} \to \mathbb{R}^{H \times W \times 1}$ takes $\mathbf{S}$ as input, and outputs the estimated LBP $\mathbf{L}_o$. During this process, the discriminator $D_1 : \mathbb{R}^{H \times W \times 1} \to \mathbb{R}$ works together with $G_1$ to produce the result $\mathbf{L}_o$. Upon having a well-estimated LBP, we then use it to guide the image reconstruction process in the subsequent network. Specifically, the generator $G_2 : \mathbb{R}^{H \times W \times (128+1)} \to \mathbb{R}^{H \times W \times 3}$ takes $(\mathbf{S}$, $\mathbf{L}_o)$ as input, and outputs the final reconstructed result $\mathbf{I}_o$, with the assistance of the discriminator $D_2 : \mathbb{R}^{H \times W \times 3} \to \mathbb{R}$. At the testing stage, the procedure is similar, but without the need for the two discriminators $D_1$ and $D_2$. For $G_1$ and $G_2$, we adopt a pruned U-Net architecture \cite{ronneberger2015u} composed of an encoder and a decoder. In the encoder, each layer has a $4\times4$ convolution, an Instance Norm \cite{ulyanov2016instance} and a LeakyReLU \cite{xu2015empirical} with $\alpha=0.2$. The decoder has a symmetric structure, except that the convolution and LeakyReLU are replaced with the deconvolution and ReLU \cite{nair2010rectified}, respectively. Additionally, skip connections are used to concatenate the features from each layer of the encoder with the corresponding layer of the decoder. Experimentally, we find that the dilated convolutions in the original U-Net architecture \cite{ronneberger2015u} bring negligible improvements to the final reconstruction results. We hence prune the U-Net architecture by removing the dilated convolutions, so as to reduce the number of model parameters, which speeds up the training process. For $D_1$ and $D_2$, we adopt the PatchGAN architecture \cite{isola2017image}. \subsection{Optimization of the proposed networks} For the optimization of the LBP reconstruction network, we use a combination of the $\ell_1$ reconstruction loss \cite{ledig2017photo}, the $\ell_2$ perceptual loss \cite{dosovitskiy2016perceptual} and an adversarial loss \cite{martineau2018rsgan}. More specifically, the reconstruction loss is naturally defined as: \begin{equation}\label{loss:recon} \mathcal{L}_{r} = ||\mathbf{L}_{o} - \mathbf{L}_{g}||_1. \end{equation} The perceptual loss penalizes a reconstructed LBP that is not perceptually similar to the ground-truth LBP $\mathbf{L}_g$, and it can be defined as: \begin{equation}\label{loss:perc} \mathcal{L}_p=\sum_{h\in \mathcal{A}}||\varphi_h([\mathbf{L}_{o}, \mathbf{L}_{o}, \mathbf{L}_{o}])-\varphi_h([\mathbf{L}_{g}, \mathbf{L}_{g}, \mathbf{L}_{g}])||_2, \end{equation} where $\varphi_h$ is the activation map corresponding to the $h$-th layer of an ImageNet-pretrained VGG-16 network. The set $\mathcal{A}$ is formed by the layer indexes of the $\rm conv2\_1$, $\rm conv3\_1$, $\rm conv4\_1$ layers. Here we concatenate three copies of $\mathbf{L}_{o}$ or $\mathbf{L}_{g}$ as the input of the layers in set $\mathcal{A}$ because VGG-16 requires a three-channel input.
Also, the Relativistic average GAN (RaGAN) losses \cite{martineau2018rsgan} are calculated as follows: \begin{equation}\label{loss:adv} \mathcal{L}_{D_1} = -\mathbb{E}_{\mathbf{L}_g}\big[\mbox{log}\big(\widetilde{D}(\mathbf{L}_g)\big)\big]-\mathbb{E}_{\mathbf{L}_o}\big[\mbox{log}\big(1-\widetilde{D}(\mathbf{L}_o)\big)\big], \end{equation} \begin{equation} \mathcal{L}_{G_1} = -\mathbb{E}_{\mathbf{L}_o}\big[\mbox{log}\big(\widetilde{D}(\mathbf{L}_o)\big)\big]-\mathbb{E}_{\mathbf{L}_g}\big[\mbox{log}\big(1-\widetilde{D}(\mathbf{L}_g)\big)\big], \end{equation} where \begin{equation} \widetilde{D}(\mathbf{L}_g) = \mbox{sigmoid}\big(D_1(\mathbf{L}_g)-\mathbb{E}_{\mathbf{L}_o}[D_1(\mathbf{L}_o)]\big), \end{equation} \begin{equation} \widetilde{D}(\mathbf{L}_o) = \mbox{sigmoid}\big(D_1(\mathbf{L}_o)-\mathbb{E}_{\mathbf{L}_g}[D_1(\mathbf{L}_g)]\big). \end{equation} Finally, the loss functions for the LBP reconstruction network are defined by integrating the above three types of loss: \begin{equation} \mathcal{L}^{LBP}_{G_1} = \lambda_r\mathcal{L}_r + \lambda_p\mathcal{L}_p + \lambda_{a}\mathcal{L}_{G_1}, \end{equation} \begin{equation}\label{loss:dis} \mathcal{L}^{LBP}_{D_1} = \mathcal{L}_{D_1}, \end{equation} where $\lambda_r$, $\lambda_p$ and $\lambda_a$ are the parameters trading off the different types of loss, whose settings will be clarified in Section \ref{sec:experiments}. For the loss function of the image reconstruction network, we similarly adopt the combination of the $\ell_1$ reconstruction loss, the $\ell_2$ perceptual loss and the adversarial loss. Besides, to better optimize the high-level features of the image reconstruction network, we further introduce the style loss \cite{gatys2016image}, which measures the differences between the covariances of the activation maps. This is an effective strategy to eliminate the ``checkerboard'' artifacts caused by deconvolution layers \cite{sajjadi2017enhancenet}. The style loss is defined as: \begin{equation} \mathcal{L}_s= \sum_{h\in \mathcal{A}}|| \mathbf{G}^{\varphi_h}(\mathbf{I}_{o}) - \mathbf{G}^{\varphi_h}(\mathbf{I}_{g}) ||_2, \end{equation} where $\mathbf{G}^{\varphi_h}$ is the Gram matrix constructed from the activation map $\varphi_h$ (of size $C_h \times C_h$ for an activation map with $C_h$ channels). Finally, the loss functions for the image reconstruction network can be computed as: \begin{equation} \begin{aligned} \mathcal{L}^{IMG}_{G_2} =& \lambda_s\mathcal{L}_s + \\ &\lambda_r||\mathbf{I}_{o} - \mathbf{I}_{g}||_1 +\\ &\lambda_p\sum_{h\in \mathcal{A}}||\varphi_h(\mathbf{I}_{o})-\varphi_h(\mathbf{I}_{g})||_2 - \\ &\lambda_{a}\big[\mathbb{E}_{\mathbf{I}_o}\big[\mbox{log}\big(\widetilde{D}(\mathbf{I}_o)\big)\big]+\mathbb{E}_{\mathbf{I}_g}\big[\mbox{log}\big(1-\widetilde{D}(\mathbf{I}_g)\big)\big]\big], \end{aligned} \end{equation} \begin{equation} \mathcal{L}^{IMG}_{D_2} = -\mathbb{E}_{\mathbf{I}_g}\big[\mbox{log}\big(\widetilde{D}(\mathbf{I}_g)\big)\big]-\mathbb{E}_{\mathbf{I}_o}\big[\mbox{log}\big(1-\widetilde{D}(\mathbf{I}_o)\big)\big], \end{equation} where \begin{equation} \widetilde{D}(\mathbf{I}_g) = \mbox{sigmoid}\big(D_2(\mathbf{I}_g)-\mathbb{E}_{\mathbf{I}_o}[D_2(\mathbf{I}_o)]\big), \end{equation} \begin{equation} \widetilde{D}(\mathbf{I}_o) = \mbox{sigmoid}\big(D_2(\mathbf{I}_o)-\mathbb{E}_{\mathbf{I}_g}[D_2(\mathbf{I}_g)]\big). \end{equation} To stabilize the training process and alleviate the gradient vanishing problem, we first train the generator $G_1$ and the discriminator $D_1$ in the LBP network.
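Before describing the full training schedule, we give for concreteness a minimal PyTorch sketch of the RaGAN objectives above; the function interface and the small $\varepsilon$ safeguard inside the logarithms are our own choices.

\begin{verbatim}
import torch

def ragan_losses(d_real, d_fake, eps=1e-8):
    """Relativistic average GAN losses. d_real, d_fake are raw
    (pre-sigmoid) discriminator outputs on ground-truth and generated
    batches, e.g. D_1(L_g) and D_1(L_o)."""
    d_rel_real = torch.sigmoid(d_real - d_fake.mean())  # ~ D~(L_g)
    d_rel_fake = torch.sigmoid(d_fake - d_real.mean())  # ~ D~(L_o)
    loss_d = -(torch.log(d_rel_real + eps).mean()
               + torch.log(1.0 - d_rel_fake + eps).mean())
    loss_g = -(torch.log(d_rel_fake + eps).mean()
               + torch.log(1.0 - d_rel_real + eps).mean())
    return loss_d, loss_g
\end{verbatim}

In practice the two losses are evaluated in separate forward passes, with the generator output detached from the computation graph when updating $D_1$.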
Then we concatenate $G_1$ to the image reconstruction network, and perform an end-to-end training over $G_1$, $G_2$ and $D_2$ simultaneously. Here, the Adam algorithm \cite{kingma2014adam} is adopted. We would also like to emphasize that we need to have access to the full SIFT features (both descriptors and coordinates) to train and use the SLI deep model. As mentioned previously, in many practical applications such as CBIR, the assumption on the availability of the full SIFT features is not valid; namely, the adversary can only access partial SIFT features: either descriptors or coordinates. In the next section, we tackle this challenge of reconstructing the latent image from partial SIFT features. \section{Image Reconstruction from Partial SIFT Features}\label{sec:advanced_methods} Since the SIFT features of a given image consist of a set of descriptors and coordinates, we consider two cases of partial SIFT features, namely, 1) absence of coordinates and 2) absence of descriptors. In the following, we discuss the latent image reconstruction for these two cases separately. \subsection{Absence of Coordinates} Clearly, SIFT descriptors without the corresponding coordinates cannot be directly used as input of the proposed SLI model presented in Section III. A natural solution to this problem is to somehow estimate the coordinates of these SIFT descriptors, after which the deep generative model SLI can be applied. It should be pointed out that estimating the coordinates from the SIFT descriptors is a very challenging (if at all possible) problem in \emph{general} settings, as SIFT descriptors could appear anywhere in an image captured from different angles. In other words, for generic images, the correlation between the SIFT descriptors and the coordinates is actually quite weak. The only hope for relatively accurate estimation of coordinates from SIFT descriptors exists for some highly structured images, e.g., face images. Specifically, we propose two methods, a reference-based and a landmark-based approach, for the estimation of coordinates from SIFT descriptors, as demonstrated in Fig. \ref{fig:coordinates_reconstruction}. \subsubsection{Reference-based Method} Since it is very challenging to accurately model the relationship between SIFT descriptors and coordinates from one given image, we build up a reference dataset attempting to provide some prior knowledge. Let $\mathcal{F} = \{\mathbf{f}_1, \mathbf{f}_2, \cdots, \mathbf{f}_n\}$ be the given SIFT descriptors. For each SIFT descriptor in $\mathcal{F}$, the straightforward idea is to find the most similar descriptor from the reference dataset using a nearest neighbor (NN) algorithm, and then take its coordinate as the estimated one. Let $\mathcal{R} = \{\hat{\mathbf{I}}_1, \hat{\mathbf{I}}_2, \cdots, \hat{\mathbf{I}}_N\}$ be the reference dataset randomly sampled from the training set. Let also $\hat{\mathcal{F}}_{j} = \{\hat{\mathbf{f}}_1^j, \hat{\mathbf{f}}_2^j, \cdots \}$ be the set of SIFT descriptors extracted from image $\hat{\mathbf{I}}_j$ at coordinates $\{(x_1^j, y_1^j), (x_2^j, y_2^j), \cdots \}$. Define $\hat{\mathcal{F}}$ as the set recording all the SIFT descriptors, namely, \begin{equation} \hat{\mathcal{F}} = \bigcup_{j=1}^{N}\hat{\mathcal{F}}_j.
\end{equation} Then the coordinate of $\mathbf{f}_i \in \mathcal{F}$ can be estimated by the following NN algorithm: \begin{equation}\label{eq:coor} (x^i,y^i) = c(\hat{\mathbf{f}}_j), \end{equation} where \begin{equation}\label{eq:mostSimilar} \hat{\mathbf{f}}_j = \arg\min\limits_{\hat{\mathbf{f}}_j \in \hat{\mathcal{F}}} d(\mathbf{f}_i, \hat{\mathbf{f}}_j). \end{equation} Here, $d(\cdot,\cdot)$ computes the Euclidean distance between two descriptors and $c(\cdot)$ returns the coordinate of the input descriptor. The reason why we use the NN algorithm rather than the SIFT matching algorithm is that, in most cases, the input and the reference image do not contain identical objects, making it almost impossible to find a matching pair. In the cases where multiple descriptors are projected to the same coordinate, we randomly keep one descriptor. \begin{figure*}[t!] \centering \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{imgs/ref-based.jpg} \caption{} \end{subfigure} \begin{subfigure}{\textwidth} \includegraphics[width=\textwidth]{imgs/landmark-based.jpg} \caption{} \end{subfigure} \caption{Framework of reference-based (a) and landmark-based (b) methods for coordinate estimation. The number below each layer of the SIFT classifier indicates the dimension.} \label{fig:coordinates_reconstruction} \end{figure*} This straightforward method can find the globally most similar SIFT descriptors in the reference dataset; however, the recovered feature map cannot guarantee the existence of stable object contours, mainly because the coordinates could be obtained from multiple reference images. To mitigate this drawback, we propose to use the NN algorithm at the image level instead of the descriptor level. That is, for the whole set of descriptors $\mathcal{F}$, we first find \emph{one} reference $\hat{\mathcal{F}}^*$ with the minimum average distance, and then project each descriptor in $\mathcal{F}$ onto the coordinates of the nearest descriptor within $\hat{\mathcal{F}}^*$. Mathematically, \begin{equation} \hat{\mathcal{F}}^* = \arg\min\limits_{\hat{\mathcal{F}}_j} D(\mathcal{F}, \hat{\mathcal{F}}_j), j=1,\cdots,N, \end{equation} where \begin{equation} D(\mathcal{F}, \hat{\mathcal{F}}_j) = \frac{1}{n}\sum_{i}\min_{\hat{\mathbf{f}}_j}(d(\mathbf{f}_i, \hat{\mathbf{f}}_j)), \mathbf{f}_i \in \mathcal{F}, \hat{\mathbf{f}}_j \in \hat{\mathcal{F}}_j. \end{equation} Upon having $\hat{\mathcal{F}}^*$, the coordinate of $\mathbf{f}_i \in \mathcal{F}$ can be estimated as in (\ref{eq:coor}) and (\ref{eq:mostSimilar}) by replacing $\hat{\mathcal{F}}$ in (\ref{eq:mostSimilar}) with $\hat{\mathcal{F}}^*$. We now explain how to form the reference dataset $\mathcal{R}$, which is related to the training set for the deep generative model SLI. In this work, we consider three publicly available datasets: \texttt{CelebA-HQ} \cite{karras2017progressive}, \texttt{MD-NYU} \cite{pittaluga2019revealing} and \texttt{ImageNet} \cite{deng2009imagenet}. \texttt{CelebA-HQ} has only one category, consisting of face images; \texttt{MD-NYU} has two categories, building scenes and indoor scenes; and \texttt{ImageNet} \cite{deng2009imagenet} is much more diverse, with one thousand categories. For a given dataset, $\mathcal{R}$ is formed by randomly picking one image from each category. For instance, $\mathcal{R}$ contains only one face image when \texttt{CelebA-HQ} is used, while $\mathcal{R}$ becomes a set with 1000 images in the case of \texttt{ImageNet}.
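To summarize the reference-based pipeline in code, a compact sketch of the image-level variant, using a k-d tree for the nearest-neighbor searches, might look as follows; the use of scipy's \texttt{cKDTree} and the exact interface are implementation choices of ours.

\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def estimate_coordinates(F, refs):
    """Image-level reference-based coordinate estimation (a sketch).
    F: (n, 128) array of query SIFT descriptors.
    refs: list of (descriptors (m_j, 128), coordinates (m_j, 2)) pairs,
          one per reference image, i.e. F_hat_j and its coordinates.
    Returns the chosen reference index and a dict (x, y) -> index in F."""
    trees = [cKDTree(desc) for desc, _ in refs]
    # Step 1: pick the single reference F_hat* minimizing the average
    # nearest-neighbour distance D(F, F_hat_j).
    avg_d = [tree.query(F, k=1)[0].mean() for tree in trees]
    j_star = int(np.argmin(avg_d))
    _, coords_star = refs[j_star]
    # Step 2: project each f_i onto the coordinate of its nearest
    # descriptor within F_hat*.
    _, nn_idx = trees[j_star].query(F, k=1)
    assigned = {}
    for i, j in enumerate(nn_idx):
        x, y = coords_star[j]
        # On a collision the last descriptor wins here; the method
        # described above keeps a randomly chosen one instead.
        assigned[(int(x), int(y))] = i
    return j_star, assigned
\end{verbatim}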
We have also tried increasing the number of images picked from each category, but found that the improvements in the reconstructed images are very slight. \subsubsection{Landmark-based Method} The second method, called the landmark-based method, only works for face images. Specifically, we train a classifier to roughly classify the SIFT descriptors into several pre-defined categories corresponding to different face regions, and then recover the coordinates. The schematic diagram of the landmark-based method is given in Fig. \ref{fig:coordinates_reconstruction} (b). At the training stage, we first use Dlib \cite{king2009dlib} to extract landmarks, and then classify the landmarks into seven categories: jaw, right/left brow, nose, right/left eye, and mouth (labeled from 0 to 6, respectively). Formally, for a given face image $\mathbf{I}$, Dlib can detect its landmarks $\mathcal{M} = \{(x_i, y_i) | i\in[0, 67]\}$, where each coordinate represents a location in the facial region. For instance, the index range $\mathbb{R}_0 = [0, 16]$ corresponds to the jaw region, and $\mathbb{R}_1 = [17, 21]$, $\mathbb{R}_2 = [22, 26]$, $\mathbb{R}_3 = [27, 34]$, $\mathbb{R}_4 = [35, 41]$, $\mathbb{R}_5 = [42, 48]$, $\mathbb{R}_6 = [48, 68]$ indicate the right/left brow, nose, right/left eye and mouth regions, respectively. Then, for each SIFT descriptor, we search for the landmark with the minimum Euclidean distance and assign the corresponding label to it. In the case that the minimum Euclidean distance is larger than 10 pixels, the additional label 7 is assigned, which means that this SIFT descriptor belongs to the ``other'' category (i.e., a non-facial region). Upon having the pairs of SIFT descriptors and labels, we train a classifier $C : \mathbb{R}^{1 \times 128} \to \mathbb{R}^{1 \times 8}$ to classify the SIFT descriptors into the aforementioned 8 categories. The classifier $C$ consists of six fully connected layers, where each layer is composed of a linear layer, a Batch Norm \cite{ioofe2015batch} and a ReLU in a sequential manner. For optimization, we adopt the widely used cross entropy loss, \begin{equation} \mathcal{L}_e = -\sum_{c=0}^{7}y_c\mbox{log}(C(\mathbf{f})_c), \end{equation} where $C(\mathbf{f})_c$ denotes the probability that the input descriptor $\mathbf{f}$ belongs to category $c$, and $y_c$ equals 1 if $c$ is the ground-truth category of the sample, and 0 otherwise. In the training process, we randomly select one thousand images (around 120,000 SIFT descriptors) from \texttt{CelebA-HQ}. Next, by using the landmarks $\hat{\mathcal{M}} = \{(\hat{x}_i, \hat{y}_i)\}$ from an image $\hat{\mathbf{I}}$ of the training set as prior knowledge, we can generate coordinates $(\hat{x}_i + \epsilon, \hat{y}_i + \epsilon), i \in \mathbb{R}_c$, for each input SIFT descriptor according to its predicted label $c$, where a randomly generated integer $\epsilon \in [-3, 3]$ is added to reduce collisions. It should be noted that if the predicted label is 7, we simply discard this SIFT descriptor, as it does not belong to any specific facial region. \subsection{Absence of Descriptors} We now investigate the other scenario of partial SIFT features, where the SIFT descriptors are missing while the coordinates of the SIFT keypoints are available to the adversary. For instance, SIFT keypoints could be used as robust reference points, in which case the coordinate changes are employed to rectify a distorted image \cite{fang2019screen}. It was also demonstrated that SIFT coordinates can be used for image quality assessment \cite{decombas2012iqa1, kakli2015iqa2}.
The scenario with absence of descriptors has been largely neglected by the existing works \cite{weinzaepfel2011reconstructing, dosovitskiy2016inverting, wu2019image, pittaluga2019revealing}, which mainly focused on how the descriptors leak the information of the latent image. In fact, given the set of SIFT coordinates, recovering the latent image is a much less challenging task than in the case of lacking coordinates. Specifically, we first transform the coordinates into a binary feature map, where 1's are assigned to the locations with SIFT keypoints and 0's elsewhere. This binary feature map can be readily used as input to the second network of our proposed deep generative model SLI, while the first (LBP reconstruction) network is simply disabled. In addition, the first layer of the generator $G_2$ needs to be modified to accept one channel, $G_2^\prime : \mathbb{R}^{H \times W \times 1} \to \mathbb{R}^{H \times W \times 3}$, so as to fit the dimension of the binary feature map. The other modules of SLI remain unchanged. \section{Experimental Results}\label{sec:experiments} The proposed deep generative model SLI is implemented using the PyTorch framework. The training is performed on a desktop equipped with a Core-i7 CPU and a single GTX 2080 GPU. The parameters in Adam are $\beta_1=0.5$, $\beta_2=0.999$ and learning rate $r=1\times10^{-4}$. We train the model with a batch size of 1, and the parameters trading off the different terms in the loss functions are fixed to $\lambda_r=100$, $\lambda_{p}=1$, $\lambda_{s}=10$ and $\lambda_a=0.2$. To embrace the concept of reproducible research, the code of our paper is available at: {https://github.com/HighwayWu/SIFT-Reconstruction}. \begin{table*}[h!] \caption{Quantitative comparison of different reconstruction methods over \texttt{CelebA-HQ}, \texttt{MD-NYU} and \texttt{ImageNet} among SIR \cite{desolneux2017stochastic}, IVR \cite{dosovitskiy2016inverting}, INV \cite{pittaluga2019revealing} and our proposed model SLI. $^-$Lower is better. $^+$Higher is better.} \centering \label{tab:quantitative} \begin{tabular}{ccccccccccccc} \toprule \multirow{2}{*}{Methods} & \multicolumn{4}{c}{\texttt{CelebA-HQ}} & \multicolumn{4}{c}{\texttt{MD-NYU}} & \multicolumn{4}{c}{\texttt{ImageNet}} \\ \cmidrule(r){2-5} \cmidrule(r){6-9} \cmidrule(r){10-13} & FID$^-$ & SSIM$^+$ & PSNR$^+$ & PRM$^+$(\%) & FID$^-$ & SSIM$^+$ & PSNR$^+$ & PRM$^+$(\%) & FID$^-$ & SSIM$^+$ & PSNR$^+$ & PRM$^+$(\%)\\ \midrule SIR \cite{desolneux2017stochastic} & 230.5 & 0.547 & 14.12 & 18.13 & 305.5 & 0.271 & 11.18 & 2.08 & 325.0 & 0.325 & 12.77 & 3.18 \\ IVR \cite{dosovitskiy2016inverting} & 143.5 & 0.540 & 17.62 & 25.79 & 363.5 & 0.305 & 13.55 & 1.82 & 294.8 & 0.308 & 14.21 & 8.27 \\ INV \cite{pittaluga2019revealing} & 73.5 & 0.641 & 17.11 & 28.78 & 136.4 & 0.478 & 13.91 & 8.11 & 189.7 & 0.482 & 15.11 & 29.47 \\ SLI (Ours) & \textbf{22.6} & \textbf{0.670} & \textbf{18.95} & \textbf{31.71} & \textbf{119.1} & \textbf{0.485} & \textbf{14.81} & \textbf{10.49} & \textbf{173.4} & \textbf{0.513} & \textbf{15.80} & \textbf{35.92} \\ \bottomrule \end{tabular} \end{table*} \begin{figure*}[h!]
\begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/celebahq/gt_000082.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/celebahq/in_000082.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/celebahq/sir_000082.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/celebahq/ivr_000082.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/celebahq/inv_000082.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/celebahq/our_000082.png} \end{subfigure} \newline \newline \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/celebahq/gt_000133.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/celebahq/in_000133.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/celebahq/sir_000133.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/celebahq/ivr_000133.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/celebahq/inv_000133.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/celebahq/our_000133.png} \end{subfigure} \newline \newline \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/5k/gt_000196.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/5k/in_000196.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/5k/sir_000196.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/5k/ivr_000196.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/5k/inv_000196.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/5k/our_000196.png} \end{subfigure} \newline \newline \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/5k/gt_000004.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/5k/in_000004.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/5k/sir_000004.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/5k/ivr_000004.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/5k/inv_000004.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/5k/our_000004.png} \end{subfigure} \newline \newline \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/imagenet/gt_000085.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/imagenet/in_000085.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/imagenet/sir_000085.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/imagenet/ivr_000085.png} \end{subfigure} \begin{subfigure}{.162\textwidth} 
\centering \includegraphics[width=\textwidth]{imgs/imagenet/inv_000085.png} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/imagenet/our_000085.png} \end{subfigure} \newline \newline \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/imagenet/gt_000211.png} \caption{GT.} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/imagenet/in_000211.png} \caption{Input} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/imagenet/sir_000211.png} \caption{SIR \cite{desolneux2017stochastic}} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/imagenet/ivr_000211.png} \caption{IVR \cite{dosovitskiy2016inverting}} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/imagenet/inv_000211.png} \caption{INV \cite{pittaluga2019revealing}} \end{subfigure} \begin{subfigure}{.162\textwidth} \centering \includegraphics[width=\textwidth]{imgs/imagenet/our_000211.png} \caption{SLI (Ours)} \end{subfigure} \caption{Qualitative comparison of different reconstruction methods over \texttt{CelebA-HQ}, \texttt{MD-NYU} and \texttt{ImageNet}. For each row, the images from left to right are ground truth, input SIFT feature map, results generated by SIR \cite{desolneux2017stochastic}, IVR \cite{dosovitskiy2016inverting}, INV \cite{pittaluga2019revealing} and the proposed model SLI, respectively.} \label{fig:experimental_results} \end{figure*} We evaluate the reconstruction performance of our method over three publicly available datasets: a high-quality human face dataset \texttt{CelebA-HQ} \cite{karras2017progressive}, a 3D point cloud dataset containing different indoor and outdoor scenes \texttt{MD-NYU} \cite{pittaluga2019revealing}, and a large-scale visual recognition dataset with one thousand categories \texttt{ImageNet} \cite{deng2009imagenet}. The \texttt{CelebA-HQ} dataset contains 28,000 training images and 2000 testing images. The \texttt{MD-NYU} dataset has 8192 images in the training set and 1024 images in the testing set. The \texttt{ImageNet} dataset includes over 1.2 million training images and 100,000 testing images. \subsection{Evaluations under \textbf{Scenario I}}\label{sec:qualitative} We first compare the image reconstruction performance of different algorithms under \textbf{Scenario I}. For comparison purposes, we adopt three state-of-the-art SIFT-based image reconstruction methods: Stochastic Image Reconstruction (SIR) \cite{desolneux2017stochastic}, Inverting Visual Representations (IVR) \cite{dosovitskiy2016inverting}, and Revealing Scenes by Inverting (INV) \cite{pittaluga2019revealing}. Fig. \ref{fig:experimental_results} shows the reconstruction results for some representative testing images. As can be observed, SIR \cite{desolneux2017stochastic} can restore the main semantic information where the SIFT keypoints exist, whereas the areas with an insufficient number of SIFT keypoints cannot be recovered satisfactorily. In addition, the reconstructed images lose all the color information. This is because SIR is based on Poisson editing rather than neural networks learned from training data. IVR \cite{dosovitskiy2016inverting} can reconstruct much more realistic color images by using CNNs. However, the reconstructed contents are highly blurry and many fine details are missing.
Furthermore, even though INV \cite{pittaluga2019revealing} can produce fairly good results by adopting a deep GAN-based neural network, some broken or blurred textures can be observed. Compared with these methods, our proposed model can learn more reasonable structures and generate more realistic reconstructions, especially in fine structures and textured regions. In addition to the visual comparison of the reconstructed images, we also compare the different methods quantitatively, as shown in Table \ref{tab:quantitative}. Here, we adopt the commonly used metrics, namely, the structural similarity index (SSIM), the peak signal-to-noise ratio (PSNR) and the Fr\'echet Inception Distance (FID) \cite{heusel2017fid}. SSIM and PSNR are the most widely used objective measurements of image quality; however, they may assign inappropriate scores to perceptually accurate results \cite{nazeri2019edgeconnect}. Therefore, FID is often introduced to reflect the Wasserstein-2 distance between the feature-space representations of real and generated images using a pre-trained Inception-V3 model \cite{szegedy2016inception}. It can be seen that our method consistently outperforms the competing algorithms under all three criteria. \begin{figure*}[t!] \begin{subfigure}{\textwidth} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/celebahq_000019_gt.png} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/celebahq_000019_use25_in.jpg} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/celebahq_000019_use25.png} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/celebahq_000019_use50_in.jpg} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/celebahq_000019_use50.png} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/celebahq_000019_use75_in.jpg} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/celebahq_000019_use75.png} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/celebahq_000019_use100_in.jpg} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/celebahq_000019_use100.png} \end{subfigure} \end{subfigure} \newline \newline \begin{subfigure}{\textwidth} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/celebahq_000293_gt.png} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/celebahq_000293_use25_in.jpg} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/celebahq_000293_use25.png} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/celebahq_000293_use50_in.jpg} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/celebahq_000293_use50.png} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/celebahq_000293_use75_in.jpg} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering
\includegraphics[width=\textwidth]{imgs/robustness/celebahq_000293_use75.png} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/celebahq_000293_use100_in.jpg} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/celebahq_000293_use100.png} \end{subfigure} \end{subfigure} \newline \newline \begin{subfigure}{\textwidth} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/5k_000300_gt.png} \caption{GT.} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/5k_000300_use25_in.jpg} \caption{In. (25\%)} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/5k_000300_use25.png} \caption{Res. (25\%)} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/5k_000300_use50_in.jpg} \caption{In. (50\%)} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/5k_000300_use50.png} \caption{Res. (50\%)} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/5k_000300_use75_in.jpg} \caption{In. (75\%)} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/5k_000300_use75.png} \caption{Res. (75\%)} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/5k_000300_use100_in.jpg} \caption{In. (100\%)} \end{subfigure} \begin{subfigure}{.105\textwidth} \centering \includegraphics[width=\textwidth]{imgs/robustness/5k_000300_use100.png} \caption{Res.(100\%)} \end{subfigure} \end{subfigure} \renewcommand\thefigure{7} \caption{Robustness evaluation using different percentages of SIFT features as input. (b), (d), (f), (h) are the input SIFT feature maps with 25\%, 50\%, 75\% and 100\% of the original SIFT features, and (c), (e), (g), (i) are the corresponding reconstruction results.} \label{fig:robustness} \end{figure*} \begin{figure}[t!] \centering \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=\textwidth]{imgs/re_extraction/000019.png} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=\textwidth]{imgs/re_extraction/000159.png} \end{subfigure} \renewcommand\thefigure{6} \caption{Re-matching examples. In each pair, the left image is the ground truth and the right one is the reconstruction result. Green lines represent matched SIFT pairs.} \label{fig:re_matching} \end{figure} Beyond the above traditional metrics for quantitative comparisons, we propose an additional metric by evaluating the percentage of re-matching (PRM for short) between the ground truth SIFT descriptors and those extracted from the reconstructed results. This reflects how well the reconstructed image preserves the fidelity of the latent image in the SIFT descriptor domain. More specifically, define $\mathcal{F}_g=\{\mathbf{f}_1^g, \mathbf{f}_2^g, \cdots, \mathbf{f}_m^g\}$ as the set of ground truth descriptors, and $\mathcal{F}_o=\{\mathbf{f}_1^o, \mathbf{f}_2^o, \cdots, \mathbf{f}_n^o\}$ as the set of reconstructed ones. Let $d_{i, 1}$ and $d_{i, 2}$ record the nearest and second-nearest Euclidean distances between the reconstructed descriptor $\mathbf{f}_i^o$ ($i \in [1, n]$) and the ground truth descriptors $\{\mathbf{f}_j^g \vert j \in [1, m]\}$.
Then the PRM is defined as: \begin{equation} \mathrm{PRM}=\frac{1}{n}\sum_{i=1}^{n}T(d_{i,1}/d_{i,2}, t), \end{equation} where $T$ is a thresholding function incorporating the SIFT matching algorithm \cite{lowe2004distinctive}, \begin{equation} T(d_{i,1}/d_{i,2}, t) = \begin{cases} 1& \mathrm{if}~ d_{i,1}/d_{i,2} < t\\ 0& \mathrm{if}~ d_{i,1}/d_{i,2} \ge t \end{cases}. \end{equation} Here, $t$ is set to 0.8 to guarantee reliable matching according to \cite{lowe2004distinctive}. Obviously, $\mathrm{PRM}$ takes a value in $[0, 1]$, representing the fidelity of the SIFT descriptors extracted from the reconstructed image. The $\mathrm{PRM}$ results of different methods are also compiled into Table \ref{tab:quantitative}. As can be seen, the proposed SLI achieves the best $\mathrm{PRM}$ performance among all the competing algorithms over the three test datasets. Re-matching examples are also given in Fig. \ref{fig:re_matching}, where the green lines represent the matched SIFT pairs and the remaining isolated points indicate no match. \begin{figure}[t] \centering \begin{subfigure}{0.5\textwidth} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/ablation/in_000241.jpg} \includegraphics[width=\textwidth]{imgs/ablation/gt_000241.png} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/ablation/empty.png} \includegraphics[width=\textwidth]{imgs/ablation/wo_000241.png} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/ablation/Edge_pred_000241.png} \includegraphics[width=\textwidth]{imgs/ablation/Edge_res_000241.png} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/ablation/RTV_pred_000241.png} \includegraphics[width=\textwidth]{imgs/ablation/RTV_res_000241.png} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/ablation/LBP_pred_000241.png} \includegraphics[width=\textwidth]{imgs/ablation/LBP_res_000241.png} \end{subfigure} \end{subfigure} \begin{subfigure}{0.5\textwidth} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/ablation/in_000098.jpg} \includegraphics[width=\textwidth]{imgs/ablation/gt_000098.png} \caption{GT.} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/ablation/empty.png} \includegraphics[width=\textwidth]{imgs/ablation/wo_000098.jpg} \caption{wo} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/ablation/Edge_pred_000098.png} \includegraphics[width=\textwidth]{imgs/ablation/Edge_res_000098.png} \caption{w Edge} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/ablation/RTV_pred_000098.png} \includegraphics[width=\textwidth]{imgs/ablation/RTV_res_000098.png} \caption{w RTV} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/ablation/LBP_pred_000098.png} \includegraphics[width=\textwidth]{imgs/ablation/LBP_res_000098.png} \caption{w LBP} \end{subfigure} \end{subfigure} \caption{Effect of the guidance provided by different structures. (a) Input and ground truth.
(b)-(e) The first and third rows show the inputs with no structural guidance (empty), predicted Canny edges \cite{canny1986computational}, RTV \cite{xu2012rtv} and LBP \cite{ojala1996comparative}, respectively; the second and fourth rows present the corresponding reconstructed results.} \label{fig:ablation} \end{figure} Before ending the discussions under \textbf{Scenario I}, we evaluate the robustness of our proposed SLI. Fig. \ref{fig:robustness} shows the results of randomly using 25\%, 50\%, 75\% and 100\% of the SIFT features as the model input. Although the input has very high spatial sparsity (e.g., 25\% or 50\% of the features), the output images are still quite interpretable. These results indicate that the privacy leakage problem is very severe under \textbf{Scenario I}, as only a small portion of the SIFT features could lead to the disclosure of sensitive information. \subsection{Ablation Studies of SLI}\label{sec:ablation} We now conduct ablation studies of our proposed SLI by analyzing how the LBP reconstruction network contributes to the final reconstruction. To this end, we retrain the model without the assistance of the LBP reconstruction network. Further, we consider replacing the LBP reconstruction network with some alternatives, including Canny edges \cite{canny1986computational} and RTV \cite{xu2012rtv}, which could also offer structural information for the image reconstruction \cite{nazeri2019edgeconnect, ren2019structureflow}. \begin{table}[t] \caption{Quantitative comparisons of the guidance provided by Canny edge \cite{canny1986computational}, RTV \cite{xu2012rtv} and LBP \cite{ojala1996comparative}, respectively. $^-$Lower is better. $^+$Higher is better.} \centering \label{tab:ablation} \begin{tabular}{ccccc} \toprule \multirow{2}{*}{Methods} & \multicolumn{4}{c}{\texttt{CelebA-HQ}} \\ \cmidrule(r){2-5} & FID$^-$ & SSIM$^+$ & PSNR$^+$ & PRM$^+$(\%)\\ \midrule wo & 42.5 & 0.591 & 17.58 & 20.84 \\ w Edge & 27.9 & 0.634 & 18.55 & 29.26 \\ w RTV & 33.8 & 0.605 & 17.89 & 26.19 \\ w LBP & \textbf{22.6} & \textbf{0.670} & \textbf{18.95} & \textbf{31.71} \\ \midrule \midrule \multirow{2}{*}{Methods} & \multicolumn{4}{c}{\texttt{MD-NYU}} \\ \cmidrule(r){2-5} & FID$^-$ & SSIM$^+$ & PSNR$^+$ & PRM$^+$(\%)\\ \midrule wo & 213.3 & 0.398 & 13.49 & 5.34 \\ w Edge & 149.0 & 0.451 & 14.44 & 6.82 \\ w RTV & 200.7 & 0.408 & 13.89 & 5.84 \\ w LBP & \textbf{119.1} & \textbf{0.485} & \textbf{14.81} & \textbf{10.49}\\ \bottomrule \end{tabular} \end{table} The reconstruction results produced with different structural information are demonstrated in Fig. \ref{fig:ablation}. In many cases, the SIFT keypoints are poorly localized along an edge \cite{lowe2004distinctive}, or are too dense to be separated from the edge, making the transformation from SIFT to edges inaccurate (e.g., the transformed edges in the first and third rows). Although RTV is a good representation of the global structures, the high-frequency information discarded by RTV results in unsatisfactory outputs. Meanwhile, from the perspective of practical implementation, Canny edge and RTV extractions typically involve many parameters (e.g., the pre-filtering strength and the threshold for Canny edges, and the degree of smoothness/sharpness for RTV), whose optimal settings vary for different images. In contrast, LBP is easy to compute and can be parameter-free.
Also, the rich information (e.g., gradients) contained in LBP guides the learning better and makes the results sharper (e.g., around the eyes and nose), which is further validated by the statistics reported in Table \ref{tab:ablation}. These observations suggest that LBP is a more appropriate candidate for providing structural information in the case of image reconstruction from SIFT features. \subsection{Evaluations under \textbf{Scenario II}}\label{sec:recon_des} We now evaluate the performance of the image reconstruction from SIFT features under \textbf{Scenario II}, i.e., either absence of coordinates or absence of descriptors. We first try to reconstruct the image by using \textit{solely} SIFT descriptors as input, in which case the coordinates can be estimated through the reference-based and landmark-based methods presented in Section IV. The reconstruction results are illustrated in Fig. \ref{fig:recon_des}. For simplicity, we call the SLI model with coordinates estimated by the reference-based and landmark-based methods SLI-R and SLI-L, respectively. As can be observed, SLI-L can restore the main semantic information of the facial area, but the results are quite blurry. In contrast, SLI-R can generate sharper and more realistic reconstruction results, primarily thanks to the employment of a reference image. However, both SLI-L and SLI-R share a crucial limitation: choosing a suitable landmark or reference. As the images included in \texttt{CelebA-HQ} usually have the same skeleton (e.g., eyes, nose and mouth), we can easily project the input descriptors to the corresponding positions in the landmark or reference. For datasets that contain various categories (e.g., \texttt{MD-NYU} or \texttt{ImageNet}), however, it is difficult to find one or more suitable images to serve as the skeleton for the SIFT descriptors. As also mentioned in Section \ref{sec:advanced_methods}, SLI-L and SLI-R could fail for generic images without regular structures. The last row of Fig. \ref{fig:recon_des} shows an example of such a failure. Besides, the quantitative comparison of SLI-L and SLI-R is reported in Table \ref{tab:advanced}. It is found that SLI-R performs much better than SLI-L, with a 0.9 dB PSNR gain over \texttt{CelebA-HQ}. Also, as expected, they both perform poorly on the other datasets. \begin{figure}[t!]
\centering \begin{subfigure}{0.5\textwidth} \centering \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/gt_000022.png} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/m1_in_000022.jpg} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/m1_000022.png} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/m3_in_000022.jpg} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/m3_000022.png} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/full_in_000022.jpg} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/full_000022.png} \end{subfigure} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/gt_000040.png} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/m1_in_000040.jpg} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/m1_000040.png} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/m3_in_000040.jpg} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/m3_000040.png} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/full_in_000040.jpg} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/full_000040.png} \end{subfigure} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/gt_000117.png} \caption{} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/ablation/empty.png} \caption{} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/ablation/empty.png} \caption{} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/m3_in_000117.jpg} \caption{} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/m3_000117.png} \caption{} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/full_in_000117.jpg} \caption{} \end{subfigure} \begin{subfigure}{.13\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_des/full_000117.png} \caption{} \end{subfigure} \end{subfigure} \caption{Image reconstruction from solely SIFT descriptors. (a) Ground truth. (b)-(c) Inputs and results of SLI-L. (d)-(e) Inputs and results of SLI-R. (f)-(g) Inputs and results of SLI with full SIFT features for comparison.} \label{fig:recon_des} \end{figure} \begin{figure}[t!]
\centering \begin{subfigure}{0.48\textwidth} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_location/celebahq_gt_000075.png} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_location/celebahq_in_000075.png} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_location/celebahq_ol_000075.png} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_location/celebahq_SIFT_000075.jpg} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_location/celebahq_full_000075.png} \end{subfigure} \end{subfigure} \begin{subfigure}{0.48\textwidth} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_location/5k_gt_000017.png} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_location/5k_in_000017.png} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_location/5k_ol_000017.png} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_location/5k_SIFT_000017.jpg} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_location/5k_full_000017.png} \end{subfigure} \end{subfigure} \begin{subfigure}{0.48\textwidth} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_location/5k_gt_000161.png} \caption{} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_location/5k_in_000161.png} \caption{} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_location/5k_ol_000161.png} \caption{} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_location/5k_SIFT_000161.jpg} \caption{} \end{subfigure} \begin{subfigure}{.19\textwidth} \centering \includegraphics[width=\textwidth]{imgs/recon_location/5k_full_000161.png} \caption{} \end{subfigure} \end{subfigure} \caption{Image reconstruction from solely SIFT coordinates. (a) Ground truth. (b)-(c) Inputs (binary map) and corresponding results. (d)-(e) Inputs and results of SLI with full SIFT features for comparison.} \label{fig:recon_location} \end{figure} \begin{table*}[t!] \caption{Quantitative comparison of the image reconstruction using solely SIFT descriptors or coordinates (binary map) over \texttt{CelebA-HQ}, \texttt{MD-NYU} and \texttt{ImageNet}. $^-$Lower is better.
$^+$Higher is better.} \centering \label{tab:advanced} \begin{tabular}{ccccccccccccc} \toprule \multirow{2}{*}{Methods} & \multicolumn{4}{c}{\texttt{CelebA-HQ}} & \multicolumn{4}{c}{\texttt{MD-NYU}} & \multicolumn{4}{c}{\texttt{ImageNet}} \\ \cmidrule(r){2-5} \cmidrule(r){6-9} \cmidrule(r){10-13} & FID$^-$ & SSIM$^+$ & PSNR$^+$ & PRM$^+$(\%) & FID$^-$ & SSIM$^+$ & PSNR$^+$ & PRM$^+$(\%) & FID$^-$ & SSIM$^+$ & PSNR$^+$ & PRM$^+$(\%)\\ \midrule SLI-L & 181.0 & 0.372 & 13.70 & 13.52 & - & - & - & - & - & - & - & - \\ SLI-R & 148.4 & 0.397 & 14.60 & 15.01 & 333.5 & 0.292 & 12.40 & 0.00 & 447.5 & 0.233 & 11.87 & 0.00 \\ Coordinates & 122.4 & 0.449 & 13.88 & 14.43 & 243.8 & 0.238 & 12.33 & 1.19 & 440.3 & 0.234 & 11.80 & 0.00 \\ SLI & \textbf{22.6} & \textbf{0.670} & \textbf{18.95} & \textbf{31.71} & \textbf{119.1} & \textbf{0.485} & \textbf{14.81} & \textbf{10.49} & \textbf{173.4} & \textbf{0.513} & \textbf{15.80} & \textbf{35.92} \\ \bottomrule \end{tabular} \end{table*} We then evaluate how much information can be reconstructed from \emph{solely} the SIFT coordinates. Although the SIFT coordinates are located in key regions of the image, they can only be represented as a binary map without specific image details. When using this binary map as the model input, a naive expectation is that at most the edge information can be restored. Surprisingly, however, as shown in Fig. \ref{fig:recon_location}, the basic contours and contents of the objects in the image can be recovered, even though the lack of descriptors leads to blurred textures. The statistical results of the reconstructed images are also compiled into Table \ref{tab:advanced}. Compared with SLI-L and SLI-R, the reconstruction results from coordinates are slightly better. This validates the conclusion that the privacy leakage occurs not only through the descriptors, but also through the coordinates. The above results also imply that the privacy leakage problem is much less severe under \textbf{Scenario II} than under \textbf{Scenario I}, especially when the adversary cannot access the coordinates. \section{Conclusions}\label{sec:conclusion} In this work, we have thoroughly investigated the privacy leakage problem of the widely used SIFT features. We have first considered \textbf{Scenario I}, where the adversary can fully access the SIFT features. We have proposed a deep generative model, SLI, for reconstructing the latent image from its SIFT features. The proposed model is formed of two networks: an LBP reconstruction network, which aims to convert the SIFT features into LBP features, and an image reconstruction network, which generates the reconstruction results by using the transformed LBP as guidance. We have then considered \textbf{Scenario II}, where the adversary can only access partial SIFT features. We have designed landmark-based and reference-based methods for estimating the SIFT coordinates from the descriptors. Experimental results have been provided to demonstrate the superiority of the proposed model SLI under these two scenarios. Our results have also suggested that the privacy leakage problem can be largely avoided under \textbf{Scenario II}, especially when the adversary cannot access the coordinates. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction}\label{introduction} Stellar bars are common in the local Universe, with well over half of disk galaxies having a bar visible on optical and near-infrared images \citep[e.g.,][]{1963ApJS....8...31D,1993RPPh...56..173S,2000ApJ...529...93K, 2002MNRAS.336.1281W,2004ApJ...607..103L,2007ApJ...659.1176M,2007ApJ...657..790M,2009A&A...495..491A,2012ApJ...761L...6M, 2015ApJS..217...32B,2016A&A...587A.160D,2019A&A...631A..94D}. Due to the non-axisymmetric mass distribution in bars, they stimulate angular momentum transfer and gas inflow in galaxy disks \citep[][]{1979MNRAS.187..101L,1980ApJ...237..404S,1989Natur.338...45S,1992MNRAS.259..345A}, and are thus an important agent in the secular evolution of galaxies \citep[see the review by][and references therein]{2013seg..book....1K}. The distribution of massive star formation (SF) in galaxy disks is conditioned by localized zones where gas clouds are both stable and dense enough to form stars, which \citet[][]{1989ApJ...344..685K}, following \citet[][]{1964ApJ...139.1217T}, parameterized to depend on gas surface density and velocity dispersion. Velocity shear can limit SF, however, acting against the condensation of massive clouds \citep[e.g.,][]{1998A&A...337..671R,2005MNRAS.361L..20S}. From a clear systematic offset observed in their Fabry-P\'erot H$\alpha$ data between regions of high non-circular motions and regions of active SF in the bar of NGC~1530, \citet[][]{2004A&A...413...73Z} nicely illustrated how shear in a strong bar can locally inhibit massive SF. Although this effect is hard to observe, and has not been seen in many other galaxies, it does graphically illustrate the relation between bar dynamics and SF morphology. In general, the occurrence of massive SF zones is governed by the location of the spiral arms and dynamical resonances, with SF often concentrated in spiral arms and the rings that can form near the resonances. In galactic bars, there is no uniform picture of where the SF occurs. Often there are regions of SF near the ends of the bar, and these can form parts of inner rings, as in NGC~5850 (Fig.~\ref{Fig_classB}, lower panel), or highlight the start of grand-design spiral arms, as in NGC~1300 (Fig.~\ref{Fig_classB}, upper panel). The sets of symmetric enhancements of stellar emission near the ends of the bar known as ansae are typically not star-forming and have a stellar dynamical origin \citep[][]{2007AJ....134.1863M}. Bars stimulate gas inflow \citep[][]{1984MNRAS.209...93S,1985A&A...150..327C}, and where this inflow is slowed down in the vicinity of inner Lindblad resonances \citep[e.g.,][]{1994ApJ...424...84H,1995ApJ...454..623K,2010MNRAS.402.2462C} a nuclear ring can form, and the gas accumulated within it can lead to important and visually striking star-forming nuclear rings, as in NGC~1097 (Fig.~\ref{Fig_classC}, upper panel). Bars have been statistically linked to enhanced gas concentration, and very clearly linked to enhanced SF in the central kpc region \citep[e.g.,][]{1980A&A....88..365H,1981A&A....93...93H,1986MNRAS.221P..41H, 1987ApJ...323...91D,1999ApJ...525..691S,2005ApJ...630..837J,2005ApJ...632..217S,2006ApJ...652.1112R,2017ApJ...838..105L} \citep[for a review of the early papers, see][]{2004ASSL..319..189K}, and often this manifests itself not as a ring but as a (circum)nuclear starburst, as in NGC~2712 or NGC~3185 \citep[Figs.~2~and~3 in][]{2016MNRAS.457..917J}. The SF can be limited to the central region, as in NGC~0936 (Fig.~\ref{Fig_classA}).
Star formation can occur along the bar, but often does not. When it does, it can occur in a narrow linear or curved morphology either in the middle of the bar, as in NGC~7479 \citep[see Fig.~1 in][]{2001Ap&SS.276..491Z}, or along one of its edges, as in NGC~1365 (Fig.~\ref{Fig_classC}, middle panel). Many bars are devoid of SF, as in the case of NGC~5850 (Fig.~\ref{Fig_classB}, lower panel), showing only a central SF peak. Finally, the bar sweeping up gaseous material often leads to a dearth of gas, and thus SF, in symmetric regions on either side of the bar \citep[][]{2009A&A...501..207J}; a good example is NGC~3351, shown in Fig.~4 of \citet[][]{2016MNRAS.457..917J}. This was referred to as the ``SF desert'' by \citet[][]{2016MNRAS.457..917J,2018MNRAS.474.3101J} and the desert was confirmed from numerical modeling to consist of older stars by \citet[][]{2019MNRAS.489.4992D}. In this paper we use ultraviolet (UV) and H$\alpha$ imaging to study the distribution of SF in bars in a statistical manner rather than by considering the detailed morphology of individual galaxies, for a sample of more than 800 barred galaxies from the \emph{Spitzer} Survey of Stellar Structure in Galaxies \citep[S$^4$G;][]{2010PASP..122.1397S}. As a result, we do not use all the possible categories described earlier in this Introduction, but concentrate on whether SF occurs at the inner or outer ends of a bar, and/or within it, as described in Sect.~\ref{individual}. Our investigation builds on a small but very interesting body of past work. \citet[][]{2007A&A...474...43V} characterized the H$\alpha$ morphology of 45 suitable isolated galaxies from their AMIGA sample \citep[see][]{2007A&A...472..121V}, classifying them into three main groups depending on whether or not emission is present from the central and bar regions of a galaxy \citep[see also work by][]{1997A&A...326..449M,2019A&A...627A..26N}. Recently, \citet[][]{2020MNRAS.495.4158F} used 684 relatively face-on galaxies from the Mapping Nearby Galaxies at APO \citep[MaNGA;][]{2015ApJ...798....7B} survey, which have a high probability of being barred following the classification by citizen-science volunteers in the Galaxy Zoo 2 project \citep[][]{2013MNRAS.435.2835W}. They then classified their H$\alpha$ images according to whether a galaxy shows SF in the center, inner ring, ends of the bar, or within the bar, concluding that only low-mass galaxies host SF along their bars, and that both the physical and SF properties of bars are mostly governed by the galaxy stellar mass. We improve on several aspects of previous work, for example in sample size (the samples of \citealt[][]{2007A&A...474...43V} or \citealt[][]{2019A&A...627A..26N} are small and did not probe the plentiful galaxies at the end of the Hubble sequence), in the set of explored morphological and physical disk and bar parameters, and in the quality of the multiwavelength imaging data. \citet[][]{2020MNRAS.495.4158F} use H$\alpha$ images derived from MaNGA that have limited physical resolution across their sample, and depend on criteria for bar classification that are hard to quantify but can introduce important biases (e.g., towards the most prominent bars, judging from the overly low overall bar fractions they obtain).
In addition, in Sect.~\ref{stacks} we introduce the stacking of UV bars (2D), based on the techniques developed by \citet[][]{2016A&A...596A..84D} at 3.6~$\mu$m, and significantly improve on the averaging of SF radial profiles (in 1D) pioneered by \citet[][]{2009A&A...501..207J} (in H$\alpha$), using hundreds of images per sample bin. These techniques probe with unprecedented statistical significance the spatial distribution of SF in disks, whose dependence on global galaxy properties is discussed in Sect.~\ref{discussion_chapter}, as well as the possible effect of stellar bars enhancing or inhibiting SF. Finally, in Sect.~\ref{summarysection} we summarize the main results of this paper and their interpretation in light of galaxy evolution. \section{Stacking GALEX near- and far-UV images}\label{stacks} \citet[][]{2016A&A...596A..84D} obtained average 1D disk profiles and 2D bar density maps by stacking \emph{Spitzer} Infrared Array Camera (IRAC) 3.6 $\mu$m images, which trace old stellar populations, in order to characterize the stellar mass distribution of more than a thousand disk galaxies and reveal signatures of bar-induced secular evolution. Here, these averaging techniques are applied to Galaxy Evolution Explorer (GALEX) near-UV (NUV; $\lambda_{\rm eff}=2267\,\AA$) and far-UV (FUV; $\lambda_{\rm eff}=1516\,\AA$) images, so that SF activity in bars is analyzed with unprecedented statistical significance \citep[emission at these UV wavelengths traces recent SF, up to $\sim 100$ Myr; ][]{1998ARA&A..36..189K}. The UV stacks constitute a non-parametric characterization of the distribution of SF in bars, which may be useful for comparison with numerical models. We use the sky-subtracted and masked images from the \emph{GALEX/S$^4$G UV-IR Catalog} by \citet[][]{2018ApJS..234...18B} \citep[see also][]{2015ApJ...800L..19B}, comprising 1931 galaxies of all morphological types that were gathered from the GALEX GR6/7 Data Release\footnote{\href{http://galex.stsci.edu/GR6/}{http://galex.stsci.edu/GR6/}}, cross-matched with those of the S$^4$G, and reduced following \citet[][]{2007ApJS..173..185G}. Roughly one-half of the data belong to the GALEX All-Sky Imaging Survey (AIS), whose images had exposure times of $\sim$ 100 seconds and allow the detection of point sources down to $\approx 20$ AB mag \citep[e.g.,][]{2017ApJS..230...24B}. The rest of the galaxies were imaged in deeper GALEX surveys and had exposure times of 1000 seconds or more. The pixel size is 1.5 arcsec. \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{./Fig1.pdf} \includegraphics[width=0.99\textwidth]{./Fig2.pdf} \caption{ Two-dimensional synthetic stellar bars constructed from co-added NUV (top) and FUV (bottom) images of disk galaxies that were oriented and scaled with respect to the bars, flipped to make the spiral arms wind clockwise (if needed), and grouped based on revised Hubble stage ($T$, increasing from left to right). The number of galaxies in each subsample is also indicated. Bar stacks are shown in units of mag arcsec$^{-2}$ (see vertical bar for thresholds and color-coding) and cropped to a radius $1.5 \cdot r_{\rm bar}$, so that all binned galaxies are covered radially. The dotted lines show isophotal contours with a step of 0.35 mag arcsec$^{-2}$. The ellipse represents the average ellipticity (3.6 $\mu$m) of the galaxies in the bin \citep[from][]{2015A&A...582A..86H,2016A&A...587A.160D}.
The mean bar length is used as a unit, but the actual mean 3.6 $\mu$m bar lengths in kpc vary for each $T$-bin \citep[see Fig.~11 and Table~3 in][]{2016A&A...587A.160D} and are lowest among the faintest galaxies.
}
\label{Fig_ttype_bars_NUV}
\end{figure*}

\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{./Fig3.pdf}
\includegraphics[width=0.49\textwidth]{./Fig4.pdf}
\caption{
Azimuthally averaged mean NUV (left) and FUV (right) luminosity profiles (solid lines), in bins of numerical Hubble type, obtained from the 2D bar stacks shown in Fig.~\ref{Fig_ttype_bars_NUV}. The dashed lines correspond to the surface brightness cut along the bar major axis. The vertical dotted line indicates the bar end. FUV luminosities are converted to $\Sigma_{\rm SFR}$ (right $y$-axis of the right panel) using Eq.~\ref{SFR_eq}.
}
\label{Fig_ttype_bars_NUV_1D}
\end{figure*}

Mean FUV surface brightnesses ($\mu_{\rm FUV}$) are converted to SF rate surface densities ($\Sigma_{\rm SFR}$) following the prescription by \citet[][]{2012ARA&A..50..531K} and \citet[][]{2014ARA&A..52..415M} \citep[see Appendix~B in][]{2018ApJS..234...18B},
\begin{equation}\label{SFR_eq}
\log_{10} (\Sigma_{\rm SFR})\,[M_{\odot}\,{\rm yr}^{-1}\,{\rm pc}^{-2}] = 1.239 - 0.4\cdot \mu_{\rm FUV}\,[{\rm AB\,mag\,arcsec^{-2}}],
\end{equation}
assuming a Kroupa initial mass function \citep[][]{2001MNRAS.322..231K}. These estimates are not corrected for extinction, and thus the values are lower limits to the true $\Sigma_{\rm SFR}$.

Our parent sample is made up of the 1345 disk galaxies in the S$^4$G with inclinations $< 65^{\circ}$ \citep[according to][]{2015ApJS..219....4S}. Of these, 860 are barred according to \citet[][]{2015ApJS..217...32B}, of which 760 ($\sim 88\%$) have available NUV and FUV imaging from \citet[][]{2018ApJS..234...18B}. We also use a control subsample of 423 non-barred and not highly inclined galaxies with available GALEX UV data.

\subsection{Average UV bars (2D)}\label{bar_uv_stack}

In order to study in detail the distribution of SF in bars, FUV and NUV images are scaled to a common frame determined by the sizes ($r_{\rm bar}$) and orientations of the bars, measured visually by \citet[][]{2015A&A...582A..86H} using 3.6 $\mu$m S$^4$G images. Here we present a summary of the way the UV images are treated \citep[for further details see][]{2016A&A...596A..84D}:
\begin{enumerate}
\item Deprojection to face-on view using the orientation parameters for the outer disk from \citet[][]{2015ApJS..219....4S}. To make sure that deprojections are reliable, we only use galaxies with ``ok'' quality flags for the orientations.
\item Fourier decomposition of the UV light distribution of the galaxy images \citep[up to 40 azimuthal modes, using the NIR-QB code;][]{1999AJ....117..792S,2002MNRAS.337.1118L}, and reconstruction of the image in a polar grid with 128 bins in the azimuthal direction \citep[][]{1999AJ....117..792S}.
\item Rotation of the image with respect to the bar major axis, imposing a bar position angle equal to zero.
\item Geometric reflection across the bar major axis to make the spiral arms wind clockwise (S-shaped) in case they wind counterclockwise (Z-shaped) in the 3.6 $\mu$m images. The correction of the orientation of the spiral arms (normally trailing, relative to the disk rotation) is important for our analysis: H{\sc\,ii} regions typically appear on the leading side of the bar \citep[e.g.,][]{2002AJ....124.2581S,2010A&A...521A...8P}.
\item Scaling of the reoriented image to a grid of radius $3 \cdot r_{\rm bar}$, with a radial bin width of $0.05 \cdot r_{\rm bar}$. This ensures a good sampling of the bar (the median bar radii in our sample are $\sim$10 and $\sim$20 resolution elements in GALEX and \emph{IRAC} images, respectively) and also of the spiral arms slightly beyond the bar region.
\item Having uniformly scaled all the images of barred galaxies to a common physical framework, we are in a position to take subsamples and perform the bar stacks: the mean FUV and NUV surface brightness (averaged in mag arcsec$^{-2}$) is obtained within each of the bins of the polar grid. Our stacking techniques yield roughly the same results (within uncertainties) regardless of whether the light is co-added in flux or in magnitudes, or of whether the mean or the median is used as the measure of central tendency \citep[for further details see Fig.~2 and explanations in][]{2016A&A...596A..84D}.
\end{enumerate}

\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{./Fig5.pdf}
\includegraphics[width=0.49\textwidth]{./Fig6.pdf}
\caption{
Azimuthally averaged mean FUV luminosity profiles obtained from the bar stacks of weakly and strongly barred galaxies \citep[based on the family classification from][]{2015ApJS..217...32B} with total stellar masses $10^{8.5}M_{\odot} < M_{\ast} < 10^{11}M_{\odot}$, considering separately early-type ($T<5$, \emph{upper panel}) and late-type galaxies ($T\ge 5$, \emph{lower panel}). The same plots using NUV are shown in Fig.~\ref{Fig_family_bars_NUV_1D}.
}
\label{Fig_family_bars_UV_1D}
\end{figure}

\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{./Fig7.pdf}
\includegraphics[width=0.49\textwidth]{./Fig8.pdf}
\caption{
Mean $\mu_{\rm NUV}$ (left) and $\mu_{\rm FUV}$ (right) 1D profiles as a function of galactocentric radius for different subsamples defined as a function of the total stellar mass (in bins of 0.5 dex; see legend) \citep[see also][]{2018ApJS..234...18B}. Error bars correspond to the standard deviation of the mean ($\sigma/\sqrt{N_{\rm gals}}$). The dashed lines show the average luminosity profiles where the radial sample coverage is greater than 75$\%$ and lower than 100$\%$, and thus where uncertainties are larger (e.g., spurious up-bending sections caused by the dominance of more extended UV disks with fainter extrapolated central surface brightnesses).
}
\label{Fig_mass_bars_NUV_FUV_1Dstack}
\end{figure*}

\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{./Fig9.pdf}
\includegraphics[width=0.49\textwidth]{./Fig10.pdf}
\caption{
As in Fig.~\ref{Fig_mass_bars_NUV_FUV_1Dstack}, but for the dispersion of the $\mu_{\rm NUV}$ (left) and $\mu_{\rm FUV}$ (right) luminosity profiles.
}
\label{Fig_mass_bars_FUV_1Dstack_disp}
\end{figure*}

Bar stacks resulting from the co-adding of FUV and NUV images and the binning of our sample in the Hubble sequence are shown in Fig.~\ref{Fig_ttype_bars_NUV}. Azimuthally averaged luminosity radial profiles and the surface brightness along the bar major axis are directly extracted from the bar stacks, and are shown in Fig.~\ref{Fig_ttype_bars_NUV_1D}. Uncertainties are estimated via the standard deviation of the mean ($\sigma/\sqrt{N_{\rm gals}}$), which is typically $\lesssim 0.2$ mag, as shown in Sect.~\ref{1-Dstacks}.
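For illustration, the geometric core of steps 3--6 above, together with the conversion of Eq.~\ref{SFR_eq}, can be sketched in a few lines of Python. This is a deliberately simplified, schematic version of the procedure: it operates on a Cartesian grid rather than on the polar grid of the actual pipeline, and the zero point (\texttt{ZP}), the grid size, and the input arrays are placeholder assumptions rather than the values used in this work.

\begin{verbatim}
# Schematic sketch of the bar stacking (steps 3-6); the images are assumed
# to be already deprojected to face-on view (steps 1-2) and large enough to
# be cropped to the common grid.
import numpy as np
from scipy import ndimage

R_GRID = 60   # radius of the common grid: 3*r_bar maps onto 60 pixels
ZP = 28.0     # placeholder AB zero point (absorbs the pixel-area term)

def to_bar_frame(img, bar_pa_deg, r_bar_pix, flip=False):
    """Rotate the bar to PA = 0, flip Z-shaped spirals, scale to r_bar."""
    img = ndimage.rotate(img, bar_pa_deg, reshape=False, order=1)
    if flip:                              # make the arms wind clockwise
        img = np.flipud(img)
    img = ndimage.zoom(img, (R_GRID / 3.0) / r_bar_pix, order=1)
    c0, c1 = img.shape[0] // 2, img.shape[1] // 2
    return img[c0 - R_GRID:c0 + R_GRID, c1 - R_GRID:c1 + R_GRID]

def stack_in_mag(images):
    """Mean surface brightness (mag) and its uncertainty sigma/sqrt(N)."""
    mu = np.array([ZP - 2.5 * np.log10(np.clip(im, 1e-12, None))
                   for im in images])
    return np.nanmean(mu, axis=0), np.nanstd(mu, axis=0) / np.sqrt(len(mu))

def sigma_sfr(mu_fuv):
    """Sigma_SFR from mu_FUV via the calibration above (no dust correction)."""
    return 10.0 ** (1.239 - 0.4 * mu_fuv)
\end{verbatim}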
The subsamples were binned by morphological types, separating S0s ($-3 \le T < 0$), early-type spirals ($0 \le T < 3$), intermediate-type spirals ($3 \le T < 5$), late-type spirals ($5 \le T < 8$), and Magellanic and irregular galaxies ($8 \le T \le 10$). The average ellipticity of stellar bars from \citet[][]{2015A&A...582A..86H}, obtained via ellipse fitting \citep[][]{1987MNRAS.226..747J} from 3.6~$\mu$m imaging, is highlighted with a black ellipse. A similar characterization of bar stacks as a function of the total stellar mass of the binned galaxies can be found in Appendix~\ref{mass_average_bar} (Figs.~\ref{Fig_mass_bars_NUV} and \ref{Fig_mass_bars_NUV_1D}).

\subsubsection{Spatial distribution of UV emission}\label{UV_emission_spatial}

Among spirals ($0 \le T < 8$), the UV emission leads the stellar bar, i.e., it is concentrated on the bar's leading side \citep[e.g.,][]{2002AJ....124.2581S}. This is not the case for the S0s ($T \le 0$), where the UV emission is circumnuclear and does not follow the bars. In addition, the leading and trailing sides of the bars cannot be identified when $T>8$ because no spiral pattern is present, and thus the UV emission does not occupy a preferential side in bars hosted by irregular galaxies. Within the outer half of the bar ellipse (semi-major axis distances $> 0.5\cdot r_{\rm bar}$), the FUV flux on the leading side of the bar stacks (averaged over the two quadrants) is $21\%$, $16\%$, and $11\%$ higher than on the trailing side for early-, intermediate-, and late-type spirals, respectively.

Clear differences stand out for early- and late-type spirals: when $0\le T<5$ (Cols. 2 and 3 of Fig.~\ref{Fig_ttype_bars_NUV}), the UV emission dominates in the circumnuclear regions and at the bar ends, with a deficit of light in the middle part of the bar, whereas for $T\ge5$ the distribution of UV light is almost uniform across the bar. These trends are more clearly seen in Fig.~\ref{Fig_ttype_bars_NUV_1D}: a hump at the bar end is noticeable in the surface brightness profiles of early- and intermediate-type spirals (especially in the cut along the bar major axis), whereas late-type galaxies present an exponential radial decay of the UV surface brightness. Late-type barred galaxies are brighter at UV wavelengths in general: this is not surprising, as these galaxies are known to be richer in gas and form stars more actively. We note that, in general, the trends are very similar in the NUV and FUV passbands.

\subsubsection{Differences in UV emission between strongly and weakly barred galaxies}\label{diffs_strong_weak}

A relation between the strength of the bar and the presence of SF regions along the bar has been hypothesized \citep[see, e.g., discussion in][and references therein]{2002ApJ...570L..55J}, but whether such a connection exists remains unclear. To test this, we derived FUV bar stacks after splitting our sample into weakly barred (I/S$\underline{\rm A}$B+I/SAB) and strongly barred (I/SA$\underline{\rm B}$+I/SB) galaxies (Fig.~\ref{Fig_family_bars_UV_1D}), based on the classification of galaxy families by \citet[][]{2015ApJS..217...32B}\footnote{\citet[][]{2016A&A...596A..84D,2016A&A...587A.160D} showed a correspondence between visual \citep[\underline{A}B/AB/A\underline{B}/B, from][]{2015ApJS..217...32B} and quantitative estimates (tangential-to-radial forces, normalized $m = 2$ Fourier amplitudes, intrinsic ellipticity) of the bar strength.}.
Early-type ($T<5$) and late-type ($T\ge5$, i.e., Sc and later) galaxies are studied separately; they are characterized not only by remarkably different structural properties \citep[e.g.,][]{2016A&A...596A..84D,2016A&A...587A.160D}, but also by distinct distributions of SF, as seen from the UV stacks (see Sect.~\ref{UV_emission_spatial}).

Among early-type spirals, the central FUV emission is $\sim$0.5 mag brighter for strongly barred galaxies, on average, than for their weakly barred counterparts. This translates into a difference in $\Sigma_{\rm SFR}$ larger than $50 \%$. On the other hand, weakly barred galaxies are characterized by a somewhat higher level of FUV emission in the middle and end parts of the bar. As discussed in Sect.~\ref{discussion_chapter}, we interpret such differences as likely related to the subtle effect of strong bars sweeping the disk gas and inducing circumnuclear starbursts. A different picture emerges among late-type galaxies ($T\ge 5$): strongly barred galaxies present more intense UV emission than weakly barred galaxies at all bar radii, by more than 0.5 mag in the central parts in particular. The same trends are identified using NUV imaging (see Fig.~\ref{Fig_family_bars_NUV_1D}).

\subsection{Average UV disks (1D)}\label{1-Dstacks}

\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{./Fig11.pdf}
\caption{
Mean $\mu_{\rm FUV}$ 1D radial profiles in bins of total stellar mass (1 dex in width), separating barred (solid line) and non-barred galaxies (dashed line) ($95\,\%$ radial coverage). The vertical dotted lines indicate the mean bar size of the barred galaxies in each of the $M_{\ast}$-bins. The same plot for $\mu_{\rm NUV}$ can be found in Fig.~\ref{Fig_mass_bars_NUV_1Dstack_bars_separated}.
}
\label{Fig_mass_bars_NUV_FUV_1Dstack_bars_separated}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{./Fig12.pdf}
\caption{
Mean FUV 1D luminosity profiles for barred (black) and non-barred (red) galaxies hosting an inner ring, with total stellar masses $10^{9.5} < M_{\ast}/M_{\odot} < 10^{11}$. Vertical bars correspond to the standard deviation of the mean. The luminosity profiles were scaled with respect to the deprojected semi-major axis of the inner ring, labeled SMA(inner ring), before they were co-added.
}
\label{Fig_inner_rings_SMA}
\end{figure}

To perform a direct comparison of the mean UV luminosity profiles of barred and non-barred galaxies, and also to study those hosting inner rings, we apply 1D averaging techniques \citep[for a characterization of average profiles without separation into barred and non-barred galaxies, see also Fig.~4 in][]{2018ApJS..234...18B}. This also allows a direct estimate of the dispersion and uncertainties in our stacks. Prior to the co-adding, the $m=0$ Fourier intensity profiles are resized to a common frame determined by the extent of the disk in physical units (up to 25 kpc, using a 0.125 kpc wide radial bin), using spline interpolation \citep[see][]{2016A&A...596A..84D,2017ApJ...835..252S}. The radial extent of the grid is controlled by the rough estimate of the galaxy outer radius that \citet[][]{2015ApJS..219....4S} used to encompass the image region in the 2D photometric decompositions. In the construction of the average profile we must take into account that for some galaxies the extent of their profiles is limited by the image field of view.
We therefore limit the averaging to those radii that are covered by at least $75 \%$ of the galaxies in the bin, unless stated otherwise. Uncertainties in the stacks are estimated from the standard deviation of the mean, $\sigma/\sqrt{N_{\rm gals}}$, where $N_{\rm gals}$ corresponds to the number of galaxies in a given bin.

We characterize the 1D radial profiles of UV surface brightness ($\mu$) after binning the sample based on the total stellar masses ($M_{\ast}$) of the host galaxies \citep[from][]{2015ApJS..219....3M}\footnote{$M_{\ast}$ was derived by \citet[][]{2015ApJS..219....3M} from 3.6~$\mu$m imaging using the calibration of the mass-to-light ratio by \citet[][]{2012AJ....143..139E}.}. In Fig.~\ref{Fig_mass_bars_NUV_FUV_1Dstack} we show the mean $\mu_{\rm NUV}$ and $\mu_{\rm FUV}$ obtained by scaling the density profiles to a common frame in physical units. The statistical dispersion of the luminosity profiles among the galaxies in each of the $M_{\ast}$-bins is shown in Fig.~\ref{Fig_mass_bars_FUV_1Dstack_disp}. It is larger in the FUV ($\sigma \le 1.3$ mag) than in the NUV ($\sigma \le 1$ mag), while the standard deviation of the mean is $\sigma/\sqrt{N_{\rm gals}}\lesssim 0.2$ mag at all radii due to the rich sampling; hence the differences in the mean $\mu_{\rm FUV}$ and $\mu_{\rm NUV}$ for the different $M_{\ast}$ and bar family bins probed in this work are statistically significant.

The UV luminosity declines exponentially with radius (Fig.~\ref{Fig_mass_bars_NUV_FUV_1Dstack}) \citep[see also Fig.~3 in][]{2018ApJS..234...18B}, with a scale length that increases with increasing $M_{\ast}$. The luminosity profiles in the outskirts are brighter in more massive galaxies. When $M_{\ast}>10^{10}M_{\odot}$ a hump is detected in the inner regions, more clearly identified when $M_{\ast}>10^{10.5}M_{\odot}$. This feature is associated with the presence of bars. This is confirmed in Fig.~\ref{Fig_mass_bars_NUV_FUV_1Dstack_bars_separated}, where we study the mean $\mu_{\rm FUV}$ for barred and non-barred galaxies separately (see also Fig.~\ref{Fig_mass_bars_NUV_1Dstack_bars_separated} for $\mu_{\rm NUV}$). When $M_{\ast}>10^{10}M_{\odot}$, barred galaxies have a deficit of FUV light within the bar region, which is not identified at the same radial distances in non-barred galaxies. Beyond that bar radius, the average FUV emission is again somewhat stronger in barred galaxies, hinting at a more active rate of SF in the spiral arms of barred galaxies, and possibly at the effect of bars redistributing gas across the disk. For the smaller $M_{\ast}$ bins, barred galaxies have stronger UV emission at all radii.

Finally, we test a possible causal connection between the detection of SF along the bar and the presence of inner rings \citep[e.g.,][and references therein]{2019A&A...627A..26N}, where gas can accumulate \citep[][]{1984MNRAS.209...93S} and no longer migrate inwards. We note that GALEX UV imaging has been used to study SF in rings in previous work \citep[e.g.,][]{2013A&A...555L...4C,2015BaltA..24..426K}. In Fig.~\ref{Fig_inner_rings_SMA} we show the mean FUV emission for the disk galaxies hosting inner rings and pseudorings \citep[according to][]{2015ApJS..217...32B}. The $\mu_{\rm FUV}$ profiles are scaled with respect to the deprojected ring semi-major axis \citep[from][]{2015A&A...582A..86H}. We find an FUV peak close to the ring radius, showing the intense SF taking place in rings.
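As a schematic summary of the resampling and coverage-limited averaging described at the beginning of this subsection, the following Python sketch interpolates a set of radial profiles onto the common 25 kpc grid, masks radii covered by fewer than $75\%$ of the galaxies, and returns the mean profile with its uncertainty. The input profiles are placeholders; the actual pipeline operates on the $m=0$ Fourier intensity profiles.

\begin{verbatim}
# Schematic sketch of the 1D profile averaging; inputs are placeholders.
import numpy as np
from scipy.interpolate import interp1d

R = np.arange(0.0, 25.0, 0.125)           # common radial grid (kpc)

def stack_profiles(profiles, min_coverage=0.75):
    """Mean profile and sigma/sqrt(N), masking poorly covered radii.

    profiles: list of (radius_kpc, mu) array pairs.
    """
    grid = np.full((len(profiles), R.size), np.nan)
    for i, (r, mu) in enumerate(profiles):
        spline = interp1d(r, mu, kind='cubic', bounds_error=False)
        grid[i] = spline(R)               # NaN beyond each profile's extent
    n = np.sum(np.isfinite(grid), axis=0)
    mean = np.nanmean(grid, axis=0)
    err = np.nanstd(grid, axis=0) / np.sqrt(np.maximum(n, 1))
    mean[n < min_coverage * len(profiles)] = np.nan
    return mean, err
\end{verbatim}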
We note that the peak of mean SF is not located at the semi-major axis (SMA) distance, but at $\sim 0.8\,$SMA, because inner rings are not intrinsically circular \citep[e.g.,][]{2014A&A...562A.121C}; in particular, the mean deprojected axis ratio of inner rings in the S$^4$G (inclinations lower than 65$^{\circ}$) is $0.76 \pm 0.01$ ($\sigma=0.12$), in the range $0.4-1$ \citep[][]{2019A&A...625A.146D}. We study barred and non-barred galaxies separately, finding very similar radial FUV distributions. Non-barred inner-ringed galaxies present slightly higher mean star formation rates (SFRs) along the disk than their barred counterparts. In Fig.~\ref{Fig_mass_bars_NUV_1Dstack_bars_separated} in Appendix~\ref{bars_NUV} we show similar profiles for the NUV, finding the same trends. We conclude that the spatial distribution of UV light in ringed galaxies is roughly the same for barred and non-barred galaxies; the implications are discussed in Sect.~\ref{inner_ring_disc}.

\section{Visual classification of the distribution of SF within bars in individual galaxies}\label{individual}

In Sect.~\ref{stacks} we showed the statistical power of stacking techniques to characterize the SF activity in bars with a high signal-to-noise ratio and to detect low levels of SF. Nevertheless, by averaging hundreds of UV images we lose information on individual galaxies. In addition, the UV passbands are not necessarily optimal for tracing the most recent SF bursts. Here, we compensate for these disadvantages by individually inspecting the distribution of FUV (same dataset as in Sect.~\ref{stacks}, comprising 760 galaxies, with the inclusion of 12 additional images from the GALEX GR6/7 Data Release) and continuum-subtracted H$\alpha$ emission \citep[that traces SF in the last $\sim$20 Myr;][]{1998ARA&A..36..189K} in a large comprehensive sample with accurately determined disk and bar physical properties.

\subsection{Compilation of H$\alpha$ images for S$^4$G barred galaxies and continuum subtraction}\label{compilations_halpha}

The sources of the H$\alpha$ images used in this work are listed in Table~\ref{table_SF_class_sources} in Appendix~\ref{SF_class_sources}, and were mainly gathered from the NASA/IPAC Extragalactic Database (NED)\footnote{\href{http://ned.ipac.caltech.edu}{http://ned.ipac.caltech.edu}}. We started with the compilation of 281 continuum-subtracted images that were used in \citet[][]{2013A&A...555L...4C}, mostly from the \emph{Hubble} Space Telescope (HST) Archive\footnote{\href{http://archive.stsci.edu/hst/search.php}{http://archive.stsci.edu/hst/search.php}}. We updated this compilation by adding 152 new images, making a final sample of 433 S$^4$G galaxies with available H$\alpha$ continuum-subtracted imaging.

We produced additional H$\alpha$ continuum-subtracted images for 17 galaxies following \citet[][]{2004A&A...426.1135K} and \citet[][]{2006A&A...448..489K} \citep[see also][]{1999ApJS..124...95B}, namely four from the SPLUS survey \citep[][]{2019MNRAS.489..241M}, nine from JPLUS \citep[][]{2019A&A...622A.176C}, three from the ESO archive, and one (IC~1158) from the original compilation by \citet[][]{2013A&A...555L...4C}. We scaled $R$-band continuum images to match the intensity level of the continuum emission in the H$\alpha$ image. We measured the integrated intensity of at least six non-saturated foreground stars in the H$\alpha$ and $R$-band continuum images, and derived the scaling factor from the average ratio of the intensities measured for each star.
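A minimal sketch of this star-based scaling, assuming the integrated star intensities have already been measured (the inputs below are hypothetical), is:

\begin{verbatim}
# Schematic sketch of the star-based continuum scaling; the scale factor
# is the mean ratio of the integrated intensities of the same foreground
# stars measured in the Halpha and R-band images.
import numpy as np

def continuum_scale_factor(star_flux_halpha, star_flux_r):
    """Mean Halpha/R intensity ratio of the (>= 6) non-saturated stars."""
    ratios = np.asarray(star_flux_halpha) / np.asarray(star_flux_r)
    return ratios.mean()

def subtract_continuum(img_halpha, img_r, scale):
    """Continuum-subtracted Halpha line-emission image."""
    return img_halpha - scale * img_r
\end{verbatim}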
If not enough foreground stars were available, we employed a second method in which the intensities of each pixel in the H$\alpha$ and $R$-band images were compared. If the color is constant across the image, the relation is expected to be roughly linear, with deviations associated with strong emission-line regions. In order to reduce the scatter and avoid possible saturated pixels, we re-binned some of the images and removed $\sim 1-10 \%$ of the brightest pixels of each image. Finally, the scaling factor was obtained from the slope of the linear regression fit. For further details the reader is referred to \citet{2004A&A...426.1135K}, who showed that the two methods described above (scaling based on stars and on pixel-to-pixel matching) give similar values of the H$\alpha$ continuum level.

For 31 galaxies we also used state-of-the-art data gathered with the Multi-Unit Spectroscopic Explorer \citep[MUSE;][]{2010SPIE.7735E..08B} integral field unit at the Very Large Telescope (VLT). The mosaics have two sources. The first is the ESO Science portal\footnote{\href{http://archive.eso.org/scienceportal/home}{http://archive.eso.org/scienceportal/home}}, where science-ready mosaics of several of our galaxies can be downloaded. We produced the others by downloading the raw MUSE data from the ESO archive (see Table~\ref{table_SF_class_sources}). We reduced each exposure using the MUSE pipeline \citep{2012SPIE.8451E..0BW,2014ASPC..485..451W} under the \texttt{Reflex} environment \citep{2013A&A...559A..96F} using standard parameters. We manually aligned the exposures before combining them using the \texttt{muse\_exp\_combine} recipe to produce the final cube. For another 23 galaxies in our sample, fully reduced data cubes in the wavelength range 3750--7500~\AA\ exist from the CALIFA survey \citep{2012A&A...538A...8S,2014A&A...569A...1W,2016A&A...594A..36S} and are publicly available via the CALIFA DR3 website\footnote{\href{http://califa.caha.es/DR3}{http://califa.caha.es/DR3}}.

We produced the continuum-subtracted H$\alpha$ maps from the integral field unit (IFU) data by convolving the data cubes with a filter of 20~\AA\ FWHM centered at the H$\alpha$ line (accounting for the Doppler shift using recession velocities taken from NED) and subtracting the adjacent continuum contribution within $\pm 50$~\AA, either manually (MUSE) or using the PINGSoft\footnote{\href{https://www.inaoep.mx/~frosales/pings/html/software/}{https://www.inaoep.mx/$\sim$frosales/}} software \citep[][]{2011NewA...16..220R} (CALIFA). We checked the quality of the subtraction by comparing the resulting maps with archival H$\alpha$ continuum-subtracted images that existed for a few galaxies \citep[e.g., the image of IC$\,$0776 also appears in][]{2003A&A...400..451G}, in which case the CALIFA maps are not used (poorer resolution). Galaxies in which the bar was not fully covered by the IFU field of view were discarded. In total, we produced continuum-subtracted H$\alpha$ images for 70 galaxies that are also used for the statistical analysis presented in this paper. All of the 433 continuum-subtracted H$\alpha$ images used here are publicly available at the CDS associated with this publication and at NED.

\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./Fig13.pdf}
\caption{
Illustrative example of SF class A (only circumnuclear star formation): NGC~0936.
Shown are the \emph{Spitzer} 3.6~$\mu$m (S$^4$G) (left), GALEX FUV (center), and continuum-subtracted H$\alpha$ images \citep[right; from][but subtraction performed by us]{2019MNRAS.489..241M}. } \label{Fig_classA} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{./Fig14.pdf} \includegraphics[width=0.5\textwidth]{./Fig15.pdf} \caption{ As in Fig.~\ref{Fig_classA}, but for SF class B (star formation at the bar ends, but not along the bar) and subclass ``a'' (circumnuclear SF): NGC~1300 (top) and NGC~5850 (bottom). H$\alpha$ images are from \citet[][]{2004A&A...426.1135K}. } \label{Fig_classB} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{./Fig16.pdf} \includegraphics[width=0.5\textwidth]{./Fig17.pdf} \includegraphics[width=0.5\textwidth]{./Fig18.pdf} \caption{ As in Fig.~\ref{Fig_classA}, but for SF class C (galaxies with SF along the bar) and subclass ``a'' (circumnuclear SF): NGC~1097 (top), NGC~1365 (middle), and NGC~3023 (bottom). H$\alpha$ images are respectively from \citet[][]{2003PASP..115..928K} and from the ESO archive (see Table~\ref{table_SF_class_sources}) (mosaics and continuum-subtraction performed by us from the MUSE archive, covering a smaller field of view than the FUV and 3.6~$\mu$m images). } \label{Fig_classC} \end{figure} \subsection{Classification method}\label{class_met} \begin{table} \centering \small \begin{tabular}{| c| c|} \hline \multicolumn{1}{|c}{\centering SF class} & \multicolumn{1}{|c|}{\centering Distribution of star formation} \tabularnewline \hline \hline A & SF only in the bar central region.\\ \hline B & SF at the ends of the bar, but not along the bar;\\ \hspace{0.75cm} Ba & with circumnuclear SF, \\ \hspace{0.75cm} Bb & without circumnuclear SF. \\ \hline C & SF along the bar;\\ \hspace{0.75cm} Ca & with circumnuclear SF,\\ \hspace{0.75cm} Cb & without circumnuclear SF.\\ \hline N & No flux detection.\\ \hline U & Uncertain classification.\\ \hline \end{tabular} \caption{ Classification system of bar SF classes adopted in this work. } \label{sf_system} \end{table} \begin{table} \centering \small \begin{tabular}{| c| c| c| c| c| c|} \hline \multicolumn{1}{|c}{\centering Sample} & \multicolumn{1}{|c}{\centering A} & \multicolumn{1}{|c}{\centering B} & \multicolumn{1}{|c}{\centering C} & \multicolumn{1}{|c}{\centering N} & \multicolumn{1}{|c|}{\centering U} \tabularnewline \hline \hline FUV & 100 & 196 & 382 & 11 & 83 \\ H$\alpha$ & 54 & 129 & 215 & 19 & 16 \\ \hline \end{tabular} \caption{ Number of galaxies belonging to each SF class A-B-C (see Table~\ref{sf_system}) in the samples with available FUV (upper row) and H$\alpha$ (lower row), and number of cases without emission (N) or uncertain classifications (U). } \label{number_classified} \end{table} We devise a classification method in which the distribution of SF at the bar region is assigned to a class (hereafter SF class) using criteria similar to those used in \citet[][]{2007A&A...474...43V} \citep[see also][]{1997A&A...326..449M,2019A&A...627A..26N,2020MNRAS.495.4158F}. Our classification system is given in Table~\ref{sf_system}. 
In Fig.~\ref{Fig_classA} we display an illustrative example (NGC~0936) of class A (only circumnuclear SF), showing the 3.6~$\mu$m, GALEX FUV, and continuum-subtracted H$\alpha$ images; in Fig.~\ref{Fig_classB} we show those of NGC~1300 (top) and NGC~5850 (bottom), of class B (SF at bar ends, but not along the bar); and in Fig.~\ref{Fig_classC} we show the images of NGC~1097 (top), NGC~1365 (middle), and NGC~3023 (bottom), which belong to class C (SF along the bar). In Table~\ref{table_SF_class_sources} in Appendix~\ref{SF_class_sources} we list the SF class assigned to each galaxy in our sample. For classes B and C, subclasses are also considered depending on whether we detect circumnuclear SF (``a'') or not (``b''); these are also listed in Table~\ref{table_SF_class_sources}, but are not analyzed here. We also note that in a number of cases (94 in FUV and 35 in H$\alpha$) we either did not detect SF or could not reliably classify its distribution.

The assignment of an SF class to each galaxy in our sample was performed by F.D.M., who examined the whole sample twice (with a nine-month interval, allowing him to revisit conflicting cases), consistently obtaining the same statistical trends (see next sections). The classifications of the FUV and H$\alpha$ samples were done independently, so that the visual analysis was unbiased. Images were navigated using \emph{SAOImage DS9} and the contrast was varied to make the H{\sc\,ii} knots, clumps, and filaments stand out.

In Table~\ref{number_classified} we indicate the number of galaxies classified in each category A, B, and C, as well as the number of non-detections (N) and uncertain cases (class U). Of the 772 barred galaxies with available FUV imaging, the percentages (and binomial errors) of SF classes A, B, and C are $13 \pm 1.2 \%$, $25.4 \pm 1.6 \%$, and $49.5 \pm 1.8 \%$, respectively, while $1.4 \pm 0.4 \%$ present no emission (N) and $10.8 \pm 1.1 \%$ are uncertain (U). For the 433 with available H$\alpha$ images, the percentages are consistently $12.5 \pm 1.6 \%$, $30 \pm 2.2 \%$, and $49.7 \pm 2.4 \%$ for SF classes A, B, and C, while $4.4 \pm 1 \%$ and $3.7 \pm 1 \%$ belong to classes N and U, respectively. We note that the statistical trends of SF classes presented in the next sections are roughly the same regardless of the passband used; however, in some cases the classification in H$\alpha$ is not the same as in FUV (23 cases in class A, 29 in class B, 3 in class C), mainly as a consequence of differences in the traced SF timescales in the two passbands, image depth, resolution, the size and irregularity of bars, or unavoidable subjectivity.

In addition, we reassess the number of inner rings that are active, expanding the work by \citet[][]{2013A&A...555L...4C} by enlarging the collection of H$\alpha$ images for the barred galaxies in the S$^4$G. We focus on the inner-ringed galaxies identified by \citet[][]{2015ApJS..217...32B} in \emph{IRAC} 3.6 $\mu$m S$^4$G images. An example of a galaxy with an active inner ring (NGC~5850) is shown in Fig.~\ref{Fig_classB}. In Table~\ref{table_SF_class_sources} we indicate whether inner rings are active (rA) or passive (rP). Using FUV images the number of barred galaxies classified as rA is 253 ($90.4 \pm 1.8 \%$); only 23 ($8.2 \pm 1.6\%$) belong to class rP, while 4 ($1.4 \pm 0.7\%$) are uncertain (rU). When using H$\alpha$ these numbers are 175 ($80.6 \pm 2.7\%$), 35 ($16.1 \pm 2.5\%$), and 7 ($3.2 \pm 1.2 \%$), respectively. Further analysis is presented and discussed in Sect.~\ref{inner_ring_disc}.
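These fractions and their uncertainties follow the standard binomial counting estimate, $\sigma_f=\sqrt{f(1-f)/N}$. As a minimal illustration, the FUV percentages quoted above can be reproduced from the counts in Table~\ref{number_classified}:

\begin{verbatim}
# Minimal sketch of the quoted class fractions and binomial errors,
# using the FUV counts (classes A, B, C; no detection N; uncertain U).
import numpy as np

counts = {'A': 100, 'B': 196, 'C': 382, 'N': 11, 'U': 83}
n_tot = sum(counts.values())              # 772 galaxies with FUV imaging

for cls, n in counts.items():
    f = n / n_tot
    err = np.sqrt(f * (1.0 - f) / n_tot)
    print(f"{cls}: {100 * f:.1f} +/- {100 * err:.1f} %")
\end{verbatim}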
\begin{figure}
\includegraphics[width=0.5\textwidth]{./Fig19.pdf}
\includegraphics[width=0.5\textwidth]{./Fig20.pdf}
\caption{Fraction of SF classes as a function of the revised Hubble stage, classified based on the FUV (upper panels, in blue) and continuum-subtracted H$\alpha$ emission (lower panels, in red). The subpanels correspond to SF classes A (only circumnuclear SF, \emph{top}), B (SF at the bar ends, but not along the bar, \emph{middle}), and C (SF along the bar, \emph{bottom}). Error bars correspond to binomial errors. Indicated for each category are the number of analyzed galaxies within the bin (bottom row of numbers) and the number of identified cases in each SF class (top row of numbers). In the upper left corners the total number of analyzed galaxies ($N_{\rm gal}$) is given.
}
\label{THUBBLE}
\end{figure}

\subsection{Frequency of SF categories as a function of morphological type}\label{ttype_SF}

Our goal is to determine whether different distributions of SF in bars depend on the global properties of the host galaxy, focusing on the galaxies with SF classes A, B, and C, and excluding from the analysis uncertain cases (U). In Fig.~\ref{THUBBLE} we show the fraction of SF classes as a function of the morphological type of the host galaxies from \citet[][]{2015ApJS..217...32B}, including binomial counting errors (error bars), using classifications based on FUV and continuum-subtracted H$\alpha$ images.

The histograms of the frequency of the three SF classes are significantly different. For SF class A, the distribution peaks for S0s ($\sim 60-70 \%$) and drops among the spirals ($\le 20 \%$). SF class B is dominant in early- and intermediate-type spirals ($\sim 40-60 \%$), with a frequency a factor of $\sim 2$ higher than in their late-type counterparts. A negligible number of lenticulars belong to SF class B. Lastly, for SF class C, the fraction increases with increasing Hubble type: a maximum frequency of $\sim 60-75 \%$ is found for Sc and irregular galaxies ($5 \le T \le 10$), while a marginal $\sim 10-20 \%$ is found for S0s. In conclusion, the modes of the statistical distributions of SF classes A, B, and C are clearly segregated in the Hubble sequence, even though examples of all SF classes can be found for a given $T$-bin. These reported trends are qualitatively the same regardless of the passband used, either FUV or H$\alpha$.

\subsection{Frequency of SF categories as a function of total stellar mass}\label{mstar_SF}

While much can be learned from the study of galaxy properties in the Hubble sequence, it is also convenient to use quantifiable physical parameters such as the total stellar mass. In Fig.~\ref{MSTAR} we show the frequency of SF classes as a function of $M_{\ast}$, where clear trends stand out: e.g., the fainter the galaxy, the more frequent SF class C is. That is, low-mass galaxies tend to host bars that are actively forming stars along the whole extent of the bar. Specifically, $\gtrsim 60 \%$ of the galaxies with $M_{\ast}<10^{10}M_{\sun}$ belong to SF class C, and the fraction declines with increasing $M_{\ast}$, very clearly in the FUV sample. Even so, $\sim 50 \%$ of the galaxies with $10^{10}M_{\odot}<M_{\ast}<10^{11}M_{\odot}$ are classified as C in the H$\alpha$ sample. The fraction of SF class A peaks for the highest $M_{\ast}$-bin.
In general, the frequency of class B is higher among massive systems ($39.5\pm2.9\%$ and $36.4\pm3.3\%$ in FUV and H$\alpha$, respectively, when $M_{\ast}>10^{10}M_{\sun}$) than among their faint counterparts (fractions of $20\pm 2\%$ and $25.6\pm 3.1\%$), but the histogram is not peaked.

\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./Fig21.pdf}
\includegraphics[width=0.5\textwidth]{./Fig22.pdf}
\caption{
As in Fig.~\ref{THUBBLE}, but as a function of total stellar mass of the host galaxy, in bins of 0.5 dex and in units of solar masses.
}
\label{MSTAR}
\end{figure}

\subsection{Frequency of SF categories as a function of gas fraction}\label{gas_frac}

The global content of atomic hydrogen in the galactic disk traces the principal fuel reservoir for SF, even though the main fuel for SF is molecular gas. In Fig. \ref{MHIMSTAR} we show the frequency of SF classes against the relative content of atomic gas (i.e., the mass of H{\sc\,i} gas normalized by the total stellar mass). Atomic gas masses are estimated as \citep[e.g.,][]{1988gera.book..522G,2018MNRAS.474.5372E,2019A&A...625A.146D}
\begin{equation}\label{gasfrac}
M_{\rm HI}=2.356 \cdot 10^5 \cdot D^2 \cdot 10^{0.4 \cdot (17.4-m21c)},
\end{equation}
where $m21c$ is the corrected 21 cm line flux in magnitudes from HyperLEDA and $D$ is the distance to the galaxy (in megaparsecs) adopted by \citet[][]{2015ApJS..219....3M}. We confirmed the good agreement between our heterogeneous $M_{\rm HI}$ estimates and those available from The Arecibo Legacy Fast ALFA Survey \citep[ALFALFA;][]{2011AJ....142..170H} for the overlap between the two samples (less than $25\%$ of our galaxies).

The distributions of the three main SF categories are somewhat different when studied versus $M_{\rm HI}$/$M_{\ast}$, resembling the behavior in the Hubble sequence (Sect.~\ref{ttype_SF}). The frequency of SF class A (C) decreases (increases) with increasing gas fraction. The mode of SF class B ($\sim 40 \%$) occurs for intermediate gas fractions with $-1.5 \le {\rm log_{10}} (M_{\rm HI}/M_{\ast}) \le -1.0$, and its distribution is rather flat ($\sim 20-30\%$) among the gas-rich galaxies.

\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./Fig23.pdf}
\includegraphics[width=0.5\textwidth]{./Fig24.pdf}
\caption{
As in Fig.~\ref{THUBBLE}, but as a function of the H{\sc\,i} gas fraction (relative to the total stellar mass), in bins of 0.5 dex.
}
\label{MHIMSTAR}
\end{figure}

\subsection{Frequency of SF categories as a function of gravitational torque}\label{Qb_sec}

We finally study the frequency of SF categories as a function of the gravitational torque measured by the tangential-to-radial force ratio ($F_{\rm T}/\left<F_{\rm R}\right>$) \citep[e.g.,][]{2001ApJ...550..243B,2002MNRAS.331..880L,2002MNRAS.337.1118L,2004ApJ...607..103L}. This test is especially relevant in that it sheds light on the physics driving SF across bars (see Sect.~\ref{dist_SF}): tangential forces trace bar-induced gas motions, while radial forces control circular velocities in the inner parts, and thus the degree of shear \citep[e.g.,][]{2005MNRAS.359.1065S}. In particular, we use the radial force profiles derived by \citet[][]{2016A&A...587A.160D} from $3.6\,\mu$m S$^4$G imaging, following \citet[][]{1981A&A....96..164C}:
\begin{equation}\label{torquerad}
Q_{\rm T}(r)=\frac{{\rm max}\left( |F_{\rm T}(r,\phi)| \right)}{\langle |F_{\rm R}(r,\phi)|\rangle}.
\end{equation}
Here $r$ and $\phi$ refer to the radial distance and azimuthal angle, respectively.
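For reference, Eqs.~\ref{gasfrac} and \ref{torquerad} can be evaluated as in the following Python sketch; the force maps are placeholder inputs, since the actual $Q_{\rm T}$ profiles are taken from \citet[][]{2016A&A...587A.160D}.

\begin{verbatim}
# Schematic evaluation of the HI-mass and force-ratio formulas;
# all inputs are placeholders.
import numpy as np

def m_hi(m21c, dist_mpc):
    """HI mass (Msun) from the corrected 21 cm magnitude m21c
    (HyperLEDA) and the adopted distance in Mpc."""
    return 2.356e5 * dist_mpc**2 * 10.0 ** (0.4 * (17.4 - m21c))

def q_t_profile(f_tan, f_rad):
    """Force ratio Q_T(r) from tangential and radial force maps given
    as 2D arrays of shape (n_radii, n_azimuths) on a polar grid."""
    return np.max(np.abs(f_tan), axis=1) / np.mean(np.abs(f_rad), axis=1)
\end{verbatim}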
Specifically, the maximum of $Q_{\rm T}$ at the bar region is used as a proxy for the bar-induced perturbation strength \citep[e.g.,][]{2001ApJ...550..243B,2019A&A...631A..94D}, called $Q_{\rm b}$. We note that at the bar region the unaccounted dark halo contribution to radial forces is likely to be only minor, and becomes somewhat important for later types, implying a reduction of $\sim 20-25\%$ in $Q_{\rm b}$ for $T = 7-10$ \citep[][]{2016A&A...587A.160D}.

The fraction of SF categories versus $Q_{\rm b}$ is shown in Fig.~\ref{QB}, confirming differences in the distribution of SF classes identified earlier. The occurrence of SF class B peaks at $\sim 40 \%$ for $0.2 \le$ $Q_{\rm b} \le 0.3$, and smoothly decreases towards both weaker and stronger bars. This is an intermediate case between SF classes A and C, which hints at a physical transition of SF and local dynamical conditions in bars. SF class C is more typical of barred galaxies with high gravitational torques (strong bars); there is a sharp increase in its frequency with increasing $Q_{\rm b}$.

\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./Fig25.pdf}
\includegraphics[width=0.5\textwidth]{./Fig26.pdf}
\caption{
As in Fig.~\ref{THUBBLE}, but as a function of the bar strength, measured from the maximum of the tangential-to-radial force ratio at the bar region, using bins of 0.1.
}
\label{QB}
\end{figure}

\section{Discussion}\label{discussion_chapter}

We report differences in the statistical distributions of star-forming and passive bars in the S$^4$G survey, pointing to the influence of global morphological and physical properties on the distribution of SF activity in the central regions of galaxies. The S$^4$G is representative of the local Universe despite not being complete in any quantitative form (e.g., volume) in its current version. It is currently being completed with new \emph{Spitzer} 3.6 $\mu$m imaging of early-type galaxies with $T \le 0$ \citep[][]{2013sptz.prop10043S}, and new ground-based $i$-band imaging of relatively gas-poor late-type galaxies. We use archival GALEX FUV and NUV imaging of 772 barred galaxies and a compilation of 433 continuum-subtracted H$\alpha$ images, combining CALIFA and MUSE IFU data cubes with archival imaging of better resolution, and we employ both stacking techniques and visual classifications. Here, we discuss the statistical trends reported in Sect.~\ref{stacks} (stacking techniques) and Sect.~\ref{individual} (visual inspection of images), which consistently yield similar results in all the passbands used (H$\alpha$, NUV, and FUV), and their importance in shedding light on the regulation of the SF activity by bars.

\subsection{Evidence for bar-induced secular evolution in the central regions of disk galaxies}

The torques exerted by stellar bars are expected to provoke the flow of gaseous and stellar material within the disk, driving the secular evolution of the inner parts of the galaxy, as established in simulation models since the 1990s \citep[e.g.,][]{1992MNRAS.259..328A,1992MNRAS.258...82W,1993A&A...268...65F,1993RPPh...56..173S,1995ApJ...454..623K,2004A&A...424..799P,2016MNRAS.462L..41F}. Dark matter bars \citep[][]{2016MNRAS.463.1952P,2019MNRAS.488.5788C}, if present, might also contribute to this effect. Evidence of gas streams in barred spirals was reported in the 1960s \citep[e.g.,][studying NGC$\,$4027 and NGC$\,$7741]{1963AJ.....68..278D}.
The funneled gas might eventually be spent in central starbursts, and contribute to the buildup and evolution of disk-like bulges \citep[][]{2004ARA&A..42..603K}. Observational evidence of secular evolution within the bar region comes from detections of inner rotating stellar and gaseous substructures in the center of barred galaxies \citep[e.g.,][and references therein]{2008A&A...485..695C,2009A&A...495..775P,2013seg..book....1K,2014A&A...572A..25M,2015MNRAS.451..936S}, including nuclear rings \citep[e.g.,][]{1995ApJ...454..623K,2005A&A...429..141K,2010MNRAS.402.2462C,2019MNRAS.488.3904L} and inner bars \citep[e.g.,][]{2002AJ....124...65E,2019MNRAS.484.5296D,2019MNRAS.482L.118M}; from enhanced central SF and chemical abundances \citep[e.g.,][]{2011MNRAS.416.2182E,2012ApJS..198....4O,2015A&A...584A..88F,2016A&A...595A..63V,2017ApJ...848...87C,2020MNRAS.tmp.2737L}; and in general from stellar populations analyses \citep[e.g.,][]{2007A&A...465L...9P,2009A&A...495..775P,2011ApJ...743L..13C,2011MNRAS.415..709S,2011A&A...529A..64P, 2012MNRAS.420.1092D,2013MNRAS.431.2397D,2014A&A...570A...6S,2015A&A...584A..90G,2016MNRAS.460.3784S,2017MNRAS.470L.122P,2019MNRAS.488L...6F, 2019MNRAS.482..506G,2020A&A...637A..56N}.

On the other hand, the gas swept by bars can eventually fuel active galactic nuclei (AGN). However, in spite of the proposed mechanisms based on numerical simulations \citep[e.g.,][]{1989Natur.338...45S,2015MNRAS.446.2468E}, it remains unclear how galaxies drive this gas to the central $\sim 100$~pc to feed the supermassive black holes \citep[e.g.,][and references therein]{2000ApJ...529...93K,2004cbhg.symp..186W,2009ASPC..419..402H,2012ApJ...750..141L,2013ApJ...776...50C}.

\subsection{Insights from UV stacking: strong bars redistribute gas and nourish central star formation}

Based on the analysis of mean stellar density profiles derived from 3.6 $\mu$m images with a large unbiased sample, \citet[][]{2016A&A...596A..84D} and \citet[][]{2017ApJ...835..252S} provide evidence for bar-induced secular evolution of disk galaxies in terms of enhanced central mass concentration. In Sect.~\ref{bar_uv_stack} we describe how we applied these same averaging techniques to obtain mean bars and disks at UV wavelengths, tracing SF on timescales up to $10^{8}$~yr. We used GALEX images of S$^4$G galaxies from \citet[][]{2018ApJS..234...18B} and the parameterization of bars (length, position angle, shape, and strength) at 3.6~$\mu$m from \citet[][]{2015A&A...582A..86H} and \citet[][]{2016A&A...587A.160D}.

Inferences on the distribution of SF based on UV (and to a lesser extent H$\alpha$) emission are affected by extinction, and thus any estimated SFR is a lower limit to the true value. The dust tends to re-radiate the absorbed UV at mid-IR wavelengths. Correcting for this effect is beyond the scope of this paper, but it will be assessed in future work (Díaz-García et al. in prep.) for a subsample of S$^4$G galaxies. Highly inclined galaxies (where dust absorption is greatest) are not included in our analysis. We have not deconvolved the averaged luminosity profiles with the GALEX point spread function ($\sim 6$ arcsec), whose wings should produce the largest uncertainties in the outermost regions of the galaxies, which are not probed here.

Stacking techniques allow us to average hundreds of UV images per sample bin and detect low levels of SF (Sect.~\ref{stacks}).
This is an important step forward in the study of SF in bars, as we use a significantly larger sample than in previous works (see Sect.~\ref{introduction}), resulting in a more in-depth analysis. Sample bins were defined based on detailed visual estimates of bar strength from \citet[][]{2015ApJS..217...32B}, thus testing with high statistical significance the role of bars in triggering or preventing SF.

We showed that, among early-type disk galaxies, the average central UV emission is $\sim$0.5 mag brighter (i.e., $\gtrsim 50 \%$ larger $\Sigma_{\rm SFR}$) when only strongly barred galaxies are considered, relative to their weakly barred counterparts; the latter, in turn, present slightly higher levels of UV emission in the middle and end parts of the bar (Figs.~\ref{Fig_family_bars_UV_1D}~and~\ref{Fig_family_bars_NUV_1D}). This is most likely related to the efficiency of strong bars sweeping the disk gas and nourishing central starbursts. The latter is predicted in numerical models whose outputs resemble early-type disk galaxies \citep[see][and references therein]{1993RPPh...56..173S}.

On the other hand, the UV central surface brightness of barred galaxies is not significantly brighter than that of non-barred galaxies (Figs.~\ref{Fig_mass_bars_NUV_FUV_1Dstack_bars_separated}~and~\ref{Fig_mass_bars_NUV_1Dstack_bars_separated}), which may cast doubt on the role of bars in central SF enhancement. Among the massive galaxies ($M_{\ast}\in [10^{10}-10^{11}]\cdot M_{\odot}$), however, the central UV emission in barred galaxies is higher relative to the underlying exponential disk. Along the same lines, \citet[][]{2016A&A...596A..84D} showed that the central deviation from an exponential slope of the mean stellar density profile is also larger in barred galaxies (see their Fig.~8). Moreover, since UV traces timescales that are about the same as dynamical ones (and even longer in the central parts of galaxies), the bar potential might have changed substantially after enhancing the central UV emission. In other words, some bars might have weakened or even dissolved after feeding gas to the circumnuclear regions \citep[e.g.,][]{2004ApJ...604..614S}, while SF still takes place at $z \approx 0$ out of gas reservoirs that can last for hundreds of Myr. However, bar dissolution is implausible according to most modern simulations \citep[e.g.,][and references therein]{2013seg..book..305A}.

Barred galaxies present higher UV emission relative to their non-barred counterparts when $M_{\ast}<10^{10}M_{\odot}$. As for the largest $M_{\ast}$-bins, barred galaxies have somewhat brighter UV profiles beyond the radii where bars typically occur (Sect.~\ref{1-Dstacks}). Likewise, \citet[][]{2016A&A...596A..84D} showed that, on average, barred galaxies have disks with longer scale lengths and fainter extrapolated central surface brightnesses than non-barred galaxies \citep[see also][]{2013MNRAS.432L..56S,2019MNRAS.489.3553E}. This is probably related to bars causing a mixing of gas and stars and the spreading of the disk \citep[e.g.,][]{2002MNRAS.330...35A,2011A&A...527A.147M,2012MNRAS.426L..46A,2015MNRAS.451..936S}\footnote{ For a recent analysis of the dependence on $M_{\ast}$ of the bar-induced radial distribution of metals in the gas phase of spirals, see \citet[][and references therein]{2020MNRAS.tmp.2303Z,2020arXiv200712289Z}.}, perhaps as a result of the coupling between bar and spiral amplitudes \citep[][]{2010ApJ...715L..56S,2012A&A...548A.126M,2019A&A...631A..94D,2020MNRAS.497..933H}.
In other words, spiral arms are loci of active SF, and their amplitudes are larger in barred galaxies than in non-barred ones \citep[see Fig.~9 in][]{2016A&A...596A..84D}. This translates into a higher UV emission beyond the bar radius. We conclude that bars are important agents in the regulation of the SF in disk galaxies.

\subsection{Is SF quenching in galaxies bar-driven?}

Bar-driven central starbursts have been proposed as the mechanism that eventually depletes the gas in barred galaxies, unless it is replenished from the outside. However, whether the presence of a bar is connected to the total SFR in a galaxy remains a matter of debate \citep[e.g.,][]{1986MNRAS.221P..41H,1988ApJ...329L..69D,1988MNRAS.231..465P, 1999A&A...351...43A,2002AJ....124.2581S,2007A&A...474...43V,2020ApJ...893...19W}. For instance, using H$\alpha$ imaging of galaxies in the Coma \citep[][]{2015A&A...576A..16G} and Local superclusters \citep[][]{2012A&A...545A..16G}, \citet[][]{2015A&A...580A.116G} proposed that strong bars play an important role in the quenching of the SF of massive galaxies since $z=3$. This is supported by their observations at different $z$ of a declining bar fraction for non-quenched galaxies, and is also consistent with the study by \citet[][]{2013ApJ...779..162C}, who found a larger bar fraction among galaxies with a low total specific SFR (i.e., SFR divided by $M_{\ast}$). Similar trends have been found in the local Universe: a drop in the bar fraction among gas-rich galaxies was reported by \citet[][]{2012MNRAS.424.2180M} \citep[see also][]{2012MNRAS.423.3486W,2018MNRAS.473.4731K} based on the Sloan Digital Sky Survey \citep[SDSS;][]{2006AJ....131.2332G}. Further supporting this picture, \citet[][]{2020MNRAS.495.4158F} find a segregation in the SFR-$M_{\ast}$ relation as a function of scaled bar length, where SF classes (very similar to those used in this paper) also separate clearly. If true, the interpretation of these statistical trends is affected by a chicken-or-egg causality dilemma: are strong bars responsible for galaxy quenching \citep[e.g.,][]{2018A&A...609A..60K}, or do they preferentially form in red gas-poor galaxies \citep[see, e.g.,][]{2013MNRAS.429.1949A,2010ApJ...719.1470V}?

The causality between bars and quenching might also be linked to the observation that in lenticulars ($T\le0$), and in gas-poor massive galaxies in general, the UV emission is scant across the disk, is only circumnuclear (Sect.~\ref{bar_uv_stack}), and does not follow the bar (see also Sects.~\ref{ttype_SF}, \ref{mstar_SF}, and \ref{gas_frac}). However, UV might not be the most reliable tracer of SF or young populations among the reddest galaxies, as discussed in \citet[][]{2018ApJS..234...18B}; i.e., the emission can also come from evolved stars (UV-upturn), for instance main-sequence turnoff or extreme horizontal branch stars \citep[e.g.,][]{2005ApJ...619L.111Y,2011ApJS..195...22Y}. In any case, elliptical galaxies are not included in our analysis, and the effect discussed above is not expected to be so severe among S0s. In addition, the statistical trends when only using H$\alpha$ are basically the same. In Appendix~\ref{app_AIS_Agn} we check and confirm that the statistical trends for the frequency of SF classes are not determined by the presence of AGN \citep[as reported by][]{2010A&A...518A..10V}, which is a source of photoionization.
Likewise, \citet[][]{2020MNRAS.495.4158F} used Baldwin, Phillips and Terlevich \citep[BPT;][]{1981PASP...93....5B} diagrams derived from IFU data across the bar spaxels in MaNGA to conclude that the bulk of the H$\alpha$ emission in barred galaxies is associated with SF and not with AGN emission.

The connection between bars and quenching reviewed above is challenged by the fact that bars among late-type galaxies in the S$^4$G (typically gas rich) are unexpectedly frequent \citep[][]{2015ApJS..217...32B} and long \citep[][]{2016A&A...587A.160D} relative to the sizes of their host disks \citep[see also][]{2019MNRAS.489.3553E}, yet their age and exchange of angular momentum might be much different from those of earlier types. \citet[][]{2016A&A...587A.160D} speculated that many of the late-type bars identified in the S$^4$G would possibly be overlooked if they were observed at higher redshift, given their faint disks (see their Sect. 5.1). On the other hand, as discussed by \citet[][]{2015ApJS..217...32B}, the types of bars seen in nearby late-type galaxies may not necessarily be the ones we see at high redshift. \citet[][]{2018MNRAS.474.5372E} showed that SDSS-based studies tend to underestimate the bar fraction (mainly among low-mass, blue, gas-rich galaxies) due to poor spatial resolution and the correlation between bar size and stellar mass. He also found that the bar fraction is roughly constant with $g-r$ color and atomic gas fraction. In addition, \citet[][]{2019A&A...625A.146D} do not find differences in SFRs, gas fractions, or [FUV]-[3.6] color between barred and non-barred S$^4$G galaxies based on the use of clustering algorithms (self-organizing maps). On the other hand, it is known that the S$^4$G missed galaxies owing to its sample selection based on H{\sc\,i} recessional velocities. However, this alone is not sufficient to explain the discrepancies between the SDSS and S$^4$G surveys, such as the overall lower bar fraction in the former or its sharp decrease towards low-mass gas-poor galaxies.

We argue that a definite connection between bar fraction and SF quenching is still lacking in the literature. A new picture may arise from forthcoming surveys in the next decade with the next generation of telescopes (e.g., LSST, JWST, WFIRST, EUCLID). This will allow us to study the cosmic bar fraction \citep[e.g.,][]{2008ApJ...675.1141S,2010ApJ...714L.260N} with unprecedented depth and resolution, and with the aid of automated bar detections that are based on neural networks \citep[e.g.,][]{2018MNRAS.476.3661D}.

\subsection{Spatial distribution of SF in galactic bars}\label{dist_SF}

The ionized gas in the bar region, traced by the H$\alpha$ emission, can be distributed along the bar; concentrated in the nuclear or circumnuclear regions, with little or no emission from the bar; or present in both the bar and the nuclear region \citep[][]{1997A&A...326..449M,2002AJ....124.2581S,2007A&A...474...43V,2008A&A...485....5Z}. Interest in this topic has grown with the advent of large surveys \citep[e.g.,][]{2004A&A...414...23J} and the use of homogeneous IFU data \citep[e.g.,][]{2019A&A...627A..26N,2020ApJ...898..116K}. Most of the work attempting to classify the SF in bars has been carried out with small samples. In order to provide the most complete study with a large unbiased sample of objects that are not highly inclined, in Sect.~\ref{class_met} we presented a simple visual classification system (outlined in Table~\ref{sf_system}) for the galaxies in the S$^4$G survey.
\citet[][]{2020MNRAS.495.4158F} recently presented a study of the SFR and its spatial distribution in 684 barred galaxies surveyed in MaNGA with an approach similar to ours. While they study the frequency of SF categories as a function of the total stellar mass and global SFR, here we test how SF relates to $M_{\ast}$ and to other parameters such as $T$-type, gas fraction, and tangential-to-radial forcing. Their use of IFU data has advantages (e.g., analysis with a homogeneous dataset of BPT diagrams) and disadvantages (e.g., poorer angular resolution) compared to our data, and represents an important complement to our paper.

To date, there is no clear understanding of the influence of local dynamical conditions on the SF activity in bars. The formation of new stars out of molecular clouds along the bars is expected to be regulated by the effect of shear, which can be controlled by the orbits making up the bar \citep[e.g.,][]{1992MNRAS.259..345A}. \citet[][]{2002ApJ...570L..55J} argued that SF can be induced in weak bars, owing to the weaker shocks and shear, and mentioned the case of galaxies like M$\,$100 \citep[][]{1989ApJ...343..602E}, NGC$\,$4254, and NGC$\,$4303 \citep[][]{1997PhDT........11K}. Numerical simulations by \citet[][]{1998ApJ...508..291V} show that weak shocks with speeds of order 20--30 km~s$^{-1}$ can indeed favor the collapse of gas and the formation of stars. In some cases, the distribution of molecular gas indicates that the SF along the bar appears to be inhibited in some locations of the dust lanes due to the high strength of shocks and shear stress \citep[e.g.,][]{1998A&A...337..671R}. This is confirmed in the fluid dynamics simulations by \citet[][]{1992MNRAS.259..345A}. Nevertheless, H{\sc\,ii} regions have been found under these conditions in other galaxies \citep[e.g.,][]{1997A&A...326..449M,2002AJ....124.2581S,2008A&A...485....5Z}. Furthermore, observations of H$\alpha$ velocity gradients showed that shear makes SF drop, whereas shocks enhance it in general \citep[][]{2004A&A...413...73Z}.

In Sect.~\ref{bar_uv_stack} we showed that in bar stacks of spiral galaxies ($0 \le T < 8$) the UV emission traces the stellar bars and dominates on their leading side, a behavior expected from simulations that model bar-triggered gas inflow \citep[e.g.,][]{1992MNRAS.259..345A}. H{\sc\,ii} regions on the leading side of the bars have been detected \citep[e.g.,][]{2002AJ....124.2581S,2010A&A...521A...8P,2020MNRAS.495.4158F}, and are expected to be due to the combined effect of shear and turbulence forces inhibiting SF in most places, but not on the leading side of the bar \citep[][]{2015MNRAS.446.2468E,2015MNRAS.454.3299R} \citep[for pioneering theoretical input on the interplay between shear, shocks, and SF, see][]{1992MNRAS.259..328A}.

\subsection{Differences in the distributions of SF classes A-B-C}

We find distinct distributions of the loci of SF within bars along the Hubble sequence, using both stacking techniques (Sect.~\ref{UV_emission_spatial}) and visual classifications (Sect.~\ref{ttype_SF}). Differences in the statistical distributions of the star-forming and passive bars are also reported as a function of physical properties, such as $M_{\rm HI}/M_{\ast}$ (Sect.~\ref{gas_frac}) and $F_{\rm T}/\left<F_{\rm R}\right>$ (Sect.~\ref{Qb_sec}).
However, the segregation of SF classes (especially between B and C) is less clear as a function of $M_{\ast}$ (Sect.~\ref{mstar_SF}), which is not surprising as the Hubble sequence is not a mass sequence \citep[e.g., Fig.~1 in][]{2016A&A...596A..25L}. We also note that SF classes are not clustered (e.g., examples of any SF class can be found for any given $T$-bin). We find that bar stacks comprising late-type galaxies ($T \ge 5$) have SF that is more evenly distributed along the bar major-axis, and that the UV emission is higher for strong bars at all bar radii (Sect.~\ref{bar_uv_stack}). Likewise, by studying individual objects we show that the fraction of star-forming bars (category C) is larger for later types (Sect.~\ref{ttype_SF}). In Appendix~\ref{app_AIS_Agn} we checked that the depth of FUV imaging does not affect the statistical trends presented in this work: limiting the analysis to the deepest GALEX images yields the same results in the Hubble sequence. The correlation between Hubble type (in a narrower $T$ range) and the presence of SF along the bar has previously been reported from smaller samples that did not probe the plentiful galaxies at the end of the Hubble sequence \citep[e.g.,][]{1996ASPC...91...44P}. \citet[][]{1996RMxAA..32...89G} showed that SBb galaxies tend to host less SF along bars than SBc galaxies. Among the spirals, the dissimilarity in the distributions of SF classes B and C in the Hubble sequence is likely related to general differences in the mass distribution and photometric/kinematic properties of disks in galaxies with $T$-type higher or lower than $\sim 5$ \citep[][]{2016A&A...596A..84D}. The former have larger central mass concentrations than the latter, among which many galaxies are bulge-less \citep[][]{2015ApJS..219....4S,2016A&A...596A..84D}, and the shapes of their rotation curves and mass distributions are remarkably different. Late-type gas-rich galaxies are characterized by low-amplitude, slowly rising rotation curves \citep[e.g.,][]{1991ApJ...368...60P}. Thus, the shear ($\Gamma$) in these galaxies might be lower in the bar region \citep[favoring SF;][]{2005MNRAS.361L..20S} than in their early-type counterparts. The shear can be estimated from the slope of the rotation curve ($V$) in the central regions: $\Gamma=-d \ln \Omega/d \ln r$, where $\Omega(r)=V/r$ is the angular velocity \citep[e.g.,][]{2006ApJ...645.1012S,2018MNRAS.477.1451F} at a given radius $r$, and $\Gamma=1$ in the flat regime. This interpretation is also favored by our observations that star-forming bars are typically hosted by disk galaxies with high tangential-to-radial force ratios (Sect.~\ref{Qb_sec}). $F_{\rm T}$ traces the bar-induced gravitational torques and the efficiency of the bar potential controlling the orbits of the gas \citep[][]{2015MNRAS.451..936S}. $\left<F_{\rm R}\right>$ determines the stellar contribution to the circular velocity \citep[e.g.,][]{2016A&A...587A.160D,2019A&A...625A.146D}, which in turn gives a lower bound for the rotation curve of the galaxy in the inner parts \citep[the nuclear regions tend to be baryon dominated according to, e.g.,][]{2016MNRAS.458.1199E}. For a given galactocentric radius and galaxy size, the higher the $\left<F_{\rm R}\right>$ values, the larger the shear; thus, the torque parameter ($Q_{\rm b}$) is related to the SF activity.
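The dependence of $\Gamma$ on the inner slope of the rotation curve can be made concrete with a short numerical sketch. The snippet below (Python) evaluates $\Gamma=-d \ln \Omega/d \ln r$ for two toy arctangent rotation curves; the functional form and all parameter values are illustrative assumptions, not fits to S$^4$G data.
\begin{verbatim}
# Minimal sketch: shear parameter Gamma = -dln(Omega)/dln(r) for
# two toy rotation curves. The arctan form and its parameters are
# illustrative assumptions, not S4G measurements.
import numpy as np

def shear(r, v):
    """Gamma = -dln(Omega)/dln(r), with Omega = v/r."""
    omega = v / r
    return -np.gradient(np.log(omega), np.log(r))

r = np.linspace(0.1, 10.0, 500)          # radius in kpc
v_early = 200.0 * np.arctan(r / 0.5)     # steeply rising curve
v_late  = 100.0 * np.arctan(r / 3.0)     # slowly rising curve

# Gamma -> 0 for solid-body rotation and Gamma = 1 in the flat
# regime; a lower Gamma in the bar region favors SF.
i = 50                                   # r ~ 1.1 kpc
print(shear(r, v_early)[i], shear(r, v_late)[i])
\end{verbatim}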
In conclusion, a lower shear is likely in Sc-irregular and in low-mass galaxies in general, where the inner slope and amplitude of the rotation curve are lowest \citep[e.g.,][]{2016A&A...587A.160D,2020A&A...635A.197D} and $Q_{\rm b}$ is largest \citep[e.g.,][]{2016A&A...587A.160D}. The latter is mainly due to the dilution of bar gravitational torques by the bulge contribution to the overall radial force field \citep[][]{2001A&A...375..761B,2002MNRAS.337.1118L}, which dominates over the dark matter halo dilution \citep[][]{2016A&A...587A.160D}. This is in spite of the fact that bar-induced tangential forces are probably stronger among the largest galaxies with the most massive bars \citep[massive disks host bars with large $m=2$ Fourier density amplitudes;][]{2016A&A...587A.160D}. On the other hand, for a given $M_{\ast}$, a higher $F_{\rm T}/\left<F_{\rm R}\right>$ can cause a twist in stellar orbits, enhancing the local shear. In addition, in Sect.~\ref{1-Dstacks} we showed that, on average, non-barred galaxies are characterized by exponentially decaying UV luminosity profiles without any light deficit in the central regions (unlike in barred ones), in agreement with reports by \citet[][]{2009A&A...501..207J} that were based on (1D) H$\alpha$ averaging. Altogether, this implies that the dynamical conditions determined by the axisymmetric stellar components alone cannot explain the inhibition of SF, and hence bars play a major role. We confirm the drop in the frequency of star-forming bars in galaxies with $M_{\ast}>10^{10}M_{\sun}$ (Sect.~\ref{mstar_SF}) reported by \citet[][]{2020MNRAS.495.4158F}. This is seen both in FUV and in H$\alpha$ ($\sim 40\%$ smaller sample); however, $\sim 1/2$ of the analyzed galaxies in the H$\alpha$ sample with $10^{10}M_{\odot}<M_{\ast}<10^{11}M_{\odot}$ belong to SF class C. Among the most massive galaxies, physical processes other than SF, such as gas shocks, can also account for the H$\alpha$ emission. \citet{2019A&A...627A..26N} used a sample of 16 galaxies (with $M_{\ast} \gtrsim 10^{10}M_{\odot}$) from the Close AGN Reference Survey \citep[CARS;][]{2017Msngr.169...42H} to study the properties of star-forming bars and non-star-forming bars using IFU MUSE data, and report that the SF along the bar is linked to the flatness of the surface brightness profile: the flattest bars are star-forming. The latter is not easy to reconcile with our report of a low fraction of SF bars in early-type galaxies \citep[see also][]{2020MNRAS.495.4158F}, which typically host flat bars \citep[e.g.,][]{1985ApJ...288..438E,1996AJ....111.2233E,2015ApJ...799...99K,2016A&A...596A..84D}. This may be due to our use of a different and larger sample, and thus further analysis is needed. Galaxies of $T$-types $0 \le T < 5$, which are characterized by intermediate gas fractions and gravitational torques, predominantly have SF regions at the bar ends, but not along the bar. As discussed by \citet[][]{2020MNRAS.495.4158F}, the occurrence of intense H$\alpha$ at the bar ends has been postulated to be a consequence of the gas flows and shear at the kpc level, and cloud-cloud collisions and turbulence on a parsec scale \citep[][]{2015MNRAS.454.3299R}. These favorable physical conditions are likely to be present in early- and intermediate-type spirals. This trait is in principle not related to ansae structures \citep[stellar blobs at the ends of bars;][]{1965AJ.....70..501D} since most ansae are detected in early-type galaxies \citep[e.g.,][]{2007MNRAS.381..401L}.
However, recent work by \citet[][]{2019MNRAS.488..590B} reports the detection of blue bar ansae in late-type galaxies, which could indeed be related to some of the UV enhancements (SF class B) characterized in this work or to highly oval star-forming inner rings. \citet[][]{2007AJ....134.1863M} argue that the nature of this ansae structure is in principle stellar dynamical in origin, yet one example of an ansa harboring SF is reported (NGC$\,$4151). The likelihood of a bar hosting SF is correlated with the total relative content of H{\sc\,i} gas (Sect.~\ref{gas_frac}), which is explained by the behavior of SF classes in the Hubble sequence. However, the availability of gas is not sufficient to explain the statistical trends: quite a few early- and intermediate-type spirals have plenty of cold gas and host H{\sc\,ii} regions everywhere except the bar, where dynamical conditions must be different. Galaxies with $T \ge 5$ are known to be dark matter dominated within the optical disk \citep[see Fig.~6 in][]{2016A&A...587A.160D}; thus, the distinct disk stability properties and the interplay between dark matter, disk temperature, and SF must play a role in explaining our observations. Last but not least, one important ingredient that is missing from our analysis is the content of molecular gas. Ideally, we should use observations of the CO(1-0) line (115 GHz) and infer H$_2$ gas masses from the velocity-integrated line intensities across the bars \citep[e.g.,][]{1999ApJ...526...97R,2015ApJ...815...59P,2019A&A...621L...4G,2020MNRAS.tmp.1411M}, but such data are scarce for our large galaxy sample. In future work (Díaz-García et al. in prep.) we will study the relationship between the molecular gas column density and the surface density of the SFR (derived from GALEX UV and H$\alpha$ imaging compiled in this work, correcting for extinction using 22~$\mu$m WISE photometry) for a subsample of S$^4$G galaxies, using the CO emission along the bars observed with the IRAM-30 m single dish. \subsection{Bars in late-type galaxies: resolution and non-stellar contaminants} Gas-rich galaxies often host clumpy bars (e.g., NGC$\,$3023 in Fig.~\ref{Fig_classC}), and low-quality imaging and the consequent blurring of SF regions can lead to a misclassification of bars among the latest types \citep[e.g.,][]{2014AAS...22320502S}, even at near-IR wavelengths; however, this is not expected to be severe in the S$^4$G given the good quality of the data \citep[see, e.g., discussion in][]{2015ApJS..217...32B}. It is worth noting, however, that non-stellar emission (hot dust, polycyclic aromatic hydrocarbons, or asymptotic giant and red super-giant stars) can contaminate the 3.6~$\mu$m flux \citep[][]{2014ApJ...788..144M,2015ApJS..219....5Q}. This can cast doubt on whether there is actually an underlying bar pattern in the old stellar populations of some late-type galaxies, i.e., whether self-gravity alone can make SF clumps align without the presence of the $x_1$ orbits that characterize bars. While this might be the case for some bars, we argue that this does not explain the general picture for bars in late-type galaxies \citep[see discussion and observational characterization of bars in late-type galaxies in, e.g.,][]{2016A&A...596A..84D}. Non-stellar contaminants could also contribute to the greater FUV and NUV emission in strong bars (Sect.~\ref{bar_uv_stack}), as seen in bar stacks comprising late-type galaxies.
In other words, the visual identification of strong bars in 3.6~$\mu$m images of clumpy gas-rich galaxies can be biased due to the contribution of non-stellar emission at the bar region \citep[for the analysis of the impact of non-stellar contaminants on bar forcing, see Appendix C in][]{2016A&A...587A.160D}. We also checked that our assignment of SF classes is not affected by the resolution of the employed imaging: the distribution of SF classes is uncorrelated with the sizes of bars in pixels. \subsection{Gas inflow slowed down at the 1/4 ultraharmonic resonance} \label{inner_ring_disc} Many rings present recent SF and host young stars \citep[][]{1993AJ....105.1344B,1995ApJ...454..623K,2009A&A...501..207J}. However, rings lacking SF activity have also been found \citep[e.g.,][]{1991AJ....102.1715B,2013A&A...555L...4C}. \citet[][]{2010AJ....139.2465G} used a sample of 44 galaxies (26 non-barred or weakly barred, and 18 strongly barred) to show that the SFR within rings does not depend on the amplitude of the non-axisymmetric perturbation strength. More recently, \citet[][]{2019A&A...627A..26N} report a correlation between the lack of SF in a bar and the presence of an inner ring. This suggests that gas is caught in inner resonance rings \citep[that tend to live at the 1/4 ultraharmonic resonance; e.g.,][]{1993RPPh...56..173S,2000A&A...362..465R,2019A&A...625A.146D} and is prevented from funneling towards the center; this hypothesis is tested and discussed here. \citet[][]{2015ApJS..217...32B} identified 73 galaxies with inner rings and 268 with inner pseudo-rings (i.e., made of tightly wrapped spiral arms) in our sample of barred galaxies. Of the 73 galaxies with closed inner rings and available imaging (Sect.~\ref{class_met}), only $20.5 \pm 6.1\%$ and $18.5 \pm 4.8\%$ belong to class C (SF-bar) in the H$\alpha$ and FUV samples, respectively, which is much lower than the overall frequency of SF class C ($\sim 50 \%$) of the parent sample. The lower fraction of star-forming bars in galaxies with inner rings is qualitatively in agreement with the reports by \citet[][]{2019A&A...627A..26N}, whose subsample of four inner-ringed galaxies hosts non-star-forming bars. This can, however, be a consequence of inner rings living in massive galaxies \citep[e.g.,][]{2019A&A...625A.146D}, where the suppression of SF in bars is greatest (Sect.~\ref{mstar_SF}). \citet[][]{2020MNRAS.495.4158F} also report that inner rings in H$\alpha$ maps are mainly detected in galaxies with total stellar masses higher than $10^{10}M_{\odot}$, which is not surprising as this is the $M_{\ast}$-threshold where the fraction of rings (as detected at near-IR wavelengths) starts to rise \citep[e.g.,][]{2015A&A...582A..86H,2019A&A...625A.146D}. On the other hand, we checked and confirmed that the inclusion of pseudo-rings makes the connection between SF in bars and the presence of inner rings less clear: the fraction of inner-ringed galaxies with SF class C becomes $43.0 \pm 3.4 \%$ (H$\alpha$) and $42.6 \pm 2.8 \%$ (FUV); this may be explained by the fact that many pseudo-rings do not have a resonance origin (and hence trap less gas) or are hosted by late-type galaxies (with SF bars). In addition, part of the picture above is also consistent with our findings in Sect.~\ref{1-Dstacks}, in which we studied the mean UV radial profiles for galaxies hosting inner rings, normalized to the ring SMA. The radial distribution of SF relative to the inner ring loci is similar for barred and non-barred galaxies.
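The quoted fractions carry uncertainties that behave like standard binomial proportion errors; the arithmetic is shown in the short sketch below (Python). The counts $k$ and $N$ are hypothetical stand-ins chosen only because they reproduce the quoted $20.5 \pm 6.1\%$; they are not a restatement of the actual sample bookkeeping.
\begin{verbatim}
# Minimal sketch: fraction of galaxies in a given SF class with a
# binomial uncertainty, p +/- sqrt(p(1-p)/N). The counts below are
# hypothetical and only illustrate the arithmetic.
import math

def fraction_with_error(k, n):
    p = k / n
    return p, math.sqrt(p * (1.0 - p) / n)

p, dp = fraction_with_error(9, 44)
print(f"{100 * p:.1f} +/- {100 * dp:.1f} %")   # 20.5 +/- 6.1 %
\end{verbatim}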
We confirmed the inner dip in the UV emission for the subsample of barred galaxies hosting inner rings. Interestingly, this lack of UV emission is also detected in non-barred ringed galaxies, which are not expected to have their SF strongly suppressed in their central regions (as shown in Figs.~\ref{Fig_mass_bars_NUV_FUV_1Dstack_bars_separated}~and~\ref{Fig_mass_bars_NUV_1Dstack_bars_separated}). That is, SF is on average suppressed at radii smaller than the inner ring SMA, irrespective of the presence of a bar. Passive rings (i.e., lacking SF) are found only in early-type disk galaxies ($-3 \le T \le 2$), with a large fraction corresponding to ringlenses ($30-40 \%$) \citep[][]{2013A&A...555L...4C}. We updated the classifications by \citet[][]{2013A&A...555L...4C} (Sect.~\ref{class_met}) and confirmed his results (Fig.~\ref{appendixrings} in Appendix~\ref{inner_rings_appendix}) by including the new H$\alpha$ images of barred galaxies from Sect.~\ref{compilations_halpha}, showing that passive rings are mainly hosted by lenticular galaxies, in which the fraction of active rings is $\lesssim 50 \%$. Naturally, this is a consequence of passive rings being hosted by galaxies with low relative contents of H{\sc\,i} gas, as shown in Fig.~\ref{appendixrings2}. Curiously enough, there are a number of late-type galaxies ($T\ge5$) hosting passive rings as well, two in FUV (NGC$\,$3389 and NGC$\,$3906) and six in H$\alpha$ (NGC$\,$3906, NGC$\,$4504, NGC$\,$7437, UGC$\,$04867, UGC$\,$09245, and UGC$\,$10791). It is also interesting that 16 barred galaxies in our sample host inner rings that are passive in H$\alpha$ but not in FUV. As discussed in \citet[][]{2013A&A...555L...4C}, it is possible to infer quenching timescales on the order of 20-100 Myr from rings presenting FUV emission (tracing SF up to 100 Myr), but not H$\alpha$ (tracing SF up to 20 Myr) \citep[][]{1998ARA&A..36..189K}. In particular, \citet[][]{2013A&A...555L...4C} estimated 200 Myr to be a lower bound for the dissolution timescale of inner rings (on the order of one orbital period at the ring SMA). We conclude that the gas funneled by non-axisymmetries, such as spiral arms, gets partially trapped at the inner rings. The gas no longer migrates to the nuclear regions, explaining the diminished UV and H$\alpha$ emission within the rings' SMA and along the bar. Nevertheless, the fact that a peak of UV and H$\alpha$ emission is still detected in the circumnuclear regions implies that the presence of inner rings does not control circumnuclear SF, which is nourished by gas reservoirs accumulated over several hundreds of Myr. \section{Summary and conclusions}\label{summarysection} The main goals of this study are to shed light on the role of galactic bars in regulating the SF activity across disks, and to link the distribution of SF in bars to the global properties of the host galaxies. With unprecedented statistical significance, we studied the spatial distribution of SF regions in the inner parts of more than 800 nearby disk galaxies (within $\sim$40 Mpc) with inclinations lower than 65$^{\circ}$, drawn from the S$^4$G survey \citep[][]{2010PASP..122.1397S}. Two complementary methods were used: \begin{enumerate} \item We applied the stacking techniques developed in \citet[][]{2016A&A...596A..84D} to GALEX NUV and FUV imaging from the GALEX/S$^{4}$G Surface Brightness and Color Profiles Catalog \citep[][]{2018ApJS..234...18B}.
Prior to averaging, subsamples were defined based on global physical properties such as total stellar mass ($M_{\ast}$), Hubble stage ($T$), and morphological family. \begin{enumerate} \item Bar stacks (2D) were built from co-added UV images (Figs.~\ref{Fig_ttype_bars_NUV}~and~\ref{Fig_mass_bars_NUV}) that were uniformly scaled and re-oriented with respect to the stellar bars, using bar parameters at 3.6 $\mu$m from \citet[][]{2015A&A...582A..86H} and \citet[][]{2016A&A...587A.160D}. The winding direction of the spiral arms was also systematically corrected to differentiate the leading and trailing sides of the bar. \item UV luminosity profiles were scaled to a common framework defined by the extent of the disks in physical units (and that of the sizes of inner rings), followed by the calculation of the radial 1D average and dispersion (Figs.~\ref{Fig_mass_bars_NUV_FUV_1Dstack}, \ref{Fig_mass_bars_FUV_1Dstack_disp}, and \ref{Fig_inner_rings_SMA}), so that we could study differences in SF between barred and non-barred galaxies (Fig.~\ref{Fig_mass_bars_NUV_FUV_1Dstack_bars_separated}). \end{enumerate} \item We classified the spatial distribution of SF regions by visually inspecting H$\alpha$ and GALEX FUV images. Our classification system comprises three main categories (Table~\ref{sf_system}), namely: \begin{itemize} \item SF class A): only circumnuclear SF (accounting for $\sim 1/8$ of the galaxies in our sample) (Fig.~\ref{Fig_classA}), \item SF class B): SF at the bar ends, but not along the bar ($\sim 1/4$ of the sample) (Fig.~\ref{Fig_classB}), \item SF class C): SF along the bar ($\sim 1/2$ of the sample) (Fig.~\ref{Fig_classC}). \end{itemize} For this purpose, we assembled the largest compilation of continuum-subtracted H$\alpha$ images in the S$^4$G, comprising 433 galaxies (see Table~\ref{table_SF_class_sources}), and made them publicly available (via CDS). For 70 galaxies, we performed the continuum subtraction ourselves using archival imaging and integral field unit datacubes (e.g., from the CALIFA and ESO archives). \end{enumerate} The main results of this paper are the following: \begin{itemize} \item Among massive galaxies with $M_{\ast}>10^{10}M_{\odot}$ (typically S0/a-Sbc), barred galaxies are characterized by a dip in the radial distribution of SF that is not seen in non-barred systems \citep[see also][]{2009A&A...501..207J} (Figs.~\ref{Fig_mass_bars_NUV_FUV_1Dstack},~\ref{Fig_mass_bars_NUV_FUV_1Dstack_bars_separated},~and~\ref{Fig_mass_bars_NUV_1Dstack_bars_separated}). This shows that bars are loci of SF suppression, quite plausibly because of the combined effect of gas flows and shear \citep[e.g.,][]{2015MNRAS.454.3299R}. \item The UV emission traces the stellar bars and mainly appears on the leading side in the bar stacks of spiral galaxies (S0a-Sdm) (Fig.~\ref{Fig_ttype_bars_NUV}). This is in agreement with the expectation from numerical models \citep[e.g.,][]{1992MNRAS.259..345A}. \item By studying individual galaxies we show that the distributions of SF classes A, B, and C are significantly different in the Hubble sequence. Whether a bar is star-forming or passive is likewise linked to global physical properties of the host galaxies (Figs.~\ref{THUBBLE}, \ref{MSTAR}, \ref{MHIMSTAR}, and \ref{QB}).
\item In particular, massive, gas-poor, S0 galaxies tend to host SF exclusively in the circumnuclear regions (category A), which is probably linked to the role of bars in galaxy quenching postulated from studies at high-$z$ \citep[e.g.,][]{2015A&A...580A.116G} and simulations \citep[e.g.,][]{2018A&A...609A..60K}. \item The SF in late-type galaxies (Sc-Im) is evenly distributed along the bar major-axis. The UV emission is on average higher at all bar radii among strong bars, relative to their weakly barred counterparts (Figs.~\ref{Fig_family_bars_UV_1D}~and~\ref{Fig_family_bars_NUV_1D}). The fraction of star-forming bars (class C) is larger for later morphological types, larger H{\sc\,i} gas fractions, and higher tangential-to-radial force ratios, at both H$\alpha$ and UV wavelengths. We argue that shear has the smallest effect in these late-type galaxies, favoring SF \citep[e.g.,][]{2005MNRAS.361L..20S}. \item The SF activity dominates at the bar ends and the circumnuclear regions in bar stacks comprising galaxies of morphological types ranging between S0/a and Sbc (Fig.~\ref{Fig_ttype_bars_NUV_1D}). The UV emission gets weaker, relative to the outer exponential disk, in the intermediate parts of the bar. We confirm that SF class B) is typical of early- and intermediate-type spirals (Fig.~\ref{THUBBLE}), with distributions of gas fraction (Fig.~\ref{MHIMSTAR}) and torque parameter (Fig.~\ref{QB}) that peak between those of classes A and C, likely due to a higher shear in galaxies with larger central mass concentrations and bar amplitudes. \item Strongly barred early-type spiral galaxies are characterized by a $\sim$0.5 mag brighter central UV emission (Figs.~\ref{Fig_family_bars_UV_1D}~and~\ref{Fig_family_bars_NUV_1D}) (i.e., $\gtrsim 50 \%$ larger $\Sigma_{\rm SFR}$), compared to their weakly barred counterparts (which show a somewhat higher UV emission in the middle and end parts of the bar). These observations can be explained by the effect of the bar-induced gravitational torques sweeping the gas in the disk that eventually fuels starbursts in the central regions \citep[e.g.,][and references therein]{1993RPPh...56..173S}. \item In galaxies hosting inner rings, the mean radial UV luminosity profiles are similar for barred and non-barred galaxies. They show a local maximum close to the ring SMA, diminished UV emission in the region 0.3-0.7 SMA, and a nuclear peak (Fig.~\ref{Fig_inner_rings_SMA}). This can be explained by gas being partially trapped at the 1/4 ultraharmonic resonance \citep[][]{1984MNRAS.209...93S,1996FCPh...17...95B}, causing a halt in its migration to the nuclear regions, irrespective of the presence of a bar. \item The latter is further supported by the low fraction ($<1/3$) of galaxies with closed inner rings belonging to class C. We also confirm that, while most inner rings detected at 3.6 $\mu$m are active in FUV and H$\alpha$ passbands, the frequency of passive rings is highest among S0s, accounting for $> 50 \%$ \citep[][]{2013A&A...555L...4C}. \end{itemize} This work highlights the connection between bars and SF activity in the central parts of local disk galaxies, using an unprecedentedly large unbiased sample based on the analysis of GALEX UV and continuum-subtracted H$\alpha$ imaging. Differences in the typical spatial distribution of SF in galactic bars are dependent on physical and morphological global properties of the host galaxies. We encourage further investigation of these trends with numerical models.
\begin{acknowledgements} We thank the anonymous referee for comments that improved this paper. We thank Stéphane Courteau, Jesús Falcón-Barroso, Estrella Florido, Ryan Leaman, Ute Lisenfeld, Isabel Pérez, Glenn van de Ven, Simon Verley, and Almudena Zurita for useful discussions. We thank Serafim Kaisin for providing continuum-subtracted H$\alpha$ images for the galaxies NGC$\,$3384, UGC$\,$07257, and UGC$\,$07534. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sk$\l$odowska-Curie grant agreement No 893673. SDG acknowledges support from the Spanish Public Employment Service (SEPE). We acknowledge financial support from the European Union's Horizon 2020 research and innovation programme under Marie Sk$\l$odowska-Curie grant agreement No 721463 to the SUNDIAL ITN network, from the State Research Agency (AEI-MCINN) of the Spanish Ministry of Science and Innovation under the grant "The structure and evolution of galaxies and their central regions" with reference PID2019-105602GB-I00/10.13039/501100011033, and from the IAC project P/300724 which is financed by the Ministry of Science and Innovation, through the State Budget and by the Canary Islands Department of Economy, Knowledge and Employment, through the Regional Budget of the Autonomous Community. AYKB acknowledges financial support from the Spanish Ministry of Economy and Competitiveness (MINECO), project Estallidos AYA2016-79724-C4-2-P. This research makes use of IDL (\href{https://www.harrisgeospatial.com/docs/using_idl_home.html}{https://www.harrisgeospatial.com/docs/using$\_$idl$\_$home.html}), python (\href{http://www.python.org}{http://www.python.org}), Matplotlib \citep[][]{Hunter2007}, and Astropy \citep[][]{2013A&A...558A..33A,2018AJ....156..123A}. {\it Facilities}: GALEX, \emph{Spitzer} (IRAC). \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Interest in nanoscale robotic systems has led researchers to investigate different frameworks to initiate reliable communication between nanomachines. One solution is molecular communication, a nature-inspired paradigm that entails utilizing chemical signals as carriers of information. The transmitter of this diffusion-based channel releases particles into an aqueous or gaseous medium, where the particles propagate until they arrive at the receiver; the receiver then detects and decodes the information in these particles~\cite{farsad2016comprehensive,srinivas2012molecular,pierobon2011diffusion}. As another solution, the emergence of plasmonic nanoantennas has paved the way towards electromagnetic (EM) communication among nanodevices, where both the Terahertz (THz) band~\cite{7955066,7086348,elayan2017photothermal,elayan2018end} and the optical frequency range~\cite{johari2018nanoscale} are possible candidates. Specifically, in-vivo wireless nanosensor networks (iWNSNs) have emerged to provide fast and accurate disease diagnosis and treatment. These networks are expected to operate inside the human body in real time while establishing reliable wireless transmission among nanobiosensors~\cite{shubair2015vivo}. One active research topic within molecular communications involves establishing interfaces to connect the molecular paradigm with its external environment~\cite{kisseleff2017magnetic, 8467351, liu2017using, krishnan2018wireless}. The authors in~\cite{kisseleff2017magnetic} proposed a wearable magnetic nanoparticle detector to be used as an interface between a molecular communication system deployed inside the human body and a signal processing unit located outside. In~\cite{8467351}, the authors presented a biological signal conversion interface which translates an optical signal into a chemical one by changing the pH of the environment. Moreover, a redox-based experimental platform has been introduced in~\cite{liu2017using} to span the electrical and molecular domains. This wet-lab coupling paves the way towards a novel generation of bio-electronic components that serve as the basis of intelligent drugs, capable of biochemical and electrical computation and actuation. Furthermore, in a very recent work, the authors in~\cite{krishnan2018wireless} identified genes that control cellular function upon responding to EM fields that penetrate deep tissue non-invasively. Their experimental results complement the growing arsenal of technologies dedicated to the external control of cellular activity in-vivo. Among the biological structures found in the human body, protein molecules are heterogeneous chains of amino acids; they perform their biological function by coiling and folding into a distinct three-dimensional shape as required. Changes in protein level, protein localization, protein activity, and protein-protein interactions are critical aspects of an inter-cellular communication process collectively known as {\em signal transduction}. One important feature associated with protein structures is that their vibrational modes are found in the THz frequency range~\cite{turton2014terahertz}. These modes provide information about protein conformational change, ligand binding and oxidation state~\cite{knab2006hydration}.
Therefore, by triggering protein vibrational modes using THz EM waves, we can direct mechanical signaling inside protein molecules, in turn controlling changes in their structure and, as a result, activating associated biochemical events~\cite{matellan2018no}. In this work, we bridge the gap between EM (specifically, THz radiation) and molecular communication: we consider a communication link which consists of a nanoantenna transmitter, a protein receiver and a Markovian signal transduction channel. We are especially interested in the process at the receiving end of signal transduction, where a protein changes conformation due to the induced THz signal. Since this problem can be thought of fundamentally as an information transmission problem, our aim in this paper is to compute the mutual information of this communication link. In fact, gaining a detailed understanding of the input-output relationship in biological systems requires quantitative measures that capture the interdependence between components. Hence, a closed-form expression for the mutual information rate under independent, identically distributed (IID) inputs is derived and maximized to find the capacity for different protein interaction scenarios. The mutual information rate guides experimenters as to the amount of information the protein signaling pathway can carry. The main contributions of the paper are as follows:\begin{itemize} \item We model the stochastic protein dynamics actuated through THz waves as a discrete-time, finite-state channel. We present both a two-state and a multi-state model to emulate protein dynamics. In the two-state model, a change in the protein state is triggered through the applied nanoantenna THz force. In the multi-state model, a cascade of changes in the protein configuration is stimulated, where links between different protein states are controlled through the targeted application of THz force. \item We analytically derive the mutual information and compute the capacity under different constraints for the two-state and multi-state protein models. The achieved theoretical rates indicate the existence of a ubiquitous mechanism for information transmission between the nanoantenna and the protein with a clear physical significance. \end{itemize} Biological systems can be generally modelled with microstates; these could refer to the covalently modified state, conformational state, cellular location state, etc. Each of these states defines a certain attribute related to either the protein structure or function~\cite{duan2002describing}. In our work, the biological meaning of state refers to the conformational state, which we consider as either Unfolded or Folded for the two-state model. In the case of the multi-state model, we refer to multiple intermediate states. An example is the photoactive membrane protein, \textit{Bacteriorhodopsin}. The cycle of this protein consists of several states including a resting state and a series of photo-intermediate states, each of which is associated with a conformational change~\cite{markelz2008terahertz}. The transition between protein states regulates biological processes, including cell signaling. Thus, the methodology presented in this work sheds light on various opportunities that impact applications concerning drug discovery, biosensing, as well as disease control and prevention. The rest of the paper is organized as follows. In Sec.~\ref{Sec2}, the system model of the stimulated protein signal transduction pathway is presented.
In Sec.~\ref{Sec3}, a communication system based on Markov finite-states is developed to capture protein dynamics. In Sec.~\ref{Sec4}, a two-state protein model is formulated. The model is further extended and generalized to take into account multi-state protein interactions in Sec.~\ref{Sec5}. In Sec.~\ref{Sec7}, the numerical results of the models are illustrated while providing a clear physical insight. Finally, we draw our conclusions in Sec.~\ref{Sec8}. \section{System Model} \label{Sec2} \subsection{The Physical Process} Living cells communicate with each other through a series of biochemical interactions referred to as signal transduction networks. A molecular process referred to as mechanotransduction governs the transmission of mechanical signals from the extracellular matrix to the nucleus~\cite{martino2018cellular}. Proteins, which are considered major drivers of signal transduction, display a status change in response to mechanical stimulation. In our work, we consider a mechanotransduction communication channel, composed of a nanoantenna transmitter and a protein receiver. We assume that the nanoantenna is tuned to a specific frequency depending on the protein type. As such, the interaction between the nanoantenna and the protein gives rise to a mechanical response~\cite{matellan2018no}. According to structural mechanics, if an external harmonic excitation has a frequency which matches one of the natural frequencies of the system, then resonance occurs, and the vibrational amplitude increases~\cite{bassani2017terahertz}. This is the case with protein molecules as the value of their vibrational frequency is given as~\cite{carpinteri2017terahertz} \begin{equation} f_{protein}\approx\frac{1}{2\pi}\sqrt\frac{\kappa}{m}. \end{equation} Here, $\kappa$ and $m$ are the stiffness and the mass of the protein molecule, respectively. On average, proteins have a stiffness of $10^2$ Nm$^{-1}$ and a mass of $10^{-24}$ kg, yielding a vibrational frequency on the order of $10^{12}$~Hz, thereby matching THz nanoantenna frequencies~\cite{jornet2013graphene}. The capability to predict collective structural vibrational modes at THz frequencies has long attracted the research community. This interest has been fortified by the development of THz spectroscopic techniques used to investigate the response of biomolecules~\cite{xie2014application}. In particular, vibrations can be dipole active, and thus probed using THz dielectric spectroscopy. The detected molecular motions in the picosecond range correspond to collective vibrational modes or very fast conformational changes. An extensive review by Markelz explores measurements of the THz dielectric response of biomolecules, where the author concludes that the response is highly sensitive to hydration, temperature, binding and conformational change~\cite{markelz2008terahertz}. The investigated dielectric response of proteins includes both a relaxational response from the amino acid side chains along with a vibrational response from the correlated motions of the protein structure~\cite{knab2006hydration,son2014terahertz}. The authors in~\cite{carpinteri2017terahertz} associate such a vibrational phenomenon with the mechanical behavior of proteins, which act as oscillating structures in response to THz radiation. The induced electro-chemical force allows the identification of relevant resonant frequencies, which may enable a conceptual interpretation of the protein biological function.
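The order-of-magnitude estimate above is straightforward to verify numerically; the sketch below (Python) evaluates the spring-like frequency formula with the quoted average stiffness and mass, and should be read as an illustration rather than a statement about any specific protein.
\begin{verbatim}
# Minimal sketch: protein vibrational frequency from the
# spring-like estimate f = (1/(2*pi)) * sqrt(kappa/m), using the
# order-of-magnitude stiffness and mass quoted in the text.
import math

kappa = 1.0e2     # N/m, average protein stiffness
m     = 1.0e-24   # kg, average protein mass

f = math.sqrt(kappa / m) / (2.0 * math.pi)
print(f"f ~ {f:.2e} Hz")   # ~1.6e12 Hz, i.e., in the THz band
\end{verbatim}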
These frequencies, which range from hundreds of GHz to tens of THz, can be mathematically captured using modal analysis. For instance, in lysozyme, a highly delocalized hinge-bending mode that opens and closes the binding cleft was found by normal mode calculations~\cite{brooks1985normal}. In addition, measurements of chlorophyll proteins showed an increase in the THz absorbance with denaturing, which arises due to the rotational motion of the protein side chains~\cite{hua2007investigation}. Further, measurements reported in~\cite{turton2014terahertz} on lysozyme proteins showed sharp vibrational peaks at 1.15 and 2.80 THz. In addition, other measurements provided in~\cite{nicolai2016fingerprints} showed that the Hsp70 protein, referred to as a molecular chaperone, possessed distinct spectra for protein states at sub-THz frequencies. These measurements indicate that a nanoantenna can selectively target the vibrational mode of the protein related to either folding or unfolding and induce a conformational change. In fact, in~\cite{balu2008terahertz}, the authors provide a description of the modes of three proteins, namely, Rhodopsin, Bacteriorhodopsin and the D96N bacteriorhodopsin mutant. This gives an indication of the selectivity of these vibrational modes, showcasing the capability to single out proteins with some degree of accuracy. In addition to initiating information flow by inducing folding behavior, stimulating proteins by EM waves may provide knowledge of the misfolded protein structure. This potentially makes possible future efforts to rationally design drugs that prevent misfolding events along with the evolution of certain conditions and diseases. \subsection{Boltzmann Distribution} Signaling inside proteins results in a spring-like effect which shifts their minimum energy~\cite{orr2006mechanisms}. Protein structures are therefore investigated using energy functions, where they obey statistical laws based on the Boltzmann distribution. On the one hand, the energy levels of EM waves in the THz frequency band are very low, corresponding to 1-12 meV~\cite{siegel2004terahertz,saeedkia2013handbook}. These values correspond to energies on the order of $10^{-21}$ Joules. Since the energy expended equals force $\times$ distance, and protein conformational changes are measured in nanometers~\cite{howard2001mechanics}, this yields forces in the piconewton range. On the other hand, this energy scale conforms to the energies required for ATP hydrolysis, ranging from $1$ $k_{b}T$ to $25$ $k_{b}T$ (here, $k_b$ is Boltzmann's constant and $T$ the temperature in Kelvin; $1$ $k_{b}T$ at $300$ K $\approx 4 \times10^{-21}$ J)~\cite{howard2001mechanics}. Thereby, utilizing a THz force to drive protein activity and a controlled molecular response is compatible with intra-body energetics. The protein conformational change from one state to another mimics a stretch-activated channel. Based on statistical mechanics, the Boltzmann distribution provides the probability that a system will be in a certain state as a function of the state's energy and the system temperature.
The probability of the protein existing in a certain state $i$ is \begin{equation} P_i=\frac{1}{Z} \exp \left[ \frac{-E_i}{k_bT} \right], \label{eq:general1} \end{equation} where $E_i$ is the Gibbs free energy of the state and $Z$ is a normalization factor which results from the constraint that the probabilities of all accessible states must add up to one, i.e., the normalization factor is given by \begin{equation} Z=\sum_{i=1}^{M}\exp \left[ \frac{-E_i}{k_bT} \right], \label{eq:general2} \end{equation} where $M$ is the number of states accessible to the protein network. In our model, the Boltzmann distribution is altered to take into account the nanoantenna THz force. By applying an external force, $F$, the average position of the mechanotransduction channel is shifted, thereby impacting the state probability of the protein. This relation can be seen when finding the energy difference between states, given as \begin{equation} \Delta E=\Delta E^0_{ij}-F \Delta\ell, \label{eq:energyf} \end{equation} where $\Delta E_{ij}^0= E_i-E_j $ is the difference in Gibbs free energy between initial state $i$ and final state $j$. $\Delta\ell$ denotes the change in the protein length, which corresponds to a conformational change in the protein structure requiring work $\phi(F) = F \Delta\ell$. Gibbs free energy expresses the thermodynamic energy reflecting the chemical potential between interacting proteins~\cite{rietman2016thermodynamic}. In fact, upon a change in the concentration of one molecular species, the reactions in which this molecular species participates are affected. Hence, a change in one protein concentration will percolate through the network, changing its energy. The final result represents a perturbation in the network leading to changes in the energetic landscape, or Gibbs energy, of the molecule~\cite{rietman2017personalized}. If the protein is subject to a force, a natural reaction coordinate is the length of the protein in the direction of the force, and the total energy difference is given in~\eqref{eq:energyf}. \subsection{Stochastic Model of Protein Folding} To model the stochasticity of proteins involved upon triggering them by a THz force, we use the kinetic master equation at the single protein level since it captures the chemical kinetics of the receptor~\cite{higham2008modeling}. Such an approach is similar to the ones presented in~\cite{eckford2015information,eckford2016finite,eckford2018channel}. A transition rate matrix $R$ describes the rate at which a continuous-time Markov chain moves between states. Elements $r_{ij}$ (for $i \neq j$) of matrix $R$ denote the rate of departing from state $i$ and arriving in state $j$. Diagonal elements $r_{ii}$ are defined such that each row of $R$ sums to zero, i.e., \begin{equation} r_{ii}= -\sum _{j\neq i} r_{ij}. \end{equation} In addition, the probability vector, $\mathbf{p}(t)$, as a function of time $t$ satisfies the transition rates via the differential equation \begin{equation} \frac{d\mathbf{p}(t)}{dt}=\mathbf{p}(t)R. \label{eq:master_v2} \end{equation}To represent the protein change of state as a discrete-time Markov chain, we discretize the time into steps of length $\Delta t$. As such, the master equation provided in~\eqref{eq:master_v2} becomes \begin{equation} \frac{d \mathbf{p}(t)}{dt}= \mathbf{p}(t) R = \frac{ \mathbf{p}(t+ \Delta t)- \mathbf{p}(t)}{\Delta t}+o(\Delta t).
\label{eq:discretization} \end{equation}We neglect the terms of order $o(\Delta t)$ and manipulate~\eqref{eq:discretization} to have\begin{equation} \begin{aligned} \mathbf{p}(t+ \Delta t) &= \Delta t \mathbf{p}(t)R+ \mathbf{p}(t)= \mathbf{p}(t)(I+ \Delta tR), \label{eq:8} \end{aligned} \end{equation} where $I$ is the identity matrix. If we denote $\mathbf{p}_{i}= \mathbf{p}(i\Delta t)$, we arrive at a discrete-time approximation to~\eqref{eq:8} as \begin{equation} \mathbf{p}_{i+1}= \mathbf{p}_{i}(I+\Delta tR). \end{equation} Thus, we obtain a discrete-time Markov chain with a transition probability matrix $Q$ given as \begin{equation} Q=I+\Delta t R. \label{eq:matrixQ} \end{equation} \section{Protein Conformational Interaction as a Communication System} \label{Sec3} We now discuss how induced protein interactions can be described as information-theoretic communication systems: that is, in terms of input, output, and conditional input-output probability mass function (PMF). The channel input is the nanoantenna force transmitted to the protein receptor: at the interface between the receptor and the environment, the receptor is sensitive to the induced force, undergoing changes in configuration as force is applied. The channel output is the state of the protein. A Markov transition PMF dictates the input-output relationship since the protein state depends on both the current input and the previous state. This relationship is given as \begin{equation} p_{\mathbf{Y}|\mathbf{X}}(\mathbf{y}|\mathbf{x})=\prod_{i=1}^{n}\ p_{\mathbf Y_{i}| \mathbf X_{i},\mathbf Y_{i-1}}(y_{i}|x_{i},y_{i-1}), \label{eq:cond1} \end{equation}where $p_{\mathbf Y_{i}|\mathbf X_{i}, \mathbf Y_{i-1}}(y_{i}|x_{i},y_{i-1})$ is provided according to the appropriate entry in matrix $Q$ given in~\eqref{eq:matrixQ} and $n$ is the fixed channel length. For any communication system with inputs $\mathbf{x}$ and outputs $\mathbf{y}$, the mutual information, $\mathcal{I}(\mathbf{X};\mathbf{Y})$, provides the maximum information rate that may be transmitted reliably over the channel for a given input distribution. Maximizing this mutual information over the input distribution provides the channel capacity. This analysis is important in order to identify the maximum rate at which a protein can receive information and, thereby, to assess the impact of the THz force on communication. For tractability, we restrict inputs to the set of IID input distributions, where $p_{\mathbf{X}}(\mathbf{x})=\prod_{i=1}^{n}p_{\mathbf{X}}(x_i)$. The authors in~\cite{thomas2016capacity} showed that the IID input distribution was capacity achieving (i.e., achieves the maximum rate) for two-state intensity-driven Markov chains. The protein state $\mathbf{y}$ forms a time-homogeneous Markov chain given as \begin{equation} p_{\mathbf{Y}}(\mathbf{y})=\prod_{i=1}^{n} p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(y_{i}|y_{i-1}), \label{eq:marg1} \end{equation} where $y_0$ is null and \begin{equation} p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(y_i|y_{i-1})=\sum_{x_{i}}p_{\mathbf{Y}_{i}|\mathbf{X}_{i},\mathbf{Y}_{i-1}}(y_i|x_i,y_{i-1})p_{\mathbf{X}}(x_{i}). \label{eq:cond2} \end{equation} The mutual information can be written as \begin{equation} \begin{split} \mathcal{I}(\mathbf{X};\mathbf{Y})=\sum_{i=1}^{n} \sum_{{y_i}} \sum_{{y_{i-1}}} \sum_{x_i} p_{\mathbf Y_i, \mathbf X_i,\mathbf Y_{i-1}}(y_i,x_i,y_{i-1})\\ \log\frac{p_{\mathbf Y_i| \mathbf X_i,\mathbf Y_{i-1}}(y_i|x_i,y_{i-1})}{p_{\mathbf Y_i| \mathbf Y_{i-1}}(y_i|y_{i-1})}.
\label{eq:1} \end{split} \end{equation} Thereafter, the channel capacity is given as \begin{equation} C= \max_{p_{\mathbf{X}}(\mathbf{x})} \; \mathcal{I}(\mathbf{X};\mathbf{Y}). \end{equation} In our analysis, we deal with the input, $\mathbf{x}$, as either a discrete or a continuous parameter. We use the bisection method to compute the capacity for the discrete case and deploy the Blahut-Arimoto (BA) algorithm to find the capacity for the continuous scenario. In fact, given an input-output transition matrix, the classical BA algorithm is a general numerical method for computing the channel capacity~\cite{blahut1972computation}. The maximization of the mutual information is attained through an alternating maximization procedure that converges to the global maximum. A variation of the BA algorithm is the constrained BA method, which incorporates an average power constraint on the channel inputs. We provide several capacity measures with different constraints for the EM-triggered protein communication channel. Specifically, we derive the capacity per channel use and the capacity under an average energy constraint. Capacity per channel use is a suitable measure in applications involving targeted therapy or targeted drug delivery. The capacity with an average energy constraint is a useful measure for efficient intra-body communication, where both medium compatibility and safety metrics are accounted for as practical constraints. In each case, the optimum input distribution and the resulting maximized capacity measures are attained. \section{Two-State Protein Model} \label{Sec4} \begin{figure} \centering \includegraphics[width=0.2\textwidth]{model1.png} \footnotesize \caption{Two-state protein model represented by unfolded ($\mathbf{U}$) and folded ($\mathbf{F}$) Markov states.} \label{fig:model} \end{figure} \subsection{Mathematical Model} In our two-state model, the protein resembles a binary biological switch, represented using a finite-state Markov chain. The states of the protein depicted are the folded, $\mathbf{F}$, and unfolded, $\mathbf{U}$, as those govern the activation of biological processes and chemical interactions. The input to our mechanotransduction channel is the force induced by the nanoantenna, while the output is the state of the protein. In continuous time, the protein folding can be represented as a Poisson process, transitioning between $\mathbf{F}$ and $\mathbf{U}$. We let $p_{\mathbf{Y}}(t)=[p_{\mathbf{F}}(t), p_{\mathbf{U}}(t)]$ denote the time-varying vector of state occupancy probabilities. As demonstrated in Fig.~\ref{fig:model}, in this system, the transition rate from unfolded, $\mathbf{U}$, to folded, $\mathbf{F}$, is $\alpha$, while the transition rate from $\mathbf{F}$ to $\mathbf{U}$ is $\beta$. The latter transition is considered a relaxation process which returns the protein to the unfolded state. Such a process is independent of the excitation signal since protein folding is entropically unfavorable~\cite{anfinsen1973principles}. The main reason for a protein to fold is to acquire its function. The function implies a general architecture of the protein which has to be stable in time yet flexible enough to allow the biological process to occur. Therefore, the native state of a protein is not necessarily the most stable one.
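To make these dynamics concrete, the sketch below (Python) builds the discrete-time transition matrix $Q_1=I+\Delta t R_1$ of~\eqref{eq:matrixQ} for this two-state chain and checks the simulated occupancy of $\mathbf{F}$ against the steady-state value $\alpha/(\alpha+\beta)$ derived in the next subsection; the rates and step size are illustrative values, not measured protein parameters.
\begin{verbatim}
# Minimal sketch: discrete-time approximation Q = I + dt*R of the
# two-state chain (state 0 = U, state 1 = F). The rates alpha,
# beta and the step dt are illustrative; dt must be small enough
# to keep all entries of Q in [0, 1].
import numpy as np

alpha, beta, dt = 40.0, 10.0, 1.0e-3     # 1/s, 1/s, s
R = np.array([[-alpha, alpha],
              [  beta, -beta]])
Q = np.eye(2) + dt * R                   # Q = I + dt*R
assert np.allclose(Q.sum(axis=1), 1.0)   # rows are valid PMFs

# Simulate the protein state and compare the empirical occupancy
# of F with the steady state alpha/(alpha + beta).
rng = np.random.default_rng(0)
y, hits, steps = 0, 0, 50_000
for _ in range(steps):
    y = rng.choice(2, p=Q[y])
    hits += (y == 1)
print(hits / steps, alpha / (alpha + beta))
\end{verbatim}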
To model the two-state conformational change which captures the behavior of a protein, the normalization factor, provided in~\eqref{eq:general2}, is given by \begin{equation} Z=\exp\left[\frac{-E_{\mathbf{U}}}{k_{b}T}\right]+\exp\left[\frac{-E_\mathbf{F}}{k_{b}T}\right], \label{eq:normz} \end{equation} where $E_{\mathbf{U}}$ and $E_{\mathbf{F}}$ denote the Gibbs free energies associated with the unfolding and folding states, respectively. As such, the steady-state probability of the protein being in one state, the folded for example, can be found from~\eqref{eq:general1} and~\eqref{eq:normz} as \begin{equation} p_{\mathbf{Y}}(y=\mathbf{F})=\frac{1}{1+\exp\left[ \frac{\Delta E}{k_{b}T} \right]}. \label{eq:steady_state1} \end{equation} The transition rates controlling such a two-state interaction are given by the rate matrix $R_{1}$ as \begin{equation} R_{1}=\begin{bmatrix}-\alpha & \alpha \\ \beta & -\beta \\ \end{bmatrix}. \end{equation}From~\eqref{eq:matrixQ}, the transition probability matrix yields \begin{equation} Q_{1}=\begin{bmatrix}1-\alpha \Delta t & \alpha\Delta t \\ \beta \Delta t & 1-\beta\Delta t \end{bmatrix}. \label{eq:prob_matr} \end{equation} \subsection{Kinetic Detailed Balance} The steady-state probability is the eigenvector of the stochastic matrix, which can be found using the following relation \begin{equation} \mathbf{p}_{\mathbf{Y}}(\mathbf{y})Q =\mathbf{p}_{\mathbf{Y}}(\mathbf{y}). \label{eq:ss} \end{equation} Hence, for our two-state Markov model the steady states yield \begin{equation}\label{eq:cases} p_{\mathbf{Y}}(y)= \begin{cases} \frac{\alpha}{\alpha+\beta}, & y= \mathbf{F}\\ \frac{\beta}{\alpha+\beta},& y= \mathbf{U}. \end{cases} \end{equation} The relationship between $\alpha$ and $\beta$ can therefore be found by equating~\eqref{eq:steady_state1} and~\eqref{eq:cases} for $y= \mathbf{F}$, resulting in \begin{equation} {\beta}={\alpha}\, \exp\left( \frac{\Delta E}{k_{b}T} \right). \label{eq:relationship_alpha_beta} \end{equation} Equation~\eqref{eq:relationship_alpha_beta} satisfies the principle of detailed balance, which has been formulated for kinetic systems~\cite{coester1951principle}. Detailed balance ensures the compatibility of kinetic equations with the conditions for thermodynamic equilibrium. The rate constants pulling against an applied force resemble a biased random walk that allows the protein to perform work per unit step, i.e., $\phi(F)= F \Delta\ell$, in agreement with the second law of thermodynamics and as shown in~\eqref{eq:energyf}. Since the value of the energy, $\Delta E$, gets altered when the system is subject to an external force, the value of $\alpha$ (the forward transition rate) will also vary accordingly. As such, $\alpha$ can be divided into $\alpha_{\mathbf{NF}}$, the natural transition rate when no force is applied, and $\alpha_{\mathbf{AF}}$, the transition rate when a force is applied, resulting in an average folding probability. The values of $\alpha_{\mathbf{NF}}$ and $\beta$ for different proteins can be found from experimental studies available in the literature since protein folding is a naturally occurring phenomenon driven by the change in Gibbs energy~\cite{fisher1999study}.
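As a numerical illustration of these relations, the sketch below (Python) evaluates the folding probability~\eqref{eq:steady_state1} with $\Delta E$ shifted by an applied force as in~\eqref{eq:energyf}, together with the detailed-balance rate~\eqref{eq:relationship_alpha_beta}; the free-energy gap, length change, and folding rate are assumed, illustrative values.
\begin{verbatim}
# Minimal sketch: force-shifted folding equilibrium. Uses
# p_F = 1/(1 + exp(dE/kT)) with dE = dE0 - F*dl, and the
# detailed-balance rate beta = alpha*exp(dE/kT). All parameter
# values are illustrative assumptions.
import math

kT  = 4.0e-21        # J, k_b*T at ~300 K
dE0 = 2.0 * kT       # J, zero-force free-energy gap (assumed)
dl  = 1.0e-9         # m, conformational length change (~1 nm)
alpha = 40.0         # 1/s, folding transition rate (assumed)

def p_fold(F):
    dE = dE0 - F * dl
    return 1.0 / (1.0 + math.exp(dE / kT))

for F in (0.0, 4.0e-12, 8.0e-12):        # forces in the pN range
    dE = dE0 - F * dl
    beta = alpha * math.exp(dE / kT)
    print(f"F = {F*1e12:3.0f} pN  p_F = {p_fold(F):.3f}"
          f"  beta = {beta:7.2f} 1/s")
\end{verbatim}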
Therefore,~\eqref{eq:relationship_alpha_beta} can take two different forms depending on whether or not the system is subject to an external force, as follows \begin{numcases} {\beta=} {\alpha_{\mathbf{NF}}}\, \exp\left( \frac{\Delta E}{k_{b}T}\right),\,\,\, \Delta E= \Delta E_{ij}^{0} \label{eq:relationship_alpha_beta1} \\ \alpha_{\mathbf{AF}}\,\exp\left( \frac{\Delta E}{k_{b}T}\right),\,\,\, \Delta E= \Delta E_{ij}^{0}+ \phi(F) \label{eq:relationship_alpha_beta2} \end{numcases} Here, $\mathbf{NF}$ and $\mathbf{AF}$ correspond to No Force and Applied Force, respectively. \subsection{Capacity of Two-State Protein Conformation} \subsubsection{Discrete Case} Based on our developed model, we let $\mathbf{x}$ denote a binary input which stimulates the protein. This input is either induced by natural intra-body interactions with no external force or triggered by an applied THz nanoantenna force, so that $\mathbf{x}\in \left \{ \mathbf{NF}, \mathbf{AF}\right\}$. The channel output is the state of the protein, given as either unfolded or folded, where $\mathbf{y}\in \left\{ \mathbf{U}, \mathbf{F}\right\}$. We have, as a result, a discrete channel, where the inputs and outputs form vectors. In order to find the capacity, we follow the formulation presented in Sec.~\ref{Sec3}. Assuming the previous state of the protein is $y_{i-1}=\mathbf{U}$, we have\begin{equation} \begin{split} p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(\mathbf{F}|\mathbf{U})&=\sum_{x_{i}}p_{\mathbf{Y}_{i}|\mathbf{X}_{i},\mathbf{Y}_{i-1}}(\mathbf{F}|x_i,\mathbf{U})p_{\mathbf{X}}(x_{i})\\ &=p_{\mathbf{NF}}\alpha_{\mathbf{N}\mathbf{F}}+p_{\mathbf{AF}}\alpha_{\mathbf{A}\mathbf{F}}=\bar \alpha, \label{eq:alpha_bar} \end{split} \end{equation}and $p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(\mathbf{U}|\mathbf{U})=1-\bar \alpha$. Here, $\bar \alpha$ represents the average folding probability. On the other hand, if $y_{i-1}=\mathbf{F}$, \begin{equation} \begin{split} p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(\mathbf{U}|\mathbf{F})&=\sum_{x_{i}}p_{\mathbf{Y}_{i}|\mathbf{X}_{i},\mathbf{Y}_{i-1}}(\mathbf{U}|x_i,\mathbf{F})p_{\mathbf{X}}(x_{i}) \\ &=\beta, \end{split} \end{equation} and $p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(\mathbf{F}|\mathbf{F})=1-\beta$. The transition probability matrix provided in~\eqref{eq:prob_matr} can now be written as \begin{equation} \label{eq:sys_mat} \bar Q_{1}=\begin{bmatrix}1-\bar\alpha\Delta t & \bar\alpha\Delta t \\ \beta \Delta t & 1-\beta\Delta t \\ \end{bmatrix}. \end{equation} In addition, the steady-state probabilities given in~\eqref{eq:cases} are adjusted to take into account the average folding probability, $\bar\alpha$. The mutual information, $\mathcal{I}(\mathbf{X};\mathbf{Y})$, which was given in~\eqref{eq:1}, can also be represented as \begin{equation} \mathcal{I}(\mathbf{X};\mathbf{Y})=H(Y_i|Y_{i-1})-H(Y_i|X_i,Y_{i-1}), \label{eq:mutual_info3} \end{equation} for $i\in \{1,2,...,n\}$. To compute~\eqref{eq:mutual_info3}, we use the binary entropy function \begin{equation} \mathcal{H} (p)=-p\log p-(1-p)\log (1-p). \end{equation} Then, each term on the right-hand side of~\eqref{eq:mutual_info3} is dealt with separately. $H(Y_i|Y_{i-1})$ yields \begin{equation} \begin{split} &=p_{\mathbf{Y}}({\mathbf{U}})H(Y_i|Y_{i-1}=\mathbf{U})+p_{\mathbf{Y}}({\mathbf{F}})H(Y_i|Y_{i-1}=\mathbf{F})\\ &=\frac{\beta}{\bar\alpha+ \beta} \mathcal H(\bar\alpha) +\frac{\bar\alpha}{\bar\alpha+ \beta}\mathcal H(\beta).
\end{split} \end{equation} In a similar manner, $H(Y_i|X_i,Y_{i-1})$ results in \begin{equation} \begin{split} &=\sum_{x_i}p_\mathbf{X}(x_i)p_{\mathbf{Y}}(\mathbf{U})H(Y_i|X_i=x_i,Y_{i-1}=\mathbf{U}) \\ &+\sum_{x_i}p_\mathbf{X}(x_i)p_{\mathbf{Y}}(\mathbf{F})H(Y_i|X_i=x_i,Y_{i-1}=\mathbf{F})\\ &=\frac{\beta}{\bar\alpha+ \beta} \left( p_{\mathbf{NF}}\mathcal{H}(\alpha_{\mathbf{N}\mathbf{F}})+p_{\mathbf{AF}}\mathcal{H}(\alpha_{\mathbf{A}\mathbf{F}}) \right)+\frac{\bar\alpha}{\bar\alpha+ \beta}\mathcal{H}(\beta). \end{split} \end{equation} By substituting back into~\eqref{eq:mutual_info3}, the mutual information yields \begin{equation} \begin{aligned} \mathcal{I}(\mathbf{X};\mathbf{Y})&= \frac{\beta}{\bar\alpha+ \beta} \left( \mathcal H(\bar\alpha)-p_{\mathbf{NF}}\mathcal{H}(\alpha_{\mathbf{N}\mathbf{F}})- p_{\mathbf{AF}}\mathcal{H}(\alpha_{\mathbf{A}\mathbf{F}})\right)\\ &=\frac{\mathcal{H}(p_{\mathbf{NF}}\alpha_{\mathbf{N}\mathbf{F}}+p_{\mathbf{AF}}\alpha_{\mathbf{A}\mathbf{F}})-p_{\mathbf{NF}}\mathcal{H}(\alpha_{\mathbf{N}\mathbf{F}})- p_{\mathbf{AF}}\mathcal{H}(\alpha_{\mathbf{A}\mathbf{F}})}{1+\left( p_{\mathbf{NF}}\alpha_{\mathbf{N}\mathbf{F}}+p_{\mathbf{AF}}\alpha_{\mathbf{A}\mathbf{F}} \right)/\beta}. \end{aligned} \label{eq:final_eq} \end{equation} Finally, the capacity of the two-state model is found by maximizing~\eqref{eq:final_eq} with respect to the probability of the applied nanoantenna force as \begin{multline} C=\max _{p_\mathbf{AF}} \frac{\mathcal{H}(p_{\mathbf{NF}}\alpha_{\mathbf{N}\mathbf{F}}+p_{\mathbf{AF}}\alpha_{\mathbf{A}\mathbf{F}})}{1+\left( p_{\mathbf{NF}}\alpha_{\mathbf{N}\mathbf{F}}+p_{\mathbf{AF}}\alpha_{\mathbf{A}\mathbf{F}} \right)/\beta}\\ +\frac{-p_{\mathbf{NF}}\mathcal{H}(\alpha_{\mathbf{N}\mathbf{F}})- p_{\mathbf{AF}}\mathcal{H}(\alpha_{\mathbf{A}\mathbf{F}})}{1+\left( p_{\mathbf{NF}}\alpha_{\mathbf{N}\mathbf{F}}+p_{\mathbf{AF}}\alpha_{\mathbf{A}\mathbf{F}} \right)/\beta} \label{eq:capacity}. \end{multline} It is sufficient to maximize over $p_{\mathbf{AF}}$ since $p_{\mathbf{NF}}=1-p_{\mathbf{A}\mathbf{F}}$. \subsubsection{Continuous Case} In the previous part, we developed the model for the discrete case of a binary-input, binary-output system. Nonetheless, an in-depth picture of the capacity associated with protein conformational transitions is attained by applying a continuous input. By allowing the nanoantenna to transmit a continuum of force levels, the capacity versus applied force can be studied over a range of values. This is achieved by expanding $\bar\alpha$ in~\eqref{eq:alpha_bar} to become \begin{equation} \bar \alpha= \alpha_{\mathbf{N}\mathbf{F}}p_{\mathbf{N}\mathbf{F}}+\sum_{i=1}^{N-1} \alpha_{\mathbf{A}\mathbf{F}}(f_i)p_{\mathbf{A}\mathbf{F}}(f_i), \label{eq:alphabar} \end{equation} where $p_{\mathbf{AF}}(f_i)$ denotes the probability of applying a force, $f_i$, towards the protein. The dependency of $\alpha_{\mathbf{AF}}$ on the force factor has been demonstrated in~\eqref{eq:relationship_alpha_beta2}. We find the capacity for the two-state model under the constraint of a maximum applied force per channel use as \begin{equation} \begin{aligned} & \underset{p_{\mathbf{A}\mathbf{F}}}{\text{max} \,\,} & & \mathrm{\mathcal{I}(\mathbf{X};\mathbf{Y})} \\ & \text{subject to} & & 0\leq F_{applied}\leq \ F_{{max}}. \\ \end{aligned} \label{eq:sys_const1} \end{equation} Here, ${F}_{{max}}$ is the maximum nanoantenna applied force and ${p_\mathbf{AF}}$ is the probability vector of applied forces.
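Before turning to the solution of~\eqref{eq:sys_const1}, note that the discrete capacity in~\eqref{eq:capacity} can be evaluated directly, since the rate in~\eqref{eq:final_eq} is a concave function of the single scalar $p_{\mathbf{AF}}$. The sketch below (Python) maximizes it with a dense grid search standing in for the bisection method; the transition probabilities are illustrative per-step values, not measured ones.
\begin{verbatim}
# Minimal sketch: two-state capacity, maximizing the IID rate
#   I(p_AF) = [H(a_bar) - p_NF*H(a_NF) - p_AF*H(a_AF)]
#             / (1 + a_bar/beta),
# with a_bar = p_NF*a_NF + p_AF*a_AF, over p_AF by grid search.
# a_NF, a_AF and beta are assumed per-step probabilities.
import numpy as np

def H(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

a_nf, a_af, beta = 0.05, 0.60, 0.30

def rate(p_af):
    a_bar = (1 - p_af) * a_nf + p_af * a_af
    num = H(a_bar) - (1 - p_af) * H(a_nf) - p_af * H(a_af)
    return num / (1 + a_bar / beta)

grid = np.linspace(0.0, 1.0, 10_001)
vals = rate(grid)
k = int(np.argmax(vals))
print(f"C ~ {vals[k]:.4f} bits/use at p_AF ~ {grid[k]:.3f}")
\end{verbatim}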
The objective function in~\eqref{eq:sys_const1} is concave with respect to the input probability vector and the constraint is linear; hence, the optimization problem is concave. Therefore, the solution of the problem can be obtained using the BA algorithm. The algorithm begins with the transition probability matrix, initially defined in~\eqref{eq:sys_mat} but extended to take into account the $N$ maximum force samples, along with an arbitrary but valid choice for ${p_\mathbf{AF}}$. Since the mutual information in~\eqref{eq:sys_const1} is concave in terms of the input probability, the output of the algorithm is the optimal, capacity-achieving, input probability distribution, ${\hat p_\mathbf{AF}}$. \section{Multi-State Protein Model} \label{Sec5} \subsection{Mathematical Model} Successive events occur inside a living cell through sequences of protein activations, and such signaling cascades are often illustrated by kinetic schemes. Although a node in a network is represented by a single protein, the protein itself may correspond to multiple gene products with many conformations, each slightly differing in sequence. Such differences allow a node to bind with hundreds of partners at different times and perform many essential biological functions~\cite{tsai1996protein}. In this section, we further extend the two-state protein conformation model to consider the transition between different protein configurations in order to more accurately resemble protein signaling pathways, especially when there are multiple folding routes from different starting points~\cite{graves1999protein}. As such, we generalize the two-state model presented previously to take into account multiple states. The selectivity attained by using THz signals allows us to target specific links in a given network in order to create controlled interactions. These macroscopic interactions resemble the creation or removal of edges between nodes in a graph~\cite{vishveshwara2002protein}. By targeting the THz force on specific locations of the protein molecule, distinct responses can be induced. We let $\mathbf{p}_{\mathbf{Y}}(t)=\left[ p_{y_{1}}(t), p_{{y_{2}}}(t),...., p_{y_{m+1}}(t) \right]$ be the probability vector accounting for $n=m+1$ states and $m$ links. In this case, the generalized rate matrix yields \begin{equation} R=\begin{bmatrix}-\alpha_1 & \alpha_1 & 0 & 0 & \cdots & \cdots \\ \beta_{1} & -(\beta_1+\alpha_2) & \alpha_2 & 0 & \cdots & \cdots \\ 0 & \beta_2 & -(\beta_2+\alpha_3) & \alpha_3 & \cdots & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ \vdots & \vdots & \vdots & \vdots & \beta_m & -\beta_m\\ \end{bmatrix}. \end{equation}Following the same formulation presented in~\eqref{eq:matrixQ}, the generalized probability matrix is given in~\eqref{eq:prob_matrixg}. We note that throughout the analysis, we will use $\bar Q$ rather than $Q$, where each $\alpha_{j}$ is replaced by $\bar \alpha_j$, indicating an average state change probability. \begin{figure*} \centering \begin{minipage}{0.75\textwidth} \begin{align} \bar Q=\left[ \begin{array}{cccccc} 1-\bar\alpha_1\Delta t & \bar\alpha_1 \Delta t & 0 & 0 & \cdots & \cdots \\ \beta_{1} \Delta t & 1-(\beta_1+\bar\alpha_2)\Delta t & \bar\alpha_2 \Delta t & 0 & \cdots & \cdots \\ 0 & \beta_2 \Delta t & 1-(\beta_2+\bar\alpha_3)\Delta t & \bar\alpha_3 \Delta t & \cdots & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \vdots & \vdots & \vdots & \vdots & \beta_m \Delta t & 1-\beta_m \Delta t\\ \end{array} \right]. 
\label{eq:prob_matrixg} \end{align} \hrule \end{minipage} \end{figure*} To compute the mutual information, $\mathcal{I}(\mathbf{X};\mathbf{Y})$, for the multi-state conformational model, we follow the same approach as in the previous section and generalize the formulation. First, following~\eqref{eq:mutual_info3}, we compute $H(Y_i|Y_{i-1})$ as \begin{equation} \begin{split} &=p_{\mathbf{Y}}(y_1)\mathcal{H}(\bar\alpha_{1})+\sum_{j=2}^{m} p_{\mathbf{Y}}(y_j)\bigg( \mathcal{H}(\beta_{j-1})+\mathcal{H}(\bar\alpha_{j}) \bigg)\\ &\hspace*{0.5in} + p_{\mathbf{Y}}(y_{m+1})\mathcal{H}(\beta_{m}). \end{split} \end{equation} Then, we find $H(Y_i|X_i,Y_{i-1})$ as \begin{equation} \begin{split} &=p_{\mathbf{Y}}(y_1)\bigg(p_{\mathbf{AF_{1}}}\mathcal{H}(\alpha_{\mathbf{AF_1}})+p_{\mathbf{NF_{1}}}\mathcal{H}(\alpha_{\mathbf{NF_1}})\bigg) \\ &+ \sum_{j=2}^{m} p_{\mathbf{Y}}(y_j)\bigg(\mathcal{H} (\beta_{j-1})+\bigg( p_{\mathbf{AF_{j}}}\mathcal{H}(\alpha_{\mathbf{AF_j}})+p_{\mathbf{NF_{j}}}\mathcal{H}(\alpha_{\mathbf{NF_j}})\bigg)\bigg)\\ & \hspace*{0.5in} + p_{\mathbf{Y}}(y_{m+1})\mathcal{H}(\beta_{m}). \end{split} \end{equation} Substituting back in~\eqref{eq:mutual_info3}, we get \begin{multline} \mathcal{I}(\mathbf{X};\mathbf{Y}) = \sum_{j=1}^{m}p_{\mathbf{Y}}(y_j)\mathcal{H}(\bar\alpha_{{j}})- \sum_{j=1}^{m} p_{\mathbf{Y}}(y_j)\\ \bigg( p_{\mathbf{AF}_{{j}}}\mathcal{H}(\alpha_{\mathbf{AF}_{j}})+p_{\mathbf{NF}_{j}}\mathcal{H}(\alpha_{\mathbf{NF}_{j}})\bigg). \label{eq:final_eqg} \end{multline}The capacity of the multi-state protein model is found by maximizing~\eqref{eq:final_eqg} with respect to the nanoantenna applied force as \begin{multline} C=\max _{p_\mathbf{AF}} \bigg[ \sum_{j=1}^{m} p_{\mathbf{Y}}(y_j)\mathcal{H}(\bar\alpha_{j}) - \sum_{j=1}^{m} p_{\mathbf{Y}}(y_j)\\ \bigg( p_{\mathbf{AF}_{{j}}}\mathcal{H}(\alpha_{\mathbf{AF}_{j}})+p_{\mathbf{NF}_{j}}\mathcal{H}(\alpha_{\mathbf{NF}_{j}})\bigg)\bigg]. \label{eq:capacityg} \end{multline} In this case, $p_{\mathbf{AF}}$ is a vector comprising the probabilities of the force applied to the $m$ links. \subsection{Example: Four-State Protein Model} \begin{figure}[h] \centering \includegraphics[width=0.46\textwidth]{model2.png} \footnotesize \caption{Multi-state protein model with several transitions.} \label{fig:model_2} \end{figure} To show the applicability of the protein multi-state model, we apply it to a four-state protein chain. The probability occupancy vector is $\mathbf{p}(t)=\left[p_{\mathbf{A}}(t), p_{\mathbf{B}}(t), p_{\mathbf{C}}(t), p_{\mathbf{D}}(t)\right].$ The relationship between the states is formulated using a Markov transition PMF, which was previously given in~\eqref{eq:cond1} and~\eqref{eq:cond2}. Hence, based on Fig.~\ref{fig:model_2}, if the previous state is $y_{i-1}=\mathbf{A}$, we have\begin{equation} \begin{split} p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(\mathbf{B}|\mathbf{A})&=\sum_{x_{i}}p_{\mathbf{Y}_{i}|\mathbf{X}_{i},\mathbf{Y}_{i-1}}(\mathbf{B}|x_i,\mathbf{A})p_{\mathbf{X}}(x_{i})\\ &=p_{\mathbf{NF_{1}}}\alpha_{\mathbf{N}\mathbf{F_{1}}}+p_{\mathbf{AF_{1}}}\alpha_{\mathbf{A}\mathbf{F_1}}=\bar \alpha_{1}, \end{split} \end{equation} and $p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(\mathbf{A}|\mathbf{A})=1-\bar \alpha_{1}$. 
On the other hand, if $y_{i-1}=\mathbf{B}$, \begin{equation} \begin{split} p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(\mathbf{A}|\mathbf{B})&=\sum_{x_{i}}p_{\mathbf{Y}_{i}|\mathbf{X}_{i},\mathbf{Y}_{i-1}}(\mathbf{A}|x_i,\mathbf{B})p_{\mathbf{X}}(x_{i}) \\ &=\beta_1, \end{split} \end{equation} and $p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(\mathbf{B}|\mathbf{B})=1-(\beta_1 +\bar\alpha_2)$. The relationships between the remaining states follow accordingly. Using~\eqref{eq:ss}, the steady state probabilities are found as \begin{equation}\label{eq:cases2} p_{\mathbf{Y}}(y)= \begin{cases} \frac{\beta_1 \beta_2 \beta_3}{\beta_1\beta_2 \beta_3+\bar\alpha_1\beta_2 \beta_3+ \bar\alpha_1\bar\alpha_2\beta_3+\bar\alpha_1\bar\alpha_2\bar\alpha_3}, & y= \mathbf{A}\\ \\ \frac{\bar\alpha_1 \beta_2 \beta_3}{\beta_1\beta_2 \beta_3+\bar\alpha_1\beta_2 \beta_3+ \bar\alpha_1\bar\alpha_2\beta_3+\bar\alpha_1\bar\alpha_2 \bar\alpha_3},& y= \mathbf{B} \\ \\ \frac{\bar\alpha_1 \bar\alpha_2 \beta_3}{\beta_1\beta_2 \beta_3+\bar\alpha_{1}\beta_2 \beta_3+ \bar\alpha_1\bar\alpha_2\beta_3+\bar\alpha_1\bar\alpha_2\bar\alpha_3},& y= \mathbf{C}\\ \\ \frac{\bar\alpha_1 \bar\alpha_2 \bar\alpha_3 }{\beta_1\beta_2 \beta_3+\bar\alpha_1\beta_2 \beta_3+ \bar\alpha_1\bar\alpha_2\beta_3+\bar\alpha_1\bar\alpha_2\bar\alpha_3},& y= \mathbf{D} \end{cases} \end{equation} In~\eqref{eq:cases2}, we have considered the steady states after a force has been applied to the system, i.e., each $\alpha_{j}$ is replaced by $\bar\alpha_{j}$. We also note that the same relationship between $\alpha$ and $\beta$ as in~\eqref{eq:relationship_alpha_beta} of Sec.~\ref{Sec3} holds. Finally, both the mutual information and the capacity are found by substituting the given states in~\eqref{eq:final_eqg} and~\eqref{eq:capacityg} accordingly. \subsection{Capacity with Average Energy Constraint} A variation on the optimization in~\eqref{eq:sys_const1} arises when the average energy of the applied nanoantenna force per channel use is also constrained. In this case, the constrained BA algorithm is deployed to find the capacity of the multi-state protein model. The resulting optimization problem is given as \begin{equation} \begin{aligned} & \underset{p_{\mathbf{A}\mathbf{F}}}{\text{max} \,\,} & & \mathcal{I}(\mathbf{X};\mathbf{Y}) \\ & \text{subject to} & & \sum_{i} p_{AF_i} E_{i} \leqslant E^{max}, \\ &&& 0\leq p_{AF_{i}}\leq 1. \\ \end{aligned} \label{eq:sys_const2} \end{equation} Here, $E_i$ is the energy applied to link $i$. The capacity with average energy constraint $E^{max}$ is defined as \begin{eqnarray} C & = &\max_{p_\mathbf{{AF}}}\left[ \sum_{i} p_{AF_{i}} \bar Q\log \frac{\bar Q}{\sum_{i}p_{AF_i}\bar Q} \right. \nonumber \\ & & \hspace*{0.65in} \left. - \lambda\left(\sum_{i}p_{AF_{i}}E_{i}-E^{max}\right)\right]. \label{eq:const2} \end{eqnarray} Here, $\bar Q$ is the transition probability matrix defined in~\eqref{eq:prob_matrixg}. The cost function in~\eqref{eq:const2} is parametrized by the Lagrange multiplier $\lambda$. The procedure followed to optimize the input distribution is similar to that without the average energy constraint. The additional step involves obtaining a value for $\lambda$ after updating the distribution vector $p_{\mathbf{AF}}$, which can be done using a simple bisection search. \section{Numerical Results} \label{Sec7} In this section, we present the results of numerically simulating our developed models. The aim is to find the information rates at which protein molecules convey information when triggered by THz nanoantennas. 
Several scenarios are presented to take into account different protein configurations undergoing either single or multiple signaling interactions. \subsection{Discrete Case Result} In our discrete scenario, the system is binary: the nanoantenna force is either present or absent, as mathematically formulated in Sec.~\ref{Sec4}. The mutual information is calculated from the analytically derived model, and the capacity is computed using a bisection search. This method is guaranteed to converge to the root of the derivative of the mutual information, i.e., the value of $p_{\mathbf{AF}}$ that maximizes the capacity. The discrete scenario demonstrates the existence of a communication channel, where information can be transmitted upon triggering the protein by THz EM waves. Figs.~\ref{fig:combined1} and~\ref{fig:combined2} illustrate the mutual information curves for $\beta=0.1$ and $\beta=0.9$, respectively. The value of $\alpha_{\mathbf{NF}}$ is fixed to $0.1$ while the values of $\alpha_{\mathbf{AF}}$ vary for both cases. As expected, the higher the value of $\alpha_{\mathbf{AF}}$, the higher the capacity, since $\alpha_{\mathbf{AF}}$ corresponds to the probability of folding. In addition, we notice that higher values of $\beta$ yield a higher capacity. This observation can be deduced from~\eqref{eq:final_eq}, where an increased value of $\beta$ corresponds to a higher value of $\mathcal{I}(\mathbf{X};\mathbf{Y})$. The values of $p_{\mathbf{AF}}$ which maximize the capacity are indicated by circles on the 2D plots of the mutual information curves. \begin{figure}[htp] \subfigure[]{% \includegraphics[height=5.3 cm, width=7cm]{part1a-eps-converted-to.pdf}% } \subfigure[]{% \includegraphics[height=5.3 cm, width=7cm]{part1b-eps-converted-to.pdf}% } \caption{(a) 3D contour plot of the mutual information curve where $p_{\mathbf{AF}}$ and $\alpha_{\mathbf{AF}}$ are varied. (b) 2D plot showing the maximizing values of $p_{\mathbf{AF}}$ by circles. $\alpha_{\mathbf{NF}}=0.1$ and $\beta=0.1$, while $\alpha_{\mathbf{AF}}$ varies from $0.1$ (bottom curve) to $0.9$ in increments of $0.2$.} \label{fig:combined1} \end{figure} \begin{figure}[htp] \centering \subfigure[]{% \includegraphics[height=5.3 cm, width=7cm]{part2a-eps-converted-to.pdf}% } \subfigure[]{% \includegraphics[height=5.3 cm, width=7cm]{part2b-eps-converted-to.pdf}% } \caption{(a) 3D contour plot of the mutual information curve where $p_{\mathbf{AF}}$ and $\alpha_{\mathbf{AF}}$ are varied. (b) 2D plot showing the maximizing values of $p_{\mathbf{AF}}$ by circles. $\alpha_{\mathbf{NF}}=0.1$ and $\beta=0.9$, while $\alpha_{\mathbf{AF}}$ varies from $0.1$ (bottom curve) to $0.9$ in increments of $0.2$.} \label{fig:combined2} \end{figure} \subsection{Capacity Per Channel Use Result} For the case of a continuous force, the BA algorithm is deployed to find the capacity. The attained result complements the discrete case by providing a more detailed analysis of how the capacity varies as a function of force. We utilize the relationships given in~\eqref{eq:alphabar} and~\eqref{eq:sys_const1} to simulate this scenario. Protein conformational changes are measured in nanometers (nm) and forces are given on the scale of piconewtons (pN)~\cite{valle2017multidomain}. The value of the protein conformational distance was fixed at $\Delta\ell= 2$ nm for maximum forces ranging between $0$ and $100$~pN. 
The selected force range of the nanoantenna reflects THz transmissions based on intra-body link budget analysis~\cite{7955066} and force sensitivity at the cellular level~\cite{matellan2018no}. Fig.~\ref{fig:cont1} demonstrates the capacity as a function of the applied nanoantenna force. We observe that, for fixed values of $\beta$ and $\alpha_{\mathbf{NF}}$, the capacity increases upon increasing the nanoantenna applied force. In addition, the higher the value of $\alpha_{\mathbf{NF}}$, the higher the achieved capacity for $\beta=0.9$. In order to understand this behavior, the change in Gibbs free energy, $\Delta E_{ij}^0$, must be examined. In fact, $\Delta E_{ij}^0$ is computed using the relationship presented in~\eqref{eq:relationship_alpha_beta1}, which is rearranged to yield \begin{equation} \Delta E_{ij}^0=k_{b}T \ln\left[\frac{\alpha_{\mathbf{NF}}}{\beta}\right]. \label{eq:concluded_relation} \end{equation} By increasing the value of $\alpha_{\mathbf{NF}}$, $\Delta E_{ij}^{0}$ increases until it approaches equilibrium ($\Delta E_{ij}^{0}=0$) at $\alpha_{\mathbf{NF}}=0.9$. The equilibrium state indicates a chemical balance, where no work needs to be done on the system as it is already in a stable state. As such, the force delivered by the nanoantenna is dedicated solely to increasing the capacity at which the protein receives information; no force is lost in first stabilizing the system before it can contribute to the capacity. Even for low values of $\alpha_{\mathbf{NF}}$, a nonzero capacity is attained upon applying a force. This indicates that the presented EM-molecular interface allows transmission of information under different biological scenarios, where the EM force can be regarded as a powerful tool that controls the energy pathways of proteins. \begin{figure}[!h] \centering \includegraphics[height=5.25cm, width=8.1cm]{single_state-eps-converted-to.pdf} \footnotesize \caption{The channel capacity as a function of the nanoantenna applied force. The value of $\beta$ is fixed to $0.9$ while the value of $\alpha_{\mathbf{NF}}$ varies.} \label{fig:cont1} \end{figure} \subsection{Capacity Result with Average Energy Constraint} For the multi-state protein model formulated in Sec.~\ref{Sec5}, we opt to find the capacity at which a cascade of protein configurations transduces information and carries out interactions upon THz stimulation. This scenario resembles enzymes and receptors that are activated via protein phosphorylation. In addition, the selectivity provided by using a THz nanoantenna allows us to control $\alpha_{\mathbf{AF}}$ by governing the $p_{\mathbf{AF}}$ applied to each link and therefore bias our network in a specific direction. The constrained BA algorithm is deployed, where an average energy constraint is applied to the capacity as formulated in Sec.~\ref{Sec5}-C; a sketch of this procedure is given below. For the simulations, we use the model illustrated in Fig.~\ref{fig:model_2}, consisting of four protein states. We examine different values of $\alpha_{\mathbf{NF}}$ while assuming $\alpha_{\mathbf{NF_1}}=\alpha_{\mathbf{NF_2}}=\alpha_{\mathbf{NF_3}}$. The value of $\beta$ is studied when it is either fixed or varied across the three links. By selecting different values of $\beta$, we can analyze how forward transition rates are impacted as nanoantenna force is being applied to the system. 
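For concreteness, the constrained BA iteration can be summarized in a few lines. The following is a minimal sketch under our own naming conventions and illustrative parameter ranges; it is a generic constrained Blahut--Arimoto update with a bisection on the Lagrange multiplier $\lambda$, not the exact code used to produce the figures, and it assumes a strictly positive input-output transition matrix.
\begin{verbatim}
import numpy as np

def ba_step(q, W, lam, E):
    # One Blahut-Arimoto update with a Lagrange penalty lam on
    # the average energy; W[x, y] > 0 maps input x to output y
    # and E[x] is the energy spent on input x.
    r = q @ W                                  # output marginal
    D = np.sum(W * np.log2(W / r), axis=1)     # per-input divergence
    q = q * np.exp2(D - lam * E)
    return q / q.sum()

def constrained_capacity(W, E, E_max, n_bisect=40, n_ba=300):
    # Bisection on lam: a larger lam penalizes energetic inputs
    # more strongly and hence lowers the average energy q @ E.
    lo, hi = 0.0, 100.0
    for _ in range(n_bisect):
        lam = 0.5 * (lo + hi)
        q = np.full(len(E), 1.0 / len(E))      # uniform start
        for _ in range(n_ba):
            q = ba_step(q, W, lam, E)
        lo, hi = (lam, hi) if q @ E > E_max else (lo, lam)
    D = np.sum(W * np.log2(W / (q @ W)), axis=1)
    return q @ D, q    # capacity estimate (bits/use), input law
\end{verbatim}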
\subsubsection{Fixed $\beta$} Since protein interactions reflect biological phenomena, a protein network will favor the conditions which achieve equilibrium. As such, at equilibrium, the system always has the highest capacity, as indicated by Figs.~\ref{fig:model2} and~\ref{fig:model22}. The results match the conclusion reached in Sec.~\ref{Sec7}-B, indicated by~\eqref{eq:concluded_relation}. When the system is out of equilibrium, heat dissipation occurs and work must be done to bring the system back to equilibrium, thereby reducing the attained capacity. It can also be noticed that the maximum achieved capacity in Figs.~\ref{fig:model2} and~\ref{fig:model22} is lower than that in Fig.~\ref{fig:cont1}. This is attributed to the energy constraint set by $E^{max}$ in~\eqref{eq:const2}. The chosen $E^{max}$ value corresponds to the typical energy consumed by a motor protein~\cite{howard2001mechanics}. \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth]{multi_0_9-eps-converted-to.pdf} \footnotesize \caption{The channel capacity for the multi-state protein model as a function of the nanoantenna applied force. The value of $\beta$ is fixed to $0.9$ for the three links while the value of $\alpha_{\mathbf{NF}}$ varies.} \label{fig:model2} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth]{multi_0_1-eps-converted-to.pdf} \footnotesize \caption{The channel capacity for the multi-state protein model as a function of the nanoantenna applied force. The value of $\beta$ is fixed to $0.1$ for the three links while the value of $\alpha_{\mathbf{NF}}$ varies.} \label{fig:model22} \end{figure} \subsubsection{Different $\beta$} Figs.~\ref{fig:mixedb1} and~\ref{fig:mixedb2} show the channel capacity for the multi-state protein model as a function of the nanoantenna force when the value of $\beta$ is set differently for each link. The capacity of the system depends on the combination of $\beta$ and $\alpha_{\mathbf{NF}}$ for the three links, as reflected by the mutual information formula. The maximum capacity is achieved when the overall free energy of the system, composed in our case of the three links, is closest to equilibrium. This relationship is deduced from~\eqref{eq:concluded_relation} and is given as \begin{equation} \Delta E_{ij}^0=k_{b}T \sum_{k=1}^{m} \ln\left[ \frac{\alpha_{\mathbf{NF}_k}}{\beta_k}\right]. \label{eq:concluded_relation2} \end{equation} This case resembles a more realistic intra-body scenario because unfolding rates between protein intermediates are not necessarily equal. Our results match the fact that physical systems have a statistical tendency to evolve toward states of maximum entropy or minimum Gibbs free energy~\cite{rietman2016thermodynamic}. \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth]{multi_diff1-eps-converted-to.pdf} \footnotesize \caption{The channel capacity for the multi-state protein model as a function of the nanoantenna applied force. The value of $\beta$ is different for each link, where $\beta_1=0.5$, $\beta_2=0.6$, $\beta_3=0.2$.} \label{fig:mixedb1} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth]{multi_diff2-eps-converted-to.pdf} \footnotesize \caption{The channel capacity for the multi-state protein model as a function of the nanoantenna applied force. 
The value of $\beta$ is different for each link, where $\beta_1=0.3$, $\beta_2=0.5$, $\beta_3=0.7$.} \label{fig:mixedb2} \end{figure} \section{Conclusion and Discussion} \label{Sec8} In this paper, we present a communication system which bridges EM nanonetworks and molecular paradigms. The developed stimuli-responsive system, consisting of a nanoantenna transmitter and a protein receiver, paves the way towards controlled intra-body interactions at the molecular level. The key idea relies on stimulating the protein vibrational modes to induce a change of state. Protein conformational changes activate biochemical events that transduce through intra-body pathways. The presented mathematical model uses the Boltzmann distribution to represent the system states. For the communication channel, a finite-state Markov chain model is used to represent the system inputs and outputs. Both a two-state and a multi-state protein model are developed. In the former model, the focus is on a single folding and unfolding interaction, which results in a controlled biological change in the medium followed by a cascade of reactions. Such a model is inspired by mechanosensitive channels that adopt two fundamental conformational channel states separated by an energy barrier. In the latter model, we investigate a series of interactions representing a protein undergoing intermediate changes in configuration, generalizing the presented two-state model. Expressions for the mutual information are derived for both cases, indicating the possible information rates achieved by stimulating proteins with THz nanoantennas. Several capacity constraints are also introduced to ensure the system is compatible with the intra-body medium. The attained results indicate a feasible communication platform for information transmission between the nanoantenna and the protein. They also express a fundamental link between kinetics and thermodynamics: protein interactions favor conditions of equilibrium even when an external force is applied to the system, which shows that the results adhere to the second law of thermodynamics. The results further agree with the fact that a time-homogeneous Markov chain converges to the Gibbs equilibrium measure, i.e., thermal equilibrium. In essence, the concept of mutual information introduced in this work not only indicates the amount of information the protein signaling pathway carries, but can also be interpreted in terms of molecular disorder, where the highest capacity is obtained when minimum energy is lost. Such a conclusion opens up various medical opportunities, in which proteins are controlled and directed towards certain favorable interactions. As a future direction, we aim to present a mathematical model that captures the interaction between THz waves and protein dynamics from a mechanical perspective. This involves studying the resonance response associated with protein conformational changes by modeling the protein as a large set of coupled harmonic oscillators. The mechanical model must be integrated with the current work in order to have a complete system that relates the triggered natural frequencies of proteins to the probability of folding. In addition, the authors would like to further study the relationship between THz waves and misfolded proteins associated with neurodegenerative diseases. 
This involves understanding how THz waves may alter the pathological mechanisms involved and how this knowledge can be leveraged to develop disease-modifying therapeutic strategies. \bibliographystyle{IEEEtran}
\section{Introduction} Overlapping divergences make the practical treatment of UV divergences in a quantum field theory cumbersome. In modern approaches, there exist various ways of tackling this issue, all based, in one way or another, on the use of infinitesimal or finite variations of the Feynman graphs with respect to appropriate parameters. The best known among these approaches is certainly the functional renormalization group \cite{Dupuis:2020fhh}, but there exist other possibilities, such as the one put forward in Ref.~\cite{tHooft:2004bkn}, see also Refs.~\cite{Baker:1976vz,Collins:1984xc} for even older proposals. The benefit of these approaches is that one does not need to worry about possible overlapping divergences, since any such divergences are automatically disentangled. There might be situations, however, where one needs to assess the absence of overlapping divergences in a given quantity built out of Feynman graphs. One recent example of this situation is reported in Ref.~\cite{Blaizot:2021ikl} where the overlapping divergences that appear in the two-particle-irreducible (2PI) formalism for the case of a scalar $\varphi^4$ theory are disentangled with the help of the functional renormalization group and classified into divergences of the two-point function, divergences of the four-point function, and divergences of higher derivatives $\delta^n\Phi[G]/\delta G^n$ (with $n\geq 3$) of the so-called Luttinger-Ward functional $\Phi[G]$, a functional of the propagator that enters the definition of the 2PI effective action. At first sight, there seem to be too many independent divergences compared to those expected in scalar $\varphi^4$ theory. However, a careful analysis reveals that the divergences of $\delta^3\Phi/\delta G^3$ (and then also those of subsequent derivatives) are non-overlapping, which, in turn, implies that they are entirely governed by those of the two- and four-point functions. Here we extend and refine the discussion of Ref.~\cite{Blaizot:2021ikl} to describe, on general grounds, how two subgraphs of a given Feynman graph can overlap. This allows us to derive a series of ``non-overlap'' theorems for one-particle-irreducible subgraphs with $2$, $3$ and $4$ external legs. For other interesting works that relate to overlapping divergences, see for instance \cite{Kreimer:1998iv,Connes:2002ui}. The absence of overlapping divergences is intimately related to the possibility of constructing a {\it skeleton expansion} for a given vertex function with a high enough number of external legs. By this, it is meant that, instead of computing such a vertex function by summing all the Feynman graphs it is made of, one can first sum the so-called skeleton graphs in this list, and then, in each skeleton, replace each line by the full propagator, each tree-level trilinear coupling by the full three-point function (if any), and each tree-level quartic coupling by the full four-point function. In this way, one can hide any reference to the bare mass and the bare trilinear and quartic couplings. This is a well-known result quoted for instance in Refs.~\cite{tHooft:2004bkn,Lu:1991qr}, with various applications, for instance in the context of conformal field theory \cite{Mack:1973kaa,Petkou:1994ad,Petkou:1995vu,Petkou:1996np,Goncalves:2018nlv}. A proof of this result is however difficult to find in the literature. 
In this paper, using the non-overlap theorems mentioned above, we provide a simple justification of the skeleton expansion for vertex functions with more than five legs, in the case of scalar field theories. We also discuss how the skeleton expansion can be extended to other classes of graphs, in particular to the derivatives of the Luttinger-Ward functional. Even though the non-overlap theorems apply to any theory, for convenience we consider the simpler framework of a scalar theory. We do not restrict the type of interaction, however, which could be any $\varphi^n$, with $n\geq 3$. In fact, we could consider several of these interactions simultaneously. In Sec.~II, we introduce various definitions. In particular, we define graphs and subgraphs and describe how a subgraph is inserted within a given graph with the help of connecting and returning lines. In Sec.~III, this is used to describe how two subgraphs of a given graph can overlap with each other. In Sec.~IV, we restrict to the case of one-particle-irreducible subgraphs and derive the non-overlap theorems which are then used in Sec.~V to justify the skeleton expansion of vertex functions with more than five legs. We then extend this result to other classes of functions, including the high enough derivatives of the Luttinger-Ward functional. \section{Graphs and subgraphs}\label{sec:graphs} In perturbative calculations, quantities are computed by summing Feynman graphs made of two basic elements: free propagators, represented graphically as lines, and vertices, represented by points with a certain number of legs.\footnote{For instance, the $\varphi^n$ interaction vertex has $n$ legs.} We stress that vertex legs are not to be seen as lines, but rather as little anchors to which lines can be attached (or not). In what follows, we introduce more precisely the notion of graph together with some related concepts. In particular, we describe how a subgraph of a graph is inserted within that graph by means of both connecting lines and returning lines. This will then allow us to describe all possible overlaps between subgraphs of a given graph. \subsection{Graphs} We define a {\it graph} ${\cal G}$ as any collection of vertices and lines with the property that the two ends of any line of ${\cal G}$ are attached to vertices of ${\cal G}$. We can distinguish two types of vertices within the graph: those whose legs are all connected to lines of ${\cal G}$ are called {\it internal vertices}, while the others are called {\it external (or boundary) vertices}. The legs of external vertices are of two types: legs attached to lines of ${\cal G}$ and legs attached to no line. We call the latter the {\it external legs} of the graph ${\cal G}$ and denote their number by $n_{\rm ext}({\cal G})$ in what follows. We stress that our definition of graph excludes the possibility of lines with one end not attached to a vertex. This is just a convenient choice for the subsequent discussion, and, if needed, we can always attach such free lines to the external legs of a graph. Reciprocally, any graph including such free lines is associated with a unique graph that has no such free lines. We also exclude lines which are not connected to any vertex at all. These are just trivial elements (disconnected from the rest) that can again be added at will when needed. There are no other restrictions for the moment, so the graphs could be one-particle-reducible, unamputated or even disconnected. Restrictions will be considered when appropriate. 
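Before moving on, let us note that these conventions translate directly into a small data structure. The sketch below is only an illustration with our own (hypothetical) naming: a vertex carries a fixed number of leg ``anchors'', a line joins two anchors, and the external legs are precisely the anchors hit by no line.
\begin{verbatim}
from itertools import chain

# A vertex is (vertex_id, n_legs); a line is a pair of anchors
# ((v, k), (v2, k2)), i.e., leg k of vertex v joined to leg k2
# of vertex v2.  Internal vertices are those all of whose
# anchors are tied to some line.

def external_legs(vertices, lines):
    used = set(chain.from_iterable(lines))  # anchors tied to a line
    return [(v, k) for v, n in vertices for k in range(n)
            if (v, k) not in used]

# A phi^4 example: one 4-leg vertex whose legs 0 and 1 are joined
# by a line; legs 2 and 3 remain unattached, so n_ext = 2.
vertices = [("a", 4)]
lines = [(("a", 0), ("a", 1))]
assert len(external_legs(vertices, lines)) == 2
\end{verbatim}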
\begin{figure}[t] \begin{center} \includegraphics[height=0.3\textheight]{./example.pdf} \caption{An example of graph with six external legs $e_1,\dots, e_6$ in $\varphi^4$ theory. The thick lines highlight one particular subgraph with six external legs as well, $e_1$, $e_2$ and $e'_3,\dots,e'_6$. We have chosen a one-particle-irreducible graph for illustration but the discussion in Secs.~\ref{sec:graphs} and \ref{sec:overlap} applies to any type of graph as defined in Sec.~\ref{sec:graphs}. The leg labels $e'_1$ and $e'_2$ have been introduced for later purpose, see Sec.~\ref{sec:overlap}.} \label{fig:example} \end{center} \end{figure} In Fig.~\ref{fig:example}, we draw one example of graph in $\varphi^4$ theory. We shall use it recurrently to illustrate the various notions to be introduced below. \subsection{Subgraphs} A subgraph $\bar {\cal G}$ of a graph ${\cal G}$ is any collection of vertices and lines of ${\cal G}$ that forms a graph in the sense defined above. We write this as $\bar {\cal G}\subset {\cal G}$.\footnote{In this paper, we use a set theory notation (which, however, differs slightly from its use in set theory), similar to the one used in \cite{Kreimer:1998iv}.} We mention that any internal vertex of $\bar {\cal G}$ is necessarily an internal vertex of ${\cal G}$. In contrast, an external vertex of $\bar {\cal G}$ can be either an external vertex of ${\cal G}$ or an internal vertex of ${\cal G}$. Let us also mention that, when seen as a part of ${\cal G}$, the legs of the external vertices of $\bar {\cal G}$ are now of three different types: legs attached to lines of $\bar {\cal G}$, legs attached to lines of ${\cal G}$ that are not in $\bar {\cal G}$ and legs attached to no line (and thus corresponding to external legs of the original graph ${\cal G}$). Among these three types of legs attached to the external vertices, we refer to the last two as the external legs of the subgraph $\bar {\cal G}$. It is clear that the subgraph can be made a separate entity, disconnected from the original graph, by cutting all lines that are attached to its external legs. Indeed, these are the only lines that connect a vertex of the subgraph to a vertex in the rest of the graph.\footnote{Moreover, once separated from the rest, the external legs of the subgraph $\bar {\cal G}$ coincide with the external legs of the graph $\bar {\cal G}$ seen as a separated entity, as defined in the previous section.} An example of subgraph is shown in Fig.~\ref{fig:example}. We see clearly what are the external vertices, and thus the external lines that need to be cut to make the subgraph disconnected from the original graph (these are the lines connected to the legs $e'_3,\dots,e'_6$). \subsection{Connecting lines and returning lines} The subgraph $\bar {\cal G}$ is said to be {\it dense} within ${\cal G}$ if its vertices exhaust all vertices of ${\cal G}$. In the opposite case, we can define a new subgraph, known as the {\it complementary graph} of $\bar {\cal G}$ within ${\cal G}$, denoted ${\cal G}/\bar {\cal G}$ and formed by all the remaining vertices and all the lines that connect them. Together with the vertices of $\bar {\cal G}$, the vertices of ${\cal G}/\bar {\cal G}$ exhaust all the vertices of ${\cal G}$. This is not so, however, for the lines. Indeed, there might be lines that connect one vertex of $\bar {\cal G}$ and one vertex of ${\cal G}/\bar {\cal G}$ and which, therefore, belong neither to $\bar {\cal G}$ nor to ${\cal G}/\bar {\cal G}$. 
We call these lines the {\it connecting lines} of $\bar {\cal G}$ within ${\cal G}$. Obviously, these can also be seen as the connecting lines of ${\cal G}/\bar{\cal G}$ within ${\cal G}$. \begin{figure}[t] \begin{center} \includegraphics[height=0.38\textheight]{./subgraph.pdf} \caption{A subgraph $\bar {\cal G}$ is inserted in a graph ${\cal G}$ by means of $n_c$ connecting lines and $n_r$ returning lines. By definition, the complementary graph ${\cal G}/\bar {\cal G}$ has no returning lines. In contrast, we cannot hide the returning lines of $\bar {\cal G}$ via a redefinition of $\bar {\cal G}$ because, in general, $\bar {\cal G}$ will be forced upon us by the context.} \label{fig:subgraph} \end{center} \end{figure} There might also be certain lines that connect vertices of $\bar {\cal G}$ but which do not belong to $\bar {\cal G}$. We call these {\it returning lines} of $\bar {\cal G}$ within ${\cal G}$. Such lines can exist because, when selecting the subgraph $\bar {\cal G}$, we choose lines and vertices of ${\cal G}$ but our choice does not necessarily include all lines that connect the selected vertices with each other. On the other hand, our definition of the complementary subgraph ${\cal G}/\bar {\cal G}$ is such that all the lines connecting the vertices of ${\cal G}/\bar {\cal G}$ are elements of ${\cal G}/\bar {\cal G}$. In other words, ${\cal G}/\bar {\cal G}$ does not have any returning lines within ${\cal G}$. Of course, we could redefine the subgraph $\bar {\cal G}$ such that it includes the returning lines as well, and therefore such that $\bar {\cal G}$ and ${\cal G}/\bar {\cal G}$ are treated in a more symmetrical way. However, we shall not do this here for a precise reason: in the following discussion, the subgraph $\bar {\cal G}$ will be imposed on us by the context, while we will always be free to choose ${\cal G}/\bar {\cal G}$ such that it does not have any returning lines. Note finally that, in the case of a dense subgraph, there are only returning lines, no connecting lines. On the other hand, the absence of connecting lines does not necessarily imply that the subgraph $\bar {\cal G}$ is dense within ${\cal G}$ since the complementary subgraph ${\cal G}/\bar {\cal G}$ could be disconnected from $\bar {\cal G}$. The equivalence works in the case of a connected graph ${\cal G}$ though. The notions of connecting and returning lines provide a graphical representation of how a given subgraph $\bar {\cal G}$ is inserted within a graph ${\cal G}$, see Fig.~\ref{fig:subgraph}. This structure will be central in the following developments. If we take the example of Fig.~\ref{fig:example}, we see that the considered subgraph has one returning line (the thin line connected to the legs $e'_5$ and $e'_6$) and two connecting lines (the two thin lines attached respectively to the legs $e'_3$ and $e'_4$). The remaining lines and vertices (in the bottom right of the figure) form the complementary graph. \section{Overlapping subgraphs}\label{sec:overlap} We are now ready to discuss how two subgraphs $\bar {\cal G}_1$ and $\bar {\cal G}_2$ of a given graph ${\cal G}$ can overlap with each other. In fact, for the moment, the original graph ${\cal G}$ will play no role and we can equally think in terms of the overlap of two original graphs. By overlapping subgraphs or graphs, we mean that $\bar {\cal G}_1$ and $\bar {\cal G}_2$ have certain vertices and lines in common. 
We shall in fact consider the collection of all common vertices and lines between $\bar {\cal G}_1$ and $\bar {\cal G}_2$. It is quite obvious that, if a line is common to $\bar {\cal G}_1$ and $\bar {\cal G}_2$, then the two vertices attached to its ends are also common to $\bar {\cal G}_1$ and $\bar {\cal G}_2$. It follows that this common collection of lines and vertices forms a graph, referred to as the {\it overlap graph between $\bar {\cal G}_1$ and $\bar {\cal G}_2$,} which we denote $\bar {\cal G}_1\cap \bar {\cal G}_2$ in what follows. \begin{figure}[t] \begin{center} \includegraphics[height=0.32\textheight]{./overlap.pdf} \caption{Overlap between two graphs $\bar {\cal G}_1$ and $\bar {\cal G}_2$. The overlap graph $\smash{\bar {\cal C}=\bar{\cal G}_1\cap\bar{\cal G}_2}$ is the (maximal) common subgraph of $\bar {\cal G}_1$ and $\bar {\cal G}_2$. This common subgraph has $n_{c_i}$ connecting lines within $\bar {\cal G}_i$ and $n_{r_i}$ returning lines within $\bar {\cal G}_i$. We denote by $x$, $x_1$ and $x_2$ the numbers of external legs of $\bar {\cal C}$, $\bar {\cal G}_1/\bar {\cal C}$ and $\bar {\cal G}_2/\bar {\cal C}$ that are connected neither to connecting lines nor to returning lines.} \label{fig:overlap} \end{center} \end{figure} \subsection{Overlap pattern} This common graph $\bar {\cal G}_1\cap \bar {\cal G}_2$ is in fact a subgraph of both $\bar {\cal G}_1$ and $\bar {\cal G}_2$. We can then apply the results of the previous section twice and introduce two sets of connecting lines, $n_{c_1}$ and $n_{c_2}$ in number, as well as two sets of returning lines, $n_{r_1}$ and $n_{r_2}$ in number. This leads to the graphical representation shown in Fig.~\ref{fig:overlap}, where, for later use, we have also introduced the numbers of external legs of $\bar {\cal G}_1\cap \bar {\cal G}_2$, $\bar {\cal G}_1/(\bar {\cal G}_1\cap \bar {\cal G}_2)$ and $\bar {\cal G}_2/(\bar {\cal G}_1\cap \bar {\cal G}_2)$ attached neither to connecting lines nor to returning lines, denoted respectively as $x$, $x_1$ and $x_2$. It is important to stress that no connecting line of $\bar {\cal G}_1\cap \bar {\cal G}_2$ within $\bar {\cal G}_1$ can be a connecting line of $\bar {\cal G}_1\cap \bar {\cal G}_2$ within $\bar {\cal G}_2$, or vice-versa. Otherwise, this line would be a line of both $\bar {\cal G}_1$ and $\bar {\cal G}_2$ and hence an element of $\bar {\cal G}_1\cap \bar {\cal G}_2$, i.e., not a connecting line. The same remark applies to the returning lines. \subsection{Counting external legs} The external legs of $\bar {\cal G}_1$ are those labelled $x_1$ and $x$, as well as the $n_{t_2}\equiv n_{c_2}+2n_{r_2}$ legs attached to the connecting and returning lines of $\bar {\cal C}$ within $\bar {\cal G}_2$. Reciprocally, the external legs of $\bar {\cal G}_2$ are those labelled $x_2$ and $x$, as well as the $n_{t_1}\equiv n_{c_1}+2n_{r_1}$ legs attached to the connecting and returning lines of $\bar {\cal C}$ within $\bar {\cal G}_1$. We can then write \beq n_{\rm ext}(\bar {\cal G}_1) & = & x_1+x+n_{t_2}\,,\label{eq:s1}\\ n_{\rm ext}(\bar {\cal G}_2) & = & x_2+x+n_{t_1}\,.\label{eq:s2} \eeq On the other hand, the number of external legs of $\bar {\cal G}_1\cap\bar {\cal G}_2$ is \beq n_{\rm ext}(\bar {\cal G}_1\cap\bar {\cal G}_2)=x+n_{t_1}+n_{t_2}\,.\label{eq:s3} \eeq Finally, it will be convenient to consider the {\it union} of $\bar {\cal G}_1$ and $\bar {\cal G}_2$, obtained by putting together all the vertices and lines of $\bar {\cal G}_1$ and $\bar {\cal G}_2$. 
This is clearly a graph, which we denote $\bar {\cal G}_1\cup \bar {\cal G}_2$. Its number of external legs is given by \beq n_{\rm ext}(\bar {\cal G}_1\cup \bar {\cal G}_2)=x_1+x+x_2\,.\label{eq:s4} \eeq Using Eqs.~(\ref{eq:s1})-(\ref{eq:s4}), it is then easily checked that \beq\label{eq:fusion_2} n_{\rm ext}(\bar {\cal G}_1\cap\bar {\cal G}_2)+n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=n_{\rm ext}(\bar {\cal G}_1)+n_{\rm ext}(\bar {\cal G}_2)\,. \eeq This formula is strongly reminiscent of the well-known relation between the cardinals of two finite sets $X_1$, $X_2$ and the cardinals of the sets $X_1\cup X_2$ and $X_1\cap X_2$.\footnote{More precisely, one has ${\rm card}(X_1\cap X_2)+{\rm card}\,(X_1\cup X_2)={\rm card}\,X_1+{\rm card}\,X_2\,.$} We stress however that Eq.~(\ref{eq:fusion_2}) is not a trivial application of the corresponding formula between the cardinals of the sets of external legs of $\bar {\cal G}_1$, $\bar {\cal G}_2$, $\bar {\cal G}_1\cup\bar {\cal G}_2$ and $\bar {\cal G}_1\cap \bar {\cal G}_2$, because the sets of external legs of $\bar {\cal G}_1$ or $\bar {\cal G}_2$ are not subsets of the set of external legs of $\bar {\cal G}_1\cup \bar {\cal G}_2$, and so the union of the sets of external legs of $\bar {\cal G}_1$ and $\bar {\cal G}_2$ is not the set of external legs of $\bar {\cal G}_1\cup \bar {\cal G}_2$. Instead, the formula (\ref{eq:fusion_2}) needs to be seen as a consequence of the overlapping structure depicted in Fig.~\ref{fig:overlap}. \subsection{Listing the possible overlaps}\label{sec:list} The previous formulas allow us to list all possible overlaps between $\bar {\cal G}_1$ and $\bar {\cal G}_2$. First, it follows from Eq.~(\ref{eq:fusion_2}) that a necessary condition for $\bar {\cal G}_1$ and $\bar {\cal G}_2$ to have an overlap is that \beq\label{eq:central} n_{\rm ext}(\bar {\cal G}_1\cap \bar {\cal G}_2)\leq n_{\rm ext}(\bar {\cal G}_1)+n_{\rm ext}(\bar {\cal G}_2)\,. \eeq This also means that, given $n_{\rm ext}(\bar {\cal G}_1)$ and $n_{\rm ext}(\bar {\cal G}_2)$, we can obtain all possible overlaps between $\bar {\cal G}_1$ and $\bar {\cal G}_2$ by considering all possible values of $n_{\rm ext}(\bar {\cal G}_1\cap\bar{\cal G}_2)$ compatible with the constraint (\ref{eq:central}) and, for each of these values, solving the system (\ref{eq:s1})-(\ref{eq:s3}) for $x$, $x_1$ and $x_2$ as functions of $n_{t_1}$ and $n_{t_2}$. One finds \beq x & = & n_{\rm ext}(\bar {\cal G}_1\cap\bar {\cal G}_2)-n_{t_1}-n_{t_2}\,,\label{eq:x}\\ x_1 & = & n_{\rm ext}(\bar {\cal G}_1)-n_{\rm ext}(\bar {\cal G}_1\cap\bar {\cal G}_2)+n_{t_1}\,,\label{eq:x1}\\ x_2 & = & n_{\rm ext}(\bar {\cal G}_2)-n_{\rm ext}(\bar {\cal G}_1\cap\bar {\cal G}_2)+n_{t_2}\,,\label{eq:x2} \eeq with the constraints \beq && n_{t_1}+n_{t_2}\leq n_{\rm ext}(\bar {\cal G}_1\cap\bar {\cal G}_2)\,,\label{eq:c1}\\ && n_{t_1}\geq n_{\rm ext}(\bar {\cal G}_1\cap\bar {\cal G}_2)-n_{\rm ext}(\bar {\cal G}_1)\,,\label{eq:c2}\\ && n_{t_2}\geq n_{\rm ext}(\bar {\cal G}_1\cap\bar {\cal G}_2)-n_{\rm ext}(\bar {\cal G}_2)\,.\label{eq:c3} \eeq Any possible solution defined by the values of $x$, $x_1$, $x_2$, $n_{t_1}$ and $n_{t_2}$ is called an {\it overlap mode}. In App.~A, we determine the number of overlap modes, and in Table I, we collect the various overlap modes for subgraphs with $0$ and $1$ external legs. 
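As a cross-check, the system~(\ref{eq:x})-(\ref{eq:x2}) together with the constraints~(\ref{eq:c1})-(\ref{eq:c3}) can be enumerated mechanically. The following sketch is our own illustrative code, not part of the derivation; it reproduces, in particular, the rows of Table~\ref{table1}.
\begin{verbatim}
def overlap_modes(p1, p2):
    # All overlap modes between subgraphs with p1 and p2 external
    # legs: tuples (ncap, nt1, nt2, x, x1, x2), where ncap is the
    # number of external legs of the intersection, obeying the
    # constraints (c1)-(c3); x, x1, x2 follow from (x)-(x2).
    modes = []
    for ncap in range(p1 + p2 + 1):             # Eq. (central)
        for nt1 in range(max(0, ncap - p1), ncap + 1):
            for nt2 in range(max(0, ncap - p2), ncap - nt1 + 1):
                x  = ncap - nt1 - nt2
                x1 = p1 - ncap + nt1
                x2 = p2 - ncap + nt2
                modes.append((ncap, nt1, nt2, x, x1, x2))
    return modes

# The eight rows of Table I for (p1, p2) = (0, 0), (0, 1), (1, 1):
for p1, p2 in [(0, 0), (0, 1), (1, 1)]:
    for m in overlap_modes(p1, p2):
        print((p1, p2), m)
\end{verbatim}
Running it for $(p_1,p_2)=(1,1)$, for instance, returns the five modes listed in the lower part of the table.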
An example of overlap of two subgraphs with $6$ external legs each is provided in Fig.~\ref{fig:example} where the highlighted subgraph, obtained by cutting the lines attached to the legs $e'_3,\dots,e'_6$, overlaps with the subgraph obtained by cutting the lines attached to the legs $e'_1$ and $e'_2$. This overlap mode is characterized by $n_{t_1}=4$ (with $n_{c_1}=2$ and $n_{r_1}=1$), $n_{t_2}=2$ (with $n_{c_2}=2$ and $n_{r_2}=0$), $x=0$, $x_1=2$ and $x_2=4$. We mention that the above equations make no direct reference to the numbers of connecting lines $n_{c_i}$ or to the numbers of returning lines $n_{r_i}$, but rather to the combination $n_{t_i}=n_{c_i}+2n_{r_i}$, and, in fact, one can interpret the returning lines as a degenerate case of connecting line which does not connect to any complementary graph but loops back instead to the subgraph. This allows us to simplify the graphical representation given in Fig.~\ref{fig:overlap} by ignoring the returning lines, or, more precisely, by hiding them as part of the connecting lines.\footnote{We could introduce {\it overlapping submodes} by considering all the possible ways one can distribute the given $n_{t_i}=n_{c_i}+2n_{r_i}$ among $n_{c_i}$ and $n_{r_i}$. However the distinction between connecting lines and returning lines does not play a very deep role in what follows, see however the short discussion in Sec.~\ref{sec:E}.}\\ \begin{table}[ht] \begin{tabular}{|| c|c|c|c|c|c|c|c|c ||} \hline \;$n_{\rm ext}({\bar {\cal G}_1})$\; & \;\;$n_{\rm ext}({\bar {\cal G}_2})$\;\; & \;\;$n_{\rm ext}({\bar {\cal G}_1\cap\bar {\cal G}_2})$\;\; & \;\;$n_{t_1}$\;\; & \;\;$n_{t_2}$\;\; & \;\;$x$\;\; & \;\;$x_1$\;\; & \;\;$x_2$\;\; & \;\;$n_{\rm ext}({\bar {\cal G}_1}\cup\bar {\cal G}_2)$\;\;\\ \hline \hline \;0\;\; & \;\;0\;\; & \;\;0\;\; & \;\;0\;\; & \;\;0\;\; & \;\;0\;\; & \;\;0\;\; & \;\;0\;\; & \;\;0\;\;\\ \hline \;0\;\; & \;\;1\;\; & \;\;0\;\; & \;\;0\;\; & \;\;0\;\; & \;\;0\;\; & \;\;0\;\; & \;\;1\;\; & \;\;1\;\;\\ \hline \;0\;\; & \;\;1\;\; & \;\;1\;\; & \;\;1\;\; & \;\;0\;\; & \;\;0\;\; & \;\;0\;\; & \;\;0\;\; & \;\;0\;\;\\ \hline \;1\;\; & \;\;1\;\; & \;\;0\;\; & \;\;0\;\; & \;\;0\;\; & \;\;0\;\; & \;\;1\;\; & \;\;1\;\; & \;\;2\;\;\\ \hline \;1\;\; & \;\;1\;\; & \;\;1\;\; & \;\;0\;\; & \;\;0\;\; & \;\;1\;\; & \;\;0\;\; & \;\;0\;\; & \;\;1\;\;\\ \hline \;1\;\; & \;\;1\;\; & \;\;1\;\; & \;\;0\;\; & \;\;1\;\; & \;\;0\;\; & \;\;0\;\; & \;\;1\;\; & \;\;1\;\;\\ \hline \;1\;\; & \;\;1\;\; & \;\;1\;\; & \;\;1\;\; & \;\;0\;\; & \;\;0\;\; & \;\;1\;\; & \;\;0\;\; & \;\;1\;\;\\ \hline \;1\;\; & \;\;1\;\; & \;\;2\;\; & \;\;1\;\; & \;\;1\;\; & \;\;0\;\; & \;\;0\;\; & \;\;0\;\; & \;\;0\;\;\\ \hline \end{tabular} \caption{\label{table1} Overlap modes between subgraphs with $0$ and $1$ external legs, as obtained using Eqs.~(\ref{eq:x}), (\ref{eq:x1}) and (\ref{eq:x2}) and the constraints (\ref{eq:c1})-(\ref{eq:c3}). We have omitted certain cases that are deduced from the ones in the table using the exchange $\bar {\cal G}_1\leftrightarrow \bar {\cal G}_2$. For the cases listed here $n_{t_i}\leq 1$ and thus there are no returning lines.} \end{table} So far the analysis concerned any type of subgraphs of a given graph. In particular, the subgraphs did not need to be connected. In the next section, we particularize the analysis to specific classes of subgraphs for which we show that overlaps are not possible. 
\section{The case of one-particle-irreducible subgraphs} A {\it one-particle-irreducible (1PI) subgraph} is a connected graph that cannot be made into two disconnected pieces by cutting just one line. In this section we analyze the possibility of overlap between two such subgraphs. More precisely, we look for generic enough conditions under which such overlaps are excluded. Of course, it will be implicitly assumed here that none of the subgraphs in question is a subgraph of the other one (in particular, they are assumed to be distinct). Otherwise they always overlap, in a trivial manner. Moreover, since the union of two overlapping 1PI subgraphs is also 1PI, it is necessarily contained in one of the 1PI components of the original graph ${\cal G}$. Thus, without loss of generality, we can assume that the original graph is 1PI (and in particular connected). A 1PI subgraph with $p$ external legs is called a {\it $p$-insertion}. This notion includes the (1PI) graph itself (if the latter has $p$ external legs), and also any single vertex associated with the $\varphi^p$ interaction (if the latter is included in the model). We shall now introduce certain notions associated with $p$-insertions and then analyze the conditions under which $2$-, $3$- and $4$-insertions cannot overlap. \subsection{Definitions}\label{sec:def} A graph ${\cal G}$ is called a {\it $p$-skeleton} if it contains no other $p$-insertion than the graph itself (if it has $p$ external legs) or those made of a single vertex (if the $\varphi^p$ interaction vertex is part of the model). We mention that a 1PI graph is necessarily a $0$-skeleton (since it is connected) and also a $1$-skeleton (since any non-trivial $1$-insertion would necessarily be connected to the rest of the graph by a single line). More generally, we call a {\it $p_1/p_2$-skeleton} a graph that is both a $p_1$- and a $p_2$-skeleton.\\ Given two $p$-insertions $\bar {\cal G}_1$ and $\bar {\cal G}_2$ of a graph ${\cal G}$, it might occur that one of them is a subgraph of the other, say $\bar {\cal G}_1\subset \bar {\cal G}_2$. This defines a partial ordering over the set of $p$-insertions of the graph ${\cal G}$. Like any partial ordering over a finite set, it admits maximal elements, that is, elements that are larger than any other element that is ordered with respect to them. In the present context, we refer to these maximal elements as {\it maximal} $p$-insertions. They correspond to $p$-insertions that are not themselves subgraphs of another $p$-insertion within the graph ${\cal G}$. Obviously, a maximal $p$-insertion cannot be a subgraph of another maximal $p$-insertion of the same graph, unless these two maximal $p$-insertions coincide.\\ The union of two overlapping $p$-insertions is a $q$-insertion for some $q$. This is because, if there were a way to split the resulting graph by cutting one line, the cut line would have to lie within one of the two original $p$-insertions, which is impossible since the latter are 1PI by definition. If we now consider the particular case of the union of two overlapping (and distinct) maximal $p$-insertions, then necessarily $q\neq p$, since otherwise the union would have created a new $p$-insertion that is distinct from the original ones and that is larger than any of them, in contradiction with the fact that the latter were both assumed to be maximal. We shall make use of this result below. \subsection{Non-overlap theorems} It is not difficult to see the added value of considering 1PI subgraphs. 
First, the number of connecting lines of $\bar {\cal G}_1\cap \bar {\cal G}_2$ within $\bar {\cal G}_i$ is either $n_{c_i}=0$ or $n_{c_i}\geq 2$, a single connecting line being excluded since it could be cut to disconnect $\bar {\cal G}_i$, in contradiction with its one-particle irreducibility. In the first case ($n_{c_i}=0$), we necessarily have $n_{r_i}\geq 1$, since otherwise one subgraph would be included in the other one. It follows that $n_{t_i}\geq 2$. From Eq.~(\ref{eq:s3}) this implies $n_{\rm ext}(\bar {\cal G}_1\cap\bar {\cal G}_2)\geq 4$, and, combining this with Eq.~(\ref{eq:fusion_2}), we arrive at \beq\label{eq:in4} 4+n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)\leq n_{\rm ext}(\bar {\cal G}_1)+n_{\rm ext}(\bar {\cal G}_2)\,. \eeq For not too large values of $n_{\rm ext}(\bar {\cal G}_1)$ and $n_{\rm ext}(\bar {\cal G}_2)$, this is a strong constraint on $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)$ that will allow us to find certain obstructions to the presence of overlaps.\\ Consider first the case of $2$-insertions, with $\smash{p_1\equiv n_{\rm ext}(\bar {\cal G}_1)=2}$ and $\smash{p_2\equiv n_{\rm ext}(\bar {\cal G}_2)=2}$. From Eq.~(\ref{eq:in4}) it follows that $\smash{n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=0}$. Since the original graph ${\cal G}$ is assumed to be 1PI, this means that $\bar {\cal G}_1\cup \bar {\cal G}_2$ is the graph ${\cal G}$ itself, and, therefore, that the latter cannot have any external leg. We have thus arrived at a first ``non-overlap'' theorem: the only possibility for an overlap of $2$-insertions (self-energies) is within a graph with no external legs. In other words:\\ {\bf Theorem 2:} $2$-insertions cannot have an overlap within a (1PI) graph with external legs.\\ \noindent Let us mention that we know exactly how this overlap occurs in the case of a graph with no external legs, since $n_{\rm ext}(\bar {\cal G}_1\cap \bar {\cal G}_2)=4$ from Eq.~(\ref{eq:fusion_2}) and therefore $\smash{n_{t_1}=n_{t_2}=2}$ from Eqs.~(\ref{eq:c1})-(\ref{eq:c3}).\\ Consider next $\smash{p_1\equiv n_{\rm ext}(\bar {\cal G}_1)=3}$ and $\smash{p_2\equiv n_{\rm ext}(\bar {\cal G}_2)=3}$. In this case, the inequality (\ref{eq:in4}) leaves room for the cases $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=0$, $1$, $2$. We could analyze the various overlap modes using the discussion in the previous section. However, our purpose here is to find conditions for non-overlap. To this purpose, we note that in the cases $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=0,1$, we have again $\bar {\cal G}_1\cup\bar {\cal G}_2={\cal G}$ and therefore these cases can only exist if the original (1PI) graph ${\cal G}$ has $0$ or $1$ external legs. In the case $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=2$, we do not necessarily have $\bar {\cal G}_1\cup\bar {\cal G}_2={\cal G}$, so this case is possible if the original graph has two external legs or if it contains $2$-insertions. We then arrive at a second non-overlap theorem:\\ {\bf Theorem 3:} $3$-insertions cannot have an overlap within a (1PI) $2$-skeleton graph that has strictly more than two external legs.\\ Let us finally consider $\smash{p_1\equiv n_{\rm ext}(\bar {\cal G}_1)=4}$ and $\smash{p_2\equiv n_{\rm ext}(\bar {\cal G}_2)=4}$. In this case, the inequality (\ref{eq:in4}) leaves room for the values $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=0,\dots,4$. This situation is a bit peculiar because, contrary to the previous cases, the highest possible value of $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)$ allowing for an overlap, that is $4$, coincides precisely with the number of external legs of the insertions we are probing. 
So we cannot just get rid of this overlap mode by restricting to $4$-skeleton graphs, for this would mean that there are no $4$-insertions to consider in the first place (aside from the trivial ones). Here is where the notion of maximal $4$-insertions and the result quoted at the end of Sec.~\ref{sec:def} come in handy. Indeed, suppose that we restrict our analysis to 1PI graphs that are $2$- and $3$-skeletons. Then, what we can show is the following third non-overlap theorem:\\ {\bf Theorem 4:} Maximal $4$-insertions cannot have an overlap within a (1PI) $2/3$-skeleton graph that has strictly more than three external legs.\\ Restricting to $2/3$-skeleton graphs with strictly more than $3$ legs immediately gets rid of the cases $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=0,\dots,3$. Restricting to maximal $4$-insertions gets rid of the case $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=4$ because, as we already discussed above, the union of two overlapping (distinct) maximal $4$-insertions cannot be a $4$-insertion.\\ The above results have been obtained by using the inequality (\ref{eq:in4}). An alternative strategy consists in listing all possible overlaps of a given type and checking that none of them fulfills the premises of the above theorems. This is done in Fig.~\ref{fig:list} where the possible overlaps between $2$-, $3$- and $4$-insertions are listed. In each figure, the blobs make reference to the blobs in Fig.~\ref{fig:overlap}, with the small difference that we have hidden the returning lines as part of the connecting lines, see the discussion at the end of Sec.~\ref{sec:list}. \begin{figure}[t] \begin{center} \hspace{-10.5cm}\includegraphics[height=0.10\textheight]{./list2.pdf}\\ \vglue6mm \hspace{-6.7cm}\includegraphics[height=0.22\textheight]{./list3.pdf}\\ \vglue6mm \hspace{-3.1cm}\includegraphics[height=0.33\textheight]{./list4.pdf}\\ \caption{Possible overlaps between two $2$-insertions, two $3$-insertions or two $4$-insertions. None of them complies with the premises of the theorems $2$, $3$ and $4$. In other words, the theorems apply whenever their premises are fulfilled.} \label{fig:list} \end{center} \end{figure} \subsection{Overlapping insertions of different order} We can also consider overlaps between insertions of different order. Take for instance $p_1=2$ and $p_2=3$. The inequality (\ref{eq:in4}) implies $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)\leq 1$. Thus an overlap between these insertions can only occur in graphs with zero or one external leg, or, in other words, a $2$-insertion and a $3$-insertion cannot have an overlap within a graph with strictly more than one leg:\\ {\bf Theorem 23:} A $2$-insertion and a $3$-insertion cannot overlap within a (1PI) graph with strictly more than one external leg.\\ For $\smash{p_1=2}$ and $\smash{p_2=4}$, an overlap can occur only in the cases $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=0,1,2$. This is a situation similar to the one we encountered for the overlap of two $4$-insertions: the highest possible value of $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)$ allowing for an overlap, that is $2$, coincides precisely with the number of external legs of one of the insertions we are probing. Consider then not an overlap between an arbitrary $2$-insertion and an arbitrary $4$-insertion, but rather between a maximal $2$-insertion and an arbitrary $4$-insertion. This type of overlap cannot occur in a graph with strictly more than one external leg. 
Indeed, the cases $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=0,1$ are trivially excluded, whereas the case $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=2$ is excluded because otherwise $\bar {\cal G}_1\cup\bar {\cal G}_2$ would correspond to a $2$-insertion that strictly contains $\bar {\cal G}_1$, in contradiction with the fact that $\bar {\cal G}_1$ was assumed to be maximal. We then arrive at the following result:\\ {\bf Theorem 24:} A maximal $2$-insertion and a $4$-insertion cannot overlap within a (1PI) graph with strictly more than one external leg.\\ Finally, for $\smash{p_1=3}$ and $\smash{p_2=4}$, an overlap can occur only in the cases $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=0,1,2,3$. This is a situation similar to the one we encountered for the overlap of two $4$-insertions: the highest possible value of $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)$ allowing for an overlap, that is $3$, coincides precisely with the number of external legs of one of the insertions we are probing. Consider then not an overlap between an arbitrary $3$-insertion and an arbitrary $4$-insertion, but rather between a maximal $3$-insertion and an arbitrary $4$-insertion. This type of overlap cannot occur in a $2$-skeleton graph with strictly more than two external legs. Indeed, the cases $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=0,1,2$ are excluded in a trivial way, whereas the case $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=3$ is excluded because otherwise $\bar {\cal G}_1\cup\bar {\cal G}_2$ would correspond to a $3$-insertion that strictly contains $\bar {\cal G}_1$, in contradiction with the fact that $\bar {\cal G}_1$ was assumed to be maximal. We then arrive at the following result:\\ {\bf Theorem 34:} A maximal $3$-insertion and a $4$-insertion cannot overlap within a (1PI) $2$-skeleton graph with strictly more than two external legs.\\ These results can once again be derived by listing all possible overlaps between $2$-, $3$- and $4$-insertions, see Fig.~\ref{fig:list2}. \begin{figure}[t] \begin{center} \hspace{-9.6cm}\includegraphics[height=0.11\textheight]{./list23.pdf}\\ \vglue6mm \hspace{-6.2cm}\includegraphics[height=0.11\textheight]{./list24.pdf}\\ \vglue6mm \includegraphics[height=0.11\textheight]{./list34.pdf}\\ \caption{Possible overlaps between a $2$-insertion and a $3$-insertion, a $2$-insertion and a $4$-insertion, and a $3$-insertion and a $4$-insertion. None of them complies with the premises of the theorems $23$, $24$ and $34$. In other words, the theorems apply whenever their premises are fulfilled. We note that the first diagram in the last row has strictly more than $3$ external legs, just as in the premise of theorem $34$. However, the $3$-insertion that overlaps with the $4$-insertion is not maximal, so this case is not excluded by the theorem.} \label{fig:list2} \end{center} \end{figure} \subsection{Connecting lines versus returning lines}\label{sec:E} So far, we made no distinction between connecting and returning lines. This was possible because they play essentially the same role. In particular, with insertions, we have $n_{c_i}\geq 2$ or, in the case where $n_{c_i}=0$, $2n_{r_i}\geq 2$, which allowed us to use $n_{t_i}\geq 2$. One may want to make a distinction between connecting lines and returning lines, and, in particular, treat the cases $n_{c_i}=0$ and $n_{c_i}\geq 2$ separately.
When proceeding this way, one is led to discuss three cases of overlap, a {\it generic overlap} with $n_{c_1}\geq 2$ and $n_{c_2}\geq 2$, and {\it non-generic overlaps} with $n_{c_1}=0$ or $n_{c_2}=0$, or both. Using the terminology that we introduced above, the generic overlap corresponds to the case where $\bar {\cal G}_1\cap \bar {\cal G}_2$ is neither dense within $\bar {\cal G}_1$ nor within $\bar {\cal G}_2$, whereas the non-generic overlaps correspond to the cases where $\bar {\cal G}_1\cap\bar {\cal G}_2$ is dense either within $\bar {\cal G}_1$ or within $\bar {\cal G}_2$, or within both. There is nothing to add to the discussion in the previous section in the case of a generic overlap. In the case of non-generic overlaps, however, the analysis can be slightly refined. Indeed, if $n_{c_1}=0$, and because the subgraphs under consideration are connected, we need to have $x_1=0$, which implies that $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=x+x_2$. Moreover, because $n_{r_1}\geq 1$, it follows from Eq.~(\ref{eq:s2}) that $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)\leq p_2-2$. Similarly, if $n_{c_2}=0$, we need to have $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)\leq p_1-2$. It is easily seen that these constraints are stronger than (\ref{eq:in4}). More precisely, in the case $p_1=p_2=2$, we obtain the same constraint $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=0$. However, in the case $p_1=p_2=3$, we find $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=0,1$, which is stronger than the constraint that we found earlier and which allows us to enlarge the premise of theorem $3$ to the case of graphs with strictly more than one external leg. Similarly, for $p_1=p_2=4$, we find $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=0,1,2$, which allows us to enlarge the premise of theorem $4$ to graphs with strictly more than two external legs and to any type of $4$-insertion, not necessarily maximal.\footnote{Arbitrary $4$-insertions can have a generic overlap though. In a $2/3$-skeleton graph, we necessarily have $n_{\rm ext}({\bar {\cal G}_1}\cup {\bar {\cal G}_2})=4$ and then $n_{\rm ext}({\bar {\cal G}_1}\cap {\bar {\cal G}_2})=4$, which, according to (\ref{eq:c1})-(\ref{eq:c3}), leads to all the overlapping modes complying with $n_{t_1}+n_{t_2}\leq 4$. Since $n_{t_i}\geq 2$ in the generic case, we must have $n_{t_i}=2$ and thus $n_{c_i}=2$ and $n_{r_i}=0$.} In the case where $p_1\neq p_2$, with $p_i=2$, $3$ or $4$, it is easily checked that the constraints are the same as those obtained above, so the premises of theorems 23, 24 and 34 are not changed. \subsection{Higher order insertions} Consider now the case $p_1=p_2=5$. The inequality (\ref{eq:in4}) imposes $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)\leq 6$. We see here that, even if we restricted to $2/3/4$-skeleton graphs and to maximal $5$-insertions, we could find overlaps with $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)=6$. From (\ref{eq:fusion_2}), this implies $n_{\rm ext}(\bar {\cal G}_1\cap\bar {\cal G}_2)=4$ which, according to (\ref{eq:c1})-(\ref{eq:c3}), leads to all the overlap modes complying with $n_{t_1}+n_{t_2}\leq 4$. It follows that, without further restrictions, the non-overlap theorems derived above are specific to the cases $p=2$, $3$ and $4$. We can nonetheless find a non-overlap theorem in the case $p=5$ if we further restrict the possible subgraphs under consideration.
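As an aside, the counting underlying these theorems is elementary enough to automate. The following minimal Python sketch (our own illustration; the function and variable names are ours) lists the values of $n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)$ allowed by the inequality (\ref{eq:in4}), with the lower bound $4$ on $n_{\rm ext}(\bar {\cal G}_1\cap\bar {\cal G}_2)$ promoted to a parameter, since it becomes $6$ in the two-particle-irreducible case considered next:
\begin{verbatim}
def allowed_union_legs(p1, p2, n_cap_min=4):
    # Values of n_ext(G1 u G2) allowed by
    #   n_cap_min + n_ext(G1 u G2) <= p1 + p2,
    # with n_cap_min = 4 for 1PI insertions (6 for 2PI ones).
    return list(range(0, max(p1 + p2 - n_cap_min, -1) + 1))

for p in (2, 3, 4, 5):
    print(p, allowed_union_legs(p, p))
# 2 [0]
# 3 [0, 1, 2]
# 4 [0, 1, 2, 3, 4]
# 5 [0, 1, 2, 3, 4, 5, 6]
\end{verbatim}
Comparing these lists with the number of external legs of the ambient graph reproduces the premises of the theorems above.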
Assume for instance that we inquire about the overlap of two-particle-irreducible (2PI) subgraphs, that is, subgraphs that cannot be split apart by cutting two lines. In the case of a generic overlap, we have $n_{t_i}\geq 3$ and therefore $n_{\rm ext}(\bar {\cal G}_1\cap\bar {\cal G}_2)\geq 6$, from which it now follows that \beq 6+n_{\rm ext}(\bar {\cal G}_1\cup\bar {\cal G}_2)\leq n_{\rm ext}(\bar {\cal G}_1)+n_{\rm ext}(\bar {\cal G}_2)\,. \eeq This time, an overlap of $3$-insertions can only occur if the original graph has no external legs, $4$-insertions cannot overlap if the graph has strictly more than $2$ legs, and $5$-insertions cannot overlap if the graph has strictly more than $4$ legs. Moreover, maximal $6$-insertions cannot overlap if the graph has strictly more than $6$ legs. These results extend almost identically to the case of a non-generic overlap.\footnote{For the $3$-insertions, we need to require the graph to have strictly more than $1$ leg, for the $5$-insertions, it is enough to require the graph to have strictly more than $3$ legs, and for the $6$-insertions it is enough to require the graph to have strictly more than $4$ legs and the result applies to arbitrary $6$-insertions, not necessarily maximal.} \section{Application: Hiding the bare mass as well as the trilinear and quartic bare couplings} In this section, we build upon the previous results to show how, for a large class of functions, it is possible to hide the dependence on the bare mass as well as the dependence on the trilinear and quartic bare couplings, using the two-, three- and four-point functions. For simplicity, we first show how this is done for the vertex functions $\Gamma^{(n)}$ with $n\geq 5$. To do so, we show that these vertex functions admit a skeleton expansion, that is, rather than computing them by adding all the perturbative graphs they are made of, we can alternatively sum over all the $2/3/4$-skeleton graphs in this list, and then replace each free propagator by the full two-point function $G=[\Gamma^{(2)}]^{-1}$, each tree-level $3$-vertex by the full three-point function $\Gamma^{(3)}$ and each tree-level $4$-vertex by the full four-point function $\Gamma^{(4)}$. As already mentioned in the Introduction, this is a known result. It is however interesting to see how it derives from the non-overlap theorems of the previous section. At the end of the section, we argue that this result extends in fact to a larger class of functions. \subsection{Hiding the bare mass} Consider a 1PI graph ${\cal G}$ with external legs. We define a {\it chain} of ${\cal G}$ as any connected sequence of lines $G_0$ and $2$-insertions $\Sigma_i$ of the form \beq G_0 \Sigma_1 G_0\Sigma_2 G_0\cdots G_0\Sigma_n G_0\,. \eeq Since we have assumed that the 1PI graph ${\cal G}$ has external legs, the starting $G_0$ is necessarily different from the ending one. Moreover, we require that this sequence is complete, that is, that one cannot add additional $\Sigma_k G_0$'s or $G_0\Sigma_k$'s. In a graph with external legs, it is always possible to identify unambiguously all the chains. It may happen that certain lines $G_0$ are not connected to any self-energy. We call these {\it trivial} chains. Given two chains ${\cal C}_1$ and ${\cal C}_2$ of ${\cal G}$, we say that ${\cal C}_2$ is a {\it subchain} of ${\cal C}_1$ if it is a chain of one of the $2$-insertions of ${\cal C}_1$. This relation, which we denote as ${\cal C}_2\subset {\cal C}_1$, defines a partial ordering over the set of chains of ${\cal G}$.
Like any partial ordering over a finite set, it admits maximal elements, which we call {\it maximal chains}. Now, according to theorem $2$ above, in a graph with external legs, $2$-insertions cannot have any overlap (unless of course one of them is a subgraph of the other). It is then easily verified that maximal chains cannot have any overlap either (unless of course one of them is a subchain of the other). Let us now use this result to show how to hide the bare mass in $\Gamma^{(n)}$, with $n\geq 3$. We start by writing $\Gamma^{(n\geq 3)}$ as \beq \Gamma^{(n\geq 3)}=\sum_{{\cal D}} {\cal D}[G_0,\{g^{(m\geq 3)}_{\rm b}\}]\,,\label{eq:g3} \eeq where the sum runs over all Feynman graphs ${\cal D}$ that contribute to $\Gamma^{(n\geq 3)}$. Depending on the context, ${\cal D}$ denotes the graph itself or the corresponding Feynman integral. It depends on the bare mass $m_{\rm b}$ via the bare free propagator $G_0$. We have also made explicit the dependence on the various bare couplings $\{g^{(m\geq 3)}_{\rm b}\}$. Since maximal chains do not overlap in the graphs contributing to $\Gamma^{(n\geq 3)}$, one can unambiguously associate to each graph ${\cal D}$ a $2$-skeleton graph, denoted ${\cal D}_2$, obtained from ${\cal D}$ by replacing any maximal chain by a trivial chain. It is convenient to momentarily associate a different label to each trivial chain appearing in ${\cal D}_2$, so that ${\cal D}_2$ is a function of these various chains, ${\cal D}_2[G_1,\dots ,G_p,\{g^{(m\geq 3)}_{\rm b}\}]$. The original graph ${\cal D}$ can now be written in terms of its associated $2$-skeleton as \beq {\cal D}[G_0,\{g^{(m\geq 3)}_{\rm b}\}]={\cal D}_2[{\cal C}_1,\dots,{\cal C}_p,\{g^{(m\geq 3)}_{\rm b}\}]\,,\label{eq:rel2} \eeq where the ${\cal C}_i$ are the maximal chains of ${\cal D}$ that were replaced by trivial chains $G_i$ in order to obtain the $2$-skeleton ${\cal D}_2[G_1,\dots,G_p,\{g^{(m\geq 3)}_{\rm b}\}]$. We mention that there is no pre-factor in the right-hand side of Eq.~(\ref{eq:rel2}). This is because the symmetry factors factorize: the symmetry factor of ${\cal D}$ equals the symmetry factor of ${\cal D}_2$ times the symmetry factors of the ${\cal C}_i$. This property relates to the fact that the replacement of maximal chains by trivial chains is unambiguous, see below for more details. Let us now sum both sides of Eq.~(\ref{eq:rel2}) over the graphs ${\cal D}$ contributing to $\Gamma^{(n\geq 3)}$. We perform the sum in two steps. First, we sum over all graphs ${\cal D}$ that have the same ${\cal D}_2$, and then we sum over all the possible $2$-skeletons ${\cal D}_2$. Because we are summing over all possible graphs of $\Gamma^{(n\geq 3)}$ and because this does not put any restrictions on the chains that can appear in the right-hand side of Eq.~(\ref{eq:rel2}) for a given ${\cal D}_2$, we find that the sum over all graphs ${\cal D}$ that share the same ${\cal D}_2$ replaces each chain in the right-hand side of Eq.~(\ref{eq:rel2}) by the sum of all possible chains, that is, the two-point function $[\Gamma^{(2)}]^{-1}$: \beq {\cal D}_2\big[[\Gamma^{(2)}]^{-1},\dots,[\Gamma^{(2)}]^{-1},\{g^{(m\geq 3)}_{\rm b}\}\big] \eeq which we denote for simplicity as ${\cal D}_2\big[[\Gamma^{(2)}]^{-1},\{g^{(m\geq 3)}_{\rm b}\}\big]$.
We now need to sum over all possible skeletons ${\cal D}_2$, and we then find \beq \Gamma^{(n\geq 3)}=\sum_{{\cal D}_2} {\cal D}_2\big[[\Gamma^{(2)}]^{-1},\{g^{(m\geq 3)}_{\rm b}\}\big]\,.\label{eq:2res} \eeq \subsection{Hiding the trilinear and quartic bare couplings}\label{sec:hiding} Let us now consider $n\geq 4$ and start from Eq.~(\ref{eq:2res}). Since this sum is made of $2$-skeletons that have strictly more than $2$ legs, we can apply theorem $3$. This means that to any graph ${\cal D}_2$, we can unambiguously associate a $2/3$-skeleton ${\cal D}_{23}\in{\cal P}_{23}$ by shrinking any maximal $3$-insertion to a trivial one. Using the same argument as above, we find\footnote{One could wonder here why we are not applying our strategy to $\Gamma^{(3)}$ since theorem $3$ applies to graphs with strictly more than two external legs. This has to do with our choice of definition of a $p$-skeleton diagram, which allows for the presence of $p$-insertions equal to the whole graph (in the case where the latter has $p$ external legs). Shrinking such an insertion to a trivial one would lead to a skeleton graph but would miss many others. We could redefine $p$-skeletons as graphs that contain only trivial, tree-level $p$-insertions. In this case, no skeleton would be missed and Eq.~(\ref{eq:3res}) would apply also to $\Gamma^{(3)}$. This has limited interest, however, for the latter identity is a tautology. Similar remarks apply to Eq.~(\ref{eq:4res}) and $\Gamma^{(4)}$.} \beq \Gamma^{(n\geq 4)}=\sum_{{\cal D}_{23}} {\cal D}_{23}\big[[\Gamma^{(2)}]^{-1},\Gamma^{(3)},\{g^{(m\geq 4)}_{\rm b}\}\big].\label{eq:3res} \eeq Finally, let us now consider $n\geq 5$ and start from Eq.~(\ref{eq:3res}). Since this sum is made of $2/3$-skeletons that have strictly more than $3$ legs, we can apply theorem $4$. Using the same strategy as above, we conclude that \beq \Gamma^{(n\geq 5)}=\sum_{{\cal D}_{234}} {\cal D}_{234}\big[[\Gamma^{(2)}]^{-1},\Gamma^{(3)},\Gamma^{(4)},\{g^{(m\geq 5)}_{\rm b}\}\big]\,,\label{eq:4res} \eeq where the sum runs over the $2/3/4$-skeleton graphs contributing to $\Gamma^{(n\geq 5)}$. Note that it was important to first resum the three-point function. Otherwise, the graphs would not have been $3$-skeletons and we could not have applied theorem $4$. One could wonder whether the $2/3$-skeleton graph ${\cal D}_{23}$ (obtained from ${\cal D}$ by first replacing each maximal chain by a trivial chain and then any maximal $3$-insertion by a trivial one) coincides with ${\cal D}_{32}$ (obtained via a similar procedure but in the opposite order). The identification ${\cal D}_{23}={\cal D}_{32}$ relies on the non-overlap theorem $23$ and also guarantees that the graphs ${\cal D}_{23}$ are all the $2/3$-skeletons originally present in the collection of graphs ${\cal D}$, that is, that no $3$-skeleton graph disappeared in the reduction from ${\cal D}$ to ${\cal D}_2$. Similar remarks apply to ${\cal D}_{234}$, ${\cal D}_{243}$, ${\cal D}_{342}$, ${\cal D}_{324}$, ${\cal D}_{423}$ and ${\cal D}_{432}$. Note finally that we cannot continue the procedure (\ref{eq:g3}) $\rightarrow$ (\ref{eq:2res}) $\rightarrow$ (\ref{eq:3res}) $\rightarrow$ (\ref{eq:4res}) further because maximal $5$-insertions can overlap within $2/3/4$-skeleton graphs, as discussed in the previous section. \subsection{Symmetry factors} The above results rely on the factorization of symmetry factors. Let us now show how this factorization comes about. Consider for instance a graph ${\cal D}_2\big[[\Gamma^{(2)}]^{-1},\{g^{(m\geq 3)}_{\rm b}\}\big]$.
After identifying the maximal $3$-insertions of the graph, which we write $V^{(3)}_1,\dots,V^{(3)}_p$, the graph can be rewritten in terms of the associated $2/3$-skeleton as \beq {\cal D}_2\big[[\Gamma^{(2)}]^{-1},\{g^{(m\geq 3)}_{\rm b}\}\big]=\alpha \,{\cal D}_{23}\big[[\Gamma^{(2)}]^{-1},V^{(3)}_1,\dots,V^{(3)}_p,\{g^{(m\geq 4)}_{\rm b}\}\big]\,, \eeq where the pre-factor $\alpha$ accounts for a potential mismatch between the symmetry factor of ${\cal D}_2$ and the symmetry factor of ${\cal D}_{23}$ multiplied by the symmetry factors of the $V^{(3)}_i$. We now show that $\alpha=1$, meaning that the symmetry factors factorize. To see this, we note that in order to compute the symmetry factor of ${\cal D}_2\big[[\Gamma^{(2)}]^{-1},\{g^{(m\geq 3)}_{\rm b}\}\big]$, we can first compute the symmetry factor of an $(n+3p)$-point function $R$ obtained by chopping off the $3$-insertions from the original graph and then connecting the chopped legs back to the $V^{(3)}_i$. The only thing that one needs to pay attention to is that, by computing the symmetry factor in this alternative way, we are missing some Wick contractions resulting from the possibility of redistributing the various tree-level vertices of the original graph (for simplicity, we assume here that there are only trilinear vertices) among each of the $V^{(3)}_i$ or $R$. Denoting by $n_i$ the number of vertices in each of the $V_i$ and by $n$ the number of vertices in $R$, this produces a factor $(n+n_1+\dots+n_p)!/(n!n_1!\cdots n_p!)$ in the counting of Wick contractions. If we denote by $N_X$ the number of Wick contractions of a given contribution $X$, we then have \beq N_{{\cal D}_2}=\frac{(n+n_1+\dots+n_p)!}{n!n_1!\cdots n_p!} N_R\,N_{V^{(3)}_1}\cdots N_{V^{(3)}_p}\,. \eeq But the symmetry factor is equal to the number of Wick contractions divided by the factorial of the number of vertices and by $3!$ (since we are here considering cubic vertices) raised to the power of the number of vertices. It follows that \beq s_{{\cal D}_2} & = & \frac{N_{{\cal D}_2}}{(n+n_1+\dots+n_p)!(3!)^{n+n_1+\dots+n_p}}\nonumber\\ & = & \frac{N_R}{n!(3!)^n} \frac{N_{V_1}}{n_1!(3!)^{n_1}}\cdots \frac{N_{V_p}}{n_p!(3!)^{n_p}}=s_R s_{V^{(3)}_1}\cdots s_{V^{(3)}_p}\,. \eeq Applying the same formula to ${\cal D}_{23}$, with $V^{(3)}_i=g^{(3)}_{\rm b}$, we find $s_R=s_{{\cal D}_{23}}$ and thus \beq s_{{\cal D}_2}=s_{{\cal D}_{23}} s_{V^{(3)}_1}\cdots s_{V^{(3)}_p}\,, \eeq which is the announced factorization of symmetry factors. The same reasoning applies to the resummation of chains or four-point functions. \subsection{Extension} So far, we have considered the case of 1PI graphs. However, it is clear that our results apply to a larger class of graphs. Consider first the non-overlap theorems. They apply to any disconnected graph whose connected pieces fulfill the premises of these theorems. For instance, theorem $4$ applies to any disconnected graph whose connected parts are $2/3$-skeletons with strictly more than three external legs, and so on. Next, let us examine how the possibility to hide bare parameters extends to functions other than the $\Gamma^{(n)}$'s. Consider for instance a quantity given as an infinite sum of $2$-skeleton graphs whose connected pieces have strictly more than two external legs. It is clear that theorems 2, 3 and 23 apply to each of these graphs and one can therefore unambiguously associate $2/3$-skeleton graphs to each of these graphs.
If we now assume that the infinite sum of graphs puts no restriction on the $2$- and $3$-insertions that can appear (this is a property that needs to be verified for each infinite class of graphs that one may consider; it is of course obvious for the $\Gamma^{(n)}$'s), then we can proceed as for the $\Gamma^{(n)}$'s and hide the dependence on the bare mass and trilinear bare coupling using the full two- and three-point functions. A direct application of this result is the elimination of the bare parameters in the higher derivatives $\delta^n\Phi/\delta G^n$ (with $n\geq 3$) of the Luttinger-Ward functional $\Phi[G]$. This functional is the sum of two-particle-irreducible graphs with no external legs, that is, graphs that cannot be split apart by cutting two lines. The derivatives $\delta^n\Phi/\delta G^n$ are also sums of two-particle-irreducible graphs but only with respect to cuts that leave the external legs associated to a given $\delta/\delta G$ on the same side of the cut. It is easily seen that the connected components of any $\delta^n\Phi/\delta G^n$ with $n\geq 3$ obey the premises of theorem $3$ above. Moreover, the two-particle irreducibility puts no constraint on the possible $3$-insertions that can occur.\footnote{It only imposes that the $3$-insertions cannot be attached to two external legs originating from the same derivative $\delta/\delta G$.} One can then follow the same strategy as in Sec.~\ref{sec:hiding} to show that $\delta^n\Phi/\delta G^n$ with $n\geq 3$ admits a skeleton expansion in terms of the full two- and three-point functions. The corresponding $2/3$-skeletons obey the premises of theorem $4$, and since the two-particle irreducibility puts no constraint on the possible $4$-insertions that can occur, one can proceed one step further and derive a skeleton expansion in terms of the full two-, three- and four-point functions. As already mentioned in the Introduction, this has been recently put to good use to formulate a finite set of flow equations for $\Phi$-derivable approximations that make no reference to the bare theory, see Ref.~\cite{Blaizot:2021ikl}. \section{Final remarks} \subsection{Applications} The possibility to express $\Gamma^{(n\geq 5)}$ as an infinite sum of $2/3/4$-skeleton graphs with propagators, three- and four-vertices given respectively by $[\Gamma^{(2)}]^{-1}$, $\Gamma^{(3)}$ and $\Gamma^{(4)}$ is a useful tool for truncating the infinite hierarchies of equations that appear in continuum approaches to Quantum Field Theory, such as the Dyson-Schwinger tower of equations or the functional renormalization group hierarchy. In such frameworks, a given $n$-point function is typically expressed in terms of higher ones, leading to an infinite tower of equations. Now, by moving far enough down the hierarchy and by using the present result, it is clear that one can replace the infinite tower of equations by a finite number of them in which the highest $n$-point functions are expressed in terms of lower ones. The hierarchy is thus closed at the price of expressing some of the $n$-point functions as infinite sums of skeleton graphs in terms of the lower $n$-point functions. But since one can truncate this infinite sum of skeletons according to the number of loops of the skeletons, one obtains a systematically improvable scheme in which one only needs to solve a finite number of equations of the hierarchy.
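As an illustration of how such a truncation looks, one could for instance close the hierarchy at the level of the four-point function by replacing the five-point function with the expansion (\ref{eq:4res}) restricted to skeletons with at most $\ell_{\rm max}$ loops, \beq \Gamma^{(5)}\simeq \sum_{{\cal D}_{234}\,:\,\ell({\cal D}_{234})\leq \ell_{\rm max}} {\cal D}_{234}\big[[\Gamma^{(2)}]^{-1},\Gamma^{(3)},\Gamma^{(4)},\{g^{(m\geq 5)}_{\rm b}\}\big]\,, \eeq so that the equations for $\Gamma^{(2)}$, $\Gamma^{(3)}$ and $\Gamma^{(4)}$ form a closed, finite system, with the maximal loop order $\ell_{\rm max}$ of the skeletons serving as the control parameter of the approximation (the notation $\ell({\cal D}_{234})$ for the number of loops of a skeleton is ours and purely schematic).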
We also mention that in theories where the primarily divergent $n$-point functions have at most $n=4$ external legs, this gives a very graphical explanation of why, in a renormalizable theory, higher $n$-point functions (with $n\geq 5$) are finite once the primarily divergent functions have been renormalized. Indeed, once written in terms of $2/3/4$-skeletons, there are no other subdivergences in the graph than those of the two-, three- and four-point functions. Moreover, there are no global divergences since $n\geq 5$. In the case of a theory such as $\varphi^6$ in $d=3$ dimensions, which is also renormalizable but features primarily divergent functions with $6$ legs, this graphical explanation does not apply since maximal $6$-insertions can overlap within any graph and therefore there are inevitably overlapping divergences. \subsection{Connection to $n$PI effective actions} The present approach closely resembles that followed with $n$-particle-irreducible ($n$PI) effective actions \cite{deDominicis:1964uu,Berges:2004pu,Carrington:2010qq}. Let us, however, emphasize some differences. In fact, the present approach deals only with quantities for which the bare mass and the trilinear and quartic bare couplings can be hidden into the two-, three- and four-point functions while avoiding graph over-counting. In contrast, the $n$PI framework deals with the sum of vacuum graphs $\ln Z$, for which none of the above non-overlap theorems apply. Indeed, for such graphs, there is no unambiguous way to identify the maximal $2$-, $3$- and $4$-insertions. It is still possible to rewrite this sum of vacuum graphs as a sum of skeletons.\footnote{The notion of skeleton needs to be slightly extended though, as compared to the definition given in the present paper. For instance, a $2$-skeleton with no external legs is usually defined as a graph in which one cannot isolate a self-energy by cutting two distinct lines. This definition incorporates more graphs than the ones covered by the definition of the present paper since one can then allow for $2$-insertions that are closed on each other by means of a single line.} However, this rewriting always involves certain terms that depend on the bare mass and the bare couplings and requires additional terms to avoid double counting. For instance, within the 2PI framework, using the notion of cycles \cite{deDominicis:1964uu}, one can show that \beq \ln Z=\Gamma[G]=\frac{1}{2}{\rm Tr}\,\ln G+\frac{1}{2}{\rm Tr}\,G_0^{-1}G+\Phi[G]\,,\label{eq:Z} \eeq where $\Phi[G]$ is the Luttinger-Ward functional referred to above. The first two terms account for the overcounting of graphs that arises from the fact that there is no unique way to identify maximal $2$-insertions in $\ln Z$. Moreover, the second term depends explicitly on the bare mass $m^2_{\rm b}$, so it is not possible to fully hide this bare parameter in this case (although we stress that this remaining dependence is rather trivial). Similar remarks apply to the rewriting of $\ln Z$ in terms of the three- and four-point functions leading to the so-called $3$PI and $4$PI effective actions $\Gamma[G,\Gamma^{(3)}]$ and $\Gamma[G,\Gamma^{(3)},\Gamma^{(4)}]$, which express $\ln Z$ in terms of $G\equiv[\Gamma^{(2)}]^{-1}$ and $\Gamma^{(3)}$, or $G$, $\Gamma^{(3)}$ and $\Gamma^{(4)}$, respectively. The same remarks apply to the $n$-point functions obtained by imposing a stationarity condition on any of these functionals.
For instance, the four-point function as derived from $\Gamma[G,\Gamma^{(3)},\Gamma^{(4)}]$ is given by an equation that still makes explicit reference to the quartic bare coupling. In contrast, higher $n$-point functions admit a representation in which no such reference to the bare parameters appears. \section{Conclusion} In this article, we have studied how two arbitrary subgraphs of a given Feynman graph can overlap with each other. When restricting to 1PI subgraphs, we have shown how this allows one to derive useful ``non-overlap'' theorems for the cases of $2$-, $3$- and $4$-insertions. One consequence of these is the well-known skeleton expansion for vertex functions $\Gamma^{(n)}$ with $n\geq 5$, which allows one to entirely hide any reference to the bare mass, as well as to the trilinear and quartic bare couplings, using the two-, three- and four-point functions, and this without any over-counting correction. We have also discussed how this result can be extended to other classes of functions, in particular to iterated derivatives of the Luttinger-Ward functional. As discussed in Ref.~\cite{Blaizot:2021ikl}, the previous results have applications in the renormalization of the 2PI effective action and the corresponding $\Phi$-derivable approximations, as well as in the construction of new truncation schemes for the functional renormalization group hierarchy, but their potential range of applicability is definitely larger. \acknowledgements{I would like to thank J.~P. Blaizot for useful discussions and collaboration on related topics, and J.~P. Blaizot and D.~M. van Egmond for carefully reading the manuscript and making useful comments.}
\section{Introduction} Deep neural networks have achieved promising performance on various tasks in natural language processing, machine intelligence, etc. \cite{bahdanau2014neural,devlin2018bert, silver2016mastering,spampinato2017deep,sun2016deep}. However, the ability of deep neural networks for continual learning, in which the network is expected to continually learn knowledge from sequential tasks \cite{Hsu18_EvalCL}, is limited. The main challenge for continual learning is how to overcome catastrophic forgetting \cite{french1999catastrophic,mccloskey1989catastrophic,ratcliff1990connectionist}, which has drawn much attention recently. In the context of continual learning, a network is trained on a stream of tasks sequentially. The network is required to have \textit{plasticity} to learn new knowledge from the current task, and also \textit{stability} to retain its performance on previous tasks. However, it is challenging to simultaneously achieve plasticity and stability in continual learning for deep networks, and catastrophic forgetting always occurs. This phenomenon is called the \textit{plasticity-stability dilemma} \cite{mermillod2013stability}. Recently, various strategies for continual learning have been explored, including regularization-based, distillation-based, architecture-based, replay-based and algorithm-based strategies. The regularization-based strategy focuses on penalizing the variation of parameters across tasks, such as EWC \cite{kirkpatrick2017overcoming}. The distillation-based strategy is inspired by knowledge distillation, such as LwF \cite{li2017learning}. The architecture-based strategy modifies the architecture of the network across tasks, such as \cite{abati2020conditional,li2019learn}. The replay-based strategy utilizes data from previous tasks or pseudo-data to maintain the network performance on previous tasks, such as \cite{aljundi2019gradient,ostapenko2019learning}. The algorithm-based strategy designs network parameter updating rules to alleviate performance degradation on previous tasks, such as GEM \cite{lopez2017gradient}, A-GEM \cite{chaudhry2018efficient} and OWM \cite{zeng2019continual}. In this paper, we focus on the setting of continual learning where the datasets from previous tasks are inaccessible. We first propose two theoretical conditions, respectively for the stability and plasticity of deep networks in continual learning. Based on them, we design a novel network training algorithm called Adam-NSCL for continual learning, which forces the network parameter update to lie in the null space of the input features of previous tasks at each network layer, as shown in \figref{fig:idea}. The layer-wise null space of input features can be modeled as the null space of the uncentered covariance of these features, which can be incrementally computed after learning each task. Since it is too strict to guarantee the existence of a null space, we approximate the null space of each layer by the subspace spanned by the singular vectors corresponding to the smallest singular values of the uncentered covariance of input features. We embed this strategy into the Adam optimization algorithm by projecting the candidate parameter update generated by Adam \cite{kingma2015adam} into the approximate null space layer by layer, which is flexible and easy to implement.
We conduct various experiments on continual learning benchmarks in the setting that the datasets of previous tasks are unavailable, and the results show that our Adam-NSCL is effective and outperforms state-of-the-art continual learning methods. We also empirically verify the rationality of the approximate null space. The paper is organized as follows. We first introduce related works in \secref{sec:related}. In \secref{sec:method}, we present the mathematical conditions, and then propose the network training algorithm for continual learning in \secref{sec:appro}. In \secref{sec:exp}, we conduct experiments to verify the efficacy of our approach. \begin{figure}[!t] \centering \includegraphics[scale=0.8]{plot_idea.pdf} \caption{To avoid forgetting, we train the network in the layer-wise null space of the corresponding uncentered covariance of all input features of previous tasks.} \label{fig:idea} \end{figure} \section{Related Work}\label{sec:related} We next review the related works of continual learning in the following five categories. \textbf{Regularization-based strategy.} The basic idea of this strategy is to penalize the changes of network parameters when learning the current task to prevent catastrophic forgetting. The typical methods include EWC \cite{kirkpatrick2017overcoming}, SI \cite{zenke2017continual}, MAS \cite{aljundi2018memory}, RWalk \cite{chaudhry2018riemannian} and NPC \cite{paik2020overcoming}. They impose regularization on the network parameters, and each network parameter is associated with an importance weight computed by different methods. These importance weights are required to be stored for all tasks in continual learning. Under the Bayesian framework, VCL \cite{nguyen2018variational}, CLAW \cite{adel2019continual} and IMM \cite{lee2017overcoming} take the posterior distribution of network parameters learned from previous tasks as the prior distribution of network parameters on the current task, which implicitly penalizes the changes of network parameters. \textbf{Distillation-based strategy.} Inspired by knowledge distillation \cite{hinton2015distilling}, this strategy takes the network learned from previous tasks as the teacher and the network being trained on the current task as the student, and then utilizes a distillation term to alleviate performance degradation on previous tasks, such as LwF \cite{li2017learning}, GD-WILD \cite{lee2019overcoming}, lifelong GAN \cite{zhai2019lifelong}, MCIL \cite{liu2020mnemonics}, LwM \cite{dhar2019learning}, etc. Due to the inaccessibility of the full datasets of previous tasks, they commonly use data of the current task \cite{dhar2019learning,li2017learning}, external data \cite{lee2019overcoming}, a coreset of previous tasks \cite{liu2020mnemonics,rebuffi2017icarl} or synthetic data \cite{zhai2019lifelong}, resulting in a distributional shift \cite{2020Generative} from the original datasets. \textbf{Replay-based strategy.} The replay-based strategy trains networks using both data of the current task and ``replayed'' data of previous tasks. Some existing works focus on selecting a subset of data from previous tasks~\cite{aljundi2019gradient,prabhu2020gdumb}, resulting in an imbalance between the scales of the datasets from current and previous tasks~\cite{wu2019large,zhao2020maintaining}. An alternative approach is to learn a generative model to generate synthetic data to substitute for the original data~\cite{hu2018overcoming,kemker2018fearnet,osta2019learning,shin2017continual}.
These methods do not need to store data of previous tasks; however, their performance is significantly affected by the quality of the generated data, especially for complex natural images. \textbf{Architecture-based strategy.} In this strategy, the network architecture is dynamically modified by expansion or masking operations when encountering new tasks in continual learning. Methods of network expansion modify the network architecture by increasing the network width or depth to break its representational limit when facing new tasks \cite{hung2019compacting,li2019learn,yoon2018lifelong}. This strategy may result in a powerful but redundant network that is computationally expensive and memory-intensive. An alternative approach is to assign different sub-networks to different tasks by masking the neurons \cite{abati2020conditional,rajasegaran2019random,serra2018overcoming} or weights \cite{mallya2018piggyback}. The mask associated with each task needs to be learned and stored in memory. \textbf{Algorithm-based strategy.} This strategy performs continual learning from the perspective of the network training algorithm. It focuses on designing network parameter updating rules to guarantee that network training on the current task does not deteriorate the performance on previous tasks. GEM \cite{lopez2017gradient} computes the parameter update by solving a quadratic optimization problem constraining the angle between the parameter update and the gradients of the network parameters on data of previous tasks. A-GEM~\cite{chaudhry2018efficient} is an improved GEM that avoids solving the quadratic optimization problem. A-GEM constrains the network parameter update to be well aligned with a reference gradient computed from a random batch of data from the previous tasks. Both GEM and A-GEM need to store data of previous tasks. Different from GEM and A-GEM, OWM \cite{zeng2019continual} projects the parameter update into the orthogonal complement of the space spanned by the input features of each linear layer. The computation of the projection matrix relies on an unstable matrix inversion. Our method, called Adam-NSCL, is a novel network training algorithm for continual learning, which trains networks in the approximate null space of the feature covariance matrices of previous tasks to balance network plasticity and stability. Our method does not require designing regularizers, revising the network architecture, or using replayed data. Compared with OWM, Adam-NSCL relies on the null space of the feature covariance for achieving plasticity and stability, with theoretical and empirical analysis, and overcomes the unstable matrix inversion in OWM. More discussions on the differences are in \secref{expres}. \section{Analysis of Stability and Plasticity}\label{sec:method} In this section, we first present the preliminaries on the setting of continual learning, then propose mathematical conditions on the network parameter updates for stability and plasticity, as the basis of our algorithm in \secref{sec:appro}. \subsection{Preliminaries} In the setting of continual learning, a network $f$ with parameters $\mathbf{w}$ is sequentially trained on a stream of tasks $\{\mathcal{T}_1, \mathcal{T}_2,\dots\}$, where task $\mathcal{T}_t$ is associated with a paired dataset $\{X_t, Y_t\}$ of size $n_t$. The output of network $f$ on data $X_t$ is denoted as $f(X_t,\mathbf{w})$.
The initial parameters of the network $f$ with $L$ linear layers on task $\mathcal{T}_t$ are set as $\tilde{\mathbf{w}}_{t-1}=\{\tilde{w}^1_{{t-1}}, \dots, \tilde{w}^L_{{t-1}}\}$, which are the optimal parameters after training on task $\mathcal{T}_{t-1}$. When training $f$ on task $\mathcal{T}_t$ at the $s$-th training step, we denote the network parameters as $\mathbf{w}_{t,s}=\{w^1_{{t,s}},\dots, w^L_{{t,s}}\}$. Correspondingly, the parameter update at the $s$-th training step on task $\mathcal{T}_t$ is denoted as $\Delta \mathbf{w}_{t,s}=\{\Delta w^1_{{t,s}},\dots, \Delta w^L_{{t,s}}\}$. When feeding data $X_p$ from task $\mathcal{T}_p$ $(p\leq t)$ to $f$ with the optimal parameters $\tilde{\mathbf{w}}_{t}$ on task $\mathcal{T}_t$, the input feature and output feature at the $l$-th linear layer are denoted as $X^l_{{p,t}}$ and $O^l_{{p,t}}$, and then $$O^l_{{p,t}}=X^l_{{p,t}} \tilde{w}^l_{{t}}, \ \ X^{l+1}_{{p,t}}=\sigma_l(O^l_{{p,t}})$$ with $\sigma_l$ as the nonlinear function and $X^1_{{p,t}}=X_p$. For a convolutional layer, we can reformulate the convolution as the above matrix multiplication, which unifies it with the fully-connected layer. Specifically, for each 3-D feature map, we flatten each patch into a row vector, where the patch size is the same as the size of the corresponding 3-D convolutional kernel, and the number of patches equals the number of times the kernel slides over the feature map during convolution. These row vectors are then concatenated to construct the 2-D feature matrix $X^l_{{p,t}}$. The 3-D kernels at the same layer are flattened into column vectors of the 2-D parameter matrix. \subsection{Conditions for continual learning}\label{sec:suff} When being trained on the current task $\mathcal{T}_t$ without training data from previous tasks, the network $f$ is expected to perform well on the previous tasks, which is challenging since the network suffers from catastrophic forgetting. To alleviate the plasticity-stability dilemma, we propose two conditions for continual learning that guarantee stability and plasticity, respectively, as follows. To derive Condition \ref{prop1} for network stability, we first present, in Lemma \ref{lemma1} and Lemma \ref{lemma2}, conditions on the network parameter update that retain the training performance on previous tasks when training on succeeding tasks; these conditions depend on data of previous tasks. Then, we propose the equivalent Condition \ref{prop1}, which is free from storing data of previous tasks. After that, we present Condition \ref{prop2} for network plasticity. \begin{lemma}\label{lemma1} Given the data $X_p$ from task $\mathcal{T}_p$, assume the network $f$ with $L$ linear layers is trained on task $\mathcal{T}_t$ ($t>p$). If the network parameter update $\Delta w^l_{{t,s}}$ lies in the null space of $X^l_{{p,t-1}}$, \ie, \begin{equation}\label{sc1} X^l_{p,t-1} \Delta w^l_{{t,s}} = 0, \end{equation} at each training step $s$, for the $l$-th layer of $f$ $(l=1,\dots,L)$, we have $X^l_{{p,t}}=X^l_{p,t-1}$ and $f(X_p,\tilde{\mathbf{w}}_{t-1})=f(X_p,\tilde{\mathbf{w}}_{t})$. \end{lemma} \begin{proof} Please refer to the supplemental material. \end{proof} Lemma \ref{lemma1} tells us that, when we train the network on task $\mathcal{T}_t$, the network retains its training loss on data $X_p$ throughout the training process if the network parameter update satisfies Eqn.~(\ref{sc1}) at each training step. Considering that we initialize the parameters $\mathbf{w}_{t,0}$ by $\tilde{\mathbf{w}}_{t-1}$, \ie, the optimal parameters on task $t-1$ for $t>1$, we have the following corollary.
\begin{cor}\label{cor1} Assume that network $f$ is sequentially trained on tasks $\{\mathcal{T}_1,\mathcal{T}_2,\cdots\}$. For each task $\mathcal{T}_t$ $(t>1)$ and $p<t$, if Eqn. \eqref{sc1} holds at every training step on task $\mathcal{T}_t$, we have $X^l_{{p,p}}=X^l_{p,t}$ $(l = 1, \cdots, L)$ and $f(X_p,\tilde{\mathbf{w}}_{t})=f(X_p,\tilde{\mathbf{w}}_{p})$. \end{cor} Corollary \ref{cor1} suggests that the training loss on data $X_p$ is retained if the network trained on the following tasks satisfies Eqn.~(\ref{sc1}) and the network parameters at each task are initialized by the trained network of the last task. We further denote $ \bar{X}^l_{t-1}=[{X_{1,1}^l}^{\top},\cdots,{X_{t-1,t-1}^l}^{\top}]^\top$, which is the concatenation of the input features of the $l$-th network layer on each task's data $X_p$ ($p < t$), computed with the trained network parameters of task $\mathcal{T}_p$. Then the following lemma holds. \begin{lemma}\label{lemma2} Assume that $f$ is being trained on task $\mathcal{T}_t$ ($t>1$). If $\Delta w^l_{{t,s}}$ lies in the null space of $\bar{X}^l_{t-1}$ at each training step $s$, \ie, \begin{equation} \bar{X}^l_{t-1} \Delta w^l_{{t,s}}=0, \end{equation} for $l=1,\cdots,L$, we have $f(X_p,\tilde{\mathbf{w}}_{t})=f(X_p, \tilde{\mathbf{w}}_{p})$ for all $p=1,\cdots,t-1$. \end{lemma} Lemma \ref{lemma2} guarantees the stability of $f$. However, it is inefficient since it requires storing all features $X^l_{{p,p}}$ of $f$ for all $p<t$, which is memory-prohibitive. To overcome this limitation, we propose the following Condition \ref{prop1} based on the uncentered feature covariance $\bar{\mathcal{X}}^l_{t-1}\triangleq\frac{1}{\bar{n}_{t-1}} (\bar{X}^{l}_{t-1})^\top \bar{X}^l_{t-1}$ to guarantee stability, where $\bar{n}_{t-1}$ is the total number of seen data, \ie, the number of rows of $\bar{X}^l_{t-1}$. \begin{prop}[stability] \label{prop1} When $f$ is being trained on task $\mathcal{T}_t$, $\Delta w^l_{{t,s}}$ at each training step $s$ should lie in the null space of the uncentered feature covariance matrix $\bar{\mathcal{X}}^l_{t-1}$ for $l = 1, \cdots, L$, \ie, \begin{equation}\label{eqcond1} \bar{\mathcal{X}}^l_{t-1} \Delta w^l_{{t,s}}=0. \end{equation} \end{prop} It is easy to verify that the null space of $\bar{X}^l_{t-1}$ equals the null space of the uncentered feature covariance $\bar{\mathcal{X}}^l_{t-1}$. Therefore, if Condition \ref{prop1} holds, we have $f(X_p,\tilde{\mathbf{w}}_{t})=f(X_p,\tilde{\mathbf{w}}_{p})$ for all $p<t$ according to Lemma \ref{lemma2}, \ie, the performance of $f$ on previous task data will not be degraded after learning the current task. \textit{\textbf{Memory analysis.}} The memory consumption of storing $\bar{\mathcal{X}}^l_{t}$ is fixed, independent of the number of tasks and data. Specifically, if we denote the dimension of the feature at layer $l$ as $h^l$, the size of $\bar{X}^l_{t}$ is $\bar{n}_{t} \times h^l$ (usually $\bar{n}_{t}\gg h^l$), while the size of $\bar{\mathcal{X}}^l_{t}$ is $h^l \times h^l$. Therefore, Condition \ref{prop1} provides a more memory-efficient way to guarantee the stability of $f$ than Lemma \ref{lemma2}. Condition \ref{prop1} thus guarantees the stability of the network in continual learning.
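As a quick numerical sanity check of Condition \ref{prop1}, consider the following self-contained NumPy sketch (the dimensions, random data and variable names are ours and purely illustrative): when the features of previous tasks span only a proper subspace, the uncentered covariance is rank-deficient, and any update projected into its null space leaves the layer outputs on those features unchanged, as required by Eqn.~(\ref{sc1}).
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

# 100 samples of 8-dim features spanning only a 5-dim subspace.
X = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 8))
cov = X.T @ X / 100              # uncentered covariance, 8 x 8, rank 5

U, S, _ = np.linalg.svd(cov)     # S sorted in descending order
U2 = U[:, S < 1e-10]             # exact null space in this toy case

g = rng.standard_normal((8, 4))  # an arbitrary candidate update
dw = U2 @ (U2.T @ g)             # projected update

print(np.abs(X @ dw).max())      # ~1e-14: X dw = 0 up to round-off
\end{verbatim}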
The other requirement of continual learning is the plasticity of $f$, \ie, the ability to acquire new knowledge from the current task. The following condition guarantees the plasticity of the network $f$. \begin{prop}[plasticity] \label{prop2} Assume that the network $f$ is being trained on task $\mathcal{T}_t$, and $\mathbf{g}_{t,s}=\{g^1_{t,s},\dots,g^L_{t,s}\}$ denotes the parameter update generated by a gradient-descent training algorithm for training $f$ at training step $s$. Then $\langle\Delta \mathbf{w}_{t,s}, \mathbf{g}_{t,s}\rangle > 0$ should hold, where $\langle \cdot, \cdot\rangle$ denotes the inner product. \end{prop} If the parameter update $\Delta\mathbf{w}_{t,s}$ satisfies Condition \ref{prop2} when training $f$ on task $\mathcal{T}_t$, the training loss after updating the parameters using $\Delta\mathbf{w}_{t,s}$ will decrease, \ie, the network can be trained on this task. Please see the supplemental material for the proof. \begin{figure}[!tp] \centering \includegraphics[scale=0.75]{plot_alg.pdf} \caption{The pipeline of our algorithm.} \label{figalg} \end{figure} \section{Network Training in Covariance Null Space}\label{sec:appro} In this section, we propose a novel network training algorithm called Adam-NSCL for continual learning, based on Adam and the two proposed conditions for stability and plasticity. In continual learning, we maintain layer-wise feature covariance matrices, which are incrementally updated using the features of the incoming task. Given the current task, starting from the network trained on previous tasks, we update the network parameters on the current task using Adam-NSCL, where the candidate parameter update generated by Adam is projected into the approximate null space of the corresponding feature covariance matrix layer by layer to balance the network stability and plasticity. Figure \ref{figalg} illustrates the pipeline of our proposed continual learning algorithm. After learning task $\mathcal{T}_{t-1}$, we feed $X_{t-1}$ to the learned network $f(\cdot,\tilde{\mathbf{w}}_{t-1})$ to get the input feature $X^l_{t-1}$ at each layer. Then we compute the uncentered covariance ${\mathcal{X}}^l_{t-1}=\frac{1}{n_{t-1}} (X^l_{t-1})^\top X^l_{t-1}$ with $n_{t-1}$ as the number of data on task $\mathcal{T}_{t-1}$. Subsequently, we update the uncentered feature covariance matrix for each layer by \begin{equation} \bar{\mathcal{X}}^l_{t-1}=\frac{\bar{n}_{t-2}}{\bar{n}_{t-1}} \bar{\mathcal{X}}^l_{t-2}+ \frac{n_{t-1}}{\bar{n}_{t-1}} {\mathcal{X}}^l_{t-1}, \end{equation} with $\bar{n}_{t-1}=\bar{n}_{t-2} + n_{t-1}$ as the total number of seen data. Following that, we compute the approximate null space of $\bar{\mathcal{X}}^l_{t-1}$. When training the network with $\tilde{\mathbf{w}}_{t-1}$ as initialization on task $\mathcal{T}_t$, we first utilize Adam to generate a candidate parameter update $\mathbf{g}_{t,s}$ at the $s$-th step ($s=1,2,\dots$), then project $\mathbf{g}_{t,s}$ into the approximate null space of $\bar{\mathcal{X}}^l_{t-1}$ layer by layer to get the parameter update $\Delta \mathbf{w}_{t,s}$. In the following, we first introduce the derivation of the approximate null space in \secref{sec:appns}, and discuss the projection satisfying Conditions \ref{prop1} and \ref{prop2}. Subsequently, we present our proposed continual learning algorithm in \AlgRef{alg} and the way to find the approximate null space in \AlgRef{alg:null}. \subsection{Approximate null space}\label{sec:appns} According to Condition \ref{prop1}, for the stability of the network in continual learning, we can force the parameter update to lie in the null space of the uncentered covariance of all previous input features at each network layer. However, this requirement is too strict to guarantee the existence of a non-trivial null space.
Therefore, we propose \AlgRef{alg:null} to find the approximate null space based on the SVD of the uncentered covariance matrix. By applying SVD to $\bar{\mathcal{X}}^l_{t-1}$, we have \begin{equation} U^l, \Lambda^l, {(U^{l})^\top}=\text{SVD}(\bar{\mathcal{X}}^l_{t-1}), \end{equation} where $U^l=[U^l_{1}, U^l_{2}]$ and $\Lambda^l=\begin{bmatrix} \Lambda^l_{1} & 0\\ 0&\Lambda^l_{2} \end{bmatrix}$. If all zero singular values are collected in $\Lambda^l_{2}$, \ie, $\Lambda^l_{2}=0$, then $\bar{\mathcal{X}}^l_{t-1} U^l_{2} = U^l_{1}\Lambda^l_{1} (U^l_{1})^\top U^l_{2}=0$ holds, since $U^l$ is a unitary matrix. This suggests that the range space of $U^l_{2}$ is the null space of $\bar{\mathcal{X}}^l_{t-1}$. Thus we can get the parameter update $\Delta w^l_{{t,s}}$ lying in the null space of $\bar{\mathcal{X}}^l_{t-1}$ by \begin{equation} \label{update} \Delta w^l_{{t,s}}=U^l_{2} (U^l_{2})^\top g^l_{{t,s}} \end{equation} with $U^l_{2} (U^l_{2})^\top$ as the projection operator \cite[Eqn. (5.13.4)]{meyer2000matrix}. Thus we get $\Delta w^l_{{t,s}}$ satisfying Condition \ref{prop1}. \begin{figure}[!bp] \centering \includegraphics[scale=0.5]{eigen_img.pdf} \caption{Singular values of the uncentered covariance matrix at different layers of a ResNet-18 pretrained on ImageNet ILSVRC 2012. Orange curves denote the singular values smaller than $50\lambda^l_{\text{min}}$.} \label{svf} \end{figure} \begin{algorithm}[!t] \caption{Adam-NSCL for continual learning} \label{alg} \textbf{Inputs:} Datasets $\{{X}_t, {Y}_t\}$ for task $\mathcal{T}_t\in \{\mathcal{T}_1, \mathcal{T}_2,\dots\}$; network $f(\cdot, \mathbf{w})$ with $L$ linear layers; learning rate $\alpha$. \\ \textbf{Initialization:} Initialize $\tilde{\mathbf{w}}_{0}$ randomly, $\bar{\mathcal{X}}_0^l=0$, number of seen data $\bar{n}_0=0$. \begin{algorithmic}[1] \STATE{\textit{\# sequential tasks}} \FOR{task $\mathcal{T}_t\in \{\mathcal{T}_1, \mathcal{T}_2,\dots\}$} \IF{$t>1$} \STATE{\textit{\# compute the approximate null space}} \STATE{Get $U^l_{2}$, $\bar{\mathcal{X}}^l_{{t-1}}$ and $\bar{n}_{t-1}$ $(l=1,\dots,L)$ by \AlgRef{alg:null} with $\{{X}_{t-1}, {Y}_{t-1}\}$, $f(\cdot, \tilde{\mathbf{w}}_{t-1})$, $\bar{\mathcal{X}}^l_{{t-2}}$ and $\bar{n}_{t-2}$ as inputs.} \ENDIF \STATE{\textit{\# train $f(\cdot, \mathbf{w})$ on task $\mathcal{T}_t$.}} \STATE{Set $s=0$ and $\mathbf{w}_{t,0}=\tilde{\mathbf{w}}_{t-1}$;} \WHILE{not converged} \STATE{Sample a batch $\{\mathbf{x}, \mathbf{y}\}$ from $\{{X}_t, {Y}_t\}$.} \STATE{Compute $f(\mathbf{x}, \mathbf{w}_{t,s})$, then get candidate parameter update $\mathbf{g}_{t,s}=\{g^1_{{t,s}},\dots,g^L_{{t,s}}\}$ by Adam.} \IF{$t=1$} \STATE{$\Delta w^l_{{t,s}}=g^l_{{t,s}}$, $l=1,\dots, L$} \ELSE \STATE{$\Delta w^l_{{t,s}}=U^l_{2} (U^l_{2})^\top g^l_{{t,s}}$, $l=1,\dots, L$} \ENDIF \STATE{$w^l_{{t,s+1}}=w^l_{{t,s}} - \alpha \Delta w^l_{{t,s}}$, $l=1,\dots, L$} \STATE{$s=s+1$} \ENDWHILE \ENDFOR \end{algorithmic} \end{algorithm} However, it is unrealistic to guarantee that there exist zero singular values. Inspired by Principal Component Analysis, if we regard $U^l_{1}$ as the principal components, $\bar{\mathcal{X}}^l_{t-1}$ can be approximated by $U^l_{1}\Lambda^l_{1} (U^l_{1})^\top$, which indicates that $\bar{\mathcal{X}}^l_{t-1} U^l_{2} \approx U^l_{1}\Lambda^l_{1} (U^l_{1})^\top U^l_{2}=0$, \ie, we can take the range space of $U^l_{2}$ as the approximate null space of $\bar{\mathcal{X}}^l_{t-1}$, where $U^l_{2}$ corresponds to the smallest singular values in $\Lambda^l_{2}$.
We adaptively select $\Lambda^l_{2}$ with diagonal singular values $\lambda \in \{\lambda|\lambda \leq a \lambda^l_{{\text{min}}}\}$ $(a>0)$, where $\lambda^l_{{\text{min}}}$ is the smallest singular value. Furthermore, to empirically verify the rationality of the approximation, we utilize the proportion $R$ of $\bar{\mathcal{X}}^l_{t-1}$ explained by $U^l_{2}$ \cite{jolliffe2016principal} as \begin{equation}\label{ratio} R=\frac{\Sigma_{\lambda\in \mbox{diag}\{\Lambda^l_{2}\}}\lambda}{\Sigma_{\lambda\in \mbox{diag}\{\Lambda^l\}}\lambda}, \end{equation} where ``$\mbox{diag}$'' denotes the diagonal elements. If $R$ is small, the sum of the singular values in $\Lambda^l_2$ is negligible, suggesting that it is reasonable to approximate the null space of the uncentered covariance matrix by the range space of $U^l_2$. \begin{algorithm}[!t] \caption{Updating the uncentered covariance incrementally and computing the null space.} \label{alg:null} \textbf{Inputs:} Dataset $\{{X}_{t-1}, {Y}_{t-1}\}$ of size $n_{t-1}$ for task $\mathcal{T}_{t-1}$; network $f(\cdot, \tilde{\mathbf{w}}_{t-1})$; $\{\bar{\mathcal{X}}^l_{{t-2}}\}_{l=1}^L$; hyperparameter $a>0$; number of seen data $\bar{n}_{t-2}$.\\ \textbf{Output:} $U^l_2$; $\{\bar{\mathcal{X}}^l_{{t-1}}\}_{l=1}^L$; $\bar{n}_{t-1}$ \begin{algorithmic}[1] \STATE{\textit{\# Compute the uncentered covariance on task $\mathcal{T}_{t-1}$}} \STATE{Initialize uncentered covariance matrices $\mathcal{X}^l_{t-1}$ on task $\mathcal{T}_{t-1}$ as 0 for $l=1,\dots,L$.} \FOR{batch $\{\mathbf{x}, \mathbf{y}\}$ from $\{{X}_{t-1}, {Y}_{t-1}\}$} \STATE{Get the input feature $\mathbf{x}^l$ at the $l$-th layer $(l=1,\dots,L)$ by forward propagating $\mathbf{x}$ on $f(\cdot,\tilde{\mathbf{w}}_{t-1})$.\\} \STATE{${\mathcal{X}}^l_{t-1} ={\mathcal{X}}^l_{t-1} + (\mathbf{x}^l)^\top\mathbf{x}^l$ for $l=1,\dots,L$.} \ENDFOR \STATE{${\mathcal{X}}^l_{t-1} = \frac{1}{n_{t-1}} {\mathcal{X}}^l_{t-1}$ for $l=1,\dots, L$.} \STATE{$\bar{n}_{t-1}=\bar{n}_{t-2}+n_{t-1}$.} \STATE{\textit{\# Update the uncentered covariance $\bar{\mathcal{X}}^l_{t-2}$.}} \STATE{$\bar{\mathcal{X}}^l_{t-1}=\frac{\bar{n}_{t-2}}{\bar{n}_{t-1}} \bar{\mathcal{X}}^l_{t-2}+ \frac{n_{t-1}}{\bar{n}_{t-1}} {\mathcal{X}}^l_{t-1}$, $l=1,\dots,L$.} \STATE{\textit{\# Compute the approximate null space for each layer}} \STATE{$U^l, \Lambda^l, {(U^{l})^\top}=$SVD($\bar{\mathcal{X}}^l_{t-1}$).} \STATE{Get $\Lambda^l_{2}$ with diagonal singular values $\lambda \in \{\lambda|\lambda\leq a\lambda^l_{\text{min}}\}$, where $\lambda^l_{\text{min}}$ is the smallest singular value.} \STATE{Get singular vectors $U^l_{2}$ that correspond to $\Lambda^l_{2}$.} \end{algorithmic} \end{algorithm} \textbf{Example}. We take a ResNet-18 pretrained on ImageNet ILSVRC 2012 \cite{ILSVRC15} as an example. Figure \ref{svf} shows the curves of singular values of the uncentered covariance matrix $\bar{\mathcal{X}}^l_{t}$ of each linear layer indexed by $l$, with $a=50$. All proportions $R$ of different layers are smaller than 0.05, indicating that the selected $U^l_{2}$ corresponding to the smallest singular values contributes negligibly to explaining $\bar{\mathcal{X}}^l_{t}$.
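For concreteness, lines 12-14 of \AlgRef{alg:null} and the projection of Eqn.~(\ref{update}) can be sketched in a few lines of NumPy (an illustrative sketch with our own function names, not the released implementation):
\begin{verbatim}
import numpy as np

def approximate_null_space(cov, a):
    # Lines 12-14 of Algorithm 2: SVD of the uncentered covariance,
    # then keep the singular vectors whose singular values satisfy
    # lambda <= a * lambda_min.
    U, S, _ = np.linalg.svd(cov)   # S is sorted in descending order
    return U[:, S <= a * S[-1]]    # U_2

def project_update(U2, g):
    # Project a candidate update g into the approximate null space:
    # Delta w = U2 U2^T g.
    return U2 @ (U2.T @ g)
\end{verbatim}
During training, this projection is applied layer by layer to the candidate update produced by Adam (line 15 of \AlgRef{alg}).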
Given that the proportions $R$ are small, it is reasonable to approximate the null space by the range space of $U^l_{2}$. In summary, for a new task $\mathcal{T}_t$, our continual learning algorithm (shown in \figref{figalg}) projects the parameter update $\mathbf{g}_{t, s}=\{g^l_{t,s}\}_{l=1}^L$ at the $s$-th training step generated by Adam into the approximate null space layer by layer, and obtains the parameter update $\Delta \mathbf{w}_{t,s}=\{\Delta {w}^l_{t,s}\}_{l=1}^L$ following Eqn.~(\ref{update}). We prove that $\langle\Delta \mathbf{w}_{t,s}, \mathbf{g}_{t,s}\rangle \geq0$ always holds, which can be found in the supplemental material. To guarantee that the network can be trained on task $\mathcal{T}_t$ using the above parameter updating rule, $\langle\Delta \mathbf{w}_{t,s}, \mathbf{g}_{t,s}\rangle >0$ is required, as discussed in Condition \ref{prop2}. We empirically find that this holds in our experiments, and our algorithm succeeds in decreasing the training losses on sequential training tasks. Our Adam-NSCL is summarized in \AlgRef{alg}. The training process loops over incoming tasks, and task $\mathcal{T}_t$ ($t>1$) is trained by Adam with gradients projected into the null space of the accumulated covariance (line 15 in \AlgRef{alg}). The null space is obtained by \AlgRef{alg:null} after learning task $\mathcal{T}_{t-1}$. In \AlgRef{alg:null}, we first feed all training data of task $\mathcal{T}_{t-1}$ to accumulate the covariance in lines 3-8, and update the uncentered covariance in line 10. Then we compute the approximate null space in lines 12-14 by SVD. The hyperparameter $a$ controls the balance between stability and plasticity. A larger $a$ means that we use a larger approximate null space $U_2^l$ covering more small singular values; then the null space condition in Eqn.~(\ref{eqcond1}) holds less accurately, reducing the stability in continual learning. On the other hand, a larger approximate null space enables us to update the network parameters in a larger subspace, increasing the plasticity to learn knowledge from the current task. \section{Experiments}\label{sec:exp} We apply the Adam-NSCL algorithm to different sequential tasks for continual learning\footnote{https://github.com/ShipengWang/Adam-NSCL}. After introducing the experimental setting, we show the results compared with state-of-the-art methods, following which we empirically analyze our algorithm. \subsection{Experimental setting}\label{sec:expset} We first describe the experimental settings on datasets, implementation details, metrics and compared methods. \textbf{Datasets}. We evaluate on continual learning datasets, including 10-split-CIFAR-100, 20-split-CIFAR-100 and 25-split-TinyImageNet. {10-split-CIFAR-100} and {20-split-CIFAR-100} are constructed by splitting CIFAR-100 \cite{krizhevsky2009learning} into 10 and 20 tasks, respectively, where the classes in different tasks are disjoint. {25-split-TinyImageNet} is constructed by splitting TinyImageNet \cite{wu2017tiny}, which contains $64\times64$ RGB images, into 25 tasks; this is a harder setting due to the longer task sequence. \textbf{Implementation details}. Adam-NSCL is implemented using PyTorch \cite{pytorch}. We take ResNet-18 \cite{he2016identity} as the backbone network in our experiments. All tasks share the same backbone network, but each task has its own classifier. The classifier is fixed after the model is trained on the corresponding task. For batch normalization layers, we regularize their parameters using EWC \cite{kirkpatrick2017overcoming}.
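As an implementation note, the layer-wise feature accumulation of lines 3-8 in \AlgRef{alg:null} can be realized with forward pre-hooks. The following is a minimal sketch (our own illustrative code assuming standard PyTorch APIs, with convolutional inputs unfolded into patch rows as described in the preliminaries; it is not the released implementation):
\begin{verbatim}
import torch
import torch.nn.functional as F

def attach_covariance_hooks(model, cov_dict):
    # Accumulate the unnormalized uncentered covariance (x^l)^T x^l of
    # the inputs of every linear/conv layer during forward passes; the
    # division by the number of accumulated rows is done afterwards.
    handles = []
    for name, m in model.named_modules():
        if isinstance(m, (torch.nn.Linear, torch.nn.Conv2d)):
            def hook(mod, inputs, name=name):
                x = inputs[0].detach()
                if isinstance(mod, torch.nn.Conv2d):
                    # One row per kernel position (im2col unfolding).
                    x = F.unfold(x, mod.kernel_size, mod.dilation,
                                 mod.padding, mod.stride)
                    d = x.size(1)                 # C * k_h * k_w
                    x = x.transpose(1, 2).reshape(-1, d)
                else:
                    x = x.reshape(-1, x.size(-1))
                cov_dict[name] = cov_dict.get(name, 0) + x.t() @ x
            handles.append(m.register_forward_pre_hook(hook))
    return handles
\end{verbatim}
The accumulated matrices are then normalized by the number of accumulated rows and merged with $\bar{\mathcal{X}}^l_{t-2}$ as in line 10 of \AlgRef{alg:null}.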
The learning rate starts from $5\e{-5}$ and decays at epochs 30 and 60 with a multiplier of 0.5 (80 epochs in total). The batch size is set to 32 for 10-split-CIFAR-100 and 16 for the other two datasets. The regularization coefficient of EWC for the batch normalization parameters is set to 100. To approximate the null space of the uncentered covariance at each linear layer, we set $a=30$ for 20-split-CIFAR-100 and $a=10$ for the other two datasets. We also study the effect of $a$ in \secref{sec:ana}.
\textbf{Compared methods}. We compare our method with various continual learning methods, including \textit{EWC} \cite{kirkpatrick2017overcoming}, \textit{MAS} \cite{aljundi2018memory}, \textit{MUC-MAS} \cite{muc2020liu}, \textit{SI} \cite{osta2019learning}, \textit{LwF} \cite{li2017learning}, \textit{GD-WILD} \cite{lee2019overcoming}, \textit{GEM} \cite{lopez2017gradient}, \textit{A-GEM} \cite{chaudhry2018efficient}, \textit{MEGA} \cite{guo2020improved}, \textit{InstAParam} \cite{mitigating} and \textit{OWM} \cite{zeng2019continual}. For a fair comparison, the backbone networks employed in these methods are all ResNet-18. EWC, MAS, MUC-MAS and SI regularize the changes of parameters across tasks, where each parameter is associated with an importance weight. LwF and GD-WILD are based on knowledge distillation, using different datasets for distillation to preserve the knowledge learned on previous tasks. GEM, A-GEM, MEGA and OWM focus on designing network training algorithms to overcome forgetting. InstAParam is based on an architecture-based strategy. Among these methods, EWC, MAS, MUC-MAS and SI need to store the importance weights in memory; GD-WILD, GEM, A-GEM and MEGA need to store data of previous tasks; and OWM needs to store the projection matrix, which is incrementally computed with an approximate matrix inversion.
\textbf{Evaluation metrics}. We employ the evaluation metrics proposed in \cite{lopez2017gradient}, including backward transfer (BWT) and average accuracy (ACC). BWT is the average drop in test accuracy on previous tasks after learning the current task. A negative BWT value indicates the degree of forgetting in continual learning. ACC is the average accuracy of the network on the test sets of all seen tasks. With similar ACC, the method having larger BWT is better.
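For concreteness, a minimal sketch of the two metrics, following the definitions of \cite{lopez2017gradient} (illustration only; the function name is ours):
\begin{verbatim}
# acc[i][j]: test accuracy on task j after
# training on task i, for T tasks in total.
def acc_bwt(acc):
    T = len(acc)
    ACC = sum(acc[T-1][j] for j in range(T)) / T
    BWT = sum(acc[T-1][j] - acc[j][j]
              for j in range(T-1)) / (T-1)
    return ACC, BWT
\end{verbatim}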
\subsection{Experimental results}\label{expres}
We next compare the different continual learning algorithms. The detailed comparative results on the three datasets are presented as follows.
\begin{table}[!tbp]
\begin{center}
\scalebox{0.89}{
\begin{tabular}{l|cc}
\toprule
\multirow{2}{5pt}{Method} & \multicolumn{2}{|c}{10-split-CIFAR-100} \\
\cmidrule{2-3}
&ACC (\%) & BWT (\%) \\
\midrule
EWC \cite{kirkpatrick2017overcoming} &70.77& -2.83 \\
MAS \cite{aljundi2018memory} &66.93& -4.03 \\
MUC-MAS \cite{muc2020liu} &63.73& -3.38\\
SI \cite{osta2019learning} &60.57& -5.17\\
LwF \cite{li2017learning} &70.70& -6.27 \\
InstAParam \cite{mitigating} & 47.84& -11.92\\
${}^*$GD-WILD \cite{lee2019overcoming} & 71.27& -18.24 \\
${}^*$GEM \cite{lopez2017gradient} &49.48& 2.77 \\
${}^*$A-GEM \cite{chaudhry2018efficient} &49.57& -1.13 \\
${}^*$MEGA \cite{guo2020improved} &54.17&-2.19\\
OWM \cite{zeng2019continual} &68.89& -1.88\\
\midrule
Adam-NSCL &\textbf{73.77}& -1.6 \\
\bottomrule
\end{tabular}
}
\end{center}
\caption{Comparisons of ACC and BWT for ResNet-18 sequentially trained on 10-split-CIFAR-100 using different methods. Methods that need to store previous data are denoted by ${}^*$.}
\label{final10}
\end{table}
\begin{figure}[!b]
\centering
\includegraphics[scale=0.45]{eigen_cifar.pdf}
\caption{Proportion values of $R$ and singular values of the uncentered covariance matrix at the 2nd, 7th, 12th and 17th linear layers of the network trained on 10-split-CIFAR-100. Orange curves denote the singular values smaller than $10\lambda^l_{\text{min}}$.}
\label{exp1eigen}
\end{figure}
\textbf{10-split-CIFAR-100}. The comparative results on 10-split-CIFAR-100 are illustrated in \tabref{final10}, where the proposed Adam-NSCL achieves the highest ACC of 73.77\% with a competitive BWT of -1.6\%. The BWT values of GEM and A-GEM are better than Adam-NSCL's; however, their ACC values are 49.48\% and 49.57\%, significantly lower than ours. EWC, LwF and GD-WILD achieve marginally worse ACC than Adam-NSCL, but the BWT values of LwF and GD-WILD are much lower. Both the ACC and BWT values of MAS, MUC-MAS and SI are much lower than ours. OWM has a BWT comparable to our Adam-NSCL's, but its ACC is 4.88\% lower than ours. Overall, our Adam-NSCL is the most preferable method among all these compared methods for continual learning. To justify the rationality of the approximate null space, we show in \figref{exp1eigen} the curves of singular values in descending order and the proportion values of $R$ defined in Eqn.~\eqref{ratio} for the 2nd, 7th, 12th and 17th layers of the network on sequential tasks. As the results indicate, all proportion values are smaller than 0.05, indicating that it is reasonable to take the range space of the insignificant components $U_2^l$ as the approximate null space at the $l$-th layer $(l=1,\dots,L)$.
\begin{table}[!tbp]
\begin{center}
\scalebox{0.89}{
\begin{tabular}{l|cc}
\toprule
\multirow{2}{5pt}{Method} & \multicolumn{2}{|c}{20-split-CIFAR-100} \\
\cmidrule{2-3}
&ACC (\%) & BWT (\%) \\
\midrule
EWC \cite{kirkpatrick2017overcoming} &71.66& -3.72 \\
MAS \cite{aljundi2018memory} &63.84& -6.29 \\
MUC-MAS \cite{muc2020liu} &67.22& -5.72\\
SI \cite{osta2019learning} &59.76& -8.62\\
LwF \cite{li2017learning} &74.38& -9.11\\
InstAParam \cite{mitigating} &51.04&-4.92\\
${}^*$GD-WILD \cite{lee2019overcoming} & \textbf{77.16}& -14.85 \\
${}^*$GEM \cite{lopez2017gradient} & 68.89& -1.2 \\
${}^*$A-GEM \cite{chaudhry2018efficient} &61.91& -6.88 \\
${}^*$MEGA \cite{guo2020improved}&64.98&-5.13\\
OWM \cite{zeng2019continual} &68.47& -3.37\\
\midrule
Adam-NSCL &\textbf{75.95}& -3.66 \\
\bottomrule
\end{tabular}
}
\end{center}
\caption{Comparisons of ACC and BWT for ResNet-18 sequentially trained on 20-split-CIFAR-100 using different methods.}
\label{final20}
\end{table}
\textbf{20-split-CIFAR-100}. The comparisons on the 20-split-CIFAR-100 dataset are shown in \tabref{final20}. Our method achieves the second best ACC of 75.95\%. Though GD-WILD achieves 1.21\% higher ACC than ours, its BWT is 11.19\% lower than that of our Adam-NSCL. Furthermore, GD-WILD requires saving data of previous tasks and a large amount of external data. EWC, GEM and OWM achieve 4.29\%, 7.06\% and 7.48\% lower ACC values than our method. LwF has marginally lower ACC than ours, but its BWT value is significantly worse. Other methods, including MAS, MUC-MAS, SI and A-GEM, fail to achieve comparable results. Therefore, our Adam-NSCL outperforms the other compared methods for continual learning.
\begin{table}[!tbp]
\begin{center}
\scalebox{0.9}{
\begin{tabular}{l|cc}
\toprule
\multirow{2}{5pt}{Method} & \multicolumn{2}{|c}{25-split-TinyImageNet} \\
\cmidrule{2-3}
&ACC (\%) & BWT (\%) \\
\midrule
EWC \cite{kirkpatrick2017overcoming} & 52.33 &-6.17\\
MAS \cite{aljundi2018memory} &47.96 &-7.04 \\
MUC-MAS \cite{muc2020liu} &41.18 &-4.03\\
SI \cite{osta2019learning} & 45.27 & -4.45\\
LwF \cite{li2017learning} &56.57 & -11.19\\
InstAParam \cite{mitigating} &34.64&-10.05\\
${}^*$GD-WILD \cite{lee2019overcoming} & 42.74 &-34.58 \\
${}^*$A-GEM \cite{chaudhry2018efficient} &53.32 &-7.68 \\
${}^*$MEGA \cite{guo2020improved} &57.12&-5.90\\
OWM \cite{zeng2019continual} &49.98&-3.64\\
\midrule
Adam-NSCL &\textbf{58.28} &-6.05\\
\bottomrule
\end{tabular}
}
\end{center}
\caption{Performance comparisons for ResNet-18 sequentially trained on 25-split-TinyImageNet using different methods.}
\label{finaltiny}
\end{table}
\begin{figure}[!tbp]
\centering
\includegraphics[scale=0.89]{three.pdf}
\caption{Stability and plasticity analysis. Top: 10-split-CIFAR-100. Middle: 20-split-CIFAR-100. Bottom: 25-split-TinyImageNet.}
\label{ex:balance}
\end{figure}
\textbf{25-split-TinyImageNet}. As shown in \tabref{finaltiny}, on the 25-split-TinyImageNet dataset the proposed Adam-NSCL outperforms the other compared methods, achieving the best ACC of 58.28\% with a comparable BWT of -6.05\%. Though the BWT of Adam-NSCL is marginally lower than those of MUC-MAS, SI and OWM, these compared methods achieve 16.68\%, 13.01\% and 8.3\% lower ACC than ours. LwF achieves the second best ACC, but with a much inferior BWT compared with our Adam-NSCL. With marginally lower BWT, EWC and MAS achieve 5.95\% and 10.32\% lower ACC than Adam-NSCL. We now discuss the difference between Adam-NSCL and OWM~\cite{zeng2019continual}. The main difference between Adam-NSCL and OWM is the way of finding the null space, as discussed in \secref{sec:related}. Computing the projection matrix in OWM relies on an approximate generalized inversion of the feature matrix, and the approximation error accumulates when the projection matrix is updated incrementally. In Adam-NSCL, by contrast, the null space is specified by the uncentered feature covariance, which can be computed incrementally without approximation error. Additionally, Adam-NSCL consistently performs better than OWM on 10-split-CIFAR-100 and 20-split-CIFAR-100, as shown in Tabs. \ref{final10} and \ref{final20}, where Adam-NSCL achieves 4.88\% and 7.48\% higher ACC and similar BWT in comparison with OWM, respectively. On 25-split-TinyImageNet, Adam-NSCL has significantly better ACC and comparable BWT compared with OWM, as shown in \tabref{finaltiny}.
\begin{figure}[!tb]
\centering
\includegraphics[scale=0.60]{loss.pdf}
\caption{Training losses on tasks $\mathcal{T}_1$, $\mathcal{T}_2$ and $\mathcal{T}_3$ as the network is trained on sequential tasks.}
\label{exp:loss}
\end{figure}
\subsection{Model analysis}
\label{sec:ana}
\textbf{Stability and plasticity analysis}. To study the balance between stability and plasticity, which is controlled by $a$, we compare the performance of our Adam-NSCL when varying $a=10,20,30,40,50$. According to \figref{ex:balance}, BWT becomes worse when $a$ is larger, suggesting that the network forgets more learned knowledge of previous tasks with larger $a$.
Since ACC is affected by both stability and plasticity, it first increases and then decreases as $a$ grows, as shown in the middle and bottom sub-figures of \figref{ex:balance}.
\textbf{Evolution of training loss}. To verify that the proposed Adam-NSCL indeed guarantees the stability of network training on sequential tasks, we show in \figref{exp:loss} the training losses on tasks $\mathcal{T}_1, \mathcal{T}_2, \mathcal{T}_3$ after learning new tasks on 10-split-CIFAR-100. According to \figref{exp:loss}, the training losses of the network on previous tasks are retained after learning new tasks, verifying that Adam-NSCL, built on Condition \ref{prop1}, guarantees the stability of the network.
\section{Conclusion}\label{conc}
In this paper, we address the \textit{plasticity-stability dilemma} for continual learning under the constraint that the datasets of previous tasks are inaccessible. We propose two theoretical conditions to guarantee stability and plasticity for the network parameter updates when training on sequential tasks. Then we design a novel continual learning algorithm, Adam-NSCL, which is based on Adam. The candidate parameter update generated by Adam is projected into the approximate null space of the uncentered feature covariance matrix of previous tasks. Extensive experiments show that the proposed algorithm outperforms the compared methods for continual learning. In the future, we will consider further improving the approximation of the null space and conducting a theoretical analysis of our algorithm.
\textbf{Acknowledgment.} This work was supported by NSFC (U20B2075, 11690011, 11971373, U1811461, 12026605) and National Key R\&D Program 2018AAA0102201.
\pagebreak
\twocolumn[
\centering
\textbf{\huge Supplemental Materials}
\vspace{1cm}
]
We first introduce additional notations here. When feeding data $X_p$ from task $\mathcal{T}_p$ $(p\leq t)$ to $f$ with parameters $\mathbf{w}_{t,s}$, the input feature and output feature at the $l$-th linear layer are denoted as $X_{p,t,s}^l$ and $O_{p,t,s}^l$ respectively; then
$$O^l_{{p,t,s}}=X^l_{{p,t,s}} {w}^l_{{t,s}}, \ \ X^{l+1}_{{p,t,s}}=\sigma_l(O^l_{{p,t,s}})$$
with $X_{p,t,s}^1=X_p$. In addition, by denoting the learning rate as $\alpha$, we have
$$w_{t,s}^l = w_{t,s-1}^l - \alpha \Delta w_{t,s-1}^l, \quad l=1,\dots,L.$$
\setcounter{lemma}{0}
\setcounter{prop}{1}
\section*{Appendix A}
In this appendix, we show the proof of Lemma \ref{lemma11} in the manuscript. Lemma \ref{lemma11} tells us that, when we train the network on task $\mathcal{T}_t$, the network retains its training loss on data $X_p$ throughout the training process, provided that the network parameter update satisfies Eqn.~\eqref{1} at each training step. We first recall Lemma \ref{lemma11} as follows, then give the proof.
\begin{lemma}\label{lemma11}
Suppose the network $f$ with $L$ linear layers is trained on task $\mathcal{T}_t$ ($t>p$), and $X_p$ is data from task $\mathcal{T}_p$. If the network parameter update $\Delta w^l_{{t,s}}$ lies in the null space of $X^l_{{p,t-1}}$, \ie,
\begin{equation}\label{1}
X^l_{p,t-1} \Delta w^l_{{t,s}} = 0,
\end{equation}
at each training step $s$, for the $l$-th layer of $f$ $(l=1,\dots,L)$, then we have $X^l_{{p,t}}=X^l_{p,t-1}$ and $f(X_p,\tilde{\mathbf{w}}_{t-1})=f(X_p,\tilde{\mathbf{w}}_{t})$.
\end{lemma}
\begin{proof}
The proof is based on the recursive structure of the network and the iterative training process.
We first prove that $X_{p,t,1}^l=X^l_{p,t-1}$ and $f(X_p, \mathbf{w}_{t,1}) = f(X_p, \tilde{\mathbf{w}}_{t-1})$ hold for $s=1$, and then illustrate that $X_{p,t,s}^l=X^l_{p,t-1}$ and $f(X_p, \mathbf{w}_{t,s}) = f(X_p, \tilde{\mathbf{w}}_{t-1})$ hold for each $s>1$, which implies that Lemma \ref{lemma11} holds. When $s=1$, considering that we initialize the parameters as $\mathbf{w}_{t,0}=\tilde{\mathbf{w}}_{t-1}$, we have
\begin{equation}
X^l_{{p,t,0}} = X^l_{{p,t-1}}, \ \ O^l_{{p,t,0}} = O^l_{{p,t-1}}.
\end{equation}
Therefore, at the first layer $(l=1)$, where $X_{p,t,1}^1=X_{p,t,0}^1=X^1_{{p,t-1}}$ (all of them equal $X_p$),
\begin{align}\label{layer}
O_{p,t,1}^1 & = X_{p,t,1}^1 w_{t,1}^1 \notag \\
&= X_{p,t,0}^1 (w_{t,0}^1 - \alpha \Delta w_{t,0}^1) \notag \\
&= X_{p,t,0}^1 w_{t,0}^1 - \alpha X^1_{p,t-1} \Delta w_{t,0}^1 \notag \\
&=X_{p,t,0}^1 w_{t,0}^1 \notag \\
&=O_{p,t,0}^1,
\end{align}
where the fourth equality holds due to Eqn.~\eqref{1}. Furthermore, we have
\begin{equation}\label{4}
X_{p,t,1}^2=\sigma_1(O_{p,t,1}^1)=\sigma_1(O_{p,t,0}^1)=X_{p,t,0}^2=X_{p,t-1}^2,
\end{equation}
\ie, the input feature $X_{p,t,1}^2$ equals $X_{p,t-1}^2$ at the second linear layer. Based on this, we can recursively prove that
$$O_{p,t,1}^l=O_{p,t,0}^l=O_{p,t-1}^l$$
and
$$X_{p,t,1}^l=X_{p,t,0}^l=X_{p,t-1}^l$$
for $l=3, \dots, L$ by replacing $l=1$ with $l=2,\dots,L$ in Eqns.~\eqref{layer} and \eqref{4}; then we have $f(X_p, \mathbf{w}_{t,1}) = f(X_p, \tilde{\mathbf{w}}_{t-1})$. We have now proved that $X^l_{{p,t,s}} = X^l_{{p,t-1}}$, $O^l_{{p,t,s}} = O^l_{{p,t-1}}$ $(l=1,\dots,L)$ and $f(X_p, \mathbf{w}_{t,s}) = f(X_p, \tilde{\mathbf{w}}_{t-1})$ hold for $s=1$. Considering the iterative training process, we can prove that
\begin{equation*}
X^l_{{p,t,s}} = X^l_{{p,t-1}}, \ \ O^l_{{p,t,s}} = O^l_{{p,t-1}} \ \ (l=1,\dots,L)
\end{equation*}
and
$$f(X_p, \mathbf{w}_{t,s}) = f(X_p, \tilde{\mathbf{w}}_{t-1})$$
hold for every $s\geq 2$ by repeating the above argument. Finally, we have $X^l_{{p,t}}=X^l_{p,t-1}$ and $f(X_p,\tilde{\mathbf{w}}_{t-1})=f(X_p,\tilde{\mathbf{w}}_{t})$, since the above equalities hold for each $s\geq1$.
\end{proof}
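The mechanism of the lemma can also be checked numerically. The following NumPy sketch (an illustration on synthetic data, not part of the proof) confirms that an update projected into the null space of the features leaves a linear layer's outputs on old data unchanged:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)
# rank-deficient features (32 samples, 10 dims, rank <= 6)
X = rng.standard_normal((32, 6)) @ rng.standard_normal((6, 10))
w = rng.standard_normal((10, 4))             # layer weights
U, lam, _ = np.linalg.svd(X.T @ X / 32)      # uncentered covariance
U2 = U[:, lam < 1e-10]                       # null-space basis
dw = U2 @ (U2.T @ rng.standard_normal((10, 4)))
assert np.allclose(X @ dw, 0)                  # Eqn. (1)
assert np.allclose(X @ (w - 0.1 * dw), X @ w)  # outputs retained
\end{verbatim}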
\section*{Appendix B}
We first recall Condition \ref{prop21} of the manuscript as follows, then prove that a parameter update $\Delta\mathbf{w}_{t,s}$ satisfying Condition \ref{prop21} is a descent direction, \ie, the training loss decreases after updating the parameters using $\Delta\mathbf{w}_{t,s}$.
\begin{prop}[plasticity]
\label{prop21}
Assume that the network $f$ is being trained on task $\mathcal{T}_t$, and $\mathbf{g}_{t,s}=\{g^1_{t,s},\dots,g^L_{t,s}\}$ denotes the parameter update generated by a gradient-descent training algorithm for training $f$ at training step $s$. Then $\langle\Delta \mathbf{w}_{t,s}, \mathbf{g}_{t,s}\rangle > 0$ should hold, where $\langle \cdot, \cdot\rangle$ denotes the inner product.
\end{prop}
We now discuss why $\Delta\mathbf{w}_{t,s}$ is a descent direction if it satisfies Condition \ref{prop21}. For clarity, we denote the loss for training network $f$ as $\mathcal{L}(\mathbf{w})$, omitting the data term, which has no effect here. The discussion can also be found in Lemma 2 of the lecture\footnote{\url{http://www.princeton.edu/~aaa/Public/Teaching/ORF363_COS323/F14/ORF363_COS323_F14_Lec8.pdf}}. By denoting the learning rate as $\alpha$ and $h(\alpha) \triangleq \mathcal{L}(\mathbf{w}_{t,s}- \alpha \Delta\mathbf{w}_{t,s})$, according to Taylor's theorem we have
\begin{equation*}
h(\alpha) = h(0) + \alpha\, h'(0) + o(\alpha),
\end{equation*}
\ie,
\begin{align*}
\mathcal{L}(\mathbf{w}_{t,s} - \alpha \Delta\mathbf{w}_{t,s}) = \mathcal{L}(\mathbf{w}_{t,s}) - \alpha \langle\Delta \mathbf{w}_{t,s}, \mathbf{g}_{t,s}\rangle + o(\alpha),
\end{align*}
where $\frac{|o(\alpha)|}{\alpha}\to0$ as $\alpha \to 0$. Therefore, there exists $\bar{\alpha}>0$ such that
$$|o(\alpha)|< \alpha |\langle\Delta \mathbf{w}_{t,s}, \mathbf{g}_{t,s}\rangle|, \ \ \forall \alpha \in (0, \bar{\alpha}).$$
Together with the condition $\langle\Delta \mathbf{w}_{t,s}, \mathbf{g}_{t,s}\rangle > 0$, we can conclude that $\mathcal{L}(\mathbf{w}_{t,s} - \alpha \Delta\mathbf{w}_{t,s})<\mathcal{L}(\mathbf{w}_{t,s})$ for all $\alpha \in (0, \bar{\alpha})$. Therefore, a parameter update $\Delta\mathbf{w}_{t,s}$ satisfying Condition \ref{prop21} is a descent direction.
\section*{Appendix C}
Here, we give the proof of $\langle\Delta \mathbf{w}_{t,s}, \mathbf{g}_{t,s}\rangle \geq 0$ with $\Delta w^l_{t,s} = U_2^l(U_2^{l})^\top g_{t,s}^l$, which is claimed in Sec 4.1 of the manuscript. The proof mainly utilizes the properties of the Kronecker product \cite[Eqns. (2.10) and (2.13)]{graham2018kronecker}.
\begin{align}\label{inner}
\langle\Delta \mathbf{w}_{t,s}, \mathbf{g}_{t,s}\rangle &= \sum_{l=1}^L \langle U_2^l(U_2^{l})^\top g_{t,s}^l, g_{t,s}^l\rangle \notag \\
&=\sum_{l=1}^L \text{vec}(U_2^l(U_2^{l})^\top g_{t,s}^lI)^\top \text{vec}(g_{t,s}^l)\notag \\
&=\sum_{l=1}^L \text{vec}((U_2^{l})^\top g_{t,s}^l)^\top (I\otimes (U_2^{l})^\top )\text{vec}(g_{t,s}^l)\notag \\
&=\sum _{l=1}^L \text{vec}((U_2^{l})^\top g_{t,s}^l)^\top\text{vec}((U_2^{l})^\top g_{t,s}^l) \notag \\
&\geq0,
\end{align}
where vec$(\cdot)$ denotes vectorization, $I$ is the identity matrix and $\otimes$ is the Kronecker product.
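In other words, $\langle\Delta \mathbf{w}_{t,s}, \mathbf{g}_{t,s}\rangle = \sum_{l=1}^L \|(U_2^{l})^\top g_{t,s}^l\|_F^2 \geq 0$. A quick numeric check of this identity for a single layer (illustration only):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)
U2, _ = np.linalg.qr(rng.standard_normal((10, 4)))
g = rng.standard_normal((10, 3))
dw = U2 @ (U2.T @ g)                # projected update
inner = (dw * g).sum()              # <dw, g>
assert inner >= 0
assert np.isclose(inner, np.linalg.norm(U2.T @ g) ** 2)
\end{verbatim}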
\section*{Appendix D}
We now discuss the differences between our algorithm and OWM \cite{zeng2019continual} in detail, as follows. (1) We provide novel theoretical conditions for the stability and plasticity of the network based on the feature covariance. (2) Our null space is defined as the null space of the feature covariance matrix, which is easy to accumulate after each task (refer to Q1 \& Alg. \textcolor{red}{2}), while the projection matrix in OWM is $\mathbf{P}_l=\mathbf{I}_l-\mathbf{A}_l(\mathbf{A}_l^\top \mathbf{A}_l+\beta_l \mathbf{I}_l)^{-1}\mathbf{A}_l^\top$, where $\mathbf{A}_l$ consists of all previous features of layer $l$. (3) As new tasks arrive, our covariance matrix is incrementally updated without approximation error, while $\mathbf{P}_l$ of OWM is updated by recursive least squares, where the approximation error of the matrix inversion (due to the additionally introduced $\beta_l \mathbf{I}_l$) accumulates. (4) Our approach relies on a hyperparameter $a$ in line 14 of Alg. \textcolor{red}{2} for approximating the null space of the covariance, which balances stability and plasticity as discussed in lines 572-579 and Fig. \textcolor{red}{5}. The hyperparameter is easy to set (line 614 and Figs. \textcolor{red}{4, 5}), whereas we find it hard to tune the hyperparameter $\beta_l$ of OWM for each layer to balance the approximation error and computational stability. (5) Experimental comparisons with OWM on three benchmarks are shown in Tabs. \textcolor{red}{1-3}. The ACC values of our method are 4.88\%, 7.48\% and 8.3\% higher than OWM's, with comparable BWT. Please refer to Q4 for the comparison on ImageNet with deeper networks. We will clarify these differences by extending the discussions in Sect. \textcolor{red}{2}.
{\small \bibliographystyle{ieee_fullname}
\section{Introduction}
\setcode{utf8}
\vocalize
Library catalogues comprise a large number of bibliographic records consisting of entries that provide specific descriptions of library holdings. Records for Arabic and other non-Roman-script language materials ideally include Romanized entries to help researchers without language expertise, e.g., Figure~\ref{loc-example}. There are many Romanization standards such as the ISO standards used by French and other European libraries, and the ALA-LC (American Library Association and Library of Congress) system \cite{LOC:2017:romanization} widely adopted by North American and UK-affiliated libraries. These Romanizations are applied manually by librarians across the world -- a tedious, error-prone task. In this paper, we present, to our knowledge, the first reported results on automatic Romanization of {\it undiacritized} Arabic bibliographic entries. This is a non-trivial task as it requires modeling of Arabic phonology, morphology and even semantics. We collect and clean a 2.5M word corpus of parallel Arabic and Romanized bibliographic entries, and evaluate a number of models that vary in terms of complexity and resource dependence. Our best system reaches 89.3\% exact word Romanization on a blind test set. We make our data and code publicly available for researchers in Arabic NLP.\footnote{\url{https://www.github.com/CAMeL-Lab/Arabic\_ALA-LC\_Romanization}}
\begin{figure}[]
\centering
\includegraphics[width=0.48\textwidth]{LOC-Entry.pdf}
\caption{A bibliographic record for \newcite{Taymur:1927:qabr} in Romanized and original Arabic forms.}
\label{loc-example}
\end{figure}
\section{Related Work}
\paragraph{Arabic Language Challenges} Arabic poses a number of challenges for NLP in general, and the task of Romanization in particular. Arabic is morphologically rich, uses a number of clitics, and is written using an Abjad script with optional diacritics, all leading to a high degree of ambiguity. The Arabic script does not include features such as capitalization which is helpful for NLP in a range of Roman script languages. There are a number of enabling technologies for Arabic that can help, e.g., MADAMIRA \cite{Pasha:2014:madamira}, Farasa \cite{Abdelali:2016:farasa}, and CAMeL Tools \cite{Obeid:2020:cameltools}. In this paper we use MADAMIRA to provide diacritics, morpheme boundaries and English gloss capitalization information as part of a rule-based Romanization technique.
\paragraph{Machine Transliteration} {\it Transliteration} refers to the mapping of text from one script to another. Romanization is specifically transliteration into the Roman script \cite{Beesley:1997:romanization}. There are many ways to transliterate and Romanize, varying in terms of detail, consistency, and usefulness. Commonly used name transliterations \cite{Al-Onaizan:2002:machine} and so-called Arabizi transliteration \cite{Darwish:2014:arabizi} tend to be lossy and inconsistent, while strict orthographic transliterations such as Buckwalter's \cite{Buckwalter:2004:buckwalter} tend to be exact but not easily readable. The ALA-LC transliteration is a relatively easy-to-read standard that requires a lot of details on phonology, morphology and semantics. There has been a sizable amount of work on mapping Arabizi to Arabic script using a range of techniques from rules to neural models \cite{chalabi-gerges-2012-romanized,Darwish:2014:arabizi,Al-Badrashiny:2014:automatic,guellil2017arabizi,YOUNES2018238,Shazal:2020:unified}.
In this paper we make use of a number of insights and techniques from work on Arabizi-to-Arabic script transliteration, but apply them in the opposite direction to map from Arabic script to a complex, detailed and strict Romanization. We compare rule-based and corpus-based techniques, including a Seq2Seq model based on the publicly available code base of \newcite{Shazal:2020:unified}.
\section{Data Collection}
\label{datasets}
\begin{table}[t!]
\centering
\begin{tabular}{|l|r|r|c|}
\hline
\textbf{Split} & \multicolumn{1}{c|}{\textbf{Bib Records}} & \textbf{Entries} & \textbf{Words} \\ \hline\hline
\textbf{Train} & 85,952 (80\%) & 479,726 & $\sim$2M \\\hline
\textbf{Dev} & 10,744 (10\%) & 59,964 & $\sim$250K \\\hline
\textbf{Test} & 10,743 (10\%) & 59,752 & $\sim$250K \\\hline\hline
\bf Total & 107,439 \verb| | & 599,442 & $\sim$2.5M \\\hline
\end{tabular}
\caption{Corpus statistics and data splits.}
\label{datsets}
\end{table}
\paragraph{Sources} We collected bibliographic records from three publicly available XML dumps stored in the machine-readable cataloguing (MARC) standard, an international standard for storing and describing bibliographic information. The three data sources are the Library of Congress (LC) (10.5M), the University of Michigan (UMICH) (680K), and New York University Abu Dhabi's Arabic Collections Online (ACO) (12K), amounting to 11.2 million records in total.
\paragraph{Extraction} From these collections, we extracted 107,493 records that are specifically tagged with the Arabic language code (MARC 008 ``ara'').
\paragraph{Filtering} Within the extracted records we filter out some of the entries using two strategies. First, we used a list of 33 safe tags (determined using their definitions and with empirical sampling checks) to eliminate all entries that include a mix of translations, control information, and dates. The star-marked tags in Figure~\ref{loc-example} are all included, while the rest are filtered out. Second, we eliminated all entries with mismatched numbers of tokens. This check was done after a cleaning step that corrected for common errors and inconsistencies in many entries such as punctuation misalignment and incorrect separation of the conjunction clitic \<و+> {\it wa+}\footnote{Strict orthographic transliteration using the HSB scheme \cite{Habash:2007:arabic-transliteration}.} `and'. As a result of this filtering, a small number of additional records were eliminated since all of their entries had been removed. The total number of retained records is 107,439. The full details on extraction and filtering are provided as part of the project's public github repo (see footnote 1).
\paragraph{Data Splits} Finally, we split the remaining collection of records into Train, Dev, and Test sets. Details on the number of records, entries, and words they contain are presented in Table~\ref{datsets}. We make our data and data splits available (see footnote 1).
\section{Task Definition and Challenges}
As discussed above, there are numerous ways to ``transliterate'' from one script to another. In this section we focus on the Romanization of undiacritized Arabic bibliographic entries into the ALA-LC standard. Our intention is to highlight the important challenges of this task in order to justify the design choices we make in our approaches. For a detailed reference of the ALA-LC Arabic Romanization standard, see \cite{LOC:2012:Arabic}.
\paragraph{Phonological Challenges} While Romanizing Arabic consonants is simple, the main challenge is in identifying unwritten phonological phenomena, e.g., short vowels, under-specified long vowels, consonantal gemination, and nunnation, all of which require modeling Arabic diacritization. \paragraph{Morphosyntactic Challenges} Beyond basic diacritization modeling, the task requires some morphosyntactic modeling: examples include (a) proclitics such as the definite article, prepositions and conjunctions are marked with a hyphen, (b) case endings are dropped, except before pronominal enclitics, (c) the silent Alif, appearing in some masculine plural verbal endings, is ignored, and (d) the Ta-Marbuta ending can be written as {\it h} or {\it t} depending on the morphosyntactic state of the noun. For more information on Arabic morphology, see \cite{Habash:2010:introduction}. \paragraph{Semantic Challenges} Proper nouns need to be marked with capitalization on their first non-clitic alphabetic letter. Since Arabic script does not have ``capitalizations'', this effectively requires named-entity recognition. The Romanization of the word \<القاهرة> {\it AlqAhr{{p}}} `Cairo' as {\it al-Q\=ahirah} in Figure~\ref{loc-example} illustrates elements from all challenge types. \paragraph{Special Cases} The Arabic ALA-LC guidelines include a number of special cases, e.g., the word \<بن> {\it bn} `son~of' is Romanized as {\it ibn}, and proper noun \<عمرو> {\it {{E}}mrw} is Romanized as {\it `Amr}. \begin{table*}[ht!] \centering \setlength{\tabcolsep}{5pt} \begin{tabular}{|l||r|c|c||r|r|r||r|r|r|} \hline \textbf{} & \multicolumn{1}{c|}{\textbf{Corpus }} & \textbf{Morph}&\textbf{Char} &\multicolumn{3}{c||}{\textbf{Dev}} &\multicolumn{3}{c|}{\textbf{Test}} \\ \cline{5-10} \textbf{Model} & \multicolumn{1}{c|}{\textbf{Size }}& \textbf{Trans}&\textbf{Trans} & \multicolumn{1}{c|}{\textbf{Exact}}& \multicolumn{1}{c|}{\textbf{CI}}& \multicolumn{1}{c||}{\textbf{CPI}} & \multicolumn{1}{c|}{\textbf{Exact}}& \multicolumn{1}{c|}{\textbf{CI}}& \multicolumn{1}{c|}{\textbf{CPI}} \\ \hline\hline \bf Rules Simple & 0 & & \ding{51} & 16.2 & 17.4 & 17.8 & 16.1 & 17.3 & 17.7 \\ \hline \bf Rules Morph & 0 & \ding{51} & \ding{51} & 67.4 & 83.5 & 84.8 & 67.4 & 83.6 & 84.9 \\ \hline\hline \bf MLE Simple 1/64 & 31K & & \ding{51} &63.6&69.8& 71.1 & \multicolumn{3}{c}{} \\ \cline{1-7} \bf MLE Simple 1/32 & 63K & & \ding{51}& 68.5 & 75.1 & 76.4 & \multicolumn{3}{c}{} \\ \cline{1-7} \bf MLE Simple 1/16 & 125K & & \ding{51}& 73.0 & 79.9 & 81.3 & \multicolumn{3}{c}{}\\ \cline{1-7} \bf MLE Simple 1/8 & 250K & & \ding{51} & 75.6 & 82.8 & 84.2 & \multicolumn{3}{c}{}\\ \cline{1-7} \bf MLE Simple 1/4 & 500K & & \ding{51}& 80.3 & 87.2 & 88.6 & \multicolumn{3}{c}{}\\ \cline{1-7} \bf MLE Simple 1/2 & 1M & & \ding{51} & 82.7 & 89.5 & 90.9 & \multicolumn{3}{c}{}\\ \hline \bf MLE Simple & 2M & & \ding{51}&84.0& 90.7 & 92.1 & 84.1 & 90.8 & 92.2 \\ \hline \bf MLE Morph & 2M & \ding{51} & \ding{51}&84.7& 91.6 & 93.0 & 84.8 & 91.7 & \bf 93.2 \\ \hline\hline \bf Seq2Seq 1/64 & 31K & && 6.3 & 7.5 & 10.2& \multicolumn{3}{c}{} \\ \cline{1-7} \bf Seq2Seq 1/32 & 63K & && 28.3 & 31.0 & 38.1 & \multicolumn{3}{c}{}\\ \cline{1-7} \bf Seq2Seq 1/16 & 125K & && 64.9 & 69.1 & 70.5& \multicolumn{3}{c}{} \\ \cline{1-7} \bf Seq2Seq 1/8 & 250K & && 75.5 & 79.6 & 80.9& \multicolumn{3}{c}{} \\ \cline{1-7} \bf Seq2Seq 1/4 & 500K & && 82.5 & 85.8 & 87.1& \multicolumn{3}{c}{} \\ \cline{1-7} \bf Seq2Seq 1/2 & 1M & && 85.9 & 88.6 & 90.1 & \multicolumn{3}{c}{}\\ \hline \bf 
Seq2Seq & 2M & &&87.2& 89.7 & 90.9 & 87.3 & 89.8 & 91.0 \\ \hline
\bf Seq2Seq + Rules Morph & 2M & \ding{51} & \ding{51}&88.8& 91.6 & 92.9 & 88.9 & 91.7 & 93.0 \\ \hline
\bf Seq2Seq + MLE Simple & 2M & & \ding{51}& \bf 89.2 & \bf 91.8 & \bf 93.1 & \bf 89.3 & \bf 91.9 & \bf 93.2 \\ \hline
\bf Seq2Seq + MLE Morph & 2M & \ding{51} & \ding{51}&\bf 89.2 & \bf 91.8 & \bf 93.1 & \bf 89.3 & \bf 91.9 & \bf 93.2 \\ \hline
\end{tabular}
\caption{Dev and Test Romanization word accuracy (\%). (CI = case-insensitive, and CPI = case and punctuation-insensitive)}
\label{results}
\end{table*}
\section{Romanization Models}
\label{techniques}
We compare multiple Romanization models built using four basic techniques with different expectations about training data availability, contextual modeling, and system complexity. The models are listed in Table~\ref{results}.
\paragraph{CharTrans Technique} Our baseline technique is an extremely simple character transliteration approach utilizing regular expressions and exception lists. This technique is built based on the ALA-LC guidelines, and is inspired by the work of \newcite{Biadsy:2009:improving}; it comprises 104 regexes, 13 exceptions, and one capitalization rule (for entry-initial words). This technique accepts diacritized, undiacritized or partially diacritized input. Model {\bf Rules Simple} uses CharTrans only.
\paragraph{MorphTrans Technique} This technique relies on the morphological disambiguation system MADAMIRA \cite{Pasha:2014:madamira} to provide diacritization, morpheme boundaries, POS tags and English glosses for the Arabic input. Morpheme boundaries are used to identify clitic hyphenation points. POS tags and capitalization in English glosses are used to decide on what to capitalize in the transliteration. We strip diacritical morphological case endings, but keep other diacritics. We utilize the CharTrans technique to finalize the Romanization, starting with the diacritized, hyphenated and capitalization-marked words. For words unknown to the morphological analyzer, we simply back off to the CharTrans technique. Model {\bf Rules Morph} uses MorphTrans with CharTrans backoff.
\paragraph{MLE Technique} Unlike the previous two techniques, MLE (maximum likelihood estimate) relies on the parallel training data we presented in Section~\ref{datasets}. This simple technique works on white-space and punctuation tokenized entries and learns a simple one-to-one mapping from Arabic script to Romanization. The most common Romanization for a particular Arabic script input in the training data is used. The outputs are detokenized to allow strict matching alignment with the input. For out-of-vocabulary (OOV) words, we back off to the MorphTrans technique (Model {\bf MLE Morph}) or the CharTrans technique (Model {\bf MLE Simple}). In Table~\ref{results}, we also study the performance of {\bf MLE Simple} with different corpus sizes.
\paragraph{Seq2Seq Technique} Our last technique also relies on existing training data. We use an encoder-decoder character-level sequence-to-sequence architecture closely following \newcite{Shazal:2020:unified} (although in the reverse direction). The encoder consists of two gated recurrent unit (GRU) layers \cite{Cho:2014:learning} with only the first layer being bidirectional, and the decoder has two GRUs with attention \cite{Luong:2015:effective}. For the input, we used character embeddings concatenated with embeddings of the words in which the characters appear. For all other setting details, see \newcite{Shazal:2020:unified}'s Line2Line model. We also show how \textbf{Seq2Seq} performs with different corpus sizes in Table \ref{results}. The Seq2Seq technique is known for occasionally dropping tokens, which in our case leads to misalignment with the Arabic input. To handle this issue in model {\bf Seq2Seq}, we align its output and fill such gaps using the outputs produced by the other techniques, thus creating models {\bf Seq2Seq+Rules Morph}, {\bf Seq2Seq+MLE Simple}, and {\bf Seq2Seq+MLE Morph}. The alignment technique we use relies on minimizing character-edit distance between present words to identify missing ones.
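Returning to the MLE technique, the mapping can be learned with simple counting; the following Python sketch is illustrative only (the function names are ours, not from the released code base):
\begin{verbatim}
from collections import Counter, defaultdict

def train_mle(pairs):
    """pairs: (arabic_token, romanized_token) tuples
    taken from the aligned training entries."""
    counts = defaultdict(Counter)
    for ar, ro in pairs:
        counts[ar][ro] += 1
    # keep the most frequent Romanization per token
    return {ar: c.most_common(1)[0][0]
            for ar, c in counts.items()}

def romanize(table, tokens, backoff):
    # back off on OOVs: CharTrans for MLE Simple,
    # MorphTrans for MLE Morph
    return [table[t] if t in table else backoff(t)
            for t in tokens]
\end{verbatim}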
\paragraph{Comparing the Techniques} The CharTrans and MorphTrans techniques do not need parallel data, while the MLE and Seq2Seq techniques do. Furthermore, the MorphTrans and Seq2Seq techniques make use of available context: in MorphTrans, we use context-aware monolingual morphological disambiguation; and in Seq2Seq we model parallel examples in context. In contrast, neither the MLE technique nor the CharTrans technique uses the context of the words being mapped.
\begin{table*}[th!]
\centering
\begin{tabular}{|c|l|r|r|l|l|}
\hline
\multicolumn{2}{|c|}{\bf Error Type} & \bf Counts & \multicolumn{1}{l|}{\bf Source} & \bf Prediction & \bf Target \\\hline \hline
\bf Gold & \bf Romanization & 34 & \<إبراهيم> & Ibr\=ah\={\i}m & Ibr\=ahim \\
52 & & & \<الأشقر> & al-Ashqar & Ashqar \\
& & & \<ندوات> & Nadaw\=at & Nadw\=at \\\cline{2-6}
& \bf Alignment & 8 & \<أحمد بو حسن.> & A\textsubdot{h}mad B\=u \textsubdot{H}asan. & B\=u \textsubdot{H}asan, A\textsubdot{h}mad.\\ \cline{2-6}
& \bf Source & 5 & \<الطبع> & al-\textsubdot{T}ab` & al-\textsubdot{T}ab`ah \\ \cline{2-6}
& \bf Translation & 5 & \<شعر.> & shi`r. & Poems. \\ \hline \hline
\bf System & \bf Romanization& 36 & \<الريف> & al-Rayf & al-r\={\i}f \\
48 & && \<حدث>&\textsubdot{h}adath & \textsubdot{H}addatha\\
& &&\<خسارة،> & Khass\=arah, & Khas\=arah.\\\cline{2-6}
& {\bf Hallucination}& 10 & \<الادارية.> & Ta\textsubdot{s}al-Id\=ar\={\i}yah. & al-Id\=ar\={\i}yah. \\ \cline{2-6}
& \bf Valid variant & 2 & \<السوفياتي> & al-S\=ufy\=at\=\i & al-S\=ufiy\=at\=\i \\ \hline
\end{tabular}
\caption{Error types, counts, and examples on a sample of 100 \textbf{Seq2Seq+MLE Morph} predictions.}
\label{error_analysis}
\end{table*}
\section{Experimental Results}
Table~\ref{results} presents the Dev and Test results for the models discussed in the previous section. All results are in terms of three word accuracy metrics: exact match (Exact), case-insensitive match (CI), and case and punctuation-insensitive match (CPI). The {\bf Rules Simple} baseline manages to correctly produce an exact answer in close to 1/6th of all cases. {\bf Rules Morph}, which uses no training data, misses about 1/3rd of all exact transliteration matches; however, about half of its errors are capitalization issues. {\bf MLE Simple} with 2M words cuts the error of Rules Morph by 51\% (Exact) and 44\% (CI). Notably, {\bf Rules Morph} outperforms {\bf MLE Simple} with 31K words in Exact match, and {\bf MLE Simple} with 250K words in CI match. The {\bf MLE Morph} model improves over {\bf MLE Simple} by $\sim$1\% absolute in all metrics (5\% and 10\% error reduction in Exact and CI, respectively). The {\bf Seq2Seq} model outperforms the {\bf MLE Morph} model by 2.5\% absolute (16\% error reduction) in Exact match, but under-performs in CI match. The \textbf{Seq2Seq} performance is comparatively much poorer with less data. With 31K words, \textbf{MLE Simple}'s performance is 10 times better than \textbf{Seq2Seq}'s, and their performance only becomes comparable with 250K words. We observe that $\sim$2\% of the {\bf Seq2Seq} output words are missing, contributing negatively to the system's results. Of the three models that address this issue through alignment and combination, {\bf Seq2Seq+Rules Morph}, {\bf Seq2Seq+MLE Simple}, and {\bf Seq2Seq+MLE Morph}, the last two, which use the MLE technique, are the best performers overall in Exact match.
It is noteworthy that in CPI match, \textbf{MLE Morph}'s performance is almost equivalent to the best systems' performance. The CPI metric values are consistently higher than CI by $\sim$1.3\% absolute for all models. Blind test results, presented in the right-hand side of Table~\ref{results}, are consistent with the Dev results.
\section{Error Analysis}
We classified a sample of 100 word errors (ignoring capitalization and punctuation) from the Dev set of our best performing model ({\bf Seq2Seq+MLE Morph}). Our classification results are presented in Table~\ref{error_analysis} along with representative examples.
\paragraph{Gold Errors} We found 52 gold errors, where the human-provided target reference is incorrect. Romanization errors such as typos, incorrect vowelization, and dropped definite articles constitute roughly 65\% of gold errors. The rest of the errors include issues such as first and last name flipping, which we classify as an alignment issue, Arabic input source typos, and errors in which the target is a translation instead of a Romanization. Notably, we observe that our {\bf Seq2Seq+MLE Morph} model generates correct predictions for 85\% of all gold error cases.
\paragraph{System Errors} Romanization errors make up 75\% of system errors. The vast majority of these mistakes are due to wrong predictions of vowels or gemination. An additional 21\% of the errors are due to Seq2Seq model hallucinations of characters unsupported by the source input. We also encountered two predictions that did not match the target reference but are correct variants. In $\sim$44\% of system error cases, outputs generated by the {\bf MLE Morph} or {\bf Rules Morph} models are in fact correct, but were not chosen during alignment and combination because of existing Seq2Seq answers.
\section{Conclusions and Future Work}
We presented a new task for Arabic NLP, namely the Romanization of Arabic bibliographic records. Our extracted corpus and benchmark data splits, as well as our code base, will be publicly available. In the future, we plan to create an online Romanization interface to assist librarians. As more data is efficiently created, better models can be trained. We also plan to exploit the latent annotations in bibliographic records for improving Arabic NLP tools, e.g. using vowelization for automatic diacritization and possible morphological disambiguation \cite{Habash:2016:exploiting}, marked clitics for tokenization, and Roman-script capitalization for Arabic named entity recognition.
\section*{Acknowledgments}
This work was carried out on the High Performance Computing resources at New York University Abu Dhabi (NYUAD). We thank Salam Khalifa, Ossama Obeid, Justin Parrott, and Alexandra Provo for helpful conversations. And we especially thank Elie Kahale for introducing us to this interesting challenge during the Winter Institute in Digital Humanities at NYUAD.
\section{INTRODUCTION}
\label{sec1}
As a core technique in modern data-driven artificial intelligence, Deep Neural Networks (DNNs) have surpassed the achievements of former methods on many typical problems and have provided excellent solutions to questions in interdisciplinary research. However, the architecture design of DNNs is limited by the existing knowledge of designers, which makes it hard to find the globally best architecture for a given task. Hence, much attention has been paid to Neural Architecture Search (NAS) to relieve researchers of the burden of architecture design for DNNs and to better explore the architecture search space \cite{ref1,ref2}. Many methods have been proposed to search architectures, among which Reinforcement Learning (RL) and Evolutionary Algorithms (EA) are the most popular. Zoph et al. (2017) \cite{ref3} first use the policy gradient algorithm, an RL approach, as the Recurrent Neural Network (RNN) controller to produce new architectures of Convolutional Neural Networks (CNNs). Subsequently, Zoph et al. (2018) \cite{ref4} use RL with proximal policy optimization as the RNN controller. Baker et al. (2017) \cite{ref5} use Q-learning with the $\epsilon$-greedy exploration strategy to sequentially search for neural architectures. To reduce the expensive computation on GPUs, several speed-up methods and efficient solutions have been proposed based on the RNN controller. Pham et al. (2018) \cite{ref6} propose Efficient Neural Architecture Search (ENAS), in which the controller searches for the best subgraph within a larger graph in the first stage and shares parameters between subgraphs in the second stage. Compared with the original work in \cite{ref3}, ENAS accelerates NAS by up to a thousand times. As another popular approach, NAS based on EA has a history of more than 30 years. Gruau (1993) \cite{ref7} proposes Cellular Encoding (CE), a grammatical inference process to search neural networks with Genetic Programming (GP). Yao and Liu (1997) \cite{ref8} propose the Evolutionary Programming Network (EPNet), which evolves the network architecture and connection weights with Evolutionary Programming. To evolve the neurons of a network, Stanley and Miikkulainen (2002) \cite{ref9} propose NeuroEvolution of Augmenting Topologies (NEAT), which encodes the neurons into Node genes and Connection genes and uses a Genetic Algorithm (GA) to update them. With the emergence of Automated Machine Learning, many NAS methods based on EA have been proposed in recent years. Xie et al. (2017) \cite{ref10} propose GeNet based on GA to choose CNNs, where the CNN is divided into different stages with pooling operations as boundaries, and all convolution operations in the same stage have the same convolution kernel and channel number. Suganuma et al. (2019) \cite{ref11} use Cartesian Genetic Programming (CGP), a graph form of GP, to encode CNN architectures (CGP-CNN). CGP-CNN adopts highly functional blocks as the node functions, for example, a ConvBlock consisting of convolution, batch normalization, and ReLU. Bi et al. (2019) \cite{ref12} propose Feature Learning GP (FLGP) to evolve convolution operators for feature learning on image classification. Sun et al. (2019) \cite{ref13} use Particle Swarm Optimization (PSO) to search Flexible Convolutional Auto Encoders (FCAE) with a chain structure.
To search for image classifiers, Real et al. (2019) \cite{ref14} modify tournament selection evolution by introducing an age property to favor the younger genotypes (named Aging Evolution or regularized evolution), which keeps as many young individuals as possible. Most NAS methods are proposed to solve Computer Vision (CV) problems \cite{ref15} and focus on evolving CNN architectures. Nowadays, researchers are making efforts to enable NAS to solve problems in the field of Natural Language Processing (NLP). Since the Transformer \cite{ref16} has become the state-of-the-art model in NLP, So et al. (2019) \cite{ref17} use the Transformer as the initial architecture, design a new search space for NLP problems, and search for the best candidate Transformer, named the Evolved Transformer. Pasunuru and Bansal (2020) \cite{ref18} propose Flexible and Expressive Neural Architecture Search (FENAS), dividing the search process into two stages similar to ENAS \cite{ref6}. The results show that FENAS can reproduce Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) structures. Sentence classification is a classical and fundamental task in the field of NLP. Motivated by the effectiveness of the Transformer for NLP problems and the natural representation of DNNs with CGP, this paper proposes a CGP encoding-based NAS (CGPNAS) method to deal with the sentence classification task. The remaining parts are organized as follows: Section \ref{sec2} briefly introduces the related work; Section \ref{sec3} proposes CGPNAS; Section \ref{sec4} presents the experimental results to evaluate the performance of CGPNAS; and finally, Section \ref{sec5} presents the conclusion.
\section{RELATED WORK}
\label{sec2}
In this section, we first introduce how EA is applied to NAS in Part \ref{sec2.1}. Next, we briefly review research on the sentence classification task in Part \ref{sec2.2}. Finally, we introduce the CGP method, which is the encoding method used in this paper, in Part \ref{sec2.3}.
\subsection{Neural architecture search based on Evolutionary Algorithms}
\label{sec2.1}
NAS methods based on EA mainly focus on the following two aspects: the encoding method and the genetic operators. The encoding method converts the phenotype of a given DNN into a genotype. The genetic operators produce new genotypes in each iteration. Besides encoding methods and genetic operators, there are also a small number of studies on survival selection and parental selection strategies \cite{ref14,ref19}. In EA, there are two classic kinds of genetic operators: crossovers and mutations. Crossovers combine the genotypes of two or more parents to get one or more offspring genotypes. Mutations change the genotype of a parent to get a new genotype. To produce new genotypes, different NAS methods use one or both kinds of genetic operators. CGP-CNN \cite{ref11} uses mutation as the only genetic operator. NEAT \cite{ref9}, GeNet \cite{ref10}, FLGP \cite{ref12}, AmoebaNet-A \cite{ref14} and the DCNN designer \cite{ref20} use both crossover and mutation as genetic operators. There are two types of encoding methods: direct and indirect. As a widely used approach, direct encoding methods explicitly specify neural architecture information with genotypes. In NEAT \cite{ref9}, the genotype is composed of Node genes and Connection genes. Node genes store the node type, indicating an input (or sensor) node, output node or hidden node. Connection genes store the numbers of in-nodes and out-nodes, and the weights, states (enabled or disabled) and innovation numbers of the connections.
Because FCAE is used to evolve a chain architecture without explicit topological connection information, Sun et al. \cite{ref13} encode only the node types and their parameters into the genotype. Indirect encoding methods specify only a generating rule for genotypes. CE \cite{ref7} is a classical indirect encoding method. The entire neural network evolves from a single ancestor cell whose evolutionary DNA is stored in a tree structure. The tree structure defines the method of cell division, generating the final network topology through cell development.
\subsection{Sentence classification task}
\label{sec2.2}
Sentence classification is a classical and fundamental task in NLP. Traditional classification methods often use human-designed features, which capture only shallow representations of sentences. With the development of deep learning, CNNs, RNNs and Attention \cite{ref21} are widely used in sentence classification tasks. Hochreiter and Schmidhuber (1997) \cite{ref22} propose Long Short-Term Memory (LSTM) as a special RNN for learning long-term dependencies, whose gating mechanism effectively relieves gradient vanishing during back propagation. After that, there has been great success in dealing with NLP problems with LSTM. Kim (2014) \cite{ref23} applies a simple CNN to the sentence classification task and achieves excellent results on multiple benchmarks. Compared with traditional machine learning methods, Kim's method is good at capturing local features of sentences. Vaswani et al. (2017) \cite{ref16} propose the Transformer architecture based on Attention mechanisms, which is widely used in NLP tasks. BERT \cite{ref24} is a pre-trained architecture, characterized by the Masked Language Model and Next Sentence Prediction objectives, which achieves state-of-the-art performance on a wide range of tasks by fine-tuning just the output layer. Since different methods have their own advantages, many scholars combine multiple methods to achieve better results than a single method. Lai et al. (2015) \cite{ref25} propose Recurrent Convolutional Neural Networks, combining a bidirectional RNN and the max-pooling layer of CNNs. Liu and Guo (2019) \cite{ref26} propose AC-BiLSTM, combining the Attention mechanism, convolutional layers and a bidirectional LSTM. Zhang et al. (2019) \cite{ref27} propose 3W-CNN, combining deep learning methods and traditional feature-based methods. Zhang et al. design a confidence function to divide the outputs of the CNN into two parts with strong and weak confidence, respectively. The CNN classification outputs with weak confidence are reclassified by the NB-SVM proposed in \cite{ref28}.
\subsection{Cartesian genetic programming}
\label{sec2.3}
As a graph form of GP, CGP was initially proposed to optimize digital circuits \cite{ref29}, and hence each intermediate node has two inputs. Subsequently, CGP has been applied to many problems, such as image processing and molecular docking \cite{ref30,ref31,ref32,ref33}. As shown in Fig. 1, CGP is represented by a directed graph with $n$ input nodes and $m$ output nodes. Besides the input and output nodes, CGP has $r\ast c$ intermediate nodes, also known as function nodes, where $r$ and $c$ denote the numbers of rows and columns, respectively. CGP can set the number of inputs for each function node; for example, Ref. \cite{ref29} sets the number of inputs to 2, and the number of outputs is usually set to 1.
In addition, CGP forbids links within the same grid column, and usually sets a maximum connection stride between columns, called ``levels-back'', which controls the size of the search space. Borrowing a term from genetics, function nodes that are not used as inputs of any subsequent node, e.g., Node $a_{r,c}$, are called inactive nodes \cite{ref11}.
\begin{figure}[H]
\centering
\label{fig1}
\includegraphics[scale=0.2]{1.pdf}
\caption{Illustration of Cartesian Genetic Programming.}
\end{figure}
\section{OUR METHOD}
\label{sec3}
Motivated by the natural representation of neural networks by CGP, we propose a novel NAS method based on CGP, named CGPNAS, to deal with the sentence classification task. In Part \ref{sec3.1}, we introduce the CGP encoding method applied to NAS. In Part \ref{sec3.2}, we present the function nodes used in this paper. In Part \ref{sec3.3}, we design an evolution method for CGPNAS.
\subsection{CGP encoding method}
\label{sec3.1}
For the NAS problem, CGP uses a two-dimensional grid as the phenotype of neural networks, as shown in Fig. 1, which is a natural representation of neural networks due to the topological similarity between CGP and neural networks. The links represent the data flow, and the function nodes represent basic operations of the neural networks, such as Convolution, Attention and so on. The encoding structure of CGP is a triplet, shown at the bottom of Fig. 2-a, indicating the function name and the indices of the two input nodes. An illustrative genotype, with 10 function nodes, is shown above the encoding structure in Fig. 2-a. For the genotype, each gene corresponds to a node in Fig. 2-b, which is the intermediate phenotype with both inactive and active links in dashed and solid arrows, respectively. In addition, Node 6 and Node 7 are both inactive nodes. Fig. 2-c is the final phenotype with only active links in solid arrows, and can be used as a DNN to solve problems.
\begin{figure}[H]
\label{fig2}
\centering
\includegraphics[scale=0.25]{2.pdf}
\caption{Illustration of the genotype (a), the intermediate phenotype (b) and the final phenotype (c) in CGP encoding.}
\end{figure}
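As a minimal illustration of how a genotype is decoded into its active subgraph (our own Python sketch, not taken from the paper's implementation; we assume a table of function arities, so that single-input functions simply ignore their second input gene):
\begin{verbatim}
# Walk backwards from the outputs to collect active nodes.
def active_nodes(genes, arity, n_inputs, output_ids):
    """genes: {node_id: (func, in1, in2)};
    input nodes have node_id < n_inputs."""
    active, stack = set(), list(output_ids)
    while stack:
        nid = stack.pop()
        if nid < n_inputs or nid in active:
            continue          # stop at inputs / visited nodes
        active.add(nid)
        func, in1, in2 = genes[nid]
        # unused second input genes stay inactive links
        stack.extend((in1, in2)[:arity[func]])
    return active    # all other function nodes are inactive
\end{verbatim}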
For the task of sentence classification, one-dimensional convolutions and multi-head attention are used. The Linear node represents a linear transformation. The function of the Sum node is to merge two branches. When two branches with different word-vector dimensions are to be merged, the smaller word vector is padded with zeros at its end to match the size of the larger one. Although the Sum node formally has two input nodes, it is allowed to receive the same input twice from a single precursor node, as for Node 1 and Node 5 shown in Fig. 2-a. Layer Normalization was proposed by Ba et al. \cite{ref35} for RNNs and normalizes over the channels and features of each sample. The GLU node is a variant of Convolution with gate-controlled outputs.

\subsection{Evolution strategy design}
\label{sec3.3}
CGP usually uses the $1+\lambda$ Evolutionary Strategy (ES) to update and select the population, meaning that one parental individual and $\lambda$ offspring individuals compete to survive into the next generation. Through mutation and adaptive selection, the population evolves towards the optimal goal. According to \cite{ref11}, there are two kinds of mutations in the $1+\lambda$ ES, named forced mutation and neutral mutation. The forced mutation works on all parental nodes to generate offspring, while the neutral mutation works only on inactive parental nodes to contribute potentially new nodes for the next generation. Both are point mutations, which means that the functions and connections of nodes are randomly changed to valid values according to the mutation rate. To enhance exploration and escape local optima, we double the initial mutation rate for the last 25\% of generations. The algorithm proceeds as follows. First, the $\lambda$ offspring individuals are produced from the current parental individual through forced mutation. If all $\lambda$ offspring individuals have worse fitness than their parental individual, the inactive nodes of the parental individual are mutated by neutral mutation and the $\lambda$ offspring individuals are discarded. Otherwise, the offspring individual with the highest fitness is selected as the parental individual of the next generation. The pseudocode is presented as follows:

\begin{algorithm}[htb]
\caption{ Evolution Strategy}
\begin{algorithmic}[1]
\State Create a parent randomly
\State Evaluate the fitness of the parent
\While{ generation $\textless$ Max\_generation}
\State Double the mutation rate for the last 25\% of generations
\State $\lambda$ offspring are produced by forced mutation
\State Evaluate the fitness of the $\lambda$ offspring individuals
\If{the $\lambda$ offspring individuals are all worse than the parent}
\State Mutate the inactive nodes of the parent with neutral mutation
\Else
\State The offspring with the best fitness becomes the new parent for the next iteration
\EndIf
\EndWhile
\end{algorithmic}
\end{algorithm}
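A compact Python sketch of this $1+\lambda$ loop (our illustration; \texttt{forced\_mutation}, \texttt{neutral\_mutation} and \texttt{fitness} are placeholders for the point-mutation operators and the train-and-evaluate step described above):

\begin{verbatim}
# Minimal sketch of the 1+lambda ES with forced/neutral mutation and the
# doubled mutation rate for the last 25% of generations.
import copy

def evolve(init, fitness, forced_mutation, neutral_mutation,
           lam=4, max_gen=1000, rate=0.1):
    parent, parent_fit = init, fitness(init)
    for gen in range(max_gen):
        r = 2 * rate if gen >= 0.75 * max_gen else rate  # late-phase boost
        offspring = [forced_mutation(copy.deepcopy(parent), r)
                     for _ in range(lam)]
        fits = [fitness(o) for o in offspring]
        if all(f < parent_fit for f in fits):
            # all offspring worse: keep the parent, but mutate its inactive
            # nodes (the parent's fitness is unchanged, no re-evaluation)
            parent = neutral_mutation(parent, r)
        else:
            best = max(range(lam), key=fits.__getitem__)
            parent, parent_fit = offspring[best], fits[best]
    return parent, parent_fit
\end{verbatim}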
The accuracy of the sentence classification task corresponding to each architecture is taken as the individual fitness. Since the neutral mutation acts only on inactive nodes, it does not change the parental fitness, so the parent altered by neutral mutation does not need to be re-evaluated.

\section{EXPERIMENT}
\label{sec4}
In this Section, we first introduce the datasets, hyperparameters and experimental settings in Parts \ref{sec4.1} and \ref{sec4.2}. Next, we compare the architectures searched by CGPNAS and CGPNAS(GloVe) with classical architectures in Part \ref{sec4.3}. GloVe \cite{ref37} is an embedding method that spares the network from learning the correlations between words from scratch, and can therefore improve the performance of the network. Then we verify the transfer ability of the searched architectures on different datasets in Part \ref{sec4.4}. Finally, we perform ablation tests to analyze the impact of the function nodes on the searched architectures in Part \ref{sec4.5}.

\subsection{Datasets}
\label{sec4.1}
The datasets used in our experiments are shown in Table 2. Three datasets are labeled with positive and negative: SST2 \cite{ref38} (the binary-labeled version of the Stanford Sentiment Treebank), MR \cite{ref39} (a large movie review dataset extracted from the Rotten Tomatoes website) and IMDB \cite{ref40}. Samples of SST5 \cite{ref38} (Stanford Sentiment Treebank) are labeled with 5 levels, i.e., very positive, positive, neutral, negative and very negative. Samples of AG\_news \cite{ref41}, extracted from the ComeToMyHead website, are labeled with 4 kinds of tags, i.e., World, Sports, Business and Sci/Tech.

\begin{table}[H]
\centering
\caption{PROPERTIES OF THE EXPERIMENTAL DATASETS.}
\label{table2}
\begin{tabular}{l|l|l|l|l|l}
\hline
Dataset & SST2 & SST5 & MR & IMDB & Ag\_news \\\hline
Label levels & 2 & 5 & 2 & 2 & 4 \\\hline
max sentence length & 50 & 50 & 50 & 400 & 50 \\\hline
word vector dimension & \multicolumn{5}{c}{300} \\\hline
\end{tabular}
\end{table}

\subsection{Hyperparameter and experiment details}
\label{sec4.2}
The CGP parameters are shown in Table 3. We set the CGP grid to $5\times20$, using a relatively large column size to generate deep architectures. To balance the complexity of the search space against the models' generalization ability, levels-back is set to 3. The lower and upper bounds on the number of active nodes are 10 and 60, respectively. To enhance the exploration ability, the offspring size is set to 4.

\begin{table}[H]
\centering
\caption{EXPERIMENTAL PARAMETERS.}
\label{table3}
\begin{tabular}{l | l}
\hline
Parameters&Values \\ \hline
Input nodes number&1 \\
Output nodes number&1 \\
Rows Size r&5 \\
Columns Size c&20 \\
Levels-back&3 \\
Active nodes number&$[10, 60]$ \\
Mutation rate&$\{0.1, 0.2, 0.4\}$ \\
Offspring Size $\lambda$&4 \\
Max\_generation&1000 \\\hline
\end{tabular}
\end{table}

To enhance exploration, we set the mutation rate to 0.1 for the first 75\% of generations and double it to 0.2 for the last 25\% of generations.
However, the mutation rate of Sum function nodes should be larger than that of the other function nodes to decrease the probability of single-chain architectures. Hence, based on trials, we set the mutation rate of Sum function nodes to 0.2 and 0.4 in the early and late generations, respectively. Taking time consumption into account, we use small values for the max sentence length and the word-vector dimension, as shown in Table 2. However, because the average sentence length of IMDB is about 8 times that of the other datasets, its max sentence length is set to 400. For all experimental datasets, the word-vector dimension is set uniformly to 300.

In Parts \ref{sec4.3}, \ref{sec4.4} and \ref{sec4.5} we train CGPNAS and CGPNAS(GloVe) with the Adam optimizer for 50 epochs with a learning rate of 0.01. In Part \ref{sec4.3}, the classical architectures in the comparison include TextCNN \cite{ref23}, Transformer \cite{ref16}, BERT \cite{ref24}, Evolved Transformer \cite{ref17}, AC-BiLSTM \cite{ref26}, 3W-CNN \cite{ref27} and FENAS \cite{ref18}. We now give the training details of the comparison algorithms used in Part \ref{sec4.3}. Similar to CGPNAS, we train TextCNN, Transformer, BERT and the Evolved Transformer with the Adam optimizer for 50 epochs with a learning rate of 0.01. In addition, we train a 6-layer Transformer encoder \cite{ref17} with the number of attention heads set to 6. We follow the official guide from \cite{ref42} to fine-tune the BERT-Base-Uncased model \cite{ref24} for the downstream tasks. We use the searched network from \cite{ref17}, training a 6-layer Evolved Transformer encoder followed by a linear layer to perform the classification task.

\subsection{Comparison with other algorithms}
\label{sec4.3}
To assess the performance of CGPNAS and CGPNAS(GloVe) on different datasets and perform statistical tests, we execute CGPNAS and CGPNAS(GloVe) 10 times on each dataset. As an example, one of the architectures searched on the IMDB dataset is shown in Table 4. As shown in Table 5, with the help of GloVe, CGPNAS(GloVe) starts with knowledge of the correlations between words and improves the accuracy by 2--5\% on the different datasets compared with CGPNAS. The performance of CGPNAS is similar to that of TextCNN, while the performance of CGPNAS(GloVe) is similar to that of the Transformer and the Evolved Transformer. Among the human-designed architectures, BERT and AC-BiLSTM achieve the best accuracy on 2 and 3 datasets, respectively. It can be said that for the sentence classification task, even though existing NAS methods can reach the level of typical human-designed architectures, they still find it difficult to outperform the best human-designed methods.
\begin{table}[H]
\centering
\caption{DIMENSION CHANGE OF THE SEARCHED NEURAL NETWORK.}
\label{table4}
\begin{tabular}{ll|ll}
\hline
\multicolumn{2}{c}{Input} & \multicolumn{2}{c}{$8\times400\times300$} \\\hline
Sum &$8\times400\times300$ & Linear (Channel: 128) & $8\times400\times128$ \\\hline
Conv (Channel: 32 Kernel: 1) & $8\times400\times32$ & LNorm & $8\times400\times128$ \\\hline
\multicolumn{2}{c}{Sum} &\multicolumn{2}{c}{$8\times400\times128$} \\\hline
\multicolumn{2}{c}{Sum} &\multicolumn{2}{c}{$8\times400\times128$} \\\hline
\multicolumn{2}{c}{Atte (Head: 16)} &\multicolumn{2}{c}{$8\times400\times128$} \\\hline
\multicolumn{2}{c}{LNorm} &\multicolumn{2}{c}{$8\times400\times128$} \\\hline
\multicolumn{2}{c}{Atte (Head: 4)} &\multicolumn{2}{c}{$8\times400\times128$} \\\hline
\multicolumn{2}{c}{Conv (Channel: 32 Kernel: 3)} &\multicolumn{2}{c}{$8\times400\times32$} \\\hline
\multicolumn{2}{c}{Conv (Channel: 16 Kernel: 5)} & \multicolumn{2}{c}{$8\times400\times16$} \\\hline
\end{tabular}
\end{table}

\begin{table}[H]
\centering
\caption{COMPARISON OF DIFFERENT ALGORITHMS. (“*” RESULTS FROM THE ORIGINAL PAPERS. THE CELLS HIGHLIGHTED IN BOLD INDICATE THE BEST ACCURACY.)}
\label{table5}
\begin{tabular}{l|l|l|l|l|l}
\hline
\diagbox{Architecture}{Dataset} & SST2 & SST5 & MR & IMDB & Ag\_news \\\hline
TextCNN (2014) & 0.812 & 0.372 & 0.713 & 0.84 & 0.817 \\\hline
Transformer (2017) & 0.855 & 0.365 & 0.746 & 0.863 & 0.853 \\\hline
BERT (2019) & $\mathbf{0.915}$ & 0.423 & 0.821 & 0.912 & $\mathbf{0.892}$ \\\hline
Evolved Transformer (2019) & 0.769 & 0.385 & 0.717 & 0.873 & 0.812 \\\hline
AC-BiLSTM* (2019) & 0.883 & $\mathbf{0.489}$ & $\mathbf{0.832}$ & $\mathbf{0.918}$ & - \\\hline
3W-CNN* (2019)& - &- &0.823& -& -\\\hline
FENAS* (2020)& 0.866& - &-& -& -\\\hline
CGPNAS& $0.733\pm0.027$& $0.362\pm0.006$& $0.704\pm0.015$& $0.844\pm0.012$ &$0.843\pm0.017$\\\hline
CGPNAS (GloVe) &$0.788\pm0.013$& $0.413\pm0.013$& $0.744\pm0.015$& $0.864\pm0.011$& $0.864\pm0.018$ \\\hline
\end{tabular}
\end{table}

\subsection{Transfer ability study}
\label{sec4.4}
To verify the transfer ability of the searched architectures, we transfer all the architectures searched on one dataset to the other datasets. The results are shown in Table 6. The architectures searched on Ag\_news still perform well on the target datasets: the mean accuracy improves by 1\% on the target datasets SST2 and MR, drops by 1\% on SST5 and by 3\% on IMDB. By contrast, the architectures searched on SST2, SST5, MR and IMDB perform slightly worse on the target datasets, with the mean accuracy reduced by 2--5\%. In particular, on the target dataset Ag\_news most of the mean accuracies are reduced by 7--8\%, and by as much as 15\% for the architectures searched on SST5. The results show that the architectures searched by CGPNAS have transfer ability and can be applied to most target datasets, although the accuracy on some target datasets decreases significantly.
\begin{table}[H]
\centering
\caption{TRANSFER TESTING OF CGPNAS.}
\label{table6}
\begin{tabular}{l|l|l|l|l|l}
\hline
\diagbox{Origin}{Target}& SST2& SST5& MR& IMDB& Ag\_news\\\hline
SST2& $0.733\pm0.027$& $0.324\pm0.022$& $0.661\pm0.020$& $0.814\pm0.009$& $0.762\pm0.057$\\\hline
SST5& $0.673\pm0.015$& $0.362\pm0.006$& $0.654\pm0.017$& $0.797\pm0.018$& $0.689\pm0.021$\\\hline
MR &$0.706\pm0.018$& $0.324\pm0.013$& $0.704\pm0.015$& $0.813\pm0.008$&$0.769\pm0.046$\\\hline
IMDB& $0.689\pm0.044$ &$0.341\pm0.011$ &$0.674\pm0.014$ &$0.844\pm0.012$& $0.776\pm0.044$\\\hline
Ag\_news& $0.742\pm0.026$& $0.351\pm0.020$& $0.711\pm0.018$& $0.819\pm0.011$& $0.843\pm0.017$ \\\hline
\end{tabular}
\end{table}

\subsection{Ablation study}
\label{sec4.5}
To identify the key components that contribute most to the performance, we present ablation tests in this part. For this purpose, we reduce the diversity of functions in the set of function nodes and create three new sets. The first one, denoted $S \backslash \{Conv\}$, removes Convolution from the set of function nodes. The second one, denoted $S \backslash \{Atte\}$, removes Attention from the set of function nodes. The third one, denoted $S \backslash \{Conv,Atte\}$, removes both Convolution and Attention. We execute CGPNAS 10 times on each set of function nodes. Schematic architectures from the ablation tests are shown in Fig. 3. It can be seen from Table 7 that even if Convolution is removed, the accuracy improves by 0.6\% on IMDB and drops by only 1--2\% on the remaining datasets. However, if Attention is removed, the average accuracy drops by 1.1\% on Ag\_news and by 4--6\% on the other datasets. The experimental results show that the Attention function node is vital for the searched architectures. It is also noted that when Convolution and Attention nodes are both removed, the accuracy drops by 4--5\%. However, the accuracies of the evolved architectures with both Convolution and Attention excluded are higher than those with only Attention excluded on SST2, SST5 and MR. The architecture shown in Fig. 3-c performs mainly linear transformations of its input, yet it still achieves better accuracy than $S\backslash\{Atte\}$. The detailed mechanism deserves investigation in the future.

\begin{table}[H]
\centering
\caption{ABLATION TESTING OF CGPNAS.}
\label{table7}
\begin{tabular}{l|l|l|l|l|l}
\hline
\diagbox{Search Space}{Dataset}& SST2& SST5& MR& IMDB& Ag\_news\\\hline
$S \backslash \{Conv\}$& $0.717\pm0.018$& $0.348\pm0.003$& $0.678\pm0.029$& $0.850\pm0.005$& $0.838\pm0.006$\\\hline
$S\backslash\{Atte\}$& $0.678\pm0.022$& $0.319\pm0.003$& $0.647\pm0.008$& $0.798\pm0.016$& $0.832\pm0.007$\\\hline
$S\backslash\{Conv,Atte\}$&$ 0.690\pm0.019$& $0.325\pm0.003$& $0.663\pm0.027$& $0.795\pm0.009$& $0.823\pm0.006$\\\hline
$S$&$ 0.733\pm0.027$& $0.362\pm0.006$&$ 0.704\pm0.015$& $0.844\pm0.012$& $0.843\pm0.017$\\\hline
\end{tabular}
\end{table}

\begin{figure}[H]
\centering
\includegraphics[scale=0.25]{3.pdf}
\caption{Schematic architectures of the ablation tests.}
\label{fig3}
\end{figure}

\section{CONCLUSION}
\label{sec5}
CGP is a natural representation of neural networks and can evolve the structure and parameters of neural architectures at the same time. For this reason, we propose CGPNAS, which can approach the state of the art of human-designed architectures for sentence classification tasks.
The transfer study shows that the evolved architectures have transfer ability and can be applied to different target domains. According to the ablation tests, the attention mechanism is very important for CGPNAS, which also helps explain why the attention mechanism is so widely used in NLP. NAS for NLP is still worthy of in-depth study. Subsequent work could increase the diversity of functions, for example by adding LSTM to the set of function nodes. To derive design guidelines for neural networks, extensive experiments could be carried out to determine which combinations of nodes are more likely to appear in well-performing networks. In addition, basic mathematical operations could be considered as function nodes to expand the representation ability of the evolved architectures.

\section*{Acknowledgement}
This work is supported by the National Natural Science Foundation of China (61876069, 61972174 and 61972175), the Jilin Natural Science Foundation (20200201163JC), the Science and Technology Planning Project of Guangdong Province (2020A0505100018), and the Guangdong Key Project for Applied Fundamental Research (2018KZDXM076).
\section{Introduction}
\label{sec:intro}
The Standard Model (SM) of elementary particle physics is constructed based on a non-Abelian gauge theory of SU(3)$_{\rm C} \otimes$ SU(2)$_{\rm L}\otimes$U(1)$_{\rm Y}$, which has been experimentally verified with high accuracy up to the highest energies accessible to date \cite{Zyla:2020zbs}. On the other hand, there is mounting evidence from observations for the need of new physics beyond the SM, such as the dark matter, neutrino mass generation, and the matter/antimatter asymmetry. Unlike in past decades, at the moment we are lacking well-defined traces of where to look for new physics. While there are many loose ends in the SM of particle physics and cosmology, there is no clear indication at what energy scales new phenomena would appear below the Planck scale. This gives us the task of using all available tools to search for new phenomena, particularly all the discovered particles as vehicles for our searches. In particular, the scalar boson discovered in 2012~\cite{Aad:2012tfa,Chatrchyan:2012xdj}, which closely resembles the SM Higgs boson, is very well suited for beyond the Standard Model (BSM) searches~\cite{deFlorian:2016spz}. Currently, the couplings of the Higgs boson to the third-generation SM fermions have been established with a precision of $10\%-20\%$ (for an overview of the current status and projections, see {\it e.g.}~\cite{deBlas:2019rxi}). The high-luminosity phase of the LHC will study the properties of this particle and its couplings at the few-percent precision level \cite{ATL-PHYS-PUB-2018-054,CMS-PAS-FTR-18-011}. The next collider facility will most likely be a Higgs factory~\cite{EuropeanStrategyforParticlePhysicsPreparatoryGroup:2019qin,EuropeanStrategyGroup:2020pow} in the form of an electron-positron collider running at or slightly above the $ZH$ threshold, such as the International Linear Collider (ILC) \cite{Baer:2013cma, Behnke:2013lya}, the Future Circular Collider (FCC-ee) \cite{Abada:2019zxq}, the Circular Electron-Positron Collider (CEPC) \cite{CEPCStudyGroup:2018ghi}, or the Compact Linear Collider (CLIC) at higher energies \cite{Aicheler:2012bya,CLIC:2016zwp}, to achieve a per-mille level accuracy for the Higgs couplings to $W^+W^-,ZZ,\gamma\gamma,gg$ and $b\bar b, \tau\bar\tau, c\bar c$, as well as the invisible decay mode. However, there will still be parts of the Higgs sector left unexplored or measured with low precision, because they can only be probed with very rare processes for which the rates at a Higgs factory are too low, while the LHC measurements (or searches) suffer from large systematic uncertainties due to the challenging experimental environment. To this class belong the couplings to the first and second generations of fermions.

The Higgs mechanism in the SM provides the mass for all elementary particles, and thus specifies the form of their interactions associated with the electroweak symmetry breaking (EWSB). With only a single SU(2)$_L$ Higgs doublet and the minimal set of interactions at the renormalizable level, the Yukawa couplings of SM fermions are proportional to the respective particle masses, and thus exhibit a large hierarchy. It would be desirable to achieve a better precision for the measurement of the Yukawa couplings of the light fermions, since this would be a direct and important test of whether the Higgs mechanism as implemented in the SM provides the masses for all SM fermions, or whether it is a mixture of two (or more) mechanisms.
Because of the small Yukawa couplings for light fermions predicted in the SM, any small deviation due to BSM physics may result in a relatively large modification to those couplings. The next target is the Higgs-muon coupling. The recent evidence for the $H\to\mu^+\mu^-$ decay at ATLAS and CMS indicates that the Yukawa coupling is present within the predicted order of magnitude \cite{Sirunyan:2020two,Aad:2020xfq}. However, the results are not yet at the $5\sigma$ level for discovery, and thus leave room for $O(100\%)$ corrections. Also, the measurement is insensitive to the sign of the coupling. According to the current experimental projections, by the end of the high-luminosity runs of the LHC in the late 2030s the muon Yukawa coupling could be measured with an accuracy of about several tens of percent \cite{ATL-PHYS-PUB-2014-016} in a model-dependent way. This situation might not improve very much either at a Higgs factory, due to the limited rate, or at a high-energy hadron collider like the FCC-hh~\cite{Abada:2019lih,Benedikt:2018csr}, due to the systematics and the model dependence.

Thanks to the technological development~\cite{Delahaye:2019omf}, a renewed idea that has recently gathered much momentum is the option of a high-energy muon collider that could reach the multi-(tens of) TeV regime with very high luminosity \cite{Bartosik:2020xwr,Schulte:2020xvf,Long:2021upy}. It has been demonstrated in the recent literature that a high-energy muon collider has great potential for new physics searches at the energy frontier from direct $\mu^+\mu^-$ annihilation and a broad reach for new physics from the rich partonic channels \cite{Han:2020uid,Costantini:2020stv,Buttazzo:2020uzc}, as well as precision measurements for SM physics \cite{Han:2020pif} and beyond \cite{Han:2020uak,Han:2021udl,Capdevilla:2020qel,Yin:2020afe,Capdevilla:2021rwo,Liu:2021jyc,Gu:2020ldn,Huang:2021nkl,Capdevilla:2021fmj}. Of particular importance is the connection between the muon collider expectation and the tantalizing hint for new physics from the muon $g-2$ measurement \cite{Muong-2:2006rrc,Muong-2:2021ojo}. In this paper, we propose one unique measurement and BSM search in the Higgs sector which serves as a paradigm example for exploiting a high-energy muon collider, namely the direct measurement of the muon Yukawa coupling. At a high-energy $\mu^+\mu^-$ collider, one probes the coupling at a much higher energy scale, and it may reach some sensitivity to new physics with scale-dependent effects. Unlike the precision measurements at low energies, where one probes virtual quantum effects, our proposal is to directly measure the muon coupling associated with its mass generation. Our search strategy is generally applicable to other new physics searches involving final states of charged leptons and jets, and may provide general guidance for future considerations.

The rest of the paper is organized as follows. We first present a brief overview and motivation for the importance of studies of the muon Yukawa coupling in Sec.~\ref{sec:setup}. In Sec.~\ref{sec:muonhiggs}, we examine the renormalization group (RG)-induced scale dependence of the couplings. This is important to relate a measured quantity in a high-energy collider setup to the low-scale value. In Sec.~\ref{sec:MuY}, we construct an effective field theory (EFT) setting to discuss possible deviations of the muon Yukawa coupling from its SM value.
We present a few paradigm examples of modifications of the muon-Higgs coupling from its SM Yukawa value. In Sec.~\ref{sec:smeft} we then discuss different EFT parameterizations, constraints from unitarity limits in Sec.~\ref{sec:unitarity}, and consequences for ratios of different production cross sections in Sec.~\ref{sec:ratios}. This sets the theoretical frame for our phenomenological studies in Sec.~\ref{sec:Pheno}, where we analyze the collider sensitivity for the determination of the muon Yukawa coupling at a high-energy muon collider, before we conclude in Sec.~\ref{sec:summary}.

\section{Theoretical Considerations for the Muon Yukawa Coupling}
\label{sec:setup}

\subsection{Illustrations of the running of the Muon Yukawa Coupling}
\label{sec:muonhiggs}
When testing the muon-Higgs Yukawa coupling, it is necessary to properly take into account the energy-scale dependence of the coupling, which is a fundamental prediction in quantum field theory. The specific form of this running depends on the particle spectrum and the interactions of the underlying theory. In the electroweak sector of the SM, the dominant contribution to the renormalization group (RG) running is the top Yukawa coupling, followed by the strong and EW gauge interactions. For the sake of illustration, the coupled renormalization group equations (RGEs) of the Yukawa couplings $y_\mu, \, y_t$, the vacuum expectation value $v$, and the gauge couplings $g_i$ are given in the $\overline{\rm MS}$ scheme at leading order (LO), i.e.\ at one loop, by \cite{Machacek:1983tz,Machacek:1983fi,Arason:1991hu,Arason:1991ic,Arason:1992eb,Castano:1993ri,Grzadkowski:1987tf}
\begin{eqnarray}
\beta_{y_t}&=& \frac{\dd y_t}{\dd t} = \frac{y_t}{16 \pi^2} \left (\frac{9}{2}y_t^2 - 8 g_3^2 - \frac{9}{4} g_2^2 - \frac{17}{20} g_1^2 \right), \\
\beta_{y_\mu}&=& \frac{\dd y_\mu}{\dd t} = \frac{y_\mu}{16 \pi^2} \left (3y_t^2 - \frac{9}{4}(g_2^2 + g_1^2) \right), \\
\beta_{v}&=& \frac{\dd v}{\dd t} = \frac{v}{16 \pi^2} \left(\frac{9}{4} g_2^2+\frac{9}{20} g_1^2-3 y_t^2 \right), \\
\beta_{g_i} &=& \frac{\dd g_i}{\dd t} = \frac{b_i g_i^3}{16 \pi^2},
\end{eqnarray}
with $t=\ln(Q/M_Z)$ and the coefficients $b_i$ for the gauge couplings $(g_1,g_2,g_3)$ given as
\begin{align}
b_i^{\rm SM} = & (41/10,-19/6,-7) .
\end{align}
We show the LO RGE running of the muon Yukawa coupling $y_\mu$ in the SM in Fig.~\ref{fig:ym_run} (red solid curve) and of the SM vacuum expectation value $v$ in Fig.~\ref{fig:vev_run} (left axis) as functions of the energy scale $Q$. With the relation
$$m_\mu(Q)=y_\mu(Q) v(Q)/\sqrt 2,$$
we also show the running of the muon mass, $m_\mu(Q)$, in Fig.~\ref{fig:vev_run} (right axis). At the energy scales accessible at near-future colliders, the change in $y_\mu$ is observed to be rather small; for example, $y_\mu(Q=15~{\rm TeV})$ is found to be around $3\%$ smaller than $y_\mu(M_Z)$. Similarly, $v\ (m_\mu)$ runs down by about $4\%$ ($2\%$).
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\textwidth]{figs/YukawaRunning/ymu2.pdf}
\caption{LO RGE running of the muon Yukawa coupling $y_\mu$ as a function of the energy scale $Q$ in the SM (red solid). In the extra-dimensional scenarios (with inverse radius $1/R = 3$ TeV), we consider 1) Bulk: all fields propagating in the bulk, and 2) Brane: all matter fields localized to the brane.}
\label{fig:ym_run}
\end{figure}
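The SM curve in Fig.~\ref{fig:ym_run} can be reproduced by numerically integrating the one-loop RGEs quoted above. The following minimal sketch (our illustration; the $\overline{\rm MS}$ boundary values at $M_Z$ are indicative numbers, not a fit) uses SciPy:

\begin{verbatim}
# Minimal sketch: one-loop SM running of (y_t, y_mu, g1, g2, g3) with
# t = ln(Q/M_Z); g1 is GUT-normalized, matching the beta functions above.
import numpy as np
from scipy.integrate import solve_ivp

b = np.array([41/10, -19/6, -7])        # one-loop gauge coefficients

def beta(t, y):
    yt, ymu, g1, g2, g3 = y
    k = 1 / (16 * np.pi**2)
    dyt  = k * yt  * (4.5 * yt**2 - 8 * g3**2 - 2.25 * g2**2
                      - (17/20) * g1**2)
    dymu = k * ymu * (3 * yt**2 - 2.25 * (g2**2 + g1**2))
    dg1, dg2, dg3 = k * b * np.array([g1, g2, g3])**3
    return [dyt, dymu, dg1, dg2, dg3]

y0 = [0.95, 6.1e-4, 0.46, 0.65, 1.22]   # indicative values at M_Z
tmax = np.log(15e3 / 91.19)             # run up to Q = 15 TeV
sol = solve_ivp(beta, (0.0, tmax), y0, rtol=1e-8)
print("y_mu(15 TeV)/y_mu(M_Z) =", sol.y[1, -1] / y0[1])
\end{verbatim}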
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\textwidth]{figs/YukawaRunning/Mmu_vevSM2.pdf}
\caption{LO RGE running of the SM vacuum expectation value $v$ (left scale) and the muon mass $m_\mu$ (right scale) as functions of the energy scale $Q$.}
\label{fig:vev_run}
\end{figure}

New states appearing in beyond-SM scenarios can modify the running of the relevant gauge and Yukawa couplings. Generically, the beta function for a coupling $\lambda$ is given as
\begin{equation}
\beta_\lambda = \beta_\lambda^{\rm SM} + \sum_{\rm s: ~massive ~new ~states} \theta(Q-M_s) \, \times\, N_s \beta_{s,\lambda}^{\rm NP}\;,
\end{equation}
where $\beta_\lambda^{\rm SM}$ is the SM beta function, and $\beta_{s,\lambda}^{\rm NP}$ represents the contribution of a new heavy state $s$ of mass $M_s$, with $N_s$ the number of degenerate degrees of freedom. The theta function encodes the fact that the effect of new heavy states is included in the RG running once the energy scale $Q$ is above the threshold $M_s$, ignoring here for simplicity the effect of threshold corrections. In extensions of the SM, the muon-Higgs Yukawa coupling could also be affected both at the tree level and at the quantum level. In addition, the Higgs sector may show a rich flavor structure. In flavor-sensitive Higgs models, the SM prediction for the Yukawa couplings is lost, and the Yukawa couplings become free model parameters. The physical coupling of the SM Higgs to muons may be larger or smaller than its expected SM value. In principle, it could be completely absent, such that the muon mass is generated by other means. The assumption we make for the study in this paper is that the muon Yukawa coupling is a free parameter, as the mass generation for the muon is in general a mixture of the SM mechanism and a yet-unknown mechanism. A typical example is a two-Higgs-doublet model (2HDM), or more generally a multi-doublet model, in which the third-generation Yukawa couplings are generated as in the SM, while the second-generation couplings arise from a different sector (a sample implementation of such a mechanism can be found in~\cite{Altmannshofer:2015esa}). Clearly, the LHC also offers some opportunities to probe first- and second-generation Higgs Yukawa couplings to light quarks~\cite{Soreq:2016rae}, which applies mostly to the Higgs charm Yukawa coupling~\cite{Bodwin:2013gca,Kagan:2014ila,Perez:2015aoa,Bishara:2016jga}, and maybe even strange tagging is possible at a future Higgs factory~\cite{Duarte-Campderros:2018ouv}. In weakly-coupled theories, the running effects for the muon Yukawa coupling are rather moderate, similar in size to those in the SM. We will not show them separately. An interesting question is also whether there could be considerable CP violation in the Higgs Yukawa sector beyond CKM, where there are bounds, e.g., for the electron Yukawa coupling~\cite{Altmannshofer:2015qra}. Though it is perfectly possible in our setup in Sec.~\ref{sec:MuY} to discuss CP-violating operators for the muon Yukawa coupling, such a study is beyond the scope of the current paper. We add the remark that additional, flavor-dependent, higher-dimensional operators that are responsible for a deviation of the SM muon Yukawa coupling could easily lead to flavor-violating Yukawa couplings that induce $H \to e\mu$. This has been studied, e.g., in~\cite{Harnik:2012pb}; however, we do not further investigate such flavor-violating processes in this paper. The EFT setup for our study is presented in detail in the next section.
Large modifications to the running couplings compared to the SM case are not expected in four-dimensional quantum field theories, essentially due to the logarithmic nature of the running. A qualitatively different scenario is obtained, however, if there is a tower of new-physics states modifying the RGEs, asymptotically leading to a power-law running of the Yukawa coupling~\cite{Dienes:1998vh,Dienes:1998vg}. This four-dimensional description is equivalent to a theory with compactified flat extra space-like dimensions, with gauge and/or matter fields propagating in the higher-dimensional bulk. To illustrate this, we consider two scenarios of compactified flat extra dimensions~\cite{Appelquist:2000nn}: a 5D model with the extra dimension compactified on an $S_1/Z_2$ orbifold, and a 6D model with the two extra dimensions compactified on a square $T^2/Z_2$ orbifold~\cite{Appelquist:2000nn,Appelquist:2001mj}. In both models, we consider two cases: 1) all SM fields propagating in the bulk, and 2) only the SM gauge fields propagating in the bulk, with the matter fields of the SM restricted to the brane \cite{Bhattacharyya:2006ym,Cornell:2012qf,Blennow:2011mp,Kakuda:2013kba,Abdalgabar:2013oja}. The beta functions of the gauge couplings in such scenarios are given as:
\begin{align}
b_i^{\rm 5D} = & b_i^{\rm SM} + (S(t)-1) \times \left[\left(\frac{1}{10},-\frac{41}{6},-\frac{21}{2}\right)+\frac{8}{3} \eta \right] \nonumber\\
b_i^{\rm 6D} = & b_i^{\rm SM} + (\pi S(t)^2-1) \times \left[\left(\frac{1}{10},-\frac{13}{2},-10\right)+\frac{8}{3} \eta \right].
\end{align}
Here, $S(t)=e^{t}R$ counts the number of degrees of freedom, with $R$ the radius of the extra dimension and $\eta$ the number of generations of fermions propagating in the bulk. The corresponding one-loop RGEs for the Yukawa couplings $y_t,\, y_\mu$ in the extra-dimensional scenarios are as follows \cite{Cornell:2011fw,Cornell:2012qf,Abdalgabar:2013oja}
\begin{subequations}
\begin{align}
\frac{dy_t}{d t} = & \beta_{y_t}^{\rm SM} +\frac{y_t}{16\pi^2} 2(S(t)-1)\left(\frac{3}{2} y_t^2 - 8 g_3^2 - \frac{9}{4} g_2^2 - \frac{17}{20} g_1^2\right), &{\rm 5D~Brane}, \\
\frac{dy_\mu}{d t} = & \beta_{y_\mu}^{\rm SM} -\frac{y_\mu}{16\pi^2} 2(S(t)-1)\left(\frac{9}{4} g_2^2+ \frac{9}{4} g_1^2\right), &{\rm 5D~Brane}, \\
\frac{dy_t}{d t} = & \beta_{y_t}^{\rm SM} +\frac{y_t}{16\pi^2} (S(t)-1)\left(\frac{15}{2} y_t^2 - \frac{28}{3} g_3^2 - \frac{15}{8} g_2^2 - \frac{101}{120} g_1^2\right), &{\rm 5D~Bulk}, \\
\frac{dy_\mu}{d t} = & \beta_{y_\mu}^{\rm SM} +\frac{y_\mu}{16\pi^2} (S(t)-1)\left(6y_t^2-\frac{15}{8} g_2^2- \frac{99}{40} g_1^2\right), &{\rm 5D~Bulk}.
\end{align}
\end{subequations}
\begin{subequations}
\begin{align}
\frac{dy_t}{d t} = & \beta_{y_t}^{\rm SM} +\frac{y_t}{16\pi^2} 4\pi(S(t)^2-1)\left(\frac{3}{2} y_t^2 - 8 g_3^2 - \frac{9}{4} g_2^2 - \frac{17}{20} g_1^2\right), &{\rm 6D~Brane}, \\
\frac{dy_\mu}{d t} = & \beta_{y_\mu}^{\rm SM} -\frac{y_\mu}{16\pi^2} 4\pi(S(t)^2-1)\left(\frac{9}{4} g_2^2+ \frac{9}{4} g_1^2\right), &{\rm 6D~Brane}, \\
\frac{dy_t}{d t} = & \beta_{y_t}^{\rm SM} +\frac{y_t}{16\pi^2} \pi(S(t)^2-1)\left(9 y_t^2 - \frac{32}{3} g_3^2 - \frac{3}{2} g_2^2 - \frac{5}{6} g_1^2\right), &{\rm 6D~Bulk}, \\
\frac{dy_\mu}{d t} = & \beta_{y_\mu}^{\rm SM} +\frac{y_\mu}{16\pi^2} \pi(S(t)^2-1)\left(6y_t^2-\frac{3}{2} g_2^2- \frac{27}{10} g_1^2\right), &{\rm 6D~Bulk}.
\end{align}
\end{subequations}
We see from Fig.~\ref{fig:ym_run} that, in the presence of such a tower of new states, the running of $y_\mu$ can be substantially altered for both the 5D (dot-dashed curves) and 6D (dashed curves) models. We note that the effects only become significant close to or above the new-physics threshold, $1/R\sim 3$ TeV in our illustration. Above the threshold, other more direct effects from the existence of the extra dimensions may be observable as well, and a coordinated search would be beneficial. We conclude that while in the SM the energy dependence of $y_\mu$ is a minor effect, there are viable models where the value and the running of this quantity could both follow completely different patterns, as illustrated above with the extra-dimensional scenarios. In the next subsection, we will extend this direction in the EFT framework.

\subsection{EFT Description of an Anomalous Muon Yukawa Coupling}
\label{sec:MuY}
In a purely phenomenological ansatz, if small modifications of the SM Lagrangian exist, they should be detectable most easily in interactions which are accidentally suppressed in the SM, and at the same time are unaffected by large radiative corrections. The muon mass and the associated production and decay processes perfectly fit this scenario. In this spirit, we introduce representative new interactions in the form of a modification of this muon mass parameter, without referencing a specific model context. The modification is supposed to be tiny in absolute terms, but nevertheless becomes significant if compared with the SM muon Yukawa coupling, which has a numerical value of less than $10^{-3}$. A few well-motivated physics scenarios with a modification of the SM can be constructed, as we will discuss next. They may describe rather different underlying dynamics, but represent physically equivalent calculational frameworks in the perturbative regime.

\subsubsection{The Yukawa interaction in the HEFT parameterization}
\label{sec:heft}
In the Higgs Effective Theory (HEFT)~\cite{Coleman:1969sm,Callan:1969sn,Weinberg:1980wa,Appelquist:1980vg,Longhitano:1980tm,Dobado:1990zh} or non-linear chiral-Lagrangian description, the scalar sector consists of a physical singlet Higgs boson together with unphysical triplet Goldstone bosons associated with the EW symmetry breaking. The latter isolate the contributions of longitudinally polarized vector bosons. This property can be formalized as the Goldstone-boson Equivalence Theorem (GBET)~\cite{Chanowitz:1985hj,Gounaris:1986cr}:
\begin{center}
\raisebox{-.45\height}{ \includegraphics[width=.2\textwidth]{figs/equiv_theo_1.pdf}} = \raisebox{-.45\height}{ \includegraphics[width=.2\textwidth]{figs/equiv_theo_2.pdf}} $\;+\;\mathcal O \left(\frac{m}{\sqrt s}\right)$
\end{center}
Here, $V^L_k$ denotes a longitudinal EW vector boson, $\phi_k$ the corresponding Goldstone boson, and $\Psi_k$ any possible SM fermion. This denotes the fact that matrix elements for multi-boson final states including vector bosons are dominated in the high-energy limit by their longitudinal components,
\begin{equation}
\varepsilon^{\mu}_L(p)=\frac{p^{\mu}}{m}+v_p^{\mu} \quad ,
\end{equation}
where $v^\mu_p\sim \mathcal{O}(m/ \sqrt s)$ is a four-vector depending on the boson momentum.
According to~\cite{Dobado:1997jx}, the GBET in an EFT framework takes the form
\begin{align}
\mathcal M(V^L_1,\dots, V^L_r, \mathbf{\Phi} )=&\;\left(\prod_j^r \pm i \omega_j\right)\mathcal M^{0}(\varphi_{1},\dots,\varphi_{r}, \mathbf{\Phi} ) \notag \\& \qquad +\mathcal{O}\left(\frac{m}{\sqrt s}\right) +\mathcal O \left(\frac{\sqrt{s}}{\Lambda}\right)^{N+1} +\mathcal{O}\left(g, g'\right) \quad ,
\label{eq:gbeteft}
\end{align}
where $\mathcal{M}^{0}$ is the leading order of the matrix element in $g,g'$, and $\mathcal{O}\left(g, g'\right)$ denotes terms which are suppressed by $g,g'$ in comparison to this leading term. The $\omega_j$ are specific phases that differ between initial and final states within the amplitude. In this framework, the matrix elements appear not only as series expansions in the gauge couplings, but also in $\sqrt s/\Lambda$, which are usually truncated after some finite order $N$. The high-energy scale $\Lambda$ of any such bottom-up EFT corresponds to a specific scale of BSM models, e.g.\ a reference mass of a single heavy new particle. All longitudinal gauge bosons $V_i^L$ can be replaced by the corresponding Goldstone bosons $\varphi_i$ at high energies within the accuracy goal of the EFT. The results will match at the leading order in $g$ and $g'$. In the present context, we can rewrite a modified muon Yukawa coupling as a gauge-invariant operator in the HEFT Lagrangian, and conclude that this new interaction should cause extra contributions to the production of multiple vector bosons in association with the Higgs boson which rise with energy. By construction, these contributions exactly reproduce the effect of spoiled gauge cancellations in unitary gauge, as computed by automated programs. In the non-linear representation we introduce a field $U$,
\begin{equation}
U=e^{i\phi^a\tau_a/v} \quad \text{with} \quad \phi^a \tau_a=\sqrt{2}\begin{pmatrix} \frac{\phi^0}{\sqrt 2} & \phi^+\\ \phi^-& -\frac{\phi^0}{\sqrt 2} \end{pmatrix}\quad ,
\end{equation}
and its covariant derivative
\begin{equation}
D_{\mu}U=\partial_{\mu}U+igW_{\mu}U-i\frac{g'}{2}B_{\mu}U\tau_3 \quad \text{with} \quad W_{\mu}=\frac{1}{2}\tau_a W^a_{\mu}\quad ,
\end{equation}
where $\tau_a$ denote the usual Pauli matrices and $\{\phi^+,\phi^-,\phi^0\}$ are the Goldstone bosons corresponding to the gauge bosons $\{W^+,W^-,Z\}$. The most general extension of the SM Lagrangian can be written as
\begin{align}
\begin{split}
\mathcal{L}_{\text{EW}}=&-\frac{1}{2} \operatorname{tr}{W_{\mu \nu} W^{\mu \nu}} -\frac{1}{4}B_{\mu \nu} B^{\mu \nu} + \sum_{f\in\{\ell_L,\ell_R\}} i \bar f^i \slashed D f^i \\ & \qquad +\mathcal{L}_{UH}+\mathcal{L}_{\text{gauge-fix}} \quad .
\end{split}
\end{align}
The Higgs and Goldstone sector is given by
\begin{align}
\begin{split}
\mathcal L_{UH}&=\frac{v^2}{4}\operatorname{tr}[D_{\mu}U^{\dagger}D^{\mu}U] F_U(H)+\frac{1}{2}\partial_{\mu}H\partial^{\mu}H-V(H) \\ &\qquad -\frac{v}{2\sqrt{2}}\left[\bar \ell^i_L \tilde Y_\ell^{ij}(H) U(1-\tau_3)\ell^j_R+\text{h.c.}\right] \quad,
\end{split}
\end{align}
where we defined the right-handed doublets as $\ell^i_R=(\nu^i_R,e^i_R)^T$, and $i,j$ are the lepton-flavor indices. In the SM, the functions $F_U(H), \, V(H)$ and $\tilde Y^{ij}_\ell(H)$ are simple polynomials in $H/v$, which can be generalized to
\begin{align}
F_U(H)&=1+\sum_{n\geq1}f_{U,n}\left(\frac{H}{v}\right)^n ,\\
V(H)&=v^4\sum_{n\geq2}f_{V,n}\left(\frac{H}{v}\right)^n \qquad \text{and}\\
\tilde Y_\ell^{ij}(H)&=\sum_{n\geq0}\tilde Y^{ij}_{\ell,n}\left(\frac{H}{v}\right)^n \ .
\end{align}
We do not assume CP violation in this sector, hence the coefficients of these different series are real, $f_{U,n},f_{V,n},\tilde Y^{ij}_{\ell,n}\in \mathbb R$. They are general parameters that can be obtained by a matching procedure from a possible underlying physical model, and in principle can be measured in appropriate physical processes. We are primarily interested in the Higgs-lepton couplings, so we read off the mass matrix for the leptons,
\begin{equation}
\tilde M_\ell^{ij}=\frac{v}{\sqrt{2}}\tilde Y_{\ell,0}^{ij} \quad,
\end{equation}
which is non-diagonal in general. As its eigenvalues are assumed to be positive, we can perform the usual polar decomposition $\tilde M_\ell =U_L M_\ell U_R^{\dagger} $ with some unitary matrices $U_{L/R}$ and compensate this by the rotation to the physical fields $\ell_L \mapsto U_L \ell_L$ and $\ell_R \mapsto U_R \ell_R$. Furthermore, this defines $Y_{\ell,n} = U_L^\dagger \tilde Y_{\ell,n} U_R$, where, again, $n+1$ is the number of Higgs fields involved in the corresponding vertex. We will focus on the physical basis from now on. Note that these are all still matrix equations, with the (2,2)-components $Y^{2,2}_{\ell,0}:=y_{\mu},\,Y^{2,2}_{\ell,n}:=y_{n}$ and $M^{2,2}_{\ell}:=m_{\mu}$ denoting the muon. Selecting the muon term and requiring the physical muon mass to equal its observed value, we observe an effective correction of the observable Yukawa coupling by the factor
\begin{equation}\label{eq:kmu_heft}
\kappa_\mu = \frac{v}{\sqrt{2}m_{\mu}}y_{1},
\end{equation}
which, for $y_1=y_0=y_\mu$, would correspond to the SM case $\kappa_\mu=1$. A priori, the size of the coupling coefficients is unknown, as it depends on the underlying dynamics. From ``naive dimensional analysis'' \cite{Manohar:1983md,Cohen:1997rt}, one would expect the modification to scale as $y_{n}\sim y_\mu(g^2/16\pi^2)^n$, with $g\sim 1$ for a weakly coupled theory and $g\sim {\cal{O}}(4\pi)$ for a strongly coupled theory. New operators in the series expansion in $H/v$ introduce contact terms which couple the muon to $n$ Higgs or Goldstone bosons. These contact terms are proportional to $y_m$, where $m\le n$ denotes the number of Higgs bosons, and they are the leading contributions to $\mu^+\mu^-\rightarrow n \varphi$ scattering in the high-energy limit. Hence, via the GBET, a modification of $y_\mu$ is generically accompanied by new large contributions to multi-boson production in the high-energy limit.

\subsubsection{The Yukawa interaction in the SMEFT parameterization}
\label{sec:smeft}
In the SMEFT framework, the SM gauge invariance is represented in linear form, and the Higgs boson combines with the Goldstone bosons into a complex $SU(2)$ doublet. The pure effect of a modified muon Yukawa coupling can be reproduced by an infinite series of higher-dimensional operators in the SMEFT Lagrangian~\cite{Weinberg:1979sa,Abbott:1980zj,Buchmuller:1985jz,Grzadkowski:2010es}, where all coefficients are related to the original coupling modification. The results will again be identical to the unitary-gauge calculation. However, if we furthermore assume a \emph{decoupling} property of the new interactions, {\it i.e.}, that their parameters are not intrinsically tied to the electroweak scale, we should expect higher-order terms in the SMEFT series to be suppressed by powers of $v^2/\Lambda^2$, with $\Lambda$ a new heavy physics scale, such that truncation after the first term is permissible.
In that case, we have to discard the former relation between all orders, and accept that the resulting amplitudes will differ from the unitary-gauge results for an anomalous Yukawa coupling. In concrete terms, in a decoupling new-physics scenario we expect anomalous production of multiple vector bosons to be accompanied by anomalous production of multiple Higgs bosons. The clean environment of a muon collider is optimally suited to separate such final states irrespective of their decay modes, and thus to guide model building in either direction, depending on the pattern actually observed in data. The formalism set up here is very similar to the one used in~\cite{Falkowski:2020znk} for searches for deviations in the charm and strange Yukawa couplings in multi-boson production at the LHC and FCC-hh. In the linear representation of the Higgs doublet,
\begin{equation}
\varphi=\frac{1}{\sqrt{2}}\begin{pmatrix} \sqrt 2 \phi^+\\ v+H+i \phi^0 \end{pmatrix}\quad ,
\end{equation}
the most general bottom-up extension of the SM Lagrangian,
\begin{align}
\begin{split}
\mathcal{L}_{\text{EW}}=&-\frac{1}{2} \operatorname{tr}{W_{\mu \nu} W^{\mu \nu}}-\frac{1}{4}B_{\mu \nu} B^{\mu \nu} + (D_\mu \varphi)^{\dagger}(D^\mu \varphi)+\mu^2 \varphi^{\dagger}\varphi-\frac{\lambda}{2}( \varphi^{\dagger}\varphi)^2\\ &+ \sum_{f\in\{\ell_L,e_R\}} i \bar f^i \slashed D f^i -\left(\bar \ell_L^i \tilde Y_{\ell}^{ij} \varphi e_R^j + \text{h.c.} \right) + \mathcal{L}_{\text{gauge-fix}}
\end{split}
\end{align}
that leads to a modification of the Yukawa coupling reads
\begin{equation}\label{eq:EFT}
\mathcal L =\mathcal L_{\text{EW}}+\left [ \sum_{n=1}^N \frac{\tilde C^{(n)ij}_{\ell\varphi}}{\Lambda^{2n}}(\varphi^{\dagger}\varphi)^n{\bar\ell}^i_L \varphi {e^j}_R + \text{h.c.}\right ] \quad.
\end{equation}
Operators of higher mass dimension are, as usual, suppressed by a large scale $\Lambda$ that can be understood as an energy cutoff for the validity of the theory, as it leads to an expansion of the scattering matrix elements in ${\sqrt s}/{\Lambda}$. Again, we do not consider CP violation, hence the Wilson coefficients are real, $\tilde C^{(n)}_{\ell\varphi}\in \mathbb R$. They can be obtained by a matching procedure from an underlying physical model, and in principle can be measured.\footnote{One rather measures form factors, which are linear combinations of the Wilson coefficients.} For further calculations, we absorb the scale factors $1/\Lambda^{2n}$ into the Wilson coefficients. We can read off the (non-diagonal) mass matrix for the charged leptons,
\begin{equation}
\tilde M_\ell^{ij}=\frac{v}{\sqrt 2}\left(\tilde Y_{\ell}^{ij}-\sum_{n=1}^N \tilde C^{(n)ij}_{\ell\varphi} \frac{v^{2n}}{2^n}\right) \quad.
\end{equation}
In the same way as for the non-linear representation, we can diagonalize the mass matrix by redefinitions of the physical fields $e_L \mapsto U_L e_L$, $e_R \mapsto U_R e_R$. This defines $Y_\ell=U_L^{\dagger} \tilde Y_\ell U_R$ and $C^{(n)}_{\ell\varphi}=U_L^{\dagger} \tilde C^{(n)}_{\ell\varphi} U_R$. As already discussed for the non-linear case, the operator coefficients $C^{(n)}_{\ell\varphi}$ can shift the muon Yukawa coupling away from its SM value. Because of its intrinsically small value, a moderate new-physics contribution could lead to a drastic effect, driving it to zero or reversing its sign.
The extreme case of a vanishing muon Yukawa coupling has the significant consequence that multi-Higgs production, $\mu^+\mu^-\rightarrow H^M$, would be absent at tree level, while production of up to $k\in\{1,\dots,M-1\}$ Higgs bosons associated with $M-k$ vector bosons would be allowed. As a paradigm example, we show how to embed this in our SMEFT framework: we require all lepton couplings to $k$ Higgs bosons, $\Lambda_{(k)}$, $k\in\{1,\dots,M-1\}$, to vanish, while the measured muon mass $m_{\mu}$ is fixed as an input. This leads to the conditions
\begin{align}
\label{eq:SMEFT_L}
M_\ell &=\frac{v}{\sqrt 2}\left[ Y_\ell-\sum_{n=1}^{M-1} C^{(n)}_{\ell\varphi} \frac{v^{2n}}{2^n} \right] \quad ,\\
\Lambda_{(k)}& :=-i\frac{k!}{\sqrt 2}\left[Y_\ell\delta_{k,1}-\sum_{n=n_k}^{M-1} C^{(n)}_{\ell\varphi} \begin{pmatrix} 2n+1\\ k \end{pmatrix} \frac{v^{2n+1-k}}{2^n} \right] = 0 \quad ,
\end{align}
where $n_k=\operatorname{max}(1,\lceil \frac{k-1}{2}\rceil)$. For the general case, we define the following modification of the SM Yukawa coupling, still matrix-valued in flavor space, as
\begin{equation}
K_\ell =1-\frac{v}{\sqrt 2} M_\ell^{-1}\sum_{n=1}^{M-1} C^{(n)}_{\ell\varphi} \frac{n v^{2n}}{2^{n-1}} \quad .
\end{equation}
Again, we can project onto the muon via $Y^{2,2}_{\ell}:=y_{\mu},\,C^{(n)2,2}_{\ell\varphi}:=c^{(n)}_{\ell\varphi}, M^{2,2}_{\ell}:=m_{\mu}$, as well as $K^{2,2}_{\ell}:=\kappa_{\mu}$. As usual, we will consider the linear SMEFT expansion up to the first non-trivial order, which adds to the dimension-4 SM Yukawa coupling operator, $\mathcal{L}_{\text{Yuk.}} \, = \; -(\bar\ell_L Y_\ell e_R)\varphi$, at dimension 6 a single operator that modifies the static Higgs coupling to leptons:
\begin{equation}
\label{eq:O_ephi-6}
\mathcal{O}_{\ell\varphi} = C_{\ell\varphi}(\varphi^\dagger\varphi)(\bar\ell_L e_R)\varphi \ .
\end{equation}
Here, both $Y_\ell$ and $C_{\ell\varphi}$ are matrices in lepton-flavor space. On dimensional grounds, $C_{\ell\varphi}\sim 1/\Lambda^2$, where $\Lambda$ is the scale at which new physics sets in. Inserting the Higgs vev, we obtain at dimension 4 the SM value of the lepton mass matrix, $M_\ell^{(4)} = \frac{v}{\sqrt2}Y_\ell$, while at dimension 6 we get a modified mass matrix
\begin{equation}
M_\ell^{(6)} = \frac{v}{\sqrt2}\left(Y_\ell - \frac{v^2}{2}C_{\ell\varphi}\right) .
\end{equation}
Specializing to the muon term and requiring the physical muon mass to equal its measured value, we observe an effective modification of the observable Yukawa coupling by the factor
\begin{equation}
\label{eq:kmu_smeft}
\kappa_\mu^{(6)} = 1 - \frac{v^3}{\sqrt2\,m_\mu}c_{\ell\varphi}^{(1)}.
\end{equation}
Expanding the Higgs field, the new operator induces contact terms which couple the muon to $n=1, 2$, or 3 Higgs or Goldstone bosons. The contact terms are all proportional to the operator coefficient $c_{\ell\varphi}^{(1)}$, either scalar or pseudoscalar. Squaring this interaction, we obtain local contributions to $\mu^+\mu^-\to n\varphi$ scattering, in analogy with the HEFT description. The physical final states are Higgs bosons or longitudinal $W,Z$ gauge bosons. As we will discuss in more detail in Sec.~\ref{sec:ratios}, the $d=6$ contributions to their production cross sections with multiplicity $n=3$ rise with energy, $\sigma \propto s$, while the SM contribution falls off like $1/s$. There is no interference, since -- for these final states -- the SM requires a vector exchange while the new contact term is scalar.
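To get a feeling for the numbers in \eqref{eq:kmu_smeft}, the following short numerical check (our illustration, with indicative inputs $v\approx 246$ GeV and $m_\mu\approx 0.106$ GeV) evaluates the Wilson coefficient required for a given $\kappa_\mu$ and the corresponding naive scale $1/\sqrt{c_{\ell\varphi}^{(1)}}$:

\begin{verbatim}
# Minimal numerical check of the dimension-6 relation for kappa_mu.
from math import sqrt

v, m_mu = 246.22, 0.10566                # GeV (indicative values)

def c1_for_kappa(kappa):
    """c_{l phi}^{(1)} in GeV^-2 reproducing a given kappa_mu."""
    return (1.0 - kappa) * sqrt(2.0) * m_mu / v**3

c = c1_for_kappa(0.0)                    # Yukawa coupling switched off
print(f"c1 = {c:.2e} GeV^-2")            # ~1.0e-08 GeV^-2
print(f"1/sqrt(c1) = {1/sqrt(c)/1e3:.1f} TeV")   # ~10 TeV
\end{verbatim}

For $\kappa_\mu=0$ this gives $c_{\ell\varphi}^{(1)}\approx 10^{-8}~{\rm GeV}^{-2}$, i.e.\ a scale of about 10 TeV, consistent with the estimate in \eqref{eq:bound} below.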
We obtain a deviation from the SM prediction which is determined by the EFT contribution alone, and which becomes leading above some threshold that depends on $\kappa^{(6)}_\mu-1$. The decomposition of the anomalous contribution into particle types ($WWZ$, $WWh$, etc.) is fixed by electroweak symmetry and the particular SMEFT operator content, such that the exclusive channels are related by simple rational factors beyond the threshold where the new-physics part starts to dominate the production rates. This will be elaborated in Sec.~\ref{sec:ratios}. If the correction were large enough to render $\kappa_\mu=0$, we would obtain the unitarity bound for $d=6$, {\it i.e.}, three-boson emission, as discussed in the next subsection. Generally speaking, the modification of the SM Yukawa coupling could reach an order of $100\%$ if $c_{\ell\varphi}^{(1)} \sim 0.1/(10v)^2$. We emphasize that these two sample scenarios -- a pure modified Yukawa coupling, and a modified Yukawa coupling combined with truncation of the SMEFT series -- are to be understood as mere representatives of a potential new class of SM modifications that are difficult to observe at lower energy. As our results indicate, there is a great redundancy in the analysis of exclusive multi-boson final states, which should translate into significant discrimination power regarding more detailed models of the Higgs-Yukawa sector beyond the SM. If we translate an experimental bound on $\Delta\kappa_\mu$ to the SMEFT coefficient $c^{(1)}\sim g/\Lambda^2$, we obtain a bound on the scale of new physics as
\begin{equation}
\Lambda >10\ {\rm TeV}\sqrt{\frac{g}{\Delta\kappa_\mu}}\quad.
\label{eq:bound}
\end{equation}

\subsubsection{Unitarity bounds on a nonstandard Yukawa sector}
\label{sec:unitarity}
In the SM, the high-energy asymptotics of the multi-boson production cross sections universally fall off with rising energy, manifesting themselves in delicate gauge cancellations which become huge at high energies. A modification of the muon Yukawa coupling away from the SM prediction would show up as a spoiling of such cancellations, and thus eventually cause specific scattering amplitudes to rise again, without limits. While such a unitary-gauge framework does not, in theory, do justice to the built-in symmetries of the SM, it is nevertheless the baseline framework for any tree-level evaluations such as the ones that we use in this work. In Ref.~\cite{Maltoni:2001dc}, generic models have been investigated where the leading contribution to a fermion mass originates from a dimension-$d$ EFT operator that couples the fermion to the SM Higgs field. The limit $d\to\infty$ corresponds to the case of no Higgs-fermion coupling, as described above. Using the GBET, they computed the energy scale $\Lambda_d$ where unitarity is violated by multiple emission of Goldstone bosons, representing longitudinally polarized weak vector bosons, and Higgses,
\begin{equation}
\Lambda_d = 4\pi\kappa_d\left(\frac{v^{d-3}}{m_f}\right)^{1/(d-4)}, \quad\text{where}\quad \kappa_d = \left(\frac{(d-5)!}{2^{d-5}(d-3)}\right)^{1/(2(d-4))}.
\end{equation}
For any given $d>4$, the most relevant bound corresponds to a final state that consists of $n=d-3$ Goldstone or Higgs bosons in total. For $m_f=m_\mu$ and $d=6,8,10$, the numeric values of the unitarity bound are $95\,\text{TeV}$, $17\,\text{TeV}$, and $11\,\text{TeV}$, respectively. For $d\geq 8$, these bounds lie within the energy range that is accessible at a future muon collider.
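These benchmark numbers are easy to verify; a minimal numerical check of the unitarity-violation scale $\Lambda_d$ (our illustration, with indicative inputs $v\approx 246$ GeV and $m_\mu\approx 0.106$ GeV) reads:

\begin{verbatim}
# Minimal check of the unitarity-violation scale Lambda_d quoted above.
from math import factorial, pi

v, m_mu = 246.22, 0.10566                      # GeV (indicative values)

def Lambda_d(d, m_f=m_mu):
    kappa = (factorial(d - 5) / (2**(d - 5) * (d - 3)))**(1 / (2 * (d - 4)))
    return 4 * pi * kappa * (v**(d - 3) / m_f)**(1 / (d - 4))

for d in (6, 8, 10):
    print(d, round(Lambda_d(d) / 1e3), "TeV")  # -> 95, 17, 11 TeV
\end{verbatim}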
These bounds imply large amounts of observable multi-boson production. The strong suppression of the corresponding SM processes enables a study already significantly below those upper bounds. Furthermore, we expect observable effects even if only a fraction of the muon mass is due to the new-physics contributions that are parameterized by those operators. In the $d\to\infty$ case, the multiplicity of extra Goldstone-boson production becomes unbounded, and the unitarity limit formally drops towards the original electroweak scale~\cite{Maltoni:2001dc}. Even if we account for finite vector-boson masses, such a scenario should be qualified as strongly interacting, and finite-order predictions in the multi-TeV range become invalid. For this reason, we consider lower-dimensional operators in the SMEFT expansion individually. The presence of extra Higgs bosons in the gauge-invariant SMEFT operators of fixed dimension delays the potential onset of new (strong) interactions to higher energy.

\subsubsection{Multi-boson production and cross section ratios}
\label{sec:ratios}
Obviously, the most direct and model-independent probe of the muon-Higgs coupling would be the $s$-channel resonant production
$$\mu^+\mu^-\to H.$$
This was the motivation for a muon-collider Higgs factory~\cite{Barger:1995hr,Barger:1996jm}. This process would put an extremely high demand on the collider beam quality to resolve the narrow width of the Higgs boson, and on the integrated luminosity. Off the resonance at higher energies, one could consider studying this coupling by utilizing the process of radiative return~\cite{Chakrabarty:2014pja}. Although the expected cross sections for multiple Higgs production, $\mu^+\mu^- \to HH$ and $HHH$, are quite small, as shown later, they receive a power enhancement $E/\Lambda$ of the effective coupling $\kappa_\mu$ if a new interaction like the dimension-6 operator, Eq.~\eqref{eq:O_ephi-6}, is present. If an analogous dimension-8 operator is present with a Wilson coefficient $c_{\ell\varphi}^{(2)}\sim 1/\Lambda^4$, the physical muon mass and the Yukawa couplings are given by
\begin{align}
\label{eq:d=6+8-m}
m_\mu^{(8)} &= \frac{v}{\sqrt2}\left(y_{\mu} - \frac{v^2}{2}c^{(1)}_{\ell\varphi} - \frac{v^4}{4}c^{(2)}_{\ell\varphi}\right), \\
\label{eq:d=6+8-l}
\lambda_\mu^{(8)} &= \phantom{\frac{v}{\sqrt2}} \left(y_{\mu} - \frac{3v^2}{2}c^{(1)}_{\ell\varphi} - \frac{5v^4}{4}c^{(2)}_{\ell\varphi}\right).
\end{align}
The dimension-8 operator causes a rise of the $n$-boson production cross sections, and ultimately a saturation of tree-level unitarity, for up to $n=5$, as discussed in the previous section. Depending on the relative size of the individual contributions at a given energy, the ratios of the individual multi-boson channels are determined by either $Y_\ell$, $C^{(1)}_{\ell\varphi}$ or $C^{(2)}_{\ell\varphi}$. Final states with more Higgs bosons receive direct contributions which rapidly rise with energy as $(E/\Lambda)^n$. The operators introduced in Eqs.~\eqref{eq:EFT} and \eqref{eq:d=6+8-m}$-$\eqref{eq:d=6+8-l} induce contact terms, schematically written as
\begin{center}
\raisebox{-.45\height}{ \includegraphics[width=.25\textwidth]{figs/muon_multiboson.pdf} } $\approx$ \raisebox{-.45\height}{ \includegraphics[width=.25\textwidth]{figs/muon_multiboson_contact.pdf} }
\end{center}
which are dominant in the high-energy limit, as there is no suppression in $\sqrt{s}$ from propagator denominators. Let us denote the Feynman rules for a multi-boson final state $X$ as
\raisebox{-.45\height}{ \includegraphics[width=.25\textwidth]{figs/muon_multiboson_contact.pdf} } \qquad \right\} \quad X_i : \qquad i \;C_{X_i} (P_L \pm P_R) \quad ,$ \end{center} where $C_{X_i}$ is a linear combination of Wilson coefficients, and $i$ labels all possible final states for a given multiplicity. The sign in $(P_L \pm P_R)$ depends on the number of Goldstone bosons $\phi^0$ in the final state and does not play any role for the following argument. The spin-averaged matrix element reads ($k_i, i=1,2$ are the two muon momenta, $s=2 k_1\cdot k_2$; we ignore the muon mass in the kinematics of the matrix element) \begin{align*} \overline {|\mathcal A_{X_i}|^2} &=\frac{1}{4} |C_{X_i}|^2\sum_{s_1,s_2} \bar v_{s_1}(k_1)(P_L\pm P_R)u_{s_2}(k_2)\bar u_{s_2}(k_2)(P_R\pm P_L)v_{s_1}(k_1) \\ & =|C_{X_i}|^2 \times ( k_1 \cdot k_2 \mp m_{\mu}^2) \approx \ \frac{|C_{X_i}|^2 s}{2} \quad . \end{align*} As the spin-averaged matrix element in that approximation is constant, the integration over the phase space is trivial and yields a cross section \begin{equation} \sigma^{X_i}=\frac{(2\pi)^4} {2s} \; \overline{|\mathcal A_{X_i}|^2} \; \left(\prod_{j\in J_{X_i}}\frac{1}{n_j !}\right)\,\Phi_{M}(k_1+k_2;p_1,\dots ,p_{M}) \quad , \end{equation} where $\Phi_{M}(k_1+k_2;p_1,\dots ,p_{M})$ is the $M$-particle phase-space volume and $J_{X_i}$ is the set of indistinguishable particle species in the final state $X_i$, with multiplicities $n_j$ for species $j\in J_{X_i}$. As we study the limit of very high energies, we neglect all particle masses, and the phase-space volume will be the same for all final states $X_i$. In the center-of-mass (CMS) system (cf.~\cite{Kleiss:1985gy}), the $M$-particle phase space is given by ($\Gamma$ is the Euler gamma function) \begin{align} \Phi_{M}(k_1+k_2;p_1,\dots ,p_{M})=\frac{1}{(2\pi)^{4M}}\left(\frac{\pi}{2}\right)^{M-1}\frac{s^{M-2}}{\Gamma(M) \Gamma(M-1)} \quad . \end{align} In order to study the effects from specific operator coefficients, it is beneficial to look into ratios of cross sections with respect to a certain reference cross section for a specific exclusive final state of the same multiplicity. For such cross-section ratios we find \begin{equation} R^{X_i}:=\frac{\sigma^{X_i}}{\sigma^{X_{\text{ref}}}}=\frac{|C_{X_i}|^2\left(\prod_{j\in J_{X_i}}\frac{1}{n_j !}\right)}{|C_{X_{\text{ref}}}|^2\left(\prod_{j\in J_{X_{\text{ref}}}}\frac{1}{n_j !}\right)} \quad. \end{equation} \begin{table} \begin{center} \begin{tabular}{ c||c|c|c|c||c|c} \hline & \multicolumn{6}{|c}{$\Delta\sigma^{X}/\Delta\sigma^{W^+W^-}$}\\ \hline & \multicolumn{4}{|c||}{SMEFT} & \multicolumn{2}{c}{HEFT} \\ \hline $X$ &dim$_6$ & dim$_8$ & dim$_{6,8}$ & dim$_{6,8}^{\text{matched}}$ & dim$_\infty$ & dim$_\infty^{\text{matched}}$\\ \hline $W^+W^-$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1 $\\ $ZZ$ & $1/2$ & $1/2$ & $1/2$ & $1/2$ & $1/2$ & $1/2$\\ \hline $ZH$ & $1$ & $1/2$ & $1$ & $1$ & $R^{\text{HEFT}}_{(2),1}$ & $1$\\ $HH$ & $9/2$ & $25/2$ & $ R^{\text{SMEFT}}_{(2),1}/2$ & $0$ & $2\, R^{\text{HEFT}}_{(2),2}$ & $0$\\ \hline \end{tabular} \end{center} \caption{Ratios of final-state cross-section deviations in diboson production, assuming that the leading muon-Yukawa contribution originates from various combinations of $d=6$ and $d=8$ operators in SMEFT, or from a direct contribution in the HEFT, respectively. The term ``matched'' indicates the matching to a model with a vanishing muon-Yukawa coupling. See the text for details.
The coefficients $R_{(2),i}$ are defined in~\eqref{eq:Rin2}.} \label{tab:ratios-2} \end{table} In the following, we discuss ratios of deviations of production cross sections from their SM values for final-state multiplicities $n=2,3,4$. For each multiplicity, the cross-section deviations $\Delta\sigma^X$ for different final states $X$ will be normalized with respect to a particular exclusive reference final state, which is $W^+W^-$ for dibosons, $W^+W^-H$ for tribosons, and $W^+W^-HH$ for four bosons, respectively. The cross sections are calculated in the GBET approximation for massless Goldstone bosons; for longitudinal $W^\pm$ and $Z$ boson final states they become exact in the limit where both their masses and the SM contributions to these cross sections can be neglected. We consider these ratios for different EFT scenarios, namely for truncating the SMEFT series of higher-dimensional operators at dimension $d=6,8,10$, respectively, as well as for the non-linear HEFT case. In detail, in Table~\ref{tab:ratios-2} we consider the diboson final states for the cases of a pure $d=6$ contribution (dim$_6$), a pure $d=8$ contribution (dim$_8$), a mixed contribution (dim$_{6,8}$), and the case where the $d=6$ and $d=8$ operators are tuned to cancel the leading-order Yukawa coupling according to~\eqref{eq:d=6+8-m}, \eqref{eq:d=6+8-l}, denoted dim$_{6,8}^\text{matched}$. For the non-linear HEFT setup, the first column (dim$_\infty$) takes into account the full tower, in principle, though only the lowest dimension contributes at tree level due to the $n$-arity of the vertex. The last column (dim$_\infty^\text{matched}$) is the matched case again with a vanishing Yukawa coupling, calculated by taking into account a sufficiently large number of terms corresponding to the linear setup. The list of processes includes direct production of up to two Higgs bosons. The non-rational coefficients in this and the following tables are expressed in terms of ratio coefficients, $R^{\text{HEFT/SMEFT}}_{(N),i}$, where $N$ is the multiplicity of the boson final state, and $i$ labels the contribution from higher-dimensional operators to the given multiplicity with increasing operator order, \begin{align} \label{eq:Rin2} R^{\text{SMEFT}}_{(2),1}&=\left(\frac{5v^2 c^{(2)}_{\ell\varphi}+c^{(1)}_{\ell\varphi}}{v^2 c^{(2)}_{\ell\varphi}+c^{(1)}_{\ell\varphi}}\right)^2, & R^{\text{HEFT}}_{(2),1}&=\left(\frac{y_{1}}{y_{\mu}}\right)^2, & R^{\text{HEFT}}_{(2),2}&=\left(\frac{y_2}{y_{\mu}}\right)^2 \ . \end{align} Here, the $c^{(i)}_{\ell\varphi}$ operator coefficients of SMEFT have been introduced above in~\eqref{eq:d=6+8-m}, \eqref{eq:d=6+8-l}, while by $y_i$ we have denoted the Yukawa couplings of the muon to $i+1$ Higgs bosons in the HEFT parameterization. In SMEFT, if the dim$_6$ contributions dominate, then $R^{\rm SMEFT}\sim 1$. On the other hand, the dim$_8$ contributions can modify this behavior. In HEFT, $R^{\rm HEFT}$ could be larger than 1 in a strongly coupled theory. In addition, those anomalous contributions will lead to enhancements at high energies.
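To make the counting behind these ratios explicit, the following minimal Python sketch evaluates the massless $M$-particle phase-space volume and the same-multiplicity ratio $R^{X_i}$. The relative contact couplings $(1,1,3)$ for $W^+W^-$, $ZZ$, and $HH$ below are illustrative assumptions chosen to reproduce the dim$_6$ column of Table~\ref{tab:ratios-2}; they are not derived here.
\begin{verbatim}
from math import pi, gamma, factorial, prod

def phase_space_volume(M, s):
    # Massless M-particle phase-space volume in the CMS frame, as above
    return (pi/2)**(M-1) * s**(M-2) / ((2*pi)**(4*M) * gamma(M) * gamma(M-1))

def R(C, ns, C_ref, ns_ref):
    # Same-multiplicity ratio: couplings squared times 1/n_j! symmetry factors
    sym = lambda counts: prod(1.0 / factorial(n) for n in counts)
    return (C**2 * sym(ns)) / (C_ref**2 * sym(ns_ref))

print(phase_space_volume(2, (10.0e3)**2))  # common factor, drops out of R
print(R(1, [1, 1], 1, [1, 1]))             # W+W- -> 1
print(R(1, [2],    1, [1, 1]))             # ZZ   -> 1/2 (identical particles)
print(R(3, [2],    1, [1, 1]))             # HH   -> 9/2
\end{verbatim}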
\begin{table} \begin{center} \begin{tabular}{ c||c|c|c|c||c|c} \hline & \multicolumn{6}{|c}{$\Delta\sigma^{X}/\Delta\sigma^{W^+W^-H}$}\\ \hline & \multicolumn{4}{|c||}{SMEFT} & \multicolumn{2}{c}{HEFT} \\ \hline $\mu^+\mu^-\to X$ & dim$_6$ & dim$_8$ & dim$_{6,8}$ & dim$^{\text{matched}}_{6,8}$ & dim$_\infty$ & dim$^{\text{matched}}_\infty$ \\ \hline $WWZ$ & $1$ & $1/9$ & $R^{\text{SMEFT}}_{(3),1}$ & $1/4$ & $ R^{\text{HEFT}}_{(3),1}$/9 & $1/4$\\ $ZZZ$ & $3/2$ & $1/6$ & $3 \, R^{\text{SMEFT}}_{(3),1}/2$ & $3/8$ & $R^{\text{HEFT}}_{(3),1}/6 $& $3/8$ \\ \hline $WWH$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$\\ $ZZH$ & $1/2$ & $1/2$ & $1/2$ & $1/2$ & $1/2$& $1/2$\\ $ZHH$ & $1/2$ & $1/2$ & $1/2$ & $1/2$ & $2\, R^{\text{HEFT}}_{(3),2}$ & $1/2$\\ $HHH$ & $3/2$ & $25/6$ & $3\, R^{\text{SMEFT}}_{(3),2}/2$ & $75/8$ & $6\, R^{\text{HEFT}}_{(3),3}$ & $0$\\ \hline \end{tabular} \end{center} \caption{Same as Tab.~\ref{tab:ratios-2} but for triboson production. The coefficients $R_{(3),i}$ are listed in~\eqref{eq:Rin3i}-\eqref{eq:Rin3f}. } \label{tab:ratios-3} \end{table} \begin{table} \begin{center} \begin{tabular}{ c||c|c|c|c||c|c } \hline & \multicolumn{6}{|c}{$\Delta\sigma^{X}/\Delta\sigma^{WWHH}$} \\ \hline & \multicolumn{4}{|c||}{SMEFT} & \multicolumn{2}{c}{HEFT} \\ \hline $\mu^+\mu^-\to X$ &dim$_{6,8}$ & dim$_{10}$ & dim$_{6,8,10}$ & dim$^\text{matched}_{6,8,10}$ & dim$_\infty$ & dim$^\text{matched}_\infty$ \\ \hline $WWWW$ & $2/9$ & $2/25$ & $2 \, R^{\text{SMEFT}}_{(4),1}/9$ & $1/2$ & $R^{\text{HEFT}}_{(4),1}/18$ & $1/2$\\ $WWZZ$ & $1/9$ & $1/25$ & $ R^{\text{SMEFT}}_{(4),1}/9$ & $1/4$ & $ R^{\text{HEFT}}_{(4),1}/36$ & $1/4$\\ $ZZZZ$ & $1/12$ & $3/100$ & $ R^{\text{SMEFT}}_{(4),1}/12$ & $3/16$ & $ R^{\text{HEFT}}_{(4),1}/48$ & $3/16$\\ \hline $WWZH$ & $2/9$ & $2/25$ & $2 \, R^{\text{SMEFT}}_{(4),1} /9$ & $1/2$ & $R^{\text{HEFT}}_{(4),2}/8$ & $1/2$\\ $WWHH$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$\\ $ZZZH$ & $1/3$ & $3/25$ & $R^{\text{SMEFT}}_{(4),1}/3$ & $3/4$ & $R^{\text{HEFT}}_{(4),2}/12$ & $3/4$\\ $ZZHH$ & $1/2$ & $1/2$ & $1/2$ & $1/2$ & $1/2$ & $1/2$\\ $ZHHH$ & $1/3$ & $1/3$ & $1/3$ & $1/3$ & $3\, R^{\text{HEFT}}_{(4),3}$ & $1/3$\\ $HHHH$ & $25/12$ & $49/12$ & $25\, R^{\text{SMEFT}}_{(4),2}/12$ & $1225/48$ & $12\, R^{\text{HEFT}}_{(4),4}$ & $0$\\ \hline \end{tabular} \end{center} \caption{Same as Tabs.~\ref{tab:ratios-2} and \ref{tab:ratios-3} but for four-boson production. The coefficients $R_{(4),i}$ are listed in~\eqref{eq:Rin4i}-\eqref{eq:Rin4f}.} \label{tab:ratios-4} \end{table} The cross-section ratios in the case of triboson production are summarized in Table~\ref{tab:ratios-3}. Here, all exclusive final-state production cross sections are normalized to the $W^+W^-H$ final state, which is the one whose phenomenology we will study in detail in Sec.~\ref{sec:Pheno}. As for the case of diboson production, we consider scenarios with a pure $d=6$ contribution (dim$_6$), a pure $d=8$ contribution (dim$_8$), a mixed contribution (dim$_{6,8}$), and for the case where the $d=6$ and $d=8$ operators are tuned to cancel the leading-order Yukawa coupling according to~\eqref{eq:d=6+8-m}, \eqref{eq:d=6+8-l} (dim$_{6,8}^\text{matched}$), respectively. Exclusive final states contain up to three physical Higgs bosons. 
For the triboson case, we define the following ratio coefficients for the SMEFT and HEFT case, respectively, as \begin{align} \label{eq:Rin3i} R^{\text{SMEFT}}_{(3),1}&=\left(\frac{v^2 c^{(2)}_{\ell\varphi}+c^{(1)}_{\ell\varphi}}{3v^2 c^{(2)}_{\ell\varphi}+c^{(1)}_{\ell\varphi}}\right)^2, & R^{\text{SMEFT}}_{(3),2}&=\left(\frac{5v^2 c^{(2)}_{\ell\varphi}+c^{(1)}_{\ell\varphi}}{3v^2 c^{(2)}_{\ell\varphi}+c^{(1)}_{\ell\varphi}}\right)^2 \end{align} and \begin{align} \label{eq:Rin3f} R^{\text{HEFT}}_{(3),1}&=\left(\frac{y_{\mu}}{y_1}\right)^2, & R^{\text{HEFT}}_{(3),2}&=\left(\frac{y_2}{y_1}\right)^2, & R^{\text{HEFT}}_{(3),3}&=\left(\frac{y_{3}}{y_1}\right)^2 \qquad . \end{align} We recall that at multiplicity $n=4$ and beyond, the dimension-6 SMEFT operator does not directly contribute in the GBET approximation, so we choose to include the effects of the analogous dimension-8 and dimension-10 operators in the table for the production of quartic final states. In Table~\ref{tab:ratios-4}, we display the ratios of four-particle final state cross sections; definitions and conventions are analogous to those in Table~\ref{tab:ratios-3}. The ratio coefficients for the four-boson final states are given by \begin{align} \label{eq:Rin4i} R^{\text{SMEFT}}_{(4),1}&=\left(\frac{3v^2 c^{(3)}_{\ell\varphi}+2c^{(2)}_{\ell\varphi}}{5v^2 c^{(3)}_{\ell\varphi}+2c^{(2)}_{\ell\varphi}}\right)^2, & R^{\text{SMEFT}}_{(4),2}&=\left(\frac{7v^2 c^{(3)}_{\ell\varphi}+2c^{(2)}_{\ell\varphi}}{5v^2 c^{(3)}_{\ell\varphi}+2c^{(2)}_{\ell\varphi}}\right)^2 \end{align} and \begin{align} \label{eq:Rin4f} R^{\text{HEFT}}_{(4),1}&=\left(\frac{y_{\mu}}{y_2}\right)^2, & R^{\text{HEFT}}_{(4),2}&=\left(\frac{y_1}{y_2}\right)^2, & R^{\text{HEFT}}_{(4),3}&=\left(\frac{y_3}{y_2}\right)^2, & R^{\text{HEFT}}_{(4),4}&=\left(\frac{y_4}{y_2}\right)^2. \end{align} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{figs/BB.pdf} \caption{The cross sections of diboson production at a $\mu^+\mu^-$ collider as a function of the c.m. energy $\sqrt{s}$. The solid and dotted lines are for the direct annihilation with muon Yukawa coupling as $\kappa_\mu=1$ and $\kappa_\mu=0~(2)$ (hardly visible), respectively. The dashed rising curves are the (charged) vector boson fusions (VBF), $\mu^+\mu^-\to\nu_{\mu}\bar{\nu}_{\mu} X$, calculated using the fixed-order (FO) approach with a cut on the invariant mass of $\nu_{\mu}\bar{\nu}_{\mu} $ pair $M_{\nu_{\mu}\bar{\nu}_{\mu}} > 150 \,{\rm GeV}$. All calculations are carried out with {\sc Whizard~2.8.5}.} \label{fig:2B} \end{figure} To numerically cross check the analytical results for the cross-section ratios, we implemented the extreme case of the SM with a vanishing as well as with a $\kappa$-rescaled muon Yukawa coupling, respectively, within the same Monte Carlo (MC) framework that we used for our phenomenological study in Sec.~\ref{sec:Pheno} for multi-boson final states $X_i$ for the class of processes $\mu^+\mu^-\rightarrow W^+W^-H^{M-2}$. Our numerical MC results agree perfectly with the ratios given in Tables \ref{tab:ratios-2}, \ref{tab:ratios-3}, and~\ref{tab:ratios-4}, thereby validating our SMEFT implementation. In summary, the common feature of all versions of the modified Yukawa sector is a proliferation of multi-boson production at high energy. The anomalous contributions do not interfere with SM production due to the mismatch in helicity. 
The dimensionality of the anomalous interactions determines the particle multiplicity in the energy range where the new interactions start to dominate over SM particle production. The breakdown into distinct final states allows for drawing more detailed conclusions on the operator content and thus the underlying mechanism. In the next section, we study the phenomenology of such a SMEFT setup featuring a modified muon Yukawa coupling and assess the sensitivity to it at a high-energy $\mu^+\mu^-$ collider, using the paradigm process $\mu^+\mu^- \to W^+W^-H$. Clearly, processes with multiple Higgs bosons only in the final state are also very interesting, but due to the different signatures and the smaller event rates we defer them to a separate phenomenological study. \section{Phenomenology of Muon-Higgs Coupling at a high-energy Muon Collider} \label{sec:Pheno} In this section, we explore the phenomenology of multi-boson production for the sensitivity to the muon Yukawa coupling at a muon collider with collision energy in the range $1<\sqrt{s}<30$ TeV, with an integrated luminosity that scales quadratically with energy as \cite{Delahaye:2019omf,Bartosik:2020xwr}, \begin{equation}\label{eq:lumi} \mathcal{L}=\left(\frac{\sqrt{s}}{10\text{ TeV}}\right)^2 10~\textrm{ab}^{-1}. \end{equation} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{figs/BBB.pdf} \caption{Similar to Fig.~\ref{fig:2B}, the cross sections of three-boson production at a $\mu^+\mu^-$ collider as a function of the c.m. energy $\sqrt{s}$.} \label{fig:3B} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{figs/BBBB_SM.pdf} \caption{Similar to Fig.~\ref{fig:2B}, the cross sections of four-boson production at a $\mu^+\mu^-$ collider as a function of the c.m. energy $\sqrt{s}$, for SM $\kappa_\mu=1$ only.} \label{fig:4A} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{figs/BBBB_MM.pdf} \includegraphics[width=0.8\textwidth]{figs/BBBB_VBF.pdf} \caption{The cross sections of four-boson production at a $\mu^+\mu^-$ collider via (a) annihilation $\mu^+\mu^- \to 4B$ and (b) the (charged) vector boson fusions (VBF), $\mu^+\mu^-\to\nu_{\mu}\bar{\nu}_{\mu} X$ as functions of the c.m. energy $\sqrt{s}$. The solid and dotted lines are for the results with muon Yukawa coupling as $\kappa_\mu=1$ and $\kappa_\mu=0~(2)$, respectively. } \label{fig:4B} \end{figure} \subsection{Multi-boson production} To numerically determine the different multi-boson production cross sections and later on assess the sensitivity to the muon Yukawa coupling, we implemented the different EFT setups discussed in the last section into the multi-purpose event generator {\sc Whizard~2.8.5}~\cite{Kilian:2007gr,Moretti:2001zz,Brass:2018xbv} using its plugin to external models~\cite{Christensen:2010wz}. This builds upon the EFT frameworks used for multi-boson production and vector-boson scattering at hadron~\cite{Alboteanu:2008my,Kilian:2014zja,Brass:2018hfw,Ballestrero:2018anz} and electron-positron colliders~\cite{Beyer:2006hx,Fleper:2016frz}, which we adapted here for the muon collider. The QED initial-state radiation (ISR), resummed to all orders in soft photons and up to third order in hard-collinear radiation, is equally applicable to the muon collider. Beam spectra for multi-TeV muon colliders are much less complicated than for electron-positron colliders and can be easily described with a Gaussian beam spread of 0.1\%. They are, however, not relevant at the level of this study.
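For orientation, a one-line evaluation of the assumed luminosity scaling, Eq.~(\ref{eq:lumi}), at several benchmark energies:
\begin{verbatim}
# Quick evaluation of the assumed luminosity scaling, Eq. (eq:lumi)
for sqrt_s in (1.0, 3.0, 10.0, 30.0):    # TeV
    lumi = (sqrt_s / 10.0)**2 * 10.0     # ab^-1
    print(f"sqrt(s) = {sqrt_s:4.1f} TeV -> L = {lumi:5.1f} ab^-1")
# -> 0.1, 0.9, 10.0, 90.0 ab^-1
\end{verbatim}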
In Figs.~\ref{fig:2B}, \ref{fig:3B} and \ref{fig:4A}, we first present the Standard Model (with $m_\mu=y_{\mu}v/\sqrt{2}$) cross sections for the production of two, three and four bosons, respectively, including the Higgs and the EW gauge bosons. The cross sections -- in each case decreasing in size -- are for two-boson production, \begin{equation}\label{eq:2B} WW,~ZZ,~ZH,~HH \end{equation} for three-boson production, \begin{equation}\label{eq:3B} WWZ,~WWH, ~ZZZ, ~ZZH, ~ZHH,~HHH \end{equation} and for four-boson production, \begin{equation}\label{eq:4B} WWWW, ~WWZZ, ~WWHZ, ~WWHH, ~ZZZZ,~HZZZ, ~HHZZ,~HHHZ \end{equation} respectively. Single Higgs ($H$) production is also illustrated in Fig.~\ref{fig:2B}; it is obtained through $\mu^+\mu^-\to H$ with the Higgs recoiling against ISR. We present two classes of production mechanisms, namely, the direct $\mu^+\mu^-$ annihilation and the vector boson fusion (VBF) resulting from the initial-state radiation off the muon beams.\footnote{Unless indicated otherwise, we only include the charged vector boson ($W^{\pm}$) in VBF, \emph{i.e.}, $W^+W^-\to X$. The $Z$ boson fusion, $ZZ\to X$, is sub-leading due to its smaller vector coupling to leptons, with the example of $ZHH$ production demonstrated in Table \ref{tab:cutflow}. The final states involving charged particles, \emph{e.g.}, $W^+W^-H$, can be produced through photon or photon-$Z$ fusion as well, which are mostly collinear to the initial beams. This background is largely excluded when a reasonable angular cut (\emph{e.g.}, $10\degree<\theta<170\degree$) is imposed, as also illustrated in Table \ref{tab:cutflow}.} Representative Feynman diagrams for these production mechanisms are shown in Fig.~\ref{fig:mumuWWH} for the $W^+W^-H$ final state. Near the threshold, the annihilation cross sections dominate. With increasing collision energy, they are suppressed by $1/s$. The VBF mechanisms, on the other hand, increase with energy logarithmically \cite{Costantini:2020stv,Han:2020uid} and eventually take over above a few TeV. The $\mu^+\mu^-$ annihilation to multiple Higgs bosons is induced by the Yukawa coupling and possible Higgs self-interactions, with no gauge couplings involved. The corresponding cross sections are highly suppressed compared with the channels involving gauge boson(s), with examples of $HH$ and $HHH$ demonstrated in Figs.~\ref{fig:2B} and \ref{fig:3B}. Therefore, there is no need to include four-Higgs production in Eq.~(\ref{eq:4B}) or Fig.~\ref{fig:4B}, and the corresponding phenomenological study of the pure Higgs production is largely left for the future. In the presence of anomalous couplings, the characteristic high-energy behavior shown in these figures is modified, as we discussed above in Sec.~\ref{sec:setup}. At asymptotically high energy, for each final state the new-physics contribution dominates over the SM and exhibits a simple and uniform power law as shown in Figs.~\ref{fig:2B}, \ref{fig:3B} and \ref{fig:4B} by the dotted curves, which behave as straight lines in double-logarithmic plots. In Sec.~\ref{sec:setup} we provided a description within the EFT framework, in which the muon Yukawa coupling can receive contributions from new physics beyond the SM. The breakdown of the final states in terms of individual channels follows precisely the ratios of cross-section differences in Tables~\ref{tab:ratios-3} and~\ref{tab:ratios-4}, respectively, for the matched model. Given real data, measuring those ratios at various energy values will allow us to deduce the underlying pattern.
In particular, the absence of pure multi-Higgs states is a special feature for the extreme scenario $d\to\infty$ which we used for the plots in Figs.~\ref{fig:3B}~and~\ref{fig:4B}, i.e., there are no direct muon-Higgs couplings at any order. In a more generic scenario, multi-Higgs states will appear with a sizable rate, and the observable ratios of vector-boson and Higgs final states are related to the operator structure in the SMEFT expansion. We now discuss the phenomenology of a modified muon Yukawa coupling in more detail. In the effective approach discussed above, the muon Yukawa coupling gets a modification like Eq.~(\ref{eq:kmu_heft}) or (\ref{eq:kmu_smeft}). In this parameterization, $\kappa_{\mu}=1$ corresponds to the SM case. The deviation of $\kappa_{\mu}$ from 1 quantifies the new physics contribution, which serves as the signal in this work. In Figs.~\ref{fig:3B}-\ref{fig:4B}, we show two such benchmark cross sections for $\kappa_{\mu}=0$ and 2 as dotted curves. They coincide with each other, which reflects a symmetry of the annihilation cross sections such that \begin{equation}\label{eq:xsec} \sigma|_{\kappa_{\mu}=1+\delta}=\sigma|_{\kappa_{\mu}=1-\delta}, \end{equation} where $\delta$ is the deviation from the SM muon Yukawa prediction, with an exception for the pure Higgs production. With $\kappa_{\mu}=0\ (2)$ at a high energy, the annihilation cross sections of the $ZZH$ and $ZHH$ channels merge in Fig.~\ref{fig:3B}(a), which is a result of the Goldstone equivalence between the longitudinal $Z$ boson and the Higgs. A similar situation occurs for the four-boson case at a higher collision energy in Fig.~\ref{fig:4B}(b). When compared with the Standard Model annihilation, we find that the $\kappa_{\mu}=0\ (2)$ cross sections agree at low collision energies, but gradually diverge as the collision energy increases. At $\sqrt{s}=30$ TeV, the relative cross section deviation can reach three orders of magnitude for the $ZHH$ case, while it amounts to 20\% for the $WWZ$ case. This big difference provides us with a good opportunity to test the muon Yukawa coupling at a multi-TeV $\mu^+\mu^-$ collider. As discussed above, and pointed out in~\cite{Han:2020uid,Costantini:2020stv}, the annihilation process, in our particular case here for three-boson production, is overcome at high energies by the \begin{figure} \centering \includegraphics[width=.25\textwidth]{figs/mu_signal-1.pdf} \quad \includegraphics[width=.25\textwidth]{figs/mu_signal-2.pdf} \quad \includegraphics[width=.25\textwidth]{figs/mu_bkgd.pdf} \caption{Representative diagrams for the signal annihilation process $\mu^+\mu^- \to W^+W^-H$ (left and middle), and for the VBF background process (right).} \label{fig:mumuWWH} \end{figure} vector-boson fusion (VBF) production which becomes dominant at all high-energy (lepton) colliders. Here we show the VBF cross sections as dashed lines in Fig.~\ref{fig:3B} as well. They are calculated with the fixed-order approach for fusion processes $\mu^+\mu^-\to\nu_{\mu}\bar{\nu}_{\mu} X$, where $X$ represents the desired final-state particles. We have imposed a cut on the invisible neutrinos, $M_{\nu_{\mu}\bar{\nu}_{\mu}}>150$ GeV~\cite{Boos:1997gw,Boos:1999kj}, to suppress the on-shell decay $Z\to\nu_{\mu}\bar{\nu}_{\mu}$. We see that at an energy as high as 30 TeV, the VBF cross sections are generally $2\sim3$ orders of magnitude larger than the annihilation processes for three-boson production. The relative size is even larger for the four-boson case.
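A minimal numerical illustration of the symmetry in Eq.~(\ref{eq:xsec}): because the anomalous amplitude does not interfere with the SM one (helicity mismatch), the rate can be decomposed as $\sigma_{\rm SM}+(\kappa_\mu-1)^2\,\sigma_{\rm anom}$. The two input numbers below are placeholders, chosen only to roughly reproduce the no-cut $WWH$ row of Table~\ref{tab:cutflow} below.
\begin{verbatim}
sigma_SM, sigma_anom = 0.21, 0.26   # fb, illustrative placeholders only

def sigma(kappa):
    # No SM/BSM interference -> quadratic, symmetric under kappa -> 2 - kappa
    return sigma_SM + (kappa - 1.0)**2 * sigma_anom

for delta in (0.5, 1.0):
    print(sigma(1 + delta), sigma(1 - delta))  # equal pairs; 0 and 2 coincide
\end{verbatim}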
These VBF channels will serve as backgrounds for the annihilation multi-boson production when we measure the muon Yukawa coupling. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figs/dist/WWH_m3B.pdf} \includegraphics[width=0.48\textwidth]{figs/dist/WWH_ThetaB.pdf} \includegraphics[width=0.48\textwidth]{figs/dist/WWH_RBB.pdf} \caption{The kinematic distributions of the triboson invariant mass $M_{3B}$, the boson angle $\theta_B$, and the diboson distance $R_{BB}$ ($B=W,H$), respectively, in the $WWH$ production at a $\sqrt{s}=10$ TeV $\mu^+\mu^-$ collider.} \label{fig:distWWH} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figs/dist/ZHH_m3B.pdf} \includegraphics[width=0.48\textwidth]{figs/dist/ZHH_ThetaB.pdf} \includegraphics[width=0.48\textwidth]{figs/dist/ZHH_RBB.pdf} \caption{The kinematic distributions for $M_{3B}$, $\theta_B$, and $R_{BB}$ as in Fig.~\ref{fig:distWWH}, but for $ZHH$ production at a $\sqrt{s}=10$ TeV $\mu^+\mu^-$ collider.} \label{fig:distZHH} \end{figure} \subsection{Kinematic distributions} \label{sec:dist} The kinematic distributions for the annihilation and VBF processes behave very differently. We take the $WWH$ and $ZHH$ production at a $\sqrt{s}=10$ TeV $\mu^+\mu^-$ collider as benchmark examples\footnote{In triboson production, we choose $WWH$ as a demonstration example considering its large production rate, and $ZHH$ as another one for its relatively large deviation induced by the anomalous coupling. The $WWZ$ channel has an even larger cross section, while it suffers from a small relative deviation.} and show the distributions of boson angles $\theta_B\ (B=W,Z,H)$, the diboson separation distances $R_{BB}=\sqrt{(\Delta\eta)^2+(\Delta\phi)^2}$ in the rapidity-azimuthal angle plane, and triboson invariant masses $M_{3B}$, respectively, in Figs.~\ref{fig:distWWH}~and~\ref{fig:distZHH}. We see two main differences. First, the invariant mass $M_{3B}$ for the annihilation process is sharply peaked at the collision energy $\sqrt{s}$, as seen in Fig.~\ref{fig:distWWH}(a) and \ref{fig:distZHH}(a), with a small spread due to the initial-state radiation (ISR). In contrast, in vector-boson fusion, the $M_{3B}$ distribution is mainly peaked around the threshold. This feature enables us to efficiently separate these two processes and reduce the VBF background with an invariant mass cut. More specifically, with the $M_{3B}>0.8\sqrt{s}$ cut, the VBF background is reduced by three orders of magnitude, with the absolute differential cross sections falling below the lower axis limits in Figs.~\ref{fig:distWWH}~and~\ref{fig:distZHH}. In comparison, the signal, $\kappa_{\mu}=0~(2)$, remains almost unchanged, with specific numbers listed in Tab.~\ref{tab:cutflow}. We also include the cut flow for the cross sections of SM annihilation to $WWH$ and $ZHH$ without including the ISR effect in Tab.~\ref{tab:cutflow}. We see that the invariant mass cut has no impact at all in this case, because $M_{3B}=\sqrt{s}$ holds exactly as a result of momentum conservation. Another important observation is that the invariant mass cut $M_{3B}>0.8\sqrt{s}$ together with the ISR effect gives roughly the same cross sections as without ISR, which justifies neglecting the ISR effect when necessary.
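As a small sketch of the separation variable defined above, assuming the standard wrapping of the azimuthal-angle difference into $(-\pi,\pi]$ (a convention the text does not spell out):
\begin{verbatim}
from math import pi, hypot

def delta_R(eta1, phi1, eta2, phi2):
    dphi = (phi1 - phi2 + pi) % (2 * pi) - pi   # wrap into (-pi, pi]
    return hypot(eta1 - eta2, dphi)

print(delta_R(0.1, 0.0, -0.1, pi))      # back-to-back pair: ~ pi
print(delta_R(1.0, 0.5, 1.05, 0.52))    # collinear splitting: ~ 0.05
\end{verbatim}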
\begin{table} \centering \begin{tabular}{c|c|c|c|c|c} \hline\hline Cut flow & $\kappa_{\mu}=1$ & w/o ISR & $\kappa_{\mu}=0~(2)$ & CVBF & NVBF \\\hline \hline $\sigma$ [fb] & \multicolumn{5}{c}{$WWH$} \\\hline No cut & 0.24 & 0.21 & 0.47 & 2.3 & 7.2 \\ $M_{3B}>0.8\sqrt{s}$ & 0.20 & 0.21 & 0.42 & $5.5\cdot10^{-3}$ & $3.7\cdot10^{-2}$ \\ $10\degree<\theta_{B}<170\degree$ & 0.092 & 0.096 & 0.30 & $2.5\cdot10^{-4}$ & $2.7\cdot10^{-4}$ \\ $\Delta R_{BB}>0.4$ & 0.074 & 0.077 & 0.28 & $2.1\cdot10^{-4}$ & $2.4\cdot10^{-4}$ \\ \hline \# of events & 740 & 770 & 2800 & 2.1 & 2.4 \\\hline $S/B$ & \multicolumn{5}{c}{2.8} \\\hline\hline $\sigma$ [fb] & \multicolumn{5}{c}{$ZHH$}\\ \hline No cut & $6.9\cdot10^{-3}$ & $6.1\cdot10^{-3}$ & 0.119 & $9.6\cdot10^{-2}$ & $6.7\cdot10^{-4}$\\ $M_{3B}>0.8\sqrt{s}$ & $5.9\cdot10^{-3}$ & $6.1\cdot10^{-3}$ & 0.115 & $1.5\cdot10^{-4}$ & $7.4\cdot10^{-6}$\\ $10\degree<\theta_{B}<170\degree$ & $5.7\cdot10^{-3}$ & $6.0\cdot10^{-3}$ & 0.110 & $8.8\cdot10^{-6}$ & $7.5\cdot10^{-7}$\\ $\Delta R_{BB}>0.4$ & $3.8\cdot10^{-3}$ & $4.0\cdot10^{-3}$ & 0.106 &$8.0\cdot10^{-6}$ & $5.6\cdot10^{-7}$ \\\hline \# of events & 38 & 40 & 1060 & -- & -- \\\hline $S/B$ & \multicolumn{5}{c}{27} \\\hline\hline \end{tabular} \caption{The cut-flow for the cross sections of $WWH$ and $ZHH$ production through annihilation (SM with $\kappa_\mu = 1$) with and without ISR, and the BSM signal models for $\kappa_{\mu}=0~(2)$ (i.e., $\Delta\kappa_\mu = \pm 1$). The last two columns are the SM backgrounds from charged (CVBF) and neutral vector boson fusion (NVBF), respectively. All cross sections are at a $\sqrt{s}=10$ TeV $\mu^+\mu^-$ collider. The event numbers correspond to an integrated luminosity $\mathcal{L}=10~\textrm{ab}^{-1}$. The signal and background are defined in Eq.~(\ref{eq:SB}). } \label{tab:cutflow} \end{table} Second, the final-state particles produced in the vector boson fusion are very forward, shown in Fig.~\ref{fig:distWWH}(b) and \ref{fig:distZHH}(b). In comparison, the annihilation-produced particles are much more central, especially for the events induced by a Yukawa interaction with $\kappa_\mu=0~(2)$. With an angular cut, such as $10\degree<\theta_{B}<170\degree$ based on the detector design \cite{Bartosik:2020xwr}, we are able to reduce the VBF background by more than another factor of 10. The SM annihilation cross section will be suppressed by a factor of 2 for $WWH$, while the signal events with $\kappa_{\mu}=0~(2)$ are only reduced by 30\%. As for the case of the $ZHH$ process, the impact of the angular cut is small for the annihilation process. Finally, in order to reasonably resolve the final states within the detector, we need to require a basic separation among the reconstructed final-state bosons. The distributions of separation distance $R_{BB}$ in the $WWH$ and $ZHH$ production are shown in Fig.~\ref{fig:distWWH}(c) and \ref{fig:distZHH}(c). Besides the peak around $R_{BB}\sim\pi$ due to the back-to-back configuration, we obtain another minor peak around $R_{BB}\sim0$ for the SM annihilations, which reflects the collinear splitting behaviors, such as $W\to WH$ or $Z\to ZH$. With a reasonable separation cut $R_{BB}>0.4$, the SM annihilation to $ZHH$ is reduced by roughly 30\% due to the removal of radiation patterns with collinear splitting $Z\to ZH$. In comparison, both signal and backgrounds for $WWH$ production are only reduced slightly, with specific numbers presented in Table \ref{tab:cutflow}. 
In this case, the collinear splitting coincides with the forward beam region, which is already cut away by the angular acceptance. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figs/xsec_WWH.pdf} \includegraphics[width=0.48\textwidth]{figs/xsec_ZZZ.pdf} \includegraphics[width=0.48\textwidth]{figs/xsec_ZZH.pdf} \includegraphics[width=0.48\textwidth]{figs/xsec_ZHH.pdf} \caption{The cross sections of annihilation without ISR for the three-boson production channels $\mu^+\mu^- \to WWH, ZZZ, ZZH, ZHH$ versus the $\mu^+\mu^-$ c.m. energy $\sqrt{s}$ and the effective coupling $\kappa_{\mu}$. The lower two clusters of curves correspond to the cut flow: the angular cut $10\degree<\theta_{B}<170\degree$ and the additional $\Delta R_{BB}>0.4$.} \label{fig:scan} \end{figure} \subsection{Statistical sensitivity on the Muon Yukawa Coupling} With the integrated luminosity in Eq.~(\ref{eq:lumi}), we obtain the event numbers for annihilation and VBF for $WWH$ and $ZHH$, listed in Table \ref{tab:cutflow}. We see a large visible deviation from the SM backgrounds ($\kappa_{\mu}=1$) if we assume that the muon Yukawa coupling varies within the range $\kappa_{\mu}=0 \ldots 1 \ldots 2$. We can obtain the signal and background events as \begin{equation}\label{eq:SB} S=N_{\kappa_{\mu}}-N_{\kappa_{\mu}=1}, ~B=N_{\kappa_{\mu}=1}+N_{\rm VBF}, \end{equation} with a large signal-to-background ratio $S/B$ for $WWH$ and $ZHH$ shown in Table \ref{tab:cutflow} and cross-checked numerically at the end of this subsection. We can define the corresponding statistical sensitivity to the anomalous (non-SM) muon Yukawa coupling as \begin{equation}\label{eq:sensi} \mathcal{S}=\frac{S}{\sqrt{B}}. \end{equation} We would like to emphasize that $\mathcal{S}$ is always positive due to $N_{\kappa_{\mu}}\geq N_{\kappa_{\mu}=1}$, so we can define it without a modulus. We expect a large sensitivity under the assumption $\kappa_{\mu}=0~(2)$ for both $WWH$ and $ZHH$ channels, with the specific values even beyond the applicability of the Gaussian approximation adopted in Eq.~(\ref{eq:sensi}). We want to know how precisely we can measure the muon Yukawa coupling at a high-energy muon collider. For this task, we perform a scan of the annihilation cross sections over the collision energy $\sqrt{s}$ and the effective coupling $\kappa_{\mu}$, with results in the band of curves shown in Fig.~\ref{fig:scan}. We do not include the $WWZ$ channel as the corresponding sensitivity is small, resulting from the relatively small deviation shown in Fig.~\ref{fig:3B}. The ISR effect is safely neglected in this scan, as it is compensated by the invariant mass cut, as illustrated by the example of $WWH$ and $ZHH$ production in Table \ref{tab:cutflow}. In Fig.~\ref{fig:scan}, we present three clusters of curves to illustrate the impact of the cut flow. The solid lines indicate the annihilation cross sections without any cuts. The lower clusters of dashed and dotted curves correspond to the angular cut $10\degree<\theta_{B}<170\degree$ and the accumulated $\Delta R_{BB}>0.4$. We see that at large collision energy, the signal cross sections corresponding to $\kappa_{\mu}\neq1$ are not hampered by the kinematic cuts compared to the SM annihilation ones ($\kappa_{\mu}=1$). Especially at a large $\kappa_{\mu}$ deviation, such as $\kappa_{\mu}=0~(2)$, the cross sections with and without selection cuts are more or less the same. The angular cut has almost no impact on the $ZHH$ channel, because both the $Z$ and $H$ boson are predominantly central in this channel, as mentioned above and shown in Fig.~\ref{fig:distZHH} (b).
Instead, the separation distance cut reduces the SM annihilation rate by a factor of 30\%$\sim$40\%, due to the removal of collinear splittings of $Z\to ZH$. At this stage, we are able to obtain the sensitivity of a high-energy muon collider to the muon Yukawa coupling, by combining the cross sections with the corresponding integrated luminosity. In Fig.~\ref{fig:sensi}, we show two types of contours, corresponding to $\mathcal{S}=2$ and 5 respectively, with an integrated luminosity as given in Eq.~(\ref{eq:lumi}). We recall that the sensitivity respects the symmetry $\mathcal{S}|_{\kappa_{\mu}=1+\delta}=\mathcal{S}|_{\kappa_{\mu}=1-\delta}$, due to the nature of the symmetric cross sections in Eq.~(\ref{eq:xsec}). The channels -- in decreasing order of sensitivity -- are $ZHH$, $ZZH$, $WWH$, and $ZZZ$, respectively. At the low energy end, around 3 TeV, we are able to probe the muon Yukawa coupling to a precision of about 100\% by means of the $ZHH$ channel, if we take the criterion $\mathcal{S}=2$. At a 10 (30) TeV muon collider, we are able to test the muon Yukawa coupling to a precision of up to 10\% (1\%), mostly because of two factors: large signal-to-background ratios and large integrated luminosity. In addition, we see that the sensitivity of the $ZZH$ channel is very close to that of the $ZHH$ channel, as a result of the Goldstone equivalence theorem. Again, in the SMEFT formalism, the anticipated precision of $10\% - 1\%$ would translate into a sensitivity to the new-physics scale of $\Lambda \sim 30-100$ TeV. \begin{figure}\centering \includegraphics[width=0.8\textwidth]{figs/sig_contour.pdf} \caption{The statistical sensitivity of a high-energy muon collider to the muon Yukawa coupling $\kappa_{\mu}$ from the measurements of three-boson production.} \label{fig:sensi} \end{figure} So far in this paper, we have focused on the sensitivity to the muon Yukawa coupling from triboson production measurements at a high-energy muon collider. Similar analyses can be performed in the two- and four-boson channels. However, the sensitivities from the two-boson channels are expected to be weaker, due to the relatively smaller sizes of the cross-section deviations from anomalous couplings, shown in Fig.~\ref{fig:2B}. Though the signal-to-background ratios in the four-boson channels can be larger than those for the triboson channels, the production rates are significantly smaller. In our opinion, this elevates triboson production to the ``golden channels'' for this kind of measurement. Our event selection is based on imposing an invariant mass cut $M_{3B}>0.8\sqrt{s}$ to enrich the annihilation channels. An opposite selection cut could likewise yield enriched samples of VBF processes; this is also expected to have some sensitivity to anomalous muon-Higgs couplings, based on the deviations shown in Fig.~\ref{fig:4B}(b). As a final remark, annihilation cross sections of (pure) multi-Higgs production do not respect the symmetry in Eq.~(\ref{eq:xsec}), which provides an opportunity to determine the sign of the deviation $\delta$. Nevertheless, the production rate is so small that not even a single expected event survives the event selection, given the luminosity in Eq.~(\ref{eq:lumi}). The only chance lies in the single Higgs production with the collision energy right at the Higgs mass. We leave all these possibilities to future dedicated studies.
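Before closing this discussion, a quick arithmetic cross-check of Eqs.~(\ref{eq:SB}) and (\ref{eq:sensi}) against the post-cut event numbers of Table~\ref{tab:cutflow}; the VBF entries for $ZHH$ are taken as negligible, as indicated in the table.
\begin{verbatim}
from math import sqrt

# (channel, N_{kappa=0(2)}, N_{kappa=1}, N_VBF) after all cuts,
# for 10 ab^-1 at sqrt(s) = 10 TeV, read off Table (tab:cutflow)
for label, n_bsm, n_sm, n_vbf in (("WWH", 2800, 740, 2.1 + 2.4),
                                  ("ZHH", 1060, 38, 0.0)):
    S, B = n_bsm - n_sm, n_sm + n_vbf
    print(label, round(S / B, 1), round(S / sqrt(B), 1))
# -> WWH: S/B ~ 2.8; ZHH: S/B ~ 27, as quoted in the table
\end{verbatim}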
To summarize our results, a high-energy muon collider in the range of $10-30$ TeV, combining multi-TeV resolution power with the well-defined and clean leptonic environment, allows probing a tiny and elusive parameter of the SM like the muon Yukawa coupling to the single-digit percent level. \section{Summary and Conclusions} \label{sec:summary} Motivated by the recent proposal for a multi-TeV muon collider, we explored the sensitivity of testing the muon-Higgs coupling at such a collider. Owing to the small muon-Yukawa coupling in the SM, any new physics contributions to the muon mass generation different from the SM Yukawa formalism would result in relatively large deviations from the SM prediction, and thus deserve special scrutiny at future collider experiments. We claim that a muon collider would be unique in carrying out such explorations. Our results are summarized as follows. After presenting the scale-dependence of the muon Yukawa coupling in the SM and in an extra-dimensional theory, we discussed parameterizations for deviations of the muon-Yukawa coupling from its SM value within the frameworks of HEFT and SMEFT effective descriptions, and considered the implications of perturbative unitarity bounds on such anomalous couplings. As paradigm observables, we applied this EFT formalism to multi-boson production at a muon collider, particularly the production of two, three and four electroweak gauge bosons associated with a Higgs boson. Using the Goldstone boson equivalence theorem, we derived in Sec.~\ref{sec:ratios} the scaling behavior of multi-boson production cross sections in the presence of deviations of the muon-Higgs coupling, normalized to specific reference cross sections for each multiplicity. Our studies show that the sensitivity reach to such anomalous muon-Higgs couplings rises with the number of gauge bosons, as the onset of the deviation from the SM is at lower energies. This is due to the fact that processes with higher multiplicities involve more insertions of the operators generating the deviations (and of higher operators), with high-energy enhancements and sizeable coupling coefficients. We further performed detailed numerical analyses in Sec.~\ref{sec:Pheno}, and found that two-boson production processes have less sensitivity to the muon-Yukawa coupling, while those for four-boson production have lower production rates. Therefore, to demonstrate the feasibility of such a study, we identified the optimal processes of triboson production $\mu^+\mu^-\to W^+W^-H,ZHH$ as prime examples and showed how to isolate these from their most severe background, the same final state produced in vector-boson fusion. Typical observables are diboson correlations, either their invariant masses, their angular distributions or their $\Delta R$ distances. In this scenario, a muon collider with up to 30 TeV center-of-mass energy has a sensitivity to deviations of the muon-Yukawa coupling from its SM value of the order of 1\%$\sim$4\%. This can be interpreted in the SM as a measurement of the muon Yukawa coupling with this precision. In the SMEFT formulation, if we assume an order-1 coupling, this precision would correspond to a probe of a new physics scale of about $\Lambda \sim 30-100$ TeV.
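A one-line translation of the anticipated precision into a scale via Eq.~(\ref{eq:bound}), assuming an order-one coupling $g=1$:
\begin{verbatim}
for dk in (0.01, 0.04, 0.1):                    # Delta kappa_mu
    print(dk, 10.0 * (1.0 / dk)**0.5, "TeV")    # -> 100, 50, ~32 TeV
\end{verbatim}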
There are many ways such an analysis can be improved, {\it e.g.,}~by combining different channels, by performing measurements at different energy stages of the machines, by combining final states with different multiplicities, by using multivariate analyses instead of simple cut-based analyses and by using polarization information on the final-state vector bosons. All of this is beyond the scope of this paper and is left for future investigations. This paper highlights the tantalizing possibilities to study one of the most elusive parameters within particle physics, the Higgs-muon coupling, and it also shows in a more general context how effective field theories can be utilized to make the utmost use of a discovery facility like the muon collider. \acknowledgments We thank Fabio Maltoni, Daniel Schulte and Andrea Wulzer for useful discussions. This work was supported in part by the U.S.~Department of Energy under grant No.~DE-FG02-95ER40896, U.S.~National Science Foundation under Grant No.~PHY-1820760, and in part by the PITT PACC. JRR acknowledges the support by the Deutsche Forschungsgemeinschaft (DFG, German Research Association) under Germany's Excellence Strategy-EXC 2121 ``Quantum Universe''-39083330. WK and NK were supported in part by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant 396021762 – TRR 257. \bibliographystyle{JHEP}
\section{Compliance with the BCBS 239 standard requires explainable AI/ML} Financial services companies have unique, industry-specific needs for explainable AI/ML, many of which are driven in part by compliance needs \cite[Ch.\ 7]{wef2020}. Some previous work describes how explainability needs arise from fairness considerations in specific business decisions like credit decisions \cite{Chen2018}, and other needs relate to regulatory requirements such as model risk management \cite{Chen2020b}. Other needs for explainability in financial services have been covered by multiple workshop papers in the NeurIPS Workshops on AI in Financial Services (2018 and 2019). In this survey, we describe how verifying compliance with a specific standard, No.\ 239 from the \cite{bcbs239}, drives specific explainability needs for Global Systemically Important Banks (G-SIBs, whose distribution worldwide is shown in \Cref{fig:gsibs-map}), and presents opportunities for using explainable AI to service these compliance needs. We do not confine ourselves to the narrow scope of variable-level explanations that have been in vogue recently, but rather take a more holistic view of what explanation means \cite{xaitutorial2020}. \begin{figure} \centering \includegraphics[width=\columnwidth,keepaspectratio]{fig/gsibs-map.pdf} \caption{Worldwide distributions of Global Systemically Important Banks (G-SIBs) in 2019 \cite{bcbs239pr}.} \label{fig:gsibs-map} \end{figure} \subsection{The 11 principles of BCBS 239} The eleven principles for effective risk data aggregation and risk reporting for banks, as defined in BCBS 239, are grouped into three topics as follows: \renewcommand{\labelenumi}{\Roman{enumi}.} \renewcommand{\labelenumii}{P\arabic{enumii}.} \begin{enumerate} \item Overarching governance and infrastructure \begin{enumerate}[series=innerlist] \item Governance [*] \item Data architecture and IT infrastructure [*] \end{enumerate} \item Risk data aggregation capabilities \begin{enumerate}[resume=innerlist] \item Accuracy and integrity [**] \item Completeness [*] \item Timeliness \item Adaptability [*] \end{enumerate} \item Risk reporting practices \begin{enumerate}[resume=innerlist] \item Accuracy [**] \item Comprehensiveness [*] \item Clarity and usefulness [**] \item Frequency [*] \item Distribution \end{enumerate} \end{enumerate} \noindent A further three principles are specific to bank supervisors (regulators) only, and are not listed above. The star ratings indicate our subjective evaluation as to whether a particular principle relates in a major [**] or minor [*] way to explainability requirements, which are described below. \subsection{The explainability needs inherent in BCBS 239}% \label{sec:explanation-needs} We now describe how the BCBS standard can be interpreted (non-exhaustively) as highlighting inherent needs for explainable AI. Here, we ignore requirements for process documentation, automation, and infrastructure, assuming that they are necessary prerequisites for explainable AI. Relevant principles and paragraphs of the BCBS 239 standard are cross-referenced and paraphrased below. We collect the explainability needs around input data and output reporting as follows: \begin{enumerate}[label=(\Alph*)] \item Input data should be of high quality. \begin{enumerate}[label=(A\arabic*)] \item \textit{Data completeness.} Verify that all relevant data is collected across the entire bank, across organizational boundaries, even as the organizational structure changes.
(P1:29, P4:41--43) \item \textit{Data provenance.} Trace the provenance of all data back to unique, authoritative sources. (P3:36(c,d)) \item \textit{Quality control.} Monitor data quality and have mitigation strategies for bad data. (P3:40, P7:53) \item \textit{Data description.} Use a single, complete inventory of data, with a data dictionary and taxonomy describing data characteristics (metadata). (P2:33, P3:37, P9:67) \end{enumerate} \item Output reports should address the needs of multiple stakeholders. \begin{enumerate}[label=(B\arabic*)] \item \textit{Flexible aggregation.} Data aggregation should be customizable to user needs, and adaptable to both internal organization changes and extrinsic changes such as new regulations. (P6:49(b--d)) \item \textit{Business needs.} Banks should justify their own reporting requirements based on their portfolios of business needs. (P7:56, P8:59) \item \textit{Summarization level.} Reports should be accurate, concise, comprehensive, and understandable to their intended recipients. (P7, P9) Reports should be tailored to the needs of multiple stakeholders at different levels across the organization, such as the board, senior management, and risk committees, as well as external stakeholders like bank supervisors (regulators). (P9:61--66,68) Reports to senior stakeholders should be more highly summarized. (P9:69) \item \textit{Reporting frequency,} which should be appropriate for the stakeholder's needs. (P10:70) \end{enumerate} \end{enumerate} At the high level, note that the data quality needs correspond roughly to P3, whereas reporting needs correspond roughly to P7 and P9. However, secondary requirements related to explainability are scattered throughout many of the other eleven principles codified in BCBS 239. We therefore designate Principles P3, P7 and P9 as of major relevance to explainable AI, and other principles mentioned above as of minor relevance. In the next section, we will also show quantitatively that these principles are those that demonstrate the least statistical evidence for compliance improvement. \subsection{Compliance has been slower than expected} \begin{figure*} \centering \includegraphics[width=0.33\textwidth,keepaspectratio]{fig/bcbs-compliance-1.pdf} \includegraphics[width=0.33\textwidth,keepaspectratio]{fig/bcbs-compliance-2.pdf} \includegraphics[width=0.33\textwidth,keepaspectratio]{fig/bcbs-compliance-3.pdf} \caption{ Worldwide compliance with BCBS 239 has missed the original 2016 deadline. Shown are the mean compliance scores (1 = not compliant, 4 = fully compliant) across the evaluated banks, together with the standard error in the light shaded envelopes. While compliance with Topics I and II (left and center respectively) has in general improved, compliance with the principles of Topic III (right) has for the most part stayed stagnant. } \label{fig:compliance-scores} \end{figure*} These 11 principles form the core of periodic compliance assessments carried out by BCBS, which were conducted on G-SIBs in the years 2013, 2014, 2016, and 2017 \cite{bcbs239pr}. \Cref{fig:compliance-scores} shows the mean compliance scores from the progress reports, on a four-point Likert scale (1 = not compliant, 4 = fully compliant) across the evaluated banks, together with the standard error in the light shaded envelopes. Overall, worldwide compliance with BCBS 239 among G-SIBs has missed the original 2016 deadline.
While compliance with Topics I and II (left and center respectively) has in general improved, compliance with the principles of Topic III (right) has for the most part stayed stagnant, and even shown some regression (P8, P9). Widening standard errors from 2016 to 2017 also indicate widening gaps in compliance levels between individual banks. The progress reports note that one of the key difficulties in compliance lies in automation. However, we now offer some statistical evidence in favor of a claim that explainable AI is needed to make progress in BCBS compliance. \Cref{tab:compliance-change} reports mean compliance scores in the years 2016 and 2017 (with standard error in parentheses), the improvement from 2016 to 2017, and the $p$ value for the one-sided $t$-test for the hypothesis that the mean compliance score improved in 2017 relative to 2016. Data is from the annual BCBS progress reports. We use the pooled sample (independent) $t$-test because scores for each bank are not publicly available. Values that do not meet the usual $p<0.05$ criterion are \textit{italicized}. The largest three values are in \textbf{\textit{bold italics}}. The last column shows our ratings of the principle's relevance to explainable AI (* = minor, ** = major). The top three principles that show the least evidence for improvement (by $p$-value) are exactly those most strongly related to explainable AI needs. \begin{table} \centering {\small \begin{tabular}{|c|c|c|c|c|c|} \hline & 2016 & 2017 &$\Delta$ & $\nu$& $p$ \tabularnewline \hline \input{bcbs_table_rows} \hline \end{tabular} } \caption{Lack of compliance progress is related to explainable AI needs. Reported are mean compliance scores in the years 2016 and 2017 (with standard error in parentheses), the improvement from 2016 to 2017, and the $p$ value for the one-sided $t$-test for the hypothesis that the mean compliance score improved in 2017 relative to 2016. Values that do not meet the usual $p<0.05$ criterion are \textit{italicized}. The largest three values are also in \textbf{\textit{bold}}. The last column shows our ratings of the principle's relevance to explainable AI (* = minor, ** = major). The top three principles that show the least evidence for improvement (by $p$-value) are exactly those most strongly related to explainable AI needs. } \label{tab:compliance-change} \end{table} From our qualitative analysis in the previous section, note that Principles P3, P7 and P9 are the principles most closely aligned with explainability needs, and these are precisely the principles that show the least progress in compliance. In this discussion, we look at data quality as a necessary component of explainable AI. In particular, we look at the challenge of building a coherent data taxonomy. \section{Metadata debt in legacy enterprises} Enterprises that collect data eventually have to face the challenges of data governance and data management \cite{Khatri2010,Redman2013}. In some heavily regulated industries, such as financial services, having good data management is even a matter of regulatory compliance \cite{bcbs239}. In this section, we consider the problem of \textit{metadata debt}, a kind of technical debt incurred by legacy enterprises when they lack a unified ontology for describing the various kinds of data that they have \cite{Chen2020}. A key aspect of good data governance is to have metadata that describe the semantic content of data in a way that is interpretable to end users \cite{Khatri2010,robcasper}.
However, retrofitting good data governance onto legacy enterprises is a major challenge, especially when the infrastructure necessary to track the creation and consumption of data does not exist. Legacy enterprises may attempt to pay down metadata debt with a generalized form of ontology learning, deriving a common vocabulary of data concepts not just from natural text, but also from other sequence structures present in relational databases, log files, and other forms of metadata. When doing so, they will face several practical challenges: \paragraph{Indirect and polymorphic representation} The same kind of data can have many physical representations, be they structured or unstructured, in databases, flat files, text, images, or other binary blobs. Even when structured representations are used, there may be many such representations for data of the same semantics. Such polymorphism can result from the merger of multiple legacy systems that use incompatible representations, which nevertheless have to be harmonized to provide a complete representation of the available data. Having an accurate and precise relationship between semantic and physical data is crucial for other aspects of data governance, such as data access, data quality, and lifecycle management. \paragraph{High cost, high accuracy requirements despite label noise} The creation of semantic labels is very expensive, yet high accuracy is required. The high cost comes from the need to rely on subject matter experts to provide the human labor to label data semantically, as well as to verify the correctness of existing labels. The need for high quality distinguishes this problem from many deep learning applications such as image classification, where labels are crowdsourced, easily verifiable, and essentially free to acquire. High accuracy requires us to detect and mitigate label noise, but conventional methods such as explicit testing for inter-rater reliability are usually too expensive to run. There is therefore a need for a method that automatically quantifies label noise. \paragraph{Large controlled vocabulary} The size of the controlled vocabulary for the semantic metadata can itself be large for a large organization with multiple kinds of data. Terms in the controlled vocabulary are rarely, if ever, used with equal frequency---there will be some frequently used terms, accompanied by a tail of many, successively infrequent, terms. The existence of many rare terms makes it difficult for the na\"ive approach of training many independent classifiers, simply because there are insufficient positive examples of usage for the rare terms: the low signal-to-noise ratio makes it difficult to train a classifier with performance better than random. \paragraph{Organizational changes} The high cost of changing systems in production creates an incentive to reappropriate existing data systems for new tasks and new representations of data, but the same attitude of cost avoidance also means that the costly task of updating semantic metadata is often skipped over. In other situations, organizational changes such as restructuring may result in a loss of subject matter experts who are conversant with the informal folklore, which further raises the difficulty of acquiring labels. \paragraph{Ontology drift and concept drift} The physical representations of a data concept can change over time, as APIs and data formats change \cite{Gama2014}. Furthermore, the meaning of the concept itself can change over time due to changes in usage \cite{Wang2011,Kenter2015}.
Even a seemingly unchanging ontology like the Dewey Decimal Classification for library books has undergone 23 major print editions in the years 1876--2011, and is now continuously updated \cite{ddc}. There is therefore a need for an ontology learning method that can be rerun periodically to detect such concept drifts and ontology drifts. \subsection{There are multiple relevant industry taxonomies and ontologies} We now examine one specific requirement in greater detail: \begin{displayquote} 33. A bank should establish integrated data taxonomies and architecture across the banking group, which includes information on the characteristics of the data (metadata), as well as use of single identifiers and/or unified naming conventions for data including legal entities, counterparties, customers and accounts. [...] Banks do not necessarily need to have one data model; rather, there should be robust automated reconciliation procedures where multiple models are in use. \end{displayquote} In practice, there are multiple business-relevant taxonomies and ontologies for the financial services industry. An incomplete list of these is: \begin{itemize} \item GICS \cite{gics}, to classify companies by industrial sectors. \item Solvency II DPM \cite{eiopa-dpm}, describing business concepts relevant to solvency testing. \item IFRS Taxonomy \cite{ifrs}, which describes financial statements. \item ESEF Taxonomy \cite{esef}, used for electronic reporting within the European Union. \item US GAAP Taxonomy \cite{us-gaap}, which is used for accounting within the US. \item LEI Taxonomy \cite{lei}, which describes unique global identifiers for legal entities participating in financial transactions. \item FRC Taxonomy \cite{frc}, which is used for accounting within the UK. \item BIRD \cite{ecb-bird}, which describes information in bank internal systems for fulfilling their reporting requirements within the European System of Central Banks. \item FIBO \cite{Bennett2013,fibo}, a general purpose ontology for all business concepts in the financial industry. \end{itemize} Each taxonomy has its own roadmap and cadence of updates. Some, like FIBO, are updated quarterly. Others, like IFRS, are updated annually. Yet others have more infrequent or irregular schedules of updates. Furthermore, the taxonomies come in different formats which have to be integrated. While many of the taxonomies above are in the industry-standard XML-based XBRL \cite{xbrl}, some are still published as Microsoft Excel spreadsheets (like NAICS \cite{naics}), and others are published as Microsoft Access databases (like BIRD). Therefore there is also the technical challenge of interoperability between different formats to consider. \subsection{Changes in an industry standard ontology} The Financial Industry Business Ontology (FIBO) project \cite{Bennett2013,fibo} was launched in the same year that BCBS 239 was published. The first production-ready release was in 2017Q3, with a quarterly update cadence. Shown in \Cref{tab:fibo,fig:fibo} are the changes in the list of classes (data concepts) up to the most recently available version. The results presented show that the business ontology is in constant flux, with as much as one third of the entire ontology changing quarterly. Rather than converging toward a steady state, the pace of change seems erratic.
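A minimal sketch of how such release-to-release churn can be tracked automatically; the column names below are hypothetical, since the schema of \texttt{data/fibo-changes.tsv} is not shown here.
\begin{verbatim}
import csv

# Hypothetical columns: version, added, removed, total (per FIBO release)
with open("data/fibo-changes.tsv") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        churn = (int(row["added"]) + int(row["removed"])) / int(row["total"])
        print(row["version"], f"{churn:.1%}")  # flags quarters with heavy drift
\end{verbatim}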
These simple statistics show that mapping onto an industry standard ontology is far from a straightforward, one-time exercise; rather, changes in the ontology have to be versioned and managed as concepts are added, redefined, or removed. Explainable AI work that aims to map onto FIBO or other standardized ontologies must therefore heed the ensuing challenges of ontology drift and other semantic changes.

\begin{table}
\centering
\csvautotabular{data/fibo-changes.tsv}
\caption{Changes in the FIBO ontology}
\label{tab:fibo}
\end{table}

\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig/fibo-change.pdf}
\caption{Changes in FIBO classes}
\label{fig:fibo}
\end{figure}

\section{Summary and outlook}

We have offered our interpretation of BCBS 239 as encoding 8 distinct needs for explainable AI, which we grouped into data quality and appropriate reporting for multiple stakeholders. Evidence from compliance progress reports demonstrates that the areas with the slowest progress toward compliance are also those most strongly related to explainable AI. We also took a closer look at the construction and maintenance of a firmwide data taxonomy, being one of the needs for explainable AI that we have identified. We described the implementation challenges for a specific requirement that occur in legacy enterprises that have to retrofit their business operations and infrastructure to support a firmwide data taxonomy. Finally, we reviewed how a candidate standardized solution to this problem, namely the Financial Industry Business Ontology (FIBO), highlights the need for versioning and updating to prevent semantic drift. We therefore expect explainable AI to be a constant need for the foreseeable future when it comes to standards compliance, and that systems for representing explanations must handle semantic changes to avoid obsolescence in the concepts themselves.

\paragraph{Disclaimer} This paper was prepared for informational purposes by the Artificial Intelligence Research group of JPMorgan Chase \& Co and its affiliates (``JP Morgan''), and is not a product of the Research Department of JP Morgan. JP Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful. © 2021 JPMorgan Chase \& Co. All rights reserved.

\bibliographystyle{ACM-Reference-Format}
\section{Introduction}

Determining the physical processes that drive the growth of both galaxies and their supermassive black holes (SMBHs) is a key goal of current observational and theoretical work \citep[see][for a review]{heckmanbest14}. An increasing body of evidence shows that galaxy growth mainly occurs through `secular processes' rather than by mergers. For example, \cite{kaviraj13} show that only $27\%$ of star formation is triggered by major or minor mergers at $z\sim2$, the peak of both star formation and black hole accretion activity. In addition, \cite{parry09} find in the Millennium simulations that only $\sim 35\%$ of bulge mass is built by mergers, with the majority built through disk instabilities (triggered through interactions with nearby satellites). Similarly, many recent results have pointed to secular processes as the main driver of SMBH growth. For example, \cite{martin18} showed that in their hydro-dynamical simulations (RAMSES) only $35\%$ of the cumulative growth of SMBHs since $z\sim3$ could be attributed to mergers, both major and minor. Similarly \cite{mcalpine20} found in the EAGLE simulations that galaxy mergers do not induce a significant amount of black hole growth yet do increase the rate of luminous AGN, concluding that on average no more than $15$\% of a SMBH's mass at $z\sim0$ comes from the enhanced accretion rates triggered by a merger. These results, among others both observational and theoretical, challenge the long-accepted paradigm whereby mergers are responsible for the correlations between SMBHs and bulges, such as velocity dispersion and bulge mass \citep{magorrian98, haringrix04, vdb16, batiste17, davis19}.

Galaxies which have evolved via mergers are easily recognisable, as mergers are able to redistribute angular momentum in galaxy systems, transferring stars from rotation-supported orbits to pressure-supported orbits in a central bulge, similar to an elliptical galaxy. While there is also an increasing number of simulations finding that a disk can reform post gas-rich merger \citep{hopkins09c, sparre17, pontzen17,peschken20, jackson20}, a significant bulge component still forms even in a minor merger \citep[i.e. when the mass ratio in the merger exceeds $10:1$;][]{walker96, hopkins12, tonini16, stevens16}. Therefore, galaxies with little to no bulge can be assumed to have merger-free (and interaction-free) histories, at least since $z\sim2$ \citep{martig12}. The growth of both the galaxy and the SMBH in such systems will have been dominated by non-merger processes alone. \citet*[hereafter SSL17]{ssl17} calculated the masses of SMBHs powering such a sample of $101$ disk-dominated AGN and showed that they were more massive (by up to $\sim2$ dex) than would be expected from the black hole-bulge mass relation of \citet{haringrix04}. However, SSL17 also found that their disk-dominated AGN still lay on the total stellar mass-SMBH mass relation. This result suggested that secular processes were able to grow a SMBH at rates higher than previously thought. \citet[hereafter S19]{smethurst19b} investigated these possible growth rates by measuring the $\mathrm{\left[ O \textsc{iii}\right] }$~outflow rates in $12$ disk-dominated galaxies using narrowband imaging from the Shane-3m telescope at the Lick Observatory.
Under the assumption that the inflow rate to the AGN will be at least equal to the sum of the outflow rate and the SMBH accretion rate, S19 found that the inflow rates they inferred could be achieved by non-merger processes, including funnelling of gas by bars and spiral arms, and cold accretion from the surrounding galaxy halo. However, this work was limited by the inability to adequately distinguish between gas ionised by the AGN outflow and star formation within the galaxy, and the subtraction of the central AGN PSF (leading to an overestimate and underestimate of the outflow rate respectively).

In this work, we aim to measure the outflow rates in 4 of the galaxies observed in S19 using spectral observations taken with the Keck Cosmic Web Imager (KCWI). High spectral resolution observations allow for the narrow component in $\mathrm{\left[ O \textsc{iii}\right] }$~(ionised by star formation or the central AGN) to be isolated from the broad component in $\mathrm{\left[ O \textsc{iii}\right] }$~(assumed to be ionised by the AGN outflow). This allows us to derive the outflow rate in these systems more accurately than in the previous study of S19. By using a sample of galaxies where we can be sure that secular processes dominate, we can isolate the merger-free growth path and understand the limitations to merger-free SMBH growth.

In the rest of this work we adopt the Planck 2015 \citep{planck16} cosmological parameters with $(\Omega_m, \Omega_{\lambda}, h) = (0.31, 0.69, 0.68)$, and any emission or absorption features referred to are in the Lick system. All uncertainties on calculated values are determined in quadrature, and all uncertainties on quoted mean values are the standard error on the mean. In Section~\ref{sec:sample} we discuss our sample selection and in Section~\ref{sec:obs} we describe our observations. In Section~\ref{sec:data} we describe our data reduction and analysis process, including how we determine the outflow rates in these systems. In Section~\ref{sec:results} we state our results and discuss their implications in Section~\ref{sec:discussion}. Finally, we summarise our conclusions in Section~\ref{sec:conc}.

\begin{figure*}
\centering
\includegraphics[width=\textwidth]{HST_images_KCWI_targets_SDSS_scale_contrast.png}
\caption{\emph{HST} ACS WFC postage stamp images of the $4$ disk-dominated AGN observed with KCWI. North is up and a stretch of 0.55 ($Q=12$) is applied. In each image the HST filter is noted. The AGN can be seen as a bright point source in the centre of each image, which we assume is powered by merger-free processes due to the disk-dominated morphology of these sources. The white bars show $1~\rm{kpc}$ for scale in each panel.}
\label{fig:hsttargets}
\end{figure*}

\begin{figure*}
\centering
\includegraphics[width=\textwidth]{kcwi_summed_all_targets.png}
\caption{Total integrated flux across the IFU data cubes of the $4$ disk-dominated AGN observed with KCWI. North is up and an arcsinh stretch is applied. The bar features seen in the HST images in Figure~\ref{fig:hsttargets} can be seen; however, spiral arm detail is only apparent for Harry and Neville. The black bars show $1~\rm{kpc}$ for scale along each dimension in each panel.}
\label{fig:kcwitargets}
\end{figure*}

\section{Sample and observations}\label{sec:two}

\subsection{Sample Selection}\label{sec:sample}

We observed four disk-dominated galaxies with KCWI at the Keck Observatory, Hawai'i, USA, on the 13th December 2018.
These were selected from a larger well-studied sample of $101$ disk-dominated galaxies with luminous, unobscured Type 1 AGN first identified in SSL17 ($\langle z \rangle = 0.129$). This parent sample was constructed from galaxies in the SDSS \citep{york00} Data Release 8 \citep{aihara11} imaging sample cross-matched with sources identified by \citet{edelson12} using multi-wavelength data from the Wide-field Infrared Survey Explorer \citep[WISE;][]{wright10}, Two Micron All-Sky Survey \citep[2MASS;][]{skrutskie06}, and ROSAT all-sky survey \citep[RASS;][]{voges99}. The disk-dominated morphologies were assigned by expert review of the SDSS imaging (see \citealt{simmons13} and SSL17), and were all later confirmed using images from an \emph{HST} snapshot survey with broadband imaging using ACS WFC (programme ID HST-GO-14606, PI: Simmons), which were reduced using the standard pipeline. \emph{HST} images showing the disk-dominated nature of our four targets, including spiral arms and bar features, along with the bright point source of the unobscured central AGN, are shown in Figure~\ref{fig:hsttargets}.

Black hole masses for this sample were originally estimated by SSL17 using the relation between black hole mass and the FWHM and luminosity in the broadened $H\alpha$ emission line from \cite{greene05}. $58$ galaxies within this sample showed broadened blueshifted $\mathrm{\left[ O \textsc{iii}\right] }$~components in their SDSS $3''$ fibre spectra. From this detection of a blueshifted component in the spectra we know that there is \emph{some} outflowing material from the AGN within the $3''$ diameter central SDSS fibre, however this may not capture the full luminosity or extent of the outflow. The $12$ brightest galaxies in the blueshifted $\mathrm{\left[ O \textsc{iii}\right] }$~$5007\rm{\AA}$ spectral component were observed using narrowband filters on the Shane-3m telescope from 12-14th May 2018 at the Lick Observatory, California, USA. The results of this work are described in S19. We then selected $4$ of these targets to observe with KCWI: Harry, Padma, Neville and Theodore (continuing the naming convention used in S19; see Table~\ref{table:coords} for more details). These targets were visible from Mauna Kea in December 2018 and had an appropriate redshift to ensure $\mathrm{\left[ O \textsc{iii}\right] }$~was in the wavelength range of KCWI.

\subsection{KCWI observations}\label{sec:obs}

\rowcolors{1}{lightgray}{}
\begin{table}
\centering
\caption{Co-ordinates of the four disk-dominated AGN hosts observed with KCWI.}
\label{table:coords}
\begin{tabular}{lcccc}
\hline
Name & SDSS name & RA & Dec & z \\
\hline
Harry & J0813+5422 & 123.350 & 54.377 & 0.043 \\
Padma & J1012+1017 & 153.161 & 10.289 & 0.070 \\
Neville & J1034+3938 & 158.661 & 39.641 & 0.043 \\
Theodore & J1314+4218 & 198.715 & 42.305 & 0.073 \\
\hline
\end{tabular}
\justify
\end{table}

We observed the $4$ disk-dominated AGN host galaxies listed in Table~\ref{table:coords} using KCWI at the Keck Observatory on Mauna Kea, Hawai'i, USA during dark time over half the night of the 13th December 2018. The weather was clear and the resultant seeing was $1.1''$. Our observational setup was determined by the combination of our need for a large field of view, high spectral resolution to resolve the emission lines of interest ($\mathrm{\left[ O \textsc{iii}\right] }$~and $\rm{H}\beta$), and spectral bandpass coverage wide enough to allow for good continuum measurements for continuum subtraction.
We used KCWI's blue camera with the `KBlue' filter. The field of view was $33'' \times 20''$, with a pixel scale of $[0.30, 0.68]~''/\rm{pixel}$ using $2\times2$ binning. Using KCWI's large slicer allowed us to cover the full extent of all the galaxies in a single pointing. We used the BH3 grating, which allowed us to cover both $\mathrm{\left[ O \textsc{iii}\right] }$~and $\rm{H}\beta$~with a spectral resolution of $R = 4500$, suitable for tracing the high-velocity line emission in these sources. The targets were bright enough that we were not significantly affected by the somewhat reduced throughput of the BH3 grating. Three targets (Harry, Padma \& Neville) were observed for $2,700$ seconds ($45$ minutes), with Theodore observed for $3,600$ seconds ($60$ minutes) to ensure a signal-to-noise ratio (SNR) of at least 10 for each target in the $\mathrm{\left[ O \textsc{iii}\right] }$~emission. An inspection of the data cubes reveals that this SNR was exceeded for each target.

\section{Data Reduction \& Analysis}\label{sec:data}

\subsection{KCWI data reduction}\label{sec:datared}

Each KCWI raw data cube was reduced using the Keck Data Reduction Pipeline (KeckDRP) written in IDL\footnote{Note that a \emph{Python} Data Reduction Pipeline is now available for Keck data; see \url{https://kcwi-drp.readthedocs.io}}. The pipeline has 8 stages: a basic CCD reduction (bias and overscan subtraction, gain-correction, trimming and cosmic ray removal), dark subtraction, geometric transformation, flat-field correction, sky subtraction, data cube generation, atmospheric refraction correction and a flux calibration. The standard stars used for flux calibration were G191-B2B and Feige 34. The total integrated flux across the data cubes for each of the four targets is shown in Figure~\ref{fig:kcwitargets}.

\begin{figure*}
\centering
\includegraphics[width=0.985\textwidth]{Harry_ifscube_brightest_spaxel_fit_z.png}
\includegraphics[width=0.985\textwidth]{Padma_ifscube_brightest_spaxel_fit_z.png}
\includegraphics[width=0.985\textwidth]{Neville_ifscube_brightest_spaxel_fit_z.png}
\includegraphics[width=0.985\textwidth]{Theodore_ifscube_brightest_spaxel_fit_z.png}
\caption{The spectrum (black) and fit (red) to the brightest, central spaxel for each source. The individual components for each emission line are shown by the coloured lines (offset to 0). Each source was fitted with 2 components {\color{referee}(narrow in green and broad in blue)} for the $\mathrm{\left[ O \textsc{iii}\right] }$~$4958\rm{\AA}$ and $5007\rm{\AA}$ emission lines, and with 3 components (narrow in blue, broad in green, and broad line region in magenta) for the $\rm{H}\beta$~emission line. Note that only Harry and Padma needed all three H$\beta$ components to fit to the brightest, central spaxel. The residual between the spectrum and the fit is shown below, with the $\chi^2$ value and corresponding p-value for a model with 21 degrees of freedom (amplitude, velocity and velocity dispersion for each of the 7 components). }
\label{fig:specfits}
\end{figure*}

\subsection{Spectral fitting}\label{sec:specfit}

Once the reduced data cubes were obtained using the KeckDRP, we used the \emph{Python} module \texttt{ifscube}\footnote{\url{https://ifscube.readthedocs.io/}} to fit spectral features in the wavelength range probed by KCWI.
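To make the decomposition described below concrete, the following sketch shows how a two-component (narrow plus broad) Gaussian fit to a continuum-subtracted $\mathrm{\left[ O \textsc{iii}\right] }$~$5007\rm{\AA}$ profile can be set up. Note this is not the \texttt{ifscube} interface itself, merely a schematic of the underlying idea using \texttt{astropy}, and the initial guesses and bounds are arbitrary illustrative values:

\begin{verbatim}
# Illustrative narrow + broad Gaussian decomposition of [OIII] 5007A
# for one continuum-subtracted spaxel spectrum (wave in Angstrom);
# a schematic with astropy, not the ifscube API.
from astropy.modeling import models, fitting

OIII_REST = 5007.0  # rest wavelength in Angstrom

def fit_spaxel(wave, flux, z):
    centre = OIII_REST * (1.0 + z)
    narrow = models.Gaussian1D(amplitude=flux.max(), mean=centre,
                               stddev=1.0)
    broad = models.Gaussian1D(amplitude=0.3 * flux.max(),
                              mean=centre - 2.0,  # allow a blueshift
                              stddev=4.0)
    narrow.stddev.bounds = (0.5, 2.5)   # keep this component narrow
    broad.stddev.bounds = (2.5, 20.0)   # and this component broad
    fitter = fitting.LevMarLSQFitter()
    return fitter(narrow + broad, wave, flux)
\end{verbatim}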
{\color{referee} Systemic velocities were first determined using the peak of the $\rm{H}\beta$~emission in the central spaxel pre-decomposition\footnote{{\color{referee} Upon inspection of the final fits, the peak of the overall $\rm{H}\beta$~emission in the central spaxel coincided with the peak of the narrow $\rm{H}\beta$~emission; see Figure~\ref{fig:specfits}}}, since stellar absorption lines were not available to us due to the Type 1 AGN nature of these systems (\citealt{RW18} show how $\rm{H}\beta$~is a good proxy for stellar absorption lines with an average velocity shift of $-9^{+41}_{-45}~\rm{km}~\rm{s}^{-1}$).}

Initially the flux, velocity and velocity dispersion of H$\beta$, $\mathrm{\left[ O \textsc{iii}\right] }$~$4958\rm{\AA}$ and $5007\rm{\AA}$ were fitted with two components each, with one component required to have a broader velocity dispersion. After inspection of the spectra and the initial spectral fits, it was apparent that the central $H\beta$ emission was dominated by emission from the broad line region (BLR) of the AGN, and that the H$\beta$ and $\mathrm{\left[ O \textsc{iii}\right] }$~narrow components were not kinematically linked, suggesting that the narrow $\mathrm{\left[ O \textsc{iii}\right] }$~emission was ionised by the central AGN alone, rather than by extended star formation in each source. We therefore re-performed the fits with three components for $H\beta$ (narrow; broad, which was kinematically tied to the broad $\mathrm{\left[ O \textsc{iii}\right] }$~components; and a BLR) and once again two components each for $\mathrm{\left[ O \textsc{iii}\right] }$~$4958\rm{\AA}$ and $5007\rm{\AA}$ (narrow and broad), with the narrow $\mathrm{\left[ O \textsc{iii}\right] }$~components no longer kinematically tied to the narrow $H\beta$ component. The BLR component is also not kinematically tied to the narrow $H\beta$ component. The fits to the central spaxel for each source are shown in Figure~\ref{fig:specfits}, clearly showing the need for a BLR $H\beta$ component along with the obvious blueshifted outflows in $\mathrm{\left[ O \textsc{iii}\right] }$. Only Harry and Padma (top panels of Figure~\ref{fig:specfits}) needed three components in H$\beta$ (narrow, BLR and outflow) in the central spaxel. Note that since these are Type 1 AGN we only expect to detect a blueshifted outflow component due to the effects of dust \citep{fischer13, muller11, baewoo14}. Indeed \cite{RW18} found that blueshifted $\mathrm{\left[ O \textsc{iii}\right] }$~is more frequently detected than redshifted $\mathrm{\left[ O \textsc{iii}\right] }$~by a factor of $3.6$ in Type 1 AGN (as opposed to a factor of 1.08 for Type 2 AGN) due to projection and orientation effects.

The integrated flux, velocity and velocity dispersion of the narrow H$\beta$ emission are shown in Figure~\ref{fig:hbetafour}, {\color{referee} with the top panels showing some of the structure in each system}. In Figure~\ref{fig:oiiinfour}, the integrated flux, velocity and velocity dispersion of the narrow $\mathrm{\left[ O \textsc{iii}\right] }$~component is shown, assumed to be ionised by the central AGN (although note that Neville does show some extended narrow $\mathrm{\left[ O \textsc{iii}\right] }$~emission presumably due to star formation along a spiral feature). Similarly, Figure~\ref{fig:oiiiwfour} shows the integrated flux, velocity and velocity dispersion of the broad $\mathrm{\left[ O \textsc{iii}\right] }$~components, assumed to be ionised by the AGN outflow.
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{kcwi_all_targets_hbeta_flux_v_sigma_z.png}
\caption{The fit to the H$\beta$ narrow emission for the four targets observed with KCWI, showing the integrated flux (top; with an arcsinh stretch), velocity (middle; relative to the systemic velocity) and velocity dispersion, $\sigma$ (bottom). Pixels are masked if the flux is below 3 standard deviations. Note that the KCWI spectral resolution (and therefore the minimum resolvable $\sigma$ value) is $\sim60~\rm{km}~\rm{s}^{-1}$. The bars show $1~\rm{kpc}$ in each panel for scale.}
\label{fig:hbetafour}
\end{figure*}

\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{kcwi_all_targets_oiii_narrow_flux_v_sigma_z.png}
\caption{The fit to the narrow $\mathrm{\left[ O \textsc{iii}\right] }$~emission for the four targets observed with KCWI, showing the integrated flux (top; with an arcsinh stretch), velocity (middle; relative to the systemic velocity) and velocity dispersion, $\sigma$ (bottom). Pixels are masked if the flux is below 3 standard deviations. Note that the KCWI spectral resolution (and therefore the minimum resolvable $\sigma$ value) is $\sim60~\rm{km}~\rm{s}^{-1}$. The bars show $1~\rm{kpc}$ in each panel for scale.}
\label{fig:oiiinfour}
\end{figure*}

\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{kcwi_all_targets_oiii_broad_flux_v_sigma_z.png}
\caption{The fit to the broadened $\mathrm{\left[ O \textsc{iii}\right] }$~emission for the four targets observed with KCWI, showing the integrated flux (top; with an arcsinh stretch), velocity (middle; relative to the systemic velocity) and velocity dispersion, $\sigma$ (bottom). Note that the KCWI spectral resolution (and therefore the minimum resolvable $\sigma$ value) is $\sim60~\rm{km}~\rm{s}^{-1}$. The bars show $1~\rm{kpc}$ in each panel for scale; note the difference in scale to Figures~\ref{fig:hbetafour} \&~\ref{fig:oiiinfour}. Pixels are masked if the flux is below 3 standard deviations. In the top panels, the blue cross denotes the brightest point in the $\mathrm{\left[ O \textsc{iii}\right] }$~narrow emission flux. For Padma, the position of the brightest outflow ionised emission is offset from the position of the brightest narrow emission ionised by the central AGN (marked by the blue cross). Note that the KCWI spatial resolution, combined with the ground based seeing, limits any further conclusions on the geometry or morphology of the outflows in these systems.}
\label{fig:oiiiwfour}
\end{figure*}

\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{kcwi_ratio_oiii_narrow_hbeta_narrow_z.png}
\includegraphics[width=0.99\textwidth]{kcwi_ratio_oiii_broad_hbeta_broad_z.png}
\caption{The ratio of narrow (top) and broad (bottom) $\mathrm{\left[ O \textsc{iii}\right] }$/H$\beta$ emission for each target. Note the change of scale between the two rows; the scale bars show $1~\rm{kpc}$ in each panel. Here we use only the flux from the broad $H\beta$ component ionised by the outflow, and not from the $H\beta$ BLR component in these plots (note that the central spaxels for Neville and Theodore do not have outflow ionised H$\beta$ emission; see Figure~\ref{fig:specfits}).
The colour bars are scaled between the typical ranges on a BPT diagram; star formation ionised emission typically has $\log_{10} [OIII]/H\beta \lesssim 0$ and AGN ionised emission typically has $\log_{10} [OIII]/H\beta \gtrsim 0$ {\color{referee} \citep{kewley01, kewley06}}. All of our sources have high broad $\mathrm{\left[ O \textsc{iii}\right] }$/$\rm{H}\beta$~values, indicating that the outflows are ionised by the AGN.}
\label{fig:ratios}
\end{figure*}

\subsection{Calculating \textsc{[OIII]} outflow rates}\label{sec:calcgasmass}

The fluxes shown in Figure~\ref{fig:oiiiwfour} enable a measurement of the outflow luminosity, $L$$\mathrm{\left[ O \textsc{iii}\right] }$~(knowing the redshift of each target), which can then be used to calculate a gas mass in the outflow following the method outlined in \cite{carniani15}:
\begin{multline}\label{eq:carni}
M_{\rm{[OIII]}} = 0.8 \times 10^8~M_{\odot} ~\times \\ \left( \frac{C}{10^{[O/H] - [O/H]_{\odot}}} \right) \left( \frac{L[\rm{O}\textsc{iii}]}{10^{44}~\rm{erg}~\rm{s}^{-1}} \right) \left( \frac{n_e}{500~\rm{cm}^{-3}} \right)^{-1}
\end{multline}
where $n_e$ is the electron density, $[O/H] - [O/H]_{\odot}$ is the metallicity relative to solar, and $C = \langle n_e \rangle^2 / \langle n_e^2 \rangle$. Here $\langle n_e \rangle^2$ is the square of the volume-averaged electron density and $\langle n_e^2 \rangle$ is the volume average of the squared electron density. This method requires some simplifying assumptions regarding the nature of the outflowing gas, particularly the temperature, metallicity and density of the gas.

The largest source of uncertainty when determining the mass outflow rate is the electron density, $n_e$. Typically, the $\mathrm{\left[ S \textsc{ii}\right] }$~emission is used to determine $n_e$ (although see \citealt{davies20} for a study showing that $\mathrm{\left[ S \textsc{ii}\right] }$~underestimates $n_e$); however, the wavelength of $\mathrm{\left[ S \textsc{ii}\right] }$~is not probed by KCWI for these four targets. Moreover, there is no general agreement on the best value of $n_e$ to use, with conflicting estimates across the literature for AGN at different redshifts. The long-assumed value of $n_e = 100~\rm{cm}^{-3}$ has recently been challenged by \citet[][$700 < n_e < 3000~\rm{cm}^{-3}$]{perna17} and \citet[][$n_e \sim 10^5~\rm{cm}^{-3}$]{villar15}. Recent IFU studies have shown that $n_e$ can also vary spatially across a galaxy; for example, \cite{mingozzi19} find a wide range of electron densities from $50-1000~\rm{cm}^{-3}$, with high densities concentrated in localised regions (which then dominate the total flux), while the rest of the regions in the galaxy have a much lower electron density. In the outflows themselves, \cite{mingozzi19} find a median $n_e\sim250~\rm{cm}^{-3}$. This is an issue which plagues all such studies on AGN outflows, since assuming a larger value of $n_e$ can lead to an underestimate of the gas mass present and vice versa. We chose to use $n_e = 500~\rm{cm}^{-3}$ in order to be consistent with \cite{carniani15}. {\color{referee} However, we note that taking the extremes in $n_e$ found by \citet[][$50-1000~\rm{cm}^{-3}$]{mingozzi19}, in comparison to the $n_e=500~\rm{cm}^{-3}$ value we use in this study, would result in outflow values either 10 times larger ($n_e =50~\rm{cm}^{-3}$) or two times smaller ($n_e=1000~\rm{cm}^{-3}$).
In the absence of spatially resolved information on the electron densities for the 4 galaxies in this study, using an average value of $n_e=500~\rm{cm}^{-3}$ is therefore a reasonable choice.} We also assume solar gas metallicity, $[O/H] = [O/H]_{\odot}$. Since we are assuming a single value of $n_e$ and solar metallicity, the first term of Equation~\ref{eq:carni} reduces to unity. Note we do not include an uncertainty on $n_e$ when calculating an error on $M_{\rm{[OIII]}}$ (or for the geometry of the system or volume filling factor); we propagate only the background noise and Poisson noise from the total flux (estimated using the {\tt photutils.calc\_total\_error} function\footnote{\url{https://photutils.readthedocs.io/}}).

We also investigate the kinematics of the outflow, including the velocity of the outflow. The velocities and velocity dispersions measured for the broad $\mathrm{\left[ O \textsc{iii}\right] }$~component (shown in Figure~\ref{fig:oiiiwfour}) only account for the velocity of the outflow along the line of sight, {\color{referee} whereas in reality the outflows will have a spread of observed radial velocities that will be lower than the actual bulk velocity of the outflow. The actual outflow velocity across 3-dimensions is best approximated by the most blueshifted velocity in the observed velocity distribution \citep{leung19}. A common parameter to measure this bulk velocity of the outflow is the maximum velocity, $v_{\rm{[OIII]}}$, determined as:
\begin{equation}\label{eq:velocity}
v_{\rm{[OIII]}} = |\Delta v_{\rm{max}}| + 2\sigma_{\rm{broad,[OIII],max}},
\end{equation}
where $|\Delta v_{\rm{max}}|$ is the maximum difference in the velocity of the narrow and broad $\mathrm{\left[ O \textsc{iii}\right] }$~components, and $\sigma_{\rm{broad,[OIII],max}}$ is the maximum velocity dispersion of the broad $\mathrm{\left[ O \textsc{iii}\right] }$~component. The relation in Equation~\ref{eq:velocity} is defined by the properties of a normal distribution, which is used to model the emission line velocity profiles \citep[see][]{rupke13}}. Not taking into account the line of sight effects on the velocity will result in an underestimate of the mass outflow rate (see Equation~\ref{eq:outflow}).

The physical extent of the outflow is also a key measurement for determining the scale over which these outflows will impact on the galaxy. We calculated the extent, $\rm{r}_{\rm{max}}$, as the most distant spatial extent of the broadened emission away from the central AGN (assumed to be the brightest pixel in the flux of the integrated $\mathrm{\left[ O \textsc{iii}\right] }$~narrow emission shown in Figure~\ref{fig:oiiinfour}, with the location highlighted by the blue crosses in the top panels of Figure~\ref{fig:oiiiwfour}). We deconvolved our estimate of $\rm{r}_{\rm{max}}$ using an estimate of the seeing from observations of the standard star Feige 34. Not performing such a deconvolution results in an overestimate of the maximum physical extent and therefore an underestimate of the mass outflow rate (see Equation~\ref{eq:outflow}). Combining the velocity and physical extent allows for a calculation of the timescale of the outflow:
\begin{equation}\label{eq:timescale}
t_{\rm{out}}~[\rm{yr}] = \bigg( \frac{\rm{r}_{\rm{max}}}{\rm{km}} \bigg) \bigg( \frac{\rm{v}_{\rm{[OIII]}}}{\rm{km}~\rm{yr}^{-1}} \bigg)^{-1}.
\end{equation}
The mass outflow rate is then calculated in the following way:
\begin{equation}\label{eq:outflow}
\bigg(\frac{\dot{\rm{M}}_{\rm{out}}}{\rm{M}_{\odot}~\rm{yr}^{-1}} \bigg) = B \bigg( \frac{\rm{M}_{[OIII]}}{\rm{M}_{\odot}} \bigg) \bigg( \frac{\rm{t}_{\rm{out}}}{\rm{yr}} \bigg)^{-1}.
\end{equation}
Note that this method assumes that the outflow rate is constant over the time that the outflow has been active, $t_{\rm{out}}$. A factor $B$ between $1$ and $3$ is typically applied to account for the geometry of the outflows \citep{harrison18}. For example, for a spherical outflow a factor of $B=3$ would be employed, whereas a biconical outflow covering only 1/3 of a sphere would need a factor of $B=1$. Given that our AGN host galaxies are disk-dominated and are assumed to be feeding the AGN through secular processes from the disk, along a common angular momentum vector, we presume the outflow will not be spherical (see S19 and \citealt{npk12}) and therefore use a conservative value of $B=1$ throughout this work. This assumption may result in an underestimate of the outflow rate in these systems.

\rowcolors{1}{lightgray}{}
\begin{table*}
\centering
\caption{Properties of the 4 disk-dominated AGN with outflow rates calculated from the extent and flux of $\mathrm{\left[ O \textsc{iii}\right] }$~in spectral observations taken with KCWI. {\color{referee} We list black hole masses, $\log_{10}$ $[\rm{M}_{\rm{BH}}$/$\rm{M}_{\odot}]$, the $\mathrm{\left[ O \textsc{iii}\right] }$~luminosity of the broad outflow component, $\log_{10}$ $[\rm{L}_{\rm{OIII}}$/$\rm{erg}~\rm{s}^{-1}]$, the Eddington ratio of the AGN, $\lambda_{\rm{Edd}}$, the accretion rate of the AGN, $\dot{m}$ (see Equation~\ref{eq:bhmdot}), the mass in the outflow, $[\rm{M}_{\rm{OIII}}$/$\rm{M}_{\odot}]$ (see Equation~\ref{eq:carni}), the bulk outflow velocity, $v_{\rm{max},[OIII]}$ (see Equation~\ref{eq:velocity}), the maximum radial extent of the outflow, $r_{\rm{max}}$ (see Section~\ref{sec:calcgasmass}), the outflow rate, $\dot{\rm{M}}_{\rm{out}}$ (see Equation~\ref{eq:outflow}), and the timescale of the outflow, $\rm{t}_{\rm{out}}$ (see Equation~\ref{eq:timescale}).}}
\label{table:rates}
\begin{tabular*}{\textwidth}{Cp{2.0cm}Cp{1.5cm}Cp{1.5cm}Cp{1.0cm}Cp{1.1cm}Cp{1.5cm}Cp{1.5cm}Cp{1.0cm}Cp{1.5cm}Cp{1.25cm}}
\hline
Name & $\log_{10}$ $[\rm{M}_{\rm{BH}}$/$\rm{M}_{\odot}]*$ & $\log_{10}$ $[\rm{L}_{\rm{OIII}}$/$\rm{erg}~\rm{s}^{-1}]$ & $\lambda_{\rm{Edd}}$* & $\dot{m}$* $[\mathrm{M_{\odot}\,yr^{-1}}]$ & $\log_{10}$ $[\rm{M}_{\rm{OIII}}$/$\rm{M}_{\odot}]$ & $v_{\rm{max},[OIII]}$ $[\rm{km}~\rm{s}^{-1}]$ & $r_{\rm{max}}$ $[\rm{kpc}]$ & $\dot{\rm{M}}_{\rm{out}}$ $[\mathrm{M_{\odot}\,yr^{-1}}]\dagger$ & $\rm{t}_{\rm{out}}$ $\rm{[Myr]}$ \\
\hline
Harry & $6.56^{+0.13}_{-0.12}$ & $41.2\pm1.2$ & $0.08^{+0.33}_{-0.02}$ & $0.02^{+0.04}_{-0.01}$ & $5.1\pm0.1$ & $836\pm28$ & $0.6\pm0.3$ & $0.19\pm0.09$ & $0.6\pm0.3$\\
Padma & $7.62^{+0.14}_{-0.14}$ & $42.2\pm0.2$ & $0.20^{+0.45}_{-0.09}$ & $0.07^{+0.4}_{-0.3}$ & $6.03\pm0.09$ & $1710\pm6$ & $2.4\pm0.4$ & $0.7\pm0.1$ & $1.4\pm0.2$ \\
Neville & $6.30^{+0.12}_{-0.12}$ & $41.6\pm0.4$ & $0.86^{+2.90}_{-0.26}$ & $0.07^{+0.11}_{-0.04}$ & $5.5\pm0.1$ & $1316\pm29$ & $2.1\pm0.3$ & $0.18\pm0.03$ & $1.6\pm0.2$ \\
Theodore & $6.73^{+0.11}_{-0.11}$ & $41.6\pm0.6$ & $0.77^{+1.68}_{-0.35}$ & $0.06^{+0.04}_{-0.02}$ & $5.4\pm0.2$ & $675\pm18$ & $1.3\pm0.4$ & $0.12\pm0.04$ & $1.9\pm0.6$ \\
\hline
\end{tabular*}
\justify
\vspace{0.5em}
* Measurements from SSL17.
Black hole masses are calculated using a virial assumption by measuring the full width half maximum of the broadened H$\alpha$~component. SMBH accretion rates are calculated using bolometric luminosities inferred from WISE W3 magnitudes (see Section~\ref{sec:mdot}).\\
{\color{referee} $\dagger$ The quoted uncertainties on the outflow rates do not include an estimate of the uncertainty on the electron density, $n_e$ (see Section~\ref{sec:calcgasmass}). In this study we use a value of $n_e=500~\rm{cm}^{-3}$ to calculate the mass in the outflow to be consistent with \cite{carniani15}, but we note that taking the extremes in $n_e$ found by \citet[][$50-1000~\rm{cm}^{-3}$]{mingozzi19}, results in outflow rates either 10 times larger ($n_e =50~\rm{cm}^{-3}$) or two times smaller ($n_e=1000~\rm{cm}^{-3}$) than quoted here. The mean outflow rate of the four targets would therefore be in the range of $\langle\dot{M}_{\rm{out}}\rangle = 0.15-3~\rm{M_{\odot}}~\rm{yr}^{-1}$.}
\end{table*}

The kinetic energy outflow rate and momentum flux of the outflow can then be calculated as:
\begin{equation}\label{eq:kinout}
\dot{E}_{\rm{out}} = \frac{1}{2} \dot{M}_{\rm{out}} v_{\rm{[OIII]}}^2
\end{equation}
and
\begin{equation}\label{eq:momout}
\dot{P}_{\rm{out}} = \dot{M}_{\rm{out}}v_{\rm{[OIII]}}
\end{equation}
respectively.

\subsection{Black hole accretion rates}\label{sec:mdot}

The SMBH accretion rate can be inferred from the bolometric luminosity of the AGN, $L_{\rm{bol}}$:
\begin{equation}\label{eq:bhmdot}
\dot{m} = L_{\rm{bol}}/\eta c^2,
\end{equation}
where the radiative efficiency $\eta = 0.15$ (see \citealt{elvis02}). Bolometric luminosities were originally inferred by SSL17 for these four targets using the WISE W3 band magnitudes at $12\mu m$, by applying a correction from \cite{richards06}. It is possible that the W3 flux densities could be contaminated by star formation; however, \cite{richards06} concluded that since there were minimal differences between their composite SEDs of Type 1 AGN around $\sim12\mu m$, this suggested minimal host galaxy contamination, unlike $\mathrm{\left[ O \textsc{iii}\right] }$, which could still have some star formation contamination in the narrow component for our four targets (e.g. see the top panel of Figure~\ref{fig:oiiinfour} for Neville). In addition, the normalisation factor used to convert $L_{\rm{[OIII]}}$ to $L_{\rm{bol}}$ is highly uncertain. While \cite{heckman04} suggest a normalisation factor of $\sim3500$, there is some debate in the literature over the correct value, with some arguing it is $\mathrm{\left[ O \textsc{iii}\right] }$~luminosity dependent \citep[e.g.][estimate it ranges from 87--454]{lamastra09}. We therefore decided to use the bolometric luminosities previously calculated by SSL17 using the less problematic W3 flux densities.

\section{Results}\label{sec:results}

The top panels of Figure~\ref{fig:oiiiwfour} show the integrated flux in the broad $\mathrm{\left[ O \textsc{iii}\right] }$~component, which is used to calculate the gas masses, velocities, physical extents and outflow rates given in Table~\ref{table:rates}.
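For concreteness, the full chain from measured line quantities to outflow energetics (Equations~\ref{eq:carni}--\ref{eq:momout}) can be sketched as follows. The sketch adopts the assumptions stated above (solar metallicity, so the first term of Equation~\ref{eq:carni} is unity; $n_e=500~\rm{cm}^{-3}$; $B=1$), and the example inputs are illustrative values rather than our measurements:

\begin{verbatim}
# Sketch of the outflow energetics chain, under the assumptions
# stated in the text (solar metallicity, n_e = 500 cm^-3, B = 1).
KM_PER_KPC = 3.086e16   # km per kiloparsec
S_PER_YR = 3.156e7      # seconds per year
G_PER_MSUN = 1.989e33   # grams per solar mass

def outflow_energetics(L_oiii, dv_max, sigma_max, r_max_kpc,
                       n_e=500.0, B=1.0):
    """L_oiii in erg/s; dv_max, sigma_max in km/s; r_max in kpc."""
    M_gas = 0.8e8 * (L_oiii / 1e44) * (500.0 / n_e)      # Msun
    v_out = abs(dv_max) + 2.0 * sigma_max                # km/s
    t_out = r_max_kpc * KM_PER_KPC / (v_out * S_PER_YR)  # yr
    M_dot = B * M_gas / t_out                            # Msun/yr
    mdot_cgs = M_dot * G_PER_MSUN / S_PER_YR             # g/s
    E_dot = 0.5 * mdot_cgs * (v_out * 1e5) ** 2          # erg/s
    P_dot = mdot_cgs * (v_out * 1e5)                     # g cm s^-2
    return M_gas, v_out, t_out, M_dot, E_dot, P_dot

# Illustrative Harry-like inputs: L = 10**41.2 erg/s, r_max = 0.6 kpc,
# with an assumed split |dv_max| + 2*sigma_max = 836 km/s.
print(outflow_energetics(10**41.2, 200.0, 318.0, 0.6))
\end{verbatim}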
The mean $\mathrm{\left[ O \textsc{iii}\right] }$~gas mass in the outflow for the four targets is $\langle\log_{10}[\rm{M}_{\rm{[OIII]}}/\rm{M}_{\odot}]\rangle = 5.5\pm0.2$ (with a range of $5.1-6.03$), with a corresponding mean outflow rate of $\langle\dot{M}_{\rm{out}}\rangle = 0.3\pm0.1~\rm{M}_{\odot}~\rm{yr}^{-1}$ (range $0.12-0.7~\rm{M}_{\odot}~\rm{yr}^{-1}$)\footnote{{\color{referee} Note that the uncertainties on these values do not include the uncertainties on the electron density $n_e$ (see Section~\ref{sec:calcgasmass}). In this study we use a value of $n_e=500~\rm{cm}^{-3}$ to be consistent with \cite{carniani15} in order to calculate the mass in the outflow, but we note that taking the extremes in $n_e$ found by \citet[][$50-1000~\rm{cm}^{-3}$]{mingozzi19}, results in outflow rates either 10 times larger ($n_e =50~\rm{cm}^{-3}$) or two times smaller ($n_e=1000~\rm{cm}^{-3}$) than quoted. The mean outflow rate of the four targets would therefore be in the range of $\langle\dot{M}_{\rm{out}}\rangle = 0.15-3~\rm{M_{\odot}}~\rm{yr}^{-1}$.}}. The outflows are substantial, with a mean maximum radial extent of $\langle\rm{r}_{\rm{max}}\rangle = 1.6\pm0.4~\rm{kpc}$ (range $0.6-2.4~\rm{kpc}$), which is $\sim25\%$ of the galaxy Petrosian radius on average. {\color{referee} These extents are similar to those found in other AGN outflow studies; for example, \citet{bae17} found that the mean outflow radius in their sample (20 Type 2 AGN at $z<0.1$) was $\sim1.8~\rm{kpc}$, \citet{harrison14} found a range in $\mathrm{\left[ O \textsc{iii}\right] }$~outflow extents of $1.5-4.3~\rm{kpc}$ (16 Type 2 AGN $z<0.2$), and \cite{kang18} measured outflows ranging from $0.60-7.45~\rm{kpc}$ in size (23 Type 2 AGN $z<0.2$)}. Figure~\ref{fig:ratios} shows the resolved narrow and broad $\mathrm{\left[ O \textsc{iii}\right] }$/H$\beta$ ratios and reveals how the outflows are ionised by the AGN in all four targets.

The gas mass values are consistent with those found by S19 using a narrowband imaging technique with the Shane-3m at the Lick Observatory, although they are on average $\sim1$ dex larger. This is unsurprising given that S19 struggled to cleanly separate the broad and narrow emission using narrowband data (either due to extended star formation or subtraction of the central AGN PSF), and were only able to derive a lower limit on the gas mass for Neville. This suggests that the PSF subtraction dominated the uncertainty in the measurements of S19, resulting in an underestimate of the $\mathrm{\left[ O \textsc{iii}\right] }$~gas masses. Note that the values initially quoted by S19 were affected by a standard star flux calibration error, and were on average overestimated by $2.6$ dex. This has since been corrected with an erratum (Smethurst et al. 2021; erratum submitted). We are not able to directly compare the velocities or maximum extents of the outflows (and therefore the outflow rates) derived, as S19 used $|\Delta v_{\rm{max}}|$ rather than $v_{\rm{[OIII]}}$ and did not deconvolve their measurement of $r_{\rm{max}}$ (see Section~\ref{sec:calcgasmass}), both of which lead to an underestimate of the outflow rates. Note that $v_{\rm{[OIII]}}$, as used in this study, is a more accurate representation of the maximum outflow velocity (see Section~\ref{sec:calcgasmass} and Equation~\ref{eq:velocity}). Despite the many limitations to narrowband imaging, it does allow for a higher spatial resolution in order to discern the basic morphology and features of each outflow.
The KCWI data in this study have low spatial resolution and do not allow us to draw any conclusions about the features of each outflow (the biggest limitation is the seeing, estimated at $1.1''$). The top panels of Figure~\ref{fig:oiiiwfour} reveal how the brightest pixel in the broadened $\mathrm{\left[ O \textsc{iii}\right] }$~emission ionised by the outflow also coincides with the brightest pixel in the narrow $\mathrm{\left[ O \textsc{iii}\right] }$~emission (ionised by the central AGN and star formation) for 3 of our sources (Padma has an offset). If there is structure to the outflows, it is lost due to the combination of the large pixel size of KCWI and the seeing. Therefore, in order to make any statements about the morphology of these outflows, more observations will be required with a higher spatial resolution IFU with AO capabilities (such as MUSE on the VLT).

\subsection{Harry (J0813+5422)}\label{subsec:harry}

Harry has the strongest bar feature of the four galaxies targeted in this study (as seen in Figures~\ref{fig:hsttargets} \& \ref{fig:kcwitargets}). The spiral features are picked up in the $\rm{H}\beta$~emission seen in the top left panel of Figure~\ref{fig:hbetafour}, with the velocity map revealing the ordered rotation in this feature. The narrow $\mathrm{\left[ O \textsc{iii}\right] }$~emission, shown in Figure~\ref{fig:oiiinfour}, is centrally concentrated and shows some ordered rotation, suggesting this emission is ionised by a combination of the AGN and central star formation. The blueshifted, broadened $\mathrm{\left[ O \textsc{iii}\right] }$~emission shown in Figure~\ref{fig:oiiiwfour}, however, does not show clear rotation in the velocity map. Figure~\ref{fig:ratios} reveals how the central region and the outflow have high $\mathrm{\left[ O \textsc{iii}\right] }$/$\rm{H}\beta$~ratios, suggesting that the outflow is indeed ionised by the AGN. Table~\ref{table:rates} reveals that Harry has the lowest ionised gas mass, the lowest SMBH accretion rate and the smallest spatial extent of the four targets. This suggests Harry's outflow is relatively new; it is therefore unsurprising that Harry has the shortest estimated active timescale of all four targets: $0.6~\rm{Myr}$ (see Table~\ref{table:rates}).

\subsection{Padma (J1012+1017)}\label{subsec:padma}

Figure~\ref{fig:hsttargets} reveals that Padma has a bar lens feature \cite[an oval-like structure along the bar major axis, see][]{athan15} surrounded by spiral structure. This spiral structure is not detected in the $\rm{H}\beta$~emission flux shown in Figure~\ref{fig:hbetafour}; however, the corresponding $\rm{H}\beta$~velocity map shows the most ordered rotation of the four targets studied (and similarly for the narrow $\mathrm{\left[ O \textsc{iii}\right] }$~velocity map in Figure~\ref{fig:oiiinfour}). The brightest point in the broadened $\mathrm{\left[ O \textsc{iii}\right] }$~flux is offset from the brightest point in the narrow $\mathrm{\left[ O \textsc{iii}\right] }$~flux (shown by the blue cross in Figure~\ref{fig:oiiiwfour}). Padma has the largest ionised gas mass of all four targets, at an order of magnitude larger than Harry. Padma also has the largest SMBH mass, SMBH accretion rate, outflow velocity and physical extent ($2.4~\rm{kpc}$), leading to the largest outflow rate of the four targets of $0.7\pm0.1~\rm{M}_{\odot}~\rm{yr}^{-1}$.
The ratio of the outflow rate to the SMBH accretion rate is therefore much larger, meaning more of the inflowing material is ejected in the outflow than is accreted by the SMBH (i.e. a higher mass loading factor; see \citealt{qui21} for example).

\subsection{Neville (J1034+3938)}\label{subsec:neville}

Neville has prominent flocculent spiral features and a possible weak bar, as revealed by the HST imaging in Figure~\ref{fig:hsttargets}. Emission from this flocculent structure is identifiable in the $\rm{H}\beta$~emission flux shown in Figure~\ref{fig:hbetafour}, where clear rotational structure can be seen in the velocity map. The centre of Neville's $\rm{H}\beta$~velocity and velocity dispersion maps show broadened emission with little rotation, suggesting the central $\rm{H}\beta$~emission is ionised by the AGN and not star formation, which is confirmed by the relatively high $\mathrm{\left[ O \textsc{iii}\right] }$/$\rm{H}\beta$~ratios seen in Figure~\ref{fig:ratios}. Extended narrow $\mathrm{\left[ O \textsc{iii}\right] }$~emission across a spiral feature can be seen in Figure~\ref{fig:oiiinfour}, suggesting ionisation by star formation is also present along with ionisation from the central AGN. S19 also reported extended $\mathrm{\left[ O \textsc{iii}\right] }$~emission from Neville in their narrowband imaging data, resulting in an uncertain isolation of the emission ionised by the outflow alone. With one of the largest SMBH accretion rates, the SMBH is accreting at a similar order of magnitude to the measured outflow rate. The outflow has one of the highest velocities and physical extents ($2.1~\rm{kpc}$) after Padma.

\subsection{Theodore (J1314+4218)}\label{subsec:theodore}

Theodore has a strong bar feature with faint, loosely wound spiral arms emerging from the ends (as seen in HST imaging in Figure~\ref{fig:hsttargets}). Figure~\ref{fig:kcwitargets} reveals how only the bar feature is picked up in the KCWI observations. This is particularly apparent in the flux in the $\rm{H}\beta$~emission shown in Figure~\ref{fig:hbetafour}, which also reveals some rotational structure in the corresponding velocity map. This bar feature is also just noticeable in the narrow $\mathrm{\left[ O \textsc{iii}\right] }$~emission (Figure~\ref{fig:oiiinfour}), suggesting ionisation due to ongoing star formation in the bar. This could also extend into the central regions of the galaxy, as the narrow $\mathrm{\left[ O \textsc{iii}\right] }$/$\rm{H}\beta$~ratio in the top right panel of Figure~\ref{fig:ratios} is low, suggesting ionisation dominated by star formation. However, the broad $\mathrm{\left[ O \textsc{iii}\right] }$/$\rm{H}\beta$~ratios in the bottom panels of the same figure are high, suggesting the outflow is ionised by the AGN and not stellar winds. Like Neville, the SMBH accretion rate of Theodore is of the same order of magnitude as the outflow rate (a factor of just $\sim2$ difference). The resulting outflow has the lowest velocity of the four targets observed.

\section{Discussion}\label{sec:discussion}

Given that the targets we have observed in this study are all disk-dominated with little to no bulge component (see Figure~\ref{fig:hsttargets}), we assume the galaxy and the SMBH have co-evolved via a non-merger process \citep{walker96,hopkins12, martig12, tonini16, stevens16}.
We must therefore consider which processes are able to drive an inflow of gas of at least $0.21-0.77~\rm{M}_{\odot}~\rm{yr}^{-1}$ (to power both the accretion of the SMBH and the outflow) for an extended period of $0.6-1.9~\rm{Myr}$ (the time over which the outflows in our four targets have been active; see Table~\ref{table:rates} and Equation~\ref{eq:timescale}). Bars and spiral arms are long-lived morphological features and could therefore feasibly drive an inflow to the central regions of a galaxy over many $\rm{Gyr}$ \citep{fanali15, hunt18, jung18}\footnote{Note that these simulations only considered galactic scale inflows and did not consider how gas was transferred from kpc to sub-pc scales in the central regions. Therefore, these simulations do not provide estimates for the amount of gas that makes it to the AGN accretion disk itself, merely that which is transferred to the central gas reservoir.}. All four of our targets show clear spiral features (see Figure~\ref{fig:hsttargets}), with Harry and Theodore showing a strong bar feature, Neville a weak bar feature \citep{nair10b} and Padma a barlens feature \citep{athan15}. Simulations suggest both bars and spiral arms can drive inflows at rates an order of magnitude larger than needed to power the combined outflow and SMBH accretion rates for all four targets \cite[$0.1-\rm{few}$ $M_{\odot}~\rm{yr}^{-1}$;][]{regan04, davies09, lin13,fanali15,slater19}. This order of magnitude difference is promising: since our simplifying assumption is only that the inflow must be at least enough to power both the SMBH accretion and the outflow, the inflow would also be sufficient to fuel central star formation or contribute to the central gas reservoir \citep{tacconi86, boker03, bigiel08, leroy13, moreno21}. This suggests that bars and spiral arms would be capable of driving inflows which could sustain both the SMBH growth and an outflow from the AGN, while still contributing gas to the central gas reservoir of the galaxies.

S19 compared their AGN outflow rates and SMBH accretion rates to the results of \cite{bae17}, who studied a sample of $20$ nearby ($0.024 < z < 0.098$) Type 2 AGN with mixed morphologies (two of their sample are ongoing mergers) using the Magellan/IMACS-IFU and VLT/VIMOS-IFU\footnote{These IFUs had a large enough wavelength range to allow \cite{bae17} to empirically determine the column densities of the ionised gas, $n_e$, using the $\mathrm{\left[ S \textsc{ii}\right] }$~line ratio, unlike in this study with KCWI. They found a range of $54 < n_e < 854~\rm{cm}^{-3}$, with an average $n_e\sim360\pm230~\rm{cm}^{-3}$, which is similar to the value of $n_e=500~\rm{cm}^{-3}$ used in this study. \citet{bae17} also used the $M -\sigma_*$ relation of \cite{park12} to derive black hole masses (rather than the virial assumption of \cite{greene05} as implemented by SSL17). In addition they calculated bolometric luminosities from the luminosity of the central narrow $\mathrm{\left[ O \textsc{iii}\right] }$~emission \cite[see][]{heckman04}, as opposed to deriving them using the WISE W3 band at $12\mu m$ as implemented by SSL17. The reader is urged to bear these caveats in mind while the two studies are compared.}. Although we only have four targets in this study, we can still make some comparisons to the \cite{bae17} sample.
The velocities of the outflows in our sample are comparable to the \cite{bae17} sample (when calculated in the same way as Equation~\ref{eq:velocity}), with our four targets having higher velocities by a factor of $\sim1.35$ on average. However, the average outflow rates for our four targets are much lower than those of the merger-powered \cite{bae17} sample, $\sim15$ times lower on average. In contrast, the black hole accretion rates are larger in our four targets than in the \cite{bae17} sample by a factor of $\sim3$ on average. This is in agreement with the findings of S19, who discussed the possibility that this scenario could be explained by higher spin of the SMBHs in the disk-dominated sample, following the hypothesis of \cite{npk12}.

Given that the outflow rates of the merger-grown \cite{bae17} sample are $\sim15$ times larger than the outflow rates of the four disk-dominated galaxies studied in this work, this suggests that the inflow rates funnelled by merger processes must be much larger than in secular processes. However, given the comparable accretion rates of the black holes powering the AGN, these inflows do not contribute to the growth of the black hole, but instead are used to power a large outflow which can have considerable impact on the surrounding galaxy. This supports the conclusions of \cite{mcalpine20}, who found using the EAGLE simulations that mergers do not induce a significant amount of SMBH growth, instead finding that the majority of mass is accreted by the SMBH outside the merger period. Similarly \cite{martin18} showed using the Horizon-AGN simulation that only $\sim35\%$ of all of the matter contained in SMBHs by $z\sim0$ is a result of mergers (either major or minor). Combining these results with our findings here suggests that secular processes are responsible for the majority of SMBH growth, whereas mergers are responsible for the majority of outflowing material and the subsequent feedback on the galaxy.

\begin{figure}
\centering
\includegraphics[width=0.475\textwidth]{outflow_energetics_compare_RW18_LbolSSL17.png}
\caption{The mass outflow rate (top), energy injection rate (middle), and momentum flux (bottom) against the AGN bolometric luminosity ($L_{\rm{bol}} = 3500~L_{[OIII]}$) for our four sources (red crosses). This figure is a recreation of Figure 11 from \protect\citet{RW18}; we compare our sources with their estimates for 5221 Type 1 AGNs from SDSS ($z<0.3$; shown by the grey circles). This figure shows how our secularly powered outflows are typical of low-redshift Type 1 AGN and that they have momentum-conserving outflows.}
\label{fig:energetics}
\end{figure}

We also compare the outflow rates, kinetic energy outflow rate and momentum flux of the outflow calculated for our sample to a sample of $\sim5000$ Type 1 AGN identified in SDSS from \cite{RW18}\footnote{Note that \cite{RW18} used SDSS spectra to determine outflow gas masses, which may miss some outflow flux outside the fibre (leading to a possible underestimate of the outflow rate), and inferred the physical extent of the outflow using an empirical relation with $\mathrm{\left[ O \textsc{iii}\right] }$~luminosity from \cite{kang18}. {\color{referee} In addition, \cite{RW18} estimated bulk outflow velocities as $v_{out} = (\sigma_{\rm{broad,[OIII],max}}^2 + |\Delta v_{\rm{max}}|^{2})^{0.5}$, which is different from how we estimated the bulk velocities in this study (see Equation~\ref{eq:velocity}).
Calculating our outflow velocities in this way results in lower values than quoted in Table~\ref{table:rates}, by $541~\rm{km}~\rm{s}^{-1}$ on average. This particularly affects the comparison of $\dot{E}_{out}$, which has a $v_{\rm{out}}^2$ dependency, leading to an average difference in $\log_{10}\dot{E}_{\rm{out}}$ of $\sim0.7$ dex (and $\sim0.34$ dex in $\log_{10}\dot{P}_{\rm{out}}$). Readers should bear these caveats in mind while comparing the results of this study with those from \cite{RW18} in Figure~\ref{fig:energetics}; however, we note that these differences due to the alternate bulk outflow velocity estimate used do not account for the differences between our four targets and the Type 1 AGN population seen in Figure~\ref{fig:energetics}.}} in Figure~\ref{fig:energetics}. We find that the outflow rates of our four targets are comparable to the larger AGN population given their bolometric luminosities. However, given their larger velocities, this results in higher kinetic energy injection rates and momentum flux compared to the larger AGN population, but still within the typical range. This figure demonstrates that the secularly powered outflows of our four targets are typical of low-redshift Type 1 AGN. It is worth noting here that many AGN are found in non-merger systems (for example see \citealt{smethurst16, aird19}), with a wide range of morphologies, which may also be fuelled by these same secular processes. Given that we find that our outflows and accretion rates are typical of the larger low-redshift AGN population, and given the results of simulations such as \cite{martin18} and \cite{mcalpine20}, it is possible that the majority of low-redshift AGN (both growth and outflows) are powered by secular processes.

The momentum flux of the outflows allows us to probe whether the outflows are momentum-conserving \cite[i.e. winds driven by radiation pressure;][]{thompson15, costa18} or energy-conserving \cite[i.e. driven by fast, small-scale winds;][]{faucher12, costa14}. The average ratio of $\log_{10}[c\dot{P}_{\rm{out}}/L_{\rm{bol}}] = -0.91\pm0.08$ suggests that these outflows are momentum-conserving. If the ratio were higher than unity, then an extra boost of momentum from an energy-conserving wind (which does work on the surrounding material, therefore increasing the momentum of the large-scale wind) would be required. The measurements of the kinetic energy injection rate allow us to probe the physical driver of the outflows observed in our four targets. For example, the ratio of $\dot{E}_{\rm{out}}/L_{\rm{bol}}$ is between $0.004\%$ and $0.12\%$ for our targets, meaning that the AGN is energetically sufficient to drive the observed outflows. This is in agreement with the high $\mathrm{\left[ O \textsc{iii}\right] }$/$\rm{H}\beta$~ratios seen in Figure~\ref{fig:ratios}, suggesting that the outflows are ionised by the AGN rather than star formation. Such low values of $\dot{E}_{\rm{out}}/L_{\rm{bol}}$ are often interpreted as outflows which are incapable of impacting their surrounding galaxy through AGN feedback. Many theoretical works claim that only those outflows with $\dot{E}_{\rm{out}}/L_{\rm{bol}} \gtrsim 0.5-5\%$ are capable of quenching galaxies \citep{dimatteo05, hopkins10, harrison18}; however, Figure~\ref{fig:energetics} shows how the majority of low-redshift AGN do not achieve such high efficiencies, with the majority $<1\%$.
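As a minimal illustration of these two diagnostics, assuming the outflow energetics have already been computed in cgs units (the function below is ours, for demonstration only):

\begin{verbatim}
# Feedback diagnostics: kinetic coupling efficiency and the momentum
# ratio used to separate momentum- from energy-conserving outflows;
# all inputs in cgs units (E_dot, L_bol in erg/s, P_dot in g cm s^-2).
C_LIGHT = 2.998e10  # speed of light in cm/s

def feedback_diagnostics(E_dot, P_dot, L_bol):
    coupling = E_dot / L_bol            # ~0.5-5% often quoted for quenching
    momentum = C_LIGHT * P_dot / L_bol  # > 1 suggests an energy-driven boost
    return coupling, momentum
\end{verbatim}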
To determine whether the outflows of our four targets will have an effect on their host galaxies, we first compare the velocity of each outflow to the escape velocity of the galaxy at a radius equal to the maximum extent of the outflow. We assume an $n=1$ Sersic profile to model the light distribution in each galaxy and calculate the fraction within the most distant spatial extent of the outflow, $r_{\rm{max}}$. We then assume a constant mass-to-light ratio in order to work out the total stellar mass of the galaxy within that radius, $M_{*,r<r_{\rm{max}}}$. The escape velocity of the galaxy at the maximum extent of each outflow is then calculated as $v_{\rm{esc,gal}} = (GM_{*,r<r_{\rm{max}}}/r_{\rm{max}})^{0.5}$, assuming spherical symmetry. The average $v_{[OIII]}$ for the four targets in our sample is $1134\pm 205~\rm{km/s}$, which is $\sim30.5$ times larger than the average escape velocity of the galaxy. We can therefore assume that these outflows, despite their relatively low outflow rates, will escape the galactic potential and cause AGN feedback to the galaxy by driving gas out of the central regions, or cause feedback to the galactic halo through heating the intergalactic medium (note the large radial extent of the outflows in these four targets of $0.6-2.4~\rm{kpc}$, which is $\sim25\%$ of the galaxy Petrosian radius on average). In order to determine whether the outflows are impacting each galaxy, we would need an estimate of the resolved SFR (e.g. from H$\alpha$ and/or D$_n4000$). The wavelength range of KCWI does not cover these spectral features in the redshift range of these sources; an IFU with a larger wavelength range would be necessary to quantify the feedback efficacy. Since these are Type 1 AGN, the SFRs derived from SDSS spectra are also unreliable due to contamination from the AGN. However, it is worth noting that these four targets have galaxy $u-r$ colours\footnote{Calculated in a `donut'-shaped aperture by removing the SDSS PSF magnitude from the Petrosian magnitude.} in the range $1.7-2.5$ ($\pm0.1$; although note this is not the case for the parent sample of disk-dominated galaxies, see Section~\ref{sec:sample}) and would therefore be classified as either Green Valley or Red Sequence galaxies \citep{baldry04, smethurst16}. In addition, SSL17 demonstrated how these disk-dominated systems lie on the typical galaxy stellar mass--SMBH mass correlation (i.e. within the scatter), suggesting that non-merger co-evolution of galaxies with their SMBHs is possible. Therefore, if \emph{both} merger-driven and non-merger-driven SMBH growth lead to co-evolution, this suggests that this co-evolution is regulated by feedback in both scenarios. Confirming whether AGN outflows in disk-dominated galaxies are powerful enough to cause feedback is therefore of great importance for our understanding of galaxy evolution through co-evolution. An IFU with a larger wavelength range (to cover e.g. $\rm{H}\alpha$ in order to probe the SFR), high spatial resolution (to more accurately resolve the regions impacted by the outflow) and better seeing (this is the biggest limiting factor using KCWI) would allow for a more detailed study of the feedback effects of outflows powered by secular processes in these disk-dominated systems. For example, an IFU such as MUSE on the Very Large Telescope (VLT), used with adaptive optics, would be ideal for this science case. \section{Conclusion}\label{sec:conc} We have observed four disk-dominated galaxies hosting luminous AGN with KCWI, an IFU available at the Keck observatory.
These galaxies are assumed to have their evolution (and therefore their SMBH growth) dominated by non-merger processes due to their lack of a central bulge (see Figure~\ref{fig:hsttargets}). We performed spectral fits to each of the reduced data cubes from KCWI and detected blueshifted broadened $\mathrm{\left[ O \textsc{iii}\right] }$~components in all four targets, with $\mathrm{\left[ O \textsc{iii}\right] }$/$\rm{H}\beta$ ratios indicative of ionisation by the AGN. With these spectra we were able to spectrally isolate the broadened $\mathrm{\left[ O \textsc{iii}\right] }$~emission from the narrow $\mathrm{\left[ O \textsc{iii}\right] }$~emission ionised by the central AGN (see Figures~\ref{fig:oiiinfour} \&~\ref{fig:oiiiwfour}). From these fits we calculated the integrated flux in $\mathrm{\left[ O \textsc{iii}\right] }$~$4958\rm{\AA}~\&~5007\rm{\AA}$ across each target and from this calculated the total ionised gas mass in the outflow (see Equation~\ref{eq:carni}). From the maximum extent of the outflow (see top panels of Figure~\ref{fig:oiiiwfour}) and the bulk velocity of the outflow we were able to estimate the outflow rate (see Equation~\ref{eq:outflow}), energy injection rate and momentum flux for these four systems. Our conclusions are as follows: \begin{enumerate} \item The outflow rates of the four targets range from $0.12-0.7~\rm{M}_{\odot}~\rm{yr}^{-1}$, with corresponding SMBH accretion rates in the range $0.02-0.7~\rm{M}_{\odot}~\rm{yr}^{-1}$. The velocities, outflow rates, kinetic energy injection rates and momentum fluxes of these secularly powered outflows are all typical of other low-redshift AGN outflows in the literature. \item Secular processes such as the funnelling of gas by bars and spiral arms are more than capable of providing enough gas to power both the accretion and outflow rates measured in this study, with simulations suggesting they can power inflows an order of magnitude larger than the combined SMBH accretion and AGN outflow rates observed. This suggests that a significant amount of the inflow funnelled to the centre by secular processes will not necessarily be used for SMBH growth or AGN outflows, but will contribute to the central gas reservoir of the galaxy. \item The maximum radial extent of the outflows is substantial, ranging from $0.6-2.4~\rm{kpc}$, which is on average $\sim25\%$ of the galaxy Petrosian radius. \item The outflow velocities in all of our AGN exceed ($\sim30$ times larger on average) the escape velocity of the galaxy at the maximum radial extent of the outflow. This suggests that these outflows will have a feedback effect on their galaxies, perhaps expelling gas from the central regions or heating the surrounding halo. If the co-evolution of SMBHs and galaxies is possible through both merger- and non-merger-driven growth, then AGN feedback may be responsible for regulating this co-evolution in both scenarios. Further spectral observations using an IFU with a larger wavelength range and higher spatial resolution will be needed to quantify the resolved feedback efficacy of these outflows. \item We find that the outflow rates in the merger-powered AGN sample of \cite{bae17} are $\sim51$ times larger than in our four disk-dominated targets, whereas the SMBH accretion rates are $\sim3$ times lower. This is in agreement with the findings of \cite{smethurst19b}, who attributed this to the hypothesised spin-up of SMBHs due to a secular feeding mechanism.
\end{enumerate} Combining our results with the conclusions of recent simulations \citep[e.g.][]{martin18, mcalpine20} suggests that secular processes are responsible for the majority of SMBH growth over cosmic time. A higher spatial resolution IFU study, supported by adaptive optics, of the larger parent sample of these four disk-dominated galaxies would allow for a more detailed study on the SMBH growth processes and AGN feedback effects of outflows powered by secular processes in these disk-dominated systems. \section*{Acknowledgements} RJS gratefully acknowledges funding from Christ Church, Oxford. BDS gratefully acknowledges current support from a UK Research and Innovation (UKRI) Future Leaders Fellowship (MR/T044136/1) and past support at the time of KCWI proposal and observing from the National Aeronautics and Space Administration (NASA) through Einstein Postdoctoral Fellowship Award Number PF5-160143 issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060. This research is based on observations made with the NASA/ESA Hubble Space Telescope obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5–26555. These observations are associated with program HST-GO-14606. This research made use of Astropy,\footnote{http://www.astropy.org} a community-developed core Python package for Astronomy \citep{astropy13, astropy18} and the affiliated {\tt ccdproc} package \citep{ccdproc}. The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawai'ian community. We are most fortunate to have the opportunity to conduct observations from this mountain. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is \url{www.sdss.org}. 
SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz-Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observat\'orio Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University and Yale University. \section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author. \bibliographystyle{mn2e}
\section{Introduction} Software systems used for decision-making are becoming increasingly complex and opaque. At the same time, such systems are used in processes of high social relevance, such as loan applications or parole decisions. It is an urgent question whether we should really \emph{trust} such opaque systems, which evade the understanding even of their programmers, to make critical decisions \cite{Panesar2019Ethics}. The concept of trust also plays an essential role in requirements engineering (RE), for instance, in ISO/IEC 25022 \cite{ISO25022}. However, trust remains a rather vague concept that is hard to measure and is, therefore, a difficult requirement to engineer towards \cite{ISO29148, Amershi2019Guidelines}. Many see explainability as a suitable means to foster stakeholder trust \cite{Langer2021What, Chazette2021Exploring}: If we better understand how the system produces its outputs and the explanation for a given output fits with our expectations of how a good decision should be made, this explanation presents a reason to trust the system. Thus, at first glance, a requirement for explainability seems to be more suitable than to have a requirement for trust directly. Its assumed potential to increase trust is one of the reasons why explainability has become a \enquote*{hot} topic in computer science and interdisciplinary research \cite{Langer2021What}, and now proliferates in the RE community as a non-functional requirement \cite{Chazette2021Exploring, Chazette2020Explainability, Koehl2019Explainability}. Indeed, explainability and trust are often connected in the literature \cite{ Langer2021What, Koehl2019Explainability, Chazette2020Explainability, Chazette2021Exploring, Gilpin2018Explaining, Richardson2018Survey, Anjomshoae2019Explainable, Fox2017Explainable, Anjomshoae2019Explanations, Nalepa2018From, Atzmueller2019Towards, Paez2019Pragmatic, Pieters2011Explanation, Gregor1999Explanations, Hois2019How, Dam2018Explainable, Clinciu2019Survey, Cai2019Effects, Hoffman2018Explaining, Mathews2019Explainable, Wang2019Designing, DeGraaf2017How, Abdul2018Trends, Adadi2018Peeking, Baaj2019Some, Balog2019Transparent, Baum2018From, Baum2018Towards, Carvalho2019Machine, Clos2017Towards, Conati2021Toward, Cotter2017Explainaing, Dodge2019Explaining, Freitas2014Comprehensible, Gilpin2018Society, Glass2008Toward, Green2009Generating, Guidotti2019Survey, Henin2019Towards, Holzinger2019Causability, Lage2019Exploring, Madumal2019Explainable, Madumal2019Grounded, Michael2019Machine, Miller2019Explanation, Nothdurft2013Impact, Olson2019Counterfactual, Ras2018Explanation, Ribeiro2016Why, Riedl2019Human, Rosenfeld2019Explainability, Sato2019Context, Schneider2019Personalized, Sevastjanova2018Going, Sheh2017Did, Sheh2018Defining, Sokol2018Conversational, Sokol2020Explainability, Sreedharan2018Handling, Swartout1983Xplain, Tintarev2007Explanations, Tintarev2007Effective, Tintarev2011Designing, Ter2017News, Vig2009Tagsplanations, Wang2018Explainable, Watts2019Local, Weber2019Explaining } and many researchers, at least implicitly, assume some form of what we will call the \emph{Explainability-Trust-Hypothesis} (\textsc{ET}\xspace) in the following: \begin{description} \item[(\textsc{ET}\xspace)] Explainability is a suitable means for facilitating trust in a stakeholder. \end{description} Recent psychological research has shown, however, that this widely accepted hypothesis should be called into question. 
Several studies have shown either no effect or even a negative effect of explanations on subjects' trust in a system \cite{Chen2019User, Cheng2019Explaining, Kizilcec2016How, Papenmeier2019How}. In this paper, we will discuss what these findings tell us about the relationship between explainability and trust and how to proceed when engineering for trust based on explainability. \section{\textsc{ET}\xspace in the Literature} The idea of a close connection between explanations or explainability and increased trust as expressed by \textsc{ET}\xspace is pervasive in the literature on explainable AI (XAI). For illustration, consider the following quotes: \begin{itemize} \item \enquote{In order for humans to trust black-box methods, we need explainability […].} \cite{Gilpin2018Explaining} \item \enquote{[…] in many, if not most, cases, the explanation is beneficial […] to foster better trust […].} \cite{Richardson2018Survey} \item \enquote{Increasing user’s trust in the system [… is] among the listed motivations for the explanations.} \cite{Anjomshoae2019Explainable} \item \enquote{The need for explainable AI is motivated mainly by three reasons: the need for trust […].} \cite{Fox2017Explainable} \item \enquote{Explanations are particularly essential […] as it [sic] raises trust […] in the system.} \cite{Anjomshoae2019Explanations} \item \enquote{[…] explainability will also enhance trust at the user side […].} \cite{Nalepa2018From} \item \enquote{[…] the provided […] explainability will also enhance trust in the system at the level of the users […].} \cite{Atzmueller2019Towards} \item \enquote{The main goal of Explainable Artificial Intelligence (XAI) has been variously described as a search for explainability, […] for ways of […] generating trust in the model and its predictive performance.} \cite{Paez2019Pragmatic} \item \enquote{Artificial agents need to explain their decision to the user in order to gain trust […].} \cite{Pieters2011Explanation} \item \enquote{Explanations, by virtue of making the performance of a system transparent to its users, are influential […] for improving users' trust […].} \cite{Gregor1999Explanations} \item \enquote{[…] explainability provides transparency and contri\-butes to trust […].} \cite{Hois2019How} \item \enquote{Explainability is […] a pre-requisite for practitioner trust […].} \cite{Dam2018Explainable} \end{itemize} Other authors are more cautious.
While they do connect explanations and trust in some way, their statements are more hedged than the above examples, mainly through the use of modals (e.g., \enquote{could}) or by speaking about \emph{appropriate} trust: \begin{itemize} \label{cautious quotes} \item \enquote{[…] XAI will be key for both expert and non-expert users to enable them to have a deeper understanding and the appropriate level of trust […].} \cite{Clinciu2019Survey} \item \enquote{[…] comparative explanations could help establish a more appropriate level of trust.} \cite{Cai2019Effects} \item \enquote{[…] there is a need to explain […] so that users and decision makers can develop appropriate trust […].} \cite{Hoffman2018Explaining} \item \enquote{Explainable Machine Learning (XAI) […] enables human users to […] appropriately trust […] emerging generation of artificially intelligent partners.} \cite{Mathews2019Explainable} \item \enquote{[…] explanations are often proposed to […] moderate trust to an appropriate level […].} \cite{Wang2019Designing} \end{itemize} Overall, many authors assume some sort of systematic connection between trust and explanations. While some remain cautious about the exact nature of that relationship, many seem to endorse the straightforward relationship suggested by \textsc{ET}\xspace. \section{Empirical Evidence Concerning \textsc{ET}\xspace} \label{evidence} Despite its intuitive appeal, \textsc{ET}\xspace is not without problems. As we shall see in this section, the empirical evidence is not conclusive enough to support \textsc{ET}\xspace. \subsection{Empirical Findings} \label{empirical findings} Although there are empirical findings supporting the claim that explanations can lead to increased trust in systems \cite{Chakraborti2019Plan, Nagulendra2016Providing}, various empirical studies also provide evidence against that hypothesis. For instance, providing information about what kind of information will be analyzed within AI-based personnel selection can positively \emph{and} negatively affect variables that are commonly associated with trust towards intelligent systems (e.g., perceived fairness) \cite{Langer2018Information, Langer2021Spare, Newman2020Eliminating}. Furthermore, results by Schlicker et al. \cite{Schlicker2021Expect} indicate that providing an explanation does not affect healthcare professionals' perceived justice of automated scheduling decisions. Given that perceived justice is usually also associated with trust \cite{Colquitt2011Justice}, this finding provides further evidence against \textsc{ET}\xspace. These are just some of many examples where empirical research has found no support for the positive relation between explanations and trust (further examples are \cite{Chen2019User, Cheng2019Explaining}). In fact, some studies even found a negative effect of explanations on trust. For instance, Kizilcec et al. \cite{Kizilcec2016How} found that providing too much information eroded trust. Similarly, Papenmeier et al. \cite{Papenmeier2019How} found that the presence of an explanation either did not affect or even reduced trust. \subsection{Discussion of Empirical Findings} Overall, there is some tension between previous empirical research and the various claims that explanations lead to trust. Thus, while it remains compatible with the data that some explanations will increase trust under certain conditions, \textsc{ET}\xspace in its generality should not be assumed. 
Once we take a closer look at the idea underlying \textsc{ET}\xspace, these findings are not surprising. We can think of three straight\-forward reasons why explanations might fail to foster trust: \begin{enumerate} \item If a person's trust in a system is already maximal, an explanation cannot further increase their trust. \item If the explanation reveals a problem of the system, the explanation might decrease rather than increase trust. \item If a person cannot comprehend the explanation or cannot use it to evaluate the system, the explanation might not change their trust in the system. \end{enumerate} Compelling arguments can be made that these reasons do indeed often play a role: Studies show that some people have a very high initial trust in automated systems \cite{Dzindolet2003Role}, explainability methods are often used for debugging systems \cite{Adadi2018Peeking, Carvalho2019Machine}, and many such methods produce explanations that are too technical for laypeople to understand \cite{Langer2021What, Gilpin2018Society}. It would be interesting to conduct research on whether these reasons are at play when explanations fail to increase trust. To this end, a meta-analysis could be a valuable starting point. For now, these considerations indicate why the relationship between explanations and trust is not as straightforward as assumed in \textsc{ET}\xspace. Therefore, a requirement for explainability is not necessarily a suitable substitute for a requirement for trust in RE. \section{From Trust to Trustworthiness} Does the above discussion indicate that one should not try to engineer for trust via explainability? At this point, we can distinguish two motivations for why someone might want to elicit trust in a system: First, the developer or deployer of a system might want more people to use their technology. Second, we as a society might want reliable technologies that can improve our lives to receive the appropriate trust from their potential users and other stakeholders. In the first case, the software developer or deployer might hope for trust independently of whether the system fulfills further desiderata like reliability, safety, or fairness. In other words, they might want users to trust their product whether or not it is actually trustworthy. In that case, explanations might not always help them reach their goal. However, we can assume that many people who speak more generally about trust in technology, especially legislators, are interested in trust rather for the second reason. As we have seen in Section \ref{cautious quotes}, many of the more cautious quotes related to \textsc{ET}\xspace focus on appropriate trust as opposed to trust in general. We will argue below that in the case where people are looking for the appropriate trust in a reliable system, explanations remain useful. An important mediator for such trust is a system's \emph{trustworthiness}, to which we will now turn. \subsection{Differentiating Trust and Trustworthiness} Trust is an attitude a stakeholder holds \emph{towards} a system. Trustworthiness, by contrast, is a property \emph{of} a system: intuitively, a system is trustworthy for a stakeholder when it is warranted for the stakeholder to put trust in the system. 
While there are many different conceptualizations of trustworthiness \cite{McLeod2020Trust, hardin2002trust, hawley2019trustworthy, jones2012trustworthiness}, we will settle for an operationalization of trustworthiness that we deem suitable for the context of engineering artificial systems: \begin{definition}[Trustworthiness] \label{def_TW} A system $S$ is \emph{trustworthy} to a stakeholder $H$ in a context $C$ if and only if \begin{enumerate} \item[(a)] $S$ works properly in $C$, and \item[(b)] $H$ would be justified\footnote{We rely on an internalist notion of justification (cf.\ e.g., \cite{Pappas2017Internalist}).} to believe that (a) if $H$ came to believe that (a). \end{enumerate} \end{definition} So, we see that trustworthiness is a property of a system that is parameterized with a stakeholder. Fulfilling condition (a) of Definition \ref{def_TW} is primarily up to the system, while fulfilling condition (b) also depends on the stakeholder in question.\footnote{In our view, the trustworthiness of a system can differ between stakeholders. For instance, a newly developed system for cancer detection might be trustworthy to its engineer who understands it in detail, but not to his friend, the oncologist, who does not have any insight into the system or any of its components.} Note that \enquote{works properly} is a deliberately vague expression. While it will be important to spell out this notion more precisely in future research on trust and trustworthiness, we shall not delve into the matter here. For current purposes, just note that merely fulfilling all specified requirements might not be enough for a system to \enquote*{work properly} in the sense of Definition \ref{def_TW}. An autonomous hiring system, for example, has to be just and fair in order to be considered as working properly, even if that has not been specified as an explicit requirement. Ideally, we want both: that a given system is trustworthy \emph{and} that it is actually trusted. Unfortunately, though, the two can come apart. A judge might put great trust in a system that assesses defendants, while, in fact, the system might be racist and, therefore, not trustworthy. In this case, there is trust without trustworthiness, or \emph{unwarranted trust} \cite{jacovi2021formalizing}. Likewise, an elderly person, suspicious of new technological developments, may not trust their navigation system although they know that it works very reliably and will guide them to their destination safely and quickly. In this case, there is trustworthiness without actual trust, or \emph{failed trust} \cite{lee2004trust}. Looking back at the two potential reasons to engineer for trust we discussed above, it can be seen that trustworthiness is closely related to the idea of appropriate trust in a reliable system: The system's reliability is captured in part (a) of the definition above. Part (b) helps to ensure that if the person trusts the system, they are justified to do so and, thus, their trust is appropriate. Nevertheless, trustworthiness does not automatically guarantee the appropriate trust of all stakeholders. \subsection{Trustworthiness as the Primary Concern} If the system's trustworthiness does not necessarily go hand in hand with stakeholders' trust, the natural question to ask is which of the two should be given priority, even if we ideally want both. We argue that there are good reasons to give priority to trustworthiness. 
\subsubsection{Practical Reasons} From a pragmatic point of view, it is reasonable to spend less energy on features that designers can hardly control and instead prioritize whatever features are more controllable at design time \cite{Amershi2019Guidelines}. If we follow this reasoning, trustworthiness should take priority over trust, since our control over trust is very limited at design time, while we arguably have much better (though not complete) control over trustworthiness at design time. Recall that trustworthiness is mainly a property of the system, while trust is an attitude of the stakeholders. Granted, even trustworthiness is parameterized with a stakeholder, but this might be less troublesome than it initially looks: Part (a) of Definition \ref{def_TW} is clearly controlled at design time, for it is the main objective of designers to make the system work properly, no matter how we spell this out. Part (b) seems more problematic, as it depends on specific stakeholders and what is justified for them to believe. This, however, is also not entirely outside the control of designers. In fact, designers have considerable control over (b) as they can already deliver appropriate justifications for certain stakeholders to believe in (a) as part of their system or alongside their system. (In the next section, we will see that explanations can be of help here.) Trust, on the other hand, can be controlled much less at design time: It can be elicited, for instance, by certain experiences a person has with a system, clever marketing and advertisement, or by the person's prior knowledge, beliefs, or preconceptions. So, whether someone trusts a system depends not only on its design and the stakeholders' interaction with it, but also heavily on the stakeholder's mindset, general attitude towards the system, prior experience with similar systems, and social network's attitude toward such systems \cite{Hoff2015Trust}. System designers can only influence some of these variables, while for others there is almost no possibility to influence them directly. So, we can conclude that system designers have much less influence on the actual trust that people build in a system than the system's trustworthiness. Therefore, trustworthiness takes priority from a pragmatic point of view. \subsubsection{Moral Reasons} From a normative point of view, we may run a different argument coming to the same conclusion: If designers neglect trustworthiness and build an untrustworthy system, we will probably have either an untrustworthy system that most stakeholders will not trust in the long run or an untrustworthy system that is trusted mistakenly, which can have devastating consequences. Neither of these scenarios is desirable and, arguably, deploying a trustworthy system will frequently have morally better consequences, even if it is not trusted. Think back, for example, to the racist decision system in court. If an untrustworthy system is employed in court, it is much more likely to do wrong than a trustworthy system, regardless of whether it is trusted. So, trustworthiness should often take priority, for even a trustworthy system that fails to spark trust can be expected to be morally superior to a similar untrustworthy system. \subsubsection{Sustainability Reasons} Trustworthiness may also prove to be the more sustainable desideratum compared to trust. 
An essential factor in a person's tendency to trust a system is the quality of the experiences they have had with the system \cite{Bailey2007Automation, YuvilerGavish2011Effect, Manzey2012Human}. If people are convinced to trust a system that does not work properly, their trust might easily be violated if the system fails. By contrast, with a trustworthy system, people can adjust their level of trust to the system's abilities. Consequently, it will become less likely that the system disappoints people's expectations and, over time, a system that works very well will potentially gain more trust through positive experiences. Thus, while the stakeholders' trust in a system is also important, the system's trustworthiness is a worthy goal to engineer for and might even take priority over actual trust. \subsection{Trustworthiness and Explainability} Several authors have remarked upon the relation between trustworthiness and explanations \cite{Pierrard2019New, Friedrich2011Taxonomy, Polley2021Towards, Robbins2019Misdirected, Markus2021Role, Mittelstadt2019Explaining, Baum2017Challenges, Darlington2013Aspects, McInerney2018Explore}. In a nutshell, their idea is that a system's explainability promotes its trustworthiness. If this idea holds up, it can serve as an important motivation behind XAI. Examples from the literature are: \begin{itemize} \item \enquote{Explaining decisions […] by intelligent systems is […] essential for […] becoming trustworthy to humans.} \cite{Pierrard2019New} \item \enquote{[…] objectives of explanations are manifold, including aims such as increasing trustworthiness […].} \cite{Friedrich2011Taxonomy} \item \enquote{A trustworthy system should give fair and reliable results along with its explanations.} \cite{Polley2021Towards} \item \enquote{It should be clear that explicability is considered to be an important part of […] \enquote*{trustworthy} […] AI.} \cite{Robbins2019Misdirected} \item \enquote{[…] explainable AI can contribute to the bigger goal of creating trustworthy AI […].} \cite{Markus2021Role} \item \enquote{[…] xAI is to produce methods that make algorithmic decision-making systems more trustworthy […]} \cite{Mittelstadt2019Explaining} \item \enquote{To be ideally trustworthy, a […] system needs to provide us with a rationalizing explanation which is accurate, graspable, and permissible.} \cite{Baum2017Challenges} \end{itemize} We, too, claim such a connection: we suggested that designers have some control over the fulfillment of condition (b) of Definition \ref{def_TW}, namely by providing justification to the stakeholder to believe that the system \enquote*{works properly}. Plausibly, one way to do so is by giving explanations. The reasoning here is quite straightforward: if we want to be justified in our beliefs about how well a system works, it will often be helpful to have a sufficient understanding of the system. Accurate explanations can help us to gain this understanding and, therefore, the justification. So, while explanations might not help with trust, they are likely to help with trust\emph{worthiness}. Note that this is not an empirical point but rather a theoretical one. Granted, \emph{what} someone believes or whether they \emph{feel} justified in their beliefs are empirical questions of psychology. However, the question that we are after, namely whether someone's belief would be justified, is essentially a question of epistemology and, therefore, not an empirical one.
So, while we cannot assume \textsc{ET}\xspace, our discussion suggests a tight connection between explanations and trustworthiness. \section{Future Research Directions} We argued that explainability can contribute to a system's trustworthiness and discussed why trustworthiness should often take precedence over trust in design processes. However, a range of questions remains to be answered by future research. For one thing, it is unclear how trustworthiness can be reliably assessed and measured. To this end, we need empirical and conceptual research to gain insights into what requirements to place on systems to make them trustworthy and how to meet these requirements. A more elaborate operationalization of system trustworthiness needs to be developed and agreed on and ways to assess trustworthiness have to be found. A second issue that needs further research is spelling out the exact relationship between explainability and trustworthiness. It needs to be clarified which explanations, under which conditions, can justify a stakeholder's belief that the system works properly. With this in mind, we suggest paying particular attention to the context in which an explanation is given, as different stakeholders and situations might require different explanations to make the system trustworthy \cite{Langer2021What, Chazette2021Exploring}. Third, it remains important to investigate what role explanations can play to increase trust in a system. The findings we discussed in section \ref{empirical findings} indicate many unexplored factors in the relationship between explanations and trust that call for empirical research into this relationship. While it became evident that not \emph{all} explanations foster trust, there still is the strong suspicion that some explanations in the right contexts can actually do so -- and it remains to be seen which ones. To better understand how stakeholders build trust in a system based on explanations, it will, for example, be worth studying how the timing and presentation of explanatory information as well as stakeholders' expectations affect their trust-building. Fourth, future research should examine how to elicit, increase, and maintain stakeholders' trust in trustworthy AI systems. To this end, researchers should investigate how explainability and other (contextual) factors may work together and interact to determine trust. Work on this question may be closely tied up with research on the other issues just mentioned. \section{Conclusion} In summary, our exposition highlights three lessons for requirements engineers, developers, and researchers: first, current research does not imply a close relation between explanations or explainability and trust; second, trustworthiness is a property worth engineering towards; and third, further empirical research is needed to properly understand the relationship between explainability, trustworthiness, and trust. These lessons have particular implications for RE: When designers want to ensure that stakeholders trust their system, they should not use explainability as a substitute -- at least according to current research. However, if they want to make their system trust\emph{worthy}, ensuring explainability might be very helpful and, thus, still of great importance. Also, one must not confuse trust and trustworthiness when formulating requirements. Overall, RE and many other disciplines would profit from more research on trust, trustworthiness, and explainability. 
\section*{Acknowledgments} Work on this paper was funded by the Volkswagen Foundation grants \textsc{AZ} 98509, 98512, 98513, and 98514 \href{https://explainable-intelligent.systems}{\enquote{Explainable Intelligent Systems}} (\textsc{EIS}) and by the \textsc{DFG} grant 389792660 as part of \href{https://perspicuous-computing.science}{\textsc{TRR}~248}. We thank three anonymous reviewers for their feedback. {\raggedbottom \bibliographystyle{IEEEtran}
\section*{Introduction} The $n$-dimensional associahedron, a polytope whose faces are in bijection with planar trees with $n+2$ leaves, was first introduced as a topological cell complex by J. Stasheff to describe algebras whose product is associative up to homotopy \cite{Stasheff63}. The problem of giving polytopal realizations of these CW-complexes has a rich history \cite{CeballosZiegler12}, and the algebras that they encode, called $\mathrm{A}_\infty$-algebras, have been extensively studied in various branches of mathematics. They were used in algebraic topology for the study of iterated loop spaces \cite{May72,BoardmanVogt73} or the study of homotopy theory of differential graded associative algebras \cite{LefevreHasegawa03,Vallette14} ; in symplectic topology to define Fukaya categories of symplectic manifolds \cite{Seidel08,fo3-I,fo3-II}, through the interpretation of the associahedra as moduli spaces of disks with marked boundary points; and more recently, in mathematical physics, mirror symmetry, Galois cohomology or non-commutative probability. The $n$-dimensional multiplihedron is a polytope whose faces are in bijection with 2-colored planar trees with $n+1$ leaves. It was first introduced as a topological cell complex by J. Stasheff to describe morphisms between $\mathrm{A}_\infty$-algebras \cite{Stasheff70}. It was only recently realized as a convex polytope in the work of S. Forcey \cite{Forcey08}, followed by the work of S. Forcey and S. Devadoss \cite{DevadossForcey08}, F. Ardila and J. Doker \cite{AD13}, and F. Chapoton and V. Pilaud \cite{CP22}. The multiplihedra were studied in algebraic topology \cite{BoardmanVogt73}, as well as in symplectic topology \cite{MauWoodward10,mau-wehrheim-woodward} and Morse theory \cite{mazuir-I,mazuir-II}, as they can be respectively realized as moduli spaces of quilted disks with marked boundary points and as moduli spaces of 2-colored metric trees. In this paper, we define and study a cellular approximation of the diagonal of the multiplihedra. The need for such an approximation comes from the fact that the standard thin diagonal $\triangle_P:P\to P\times P, x\mapsto (x,x)$ of a polytope $P$ is not cellular in general, i.e. its image is not a union of faces of $P\times P$. A cellular approximation of the diagonal is a cellular map $\triangle_P^{\textrm{cell}} : P \to P\times P$ which is homotopic to $\triangle_P$ and which agrees with $\triangle_P$ on the vertices of $P$. The Alexander--Whitney map \cite{EilenbergMacLane53} and the Serre diagonal \cite{Serre51} respectively define cellular approximations for the diagonal of the simplices and for the diagonal of the cubes, yielding the cup product in singular cohomology and the cup product in cubical cohomology. A cellular approximation for the diagonal of the associahedra was constructed in \cite{MTTV19} and yields a universal formula for the tensor product of two $\ensuremath{\mathrm{A}_\infty}$-algebras. See also \cite{SaneblidzeUmble04,MarklShnider06}. By the term \textit{universal}, we mean that the same formula applies uniformly to any pair of $\ensuremath{\mathrm{A}_\infty}$-algebras. In a similar fashion, the cellular approximation of the diagonal of the multiplihedra will be used to define a universal tensor product of $\ensuremath{\mathrm{A}_\infty}$-morphisms in this paper. Our main results can be summarized as follows. 
\begin{enumerate} \item We define a cellular approximation of the diagonal on Forcey--Loday realizations of the multiplihedra (\cref{def:diagonal-multipl-forcey-loday}). \item We endow them with a compatible operadic bimodule structure over the Loday realizations of the associahedra (\cref{thm:MainOperad}). \item We compute explicitly the associated combinatorial formula for the cellular image of the diagonal (\cref{thm:formuladiagonal}). \item We apply the cellular chains functor to the diagonal in order to define a universal tensor product of $\mathrm{A}_\infty$-morphisms (\cref{prop:diagonal-polytopale-m-infini}), and we study its properties (\cref{ss:homotopy-properties}). \end{enumerate} To achieve these goals, we use the theory of cellular approximations of diagonals developed by the first author in \cite{LA21}, which is based on the theory of fiber polytopes of \cite{BilleraSturmfels92} and the method introduced in \cite{MTTV19}. We prove that the Forcey--Loday realizations of the multiplihedra \cite{Forcey08} can be obtained from the Ardila--Doker realization of the multiplihedra \cite{AD13} by projection (\cref{prop:lifting}). These last realizations are generalized permutahedra, in the sense of A. Postnikov \cite{Postnikov09}, which allows us to apply the results of \cite{LA21} directly, both to define a cellular approximation of the diagonal and to describe its cellular image combinatorially. The tensor product of $\ensuremath{\mathrm{A}_\infty}$-morphisms defined by this diagonal does not however define a symmetric monoidal structure on the category $\infAalg$ of $\ensuremath{\mathrm{A}_\infty}$-algebras and their $\ensuremath{\mathrm{A}_\infty}$-morphisms, since it is not strictly compatible with the composition. This is not a defect of our construction: in \cref{thm:nofunctorial}, we prove that there is no tensor product of $\ensuremath{\mathrm{A}_\infty}$-morphisms which is strictly compatible with the composition of $\ensuremath{\mathrm{A}_\infty}$-morphisms. This proposition should be compared to a similar result by M. Markl and S. Shnider, saying that there is no strictly associative tensor product of $\ensuremath{\mathrm{A}_\infty}$-algebras \cite[Theorem 13]{MarklShnider06}. The preceding two properties are in fact always satisfied up to homotopy (see \cref{th:homotopy-properties}), which points towards the idea that the category $\infAalg$ should possess some kind of \textit{homotopy} symmetric monoidal structure. An analogous phenomenon was already observed for the category of homotopy representations of an algebraic group \cite{AriasAbadCrainicDherin11,poliakova2020cellular}. Our results can be readily applied to different fields. The operadic bimodule structure of Point~(2) above was used in the work of the second author, in order to realize $\mathrm{A}_\infty$-algebras and $\mathrm{A}_\infty$-morphisms in Morse theory \cite{mazuir-I,mazuir-II}. The algebraic tensor product in Point~(4) has applications in Heegaard Floer homology and could be used to relate the Fukaya categories of products of symplectic manifolds via Lagrangian correspondences, see \cref{ss:diag-symp}. We also expect future applications of our work to the computation of the homology of fibered spaces, using the construction of the convolution $\ensuremath{\mathrm{A}_\infty}$-algebra associated to an $\ensuremath{\mathrm{A}_\infty}$-coalgebra and an $\ensuremath{\mathrm{A}_\infty}$-algebra in \cref{prop:convolution-ainf}. 
This last construction can also be related to the deformation theory of $\infty$-morphisms developed in \cite{RobertNicoudWierstraI,RobertNicoudWierstraII}, see \cref{sec:RNW}. Moreover, our geometric methods shed new light on a result of M. Markl and S. Shnider \cite{MarklShnider06}, pointing towards possible links with discrete and continuous Morse theory (\cref{rem:Morse}). Finally, the results of this paper can be straightforwardly extended to the ``multiploperahedra'', a family of polytopes which is to the operahedra of \cite{LA21} what the multiplihedra are to the associahedra. They belong at the same time to the families of graph-multiplihedra \cite{DevadossForcey08} and of nestomultiplihedra \cite{AD13}. Together with the results of \cite[Section 4]{LA21}, one would obtain a tensor product of $\infty$-morphisms between homotopy operads, defined by explicit formul\ae. \subsection*{Layout} We introduce the Forcey--Loday and the Ardila--Doker realizations of the multiplihedra in \cref{sec:I}. We define a cellular approximation of their diagonal and endow the Forcey--Loday multiplihedra with an operadic bimodule structure over the Loday associahedra in \cref{sec:II}. We compute explicitly the associated combinatorial formula for the image of our diagonal in \cref{sec:III}. We define a tensor product of \ensuremath{\mathrm{A}_\infty}-algebras and of \ensuremath{\mathrm{A}_\infty}-morphisms and study its properties in \cref{sec:IV}. We finally sketch future applications of our work in \cref{sec:V}. \subsection*{Conventions} We use the conventions and notations of \cite{Ziegler95} for convex polytopes and the ones of \cite{LodayVallette12} for operads. The word operad will always mean non-symmetric operad \cite[Section 5.2.8]{LodayVallette12} in this paper. We denote by $[n]\coloneqq \{1,\ldots,n\}$ and by $\{ e_i\}_{i \in [n]}$ the standard basis of $\mathbb{R}^n$. The abbreviation ``dg'' will stand for the words ``differential graded''. \subsection*{Acknowledgements} We would like to thank Bruno Vallette for numerous discussions and for his careful reading of our paper, as well as Alexandru Oancea and Eric Hoffbeck for their comments on earlier versions. We are also indebted to Lino Amorim and Robert Lipshitz, for explaining to us their work and for their detailed insights on possible applications of our results in symplectic topology. We finally express our gratitude to Sushmita Venugopalan, for taking the time to discuss potential connections between our work and results on toric varieties, and to Daniel Robert-Nicoud, for discussing his work with us and suggesting new directions of research. \section{Realizations of the multiplihedra} \label{sec:I} Drawing from the work of Forcey in \cite{Forcey08}, we define the weighted Forcey--Loday realizations of the multiplihedra and describe their geometric properties in \cref{prop:PropertiesKLoday}. We then show how they can be recovered from the Ardila--Doker realizations of the multiplihedra, which are in particular generalized permutahedra. \subsection{2-colored trees and multiplihedra} \subsubsection{2-colored trees} We consider in this section \textit{planar rooted trees}, which we simply abbreviate as \textit{trees}. The term \emph{edge} refers to both internal and external edges. The external edges will sometimes be called leaves. \begin{definition}[Cut] A \emph{cut} of a tree is a subset of edges or vertices which contains precisely one edge or vertex in each non-self-crossing path from an incoming edge to the root.
\end{definition} A cut divides a tree into an upper part that we color in blue and a lower part that we color in red. The edges and vertices of the cut are represented by drawing a black line over them, as pictured in \cref{Fig2:InclusionOrder}. \begin{definition}[2-colored tree] \label{def:2coloredtree} A \emph{2-colored tree} is a tree together with a cut. We call \emph{2-colored maximal tree} a 2-colored binary tree whose cut is made of edges only. \end{definition} We denote by $\CT{n}$ (resp. $\CMT{n}$) the set of 2-colored trees (resp. 2-colored maximal trees) with $n$ leaves, for $n\geq 1$. \begin{definition}[Face order]\leavevmode The \emph{face order} $s\subset t$ on 2-colored trees is defined as follows: a 2-colored tree $s$ is less than a 2-colored tree $t$ if $t$ can be obtained from $s$ by a sequence of contractions of monochrome edges or moves of the cut from a family of edges to an adjacent vertex. \begin{figure}[h] \[\vcenter{\hbox{ \begin{tikzpicture}[yscale=0.7,xscale=1] \draw[very thick, MidnightBlue] (-2.5,3.5)--(-2,2.5); \draw[very thick, MidnightBlue] (-1.5,3.5)--(-2,2.5); \draw[very thick, MidnightBlue] (-2,2.5) -- (-1.75,2); \draw[very thick, MidnightBlue] (-1.25, 2) -- (-1,2.5); \draw[very thick, MidnightBlue] (-0.5,2.5) -- (0,1.5); \draw[very thick, MidnightBlue] (0,1.5)--(0,2.5); \draw[very thick, MidnightBlue] (0,1.5)--(0.5,2.5); \draw[very thick, MidnightBlue] (1,1)--(1.5,1.5); \draw[very thick, MidnightBlue] (1.5,1.5)--(1,2.5); \draw[very thick, MidnightBlue] (1.5,1.5)--(2,2.5); \draw[very thick, MidnightBlue] (2,2.5)--(1.5,3.5); \draw[very thick, MidnightBlue] (2,2.5)--(2,3.5); \draw[very thick, MidnightBlue] (2,2.5)--(2.5,3.5); \draw[very thick, Red!60] (0,-1)--(0, 1.5); \draw[very thick, Red!60] (0,0)--(-1.5,1.5); \draw[very thick, Red!60] (-1.5,1.5)--(-1.75, 2); \draw[very thick, Red!60] (-1.5,1.5)--(-1.25, 2); \draw[very thick, Red!60] (0,0)--(1, 1); \draw (-2,2) to (-1,2); \draw (-0.25,1.5)-- (0.25, 1.5) ; \draw (0.75,1) to (1.25,1); \end{tikzpicture}}} \quad \subset \quad \vcenter{\hbox{ \begin{tikzpicture}[yscale=0.7,xscale=1] \draw[very thick, MidnightBlue] (-1.5,1.5)--(-1.5,2.5); \draw[very thick, MidnightBlue] (-2,2.5) -- (-1.75,2); \draw[very thick, MidnightBlue] (-1.25, 2) -- (-1,2.5); \draw[very thick, MidnightBlue] (-0.5,2.5) -- (0,1.5); \draw[very thick, MidnightBlue] (0,1.5)--(0,2.5); \draw[very thick, MidnightBlue] (0,1.5)--(0.5,2.5); \draw[very thick, MidnightBlue] (1,1)--(1.5,1.5); \draw[very thick, MidnightBlue] (1.5,1.5)--(1,2.5); \draw[very thick, MidnightBlue] (1.5,1.5)--(1.33,2.5); \draw[very thick, MidnightBlue] (1.5,1.5)--(1.66,2.5); \draw[very thick, MidnightBlue] (1.5,1.5)--(2,2.5); \draw[very thick, MidnightBlue] (-1.5,1.5)--(-1.75, 2); \draw[very thick, MidnightBlue] (-1.5,1.5)--(-1.25, 2); \draw[very thick, Red!60] (0,-1)--(0, 1.5); \draw[very thick, Red!60] (0,0)--(-1.5,1.5); \draw[very thick, Red!60] (0,0)--(1, 1); \draw (-1.75,1.5) to (-1.25,1.5); \draw (-0.25,1.5) to (0.25,1.5); \draw (0.75,1) to (1.25,1); \end{tikzpicture}}} \] \caption{Two 2-colored trees, related by the face order.} \label{Fig2:InclusionOrder} \end{figure} \end{definition} \begin{definition}[Tamari-type order]\leavevmode The \emph{Tamari-type order} $s<t$ on 2-colored maximal trees is generated by the following three covering relations: \[ {\vcenter{\hbox{ \begin{tikzpicture}[yscale=0.5,xscale=0.5] \draw[very thick, MidnightBlue] (0,-0.5)--(0,0) -- (-2,2)--(-2,2.5); \draw[very thick, MidnightBlue] (-1,1)--(0,2)--(0,2.5) ; \draw[very thick, MidnightBlue] 
(0,0)--(1,1)--(1,2.5) ; \draw[very thick, Red!60] (0,-0.5)--(0,-1) ; \draw (-0.5,-0.5) --(0.5, -0.5); \draw (-2,2.5) node[above] {$t_1$}; \draw (0,2.5) node[above] {$t_2$}; \draw (1,2.5) node[above] {$t_3$}; \draw (0,-1) node[below] {$t_4$}; \end{tikzpicture}}}} \prec {\vcenter{\hbox{ \begin{tikzpicture}[yscale=0.5,xscale=0.5] \draw[very thick, MidnightBlue] (0,-0.5)--(0,0) -- (2,2)--(2,2.5); \draw[very thick, MidnightBlue] (1,1)--(0,2)--(0,2.5) ; \draw[very thick, MidnightBlue] (0,0)--(-1,1)--(-1,2.5) ; \draw[very thick, Red!60] (0,-0.5)--(0,-1) ; \draw (-0.5,-0.5) --(0.5, -0.5); \draw (2,2.5) node[above] {$t_3$}; \draw (0,2.5) node[above] {$t_2$}; \draw (-1,2.5) node[above] {$t_1$}; \draw (0,-1) node[below] {$t_4$}; \end{tikzpicture}}}}\ , \quad {\vcenter{\hbox{ \begin{tikzpicture}[yscale=0.5,xscale=0.5] \draw[very thick, MidnightBlue] (-2, 2.5)--(-2, 3) ; \draw[very thick, MidnightBlue] (0, 2.5)--(0, 3) ; \draw[very thick, MidnightBlue] (1, 2.5)--(1, 3) ; \draw[very thick, Red!60] (0,-0.5)--(0,0) -- (-2,2)--(-2,2.5); \draw[very thick, Red!60] (-1,1)--(0,2)--(0,2.5) ; \draw[very thick, Red!60] (0,0)--(1,1)--(1,2.5) ; \draw (-2.5,2.5) --(1.5, 2.5); \draw (-2,3) node[above] {$t_1$}; \draw (0,3) node[above] {$t_2$}; \draw (1,3) node[above] {$t_3$}; \draw (0,-0.5) node[below] {$t_4$}; \end{tikzpicture}}}} \prec {\vcenter{\hbox{ \begin{tikzpicture}[yscale=0.5,xscale=0.5] \draw[very thick, Red!60] (0,-0.5)--(0,0) -- (2,2)--(2,2.5); \draw[very thick, Red!60] (1,1)--(0,2)--(0,2.5) ; \draw[very thick, Red!60] (0,0)--(-1,1)--(-1,2.5) ; \draw[very thick, MidnightBlue] (2,2.5)--(2,3) ; \draw[very thick, MidnightBlue] (0,2.5)--(0,3) ; \draw[very thick, MidnightBlue] (-1,2.5)--(-1,3) ; \draw (-1.5,2.5) --(2.5, 2.5); \draw (2,3) node[above] {$t_3$}; \draw (0,3) node[above] {$t_2$}; \draw (-1,3) node[above] {$t_1$}; \draw (0,-0.5) node[below] {$t_4$}; \end{tikzpicture}}}}\ , \quad {\vcenter{\hbox{ \begin{tikzpicture}[yscale=0.5,xscale=0.5] \draw[very thick, MidnightBlue] (0,0.5)--(0,1); \draw[very thick, MidnightBlue] (0,1)--(-0.5,1.5)--(-0.5,2); \draw[very thick, MidnightBlue] (0,1)--(0.5,1.5)--(0.5,2); \draw[very thick, Red!60] (0,0)--(0,0.5); \draw (-0.5,0.5) --(0.5, 0.5); \draw (-0.5,2) node[above] {$t_1$}; \draw (0.5,2) node[above] {$t_2$}; \draw (0,0) node[below] {$t_3$}; \end{tikzpicture}}}} \prec {\vcenter{\hbox{ \begin{tikzpicture}[yscale=0.5,xscale=0.5] \draw[very thick, MidnightBlue] (-0.5,2)--(-0.5,2.5); \draw[very thick, MidnightBlue] (0.5,2)--(0.5,2.5); \draw[very thick, Red!60] (0,0.5)--(0,1); \draw[very thick, Red!60] (0,1)--(-0.5,1.5)--(-0.5,2); \draw[very thick, Red!60] (0,1)--(0.5,1.5)--(0.5,2); \draw (-1,2) --(1, 2); \draw (-0.5,2.5) node[above] {$t_1$}; \draw (0.5,2.5) node[above] {$t_2$}; \draw (0,0.5) node[below] {$t_3$}; \end{tikzpicture}}}} \ ,\] where each $t_i$, $1\leq i\leq 4$, is a binary tree of the appropriate color. \end{definition} We add a minimum element $\emptyset_n$ to the poset of 2-colored trees $(\CT{n}, \subset)$. \begin{proposition} The posets $(\CT{n}, \subset)$ and $(\CMT{n}, <)$ are lattices. \end{proposition} \begin{proof} The poset of 2-colored trees was proven in \cite{Forcey08} to be isomorphic to the face lattice of a polytope, the multiplihedron; see Point~(3) of \cref{prop:PropertiesKLoday}. The Hasse diagram of the poset of 2-colored maximal trees was proven to be isomorphic to the oriented 1-skeleton of the multiplihedron, and also to be the Hasse diagram of a lattice in \cite[Proposition 117]{CP22}. \end{proof} \begin{remark} F. Chapoton and V. 
Pilaud introduced in \cite{CP22} the shuffle of two generalized permutahedra (see \cref{sec:generalizedpermutahedra} for definition and examples). The fact that the poset $(\CMT{n}, <)$ is a lattice follows from the fact that the multiplihedron arises as the shuffle of the associahedron and the interval, which both have the lattice property, and that the shuffle operation preserves the lattice property in this case, see \cite[Corollary 95]{CP22}. \end{remark} \subsubsection{Grafting of trees} \label{sss:grafting} We will denote the operation of grafting a planar tree $v$ at the $i^{\rm th}$ leaf of a 2-colored tree $u$ by $u \circ_i v$. We will also denote the grafting of a level of 2-colored trees $v_1, \ldots, v_k$ on the $k$ leaves of a planar tree $u$ by $u(v_1, \ldots, v_k)$. We denote by $c^{\mathrm{T}}_n$ and by $c^{\mathrm{B}}_n$ the corollae with $n$ leaves fully painted with the upper and the lower color respectively; we denote by $c_n$ the corolla with $n$ leaves with frontier color at the vertex. It is straightforward to see that these two grafting operations on corollae generate all the 2-colored trees of codimension $1$: we call $(\mathrm{B})$, for ``bottom'', the first type of 2-colored trees $c_{p+1+r}\circ_{p+1} c^\mathrm{T}_q$, with $p+q+r=n$ and $2\leq q\leq n$, and we call $(\mathrm{T})$, for ``top'', the second type of 2-colored trees $c^\mathrm{B}_k(c_{i_1}, \ldots, c_{i_k})$, with $i_1+\cdots+i_k=n$, $i_1, \ldots,i_k\geq 1$, and $k\geq 2$. \begin{figure}[h] \[\vcenter{\hbox{ \begin{tikzpicture}[yscale=0.7,xscale=1] \draw[very thick, MidnightBlue] (0.5,1)--(0,2); \draw[very thick, MidnightBlue] (0.5,1)--(0.5,2); \draw[very thick, MidnightBlue] (0.5,1)--(1,2); \draw[very thick, MidnightBlue] (0,0)--(0.5, 1); \draw[very thick, MidnightBlue] (0,0)--(-0.5, 1); \draw[very thick, MidnightBlue] (0,0)--(-1.5,1); \draw[very thick, MidnightBlue] (0,0)--(1.5, 1); \draw[very thick, Red!60] (0,-1)--(0, 0); \draw (-0.25,0) to (0.25,0); \draw (0,-2) node {type $(\mathrm{B})$}; \end{tikzpicture}}}\qquad \vcenter{\hbox{ \begin{tikzpicture}[yscale=0.7,xscale=1] \draw[very thick, MidnightBlue] (-1.5,1)--(-1.5,2); \draw[very thick, MidnightBlue] (-2,2) -- (-1.75,1.5); \draw[very thick, MidnightBlue] (-1.25, 1.5) -- (-1,2); \draw[very thick, MidnightBlue] (-0.5,2) -- (0,1); \draw[very thick, MidnightBlue] (0,1)--(0.5,2); \draw[very thick, MidnightBlue] (1.5,1)--(1,2); \draw[very thick, MidnightBlue] (1.5,1)--(1.33,2); \draw[very thick, MidnightBlue] (1.5,1)--(1.66,2); \draw[very thick, MidnightBlue] (1.5,1)--(2,2); \draw[very thick, MidnightBlue] (-1.5,1)--(-1.75, 1.5); \draw[very thick, MidnightBlue] (-1.5,1)--(-1.25, 1.5); \draw[very thick, Red!60] (0,-1)--(0, 1); \draw[very thick, Red!60] (0,0)--(-1.5,1); \draw[very thick, Red!60] (0,0)--(1.5, 1); \draw (-1.75,1) to (-1.25,1); \draw (-0.25,1) to (0.25,1); \draw (1.25,1) to (1.75,1); \draw (0,-2) node {type $(\mathrm{T})$}; \end{tikzpicture}}}\] \caption{Examples of 2-colored trees of type $(\mathrm{B})$ and $(\mathrm{T})$ respectively. } \label{Fig5:FacetsColoredTrees} \end{figure} \subsubsection{Multiplihedra} \label{sec:multiplihedra} \begin{definition}[Multiplihedra] For any $n\geq 1$, an \emph{$(n-1)$-dimensional multiplihedron} is a polytope of dimension $(n-1)$ whose face lattice is isomorphic to the lattice $(\CT{n}, \subset)$ of 2-colored trees with $n$ leaves.
\end{definition} \begin{figure}[h] \[ \begin{tikzpicture}[xscale=0.8,yscale=1] \draw[fill, opacity=0.12] (-2,2)--(2,2)--(4,0)--(2,-2)--(-2,-2)--(-4,0)--cycle; \draw (-2,2) node[above left] {$\TreeLa$}; \draw (2,2) node[above right] {$\TreeRa$}; \draw (-2,-2) node[below left] {$\TreeLc$}; \draw (2,-2) node[below right] {$\TreeRc$}; \draw (-4.2,0) node[left] {$\TreeLb$}; \draw (4,0) node[right] {$\TreeRb$}; \draw (-3,1) node[above left] {$\TreeLab$}; \draw (-3,-1) node[below left] {$\TreeLbc$}; \draw (3,1) node[above right] {$\TreeRab$}; \draw (3,-1) node[below right] {$\TreeRbc$}; \draw (0,2.1) node[above] {$\TreeCa$}; \draw (0,-2.1) node[below] {$\TreeCb$}; \draw (0,0) node {$\TreeCab$}; \draw (-1.99,1.99) node {$\bullet$}; \draw (1.99,-1.99) node {$\bullet$}; \draw[thick] (-2,2)--(2,2) node[midway,sloped,allow upside down,scale=0.1]{\thickmidarrow}; \draw[thick] (2,2)--(4,0) node[midway,sloped,allow upside down,scale=0.1]{\thickmidarrow}; \draw[thick] (4,0)--(2,-2) node[midway,sloped,allow upside down,scale=0.1]{\thickmidarrow}; \draw[thick] (-2,2)--(-4,0) node[midway,sloped,allow upside down,scale=0.1]{\thickmidarrow}; \draw[thick] (-4,0)--(-2,-2) node[midway,sloped,allow upside down,scale=0.1]{\thickmidarrow}; \draw[thick] (-2,-2)--(2,-2) node[midway,sloped,allow upside down,scale=0.1]{\thickmidarrow}; \end{tikzpicture} \] \caption{A 2-dimensional multiplihedron and the Tamari-type poset $(\CMT{3}, <)$ on its oriented 1-skeleton.} \label{Fig4:J3} \end{figure} The dimension of a face labeled by a 2-colored tree is given by the sum of the degrees of its vertices defined by \[ \left|{\vcenter{\hbox{ \begin{tikzpicture}[scale=0.5] \draw[very thick, MidnightBlue] (0,-0.5) -- (0,1.5); \draw[very thick, MidnightBlue] (0,0) -- (-1,1)--(-1,1.5); \draw[very thick, MidnightBlue] (0,0) -- (1,1)--(1,1.5); \draw (1,1.5) node[above] {$k$}; \draw (-1,1.5) node[above] {$1$}; \draw (0,1.5) node[above] {$\cdots$}; \end{tikzpicture}}}}\right|=k-2\ , \quad \left|{\vcenter{\hbox{ \begin{tikzpicture}[scale=0.5] \draw[very thick, Red!60] (0,-0.5) -- (0,1.5); \draw[very thick, Red!60] (0,0) -- (-1,1)--(-1,1.5); \draw[very thick, Red!60] (0,0) -- (1,1)--(1,1.5); \draw (1,1.5) node[above] {$k$}; \draw (-1,1.5) node[above] {$1$}; \draw (0,1.5) node[above] {$\cdots$}; \end{tikzpicture}}}}\right|=k-2\ , \quad \left|{\vcenter{\hbox{ \begin{tikzpicture}[scale=0.5] \draw[very thick, MidnightBlue] (0,0) -- (0,1.5); \draw[very thick, MidnightBlue] (0,0) -- (-1,1)--(-1,1.5); \draw[very thick, MidnightBlue] (0,0) -- (1,1)--(1,1.5); \draw[very thick, Red!60] (0,0) -- (0,-0.5); \draw (-0.5,0)--(0.5,0); \draw (1,1.5) node[above] {$k$}; \draw (-1,1.5) node[above] {$1$}; \draw (0,1.5) node[above] {$\cdots$}; \end{tikzpicture}}}}\right|=k-1\ . \] The codimension of a 2-colored tree is then equal to the number of blue and red vertices. In the example of the 2-colored tree depicted on the left of \cref{Fig2:InclusionOrder}, the dimension is equal to 4 and the codimension is equal to 5. As proven in \cite[Proposition 117]{CP22}, the oriented $1$-skeleton of a multiplihedron is the Hasse diagram of the Tamari-type poset. \subsection{Forcey--Loday realizations of the multiplihedra} Jean-Louis Loday gave in \cite{Loday04a} realizations of the associahedra in the form of polytopes with integer coordinates. Stefan Forcey generalized this construction in \cite{Forcey08} in order to give similar realizations for the multiplihedra. 
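Before recalling these constructions, we note that the dimension count above is easy to make executable. The following Python sketch is ours and purely illustrative: the encoding of a 2-colored tree as a nested tuple $(\mathtt{color}, \mathtt{child}_1, \ldots, \mathtt{child}_k)$, with color \texttt{'T'} (upper), \texttt{'B'} (lower) or \texttt{'F'} (frontier, bicolored) and with \texttt{0} standing for a leaf, is a hypothetical convention of our own, not notation from the constructions themselves.
\begin{verbatim}
# Dimension and codimension of the face labeled by a 2-colored tree,
# following the degree count above: a monochrome vertex with k inputs
# has degree k - 2, a bicolored (frontier) vertex has degree k - 1,
# and the codimension is the number of monochrome vertices.

def dim_face(t):
    if t == 0:                      # a leaf carries no degree
        return 0
    color, *children = t
    k = len(children)
    deg = k - 1 if color == 'F' else k - 2
    return deg + sum(dim_face(c) for c in children)

def codim_face(t):
    if t == 0:
        return 0
    color, *children = t
    return (color != 'F') + sum(codim_face(c) for c in children)

# The 2-colored corolla labeling the top cell of J_3, and a maximal
# tree (binary, monochrome) labeling one of its vertices:
print(dim_face(('F', 0, 0, 0)), codim_face(('F', 0, 0, 0)))              # 2 0
print(dim_face(('B', ('T', 0, 0), 0)), codim_face(('B', ('T', 0, 0), 0)))  # 0 2
\end{verbatim}
For any 2-colored tree with $n$ leaves these two numbers add up to $n-1$, since the arities $k_v$ of the vertices of a tree with $n$ leaves satisfy $\sum_v (k_v-1) = n-1$.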
\begin{definition}[Weighted 2-colored maximal tree] A \emph{weighted 2-colored maximal tree} is a pair $(t, \omega)$ made up of a 2-colored maximal tree $t\in \CMT{n}$ with $n$ leaves together with a weight $\omega= (\omega_1, \ldots, \omega_n) \in \mathbb{R}_{>0}^n$. We call $\omega$ the \emph{weight} and $n$ the \emph{length} of the weight $\omega$. \end{definition} Let $(t, \omega)$ be a weighted 2-colored maximal tree with $n$ leaves. We order its $n-1$ vertices from left to right. At the $i^{\rm th}$ vertex, we consider the sum $\alpha_i$ of the weights of the leaves supported by its left input and the sum $\beta_i$ of the weights of the leaves supported by its right input. If the $i^{\rm th}$ vertex is colored by the upper color, we consider the product $\alpha_i\beta_i$ and if the $i^{\rm th}$ vertex is colored by the lower color, we consider the product $2\alpha_i\beta_i$. The associated string of products produces a point $M(t, \omega) \in \mathbb{R}_{>0}^{n-1}$, whose coordinates are integers for the standard weight $(1, \ldots, 1)$. For example, if only the first and last vertices of $t$ are colored by the lower color, we obtain a point of the form \[M(t, \omega) = \big(2\alpha_1\beta_1, \alpha_2\beta_2, \ldots, \alpha_{n-2}\beta_{n-2}, 2\alpha_{n-1}\beta_{n-1}\big)\in \mathbb{R}_{>0}^{n-1}\ . \] \begin{figure}[h!] \[ \vcenter{\hbox{\begin{tikzpicture}[scale=1.5] \draw[thick] (1,0)--(2,0); \draw (1,0) node[above] {$\TreeBa$}; \draw (1.95,0) node[above] {$\TreeBb$}; \draw (1,0) node[below] {$1$}; \draw (2,0) node[below] {$2$}; \end{tikzpicture}}} \qquad \qquad \vcenter{\hbox{ \begin{tikzpicture}[scale=1.5] \draw (1,-0.05)--(1,0.05); \draw (2,-0.05)--(2,0.05); \draw (3,-0.05)--(3,0.05); \draw (4,-0.05)--(4,0.05); \draw (-0.05, 1)--(0.05,1); \draw (-0.05, 2)--(0.05,2); \draw (-0.05, 3)--(0.05,3); \draw (-0.05, 4)--(0.05,4); \draw[->] (0,0)--(5,0); \draw[->] (0,0)--(0,5); \draw (1,0) node[below] {$1$}; \draw (2,0) node[below] {$2$}; \draw (3,0) node[below] {$3$}; \draw (4,0) node[below] {$4$}; \draw (0,1) node[left] {$1$}; \draw (0,2) node[left] {$2$}; \draw (0,3) node[left] {$3$}; \draw (0,4) node[left] {$4$}; \draw[thick] (1,2)--(1,4)--(2,4)--(4,2)--(4,1)--(2,1)--cycle; \draw (1,2) node[below left] {$\TreeLa$}; \draw (2,1) node[below left] {$\TreeRa$}; \draw (2,4) node[above right] {$\TreeLc$}; \draw (4,2) node[above right] {$\TreeRc$}; \draw (1,4) node[above left] {$\TreeLb$}; \draw (4,1) node[below right] {$\TreeRb$}; \end{tikzpicture}}} \] \caption{Examples of points associated to 2-colored maximal trees, with standard weight.} \end{figure} \begin{definition}[Forcey--Loday Realization] \label{def:ForceyLoday} The \emph{Forcey--Loday realization of weight $\omega$} of the $(n-1)$-dimensional multiplihedron is the polytope \[\mathrm{J}_\omega \coloneqq \conv \big\{M(t, \omega)\mid t\in \CMT{n} \big\}\subset \mathbb{R}^{n-1}\ .\] \end{definition} The Forcey--Loday realization associated to the standard weight $(1, \ldots, 1)$ will simply be denoted by $\mathrm{J}_n$. By convention, we define the polytope $\mathrm{J}_\omega$ with weight $\omega=(\omega_1)$ of length $1$ to be made up of one point labeled by the 2-colored tree $\mathrm{i}^\T_\B\coloneqq \TreeIab$\ .
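To make \cref{def:ForceyLoday} concrete, here is a minimal Python sketch of the coordinates $M(t, \omega)$, again purely illustrative; it reuses our hypothetical tuple encoding from above, specialized to maximal trees (binary, with monochrome colors \texttt{'T'} and \texttt{'B'} only), a leaf now being the index of its weight.
\begin{verbatim}
# Coordinates of the vertex M(t, omega): the i-th vertex of t, in
# left-to-right (in-order) position, contributes alpha_i * beta_i if it
# is upper-colored and 2 * alpha_i * beta_i if it is lower-colored,
# where alpha_i and beta_i are the total weights of the leaves of its
# left and right inputs.

def total_weight(t, omega):
    if isinstance(t, int):
        return omega[t]
    _, left, right = t
    return total_weight(left, omega) + total_weight(right, omega)

def M(t, omega):
    coords = []
    def visit(node):
        if isinstance(node, int):
            return
        color, left, right = node
        visit(left)
        a = total_weight(left, omega)
        b = total_weight(right, omega)
        coords.append(a * b if color == 'T' else 2 * a * b)
        visit(right)
    visit(t)
    return coords

omega = [1, 1, 1, 1]                               # standard weight, n = 4
print(M(('T', ('T', ('T', 0, 1), 2), 3), omega))   # [1, 2, 3], all upper
print(M(('B', ('B', ('B', 0, 1), 2), 3), omega))   # [2, 4, 6], all lower
\end{verbatim}
With the standard weight, the five maximal trees with all vertices upper-colored yield the inner pentagon of the figure below, a copy of the Loday associahedron $\mathrm{K}_4$, while the five all-lower ones yield the outer pentagon $2\mathrm{K}_4$.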
\begin{figure}[h] \[ \begin{tikzpicture}[scale=0.8, J4] \draw[->] (4,-4,-3)--(5,-4,-3) node[below left] {$x_1$}; \draw[->] (4,-4,-3)--(4,-3, -3) node[below right] {$x_2$}; \draw[->] (4,-4,-3)--(4,-4,-2) node[above] {$x_3$}; \draw[thick, opacity=0.2] (1,4,1)--(1,2,3)--(2,1,3)--(3,1,2)--(3,2,1)--cycle; \draw[thick] (2,8,2)--(2,4,6)--(4,2,6)--(6,2,4)--(6,4,2)--cycle; \draw[thick] (4,1,6)--(6,1,4)--(6,1,2)--(6,2,1)--(6,4,1)--(2,8,1)--(1,8,1)--(1,8,2)--(1,4,6)--(1,2,6)--(2,1,6)--cycle; \draw[thick] (4,1,6)--(4,2,6); \draw[thick] (6,1,4)--(6,2,4); \draw[thick, opacity=0.2] (6,1,2)--(3,1,2); \draw[thick, opacity=0.2] (6,2,1)--(3,2,1); \draw[thick] (6,4,1)--(6,4,2); \draw[thick] (2,8,1)--(2,8,2)--(1,8,2); \draw[thick, opacity=0.2] (1,8,1)--(1,4,1); \draw[thick] (1,4,6)--(2,4,6); \draw[thick, opacity=0.2] (1,2,6)--(1,2,3); \draw[thick, opacity=0.2] (2,1,6)--(2,1,3); \draw[fill, opacity=0.12] (2,8,2)--(2,4,6)--(4,2,6)--(6,2,4)--(6,4,2)--cycle; \draw[fill, opacity=0.18] (6,2,4)--(6,1,4)--(6,1,2)--(6,2,1)--(6,4,1)--(6,4,2)--cycle; \draw[fill, opacity=0.18] (6,2,4)--(6,1,4)--(4,1,6)--(4,2,6)--cycle; \draw[fill, opacity=0.18] (4,1,6)--(4,2,6)--(2,4,6)--(1,4,6)--(1,2,6)--(2,1,6)--cycle; \draw[fill, opacity=0.06] (2,8,2)--(6,4,2)--(6,4,1)--(2,8,1)--cycle; \draw[fill, opacity=0.06] (2,8,1)--(2,8,2)--(1,8,2)--(1,8,1)--cycle; \draw[fill, opacity=0.06] (2,8,2)--(1,8,2)--(1,4,6)--(2,4,6)--cycle; \end{tikzpicture} \] \caption{The Forcey--Loday realization of the multiplihedron $\mathrm{J}_4$ .} \end{figure} \begin{proposition}\label{prop:PropertiesKLoday} The Forcey--Loday realization $\mathrm{J}_\omega$ satisfies the following properties. \begin{enumerate}[leftmargin=*] \item Let $t\in \CMT{n}$ be a 2-colored maximal tree. \noindent For $p+q+r=n$, with $2\leq q\leq n$, the point $M(t, \omega)$ is contained in the half-space defined by the inequality \begin{equation}\label{Eq:B}\tag{$\mathrm{B}$} x_{p+1}+\cdots+x_{p+q-1}\geq \sum_{p+1\leq a<b\leq p+q} \omega_a \omega_b\ , \end{equation} with equality if and only if the 2-colored maximal tree $t$ can be decomposed as $t=u\circ_{p+1} v$, where $u\in\CMT{p+1+r}$ and $v\in \Tam{q}$. \noindent For $i_1+\cdots+i_k=n$, with $i_1, \ldots,i_k\geq 1$ and $k\geq 2$, the point $M(t, \omega)$ is contained in the half-space defined by the inequality \begin{equation}\label{Eq:T}\tag{$\mathrm{T}$} x_{i_1}+x_{i_1+i_2}+\cdots+x_{i_1+\cdots+i_{k-1}}\leq 2\sum_{1\leq j<l\leq k} \omega_{I_j} \omega_{I_l}\ , \end{equation} where $I_j=[i_1+\cdots +i_{j-1}+1, \ldots, i_1+\cdots +i_j]$ and $\omega_{I_j}\coloneqq\sum_{a\in I_j} \omega_a$, with equality if and only if the 2-colored maximal tree $t$ can be decomposed as $t=u(v_1, \ldots, v_k)$, where $u\in\Tam{k}$ and $v_j\in \CMT{i_j}$, for $1\leq j\leq k$. \item The polytope $\mathrm{J}_\omega$ is the intersection of the half-spaces defined in \emph{(1)}. \item The face lattice $(\mathcal{L}(\mathrm{J}_\omega), \subset)$ is isomorphic to the lattice $(\CT{n}, \subset)$ of 2-colored trees with $n$ leaves. \item Any face of a Forcey--Loday realization of a multiplihedron is isomorphic to a product of a Loday realization of an associahedron with possibly many Forcey--Loday realizations of multiplihedra, via a permutation of coordinates. \end{enumerate} \end{proposition} \begin{proof} Points~(1)--(3) were proved in \cite{Forcey08}. We prove Point~(4) by induction on $n$. It clearly holds true for $n=1$. 
Let us suppose that it holds true up to $n-1$ and let us prove it for the polytopes $\mathrm{J}_\omega$, for any weight $\omega$ of length $n$. We first examine the facets. In the case of a facet of type $(\mathrm{B})$ associated to $p+q+r=n$ with $2\leq q \leq n-1$, we consider the following two weights \[ \overline{\omega}\coloneqq (\omega_1, \ldots, \omega_{p}, \omega_{p+1}+\cdots+\omega_{p+q}, \omega_{p+q+1}, \ldots, \omega_{n}) \quad \text{and} \quad \widetilde{\omega}\coloneqq (\omega_{p+1}, \ldots, \omega_{p+q}) \] and the isomorphism \begin{align*} \begin{array}{rccc} \Theta_{p,q,r}\ : & \mathbb{R}^{p+r}\times \mathbb{R}^{q-1} &\xrightarrow{\cong} &\mathbb{R}^{n-1}\\ &(x_1, \ldots, x_{p+r})\times (y_1, \ldots, y_{q-1}) & \mapsto& (x_1, \ldots, x_{p} , y_1, \ldots, y_{q-1}, x_{p+1}, \ldots, x_{p+r})\ . \end{array} \end{align*} The vertices of $\mathrm{J}_{\overline{\omega}}\times \mathrm{K}_{\widetilde{\omega}}$ are sent to the vertices of the facet of $\mathrm{J}_\omega$ labelled by the 2-colored tree $c_{p+1+r}\circ_{p+1} c^\mathrm{T}_q$. In other words, the permutation of coordinates $\Theta_{p,q,r}$ sends $\mathrm{J}_{\overline{\omega}}\times \mathrm{K}_{\widetilde{\omega}}$ bijectively onto this facet of $\mathrm{J}_\omega$. Similarly, in the case of a facet of type $(\mathrm{T})$ associated to $i_1+\cdots+i_k=n$ with $i_1, \ldots,i_k\geq 1$ and $k\geq 2$, we consider the following weights \[ \overline{\omega}\coloneqq \big(\sqrt{2}\omega_{I_1}, \ldots, \sqrt{2}\omega_{I_k}\big) \quad \text{and} \quad \widetilde{\omega}_j\coloneqq (\omega_{i_1+\cdots+i_{j-1}+1}, \ldots, \omega_{i_1+\cdots+i_{j-1}+i_j}), \ \text{for}\ 1\leq j\leq k, \] and the isomorphism \begin{align*} \begin{array}{rccc} \Theta^{i_1, \ldots, i_k}\ : & \mathbb{R}^{k-1}\times \mathbb{R}^{i_1-1}\times \cdots \times \mathbb{R}^{i_k-1} &\xrightarrow{\cong} &\mathbb{R}^{n-1} \end{array} \end{align*} which sends \[(x_1, \ldots, x_{k-1})\times (y_1^1, \ldots, y^1_{i_1-1})\times \cdots \times (y_1^k, \ldots, y^k_{i_k-1})\] to \[( y^1_1,\ldots, y^1_{i_1-1}, x_1, y^2_1, \ldots, y^2_{i_2-1}, x_2, y^3_1, \ldots, x_{k-1}, y^k_1, \ldots, y^k_{i_k-1} )\ .\] The vertices of $\mathrm{K}_{\overline{\omega}}\times \mathrm{J}_{\widetilde{\omega}_1}\times \cdots \times \mathrm{J}_{\widetilde{\omega}_k}$ are sent to the vertices of the facet of $\mathrm{J}_\omega$ labelled by the 2-colored tree $c^\mathrm{B}_k(c_{i_1}, \ldots, c_{i_k})$. In other words, the permutation of coordinates $\Theta^{i_1, \ldots, i_k}$ sends $\mathrm{K}_{\overline{\omega}}\times \mathrm{J}_{\widetilde{\omega}_1}\times \cdots \times \mathrm{J}_{\widetilde{\omega}_k}$ bijectively onto this facet of $\mathrm{J}_\omega$. We can finally conclude the proof with these decompositions of facets of $\mathrm{J}_\omega$, the induction hypothesis, and Point~(5) of \cite[Proposition~1]{MTTV19}. \end{proof} \subsection{Ardila--Doker realizations of the multiplihedra} \label{sec:generalizedpermutahedra} \begin{definition}[Permutahedron] The \emph{$(n-1)$-dimensional permutahedron} is the polytope in $\mathbb{R}^n$ equivalently defined as: \begin{itemize}[leftmargin=*] \item the convex hull of the points $\displaystyle \sum_{i=1}^{n}i e_{\sigma(i)}$ for all permutations $\sigma \in \mathbb{S}_n$, or \item the intersection of the hyperplane $\displaystyle \left\{x \in \mathbb{R}^n \ \bigg| \ \sum_{i=1}^{n} x_i = \binom{n+1}{2}\right\}$ with the affine half-spaces \\ $\displaystyle \left\{x \in \mathbb{R}^n \ \bigg| \ \sum_{i \in I} x_i \geq \binom{|I|+1}{2}\right\}$ for all $\emptyset\neq I \subseteq [n]$.
\end{itemize} \end{definition} For a face $F$ of a polytope $P\subset\mathbb{R}^n$, the \emph{normal cone} of $F$ is the cone \[\mathcal{N}_P(F)\coloneqq \left\{ c \in (\mathbb{R}^n)^{*} \ \bigg | \ F \subseteq \{ x \in P \ | \ c x =\max_{y \in P} c y \}\right\} \ . \] The codimension of $\mathcal{N}_P(F)$ is equal to the dimension of $F$. The \emph{normal fan} of $P$ is the collection of the normal cones $\mathcal{N}_P \coloneqq \{\mathcal{N}_P(F) \ | \ F \in \mathcal{L}(P)\setminus\{\emptyset\} \}$. We refer to \cite[Chapter 7]{Ziegler95} for more details. \begin{definition}[Generalized permutahedron] A \emph{generalized permutahedron} is a polytope equivalently defined as: \begin{itemize}[leftmargin=*] \item a polytope whose normal fan coarsens the one of the permutahedron, or \item the convex set \[ \left\{ x \in \mathbb{R}^n \ : \ \sum_{i=1}^{n}x_i = z_{[n]} \ , \sum_{i \in I} x_i \geq z_I \text{ for all } I \subseteq [n] \right\} \ , \] where $\{ z_I \}_{I \subseteq [n]}$ are real numbers which satisfy the inequalities $z_I+z_J \leq z_{I\cup J} + z_{I \cap J}$ for all $I,J \subseteq [n]$, and where $z_\emptyset =0$. \end{itemize} \end{definition} Generalized permutahedra were introduced by A. Postnikov in \cite{Postnikov09}. Loday realizations of the associahedra are all generalized permutahedra (see \cite[Corollary 8.2]{Postnikov09}), while Forcey--Loday realizations of the multiplihedra are not. However, F. Ardila and J. Doker introduced in \cite{AD13} realizations of the multiplihedra that are generalized permutahedra. They are obtained from the Loday realizations of the associahedra via the operation of \emph{$q$-lifting}. We will consider the special case $q=1/2$ of their construction. \begin{definition}[Lifting of a generalized permutahedron {\cite[Definition 2.3]{AD13}}] For a generalized permutahedron $P\subset \mathbb{R}^n$, its \emph{$\tfrac{1}{2}$-lifting} $P \left(\tfrac{1}{2}\right) \subset \mathbb{R}^{n+1}$ is defined by \[P \left(\tfrac{1}{2}\right) \coloneqq \left\{ x \in \mathbb{R}^{n+1} \ : \ \sum_{i=1}^{n+1} x_i = z_{[n]} \ , \sum_{i \in I} x_i \geq \tfrac{1}{2}z_I \ , \sum_{i \in I \cup \{n+1\}} x_i \geq z_I \text{ for all } I \subseteq [n] \right\} \ . \] \end{definition} \begin{proposition}[{\cite[Proposition 2.4]{AD13}}] The $\tfrac{1}{2}$-lifting $P \left(\tfrac{1}{2}\right)$ of a generalized permutahedron is again a generalized permutahedron. \end{proposition} \begin{proposition} The $\tfrac{1}{2}$-lifting $\mathrm{K}_\omega\left(\tfrac{1}{2}\right)$ of the Loday realization of weight $\omega$ of the associahedron is a realization of the multiplihedron. \end{proposition} \begin{proof} This is a particular case of \cite[Corollary 4.10]{AD13}. \end{proof} We call the lifting of the Loday associahedron $\mathrm{K}_\omega\left(\tfrac{1}{2}\right)$ the \emph{Ardila--Doker realization} of the multiplihedron. It is related to the Forcey--Loday realization via the projection $\pi: \mathbb{R}^{n+1} \to \mathbb{R}^n$ which forgets the last coordinate. \begin{proposition} \label{prop:lifting} The Forcey--Loday realization of the multiplihedron is the image under the projection $\pi$ of the $\tfrac{1}{2}$-lifting of the Loday realization of the associahedron, scaled by $2$. That is, we have \[ \mathrm{J}_\omega = \pi \left(2 \mathrm{K}_\omega\left(\tfrac{1}{2}\right)\right) \ .
\] \end{proposition} \begin{proof} This follows from the vertex description of $\tfrac{1}{2}$-lifting given in \cite[Definition 3.5.3]{Doker11}, together with the description of the projection from the permutahedron to the multiplihedron given in the proof of \cite[Theorem 3.3.6]{Doker11}. The coordinates of a vertex in $2 \mathrm{K}_\omega$ are of the form $(2\alpha_1\beta_1, \ldots, 2\alpha_n\beta_n)$. A coordinate $2\alpha_i\beta_i$ is then multiplied by $1/2$ in the lifting if and only if its associated vertex in the 2-colored maximal tree is of the upper color. We thus recover the description of \cref{def:ForceyLoday}. \end{proof} In summary, we have the following diagram: \medskip \begin{equation*} \begin{matrix} $ \small \text{Loday}$ & & $ \small \text{Ardila--Doker}$ & & $ \small \text{Forcey--Loday}$ \\ $ \small \text{associahedron}$ & & $ \small \text{multiplihedron}$ & & $ \small \text{multiplihedron}$ \\ & & & & \\ \mathrm{K}_\omega & \hookrightarrow & \mathrm{K}_\omega \left(\tfrac{1}{2}\right) & \overset{\pi ( 2 \cdot ) }{\twoheadrightarrow} & \mathrm{J}_\omega \\ & & & & \\ \mathbb{R}^n & \hookrightarrow & \mathbb{R}^{n+1} & \twoheadrightarrow & \mathbb{R}^n \\ & & & & \\ $ \small \text{Gen. permutahedron}$ & & $ \small \text{Gen. permutahedron}$ & & $ \small \textit{Not}\text{ a gen. permutahedron}$ \end{matrix} \end{equation*} \section{Diagonal of the multiplihedra} \label{sec:II} In this section, we define a cellular approximation of the diagonal of the Forcey--Loday realizations of the multiplihedra, and we endow them with an operadic bimodule structure over the Loday realizations of the associahedra in the category $\mathsf{Poly}$. We use the methods of \cite{MTTV19} and the general theory developed in \cite{LA21}. Our construction of the cellular approximation relies crucially on the fact that the Forcey--Loday multiplihedra, are obtained from the Ardila--Doker multiplihedra by projection (\cref{prop:lifting}). \subsection{The monoidal category $\mathsf{Poly}$} Let us recall the definition of the symmetric monoidal category $(\mathsf{Poly}, \times)$ from \cite[Section~2.1]{MTTV19}. \begin{description} \item[{\sc Objects}] An object of $\mathsf{Poly}$ is a $d$-dimensional polytope $P$ in the $n$-dimensional Euclidian space $\mathbb{R}^n$, for any $0\leq d\leq n$. \item[{\sc Morphisms}] A morphism in $\mathsf{Poly}$ is a continuous map $f: P\to Q$ which sends $P$ homeomorphically to the underlying set $|\mathcal{D}|$ of a polytopal subcomplex $\mathcal{D}\subset~\mathcal{L}(Q)$ of $Q$ such that $f^{-1}(\mathcal D)$ defines a polytopal subdivision of $P$. \end{description} We will use the notion of \textit{operad}, \textit{operadic bimodule} and \textit{Hadamard product} of operads and operadic bimodules in the rest of this paper. For the sake of concision, we refer respectively to \cite[Section 1.1.1]{mazuir-I}, \cite[Section 1.1.3]{mazuir-I} and \cite[Section 5.1.12]{LodayVallette12} for a complete definition of these notions. An operad will in particular be a non-symmetric operad in the language of \cite[Section 5.2.8]{LodayVallette12}. The fact that the category $\mathsf{Poly}$ is monoidal will moreover allow us to define operads and operadic bimodules in polytopes. \subsection{Positively oriented polytopes and diagonal maps} For a polytope $P$, we will denote by $\rho_z P \coloneqq 2z-P$ its reflection with respect to a point $z \in P$. 
\begin{definition} A \emph{positively oriented polytope} $(P, \vec v)$ is a polytope $P \subset \mathbb{R}^n$ together with a vector $\vec v\in \mathbb{R}^n$ which is not perpendicular to any edge of $P\cap \rho_z P$, for any $z \in P$. \end{definition} Any positively oriented polytope admits a diagonal map of the form \begin{align*} \begin{array}{rlcl} \triangle_{(P,\vec v)}\ : & P &\to &P \times P\\ &z & \mapsto& \bigl(\bm_{\vec v}(P\cap \rho_zP),\, \tp_{\vec v}(P\cap \rho_z P)\bigr) \ . \end{array} \end{align*} Such a diagonal map is a morphism in $\mathsf{Poly}$, coincides with the usual thin diagonal $x\mapsto (x, x)$ on vertices, and is fiber-homotopic to it, see \cite[Proposition~5]{MTTV19} and \cite[Proposition 1.1]{LA21}. Its cellular image admits a combinatorial description in terms of the fundamental hyperplane arrangement of $P$, as we will now recall. \begin{definition}[Fundamental hyperplane arrangement] \label{def:fundamentalhyperplane} An \emph{edge hyperplane} of $P$ is a hyperplane in $\mathbb{R}^n$ which is orthogonal to the direction of an edge of $P\cap\rho_z P$ for some $z \in P$. The \emph{fundamental hyperplane arrangement} $\mathcal{H}_P$ of $P$ is the collection of all edge hyperplanes of $P$. \end{definition} Recall that a face $F$ of a polytope $P \subset \mathbb{R}^n$ is equal to the intersection of a family of facets $\{F_i\}$. If we choose an outward pointing normal vector $\vec F_i$ for each facet $F_i$ (see \cite[Definition 1.24]{LA21}) and a basis $\{b_k\}$ of the orthogonal complement of the affine hull of $P$ in $\mathbb{R}^n$, then the normal cone of $F$ is given by $\mathcal{N}_P(F)=\cone(\{\vec F_i\} \cup \{b_k,-b_k\})$. \begin{proposition}[{\cite[Theorem 1.23]{LA21}}] \label{thm:universalformula} Let $(P,\vec v)$ be a positively oriented polytope in $\mathbb{R}^n$. For each $H\in\mathcal{H}_P$, we choose a normal vector $\vec d_H$ such that $\langle \vec d_H, \vec v \rangle >0$. We have \begin{eqnarray*} (F,G) \in \Ima \triangle_{(P,\vec v)} &\iff& \forall H \in \mathcal{H}_P , \ \exists i , \ \langle \vec F_i, \vec d_H \rangle < 0 \text{ or } \exists j , \ \langle \vec G_j, \vec d_H \rangle > 0 \ .
\end{eqnarray*} \end{proposition} We finally recall general facts from \cite[Section 1.6]{LA21}. \begin{definition}[Coarsening projection] \label{def:coarseningprojection} Let $P$ and $Q$ be two polytopes in $\mathbb{R}^n$ such that the normal fan of $P$ refines the normal fan of $Q$. The \emph{coarsening projection} from $P$ to $Q$ is the application $\theta : \mathcal{L}(P)\to\mathcal{L}(Q)$ which sends a face $F$ of $P$ to the face $\theta(F)$ of $Q$ whose normal cone $\mathcal{N}_Q(\theta(F))$ is the minimal cone with respect to inclusion which contains $\mathcal{N}_P(F)$. \end{definition} \begin{proposition} \label{prop:refinementofnormalfans} Let $P$ and $Q$ be two polytopes such that the normal fan of $P$ refines the one of $Q$. If $P$ is positively oriented by $\vec v$, then so is $Q$. Moreover, the coarsening projection from $P$ to $Q$ commutes with the diagonal maps $\triangle_{(P,\vec v)}$ and $\triangle_{(Q,\vec v)}$, and we have \begin{eqnarray*} (F,G) \in \Ima \triangle_{(Q,\vec v)} &\iff& \forall H \in \mathcal{H}_P , \ \exists i , \ \langle \vec F_i, \vec d_H \rangle < 0 \text{ or } \exists j , \ \langle \vec G_j, \vec d_H \rangle > 0 \ . \end{eqnarray*} \end{proposition} We will apply \cref{prop:refinementofnormalfans} to $P$ the permutahedron and $Q$ the Ardila--Doker multiplihedron, in order to define a diagonal map on the Forcey--Loday multiplihedron and to compute an explicit formula for its cellular image in \cref{thm:formuladiagonal}. \subsection{Good orientation vectors and generalized permutahedra} The projection $\pi : \mathbb{R}^{n+1} \to \mathbb{R}^n$ forgetting the last coordinate defines an affine isomorphism between any hyperplane $H$ of equation $\sum_{i=1}^{n+1} x_i = c \in \mathbb{R}$, and $\mathbb{R}^n$. The inverse map $(\pi_{| H})^{-1}$ is given by the assignment \[ (x_1, \ldots, x_n) \mapsto \left(x_1, \ldots, x_n, c- \sum_{i=1}^{n}x_i\right) \ . \] If a polytope $P$ is contained in the hyperplane $H$, then the polytope $\pi(P)$ is affinely isomorphic to $P$, and the projection $\pi$ defines a bijection between the faces of $P$ and the faces of $\pi(P)$. Moreover, for every face $F$ of $P$, we have $\dim F = \dim \pi(F)$. However, the projection $\pi$ does not preserve orthogonality in general, so if $P$ is positively oriented by $\vec v$, the projection $\pi(P)$ might not be positively oriented by $\pi(\vec v)$. We restrict our attention to a certain class of orientation vectors for which this property holds, in the case where $P$ is a generalized permutahedron. \begin{definition} \label{def:goodvector} A \emph{good orientation vector} is a vector $\vec v=(v_1, \ldots, v_{n+1})\in \mathbb{R}^{n+1}$ satisfying \[v_{i}\geq2v_{i+1}\ , \ \text{for any}\ 1\leq i\leq n\ , \quad \text{and}\quad v_{n+1}>0 \ . \] \end{definition} Observe that the family of good orientation vectors is stable under the projection forgetting the last coordinate: if $\vec v$ is a good orientation vector, then so is $\pi(\vec v)$. Being a good orientation vector is a more restrictive condition than being a principal orientation vector in the sense of \cite[Definition 3.15]{LA21}. Thus, a good orientation vector orients positively any generalized permutahedron. \begin{proposition} \label{prop:goodprojection} Let $P \subset \mathbb{R}^{n+1}$ be a generalized permutahedron, and let $\vec v \in \mathbb{R}^{n+1}$ be a good orientation vector. Then, the polytope $\pi(P)$ is positively oriented by $\pi(\vec v)$. Moreover, the projection $\pi$ commutes with the diagonal maps of $P$ and $\pi(P)$, that is $\triangle_{(\pi(P),\pi(\vec v))}=(\pi \times \pi)\triangle_{(P,\vec v)}$. \end{proposition} \begin{proof} Since $P$ is a generalized permutahedron, the directions of the edges of the intersection $P\cap\rho_z P$, for any $z \in P$, are vectors with coordinates equal to $0$, $1$ or $-1$, with the same number of $1$'s as $-1$'s (combine Proposition 1.27 and Proposition 3.4 of \cite{LA21}). The direction $\vec d$ of such an edge satisfies $\langle \vec d, \vec v \rangle \neq 0$, since the first non-zero coordinate of $\vec d$ contributes a greater amount than the sum of the remaining coordinates in the scalar product: the condition $v_i\geq 2v_{i+1}$ of \cref{def:goodvector} implies $v_i > v_{i+1}+\cdots+v_{n+1}$. For the same reason, we have $\langle \pi(\vec d), \pi(\vec v) \rangle \neq 0$. As $\pi(P\cap\rho_z P)=\pi(P)\cap\rho_{\pi(z)}\pi(P)$, we have in particular that the images of the edges of $P\cap\rho_z P$ under $\pi$ are the edges of $\pi(P)\cap\rho_{\pi(z)}\pi(P)$, and thus that $\pi(P)$ is positively oriented by $\pi(\vec v)$. For the last part of the statement, observe that $\pi$ preserves the orientation of the edges: if we have $\langle \vec d, \vec v \rangle >0$, then we have $\langle \pi(\vec d), \pi(\vec v) \rangle > 0$.
Hence, the image under $\pi$ of the vertex $\tp_{\vec v}(P\cap\rho_z P)$, which maximizes $\langle - ,\vec v \rangle$ over $P\cap\rho_z P$, is equal to the vertex $\tp_{\pi(\vec v)}(\pi(P)\cap\rho_{\pi(z)} \pi(P))$ which maximizes $\langle - ,\pi(\vec v) \rangle$ over $\pi(P)\cap\rho_{\pi(z)} \pi(P)$. The argument for the minimum $\bm_{\vec v}(P\cap\rho_z P)$ is the same. \end{proof} \begin{proposition} Let $P\subset\mathbb{R}^{n+1}$ be a generalized permutahedron. Any two good orientation vectors $\vec v, \vec w$ define the same diagonal maps on $P$ and $\pi(P)$, that is, we have $\triangle_{(P,\vec v)}=\triangle_{(P,\vec w)}$ and $\triangle_{(\pi(P),\pi(\vec v))}=\triangle_{(\pi(P),\pi(\vec w))}$. \end{proposition} \begin{proof} Good orientation vectors are principal orientation vectors \cite[Definition 3.15]{LA21}. Since all principal orientation vectors live in the same chamber of the fundamental hyperplane arrangement of the permutahedron, they all define the same diagonal on the permutahedron \cite[Proposition 1.21]{LA21}, and thus the same diagonal on any generalized permutahedron (\cref{prop:refinementofnormalfans}). So, we have $\triangle_{(P,\vec v)}=\triangle_{(P,\vec w)}$. Finally, using \cref{prop:goodprojection}, we have $\triangle_{(\pi(P),\pi(\vec v))}=(\pi \times \pi)\triangle_{(P,\vec v)}=(\pi \times \pi)\triangle_{(P,\vec w)}=\triangle_{(\pi(P),\pi(\vec w))}$. \end{proof} \subsection{Diagonal of the Forcey--Loday multiplihedra} \label{sec:diagonal} \begin{definition} A \emph{well-oriented realization of the multiplihedron} is a positively oriented polytope which realizes the multiplihedron and such that the orientation vector induces the Tamari-type order on the set of vertices. \end{definition} \begin{proposition} \label{prop:OrientationVector} Any good orientation vector induces a well-oriented realization $\left( \mathrm{J}_\omega, \vec v \right)$ of the Forcey--Loday multiplihedron, for any weight $\omega$. \end{proposition} \begin{proof} Using \cref{def:ForceyLoday}, we can compute that any edge of the realization of the multiplihedron $\mathrm{J}_\omega$ is directed, according to the Tamari-type order, by either $ e_i$ or $ e_i- e_j$, for $i<j$. Since $\vec v$ has strictly decreasing coordinates, the scalar product is in each case positive. It remains to show that $\vec v$ is not perpendicular to any edge of $\mathrm{J}_\omega\cap\rho_z \mathrm{J}_\omega$, for any $z \in \mathrm{J}_\omega$. This follows directly from \cref{prop:goodprojection}, and the fact that $\mathrm{J}_\omega$ arises as the projection under $\pi$ of a generalized permutahedron as shown in \cref{prop:lifting}. \end{proof} Any good orientation vector therefore defines a diagonal map $\triangle_\omega : \mathrm{J}_\omega\to \mathrm{J}_\omega \times \mathrm{J}_\omega$, for any weight $\omega$. These diagonal maps are all equivalent up to isomorphism in the category $\mathsf{Poly}$. \begin{proposition} \label{prop:transitionmap} For any pair of weights $\omega$ and $\theta$ of length $n$, there exists a unique isomorphism $\mathrm{tr}=\mathrm{tr}_\omega^\theta : \mathrm{J}_\omega \to \mathrm{J}_\theta$ in the category $\mathsf{Poly}$, which preserves homeomorphically the faces of the same type and which commutes with the respective diagonals. \end{proposition} \begin{proof} The arguments of \cite[Sections~3.1--3.2]{MTTV19} hold in the present case using \cref{prop:PropertiesKLoday}.
We note that the crucial condition above is that the map $\mathrm{tr}$ commutes with the respective diagonals: this makes the map $\mathrm{tr}$ unique and highly non-trivial to construct, see the proof of \cite[Proposition 7]{MTTV19}. \end{proof} \begin{definition} \label{def:diagonal-multipl-forcey-loday} We define $\triangle_n : \mathrm{J}_n \to \mathrm{J}_n\times \mathrm{J}_n$ to be the diagonal induced by any good orientation vector for the Forcey--Loday realization of standard weight $\omega=(1, \ldots, 1)$. \end{definition} \subsection{Operadic bimodule structure on the Forcey--Loday multiplihedra} We will use the transition maps $\mathrm{tr}$ of \cref{prop:transitionmap} above to endow the family of standard weight Forcey--Loday multiplihedra with an operadic bimodule structure over the standard weight Loday associahedra. The uniqueness property of the map $\mathrm{tr}$ will be used in a crucial way. \begin{definition}[Action-composition maps] \label{def:action-composition} For any $p+q+r=n$ with $p,r\geq 0$ and $2\leq q\leq n$, and for any $k\geq 2$ and any $i_1,\ldots,i_k \geq 1$ with $i_1+\cdots+i_k=n$, we define the \emph{action-composition maps} by \[ \vcenter{\hbox{ \begin{tikzcd}[column sep=1cm] \circ_{p+1}\ : \ \mathrm{J}_{p+1+r}\times \mathrm{K}_q \arrow[rr, "\mathrm{tr}\times \mathrm{id}"] & & \mathrm{J}_{(1,\ldots,q,\ldots,1)}\times \mathrm{K}_q \arrow[rr,hookrightarrow, "\Theta_{p,q,r}"] & & \mathrm{J}_{n}\ \ \text{and} \end{tikzcd} }} \] \[ \vcenter{\hbox{ \begin{tikzcd}[column sep=1cm] \gamma_{i_1,\ldots,i_k}\ : \ \mathrm{K}_{k}\times \mathrm{J}_{i_1} \times \cdots \times \mathrm{J}_{i_k} \arrow[rr, "\mathrm{tr}\times \mathrm{id}"] & & \mathrm{K}_{\sqrt{2}(i_1,\ldots,i_k)} \times \mathrm{J}_{i_1} \times \cdots \times \mathrm{J}_{i_k} \arrow[rr,hookrightarrow, "\Theta^{i_1, \ldots , i_k}"] & & \mathrm{J}_{i_1+\cdots + i_k}\ , \end{tikzcd} }} \] where the last inclusions are given by the block permutations of the coordinates introduced in the proof of \cref{prop:PropertiesKLoday}. \end{definition} Recall from \cite[Theorem 1]{MTTV19} that the diagonal maps $\triangle_n : \mathrm{K}_n \to \mathrm{K}_n \times \mathrm{K}_n$ define a morphism of operads, where the operad $\{ \mathrm{K}_n \times \mathrm{K}_n \}$ is to be understood as the Hadamard product $\{ \mathrm{K}_n \} \times \{ \mathrm{K}_n \}$. The next proposition shows that the diagonal maps $\triangle_n : \mathrm{K}_n \to \mathrm{K}_n \times \mathrm{K}_n$ and $\triangle_n : \mathrm{J}_n \to \mathrm{J}_n \times \mathrm{J}_n$ are compatible with the action-composition maps introduced in \cref{def:action-composition}. \begin{proposition} \label{prop:thetacommutes} The diagonal maps $\triangle_n$ commute with the maps $\Theta$. \end{proposition} \begin{proof} First observe that a good orientation vector has strictly decreasing coordinates, and therefore induces the diagonal maps $\triangle_n : \mathrm{K}_n \to \mathrm{K}_n \times \mathrm{K}_n$ and the operad structure on $\{\mathrm{K}_n\}$ defined in \cite{MTTV19}. Following \cite[Proposition 4.14]{LA21}, to prove the claim it suffices to show that the image under $\Theta^{-1}$ of a good orientation vector restricts to a good orientation vector on each factor, that is, for each associahedron and multiplihedron. This is easily seen to be the case from the definition of $\Theta$ in the proof of \cref{prop:PropertiesKLoday}.
\end{proof} \begin{samepage} \begin{theorem}\label{thm:MainOperad}\leavevmode \begin{enumerate}[leftmargin=*] \item The collection $\{\mathrm{J}_n\}_{n\geq 1}$ together with the action-composition maps $\circ_i$ and $\gamma_{i_1,\ldots,i_k}$ forms an operadic bimodule over the operad $\{\mathrm{K}_n\}$ in the category $\mathsf{Poly}$. \item The maps $\{\triangle_n : \mathrm{J}_n \to \mathrm{J}_n\times \mathrm{J}_n\}_{n\geq 1}$ form a morphism of $(\{\mathrm{K}_n\},\{\mathrm{K}_n\})$-operadic bimodules in the category $\mathsf{Poly}$. \end{enumerate} \end{theorem} \end{samepage} \begin{proof} Using \cref{prop:thetacommutes}, we can apply the proof of \cite[Theorem~1]{MTTV19} \emph{mutatis mutandis}. The uniqueness of the transition map $\mathrm{tr}$ is the key argument, as it forces the operadic axioms to hold. We also point out that $\{ \mathrm{J}_n\times \mathrm{J}_n \}$ is to be understood as the Hadamard product $\{ \mathrm{J}_n \} \times \{ \mathrm{J}_n \}$, and that its $(\{\mathrm{K}_n\},\{\mathrm{K}_n\})$-operadic bimodule structure is defined as the pullback of its natural $(\{\mathrm{K}_n \times \mathrm{K}_n\},\{\mathrm{K}_n \times \mathrm{K}_n\})$-operadic bimodule structure under the diagonal maps $\{ \triangle_n : \mathrm{K}_n \to \mathrm{K}_n \times \mathrm{K}_n \}$. \end{proof} Point (1) of \cref{thm:MainOperad} was already mentioned in \cite[Section 1.2]{mazuir-I}, where associahedra and multiplihedra are realized as compactifications of moduli spaces of metric trees and used to construct $\ensuremath{\mathrm{A}_\infty}$-structures on the Morse cochains of a closed manifold. \section{Cellular formula for the diagonal of the multiplihedra} \label{sec:III} We compute in \cref{thm:formuladiagonal} an explicit cellular formula for the diagonal of the Forcey--Loday multiplihedra, using again the key fact that the Ardila--Doker multiplihedron is a generalized permutahedron to which one can apply \cref{prop:refinementofnormalfans} and the results of \cite{LA21}. We then explain geometrically why this formula necessarily differs from the ``magical formula'' computed for the associahedra in \cite{MTTV19}. \subsection{2-colored nested linear graphs} \label{ss:2-col} Let $\ell$ be a \emph{linear graph} with $n$ vertices, as represented in \cref{fig:bijections}. We respectively write $V(\ell)$ and $E(\ell)$ for its sets of vertices and edges. Any subset of edges $N\subset E(\ell)$ defines a subgraph of $\ell$ whose edges are $N$ and whose vertices are all the vertices adjacent to an edge in $N$. We call this graph the \emph{closure} of~$N$. \begin{definition}[Nest and nesting] \leavevmode \begin{itemize}[leftmargin=*] \item A \emph{nest} of a linear graph $\ell$ with $n$ vertices is a non-empty set of edges $N \subset E(\ell)$ whose closure is a connected subgraph of $\ell$. \item A \emph{nesting} of a linear graph $\ell$ is a set $\mathcal{N}=\{N_i\}_{i\in I}$ of nests such that \begin{enumerate}[leftmargin=*] \item the \emph{trivial nest} $E(\ell)$ is in $\mathcal{N}$, \item for every pair of nests $N_i\neq N_j$, we have either $N_i \subsetneq N_j$, $N_j \subsetneq N_i$ or $N_i \cap N_j = \emptyset$, and \item if $N_i \cap N_j = \emptyset$ then no edge of $N_i$ is adjacent to an edge of $N_j$. \end{enumerate} \end{itemize} \end{definition} Two nests that satisfy Conditions (2) and (3) are said to be \textit{compatible}. We denote the set of nestings of $\ell$ by $\mathcal{N}(\ell)$. We naturally represent a nesting by circling the closure of each nest as in \cref{fig:bijections}.
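For a linear graph, the closure of a set of edges is connected exactly when the set is an interval of consecutive edges, so the axioms above are easy to machine-check. The following Python sketch is again a hypothetical illustration of ours, with the edges of $\ell$ numbered $1, \ldots, n-1$ along the graph (the convention adopted below).
\begin{verbatim}
from itertools import combinations

def is_nest(N):
    # a non-empty set of edges whose closure is connected,
    # i.e. an interval of consecutive edges of the linear graph
    return bool(N) and set(N) == set(range(min(N), max(N) + 1))

def compatible(N1, N2):
    # Conditions (2) and (3): nested, or disjoint and non-adjacent
    if N1 <= N2 or N2 <= N1:
        return True
    if N1 & N2:
        return False
    return min(abs(a - b) for a in N1 for b in N2) > 1

def is_nesting(nests, n):
    # checks the axioms of a nesting of the linear graph with n vertices
    nests = [frozenset(N) for N in nests]
    return (frozenset(range(1, n)) in nests        # the trivial nest
            and all(is_nest(N) for N in nests)
            and all(compatible(N1, N2)
                    for N1, N2 in combinations(nests, 2)))

print(is_nesting([{1, 2, 3}, {2, 3}], 4))   # True
print(is_nesting([{1, 2, 3}, {1, 3}], 4))   # False: {1,3} is not a nest
\end{verbatim}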
A nesting is moreover \emph{maximal} if it has maximal cardinality $|\mathcal{N}|=|E(\ell)|$. \begin{definition}[2-colored nesting] A \emph{2-colored nesting} is a nesting where each nest is either colored in blue, red or both red and blue (that is, purple), and which satisfies the following properties: \begin{enumerate}[leftmargin=*] \item if a nest $N$ is blue or purple, then all nests contained in $N$ are blue, and \item if a nest $N$ is red or purple, then all nests that contain $N$ are red. \end{enumerate} \end{definition} We call \emph{monochrome} the nests that are either blue or red, and \emph{bicolored} the purple nests. We denote by $\ensuremath{\mathrm{mono}}(\mathcal{N})$ the set of monochrome nests of a 2-colored nesting $\mathcal{N}$, and by $\mathcal{N}_2(\ell)$ the set of 2-colored nestings of $\ell$. A 2-colored nesting is moreover \emph{maximal} if it has maximal cardinality and is made of monochrome nests only. \begin{remark} The data of a 2-colored nesting on a graph is equivalent to the data of a marked tubing on its line graph, as defined in \cite{DevadossForcey08}. See also \cite[Remark 2.4]{LA21}. \end{remark} \begin{lemma} \label{lemma:bijection} There is a bijection between (2-colored) trees with $n$ leaves and (2-colored) nested linear graphs with $n$ vertices. Under this map, (2-colored) maximal trees are in bijection with maximal (2-colored) nested linear graphs. \end{lemma} \noindent Under this bijection, vertices of 2-colored trees correspond to nests, and their colors agree under the previous conventions. \begin{figure}[h!] \resizebox{0.8\linewidth}{!}{ \begin{tikzpicture} \node (b1) at (-2,3) {}; \node (b2) at (-2,2) {}; \node (b3) at (-2,1) {}; \node (b4) at (-2,0) {}; \node (b5) at (-6,1.5) {}; \draw[MidnightBlue,thick] (b2)--(-3,1.5) node {}; \draw[MidnightBlue,thick] (b3)--(-3,1.5) node {}; \draw[MidnightBlue,thick] (-3,1.5)--(-3.8,2) node {}; \draw[Red!60,thick] (-5,1.5)--(-3.8,2) node {}; \draw[Red!60,thick] (-5,1.5)--(-3.8,0.9) node {}; \draw[Red!60,thick] (-5,1.5)--(-6,1.5) node {}; \draw[MidnightBlue,thick] (-2,0)--(-3.8,0.9) node {}; \draw[MidnightBlue,thick] (-2,3)--(-3.8,2) node {}; \draw[-] (-3.8,3)--(-3.8,0) node {}; \node (B) at (-1.25,1.5) {$\longleftrightarrow$}; \node (x1) [circle,draw=none,minimum size=4mm,inner sep=0.1mm] at (0.15,2.55) {}; \node (x2) [circle,draw=none,minimum size=4mm,inner sep=0.1mm] at (0.15,1.5) {}; \node (x3) [circle,draw=none,minimum size=4mm,inner sep=0.1mm] at (0.15,0.38) {}; \node (t4)[circle,draw=black,minimum size=4mm,inner sep=0.1mm] at (-0,0) {}; \node (t3)[circle,draw=black,minimum size=4mm,inner sep=0.1mm] at (0,1) {}; \node (t2) [circle,draw=black,minimum size=4mm,inner sep=0.1mm] at (0,2) {}; \node (t1) [circle,draw=black,minimum size=4mm,inner sep=0.1mm] at (0,3) {}; \draw[-] (t4)--(t3) node {}; \draw[-] (t3)--(t2) node {}; \draw[-] (t2)--(t1) node {}; \draw [MidnightBlue,rounded corners,thick] (-0.15,0.7) -- (-0.3,0.9) -- (-0.3,2.1) -- (-0.15,2.3) -- (.15,2.3) -- (.3,2.1) -- (.3,.9) -- (.15,.7) -- cycle; \draw [Purple!80,rounded corners,thick] (-0.15,0.6) -- (-0.4,0.8) -- (-0.4,3.1) -- (-0.15,3.3) -- (.15,3.3) -- (0.4,3.1) -- (.4,.8) -- (.15,.6) -- cycle; \draw [Red!60,rounded corners,thick] (-0.15,-0.35) -- (-0.5,-0.1) -- (-0.5,3.15) -- (-0.15,3.4) -- (.15,3.4) -- (0.5,3.15) -- (0.5,-0.1) -- (0.15,-0.35) -- cycle; \node (C) at (1.25,1.5) {$\longleftrightarrow$}; \node (D) at (3.35,1.5) {$\red{ \bullet \purple{\blue{\bullet \bullet} \bullet}}$}; \end{tikzpicture}} \caption{Bijections between 2-colored
trees, 2-colored nested linear graphs, and 2-colored parenthesizations.} \label{fig:bijections} \end{figure} \subsection{Cellular formula for the diagonal} \label{ss:cellular-formula} \begin{definition} Let $(\ell,\mathcal{N})$ be a nested linear graph. We respectively denote by $B(\mathcal{N})$, $P(\mathcal{N})$ and $R(\mathcal{N})$ the set of blue, purple and red nests of $\mathcal{N}$. We define $Q(\mathcal{N})$ to be the set whose elements are the unions of nests \[ \bigcup_{i=1}^k R_i \cup \bigcup_{B \in B(\mathcal{N})} B \cup \bigcup_{P \in P(\mathcal{N})} P \] where $R_1,\ldots,R_k \in R(\mathcal{N})$, the case $\cup R_i = \emptyset$ being allowed, and where two unions that result in the same set are identified. \end{definition} We number the edges of the linear graph with $n$ vertices from bottom to top as represented in \cref{fig:bijections}, starting at $1$ and ending at $n-1$. To each blue nest $B \in B(\mathcal{N})$ in a 2-colored nesting $\mathcal{N}$ of a linear graph with $n$ vertices, we associate the \emph{characteristic vector} $\vec B\in \mathbb{R}^n$ which has a $1$ in position $i$ if $i \in B$, $0$ in position $i$ if $i \notin B$ and 0 in position $n$. To each union of nests $Q \in Q(\mathcal{N})$, we associate the characteristic vector $\vec Q \in \mathbb{R}^n$ which has a $1$ in position $i$ if $i \in Q$, $0$ in position $i$ if $i \notin Q$ and 1 in position $n$. We denote moreover by $\vec n$ the vector $(1,\ldots,1) \in \mathbb{R}^n$. \begin{lemma} \label{lemma:normalcones} The normal cone of the face of the Ardila--Doker realization of the multiplihedron labeled by the 2-colored nesting $\mathcal{N}$ is given by \[\cone\left(\{-\vec B\}_{B \in B(\mathcal{N})} \cup \{-\vec Q\}_{Q \in Q(\mathcal{N})} \cup \{\vec n, - \vec n\} \right) \ . \] \end{lemma} \begin{proof} This follows from the description of the Ardila--Doker multiplihedron as a generalized permutahedron: the normal cone of a face of the multiplihedron is a union of normal cones of faces of the permutahedron, and these faces can be easily determined from the projection from the permutahedron to the multiplihedron, written down explicitly in the proof of \cite[Theorem 3.3.6]{Doker11}. \end{proof} We are now ready to compute the cellular formula for the diagonal of the Forcey--Loday multiplihedra. We introduce \[ D(n)\coloneqq \{(I,J) \ | \ I,J\subset\{1,\ldots,n\}, |I|=|J|, I\cap J=\emptyset, \min(I\cup J)\in I \}. \] We number again the edges of the linear graph with $n$ vertices from bottom to top, starting at $1$ and ending at $n-1$. Blue nests and unions of blue, purple and red nests can then in particular be seen as subsets of $\{1,\ldots,n-1\}$, hence of $\{1,\ldots,n\}$. \begin{samepage} \begin{theorem} \label{thm:formuladiagonal} The cellular image of the diagonal map $\triangle_n : \mathrm{J}_n \to \mathrm{J}_n \times \mathrm{J}_n$ introduced in \cref{def:diagonal-multipl-forcey-loday} admits the following description. For $\mathcal{N}$ and $\mathcal{N}'$ two 2-colored nestings of the linear graph with $n$ vertices, we have that \begin{eqnarray*} (\mathcal{N},\mathcal{N}') \in \Ima\triangle_n & \iff & \forall (I,J) \in D(n), \\ && \exists B \in B(\mathcal{N}), |B\cap I|>|B\cap J| \text{ or } \\ && \exists Q \in Q(\mathcal{N}), |(Q\cup \{n\}) \cap I|>| (Q\cup \{n\}) \cap J| \text{ or } \\ && \exists B' \in B(\mathcal{N}'), |B'\cap I|<|B'\cap J| \text{ or } \nonumber \\ && \exists Q' \in Q(\mathcal{N}'), |(Q'\cup \{n\}) \cap I|<| (Q'\cup \{n\}) \cap J| \ . 
\end{eqnarray*} \end{theorem} \end{samepage} \begin{proof} The essential ingredient is the computation of the fundamental hyperplane arrangement of the permutahedron, which was done in \cite[Section 3.1]{LA21}. The result follows in three steps: \begin{enumerate}[leftmargin=*] \item Since a good orientation vector $\vec v$ is also a principal orientation vector \cite[Definition 3.15]{LA21}, it orients positively the permutahedron. \item Using \cref{prop:refinementofnormalfans} and the description of the normal cones of the faces of the multiplihedron in \cref{lemma:normalcones}, we get the above formula for the Ardila--Doker realizations of the multiplihedra. \item \cref{prop:goodprojection} guarantees that this formula holds for the Forcey--Loday realizations, which completes the proof. \end{enumerate} \end{proof} We now make this formula explicit in dimensions 1, 2 and 3. We write 2-colored nestings of a linear graph with $n$ vertices as 2-colored parenthesizations of a word with $n$ symbols $\bullet$, which are easier to read and shorter to type, see \cref{fig:bijections}. We moreover only write pairs of faces $(F,G)$ such that $\dim F + \dim G = \dim P$. \begin{equation*} \begin{matrix} \triangle_2(\purple{\bullet \bullet}) & = & \blue{\bullet \bullet} \times \purple{\bullet \bullet} \cup \purple{\bullet \bullet} \times \red{\bullet \bullet} \end{matrix} \end{equation*} \[ \resizebox{\hsize}{!}{$\displaystyle{ \renewcommand*{\arraystretch}{1.5} \begin{matrix} \triangle_3(\purple{\bullet \bullet \bullet}) & = & \blue{\blue{\bullet \bullet} \bullet} \times \purple{\bullet \bullet \bullet} & \cup & \purple{\bullet \bullet \bullet} \times \red{\bullet \red{\bullet \bullet}} & \cup & \blue{\bullet \bullet \bullet} \times \purple{\bullet \blue{\bullet \bullet}} \\ & \cup & \blue{\bullet \bullet \bullet} \times \red{\bullet \purple{\bullet \bullet}} & \cup & \purple{\bullet \blue{\bullet \bullet}} \times \red{\bullet \purple{\bullet \bullet}} & \cup & \purple{\blue{\bullet \bullet} \bullet} \times \red{\purple{\bullet \bullet} \bullet} \\ & \cup & \purple{\blue{\bullet \bullet} \bullet} \times \red{\bullet \bullet \bullet} & \cup & \red{\purple{\bullet \bullet} \bullet} \times \red{\bullet \bullet \bullet} \end{matrix} }$} \] \[ \resizebox{\hsize}{!}{$\displaystyle{ \renewcommand*{\arraystretch}{1.5} \begin{matrix} & & & & \triangle_4(\purple{\bullet \bullet \bullet \bullet}) = \\ & & \blue{\blue{\blue{\bullet \bullet}\bullet}\bullet} \times \purple{\bullet \bullet \bullet \bullet} & \cup & \purple{\bullet \bullet \bullet \bullet} \times \red{\bullet \red{\bullet \red{\bullet \bullet}}} & \cup & \blue{\blue{\bullet \bullet \bullet}\bullet} \times \purple{\bullet \blue{\bullet \bullet}\bullet} \\ & \cup & \red{\purple{\bullet \bullet}\purple{\bullet \bullet}} \times \red{\bullet \bullet\red{\bullet \bullet}} & \cup & \blue{\blue{\bullet \bullet \bullet}\bullet} \times \purple{\bullet \blue{\bullet \bullet \bullet}} & \cup & \red{\purple{\bullet \bullet}\bullet \bullet} \times \red{\bullet \bullet\red{\bullet \bullet}} \\ & \cup & \blue{\bullet \blue{\bullet \bullet}\bullet} \times \purple{\bullet \blue{\bullet \bullet \bullet}} & \cup & \red{\purple{\bullet \bullet \bullet}\bullet} \times \red{\bullet \red{\bullet \bullet}\bullet} & \cup & \blue{\blue{\bullet \bullet}\bullet \bullet} \times \purple{\bullet \bullet \blue{\bullet \bullet}} \\ & \cup & \red{\purple{\bullet \bullet \bullet}\bullet} \times \red{\bullet \red{\bullet \bullet \bullet}} & \cup & \purple{\blue{\blue{\bullet
\bullet}\bullet}\bullet} \times \red{\purple{\bullet \bullet \bullet}\bullet} & \cup & \purple{\bullet \bullet\blue{\bullet \bullet}} \times \red{\bullet \red{\bullet \purple{\bullet \bullet}}} \\ & \cup & \purple{\blue{\bullet \bullet}\blue{\bullet \bullet}} \times \red{\purple{\bullet \bullet}\purple{\bullet \bullet}} & \cup & \purple{\bullet \blue{\bullet \bullet}\bullet} \times \red{\bullet \red{\purple{\bullet \bullet}\bullet}} & \cup & \blue{\blue{\bullet \bullet}\bullet \bullet} \times \red{\purple{\bullet \bullet}\purple{\bullet \bullet}} \\ & \cup & \purple{\bullet \blue{\bullet \bullet} \bullet} \times \red{\bullet \red{\bullet \bullet \bullet}} & \cup & \purple{\bullet \blue{\blue{\bullet \bullet}\bullet}} \times \red{\bullet \purple{\bullet \bullet\bullet}} & \cup & \purple{\blue{\bullet \bullet}\bullet \bullet} \times \red{\purple{\bullet \bullet}\red{\bullet \bullet}}\\ & \cup & \blue{\bullet \blue{\bullet \bullet} \bullet} \times \red{\bullet \purple{\bullet\bullet \bullet}} & \cup & \blue{\blue{\bullet \bullet \bullet}\bullet} \times \red{\bullet \purple{\bullet \bullet \bullet}} & \cup & \purple{\blue{\bullet \bullet}\bullet \bullet} \times \red{\bullet \bullet\red{\bullet \bullet}} \\ & \cup & \red{\purple{\blue{\bullet \bullet}\bullet}\bullet} \times \red{\purple{\bullet \bullet}\bullet \bullet} & \cup & \purple{\bullet \blue{\bullet \bullet \bullet}} \times \red{\bullet \purple{\bullet \blue{\bullet \bullet}}} & \cup & \purple{\blue{\blue{\bullet \bullet}\bullet}\bullet} \times \red{\purple{\bullet \bullet}\bullet \bullet} \\ & \cup & \purple{\bullet \blue{\bullet \bullet \bullet}} \times \red{\bullet \red{ \bullet \purple{\bullet \bullet}}} & \cup & \red{\bullet \purple{\bullet \bullet}\bullet} \times \red{\bullet \red{\bullet \bullet \bullet}} & \cup & \red{\red{\purple{\bullet \bullet}\bullet }\bullet} \times \red{\bullet \bullet \bullet \bullet} \\ & \cup & \blue{\bullet \bullet \bullet \bullet} \times \purple{\bullet \blue{\bullet\blue{\bullet \bullet}}} & \cup & \red{\purple{\blue{\bullet \bullet}\bullet}\bullet} \times \red{\bullet \bullet \bullet \bullet} & \cup & \blue{\bullet \bullet \bullet \bullet} \times \red{\bullet \purple{\bullet \blue{\bullet \bullet}}} \\ & \cup & \purple{\blue{\blue{\bullet \bullet}\bullet}\bullet} \times \red{\bullet \bullet \bullet \bullet} & \cup & \blue{\bullet \bullet \bullet \bullet} \times \red{\bullet\red{\bullet\purple{\bullet \bullet}}} & \cup & \red{\purple{\bullet \bullet}\blue{\bullet \bullet}} \times \red{\bullet \bullet \purple{\bullet \bullet}} \\ & \cup & \purple{\blue{\bullet \bullet \bullet}\bullet} \times \red{\purple{\bullet \blue{\bullet \bullet}}\bullet} & \cup & \purple{\blue{\bullet \bullet}\blue{\bullet \bullet}} \times \red{\bullet \bullet \purple{\bullet \bullet}} & \cup & \purple{\blue{\bullet \bullet \bullet}\bullet} \times \red{\bullet \red{\purple{\bullet \bullet}\bullet}} \\ & \cup & \purple{\blue{\bullet \bullet \bullet}\bullet} \times \red{\bullet \blue{\bullet \bullet}\bullet} & \cup & \blue{\blue{\bullet \bullet}\bullet \bullet} \times \red{\bullet \bullet\purple{\bullet \bullet}} & \cup & \purple{\blue{\bullet \bullet \bullet}\bullet} \times \red{\bullet \red{\bullet \bullet \bullet}} \\ & \cup & \red{\purple{\bullet \blue{\bullet \bullet}}\bullet} \times \red{\bullet \purple{\bullet \bullet}\bullet} & \cup & \red{\blue{\bullet \bullet \bullet}\bullet} \times \red{\bullet \purple{\bullet \bullet}\bullet} & \cup & \purple{\blue{\blue{\bullet \bullet}\bullet}\bullet} \times \red{\bullet 
\purple{\bullet \bullet}\bullet} \end{matrix} }$} \] We also compute in \cref{table:numerology} the number of faces of complementary dimensions and the number of pairs of vertices in the cellular image of the diagonal of the multiplihedra in dimensions $0$ to $6$. They are compared with the diagonals induced by the same orientation vector on the Loday associahedra and the permutahedra. The two sequences of numbers that we obtain had not previously appeared in \cite{OEIS}. \medskip \begin{figure}[h] \centerline{\begin{tabular}{c|c|rrrrrrr|l} \textbf{Pairs $(F,G) \in \Ima\triangle_{(P,\vec v)}$} & \textbf{Polytopes} & \textbf{0} & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{\cite{OEIS}} \\ \hline & \text{Associahedra} & 1 & 2 & 6 & 22 & 91 & 408 & 1938 & \OEIS{A000139} \\ $\dim F + \dim G = \dim P$ & \text{Multiplihedra} & 1 & 2 & 8 & 42 & 254 & 1678 & 11790 & to appear \\ & \text{Permutahedra} & 1 & 2 & 8 & 50 & 432 & 4802 & 65536 & \OEIS{A007334} \\ \hline & \text{Associahedra} & 1 & 3 & 13 & 68 & 399 & 2530 & 16965 & \OEIS{A000260} \\ $\dim F=\dim G =0$ & \text{Multiplihedra} & 1 & 3 & 17 & 122 & 992 & 8721 & 80920 & to appear \\ & \text{Permutahedra} & 1 & 3 & 17 & 149 & 1809 & 28399 & 550297 & \OEIS{A213507} \end{tabular}} \caption{Number of pairs of faces in the cellular image of the diagonal of the associahedra, multiplihedra and permutahedra of dimension $0\leq \dim P \leq 6$, induced by any good orientation vector.} \label{table:numerology} \end{figure} \subsection{About the cellular formula} \label{ss:about} Given a face $F$ of a positively oriented polytope $(P, \vec v)$, the orientation vector $\vec v$ defines a unique vertex $\tp F$ (resp. $\bm F$) which maximizes (resp. minimizes) the scalar product $\langle - , \vec v \rangle$ over $F$. By \cite[Proposition 1.15]{LA21}, any pair of faces $(F,G) \in \Ima \triangle_{(P,\vec v)}$ satisfies $\tp F \leq \bm G$. In the case of the simplices, the cubes and the associahedra, the converse also holds: the image of the diagonal is given by the ``magical formula'' \begin{align} \label{eq:magical-formula} (F,G) \in \Ima \triangle_n \iff \tp F\leq \bm G \ . \end{align} This formula, however, does not hold for the diagonal of the Forcey--Loday multiplihedra. \begin{proposition} \label{prop:pas-top-bot} The diagonal on the multiplihedron $\mathrm{J}_4$ is such that \[ \Ima \triangle_4 \subsetneq \{ (F,G) , \ \tp F\leq \bm G \} \
\] \end{proposition} \begin{proof} The pairs of faces $(F,G)$ that satisfy $\dim F + \dim G = 3$ and $\tp F\leq \bm G$ include the four pairs \begin{equation} \label{eq:quatre-paires-inclues} \begin{matrix} \purple{\blue{\bullet \bullet \bullet}\bullet} \times \red{\purple{\bullet\blue{\bullet\bullet}}\bullet} & \red{\blue{\bullet\bullet\bullet}\bullet} \times \red{\bullet\purple{\bullet\bullet}\bullet} \\ \purple{\blue{\bullet\bullet\bullet}\bullet} \times \red{\bullet\blue{\bullet\bullet}\bullet} & \red{\purple{\bullet\blue{\bullet\bullet}}\bullet} \times \red{\bullet\purple{\bullet\bullet}\bullet} \end{matrix} \end{equation} and the four pairs \begin{equation} \label{eq:quatre-paires-exclues} \begin{matrix} \purple{\blue{\bullet \bullet \bullet}\bullet} \times \red{\red{\bullet\purple{\bullet\bullet}}\bullet} & \blue{\blue{\bullet\bullet\bullet}\bullet} \times \red{\bullet\purple{\bullet\bullet}\bullet} \\ \purple{\blue{\bullet\bullet\bullet}\bullet} \times \red{\bullet\red{\bullet\bullet}\bullet} & \purple{\blue{\bullet\blue{\bullet\bullet}}\bullet} \times \red{\bullet\purple{\bullet\bullet}\bullet} \ . \end{matrix} \end{equation} While the image $\Ima \triangle_4$ contains the four pairs in (\ref{eq:quatre-paires-inclues}), it does \emph{not} include the four pairs in (\ref{eq:quatre-paires-exclues}), as can be checked directly from \cref{thm:formuladiagonal}. \end{proof} \begin{remark} We point out that Formula (\ref{eq:magical-formula}) also does not hold neither for the permutahedra nor the operahedra in general, as proven in \cite[Section 3.2]{LA21}. \end{remark} The diagonal $\triangle_n$ being a section of the projection $\pi : \mathrm{J}_n \times \mathrm{J}_n \to \mathrm{J}_n , (x,y) \mapsto (x+y)/2$ \cite[Proposition 1.1]{LA21}, one can in fact represent its cellular image by projecting it to $\mathrm{J}_n$: for each pair of faces $(F,G) \in \Ima \triangle_n$, one draws the polytope $(F+G)/2$ in $\mathrm{J}_n$. This defines a polytopal subdivision of $\mathrm{J}_n$. The polytopal subdivision of $\mathrm{J}_3$ can be found in \cite[Figure 3]{LA21}, while the polytopal subdivision of $\mathrm{J}_4$ is illustrated on the first page of this article. \cref{prop:pas-top-bot} can then be illustrated geometrically as follows. There are two distinct diagonals on $\mathrm{J}_4$ which agree with the Tamari-type order on the vertices. The first one, corresponding to the diagonal defined in this paper, is induced by the choice of any orientation vector $\vec v=(v_1,v_2,v_3,v_4)$ satisfying $v_1>v_2>v_3>v_4$ and $v_1 + v_4 > v_2+v_3$ (here we work with the Ardila--Doker realization of the multiplihedron). Changing the last condition to $v_1 + v_4 < v_2+v_3$ gives the second choice of diagonal, which is in fact exactly the diagonal of Saneblidze--Umble \cite[Section 5]{SaneblidzeUmble04}. These two diagonals on $\mathrm{J}_4$ then differ by four pairs of faces, as represented in~\cref{fig:four-pairs}: the first diagonal includes the pairs of~(\ref{eq:quatre-paires-inclues}), while the second diagonal includes the pairs of~(\ref{eq:quatre-paires-exclues}). Under the projection $\pi : \mathrm{J}_4 \times \mathrm{J}_4 \to \mathrm{J}_4, (x,y) \mapsto (x+y)/2$, these two families of faces induce two distinct polytopal subdivisions of the same "diamond" inside $\mathrm{J}_4$, represented in \cref{fig:diamonds}. We also refer to the last paragraph of \cref{ss:diagonals} for an algebraic counterpart of \cref{prop:pas-top-bot}. 
\begin{remark} The two previous families of orientation vectors correspond to two adjacent chambers in the fundamental hyperplane arrangement of the permutahedron \cite[Theorem 3.6]{LA21}, separated by the hyperplane $x_1+x_4=x_2+x_3$, pictured in blue in \cite[Figure 12]{LA21}. A way to relate the diagonal constructed in this article to the diagonal of \cite[Section 5]{SaneblidzeUmble04} would possibly be to find further choices of chambers in the fundamental hyperplane arrangements of the permutahedra (or the multiplihedra) in all dimensions $n \geq 4$ recovering the latter diagonal, see also \cite[Remark~3.18]{LA21}. \end{remark} \begin{figure}[h] \resizebox{0.7\linewidth}{!}{ \begin{tikzpicture}[scale=0.5, J4] \draw (1,2,3) node {$\bullet$}; \draw (6,4,2) node {$\bullet$}; \draw[thick, opacity=0.2] (1,4,1)--(1,2,3)--(2,1,3)--(3,1,2)--(3,2,1)--cycle; \draw[thick, opacity=0.2] (2,8,2)--(2,4,6)--(4,2,6)--(6,2,4)--(6,4,2)--cycle; \draw[thick, opacity=0.2] (4,1,6)--(6,1,4)--(6,1,2)--(6,2,1)--(6,4,1)--(2,8,1)--(1,8,1)--(1,8,2)--(1,4,6)--(1,2,6)--(2,1,6)--cycle; \draw[thick, opacity=0.2] (4,1,6)--(4,2,6); \draw[thick, opacity=0.2] (6,1,2)--(3,1,2); \draw[thick, opacity=0.2] (6,2,1)--(3,2,1); \draw[thick, opacity=0.2] (6,4,1)--(6,4,2); \draw[thick, opacity=0.2] (2,8,1)--(2,8,2)--(1,8,2); \draw[thick, opacity=0.2] (1,8,1)--(1,4,1); \draw[thick, opacity=0.2] (1,4,6)--(2,4,6); \draw[very thick, blue] (4,1,6)--(6,1,4); \draw[very thick, blue] (6,1,4)--(6,2,4); \draw[very thick, blue] (1,2,6)--(1,2,3); \draw[very thick, blue] (1,2,6)--(2,1,6); \draw[very thick, blue] (2,1,6)--(2,1,3); \draw[very thick, blue] (1,2,3)--(2,1,3); \draw[fill=blue, opacity=0.12] (1,2,3)--(2,1,3)--(2,1,6)--(1,2,6)--cycle; \end{tikzpicture} \begin{tikzpicture}[scale=0.5, J4] \draw[thick, opacity=0.2] (1,4,1)--(1,2,3)--(2,1,3)--(3,1,2)--(3,2,1)--cycle; \draw[thick, opacity=0.2] (2,8,2)--(2,4,6)--(4,2,6)--(6,2,4)--(6,4,2)--cycle; \draw[thick, opacity=0.2] (4,1,6)--(6,1,4)--(6,1,2)--(6,2,1)--(6,4,1)--(2,8,1)--(1,8,1)--(1,8,2)--(1,4,6)--(1,2,6)--(2,1,6)--cycle; \draw[thick, opacity=0.2] (4,1,6)--(4,2,6); \draw[thick, opacity=0.2] (6,1,2)--(3,1,2); \draw[thick, opacity=0.2] (6,2,1)--(3,2,1); \draw[thick, opacity=0.2] (6,4,1)--(6,4,2); \draw[thick, opacity=0.2] (2,8,1)--(2,8,2)--(1,8,2); \draw[thick, opacity=0.2] (1,8,1)--(1,4,1); \draw[thick, opacity=0.2] (1,4,6)--(2,4,6); \draw[thick, opacity=0.2] (2,1,6)--(2,1,3); \draw[thick, opacity=0.2] (1,2,3)--(2,1,3); \draw[very thick, blue] (4,1,6)--(6,1,4); \draw[very thick, blue] (6,1,4)--(6,2,4); \draw[very thick, blue] (4,1,6)--(4,2,6); \draw[very thick, blue] (4,2,6)--(6,2,4); \draw[fill=blue, opacity=0.12] (4,1,6)--(6,1,4)--(6,2,4)--(4,2,6)--cycle; \draw[very thick, blue] (1,2,6)--(1,2,3); \draw[very thick, blue] (1,2,6)--(2,1,6); \end{tikzpicture}} \[\] \resizebox{0.7\linewidth}{!}{ \begin{tikzpicture}[scale=0.5, J4] \draw[thick, opacity=0.2] (1,4,1)--(1,2,3)--(2,1,3)--(3,1,2)--(3,2,1)--cycle; \draw[thick, opacity=0.2] (2,8,2)--(2,4,6)--(4,2,6)--(6,2,4)--(6,4,2)--cycle; \draw[thick, opacity=0.2] (4,1,6)--(6,1,4)--(6,1,2)--(6,2,1)--(6,4,1)--(2,8,1)--(1,8,1)--(1,8,2)--(1,4,6)--(1,2,6)--(2,1,6)--cycle; \draw[thick, opacity=0.2] (4,1,6)--(4,2,6); \draw[thick, opacity=0.2] (6,1,2)--(3,1,2); \draw[thick, opacity=0.2] (6,2,1)--(3,2,1); \draw[thick, opacity=0.2] (6,4,1)--(6,4,2); \draw[thick, opacity=0.2] (2,8,1)--(2,8,2)--(1,8,2); \draw[thick, opacity=0.2] (1,8,1)--(1,4,1); \draw[thick, opacity=0.2] (1,4,6)--(2,4,6); \draw[thick, opacity=0.2] (6,1,4)--(6,2,4); \draw[very thick, red] 
(4,1,6)--(4,2,6); \draw[very thick, red] (4,2,6)--(6,2,4); \draw[very thick, red] (1,2,6)--(1,2,3); \draw[very thick, red] (1,2,6)--(2,1,6); \draw[very thick, red] (2,1,6)--(2,1,3); \draw[very thick, red] (1,2,3)--(2,1,3); \draw[fill=red, opacity=0.12] (1,2,3)--(2,1,3)--(2,1,6)--(1,2,6)--cycle; \end{tikzpicture} \begin{tikzpicture}[scale=0.5, J4] \draw[thick, opacity=0.2] (1,4,1)--(1,2,3)--(2,1,3)--(3,1,2)--(3,2,1)--cycle; \draw[thick, opacity=0.2] (2,8,2)--(2,4,6)--(4,2,6)--(6,2,4)--(6,4,2)--cycle; \draw[thick, opacity=0.2] (4,1,6)--(6,1,4)--(6,1,2)--(6,2,1)--(6,4,1)--(2,8,1)--(1,8,1)--(1,8,2)--(1,4,6)--(1,2,6)--(2,1,6)--cycle; \draw[thick, opacity=0.2] (4,1,6)--(4,2,6); \draw[thick, opacity=0.2] (6,1,2)--(3,1,2); \draw[thick, opacity=0.2] (6,2,1)--(3,2,1); \draw[thick, opacity=0.2] (6,4,1)--(6,4,2); \draw[thick, opacity=0.2] (2,8,1)--(2,8,2)--(1,8,2); \draw[thick, opacity=0.2] (1,8,1)--(1,4,1); \draw[thick, opacity=0.2] (1,4,6)--(2,4,6); \draw[thick, opacity=0.2] (2,1,6)--(2,1,3); \draw[thick, opacity=0.2] (1,2,3)--(2,1,3); \draw[thick, opacity=0.2] (1,2,6)--(1,2,3); \draw[thick, opacity=0.2] (1,2,6)--(2,1,6); \draw[very thick, red, -] (4,1,6)--(6,1,4); \draw[very thick, red, -] (6,1,4)--(6,2,4); \draw[very thick, red] (4,1,6)--(4,2,6); \draw[very thick, red, -] (4,2,6)--(6,2,4); \draw[fill=red, opacity=0.12] (4,1,6)--(6,1,4)--(6,2,4)--(4,2,6)--cycle; \draw[very thick, red, -] (2,1,3)--(1,2,3); \draw[very thick, red] (2,1,3)--(2,1,6); \end{tikzpicture}} \caption{The four pairs of~(\ref{eq:quatre-paires-inclues}) represented in blue on the two top copies of $\mathrm{J}_4$ and the four pairs of~(\ref{eq:quatre-paires-exclues}) represented in red on the two bottom copies of $\mathrm{J}_4$. The minimal (top right) and maximal (bottom left) vertices for the Tamari-type order are drawn in black, in the top left copy.} \label{fig:four-pairs} \end{figure} \begin{figure}[h!] \centering \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=0.6\linewidth]{paires-choisies.png} \end{subfigure} ~ \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=0.6\linewidth]{paires-exclues.png} \end{subfigure} \caption{The two distinct subdivisions of the same "diamond" in $\mathrm{J}_4$, respectively induced by the pairs of~(\ref{eq:quatre-paires-inclues}) and~(\ref{eq:quatre-paires-exclues}).} \label{fig:diamonds} \end{figure} \section{Tensor product of \ensuremath{\mathrm{A}_\infty} -morphisms and \ensuremath{\mathrm{A}_\infty} -functors} \label{sec:IV} We begin by proving that for a certain choice of cellular orientation, the cellular chains functor maps the Loday associahedra to the operad \ensuremath{\mathrm{A}_\infty}\ encoding \ensuremath{\mathrm{A}_\infty} -algebras and the Forcey--Loday multiplihedra to the operadic bimodule \Minf\ encoding \ensuremath{\mathrm{A}_\infty} -morphisms between them. It then maps the respective geometric diagonals to algebraic ones, which can be used to define compatible tensor products of $\ensuremath{\mathrm{A}_\infty}$-algebras and $\ensuremath{\mathrm{A}_\infty}$-morphisms (with signs). Tensor products of \ensuremath{\mathrm{A}_\infty} -categories and \ensuremath{\mathrm{A}_\infty} -functors are defined in a similar fashion, and we relate them to the different notions of \ensuremath{\mathrm{A}_\infty} -categories with identities. We finally study coassociativity, cocommutativity and compatibility with composition of \ensuremath{\mathrm{A}_\infty} -morphisms for these diagonals.
We show that these properties are always satisfied up to homotopy, hinting at the idea that the category $\infAalg$ should possess some kind of \textit{homotopy} symmetric monoidal structure. \subsection{\ensuremath{\mathrm{A}_\infty} -algebras and \ensuremath{\mathrm{A}_\infty} -morphisms} \label{ss:ainf-alg-ainf-morph} \subsubsection{Definitions} We work in the rest of this article with the homological convention. We will refer to chain complexes as \emph{dg modules}, where the abbreviation dg stands for "differential graded", and their differential will always have degree $-1$. \begin{definition}[$\mathrm{A}_\infty$-algebra] \label{def:ainf-alg} An \emph{$\mathrm{A}_\infty$-algebra} is the data of a dg module $(A,\partial)$ together with operations \[ m_n : A^{\otimes n} \to A \ , \ n \geq 2 \] of degree $|m_n|=n-2$, satisfying the equations \[ [ \partial , m_n ] = - \sum_{\substack{p+q+r=n \\ 2 \leq q \leq n-1}} (-1)^{p+qr}m_{p+1+r}(\mathrm{id}^{\otimes p} \otimes m_q \otimes \mathrm{id}^{\otimes r}) \ , \ n\geq 2 \ . \] \end{definition} \begin{definition}[$\mathrm{A}_\infty$-morphism] \label{def:ainf-morph} An \emph{$\mathrm{A}_\infty$-morphism} $F : A\rightsquigarrow B$ between two $\mathrm{A}_\infty$-algebras $(A,\{m_n\})$ and $(B,\{m_n'\})$ is a family of linear maps \[f_n : A^{\otimes n} \to B \ , \ n \geq 1\] of degree $|f_n|=n-1$, satisfying the equations \[ [ \partial , f_n] = \sum_{\substack{p+q+r=n \\ q \geq 2}} (-1)^{p+qr}f_{p+1+r}(\mathrm{id}^{\otimes p} \otimes m_q \otimes \mathrm{id}^{\otimes r}) \ - \sum_{\substack{i_1+\cdots+i_k=n \\ k \geq 2}} (-1)^{\varepsilon} m_k'(f_{i_1}\otimes\cdots\otimes f_{i_k}) \ , \ n \geq 1 \ ,\] where $\varepsilon = \sum_{u=1}^{k}(k-u)(1-i_u)$. \end{definition} For three $\ensuremath{\mathrm{A}_\infty}$-algebras $A$, $B$, $C$ and two $\ensuremath{\mathrm{A}_\infty}$-morphisms $F : A \rightsquigarrow B$ and $G : B \rightsquigarrow C$, their composition $G \circ F : A \rightsquigarrow C$ is the $\ensuremath{\mathrm{A}_\infty}$-morphism whose operation of arity $n$ is given by the formula \[ (G \circ F)_n := \sum_{i_1+\cdots+i_k=n} (-1)^{\varepsilon} g_k(f_{i_1}\otimes\cdots\otimes f_{i_k}) \ . \] This composition is associative. We moreover point out that a standard \textit{dg (associative) algebra} can be defined as an \ensuremath{\mathrm{A}_\infty} -algebra whose higher operations $m_n$ vanish for $n \geq 3$. For more details on these notions, we refer to \cite[Chapter 9]{LodayVallette12}. \begin{definition} We denote by $\infAalg$ the category of $\ensuremath{\mathrm{A}_\infty}$-algebras with $\ensuremath{\mathrm{A}_\infty}$-morphisms. \end{definition} Representing the operations $m_n$ as corollae \arbreop{0.15} of arity $n$, the equations of \cref{def:ainf-alg} read as \begin{equation} [ \partial , \arbreop{0.15} ] = - \sum_{\substack{p+q+r=n \\ 2 \leq q \leq n-1}} (-1)^{p+qr} \eqainf \ . \label{eq:ainf-alg} \end{equation} Representing the operations $m_n$ in blue \arbreopbleu{0.15}, the operations $m'_n$ in red \arbreoprouge{0.15} and the operations $f_n$ by \arbreopmorph{0.15}, the equations of \cref{def:ainf-morph} can be rewritten as \begin{align} [ \partial , \arbreopmorph{0.15} ] = \sum_{\substack{p+q+r=n \\ q \geq 2}} (-1)^{p+qr} \eqainfmorphun \ - \sum_{\substack{i_1+\cdots+i_k=n \\ k \geq 2}} (-1)^{\varepsilon} \eqainfmorphdeux \ .
\label{eq:ainf-morph} \end{align} Finally, representing the operations $f_n$ by \arbreopmorphcompun\ and the operations $g_n$ by \arbreopmorphcompdeux, the formula for the composition of \ensuremath{\mathrm{A}_\infty} -morphisms reads as \begin{align} \sum_{i_1+\cdots+i_k=n} (-1)^{\varepsilon} \compainf \ . \label{eq:ainf-comp} \end{align} \subsubsection{The operad \ensuremath{\mathrm{A}_\infty}\ and the operadic bimodule \Minf} \label{sss:operad-ainf-operadic-bimod-minf} \begin{definition}[Operad \ensuremath{\mathrm{A}_\infty}] The \emph{operad \ensuremath{\mathrm{A}_\infty}} is the quasi-free dg operad generated in arity $n \geq 2$ by one operation $\arbreop{0.15}$ of degree $n-2$ \[ \ensuremath{\mathrm{A}_\infty} := \left( \mathcal{T}( \arbreopdeux , \arbreoptrois, \arbreopquatre , \cdots ) , \partial \right) \ , \] and whose differential is defined by Equations (\ref{eq:ainf-alg}). \end{definition} \begin{definition}[Operadic bimodule \Minf] The operadic bimodule \Minf\ is the quasi-free $(\ensuremath{\mathrm{A}_\infty} ,\ensuremath{\mathrm{A}_\infty} )$-operadic bimodule generated in arity $n \geq 1$ by one operation $\arbreopmorph{0.15}$ of degree $n-1$ \[ \Minf := \left( \mathcal{T}^{\ensuremath{\mathrm{A}_\infty} , \ensuremath{\mathrm{A}_\infty}}(\arbreopunmorph , \arbreopdeuxmorph , \arbreoptroismorph , \arbreopquatremorph , \cdots ) , \partial \right) \ , \] and whose differential is defined by Equations (\ref{eq:ainf-morph}). \end{definition} We denote by $\ensuremath{\mathrm{End}}_A$ the \textit{endomorphism operad} of a dg module $A$, i.e. the operad whose dg module of operations of arity $n$ is $\ensuremath{\mathrm{End}}_A(n) := \ensuremath{\mathrm{Hom}} (A^{\otimes n},A)$. An \ensuremath{\mathrm{A}_\infty} -algebra structure on $A$ is then equivalent to the datum of a morphism of operads $\ensuremath{\mathrm{A}_\infty} \rightarrow \ensuremath{\mathrm{End}}_A$. We denote similarly by $\ensuremath{\mathrm{Hom}}^A_B$ the $(\ensuremath{\mathrm{End}}_B , \ensuremath{\mathrm{End}}_A)$-operadic bimodule defined by $ \ensuremath{\mathrm{Hom}}^A_B(n) := \ensuremath{\mathrm{Hom}} (A^{\otimes n},B)$. An \ensuremath{\mathrm{A}_\infty} -morphism between two \ensuremath{\mathrm{A}_\infty} -algebras $A$ and $B$ is then equivalent to the datum of a morphism of operadic bimodules $\Minf \rightarrow \ensuremath{\mathrm{Hom}}^A_B$. Composition of \ensuremath{\mathrm{A}_\infty} -morphisms can also be formulated at the level of the operadic bimodule \Minf\ as a morphism of $(\ensuremath{\mathrm{A}_\infty} , \ensuremath{\mathrm{A}_\infty})$-operadic bimodules $\Minf \rightarrow \Minf \circ_{\ensuremath{\mathrm{A}_\infty}} \Minf$, where the notation $\circ_{\ensuremath{\mathrm{A}_\infty}}$ denotes the \emph{relative composite product} \cite[Section 11.2.1]{LodayVallette12}. We write the first factor of $\Minf \circ_{\ensuremath{\mathrm{A}_\infty}} \Minf$ using green for the color above the gauge and red for the color below the gauge, \[ \Minf := \mathcal{T}^{\ensuremath{\mathrm{A}_\infty} , \ensuremath{\mathrm{A}_\infty}}(\arbreopunmorphcompdeux , \arbreopdeuxmorphcompdeux , \arbreoptroismorphcompdeux , \arbreopquatremorphcompdeux , \cdots ) \ , \] and its second factor using blue for the color above the gauge and green for the color below the gauge \[ \Minf := \mathcal{T}^{\ensuremath{\mathrm{A}_\infty} , \ensuremath{\mathrm{A}_\infty}}(\arbreopunmorphcompun , \arbreopdeuxmorphcompun , \arbreoptroismorphcompun , \arbreopquatremorphcompun , \cdots ) \ . 
\] \begin{definition}[Composition morphism] The \emph{composition morphism} is defined to be the morphism of $(\ensuremath{\mathrm{A}_\infty} ,\ensuremath{\mathrm{A}_\infty} )$-operadic bimodules $\ensuremath{\mathrm{comp}} : \Minf \rightarrow \Minf \circ_{\ensuremath{\mathrm{A}_\infty}} \Minf$ given on the generating operations of \Minf\ by \[ \ensuremath{\mathrm{comp}} \left( \arbreopmorph{0.15} \right) = \sum_{i_1+\cdots+i_k=n} (-1)^{\varepsilon} \compainf \ . \] \end{definition} \noindent The composition of two \ensuremath{\mathrm{A}_\infty} -morphisms $A \rightsquigarrow B$ and $B \rightsquigarrow C$ is then equivalent to the following composition of morphisms of operadic bimodules \[ \Minf \overset{\ensuremath{\mathrm{comp}}}{\longrightarrow} \Minf \circ_{\ensuremath{\mathrm{A}_\infty}} \Minf \longrightarrow \ensuremath{\mathrm{Hom}}^B_C \circ_{\ensuremath{\mathrm{End}}_B} \ensuremath{\mathrm{Hom}}^A_B \longrightarrow \ensuremath{\mathrm{Hom}}^A_C \ . \] \subsubsection{The Forcey--Loday multiplihedra realize the operadic bimodule \Minf} \label{sss:forcey--loday-realize} \begin{definition}[Cellular orientation] \leavevmode Let $P\subset\mathbb{R}^n$ be a polytope, and let $F$ be a face of $P$. A \emph{cellular orientation of $F$} is a choice of orientation of its linear span. A \emph{cellular orientation of $P$} is a choice of cellular orientation for each face $F$ of $P$. \end{definition} We respectively denote by $\mathsf{CW}$ and $\mathsf{dg-mod}$ the symmetric monoidal categories of CW complexes and of dg modules over $\mathbb{Z}$, and by $C_\bullet^{\mathrm{cell}} : \mathsf{CW} \rightarrow \mathsf{dg-mod}$ the cellular chains functor. A choice of a cellular orientation for every polytope $P \in \mathsf{Poly}$ defines an inclusion $\mathsf{Poly} \subset \mathsf{CW}$. Then, the strong symmetric monoidal functor $C_\bullet^{\mathrm{cell}}$ respectively sends operads and operadic bimodules in polytopes to dg operads and dg operadic bimodules. \begin{definition}[Left-levelwise order] \label{def:left-levelwise-tree} Let $t$ be a (2-colored) tree. The \emph{left-levelwise order} on the vertices of $t$ is defined by ordering them from bottom to top and from left to right, proceeding one level at a time. \end{definition} \begin{figure}[h!] \centering \begin{subfigure}{0.4\textwidth} \centering \exampleleftlevelwiseone \end{subfigure} \begin{subfigure}{0.4\textwidth} \centering \exampleleftlevelwisetwo \end{subfigure} \caption{The tree on the left decomposes as $(c_4\circ_3 c_4)\circ_3 c_3$ and the orientation on the face it labels is determined by the product $K_4 \times K_4 \times K_3$. The tree on the right decomposes as $ (c_4\circ_1 c_3)\circ_6 c_4$ and defines the orientation determined by the product $K_4 \times K_3 \times K_4$.} \label{fig:left-levelwise-order} \end{figure} Given a tree $t$, there is a unique decomposition $t=(\cdots ((c_{n_1} \circ_{i_1} c_{n_2})\circ_{i_2}c_{n_3})\cdots \circ_{i_k} c_{n_{k+1}})$ where the corollae $c_n$ are grafted according to this total order. Using the grafting operations defined in \cref{sss:grafting}, a 2-colored tree similarly admits a unique decomposition as a sequence of blue corollae, red corollae and 2-colored corollae ordered according to this total order.
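For instance, the two binary trees with three leaves decompose as $c_2 \circ_1 c_2$ and $c_2 \circ_2 c_2$ respectively: in each case the root vertex comes first in the left-levelwise order, and the upper corolla is then grafted at the first or second leaf of the root corolla. Two larger examples are given in \cref{fig:left-levelwise-order}.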
We can then make the same choices of cellular orientations as in \cite[Section 1.4]{mazuir-I}, illustrated in \cref{fig:left-levelwise-order}: \begin{itemize} \item For the Loday associahedra $\mathrm{K}_n \subset \mathbb{R}^{n-1}$ of \cite{MTTV19}, we choose the basis $\{e_1 - e_{j+1}\}_{1\leq j \leq n-2}$ as positively oriented basis of the top dimensional cell $\arbreop{0.15}$. We then choose the orientation of any other face $t$ of $\mathrm{K}_n$ to be the image of the positively oriented bases of the top cells of the polytopes $\mathrm{K}_{n_i}$ under the sequence of partial compositions following the left-levelwise order on $t$. \item We choose the basis $\{- e_j\}_{1\leq j \leq n-1}$ as positively oriented basis of the top dimensional cell $\arbreopmorph{0.15}$ of the Forcey--Loday multiplihedra $\mathrm{J}_n \subset \mathbb{R}^{n-1}$. We then choose the orientation of any other face $t$ of $\mathrm{J}_n$ to be the image of the positively oriented bases of the top cells of the polytopes $\mathrm{K}_{n_i}$ and $\mathrm{J}_{n_j}$ under the sequence of action-composition maps, following the left-levelwise order on $t$. \end{itemize} \begin{proposition} \label{prop:cellular-chains} These cellular orientations on the Loday associahedra and the Forcey--Loday multiplihedra provide an isomorphism of dg operads $C_\bullet^{\mathrm{cell}}(\{\mathrm{K}_n\})\cong \mathrm{A}_\infty$ and an isomorphism of dg operadic bimodules $C_\bullet^{\mathrm{cell}}(\{\mathrm{J}_n\})\cong \Minf$. \end{proposition} \begin{proof} The choice of a cellular orientation endows the $\mathrm{K}_n$ and $\mathrm{J}_n$ with a natural CW structure (see \cite[Proposition 4.22]{LA21}). The choice of the left-levelwise order on trees ensures that we recover precisely the usual sign conventions for the partial compositions of the quasi-free operad $\ensuremath{\mathrm{A}_\infty}$ and for the action-composition maps of the quasi-free operadic bimodule $\Minf$. The signs for the respective differentials were computed in \cite[Section 1.4]{mazuir-I}. \end{proof} \subsection{Tensor product of \ensuremath{\mathrm{A}_\infty} -algebras and \ensuremath{\mathrm{A}_\infty} -morphisms} \subsubsection{Diagonals on the operad \ensuremath{\mathrm{A}_\infty}\ and on the operadic bimodule \Minf} \begin{definition}[Operadic diagonals] $ $ \begin{enumerate}[leftmargin=*] \item A \emph{diagonal on the operad \ensuremath{\mathrm{A}_\infty}} is a morphism of dg operads $\triangle : \ensuremath{\mathrm{A}_\infty} \rightarrow \ensuremath{\mathrm{A}_\infty} \otimes \ensuremath{\mathrm{A}_\infty}$ which satisfies $\triangle (\arbreopdeux) = \arbreopdeux \otimes \arbreopdeux$. \item Given a diagonal on the operad \ensuremath{\mathrm{A}_\infty}, a \emph{diagonal on the operadic bimodule \Minf} is a morphism of operadic bimodules $\triangle : \Minf \rightarrow \Minf \otimes \Minf$ which satisfies $\triangle ( \arbreopunmorph ) = \arbreopunmorph \otimes \arbreopunmorph$, and where $\Minf \otimes \Minf$ is endowed with its $(\ensuremath{\mathrm{A}_\infty} , \ensuremath{\mathrm{A}_\infty})$-operadic bimodule structure induced by the diagonal on \ensuremath{\mathrm{A}_\infty} . \end{enumerate} \end{definition} Diagonals provide an adapted framework to define tensor products of \ensuremath{\mathrm{A}_\infty} -algebras and \ensuremath{\mathrm{A}_\infty} -morphisms.
Given a diagonal $\ensuremath{\mathrm{A}_\infty} \to \ensuremath{\mathrm{A}_\infty} \otimes \ensuremath{\mathrm{A}_\infty}$ and two \ensuremath{\mathrm{A}_\infty} -algebras $A$ and $B$, one can define an \ensuremath{\mathrm{A}_\infty} -algebra structure on $A \otimes B$ by considering the following composition \[ \ensuremath{\mathrm{A}_\infty} \longrightarrow \ensuremath{\mathrm{A}_\infty} \otimes \ensuremath{\mathrm{A}_\infty} \longrightarrow \ensuremath{\mathrm{End}}_A \otimes \ensuremath{\mathrm{End}}_B \longrightarrow \ensuremath{\mathrm{End}}_{A \otimes B} \ . \] Similarly, given a diagonal $\Minf \to \Minf \otimes \Minf$ and two \ensuremath{\mathrm{A}_\infty} -morphisms $F_1 : A_1 \rightsquigarrow B_1$ and $F_2 : A_2 \rightsquigarrow B_2$, one can define an \ensuremath{\mathrm{A}_\infty} -morphism $F_1 \otimes F_2 : A_1 \otimes A_2 \rightsquigarrow B_1 \otimes B_2$ by the following composition \[ \Minf \rightarrow \Minf \otimes \Minf \rightarrow \ensuremath{\mathrm{Hom}}^{A_1}_{B_1} \otimes \ensuremath{\mathrm{Hom}}^{A_2}_{B_2} \rightarrow \ensuremath{\mathrm{Hom}}^{A_1 \otimes A_2}_{B_1 \otimes B_2} \ . \] We moreover point out that the conditions $\triangle (\arbreopdeux) = \arbreopdeux \otimes \arbreopdeux$ and $\triangle ( \arbreopunmorph ) = \arbreopunmorph \otimes \arbreopunmorph$ respectively imply that these constructions recover the standard tensor product of dg algebras and the standard tensor product of ordinary morphisms between dg algebras. \subsubsection{Admissible edges and permutations} We fix a (2-colored) nested linear graph $(\ell,\mathcal{N})$. We denote by $N_i$ the unique minimal nest of $\mathcal{N}$, with respect to nest inclusion, containing the edge $i$. \begin{definition}[Admissible edge] For a nested linear graph $(\ell,\mathcal{N})$, an edge $i$ is \emph{admissible} with respect to $\mathcal{N}$ if $i \neq \min N_i$. For a 2-colored nested linear graph $(\ell,\mathcal{N})$, an edge $i$ is \emph{admissible} with respect to $\mathcal{N}$ when $N_i$ is bicolored, or if $i \neq \min N_i$ when $N_i$ is monochrome. We denote the set of admissible edges of $\mathcal{N}$ by $\mathrm{Ad}(\mathcal{N})$. \end{definition} \begin{definition}[Left-levelwise order] \label{def:left-levelwise-graph} The \emph{left-levelwise order} on $\mathcal{N}$ is defined by ordering the nests by decreasing order of cardinality, and ordering two nests of the same cardinality according to the increasing order on their minimal elements. \end{definition} \noindent Under the bijection of \cref{lemma:bijection}, the left-levelwise order on the nesting of a nested linear graph is equivalent to the left-levelwise order on the vertices of the corresponding tree $t$, as defined in \cref{def:left-levelwise-tree}. Consider the left-levelwise order $N^1<N^2<\cdots < N^k$ on the nesting $\mathcal{N}=\{N^j\}_{1\leq j \leq k}$. We endow the set $\mathrm{Ad}(\mathcal{N})$ with a total order, by ordering the admissible edges of $N^1 \setminus \cup_{2\leq j \leq k} N^j$ in increasing order, then the admissible edges of $N^2 \setminus \cup_{3\leq j \leq k} N^j$ in increasing order, and so on. Given two nestings $\mathcal{N}, \mathcal{N}'$ of $\ell$, we endow the set $\mathrm{Ad}(\mathcal{N})\sqcup \mathrm{Ad}(\mathcal{N}')$ with the total order given by following the total order on $\mathrm{Ad}(\mathcal{N})$ and then the total order on $\mathrm{Ad}(\mathcal{N}')$.
We denote by $\triangle^K$ and $\triangle^J$ the algebraic diagonals obtained from the polytopal ones by applying the cellular chains functor, see \cref{prop:diagonal-polytopale-a-infini,prop:diagonal-polytopale-m-infini} below. The proofs of these two propositions include the proofs of the following two lemmas. \begin{lemma} \label{prop:signs-ass} For a pair of nestings of complementary dimensions $(\mathcal{N}, \mathcal{N}')\in \Ima\triangle^K$, the function $\sigma_{\mathcal{N}\mathcal{N}'}: \mathrm{Ad}(\mathcal{N})\sqcup \mathrm{Ad}(\mathcal{N}') \to (1,2,\ldots,|\mathrm{Ad}(\mathcal{N})\sqcup \mathrm{Ad}(\mathcal{N}')|)$ defined on $i \in \mathrm{Ad}(\mathcal{N})$ by \begin{equation*} \sigma_{\mathcal{N}\mathcal{N}'}(i)= \begin{cases} \min N_i -1 & \text{ if } i \in \mathrm{Ad}(\mathcal{N})\cap \mathrm{Ad}(\mathcal{N}') \text{ and } 1 \neq \min N_i < \min N_i' \\ i-1 & \text{ otherwise ,} \end{cases} \end{equation*} and similarly on $i \in \mathrm{Ad}(\mathcal{N}')$ by reversing the roles of $\mathcal{N}$ and $\mathcal{N}'$, induces a permutation of the set $\{1,2,\ldots,|\mathrm{Ad}(\mathcal{N})\sqcup \mathrm{Ad}(\mathcal{N}')|\}$ that we will still denote by $\sigma_{\mathcal{N}\mathcal{N}'}$. \end{lemma} \begin{lemma} \label{prop:signs-mul} For a pair of 2-colored nestings of complementary dimensions $(\mathcal{N},\mathcal{N}')\in \Ima\triangle^J$, the function $\sigma_{\mathcal{N}\mathcal{N}'}: \mathrm{Ad}(\mathcal{N})\sqcup \mathrm{Ad}(\mathcal{N}') \to (1,2,\ldots,|\mathrm{Ad}(\mathcal{N})\sqcup \mathrm{Ad}(\mathcal{N}')|)$ defined on $i \in \mathrm{Ad}(\mathcal{N})$ by \begin{equation*} \sigma_{\mathcal{N}\mathcal{N}'}(i)= \begin{cases} \min N_i & \text{ if } i \in \mathrm{Ad}(\mathcal{N})\cap \mathrm{Ad}(\mathcal{N}') , N_i \text{ is monochrome and } N_i' \text{ is not} \\ \min N_i & \begin{array}{l} \text{ if } i \in \mathrm{Ad}(\mathcal{N})\cap \mathrm{Ad}(\mathcal{N}'), N_i \text{ and } N_i' \text{ are monochrome} \\ \text{ and } \min N_i < \min N_i' \ , \end{array} \\ i & \text{ otherwise ,} \end{cases} \end{equation*} and similarly on $i \in \mathrm{Ad}(\mathcal{N}')$ by reversing the roles of $\mathcal{N}$ and $\mathcal{N}'$, induces a permutation of the set $\{1,2,\ldots,|\mathrm{Ad}(\mathcal{N})\sqcup \mathrm{Ad}(\mathcal{N}')|\}$ that we will still denote by $\sigma_{\mathcal{N}\mathcal{N}'}$. \end{lemma} \subsubsection{The polytopal diagonals on \ensuremath{\mathrm{A}_\infty}\ and \Minf} \label{ss:diagonals} We use nested linear graphs introduced in \cref{ss:2-col} to work with the operad \ensuremath{\mathrm{A}_\infty}\ and the operadic bimodule \Minf . The generating operation of arity $n$ of \ensuremath{\mathrm{A}_\infty}\ corresponds to the trivial nested linear graph with $n$ vertices $\black{ \bullet \cdots \bullet }$, while the generating operation of arity $n$ of \Minf\ is represented by the trivial 2-colored nested linear graph with $n$ vertices $\purple{\bullet \cdots \bullet}$. \begin{proposition} \label{prop:diagonal-polytopale-a-infini} The image under the functor $C_\bullet^{\mathrm{cell}}$ of the diagonal of the Loday associahedra constructed in \cite{MTTV19} defines a diagonal on the operad \ensuremath{\mathrm{A}_\infty} , that we denote \ensuremath{\triangle^{K}} . 
It is determined by the formula \[ \ensuremath{\triangle^{K}} \left( \black{ \bullet \cdots \bullet } \right) = \sum_{\substack{ \mathcal{N},\mathcal{N}' \in \mathcal{N}_n \\ \tp(\mathcal{N}) \leq \bm(\mathcal{N'}) \\ |\mathcal{N}|+|\mathcal{N}'|=n }} (-1)^{|\mathrm{Ad}(\mathcal{N})\cap \mathrm{Ad}(\mathcal{N}')|}\mathrm{sgn}(\sigma_{\mathcal{N}\mathcal{N}'})\mathcal{N} \otimes \mathcal{N}' \ , \] where $\bullet \cdots \bullet$ stands for the linear graph with $n$ vertices. \end{proposition} \begin{proof} The image of the diagonal on the Loday associahedra under the functor $C_\bullet^{\mathrm{cell}}$ defines a diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ as this functor is strong monoidal. This diagonal $\ensuremath{\triangle^{K}} : \ensuremath{\mathrm{A}_\infty} \rightarrow \ensuremath{\mathrm{A}_\infty} \otimes \ensuremath{\mathrm{A}_\infty}$ is determined by the image of the generating operations of the quasi-free operad \ensuremath{\mathrm{A}_\infty} , which are the trivially nested linear graphs. The signs arise from the choices of cellular orientations on the Loday associahedra made in \cref{sss:forcey--loday-realize} as follows. As explained in the proof of \cite[Proposition 4.27]{LA21}, the computation of the signs boils down to the computation of the determinant of the bases $e_j^{F}, e_j^{G}$ determining the cellular orientations of the faces $F$ and $G$ associated to the nestings $\mathcal{N}$ and $\mathcal{N}'$, expressed in the basis $e_j$ of the top dimensional cell of $\mathrm{K}_n$. The second part of the proof of \cite[Theorem 1.26]{LA21} shows that $\dim(F\cap \rho_z G)=0$, for any $z \in (\mathring F+ \mathring G)/2$. Combined with the fact that $\dim F + \dim G = \dim \mathrm{K}_n$, this implies that the two bases $e_j^F, e_j^G$ together form a basis of the linear span of $\mathrm{K}_n$. Writing horizontally the $e_j^F$ and then the $e_j^G$ in the basis $e_j$ defines a square matrix. The positions of the rightmost non-zero entries of each line are given by the admissible edges of $\mathcal{N}$ and $\mathcal{N}'$. The permutation $\sigma_{\mathcal{N}\mathcal{N}'}$ corresponds to a permutation of the lines of this matrix, sending these rightmost entries to the diagonal, except for one case: when $\mathcal{N}$ and $\mathcal{N}'$ share the same admissible edge. In this case, linear independence guarantees that the two vectors differ in another place. We moreover point out that the $-1$ term in the definition of the permutation $\sigma_{\mathcal{N}\mathcal{N}'}$ in \cref{prop:signs-ass} stems from the fact that $\mathrm{K}_n$ is defined in $\mathbb{R}^{n-1}$ but has dimension $n-2$.
\end{proof} We compute in particular \[ \begin{matrix} \ensuremath{\triangle^{K}} ( \black { \bullet \bullet } ) &=& & \black{\bullet \bullet} \otimes \black{\bullet \bullet} \ , & & \\ \ensuremath{\triangle^{K}} ( \black { \bullet \bullet \bullet } ) &=& & \black{\black{\bullet \bullet} \bullet} \otimes \black{\bullet \bullet \bullet} &+& \black{\bullet \bullet \bullet} \otimes \black{ \bullet \black{\bullet \bullet}} \ , \\ \ensuremath{\triangle^{K}} ( \black{ \bullet \bullet \bullet \bullet} ) &=& & \black{\bullet \bullet \bullet \bullet} \otimes \black{\bullet \black{ \bullet \black{ \bullet \bullet }} } &+& \black{ \black{ \black{\bullet \bullet} \bullet } \bullet } \otimes \black{ \bullet \bullet \bullet \bullet } \\ & & -& \black{ \black{\bullet \bullet } \bullet \bullet } \otimes \black{ \bullet \bullet \black{ \bullet \bullet }} &+& \black{ \black{ \bullet \bullet \bullet } \bullet } \otimes \black{ \bullet \black{ \bullet \bullet } \bullet } \\ & & +& \black{ \black{ \bullet \bullet \bullet } \bullet } \otimes \black{ \bullet \black{ \bullet \bullet \bullet }} &+& \black{ \bullet \black{ \bullet \bullet } \bullet } \otimes \black{ \bullet \black{ \bullet \bullet \bullet }} \ . \end{matrix} \] \begin{remark} \cref{prop:diagonal-polytopale-a-infini} completes the work of \cite{MTTV19}, by explicitly computing the signs for the polytopal diagonal at the dg level. This formula corresponds in fact to the formula originally computed in \cite{MarklShnider06} (up to verification of the signs). We also conjecture that this diagonal is equal to the diagonal constructed in \cite{SaneblidzeUmble04}. \end{remark} \begin{definition}[Tensor product of \ensuremath{\mathrm{A}_\infty} -algebras] \label{def:tensor-product-ainf-alg} Given two \ensuremath{\mathrm{A}_\infty} -algebras $A$ and $B$, their tensor product as \ensuremath{\mathrm{A}_\infty} -algebras is defined to be the dg module $A \otimes B$ endowed with the \ensuremath{\mathrm{A}_\infty} -algebra structure induced by the diagonal \ensuremath{\triangle^{K}} . \end{definition} \begin{proposition} \label{prop:diagonal-polytopale-m-infini} The image under the functor $C_\bullet^{\mathrm{cell}}$ of the diagonal on the Forcey--Loday multiplihedra constructed in this paper defines a diagonal on the operadic bimodule \Minf , that we denote \ensuremath{\triangle^{J}} . It is determined by the formula \[ \ensuremath{\triangle^{J}} \left( \purple{\bullet \cdots \bullet} \right) = \sum_{ \mathcal{N},\mathcal{N}'} (-1)^{|\mathrm{Ad}(\mathcal{N})\cap \mathrm{Ad}(\mathcal{N}')|} \mathrm{sgn}(\sigma_{\mathcal{N}\mathcal{N}'}) \mathcal{N} \otimes \mathcal{N}' \ ,\] where the sum runs over the pairs $\mathcal{N},\mathcal{N}' \in \mathcal{N}^2_n$ such that $\ |\ensuremath{\mathrm{mono}}(\mathcal{N})|+|\ensuremath{\mathrm{mono}}(\mathcal{N}')|=n-1$ and which satisfy the conditions in \cref{thm:formuladiagonal}. \end{proposition} \begin{proof} The proof is similar to the proof of \cref{prop:diagonal-polytopale-a-infini}. Note that in this case, there is no $-1$ term in the definition of the permutation $\sigma_{\mathcal{N}\mathcal{N}'}$ in \cref{prop:signs-mul} since $\mathrm{J}_n$ is full-dimensional.
\end{proof} We compute in particular \[ \begin{matrix} \ensuremath{\triangle^{J}} ( \purple{ \bullet } ) &=& & \purple{ \bullet } \otimes \purple{ \bullet } \ , & & \\ \ensuremath{\triangle^{J}} ( \purple{ \bullet \bullet } ) &=& & \blue{\bullet \bullet} \otimes \purple{\bullet \bullet} & + & \purple{\bullet \bullet} \otimes \red{\bullet \bullet} \ , \\ \ensuremath{\triangle^{J}} (\purple{\bullet \bullet \bullet}) &=& & \blue{\blue{\bullet \bullet}\bullet} \otimes \purple{\bullet \bullet \bullet} & + & \purple{\bullet \bullet \bullet} \otimes \red{\bullet \red{\bullet \bullet}} \\ & &- & \blue{\bullet \bullet \bullet} \otimes \purple{\bullet \blue{\bullet \bullet}} & - & \blue{\bullet \bullet \bullet} \otimes \red{\bullet \purple{\bullet \bullet}} \\ & &+ & \purple{\bullet \blue{\bullet \bullet}} \otimes \red{\bullet \purple{\bullet \bullet}} & - & \purple{\blue{\bullet \bullet} \bullet} \otimes \red{\purple{\bullet \bullet} \bullet} \\ & & + & \purple{\blue{\bullet \bullet} \bullet} \otimes \red{\bullet \bullet \bullet} & + & \red{\purple{\bullet \bullet} \bullet} \otimes \red{\bullet \bullet \bullet} \ . \end{matrix} \] \begin{definition}[Tensor product of \ensuremath{\mathrm{A}_\infty} -morphisms] \label{def:tensor-ainf-morph} Let $F_1 : A_1 \rightsquigarrow B_1$ and $F_2 : A_2 \rightsquigarrow B_2$ be two \ensuremath{\mathrm{A}_\infty} -morphisms between \ensuremath{\mathrm{A}_\infty}-algebras. Their tensor product is defined to be the \ensuremath{\mathrm{A}_\infty} -morphism $F_1 \otimes F_2 : A_1 \otimes A_2 \rightsquigarrow B_1 \otimes B_2$ induced by the diagonal \ensuremath{\triangle^{J}} on \Minf \ . \end{definition} One can ask whether the dg "magical formula" for the diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ also defines a diagonal on the operadic bimodule \Minf, i.e. if by relaxing the conditions of \cref{thm:formuladiagonal} to the condition $\tp(\mathcal{N}) \leq \bm(\mathcal{N'})$, the formula of \cref{prop:diagonal-polytopale-m-infini} still defines a diagonal on \Minf \ . A simple computation in arity 4 shows that the answer to this question is negative. In other words, it is not possible to naively extend the "magical formula" for the tensor product of \ensuremath{\mathrm{A}_\infty} -algebras to define a tensor product of \ensuremath{\mathrm{A}_\infty} -morphisms, see also \cref{ss:about}. \subsection{Categorification} \subsubsection{Tensor product of \ensuremath{\mathrm{A}_\infty} -categories and \ensuremath{\mathrm{A}_\infty} -functors} The horizontal categorifications of the notions of \ensuremath{\mathrm{A}_\infty} -algebra and \ensuremath{\mathrm{A}_\infty} -morphism are the notions of \ensuremath{\mathrm{A}_\infty} -category and \ensuremath{\mathrm{A}_\infty} -functor, respectively. We refer to \cite[Chapter 1]{Seidel08} for the definitions of these two notions. We borrow the notations from \cite{Seidel08} and will moreover use the sign conventions of \cref{ss:ainf-alg-ainf-morph}. 
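To illustrate these constructions, let us unravel the preceding computations of \ensuremath{\triangle^{K}}\ and \ensuremath{\triangle^{J}}\ in low arity, leaving implicit the Koszul signs coming from the reordering of the tensor factors. For the tensor product of two \ensuremath{\mathrm{A}_\infty} -algebras $A$ and $B$ of \cref{def:tensor-product-ainf-alg}, one finds \begin{align*} m_2^{A \otimes B} &= m_2 \otimes m_2 \ , \\ m_3^{A \otimes B} &= m_2 ( m_2 \otimes \ensuremath{\mathrm{id}} ) \otimes m_3 + m_3 \otimes m_2 ( \ensuremath{\mathrm{id}} \otimes m_2 ) \ , \end{align*} while for the tensor product of two \ensuremath{\mathrm{A}_\infty} -morphisms $F_1$ and $F_2$ of \cref{def:tensor-ainf-morph}, one finds \[ (F_1 \otimes F_2)_1 = f_1 \otimes f_1 \ , \qquad (F_1 \otimes F_2)_2 = f_1 m_2 \otimes f_2 + f_2 \otimes m_2' ( f_1 \otimes f_1 ) \ , \] where the operations in the first (resp. second) tensor factor are the ones attached to $F_1$ (resp. $F_2$). The same formulas compute the low-arity operations in the two definitions below.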
\begin{definition}[Tensor product of \ensuremath{\mathrm{A}_\infty} -categories] \label{def:tensor-product-ainf-cat} The \emph{tensor product} of two $\mathrm{A}_\infty$-categories $\cat{A}$ and $\cat{B}$ is given by \begin{itemize}[leftmargin=*] \item the set of objects $\mathrm{Ob}(\cat{A}\otimes \cat{B})\coloneqq \mathrm{Ob}(\cat{A})\times\mathrm{Ob}(\cat{B})$, \item for each pair of objects $X_1\times Y_1,X_2\times Y_2 \in \mathrm{Ob}(\cat{A}\otimes \cat{B})$, the dg module of morphisms \[\cat{A}\otimes \cat{B}(X_1\times Y_1,X_2\times Y_2)\coloneqq \cat{A}(X_1,X_2)\otimes\cat{B}(Y_1,Y_2) \ , \] \end{itemize} and by defining the higher compositions $m_n$ as in \cref{prop:diagonal-polytopale-a-infini}. \end{definition} \begin{samepage} \begin{definition}[Tensor product of \ensuremath{\mathrm{A}_\infty} -functors] The \emph{tensor product} of two $\mathrm{A}_\infty$-functors $\cat{F}:\cat{A}_1 \rightsquigarrow \cat{B}_1$ and $\cat{G}:\cat{A}_2 \rightsquigarrow \cat{B}_2$ is given by the function \[ \mathrm{Ob}(\cat{F}\otimes \cat{G})\coloneqq \mathrm{Ob}(\cat{F})\times \mathrm{Ob}(\cat{G}) : \mathrm{Ob}(\cat{A}_1\otimes\cat{A}_2) \to \mathrm{Ob}(\cat{B}_1\otimes\cat{B}_2) \ , \] and by defining the operations $(\cat{F} \otimes \cat{G})_n$ as in \cref{prop:diagonal-polytopale-m-infini}. \end{definition} \end{samepage} \subsubsection{Identities} The category $H_*(\cat{A})$ associated to an \ensuremath{\mathrm{A}_\infty} -category $\cat{A}$ does not necessarily have identity morphisms. As explained in \cite[Section 1.2]{Seidel08}, there exist three notions of \ensuremath{\mathrm{A}_\infty} -category with identity morphisms: \textit{strictly unital \ensuremath{\mathrm{A}_\infty} -category}, \textit{cohomologically unital \ensuremath{\mathrm{A}_\infty} -category} and \textit{homotopy unital \ensuremath{\mathrm{A}_\infty} -category}. \begin{enumerate}[leftmargin=*] \item A \textit{cohomologically unital} \ensuremath{\mathrm{A}_\infty} -category is an \ensuremath{\mathrm{A}_\infty} -category $\cat{A}$ which is such that $H_*(\cat{A})$ has identity morphisms. \item A \textit{strictly unital} \ensuremath{\mathrm{A}_\infty} -category is an \ensuremath{\mathrm{A}_\infty} -category together with an element $e_X \in \cat{A} (X ,X )$ for every $X \in \mathrm{Ob}(\cat{A})$ such that $\partial (e_X) = 0$, $m_2 (e , \cdot ) = m_2 (\cdot , e ) = \ensuremath{\mathrm{id}}$ and $m_n ( \cdots , e , \cdots ) = 0 \text { for } n \geq 3$. \item A \textit{homotopy unital} \ensuremath{\mathrm{A}_\infty} -category is defined to be an \ensuremath{\mathrm{A}_\infty} -category together with elements $e_X \in \cat{A} (X ,X )$ and endowed with additional operations encoding the fact that the previous relations on the $m_n$ and the $e_X$ are satisfied only up to higher coherent homotopies, see also \cite[Section 6.1]{HirshMilles12}. \end{enumerate} We have in particular that \[ \text{strictly unital} \Rightarrow \text{homotopy unital} \Rightarrow \text{cohomologically unital} \ . \] The proof of the following proposition is straightforward. \begin{proposition} $ $ \begin{enumerate}[leftmargin=*] \item If $\cat{A}$ and $\cat{B}$ are cohomologically unital \ensuremath{\mathrm{A}_\infty} -categories, the tensor \ensuremath{\mathrm{A}_\infty} -category $\cat{A} \otimes \cat{B}$ is again cohomologically unital.
\item If $\cat{A}$ and $\cat{B}$ are strictly unital \ensuremath{\mathrm{A}_\infty} -categories, the tensor \ensuremath{\mathrm{A}_\infty} -category $\cat{A} \otimes \cat{B}$ is again strictly unital, with identity morphisms $e_{X \times Y} := e_X \otimes e_Y$ for $X \in \mathrm{Ob}(\cat{A})$ and $Y \in \mathrm{Ob}(\cat{B})$. \end{enumerate} \end{proposition} If $\cat{A}$ and $\cat{B}$ are homotopy unital \ensuremath{\mathrm{A}_\infty} -categories, we have to define the additional operations associated to the fact that the elements $e_X \otimes e_Y$ are identity morphisms up to homotopy in order to endow the \ensuremath{\mathrm{A}_\infty} -category $\cat{A} \otimes \cat{B}$ with a homotopy unital \ensuremath{\mathrm{A}_\infty} -category structure. In other words, we have to define a diagonal on the operad \uAinf\ encoding homotopy unital \ensuremath{\mathrm{A}_\infty} -algebras, which, to the authors' knowledge, has not been done yet. An idea would be to define a diagonal on the unital associahedra, which are CW-complexes constructed by Muro and Tonks in \cite{MuroTonks} and which form an operad whose image under the cellular chains functor is the operad \uAinf. However, not all unital associahedra are polytopes, meaning that the present techniques cannot be directly applied to them. \subsection{Homotopy properties of diagonals on \ensuremath{\mathrm{A}_\infty}\ and \Minf } \label{ss:homotopy-properties} \subsubsection{The 2-colored viewpoint} The operad \ensuremath{\mathrm{A}_\infty}\ together with the operadic bimodule \Minf\ define the quasi-free 2-colored operad \[ A_\infty^2 := \left( \mathcal{T} (\arbreopdeuxcol{Red!60} , \arbreoptroiscol{Red!60} , \arbreopquatrecol{Red!60}, \cdots, \arbreopdeuxcol{MidnightBlue} , \arbreoptroiscol{MidnightBlue} , \arbreopquatrecol{MidnightBlue} , \cdots, \arbreopunmorph , \arbreopdeuxmorph , \arbreoptroismorph , \arbreopquatremorph , \cdots ) , \partial \right) \ , \] whose differential is given by the equations of \cref{def:ainf-alg} and \cref{def:ainf-morph}. We refer to \cite[Section 11]{yau-colored} for a complete definition of a 2-colored operad. The data of \ensuremath{\mathrm{A}_\infty} -algebra structures on two dg modules $A$ and $B$ together with an \ensuremath{\mathrm{A}_\infty} -morphism $A \rightsquigarrow B$ between them is equivalent to a morphism of 2-colored operads $\ensuremath{\mathrm{A}_\infty^2} \longrightarrow \ensuremath{\mathrm{End}} ( A \text{\hspace{2pt}} ; B) $, where $\ensuremath{\mathrm{End}} ( A ; B)$ is the \textit{endomorphism 2-colored operad} naturally associated to $A$ and $B$. The data of a diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ and of a diagonal on the operadic bimodule \Minf\ is moreover equivalent to the datum of a morphism of 2-colored operads $\ensuremath{\mathrm{A}_\infty^2} \longrightarrow \ensuremath{\mathrm{A}_\infty^2} \otimes \ensuremath{\mathrm{A}_\infty^2}$, while the composition of \ensuremath{\mathrm{A}_\infty} -morphisms can be defined by a morphism of 2-colored operads $\ensuremath{\mathrm{A}_\infty^2} \longrightarrow \ensuremath{\mathrm{A}_\infty^2} \circ_{\ensuremath{\mathrm{A}_\infty}} \ensuremath{\mathrm{A}_\infty^2} $. \subsubsection{Coassociativity and cocommutativity} \label{sss:coassoc-cocomm} First, we would like to know whether, given three \ensuremath{\mathrm{A}_\infty} -algebras $A$, $B$ and $C$, the two \ensuremath{\mathrm{A}_\infty} -algebra structures $( A \otimes B) \otimes C$ and $A \otimes ( B \otimes C)$ on the dg module $A \otimes B \otimes C$ are the same.
In operadic terms, this amounts to asking whether the diagonal on \ensuremath{\mathrm{A}_\infty}\ is coassociative. \begin{proposition} $ $ \label{prop:nocoassoc} \begin{enumerate}[leftmargin=*] \item There is no diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ which is coassociative. \item There is no diagonal on the operadic bimodule \Minf\ which is coassociative. \end{enumerate} \end{proposition} \begin{proof} The non-existence of a coassociative diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ was already proven in \cite[Section 6]{MarklShnider06}. The non-existence of a coassociative diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ implies the non-existence of a coassociative diagonal on the operadic bimodule \Minf . Indeed, given diagonals $\triangle^{\ensuremath{\mathrm{A}_\infty}}$ and $\triangle^{\Minf}$, it is not possible to compare the two morphisms of dg operadic bimodules $ ( \triangle^{\Minf} \otimes \ensuremath{\mathrm{id}}^{\Minf} ) \triangle^{\Minf}$ and $(\ensuremath{\mathrm{id}}^{\Minf} \otimes \triangle^{\Minf} )\triangle^{\Minf}$, as the $(\ensuremath{\mathrm{A}_\infty} , \ensuremath{\mathrm{A}_\infty})$-operadic bimodule structures induced on $\Minf^{\otimes 3}$ by $ ( \triangle^{\ensuremath{\mathrm{A}_\infty}} \otimes \ensuremath{\mathrm{id}}^{\ensuremath{\mathrm{A}_\infty}} ) \triangle^{\ensuremath{\mathrm{A}_\infty}}$ and $(\ensuremath{\mathrm{id}}^{\ensuremath{\mathrm{A}_\infty}} \otimes \triangle^{\ensuremath{\mathrm{A}_\infty}} ) \triangle^{\ensuremath{\mathrm{A}_\infty}}$ do not coincide. We can in fact prove a stronger result: for any diagonal $\triangle : \Minf \to \Minf \otimes \Minf$, we have that \[ \left( (\ensuremath{\mathrm{id}} \otimes \triangle ) \triangle - (\triangle \otimes \ensuremath{\mathrm{id}}) \triangle \right) \left( \purple{\bullet \bullet \bullet} \right) \neq 0 \ . \] The proof of this result involves computations identical to the ones of \cite[Section 6]{MarklShnider06}, which we do not include for the sake of concision. \end{proof} This proposition implies in particular that a diagonal on the 2-colored operad \ensuremath{\mathrm{A}_\infty^2}\ is never coassociative. In the specific cases of \ensuremath{\triangle^{K}}\ and \ensuremath{\triangle^{J}} , we moreover compute that \begin{align*} &\left( (\ensuremath{\mathrm{id}} \otimes \ensuremath{\triangle^{K}} ) \ensuremath{\triangle^{K}} - (\ensuremath{\triangle^{K}} \otimes \ensuremath{\mathrm{id}}) \ensuremath{\triangle^{K}} \right) \left( \black{ \bullet \bullet \bullet \bullet } \right) \\ = \ & - \partial \left( \black{ \black{ \bullet \bullet \bullet } \bullet } \otimes \black{ \bullet \black{ \bullet \bullet } \bullet } \otimes \black{ \bullet \black{ \bullet \bullet \bullet } } \right) \ , \end{align*} and that \begin{align*} &\left( (\ensuremath{\mathrm{id}} \otimes \ensuremath{\triangle^{J}} ) \ensuremath{\triangle^{J}} - (\ensuremath{\triangle^{J}} \otimes \ensuremath{\mathrm{id}}) \ensuremath{\triangle^{J}} \right) \left( \purple{\bullet \bullet \bullet} \right) \\ = \ &\partial \left( \blue{\bullet \bullet \bullet} \otimes \purple{ \bullet \blue{\bullet \bullet}} \otimes \red{\bullet \purple{\bullet \bullet}} - \purple{\blue{\bullet \bullet} \bullet} \otimes \red{\purple{\bullet \bullet} \bullet} \otimes \red{\bullet \bullet \bullet} \right) \ .
\end{align*} Given two \ensuremath{\mathrm{A}_\infty} -algebras $A$ and $B$, we would also like to know whether the \ensuremath{\mathrm{A}_\infty} -algebra structure on $B \otimes A$ can simply be obtained from the maps defining the \ensuremath{\mathrm{A}_\infty} -algebra structure on $A \otimes B$ \[ m_n^{A \otimes B} : ( A \otimes B)^{\otimes n} \rightarrow A \otimes B \] by rearranging $(A \otimes B)^{\otimes n}$ into $(B \otimes A)^{\otimes n}$ and $A \otimes B$ into $B \otimes A$. In operadic terms, this amounts to asking whether the diagonal on \ensuremath{\mathrm{A}_\infty}\ is cocommutative. \begin{proposition} \label{prop:not-cocomm} The diagonals \ensuremath{\triangle^{K}}\ and \ensuremath{\triangle^{J}}\ are not cocommutative. \end{proposition} \begin{proof} Indeed, we compute that \[ \left( \ensuremath{\triangle^{K}} - \tau \ensuremath{\triangle^{K}} \right) \left( \black{ \bullet \bullet \bullet } \right) = \partial \left( \black{ \bullet \bullet \bullet } \otimes \black{ \bullet \bullet \bullet } \right) \ , \] where $\tau$ acts by the permutation $(1 \ 2)$ on the operad $\ensuremath{\mathrm{A}_\infty} \otimes \ensuremath{\mathrm{A}_\infty}$. We also compute that \[ \left( \ensuremath{\triangle^{J}} - \tau \ensuremath{\triangle^{J}} \right) \left( \purple{\bullet \bullet} \right) = \partial \left( \purple{\bullet \bullet} \otimes \purple{\bullet \bullet} \right) \ . \] \end{proof} \noindent We conjecture in fact that \cref{prop:not-cocomm} holds for any diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ and for any diagonal on the operadic bimodule \Minf . \subsubsection{Compatibility with the composition} \label{sss:comp-composition} We would finally like to know whether the tensor product is functorial with respect to the composition of \ensuremath{\mathrm{A}_\infty} -morphisms. In other words, we would like to know whether, given four \ensuremath{\mathrm{A}_\infty} -morphisms $F_1 : A_1 \rightsquigarrow B_1$, $G_1 : B_1 \rightsquigarrow C_1$, $F_2 : A_2 \rightsquigarrow B_2$ and $G_2 : B_2 \rightsquigarrow C_2$, the following equality holds \[ ( G_1 \circ F_1) \otimes (G_2 \circ F_2) = (G_1 \otimes G_2) \circ (F_1 \otimes F_2) \ . \] In operadic terms, this amounts to asking whether the diagonal $\triangle$ on \Minf\ together with the composition morphism \ensuremath{\mathrm{comp}}\ of \cref{sss:operad-ainf-operadic-bimod-minf} satisfies the following equality \[ ( \ensuremath{\mathrm{comp}} \otimes \ensuremath{\mathrm{comp}} ) \triangle = (\triangle \circ_{\ensuremath{\mathrm{A}_\infty}} \triangle ) \ensuremath{\mathrm{comp}} \ . \] \begin{proposition} \label{thm:nofunctorial} There is no diagonal on the operadic bimodule \Minf\ which is compatible with the composition of \ensuremath{\mathrm{A}_\infty} -morphisms. \end{proposition} \begin{proof} Let $\triangle$ be a diagonal $\Minf \rightarrow \Minf \otimes \Minf$. The compatibility with the differential implies that $\triangle$ is necessarily of the form \[ \triangle(\purple{\bullet }) = \purple{\bullet } \otimes \purple{\bullet } \] and \[ \begin{matrix} \triangle (\purple{\bullet \bullet }) &= &\alpha (\blue{\bullet \bullet }\otimes \purple{\bullet \bullet } + \purple{\bullet \bullet }\otimes\red{\bullet \bullet }) \\ & &+ \ (1-\alpha)(\red{\bullet \bullet }\otimes \purple{\bullet \bullet }+\purple{\bullet \bullet}\otimes \blue{\bullet \bullet}) \ , \end{matrix} \] where $\alpha \in \mathbb{Z}$.
We compute that if the equality \[ ( \ensuremath{\mathrm{comp}} \otimes \ensuremath{\mathrm{comp}} ) \triangle ( \purple{\bullet \bullet} ) = (\triangle \circ_{\ensuremath{\mathrm{A}_\infty}} \triangle ) \ensuremath{\mathrm{comp}} ( \purple{\bullet \bullet} ) \] holds, we necessarily have that $\alpha = 0$ and $\alpha =1$, which is impossible. \end{proof} \noindent In the case of the diagonals \ensuremath{\triangle^{K}}\ and \ensuremath{\triangle^{J}} , we compute that \[ \left( ( \ensuremath{\mathrm{comp}} \otimes \ensuremath{\mathrm{comp}} ) \ensuremath{\triangle^{J}} - (\ensuremath{\triangle^{J}} \circ_{\ensuremath{\mathrm{A}_\infty}} \ensuremath{\triangle^{J}} ) \ensuremath{\mathrm{comp}} \right) \left( \arbreopdeuxmorph \right) = \partial \left( \arbreopcompun \otimes \arbreopcompdeux \right) \ . \] \subsubsection{Homotopy properties} While coassociativity, cocommutativity and compatibility with the composition are not satisfied by the diagonals \ensuremath{\triangle^{K}}\ and \ensuremath{\triangle^{J}} , we will now prove that a diagonal on the 2-colored operad \ensuremath{\mathrm{A}_\infty^2}\ always satisfies these properties up to homotopy. We use the notion of homotopy between morphisms of 2-colored operads as defined in \cite[Section 3.10]{MSS}. \begin{proposition} \label{th:homotopy-properties} Let $\triangle$ be a diagonal on the 2-colored operad \ensuremath{\mathrm{A}_\infty^2} . \begin{enumerate} \item The morphisms of operads $(\triangle \otimes \ensuremath{\mathrm{id}} ) \triangle$ and $(\ensuremath{\mathrm{id}} \otimes \triangle) \triangle$ are homotopic. In other words, a diagonal on \ensuremath{\mathrm{A}_\infty^2}\ is always coassociative up to homotopy. \item The morphisms of operads $\triangle$ and $\tau \triangle$ are homotopic. In other words, a diagonal on \ensuremath{\mathrm{A}_\infty^2}\ is always cocommutative up to homotopy. \item The morphisms $( \ensuremath{\mathrm{comp}} \otimes \ensuremath{\mathrm{comp}} ) \triangle$ and $( \triangle \circ_{\ensuremath{\mathrm{A}_\infty}} \triangle ) \ensuremath{\mathrm{comp}}$ are homotopic. In other words, a diagonal on \ensuremath{\mathrm{A}_\infty^2}\ is always compatible with the composition of \ensuremath{\mathrm{A}_\infty} -morphisms up to homotopy. \end{enumerate} \end{proposition} \begin{proof} The proof of this proposition is a simple adaptation of the results of \cite[Section 2]{MarklShnider06} in the context of 2-colored dg operads, applied to the minimal model \ensuremath{\mathrm{A}_\infty^2}\ for the 2-colored dg operad $As^2$ encoding pairs of dg algebras together with morphisms between them. \end{proof} While \cref{thm:nofunctorial} shows that it is not possible to endow the category $\infAalg$ with a symmetric monoidal category structure using the viewpoint of diagonals, \cref{th:homotopy-properties} exhibits a first level of homotopies that could be involved in the definition of some kind of \textit{homotopy symmetric monoidal} category structure on \infAalg . This question will be studied in a future work by D. Poliakova and the two authors of this paper. As a first step towards solving that problem, we will inspect in particular which higher coherent homotopies arise from the lack of coassociativity of $\triangle^{K_n}$ and $\triangle^{J_n}$ at the level of polytopes.
\section{Further applications} \label{sec:V} We first prove that a diagonal on the dg operad \ensuremath{\mathrm{A}_\infty}\ is equivalent to a retraction of the bar-cobar resolution $\AAinf$ onto the operad \ensuremath{\mathrm{A}_\infty}\ . We then explain how to associate a convolution \ensuremath{\mathrm{A}_\infty} -algebra to an \ensuremath{\mathrm{A}_\infty} -coalgebra and an \ensuremath{\mathrm{A}_\infty} -algebra, as well as \ensuremath{\mathrm{A}_\infty} -morphisms between convolution \ensuremath{\mathrm{A}_\infty} -algebras, using diagonals on \ensuremath{\mathrm{A}_\infty}\ and \Minf . We finally describe two possible applications of our results in symplectic topology: in the context of Heegaard Floer homology, and to study tensor products of Fukaya categories/algebras and \ensuremath{\mathrm{A}_\infty} -functors between them. \subsection{Retractions and diagonals} \label{ss:retract-diag} Recall that the operad \ensuremath{\mathrm{A}_\infty}\ is the minimal model $\ensuremath{\mathrm{A}_\infty} =\Omega As^{\text{!`}}$ of the dg operad $As$ encoding associative algebras. Another cofibrant replacement of the operad $As$ is given by the bar-cobar (or Boardman--Vogt) resolution $\AAinf := \Omega B As$, which is defined as the quasi-free operad \[ \AAinf := \left( \mathcal{T} (\premiertermecobarbarA , \premiertermecobarbarD , \premiertermecobarbarB , \premiertermecobarbarC , \cdots , \mathrm{PT_n} \text{\hspace{2pt}}, \cdots ) , \partial \right) \ , \] where $\mathrm{PT_n}$ is the set of planar rooted trees of arity $n$ and the degree of a tree is defined as the number of its internal edges. We refer to \cite[Section 9.3]{LodayVallette12} for a complete study of the operad $\AAinf$, and in particular for a definition of its differential. There exists an explicit embedding of dg operads $\ensuremath{\mathrm{A}_\infty} \rightarrow \AAinf$, as constructed in \cite[Section 4]{MarklShnider06} and in \cite[Section 1.3.1.5]{mazuir-I}. The problem of the construction of an explicit morphism of dg operads $\AAinf \rightarrow \ensuremath{\mathrm{A}_\infty}$ is more complicated and is the subject of the following proposition. \begin{definition}[Retraction] A morphism of dg operads $\AAinf \rightarrow \ensuremath{\mathrm{A}_\infty}$ sending $\premiertermecobarbarA$ to $\premiertermecobarbarA$ will be called a \emph{retraction of the operad $\AAinf$ onto the operad \ensuremath{\mathrm{A}_\infty} }. \end{definition} \begin{proposition} \label{prop:retract} The datum of a diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ is equivalent to the datum of a retraction $r : \AAinf \rightarrow \ensuremath{\mathrm{A}_\infty}$. \end{proposition} \begin{proof} We apply the general theory of operadic twisting morphisms \cite[Section 6.4]{LodayVallette12} to prove the following sequence of isomorphisms: \begin{eqnarray*} \ensuremath{\mathrm{Hom}}_{\mathsf{Op}} (\Omega As^{\text{!`}}, \Omega As^{\text{!`}} \otimes \Omega As^{\text{!`}}) & \cong & \mathrm{Tw}(As^{\text{!`}}, \Omega As^{\text{!`}} \otimes \Omega As^{\text{!`}}) \\ & \cong & \mathrm{Tw}(B As,\Omega As^{\text{!`}}) \\ & \cong & \ensuremath{\mathrm{Hom}}_{\mathsf{Op}} (\Omega B As, \Omega As^{\text{!`}}) \ . \end{eqnarray*} The first and last isomorphisms are given by the bar-cobar adjunction. We thus only need to explain the second isomorphism.
A twisting morphism $As^{\text{!`}}\to \Omega As^{\text{!`}} \otimes \Omega As^{\text{!`}}$ is by definition a Maurer--Cartan element in the convolution pre-Lie algebra associated to the convolution dg operad $\ensuremath{\mathrm{Hom}} (As^{\text{!`}}, \Omega As^{\text{!`}} \otimes \Omega As^{\text{!`}})$. This convolution dg operad is in turn isomorphic to the desuspension $\mathcal{S}^{-1}(\Omega As^{\text{!`}} \otimes \Omega As^{\text{!`}})$. Since the cooperad $As^{\text{!`}}$ is 1-dimensional in every arity, and since the arity-wise linear dual dg cooperad of the desuspended dg operad $\mathcal{S}^{-1}(\Omega As^{\text{!`}})$ is isomorphic to the bar construction $B As$, we have that the desuspension $\mathcal{S}^{-1}(\Omega As^{\text{!`}} \otimes \Omega As^{\text{!`}})$ is isomorphic to the convolution dg operad $\ensuremath{\mathrm{Hom}} (B As, \Omega As^{\text{!`}})$. We hence have the following isomorphisms of dg operads \[ \ensuremath{\mathrm{Hom}} (As^{\text{!`}}, \Omega As^{\text{!`}} \otimes \Omega As^{\text{!`}}) \cong \mathcal{S}^{-1}(\Omega As^{\text{!`}} \otimes \Omega As^{\text{!`}}) \cong \ensuremath{\mathrm{Hom}} (B As, \Omega As^{\text{!`}}) \ . \] This implies an isomorphism on the level of the Maurer--Cartan elements of the associated dg pre-Lie algebras, that is \[ \mathrm{Tw}(As^{\text{!`}}, \Omega As^{\text{!`}} \otimes \Omega As^{\text{!`}}) \cong \mathrm{Tw}(B As,\Omega As^{\text{!`}}) \ . \] We finally check that the condition $\triangle (\premiertermecobarbarA) = \premiertermecobarbarA \otimes \premiertermecobarbarA$ is equivalent to the condition $r(\premiertermecobarbarA)=\premiertermecobarbarA$. \end{proof} \cref{prop:retract} clarifies in particular the construction of the diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ given in \cite{MarklShnider06}. The operad $\AAinf$ can indeed be seen as the cellular chains on the cubical realization of the associahedra \cite[Section 9.3.1]{LodayVallette12}. It comes with an elementary diagonal $\AAinf \rightarrow \AAinf \otimes \AAinf$ defined using the Serre cubical diagonal of \cite{Serre51}. M. Markl and S. Shnider then define a retraction $r:\AAinf \rightarrow \ensuremath{\mathrm{A}_\infty}$ and deduce a diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ as the composite \[ \ensuremath{\mathrm{A}_\infty} \longrightarrow \AAinf \longrightarrow \AAinf \otimes \AAinf \overset{r \otimes r}{\longrightarrow} \ensuremath{\mathrm{A}_\infty} \otimes \ensuremath{\mathrm{A}_\infty} \ . \] Their choice of retraction recovers the diagonal constructed directly on the level of the associahedra in \cite[Theorem 2]{MTTV19}. A similar proof would however not adapt to the case of the multiplihedra, as they are not simple polytopes hence do not admit a cubical realization. \begin{remark} \label{rem:Morse} As observed in \cite[Remark 1.6]{LA21}, the methods used to construct our cellular approximation of the diagonal could be related to the Fulton--Sturmfels formula \cite[Theorem 4.2]{FultonSturmfels97}, appearing in the study of the intersection theory on toric varieties. We also expect an interpretation of \cref{prop:retract} in terms of Morse theory, in the vein of \cite{FriedmanMardonesSinha21,Frankland07}. There should also be an interpretation in terms of discrete Morse theory as in \cite[Section 1.1.4]{Thorngren18} for the case of the standard simplices. 
\end{remark} \subsection{Convolution \ensuremath{\mathrm{A}_\infty} -algebra} \label{ss:conv-ainf-alg} \subsubsection{Standard convolution algebra} Given a dg algebra $A$ and a dg coalgebra $C$, recall from \cite[Section 1.6]{LodayVallette12} that one can define the \textit{convolution algebra} of $C$ and $A$ as the dg algebra $(\ensuremath{\mathrm{Hom}} (C,A) , [ \partial , \cdot ] , \star)$, where $\ensuremath{\mathrm{Hom}} (C,A)$ is the dg module of maps $C \rightarrow A$, endowed with the convolution product $f \star g := \mu_A \circ ( f \otimes g) \circ \Delta_C$. The convolution algebra construction is in fact functorial, i.e. fits into a bifunctor $\mathsf{(dg-cog)^{op}} \times \mathsf{dg-alg} \rightarrow \mathsf{dg-alg}$ defined on objects as $(C,A) \mapsto \ensuremath{\mathrm{Hom}} (C,A)$. A Maurer-Cartan element $\alpha$ of $\ensuremath{\mathrm{Hom}} (C,A)$, i.e. a map $\alpha : C \rightarrow A$ such that $[ \partial , \alpha ] + \alpha \star \alpha = 0$, is then called a \emph{twisting morphism}. Twisting morphisms define twisted differentials on the tensor product $C \otimes A$ via the formula \[ \partial_\alpha := \partial_{C \otimes A} + (\ensuremath{\mathrm{id}} \otimes \mu_A ) ( \ensuremath{\mathrm{id}} \otimes \alpha \otimes \ensuremath{\mathrm{id}} ) ( \Delta_C \otimes \ensuremath{\mathrm{id}} ) \ . \] Twisted differentials appear in the computation of the singular homology of fiber spaces \cite{Brown59}. Given a fibration $F \rightarrow X \rightarrow B$ satisfying some mild assumptions, the singular homology of $X$ can then be computed as the homology of the tensor product $C_*(B) \otimes C_*(F)$ endowed with a twisted differential, where $C_*(F)$ is seen as a dg module over the dg algebra $C_*(\Omega B)$. \subsubsection{Convolution \ensuremath{\mathrm{A}_\infty} -algebra} \label{sss:conv-ainf-alg} One defines an \textit{\ensuremath{\mathrm{A}_\infty} -coalgebra} structure on a dg module $C$ to be a morphism of dg operads $\ensuremath{\mathrm{A}_\infty} \rightarrow \ensuremath{\mathrm{coEnd}}_C$, where $\ensuremath{\mathrm{coEnd}}_C(n) = \ensuremath{\mathrm{Hom}} ( C , C^{\otimes n} )$. Put differently, it is the structure dual to the structure of \ensuremath{\mathrm{A}_\infty} -algebra, i.e. it corresponds to a collection of operations $c_n : C \rightarrow C^{\otimes n}$ of degree $n-2$ satisfying the equations obtained by inverting inputs and outputs in the equations for \ensuremath{\mathrm{A}_\infty} -algebras. The notion of an \ensuremath{\mathrm{A}_\infty} -morphism between \ensuremath{\mathrm{A}_\infty} -coalgebras is defined in a similar fashion: either in terms of operations $f_n : C \rightarrow D^{\otimes n}$ of degree $n-1$ and satisfying the equations dual to the equations for \ensuremath{\mathrm{A}_\infty} -morphisms, or equivalently as a morphism of dg operadic bimodules $\Minf \rightarrow \ensuremath{\mathrm{coHom}}^{C_1}_{C_2}$. Our results allow us to extend the convolution algebra construction when $C$ is an \ensuremath{\mathrm{A}_\infty} -coalgebra and $A$ is an \ensuremath{\mathrm{A}_\infty} -algebra. \begin{proposition} \label{prop:convolution-ainf} $ $ \begin{enumerate}[leftmargin=*] \item Let $C$ be an \ensuremath{\mathrm{A}_\infty} -coalgebra and $A$ be an \ensuremath{\mathrm{A}_\infty} -algebra. A diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ yields an \ensuremath{\mathrm{A}_\infty} -algebra structure on the dg module $(\ensuremath{\mathrm{Hom}} (C,A) , \partial)$. 
We call this \ensuremath{\mathrm{A}_\infty} -algebra the \emph{convolution \ensuremath{\mathrm{A}_\infty} -algebra of $C$ and $A$}. \item Let $F : A_1 \rightsquigarrow A_2$ be an \ensuremath{\mathrm{A}_\infty} -morphism between two \ensuremath{\mathrm{A}_\infty} -algebras $A_1$ and $A_2$ and $G : C_2 \rightsquigarrow C_1$ be an \ensuremath{\mathrm{A}_\infty} -morphism between two \ensuremath{\mathrm{A}_\infty} -coalgebras $C_2$ and $C_1$. A diagonal on the operad \Minf\ yields an \ensuremath{\mathrm{A}_\infty} -morphism between the convolution \ensuremath{\mathrm{A}_\infty} -algebras $\ensuremath{\mathrm{Hom}} (C_1,A_1)$ and $\ensuremath{\mathrm{Hom}} (C_2,A_2)$. \end{enumerate} \end{proposition} \begin{proof} $ $ \begin{enumerate}[leftmargin=*] \item Given a diagonal $\ensuremath{\mathrm{A}_\infty} \rightarrow \ensuremath{\mathrm{A}_\infty} \otimes \ensuremath{\mathrm{A}_\infty}$, the following composite of morphism of operads defines the \ensuremath{\mathrm{A}_\infty} -algebra structure on $\ensuremath{\mathrm{Hom}}(C,A)$ : \[ \ensuremath{\mathrm{A}_\infty} \to \ensuremath{\mathrm{A}_\infty} \otimes \ensuremath{\mathrm{A}_\infty} \to \ensuremath{\mathrm{coEnd}}_C\otimes \ensuremath{\mathrm{End}}_A \to \ensuremath{\mathrm{End}}_{\ensuremath{\mathrm{Hom}}(C,A)} \ , \] where the morphism of dg operads $\ensuremath{\mathrm{coEnd}}_C \otimes \ensuremath{\mathrm{End}}_A \to \ensuremath{\mathrm{End}}_{\ensuremath{\mathrm{Hom}}(C,A)}$ is straightforward to define. \item Given a diagonal $\Minf \rightarrow \Minf \otimes \Minf$, we consider in a similar fashion the composite of morphism of operadic bimodules \[ \Minf \to \Minf \otimes \Minf \to \ensuremath{\mathrm{coHom}}^{C_2}_{C_1} \otimes \ensuremath{\mathrm{Hom}}^{A_1}_{A_2} \to \ensuremath{\mathrm{Hom}}^{\ensuremath{\mathrm{Hom}}(C_1,A_1)}_{\ensuremath{\mathrm{Hom}}(C_2, A_2)} \ . \] \end{enumerate} \end{proof} \begin{proposition} \label{coroll:nobifunctor} For any diagonal on $\ensuremath{\mathrm{A}_\infty}$, and for any diagonal on $\Minf$, the convolution $\ensuremath{\mathrm{A}_\infty}$-algebra $\ensuremath{\mathrm{Hom}}(C,A)$ does not define a bifunctor $(\infAcog)^{\mathrm{op}} \times \infAalg \rightarrow \infAalg$. \end{proposition} \begin{proof} This is a direct corollary to \cref{thm:nofunctorial}. \end{proof} \cref{prop:convolution-ainf} implies in particular that for an \ensuremath{\mathrm{A}_\infty} -coalgebra $C$ and an \ensuremath{\mathrm{A}_\infty} -algebra $A$, it is still possible to define the notion of a \textit{twisting morphism} $\alpha : C \rightarrow A$ as a Maurer-Cartan element in the \ensuremath{\mathrm{A}_\infty} -algebra $\ensuremath{\mathrm{Hom}} (C,A)$, see \cite[Equation 1, p.8]{dotsenko2018twisting} for instance. It also implies that the \ensuremath{\mathrm{A}_\infty} -morphism $\ensuremath{\mathrm{Hom}} (C_1,A_1) \rightsquigarrow \ensuremath{\mathrm{Hom}} (C_2,A_2)$ defined by the \ensuremath{\mathrm{A}_\infty} -morphism $F : A_1 \rightsquigarrow A_2$ and $G : C_2 \rightsquigarrow C_1$, sends a twisting morphism $C_1 \rightarrow A_1$ to a twisting morphism $C_2 \rightarrow A_2$. We will use this key property in order to pursue the work of Brown \cite{Brown59} and \cite{Proute86} on the homology of fibered spaces in a forthcoming paper. \subsubsection{Diagonals as twisting morphisms} \label{sec:RNW} The results of \cref{sss:conv-ainf-alg} can be interpreted in a more general framework, developed by D. Robert-Nicoud and F. Wierstra in \cite{RobertNicoudWierstraI,RobertNicoudWierstraII}. 
\begin{proposition} \label{coroll:twisting} The datum of a diagonal on $\ensuremath{\mathrm{A}_\infty}$ is equivalent to the datum of a twisting morphism $\alpha \in \mathrm{Tw}(B As,\Omega As^{\text{!`}})$ sending $\premiertermecobarbarA$ to $\premiertermecobarbarA$. \end{proposition} \begin{proof} This result was proven in the proof of \cref{prop:retract}. \end{proof} Setting $\mathcal{C}=B As$ and $\mathcal{P}=\Omega As^{\text{!`}}$ and working in the context of non-symmetric operads where the operad $\ensuremath{\mathrm{L}_\infty}$ of \cite{RobertNicoudWierstraI,RobertNicoudWierstraII} is replaced by the operad $\ensuremath{\mathrm{A}_\infty}$, we recover \cref{coroll:twisting} (and thus \cref{prop:retract}) via \cite[Theorem 7.1]{RobertNicoudWierstraI} and Point~(1) of \cref{prop:convolution-ainf} via \cite[Theorem 4.1]{RobertNicoudWierstraI}. We denote by $\Aalg$ the category of $\ensuremath{\mathrm{A}_\infty}$-algebras and their \emph{strict} morphisms \cite[Section 10.2.1]{LodayVallette12}. It is shown in \cite[Corollary 5.4]{RobertNicoudWierstraI} that the assignments \begin{eqnarray} \ensuremath{\mathrm{Hom}}(-,\mathrm{id}) &:& (\infAcog)^{\mathrm{op}} \times \Aalg \to \Aalg \label{eq:bif1} \\ \ensuremath{\mathrm{Hom}}(\mathrm{id}, -) &:& (\Acog)^{\mathrm{op}} \times \infAalg \to \Aalg \label{eq:bif2} \end{eqnarray} given by the convolution $\ensuremath{\mathrm{A}_\infty}$-algebra extend to bifunctors. The authors also show that these two bifunctors do \emph{not} extend to a bifunctor \begin{eqnarray} \ensuremath{\mathrm{Hom}}(-,-) &:& \mathsf{(\infAcog)^{op}} \times \infAalg \to \infAalg \label{eq:bifunctor} \end{eqnarray} in general, since this assignment is not compatible with the composition of $\ensuremath{\mathrm{A}_\infty}$-morphisms \cite[Theorem 6.6]{RobertNicoudWierstraI}. Point~(2) of \cref{prop:convolution-ainf} allows us to define the assignment (\ref{eq:bifunctor}) directly, and \cref{coroll:nobifunctor} can be seen as a stronger version of \cite[Theorem 6.6]{RobertNicoudWierstraI}, in the special case of $\ensuremath{\mathrm{A}_\infty}$-algebras. The main result of \cite{RobertNicoudWierstraII} says that if a twisting morphism $\alpha \in \mathrm{Tw}(B As,\Omega As^{\text{!`}})$ is Koszul, then the possible compositions of the two bifunctors (\ref{eq:bif1}) and (\ref{eq:bif2}) are homotopic and that they extend to a bifunctor on the level of the homotopy categories \cite[Theorem 3.6 and Corollary 3.8]{RobertNicoudWierstraII}. This should be seen as a statement analogous to Point (3) of \cref{th:homotopy-properties}. It would be interesting to know how the results of \cite{RobertNicoudWierstraI,RobertNicoudWierstraII} can be interpreted from the viewpoint of diagonals, and if they admit an interpretation on the level of polytopes. \subsection{Diagonals in symplectic topology} \label{ss:diag-symp} \subsubsection{The work of Lipshitz, {Oszv\'ath} and Thurston} In \cite{LOT20}, R. Lipshitz, P. Oszv\'ath and D. Thurston also study diagonals on the dg operad \ensuremath{\mathrm{A}_\infty}\ and on the dg operadic bimodule \Minf . They however work exclusively on the dg level, constructing abstract diagonals by using the fact that \ensuremath{\mathrm{A}_\infty}\ and \Minf\ are contractible, and do not provide explicit formulae for these diagonals as in \cref{prop:diagonal-polytopale-a-infini} and \cref{prop:diagonal-polytopale-m-infini}. The goal of their work is to study bordered Heegaard Floer homology of 3-manifolds. 
Given a 3-manifold $Y$ with two boundary components, they aim to construct a \emph{bimodule twisted complex} $CFDD^-(Y)$, also called a \emph{type $DD$-bimodule}. The definition of such an object uses a diagonal on the dg operad \ensuremath{\mathrm{A}_\infty} . A diagonal on \Minf\ is then needed in order to relate the categories of bimodules defined with different diagonals on \ensuremath{\mathrm{A}_\infty} , which in turn is needed for properties like the associativity of tensor products. They also expect that diagonals on \Minf\ could be needed in a distant future to define \ensuremath{\mathrm{A}_\infty} -morphisms between bimodule twisted complexes arising from a cobordism between 3-manifolds $Y_1$ and $Y_2$. Thus, the explicit formula for the diagonal defined in this paper could be used to compute invariants of 3 and 4-manifolds, via implementation in a computer program for instance. \subsubsection{K\"unneth theorems in Lagrangian Floer theory} \label{sss:amorim-fukaya} Let $(M,\omega)$ be a closed symplectic manifold, i.e. a closed manifold $M$ together with a closed non-degenerate 2-form $\omega$ on $M$. The \emph{Fukaya category} $\mathrm{Fuk}(M,\omega)$ of $(M,\omega)$ is defined to be the (curved filtered unital) \ensuremath{\mathrm{A}_\infty} -category whose objects are (unobstructed) Lagrangian submanifolds of $M$ and higher compositions are defined by counting pseudo-holomorphic disks with Lagrangian boundary conditions and marked points on their boundary, as represented in \cref{fig:pseudo-hol-disk-bord-lagrang}. We refer for instance to~\cite{smith-prolegomenon}~and~\cite{auroux-fukaya} for introductions to this subject. Given a closed spin Lagrangian submanifold $L \subset M$, K. Fukaya also constructs in \cite{fukaya-cyclic-symmetry} a strictly unital \ensuremath{\mathrm{A}_\infty} -algebra $\mathcal{F}(L)$, the \emph{Fukaya algebra} of the Lagrangian $L$, whose higher multiplications are again defined by counting pseudo-holomorphic disks. 
\begin{figure}[h] \centering \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture}[scale = 0.5] \draw (0,0) circle (3) ; \draw (360/20 : 3) node[scale = 0.1]{\pointbullet}; \draw (3*360/20 : 3) node[scale = 0.1]{\pointbullet}; \draw (9*360/20 : 3) node[scale = 0.1]{\pointbullet}; \draw (7*360/20 : 3) node[scale = 0.1]{\pointbullet}; \draw (270 : 3) node[scale = 0.1]{\pointbullet}; \draw (360/20 : 3) node[right,scale=0.7]{$x_n$}; \draw (3*360/20 : 3) node[above=3pt,scale=0.7]{$x_{n-1}$}; \draw (9*360/20 : 3) node[left,scale=0.7]{$x_1$}; \draw (7*360/20 : 3) node[above=3pt,scale=0.7]{$x_2$}; \draw (270 : 3) node[below=3pt,scale=0.7]{$y$}; \draw (2*360/20 : 3) node[above right,scale=0.8]{$L_{n-1}$}; \draw (8*360/20 : 3) node[above left,scale=0.8]{$L_1$}; \draw (11.5*360/20 : 3) node[below left,scale=0.8]{$L_0$}; \draw (18.5*360/20 : 3) node[below right,scale=0.8]{$L_n$}; \draw[densely dotted] (0,3.3) arc (90:108:3.3) ; \draw[densely dotted] (0,3.3) arc (90:72:3.3) ; \node at (0,0) {$M$} ; \end{tikzpicture} \end{subfigure} \begin{subfigure}{0.4\textwidth} \centering \begin{tikzpicture}[scale = 0.5] \draw[fill,blue!70] (0,0) circle (3) ; \draw (0,0) circle (3) ; \draw (360/20 : 3) node[scale = 0.1]{\pointbullet}; \draw (3*360/20 : 3) node[scale = 0.1]{\pointbullet}; \draw (9*360/20 : 3) node[scale = 0.1]{\pointbullet}; \draw (7*360/20 : 3) node[scale = 0.1]{\pointbullet}; \draw (360/20 : 3) node[right,scale=0.7]{$x_n$}; \draw (3*360/20 : 3) node[above=3pt,scale=0.7]{$x_{n-1}$}; \draw (9*360/20 : 3) node[left,scale=0.7]{$x_1$}; \draw (7*360/20 : 3) node[above=3pt,scale=0.7]{$x_2$}; \draw (270 : 3) node[below=3pt,scale=0.7]{$y$}; \draw (2*360/20 : 3) node[above right,scale=0.8]{$L_{n-1}$}; \draw (8*360/20 : 3) node[above left,scale=0.8]{$L_1$}; \draw (11.5*360/20 : 3) node[below left,scale=0.8]{$L_0$}; \draw (18.5*360/20 : 3) node[below right,scale=0.8]{$L_n$}; \draw[densely dotted] (0,3.3) arc (90:108:3.3) ; \draw[densely dotted] (0,3.3) arc (90:72:3.3) ; \node at (0,2) {$M_0$} ; \draw[fill,red!50] (0,-1) circle (2) ; \draw (0,-1) circle (2) ; \node at (0,-1) {$M_1$} ; \draw (30 : 2) + (0,-1) node[xshift=-0.3cm,scale=0.8]{$\mathcal{L}_{01}$}; \draw (270 : 3) node[scale = 0.1]{\pointbullet}; \end{tikzpicture} \end{subfigure} \caption{On the left, a pseudo-holomorphic disk defining the \ensuremath{\mathrm{A}_\infty} -category structure on $\mathrm{Fuk}(M)$. On the right, a pseudo-holomorphic quilted disk defining an \ensuremath{\mathrm{A}_\infty} -functor $\mathrm{Fuk}(M_0)\rightsquigarrow\mathrm{Fuk}(M_1)$} \label{fig:pseudo-hol-disk-bord-lagrang} \end{figure} In \cite{amorim-lagrangian}, L. Amorim shows that given two symplectic manifolds $M_1$ and $M_2$ together with Lagrangians $L_i \subset M_i$, the Fukaya algebra of the product Lagrangian $L_1 \times L_2$ is quasi-isomorphic to the tensor product of their Fukaya algebras, i.e. $\mathcal{F}(L_1 \times L_2) \simeq \mathcal{F}(L_1) \otimes \mathcal{F}(L_2)$. His proof relies on a theorem that he proves in~\cite{amorim-tensor}, giving a criterion for an \ensuremath{\mathrm{A}_\infty} -algebra $C$ to be quasi-isomorphic to the tensor \ensuremath{\mathrm{A}_\infty} -algebra $A \otimes B$ (see \cref{def:tensor-product-ainf-alg}) of two commuting \ensuremath{\mathrm{A}_\infty} -subalgebras $A \subset C$ and $B \subset C$, which he then applies to the two \ensuremath{\mathrm{A}_\infty} -subalgebras $\mathcal{F}(L_1) \subset \mathcal{F}(L_1 \times L_2)$ and $\mathcal{F}(L_2) \subset \mathcal{F}(L_1 \times L_2)$. 
Fukaya generalizes this result in \cite{fukaya-unobstructed}, working this time on the level of Fukaya categories. He proves that for two closed symplectic manifolds $M_0$ and $M_1$ there exists a unital \ensuremath{\mathrm{A}_\infty} -functor \[ \mathrm{Fuk}(M_0) \otimes \mathrm{Fuk}(M_1) \longrightarrow \mathrm{Fuk}(M_0^- \times M_1) \] which is a homotopy equivalence to its image. Let now $M_0$ and $M_1$ be two compact symplectic manifolds. Define a \emph{Lagrangian correspondence} from $M_0$ to $M_1$ to be a Lagrangian submanifold $\mathcal{L} \subset M_0^{-} \times M_1$. In \cite{mau-wehrheim-woodward}, S. Mau, K. Wehrheim and C. Woodward associate to a Lagrangian correspondence $\mathcal{L}$ (with additional technical assumptions) an \ensuremath{\mathrm{A}_\infty} -functor $\Phi_{\mathcal{L}} : \mathrm{Fuk}(M_0) \rightsquigarrow \mathrm{Fuk}(M_1)$. It is defined on objects as \[ \Phi_{\mathcal{L}} (L_0) := \pi_{M_1} ( L_0 \times_{M_0} \mathcal{L} ) \ , \] where $\pi_{M_1}$ denotes the projection $M_0 \times M_0^{-} \times M_1 \rightarrow M_1$ and $\times_{M_0}$ is the fiber product over $M_0$. The operations of $\Phi_{\mathcal{L}}$ are defined by counting pseudo-holomorphic quilted disks with Lagrangian boundary conditions, seam condition on $\mathcal{L}$ and marked points on their boundary, as represented in \cref{fig:pseudo-hol-disk-bord-lagrang}. The tensor product of $\ensuremath{\mathrm{A}_\infty}$-functors defined in the present paper allows one to consider the $\ensuremath{\mathrm{A}_\infty}$-functor $\Phi_{\mathcal{L}_M} \otimes \Phi_{\mathcal{L}_N}$ associated to a pair of Lagrangian correspondences, raising the following question. \begin{samepage} \begin{problem} \label{problem} Does the diagram \begin{center} \begin{tikzcd}[column sep = 12ex] \mathrm{Fuk}(M_0) \otimes \mathrm{Fuk}(N_0) \arrow[d,squiggly] \arrow[r,"\Phi_{\mathcal{L}_M} \otimes \Phi_{\mathcal{L}_N}",squiggly] & \mathrm{Fuk}(M_1) \otimes \mathrm{Fuk}(N_1) \arrow[d,squiggly] \\ \mathrm{Fuk}(M_0 \times N_0) \arrow[r,below,"\Phi_{\tau ( \mathcal{L}_M \times \mathcal{L}_N)}",squiggly] & \mathrm{Fuk}(M_1 \times N_1) \end{tikzcd} \end{center} commute up to homotopy of \ensuremath{\mathrm{A}_\infty} -functors? \end{problem} \end{samepage} In this diagram, $\mathcal{L}_M \subset M_0^{-} \times M_1$, $\mathcal{L}_N \subset N_0^- \times N_1$ and the symplectomorphism $\tau$ is defined by rearranging the factors of $M_0^{-} \times M_1 \times N_0^- \times N_1$ into the factors of $M_0^{-} \times N_0^- \times M_1 \times N_1$. In other words, we would like to know whether the \emph{algebraic (tensor) product} of geometric \ensuremath{\mathrm{A}_\infty} -functors between Fukaya categories defined in this paper is homotopic to the \ensuremath{\mathrm{A}_\infty} -functor defined by the \emph{geometric product} of the Lagrangian correspondences. We refer to \cite[Section 13]{fukaya-unobstructed} for a discussion on two definitions of the notion of a homotopy between \ensuremath{\mathrm{A}_\infty} -functors. \newpage \bibliographystyle{amsalpha}
2024-02-18T23:40:58.085Z
2022-07-20T02:21:11.000Z
algebraic_stack_train_0000
3,800
25,828
proofpile-arXiv_066-2570
\section{Introduction} Let $\mathcal{A}=(a_{i,j})_{i,j=0}^{\infty}$ be an infinite matrix of real numbers. For $m\in \mathds{Z}_{+}$ fixed, $m \geq 1$, we say that the entry $a_{i,j}$ lies in the $m$-diagonal if $j-i=m$. Obviously, the $0$-diagonal is the usual principal diagonal of $\mathcal{A}$. The matrix $\mathcal{A}$ is an \emph{ $m$-diagonal matrix} if all of its nonzero elements lie in its $m$-diagonal and \emph{lower (upper) triangular matrix} if $a_{i,j}=0$ whenever $j>i$ ($j<i$). The symbols $\mathcal{A}^{\mathsf{T}}$ and $[\mathcal{A}]_n$ denote the transposed matrix and the squared matrix of the first $n$ rows and columns of $\mathcal{A}$, respectively. $\mathcal{I}$ is called the \emph{unit matrix}; its $(i,j)$th entry is $\delta_{i,j}$ where $\delta_{i,j}=0$ if $i\neq j$ and $\delta_{i,i}=1$. $\mathcal{A}$ is called \emph{positive definite of infinite order} if $\det ([\mathcal{A}]_n)>0$ for all $n\geq 1$, where $\det ([\mathcal{A}]_n)$ is the determinant of $[\mathcal{A}]_n$. If $\det ([\mathcal{A}]_n)>0$ for all $1 \leq n\leq k$ and $\det ([\mathcal{A}]_n)=0$ for all $ n> k$, we say that $\mathcal{A}$ is a \emph{positive definite matrix of order $k$}. According to the definitions given in \cite[Ch. II]{Coo50}, if $\mathcal{A}$ and $\mathcal{B}$ are two infinite matrices such that $\mathcal{A}\cdot \mathcal{B}=\mathcal{I}$, then $\mathcal{B}$ is called a right-hand inverse of $\mathcal{A}$, denoted by $\mathcal{A}^{-1}$; and $\mathcal{A}$ is called a left-hand inverse of $\mathcal{B}$, denoted by $^{-1}\mathcal{B}$. The transposed of $\mathcal{A}^{-1}$ ($^{-1}\!\mathcal{A}$ ) and $\mathcal{A}^{m}$\; (the $m$th power of the matrix $\mathcal{A}$, with $m \in \mathds{Z}_{+}$) are denoted by $\mathcal{A}^{-\mathsf{T}}$ ($^{-\mathsf{T}}\!\mathcal{A}$) and $\mathcal{A}^{m\mathsf{T}}$ respectively. We will denote by $\mathcal{U}$ the infinite matrix whose $(i,j)$th entry is $\delta_{i+1,j}$ for $i,j\in \mathds{Z}_{+}$; i.e the \emph{upper (or backward) shift infinite matrix} given by the expression \begin{equation*}\label{UpperShift} \mathcal{U}=\left( \begin{array}{cccc} 0 & 1 & 0 & \cdots \\ 0 & 0 & 1 & \cdots \\ 0 &0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{array} \right). \end{equation*} The matrix $\mathcal{U}^{\mathsf{T}}$ is called the \emph{lower (or forward) shift infinite matrix} and it is easy to check that $\displaystyle \mathcal{U}\cdot \mathcal{U}^{\mathsf{T}}= \mathcal{I};$ i.e. $\mathcal{U}^{\mathsf{T}}=\mathcal{U}^{-1}$, the right hand inverse of $\mathcal{U}$ (c.f. \cite[\S 0.9]{HorJoh13} ). An infinite Hankel matrix is an infinite matrix in which each ascending skew-diagonal from left to right is constant. In other words, $\mathcal{H}=(h_{i,j})_{i,j=0}^{\infty}$ is a Hankel matrix if $h_{i,j+1}=h_{i+1,j}$ for all $i,j \in \mathds{Z}_{+}$ or equivalently if \begin{equation}\label{HankelCond} \mathcal{U} \mathcal{H}-\mathcal{H} \mathcal{U}^{-1}=\mathcal{O}, \end{equation} where $\mathcal{O}$ denote the infinite null matrix. 
If $\{ r_i\}_{i=0}^{\infty}$ is a sequence of real numbers, we denote $\diag{r_i}$ the infinite diagonal matrix whose $i$th main diagonal entry is $r_i$, and by $\han{ r_i}$ the associated Hankel matrix, defined as $$\han{ r_i }=\left( \begin{array}{cccc} r_0 & r_1 & r_2 & \cdots \\ r_1 & r_2 & r_3 & \cdots \\ r_2 & r_3 & r_4 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{array}\right).$$ We say that a matrix $\mathcal{M}=(m_{i,j})_{i,j=0}^{\infty}$ is a \emph{Hankel-Sobolev matrix} if there exists a sequence of Hankel matrices $\left\{\mathcal{H}_k\right\}_{k=0}^{\infty}$ such that \begin{equation}\label{Hankel-Sobolev} \mathcal{M}= \sum_{k=0}^{\infty} \left(\mathcal{U}^{-k}\;\mathcal{D}_k \; \mathcal{H}_{k} \; \mathcal{D}_k\;\mathcal{U}^{k}\right), \end{equation} where $\mathcal{D}_0=\mathcal{I} $ and $\mathcal{D}_k=\diag{\frac{(k+i)!}{i!}}$ for each $k>0$, e.g. $$ \mathcal{D}_1=\begin{pmatrix}1 & 0 & 0 & \cdots\\ 0 & 2 & 0 & \cdots\\ 0 & 0 & 3 &\cdots\\ \vdots &\vdots & \vdots & \ddots \end{pmatrix}, \; \mathcal{D}_2=\begin{pmatrix}2 & 0 & 0 &\cdots\\ 0 & 6 & 0 & \cdots\\ 0 & 0 & 12 & \cdots\\ \vdots &\vdots & \vdots &\ddots\end{pmatrix} \text{ and } \mathcal{D}_3=\begin{pmatrix}6 & 0 & 0 &\cdots\\ 0 & 24 & 0 & \cdots\\ 0 & 0 & 60 & \cdots\\ \vdots &\vdots & \vdots & \ddots\end{pmatrix}. $$ We say that a Hankel-Sobolev matrix $\mathcal{M}$ is of \emph{index} $d \in \mathds{Z}_{+}$ if $\mathcal{H}_d\neq\mathcal{O}$ and $\mathcal{H}_k=\mathcal{O}$ for all $k>d$. Otherwise, we will say that $\mathcal{M}$ is of \emph{infinite index}. Let $\mathcal{M}$ be a Hankel-Sobolev matrix of index $d \in \overline{\mathds{Z}}_{+}=\mathds{Z}_{+} \cup \{\infty\}$. If $\mathcal{H}_k\neq \mathcal{O}$ for all $k<d$, we say that $\mathcal{M}$ is \emph{non-lacunary} and \emph{lacunary} in any other case. Hankel-Sobolev matrices appeared for the first time in \cite{BaLoPi99,Pij98} in close connection with the moment problem for a Sobolev inner product. Some of the properties of this class of infinite matrices have also been studied in \cite{Di08,MaSz00,MaSz01,PiQuiRo11,RoSa03,Zag05}. Let $\mathds{M}$ be the linear space of all infinite matrices of real numbers. For each $\eta \in \mathds{Z}_{+}$ fixed, we denote by $\Phi( \cdot,\eta)$ and $\Psi( \cdot,\eta)$ the operators from $\mathds{M}$ to itself given by the expressions \begin{align}\label{Phi-operator} \Phi( \mathcal{A},\eta):=&\sum_{\ell=0}^{\eta} (-1)^\ell \binom{\eta}{\ell} \mathcal{U}^{\eta-\ell} \mathcal{A} \mathcal{U}^{-\ell} , \\ \label{Psi-operator} \Psi(\mathcal{A}, \eta)=&\sum_{k=0}^{\eta}(-1)^{k}\, \binom{\eta}{k} \; \mathcal{A}^{(\eta-k)\; \mathsf{T}}\;\mathcal{A}^{k} , \end{align} where $\mathcal{A} \in \mathds{M}$ and $\binom{\eta}{\ell} $ denote binomial coefficients. Obviously, $\Phi( \cdot,\eta)$ is a linear operator. Theorem \ref{RelaOperadores} establishes the relation between these two operators. One of the main results of this work is the following intrinsic characterization of the Hankel-Sobolev matrices using the operator $\Phi( \cdot,\eta)$. 
\begin{theorem} \label{ThCharact-HankelS} An infinite matrix $\mathcal{M}$ is a Hankel-Sobolev matrix of index $d \in {\mathds{Z}_{+}}$, if and only if $\mathcal{M}$ is a symmetric matrix and \begin{equation}\label{ThCharact-01} \Phi( \mathcal{M},2d+1)=\mathcal{O} \quad \text{ and } \quad \Phi( \mathcal{M},2d)\neq\mathcal{O}.\end{equation} Moreover, for $k=0,\,1,\dots,\;d$; the Hankel matrix $\mathcal{H}_{d-k}$ in \eqref{Hankel-Sobolev} is given by \begin{equation}\label{ThCharact-02} \mathcal{H}_{d-k}= \frac{(-1)^{d-k}}{(2d-2k)!}\; \Phi( \mathcal{M}_{d-k},2 d-2k), \end{equation} where $\displaystyle \mathcal{M}_d=\mathcal{M}$ and $\displaystyle \mathcal{M}_{d-k}=\mathcal{M}_{d-k+1}- \mathcal{U}^{-d-1+k} \mathcal{D}_{d+1-k} \mathcal{H}_{d+1-k} \mathcal{D}_{d+1-k} \mathcal{U}^{d+1-k}$ for $k=1,\;2,\dots, \; d$. \end{theorem} An infinite matrix $\displaystyle \mathcal{G}=(g_{i,j})_{i,j=0}^{\infty}$ is a \emph{lower Hessenberg infinite matrix} if $g_{i,j}=0$ whenever $j-i>1$ and at least one entry of the $1$-diagonal is different from zero, i.e. \begin{equation}\label{InfLowerHess} \mathcal{G}= \left(\begin{array}{cccccc} g_{0,0} & g_{0, 1} & 0 &\cdots & 0 &\cdots\\ g_{1,0} & g_{1,1} & g_{1, 2} &\cdots & 0 &\cdots\\ \vdots & \vdots & \vdots & \ddots & \vdots & \cdots\\ g_{n,0} & g_{n,1} & g_{n,2} &\cdots & g_{n, n+1} & \cdots\\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{array} \right). \end{equation} Additionally, if $\mathcal{G}$ is a lower Hessenberg infinite matrix and all the entries in the $1$-diagonal are equal to $1$, we say that $\mathcal{G}$ is \emph{monic}. If all the entries of the $1$-diagonal of $\mathcal{G}$ are non-zero, we say that $\mathcal{G}$ is a \emph{non-degenerate lower Hessenberg infinite matrix} (for brevity hereafter referred as \emph{non-degenerate Hessenberg matrix}). An upper Hessenberg matrix is a matrix whose transpose is a lower Hessenberg matrix. Given a non-degenerate Hessenberg matrix $\mathcal{G}$, we can generate a sequence of polynomials $\displaystyle \{Q_n\}_{n=0}^{\infty}$ as follows. Assume that $Q_{0}(z) = t_{0,0}>0$, then \begin{align}\label{SobolevRR01} g_{0, 1} Q_{1}(x) =& x Q_0(x)-g_{0,0} Q_0(x), \nonumber\\ g_{1, 2} Q_2(x) =& xQ_1(x)-g_{1,1} Q_1(x)- g_{1,0} Q_0(x), \nonumber\\ \vdots \qquad \vdots & \qquad \vdots \nonumber\\ g_{n, n+1} Q_{n+1}(x) =& x Q_{n}(x) - \sum_{k=0}^{n}g_{n,k} \, Q_{k}(x) ,\\ \vdots \qquad \vdots & \qquad \vdots \nonumber \end{align} Hereafter, we will say that $\displaystyle \{Q_n\}$ is the \emph{sequence of polynomials generated by} $\displaystyle \mathcal{G}$. As $\mathcal{G}$ is non-degenerate, $Q_n$ is a polynomial of degree $n$. Let $\mathcal{T}$ be the lower triangular infinite matrix whose entries are the coefficients of the sequence of polynomials $\{Q_n\}$, i.e. \begin{equation}\label{InfiniteMatForm02} \mathcal{Q}(x)= \mathcal{T}\;\mathcal{P}(x) \quad \text{ where } \quad \mathcal{T}= \left(\begin{array}{ccccc} t_{0,0} & 0 & \cdots & 0 &\cdots\\ t_{1,0} & t_{1, 1} & \cdots & 0 &\cdots\\ \vdots & \vdots & \ddots & \vdots &\vdots\\ t_{n,0} & t_{n,1} & \cdots & t_{n, n} &\cdots \\ \vdots & \vdots & \cdots & \vdots &\ddots \end{array} \right), \end{equation} $\mathcal{Q}(x)=(Q_0(x),Q_1(x),\cdots,Q_n(x), \cdots)^{\mathsf{T}}$ and $\mathcal{P}(x)=(1,x,\cdots,x^n, \cdots)^{\mathsf{T}}$. As $\mathcal{G}$ is non-degenerate, $t_{i,i}=\left(\prod_{k=0}^{i-1} g_{k,k+1}\right)^{-1}\neq 0$ for all $i \geq 1$. 
Therefore, there exists a unique lower triangular infinite matrix $\mathcal{T}^{-1}$ such that $\mathcal{T}\cdot \mathcal{T}^{-1}=\mathcal{I}$ (c.f. \cite[(2.1.I)]{Coo50}), i.e. $\mathcal{T}$ has a unique right-hand inverse. Furthermore, in this case $\mathcal{T}^{-1}$ is also a left-hand inverse of $\mathcal{T}$ and it is its only two-sided inverse (c.f. \cite[Remark (a) pp. 22 ]{Coo50}), \begin{equation}\label{InfiniteMatForm03} \mathcal{T}^{-1}= \left(\begin{array}{ccccc} \tau_{0,0} & 0 & \cdots & 0 &\cdots\\ \tau_{1,0} & \tau_{1, 1} & \cdots & 0 &\cdots\\ \vdots & \vdots & \ddots & \vdots &\vdots\\ \tau_{n,0} & \tau_{n,1} & \cdots & \tau_{n, n} &\cdots \\ \vdots & \vdots & \cdots & \vdots &\ddots \end{array} \right), \; \text{where}\quad \tau_{i,i}=t_{i,i}^{-1}.\end{equation} We will denote by $\mathcal{M}$ the \emph{matrix of formal moments associated to} $\mathcal{G}$ (a non-de\-ge\-ne\-ra\-te Hessenberg matrix) defined by \begin{equation}\label{GenMomentMatrix} \mathcal{M}= \left(m_{i,j}\right)_{i,j=0}^{\infty}=\mathcal{T}^{-1}\; \mathcal{T}^{-\mathsf{T}}. \end{equation} We say that a non-degenerate Hessenberg matrix $\mathcal{G}$ is a \emph{Hessenberg-Sobolev matrix of index $d \in {\overline{\mathds{Z}}_{+}}$} if its associated matrix of formal moments is a Hankel-Sobolev matrix of index $d$. In the following theorem, we give a characterization of these matrices. \begin{theorem} \label{ThCharact-HessenS} A non-degenerate Hessenberg matrix $\mathcal{G}$ is a Hessenberg-Sobolev matrix of index $d \in {\mathds{Z}_{+}}$, if and only if \begin{equation*}\label{ThCharact-HessenS-01} \Psi( \mathcal{G},2d+1)=\mathcal{O} \quad \text{ and } \quad \Psi( \mathcal{G},2d)\neq\mathcal{O}.\end{equation*} \end{theorem} The proof of this theorem is an immediate consequence of Theorem \ref{ThCharact-HankelS} and Theorem \ref{RelaOperadores} (stated in section \ref{Sect-Hessenberg}). Let $\mathds{P}$ be the linear space of polynomials with real coefficients, $\mathcal{G}$ be a non-de\-ge\-ne\-ra\-te Hessenberg matrix and $\mathcal{M}$ its associated matrix of formal moments. If $p(x)=\sum_{i=0} ^{n_1} a_i x^i$ and $q(x)=\sum_{j=0} ^{n_2} b_j x^j$ are two polynomials in $\mathds{P}$ of degree $n_1$ and $n_2$ respectively. Then the bilinear form \begin{eqnarray}\label{MatInnerProd} \langle p,q \rangle_{\mathcal{G}}= (a_0,\cdots,a_{n_1},0,\cdots) \mathcal{M} (b_0,\cdots,b_{n_2},0,\cdots)^{\mathsf{T}} = \sum_{i=0} ^{n_1} \sum _{j=0} ^{n_2} a_i \,m_{i,j}\,{b_j}; \end{eqnarray} defines an inner product on $\mathds{P}$ and $\|\cdot \|_{\mathcal{G}}=\sqrt{\langle \cdot ,\cdot \rangle_{\mathcal{G}}}$ is the norm induced by \eqref{MatInnerProd} on $\mathds{P}$ (see Theorem \ref{Th-FormalOP}). Let $d\in\mathds{Z}_{+}$ and $\vec{\mathbf{\mu}}_d= (\mu_0,\mu_1,\dots,\mu_d)$ be a vector of $d+1$ measures, we write $\vec{\mathbf{\mu}}_d \in \mathfrak{M}_{d}(\mathds{R})$ if for each $k$ ($0\leq k \leq d$) the measure $\mu_k$ is a non-negative finite Borel measure with support $\Delta_k \subset \mathds{R}$, $\mathds{P}\subset L^1\left(\mu_k\right)$, $\mu_0$ is positive and $\Delta_0$ contains infinitely many points. If $d=\infty$ and $\vec{\mathbf{\mu}}_{\infty}$ is a sequence of measures that satisfy the above conditions, we write $\vec{\mathbf{\mu}}_{\infty}\in \mathfrak{M}_{\infty}(\mathds{R})$. 
For $d\in\overline{\mathds{Z}}_{+}$ and $\vec{\mathbf{\mu}}_d \in \mathfrak{M}_{d}(\mathds{R})$ , we define on $\mathds{P}$ the Sobolev inner product \begin{equation} \label{Sob-InnerP} \langle f, g \rangle_{\vec{\mathbf{\mu}}_d} = \sum_{k=0}^{d} \int f^{(k)}(x) g^{(k)}(x) d\mu_{k}(x) = \sum_{k=0}^{d} \langle f^{(k)}, g^{(k)}\rangle_{\mu_k}\;, \end{equation} where $f,g \in \mathds{P}$ and $f^{(k)}$ denote the $k$th derivative of $f$. The symbol $\|\cdot\|_{\vec{\mathbf{\mu}}_d}=\sqrt{\langle \cdot, \cdot \rangle_{\vec{\mathbf{\mu}}_d}}$ denote the Sobolev norm associated to \eqref{Sob-InnerP}. Note that although $d$ is usually considered a non-negative integer (see \cite{MaXu15}), the case $d=\infty$ has sense on $\mathds{P}$. If all the measures $\mu_k$ involved in \eqref{Sob-InnerP} are positive, we say that the Sobolev inner product is \emph{non-lacunary} and \emph{lacunary} in any other case. Taking into account the nature of the support of the measures involved in \eqref{Sob-InnerP}, we have the following three cases: \begin{description} \item[Continuous case.] The measures $\mu_0, \cdots, \mu_d$ are supported on infinite subsets. \item[Discrete case.] The support of the measure $\mu_0$ is an infinite subset and the measures $\mu_1, \cdots, \mu_d$ are supported on finite subsets. \item[Discrete-continuous case.] The support of the measure $\mu_d$ is an infinite subset and the measures $\mu_0, \cdots, \mu_{d-1}$ are supported on finite subsets. \end{description} The notion of Sobolev moment and several related topics were firstly introduced in \cite{BaLoPi99}. The \emph{$(n,k)$-moment associated to the inner product } \eqref{Sob-InnerP} is defined as $s_{n,k}=\langle x^n,x^k\rangle_{\vec{\mathbf{\mu}}_d}$ ($n,k\geq 0$), provided the integral exists. In \cite{BaLoPi99}, it was proved that the infinity matrix of moments $\mathcal{S}$ with entries $s_{n,k}$, ($n,k\geq 0$) is a Hankel-Sobolev matrix (see \cite{BaLoPi99} and Subsection \ref{Sec-S-MomentP} of this paper). Furthermore, if ${Q_n}$ is the sequence of orthogonal polynomials with respect to \eqref{Sob-InnerP} with leading coefficient $c_{n} >0$, then the infinite matrix $\mathcal{G}_{\vec{\mathbf{\mu}}_d}$ with entries $g_{i,j}=\langle xQ_i, Q_j \rangle_{\vec{\mathbf{\mu}}_d} $ is a non-degenerate Hessenberg matrix. In this case, the sequence of orthogonal polynomials ${Q_n}$ is the sequence of polynomials generated by $\mathcal{G}_{\vec{\mathbf{\mu}}_d}$. The following theorem gives a characterization of the non-degenerate Hessenberg matrices whose sequence of polynomials generated is ortogonal with respect to a So\-bo\-lev inner product as \eqref{Sob-InnerP}. \begin{theorem} [Favard type theorem for continuous case] \label{ThFavardSobolev} Let $\mathcal{G}$ be a non-degenerate Hessenberg matrix. Then, there exists $d \in \mathds{Z}_{+}$ and $\vec{\mathbf{\mu}}_d \in \mathfrak{M}_{d}(\mathds{R})$ such that $ \langle p, q \rangle_{\vec{\mathbf{\mu}}_d} =\langle p, q \rangle_{\mathcal{G}} $ if and only if \; \begin{enumerate} \item $\mathcal{G}$ is a Hessenberg-Sobolev matrix of index $d \in \mathds{Z}_{+}$. \item For each $k=0,\,1,\dots,\;d$; the Hankel matrix $\mathcal{H}_{d-k}$ defined by \eqref{ThCharact-02}, is a positive definite matrix of infinite order. \end{enumerate} \end{theorem} The Favard type theorems for the cases discrete and the discrete-continuous are Theorems \ref{ThFavardSobolevDiscrete} and \ref{ThFavardSobolevDiscreteCont}, respectively. 
Some basic aspects about the classical moment problem and the Sobolev moment problem are revisited in subsection \ref{Sec-S-MomentP}. In Section \ref{SecMatrixOper}, we proceed with the study of the properties of the matrix operator $\Phi( \cdot,\eta)$, the Hankel-Sobolev matrices and the proof of Theorem \ref{ThCharact-HankelS}. We revisited the Sobolev moment problem in subsection \ref{Sec-S-MomentP}. The third section is devoted to study the properties of the bilinear form \eqref{MatInnerProd} and the nexus between the operators $\Phi( \cdot,\eta)$ and $\Psi( \cdot,\eta)$. In the last section, we prove the extension of the Favard Theorem for Sobolev orthogonality stated in Theorem \ref{ThFavardSobolev}. \section{Hankel-Sobolev matrices}\label{SecMatrixOper} First of all, we need to prove that he notion of a Hankel-Sobolev matrix introduced in \eqref{Hankel-Sobolev} is well-defined. \begin{proposition} \label{Uniqueness-HankelS} Let $\mathcal{M}$ be a Hankel-Sobolev matrix, then the decomposition of $\mathcal{M}$ established in \eqref{Hankel-Sobolev} is unique. \end{proposition} \begin{proof} We first recall that for each $k \in \mathds{Z}_{+}$, $\mathcal{D}_k$ is a diagonal matrix with positive entries in the main diagonal. Furthermore, if $\mathcal{A}$ is an infinite matrix and $k\in \mathds{Z}_{+}$ is fixed, the matrix $\left(\mathcal{U}^{-k}\; \mathcal{A} \;\mathcal{U}^{k}\right)$ is obtained adding to $\mathcal{A}$ the first $k$ rows and columns of zeros. Suppose there are two sequences of Hankel matrices, $\left\{\mathcal{H}_k\right\}_{k=0}^{\infty}$ and $\left\{\widehat{\mathcal{H}}_k\right\}_{k=0}^{\infty}$, such that $$ \mathcal{M}= \sum_{k=0}^{\infty} \left(\mathcal{U}^{-k}\;\mathcal{D}_k \; \mathcal{H}_{k} \; \mathcal{D}_k\;\mathcal{U}^{k}\right) \quad \text{ and }\quad \mathcal{M}= \sum_{k=0}^{\infty} \left(\mathcal{U}^{-k}\;\mathcal{D}_k \; \widehat{\mathcal{H}}_{k} \; \mathcal{D}_k\;\mathcal{U}^{k}\right). $$ Therefore, $$\sum_{k=0}^{\infty} \left(\mathcal{U}^{-k}\;\mathcal{D}_k \; \left(\mathcal{H}_{k} -\widehat{\mathcal{H}}_{k} \right)\; \mathcal{D}_k\;\mathcal{U}^{k}\right)=\mathcal{O}.$$ Hence, for each $k\in \mathds{Z}_{+}$ fixed, the matrix $\left(\mathcal{H}_{k} -\widehat{\mathcal{H}}_{k} \right)$ is a Hankel matrix whose first row has all its entries equal to zero, i.e. $ \mathcal{H}_{k} =\widehat{\mathcal{H}}_{k}$, which completes the proof. \end{proof} Obviously, the matrix operator $\Phi( \cdot,\eta)$ defined in \eqref{Phi-operator} is linear. Before proving the Theorem \ref{ThCharact-HankelS}, we need to study some other properties of this operator and some auxiliary results. \begin{proposition}[Recurrence] \label{RecF-1} Let $\eta \in \mathds{Z}_{+}$ fixed and $\mathcal{A} \in \mathds{M}$ , then \begin{equation}\label{RecuRelat} \Phi( \mathcal{A},\eta+1)=\mathcal{U}\,\Phi( \mathcal{A},\eta)-\Phi( \mathcal{A},\eta) \,\mathcal{U}^{-1} . 
\end{equation} \end{proposition} \begin{proof} \begin{align*} \mathcal{U} \, \Phi( \mathcal{A},\eta)-\Phi( \mathcal{A},\eta) \mathcal{U}^{-1} =& \mathcal{U} \left( \sum_{\ell=0}^{\eta} (-1)^\ell \binom{\eta}{\ell} \mathcal{U}^{\eta-\ell} \mathcal{A} \mathcal{U}^{-\ell} \right)\\ & -\left(\sum_{\ell=0}^{\eta} (-1)^\ell \binom{\eta}{\ell} \mathcal{U}^{\eta-\ell} \mathcal{A} \mathcal{U}^{-\ell}\right) \mathcal{U}^{-1}\\ =& \sum_{\ell=0}^{\eta} (-1)^\ell \binom{\eta}{\ell} \mathcal{U}^{\eta+1-\ell}\mathcal{A} \mathcal{U}^{-\ell}\\ &+\sum_{\ell=1}^{\eta+1} (-1)^{\ell} \binom{\eta}{\ell-1} \mathcal{U}^{\eta+1-\ell} \mathcal{A} \mathcal{U}^{-\ell}\\ =& \mathcal{U}^{\eta+1} \mathcal{A} + \left( \sum_{\ell=1}^{\eta} (-1)^\ell \left( \binom{\eta}{\ell-1}+ \binom{\eta}{\ell}\right) \mathcal{U}^{\eta+1-\ell} \mathcal{A} \mathcal{U}^{-\ell} \right) \\ & + (-1)^{\eta+1} \mathcal{A} \mathcal{U}^{-\eta-1}\\ =& \mathcal{U}^{\eta+1} \mathcal{A} + \left( \sum_{\ell=1}^{\eta} (-1)^\ell \binom{\eta+1}{\ell} \mathcal{U}^{\eta+1-\ell} \mathcal{A} \mathcal{U}^{-\ell} \right) \\ & + (-1)^{\eta+1} \mathcal{A} \mathcal{U}^{-\eta-1}= \Phi( \mathcal{A},\eta+1). \end{align*} \end{proof} The following proposition is an immediate consequence of proposition \ref{RecF-1} and \eqref{HankelCond}. \begin{proposition} \label{PropHankel} If for a matrix $\mathcal{A} \in \mathds{M}$, there exists $\eta \in \mathds{Z}_{+}$ such that $\Phi( \mathcal{A},\eta)=\mathcal{O}$, then \begin{enumerate} \item[(a)] $\Phi( \mathcal{A},\eta_1)=\mathcal{O}$ for all $\eta_1 \geq \eta.$ \item[(b)] For all $c\in \mathds{R}$ and $\eta \geq 1$ the matrix $c\,\Phi( \mathcal{A},\eta-1)$ is a Hankel matrix. \end{enumerate} \end{proposition} \begin{proposition} \label{lemma-symmetric} Assume that $\mathcal{A}\in \mathds{M}$ is a symmetric matrix, then $\Phi( \mathcal{A},\eta)$ is a symmetric (antisymmetric) matrix if and only if $\eta$ is an even (odd) integer number. \end{proposition} \begin{proof} \begin{align*} \left(\Phi( \mathcal{A},\eta)\right)^{\mathsf{T}}=&\sum_{\ell=0}^{\eta} (-1)^\ell \binom{\eta}{\ell} \left(\mathcal{U}^{\eta-\ell} \mathcal{A} \mathcal{U}^{-\ell} \right)^{\mathsf{T}}= \sum_{\ell=0}^{\eta} (-1)^\ell \binom{\eta}{\eta-\ell} \mathcal{U}^{\ell} \mathcal{A} \mathcal{U}^{\ell-\eta} \\ = & \sum_{\ell=0}^{\eta} (-1)^{\eta-\ell} \binom{\eta}{\ell} \mathcal{U}^{\eta-\ell} \mathcal{A} \mathcal{U}^{-\ell}= (-1)^{\eta} \sum_{\ell=0}^{\eta} (-1)^{\ell} \binom{\eta}{\ell} \mathcal{U}^{\eta-\ell} \mathcal{A} \mathcal{U}^{-\ell} \\= & (-1)^{\eta} \Phi( \mathcal{A},\eta).\\ \end{align*} \end{proof} \begin{theorem} \label{Th-Suff} Let $d\in \overline{\mathds{Z}}_{+}$ and $\mathcal{M}$ be a Hankel-Sobolev matrix of index $d$, as in \eqref{Hankel-Sobolev}. Denote $$ \mathcal{M}_\eta= \sum_{k=0}^\eta \mathcal{U}^{-k} \mathcal{D}_k \mathcal{H}_{k} \mathcal{D}_k \mathcal{U}^{k}, \quad 0\leq \eta \leq d. $$ Then \begin{enumerate} \item[(a)]$\Phi( \mathcal{M}_\eta,2\eta+1)=\mathcal{O}$. \item[(b)] $\displaystyle \mathcal{H}_\eta= \frac{(-1)^\eta}{(2\eta )!}\Phi( \mathcal{M}_\eta,2\eta)$. \end{enumerate} \end{theorem} Before proving the previous theorem, we need the next lemma, which is a version of the famous Euler’s Finite Difference Theorem (c.f. \cite[\S 6.1]{QuGo16}). \begin{lemma}\label{Euler_Lem} Let $\displaystyle f(z)$ be a complex polynomial of degree $n$ and leading coefficient $a_n \in \mathds{C}$. 
Then, for all $m,\nu\in \mathds{Z}_{+}$ $$\sum_{\ell=0}^{\nu} (-1)^{\ell} \binom{\nu}{\ell} f(\ell)=\left\{\begin{array}{rl} 0, &\text{if } 0 \leq n < \nu, \\ (-1)^{\nu } \;\nu! \; a_ \nu, &\text{if } n=\nu. \end{array}\right.$$ \end{lemma} \begin{proof}[Proof of Theorem \ref{Th-Suff}] \ Let $0\leq k \leq d$ fixed and $\displaystyle \mathcal{R}_k=\mathcal{U}^{-k}\mathcal{D}_k \mathcal{H}_{k} \mathcal{D}_k\mathcal{U}^{k} $. Then $\displaystyle \mathcal{M}_\eta= \sum_{k=0}^\eta \mathcal{R}_k $ and from linearity $\displaystyle \Phi( \mathcal{M}_\eta,2\eta+1)= \sum_{k=0}^\eta \Phi \left(\mathcal{R}_k,2\eta+1 \right)$. \begin{align} \nonumber \Phi(\mathcal{R}_k,2\eta+1)=& \sum_{\ell=0}^{2\eta+1} (-1)^\ell \binom{2\eta+1}{\ell} \mathcal{U}^{2\eta+1-\ell}\mathcal{R}_k\mathcal{U}^{-\ell} \\ \nonumber =& \sum_{\ell=0}^{\eta} (-1)^\ell \binom{2\eta+1}{\ell} \mathcal{U}^{2\eta+1-\ell}\mathcal{R}_k\mathcal{U}^{-\ell} + \sum_{\ell=\eta+1}^{2\eta+1} (-1)^\ell \binom{2\eta+1}{\ell} \mathcal{U}^{2\eta+1-\ell}\mathcal{R}_k\mathcal{U}^{-\ell} \\ \nonumber =& \sum_{\ell=0}^{\eta} (-1)^\ell \binom{2\eta+1}{\ell} \mathcal{U}^{2\eta+1-\ell}\mathcal{R}_k\mathcal{U}^{-\ell} \\ \nonumber &+ \sum_{\ell=\eta+1}^{2\eta+1} (-1)^\ell \binom{2\eta+1}{2\eta+1-\ell} \mathcal{U}^{2\eta+1-\ell}\mathcal{R}_k\mathcal{U}^{-\ell} \\ \nonumber =& \sum_{\ell=0}^{\eta} (-1)^\ell \binom{2\eta+1}{\ell} \; \mathcal{U}^{2\eta+1-\ell}\mathcal{R}_k\mathcal{U}^{-\ell} - \sum_{\ell=0}^{\eta} (-1)^\ell \binom{2\eta+1}{\ell} \; \mathcal{U}^{\ell}\mathcal{R}_k\mathcal{U}_{2\eta+1-\ell} \\ \label{Th-Suff-01} =& \sum_{\ell=0}^{\eta} (-1)^\ell \binom{2\eta+1}{\ell} \; \left(\mathcal{U}^{2\eta+1-\ell}\mathcal{R}_k\mathcal{U}^{-\ell} - \mathcal{U}^{\ell} \mathcal{R}_k\mathcal{U}_{2\eta+1-\ell} \right). \end{align} Throughout the proof, given a sequence of double indexes $\{a_{i,j}\}$, we denote by $\mathcal{A}=[a_{i,j}]$ the corresponding infinite matrix, whose $(i,j)$ entry is $a_{i,j}$. Let $\{m_{k,i}\}$ be the sequence of real numbers such that $m_{k,i+j-2}$ is the $(i,j)$th entry of the infinite Hankel matrix $ \mathcal{H}_{k}$, where $i,j=1,2,\dots$ In that case, we write $ \displaystyle \mathcal{H}_{k}=\left[m_{k,i+j-2}\right]$. Therefore with the same notation $ \displaystyle \mathcal{R}_{k}=\left[\frac{(k+i-1)!}{(i-1)!}\frac{(k+j-1)!}{(j-1)!}m_{k,i+j-2}\right]$ and $\displaystyle \mathcal{U}^{2\eta+1-\ell} \mathcal{R}_k\mathcal{U}^{-\ell} = \left[r_{i,j}\right]$ where $$ r_{i,j}= \frac{(k+i+2(\eta-1)-\ell)!}{(2(\eta-1)-\ell)!}\frac{(k+j+\ell-1)!}{(j+\ell-1)!}\; m_{k,i+j+2\eta-1}.$$ Note that $r_{i,j}=r_{j,i}$, i.e. $ \displaystyle \mathcal{U}^{2\eta+1-\ell} \mathcal{R}_k\mathcal{U}^{-\ell} $ is a symmetric matrix, therefore \begin{equation}\label{Th-Suff-02} \mathcal{U}^{2\eta+1-\ell} \mathcal{R}_k\mathcal{U}^{-\ell} = \left( \mathcal{U}^{2\eta+1-\ell} \mathcal{R}_k\mathcal{U}^{-\ell} \right)^{\mathsf{T}}= \mathcal{U}^{\ell} \mathcal{R}_k\mathcal{U}^{\ell -2\eta-1}. \end{equation} Combining \eqref{Th-Suff-01}-\eqref{Th-Suff-02} we get (a) in Theorem \ref{Th-Suff}. 
From (a) and Proposition \ref{RecF-1}, it follows that \begin{align*} \Phi( \mathcal{M}_\eta,2\eta)= & \Phi( \mathcal{R}_\eta,2\eta) + \sum_{k=0}^{\eta-1} \Phi( \mathcal{R}_k,2\eta) \\ = & \Phi( \mathcal{R}_\eta,2\eta) + \mathcal{U} \sum_{k=0}^{\eta-1} \Phi( \mathcal{R}_k,2\eta-1)- \sum_{k=0}^{\eta-1} \Phi( \mathcal{R}_k,2\eta-1) \mathcal{U}^{-1}\\ = & \Phi( \mathcal{R}_\eta,2\eta) + \mathcal{U} \, \Phi( \mathcal{H}_{\eta-1},2\eta-1)- \Phi( \mathcal{H}_{\eta-1},2\eta-1) \mathcal{U}^{-1}= \Phi( \mathcal{R}_\eta,2\eta) .\\ \Phi( \mathcal{R}_\eta,2\eta) = & \sum_{\ell=0}^{2\eta} (-1)^\ell \binom{2\eta}{\ell} \mathcal{U}^{2\eta-\ell} \mathcal{R}_\eta \mathcal{U}^{-\ell}= \sum_{\ell=0}^{2\eta} (-1)^\ell \binom{2\eta}{\ell} \mathcal{U}^{\eta-\ell} \mathcal{D}_\eta \mathcal{H}_{\eta} \mathcal{D}_\eta\mathcal{U}^{\eta-\ell}. \end{align*} As in the first part of the proof \begin{align*} \mathcal{R}_{\eta}= & \mathcal{D}_\eta \mathcal{H}_{\eta} \mathcal{D}_\eta =\left[\frac{(\eta+i-1)!}{(i-1)!}\frac{(\eta+j-1)!}{(j-1)!}m_{\eta,i+j-2}\right] \\ =& \left[\left(\eta! \right)^2\, \binom{\eta+i-1}{\eta}\binom{\eta+j-1}{\eta}m_{\eta,i+j-2}\right]. \end{align*} and therefore \begin{align}\nonumber \Phi( \mathcal{R}_\eta,2\eta) = & \left(\eta! \right)^2 \\ \nonumber & \cdot \left[\left(\sum_{\ell=0}^{2\eta} (-1)^{\ell}\binom{2\eta}{\ell}\binom{\eta+i-1+(\eta-\ell)}{\eta}\binom{\eta+j-1-(\eta-\ell)}{\eta} \right)\;m_{\eta,i+j-2}\right] \\ \label{Th-Suff-1} = & \left(\eta! \right)^2\,\left[\left(\sum_{\ell=0}^{2\eta} (-1)^{\ell}\binom{2\eta}{\ell}\binom{2\eta+i-1-\ell}{\eta}\binom{j-1+\ell}{\eta} \right)\;m_{\eta,i+j-2}\right] . \end{align} Clearly $\displaystyle f(\ell)= \binom{2\eta+i-1-\ell}{\eta}\binom{j-1+\ell}{\eta}$ is a polynomial of degree $2\eta$ in $\ell$ and leading coefficient $\displaystyle \frac{(-1)^\eta}{(\eta!)^2} $. By Lemma \ref{Euler_Lem} we deduce that \begin{equation}\label{Th-Suff-2} \sum_{\ell=0}^{2\eta} (-1)^{\ell}\binom{2\eta}{\ell}\binom{2\eta+i-1-\ell}{\eta}\binom{j-1+\ell}{\eta}=(-1)^{\eta}\; \binom{2\eta}{\eta}. \end{equation} Hence, from \eqref{Th-Suff-1}-\eqref{Th-Suff-2} we get $\displaystyle \Phi( \mathcal{R}_\eta,2\eta) =(-1)^\eta (2\eta)![m_{\eta,i+j-2}]=(-1)^\eta\;(2\eta )!\,\mathcal{H}_\eta$ and (b). \end{proof} We will assume that $\mathcal{A}$ is an infinite symmetric matrix because this is obviously a necessary condition for \eqref{Hankel-Sobolev} to take place since the Hankel matrices $\mathcal{H}_k$ are symmetric. \begin{theorem} Let $\mathcal{A}$ be an infinite symmetric matrix, $\eta \in \mathds{Z}_{+}$ (fixed) such that $\Phi( \mathcal{A},2\eta+1)=\mathcal{O}$. Then \begin{equation}\label{Th-Suff3} \Phi( \mathcal{A}_\eta,2\eta-1)=\mathcal{O}, \end{equation} where $ \mathcal{A}_\eta=\mathcal{A}-\mathcal{R}_\eta$ and $\displaystyle \mathcal{R}_\eta= \frac{(-1)^\eta}{(2\eta)!}\mathcal{U}^{-\eta} \mathcal{D}_\eta\Phi( \mathcal{A},2\eta) \mathcal{D}_\eta \mathcal{U}^{\eta}$. \end{theorem} \begin{proof} If $\mathcal{A}_\eta=\mathcal{O}$, the theorem is obvious. Assume that $\mathcal{A}_\eta\neq \mathcal{O}$, from Theorem \ref{Th-Suff}, we get $\Phi( \mathcal{R}_\eta,2\eta)= \frac{(-1)^\eta}{(2\eta)!}\Phi( \mathcal{A},2\eta)$, i.e. $\Phi( \mathcal{A}_\eta,2\eta)= \mathcal{O}$. According to the recurrence formula \eqref{RecuRelat}, we have $ \mathcal{U}\,\Phi( \mathcal{A}_\eta,2\eta-1) = \Phi( \mathcal{A}_\eta,2\eta-1)\mathcal{U}^{-1}$ which is equivalent to stating that $\Phi( \mathcal{A}_\eta,2\eta-1)$ is a Hankel matrix and therefore it is a symmetric matrix. 
On the other hand, $\mathcal{A}_\eta$ is a symmetric matrix since it is the difference of two symmetric matrices. Hence, from Proposition \ref{lemma-symmetric} we get that $\Phi( \mathcal{A}_\eta,2\eta-1)$ is antisymmetric which establishes \eqref{Th-Suff3}. \end{proof} \begin{proof}[Proof of Theorem \ref{ThCharact-HankelS}] From Theorem \ref{Th-Suff}, a Hankel-Sobolev matrix of index $d\in \mathds{Z}_{+}$ satisfies the conditions \eqref{ThCharact-01} and each Hankel matrix $\mathcal{H}_k$ holds \eqref{ThCharact-02}, which establishes the first implication of the theorem. For the converse, assume that $\mathcal{M}$ is a symmetric infinite matrix and there exits $d\in \mathds{Z}_{+}$ such that the conditions \eqref{ThCharact-01} are satisfied. From Proposition \ref{PropHankel}, $H_d=\frac{(-1)^d}{(2d)!}\Phi( \mathcal{M},2d)\neq \mathcal{O}$ is a Hankel matrix. Denote $\mathcal{M}_{d-1}=\mathcal{M}_{d}-\mathcal{R}_d$, where $\mathcal{M}_{d}=\mathcal{M}$ and $\displaystyle \mathcal{R}_d= \mathcal{U}^{-d} \mathcal{D}_d \mathcal{H}_d \mathcal{D}_d \mathcal{U}^{d}$. From \eqref{Th-Suff3} and Proposition \ref{PropHankel}, $H_{d-1}=\frac{(-1)^{d-1}}{(2d-2)!}\Phi( \mathcal{M}_{d-1},2d-2)$ is a Hankel matrix. Let $\mathcal{M}_{d-k}=\mathcal{M}_{d+1-k}-\mathcal{R}_{d+1-k}$ and $\displaystyle \mathcal{R}_{d+1-k}= \mathcal{U}^{-d-1+k} \mathcal{D}_{d+1-k} \mathcal{H}_{d+1-k} \mathcal{D}_{d+1-k} \mathcal{U}^{d+1-k}$. Repeating the previous argument, we get that $H_{d-k}=\frac{(-1)^{d-k}}{(2d-2k)!}\Phi( \mathcal{M}_{d-k},2d-2k)$ is a Hankel matrix for $k=2,\dots, d$. By construction, it is clear that $$\mathcal{M}=\sum_{k=0}^{d}\mathcal{R}_{d-k}= \sum_{k=0}^{d} \mathcal{U}^{k-d} \mathcal{D}_{d-k} \mathcal{H}_{d-k} \mathcal{D}_{d-k} \mathcal{U}^{d-k},$$ i.e. $\mathcal{M}$ is a Hankel-Sobolev matrix and the proof is complete. \end{proof} \subsection{The Sobolev moment problem} \label{Sec-S-MomentP} Let $\mu$ be a finite positive Borel measure supported on the real line and $\mathbf{L}_2(\mu)$ be the usual Hilbert space of square integrable functions with respect to $\mu$ with the inner product \begin{equation}\label{L2-InnerP} \langle f,g \rangle_{\mu}= \int_{\mathds{R}} f(x) g(x) d\mu(x), \quad \mbox{for all } f,g \in \mathbf{L}_2(\mu).\end{equation} The $n$th moment associated to the inner product \eqref{L2-InnerP} (or the measure $\mu$) is defined as $m_n=\langle x^n,1\rangle_{\mu}$ ($n\geq 0$), provided the integral exists. The Hankel matrix $\han{ m_n }$ is called the \emph{matrix of moments} associated with $\mu$. The classical moment problem consists in solving the following question: given an arbitrary sequence of real numbers $\{m_n\}_{n\geq 0}$ (or equivalently the associated Hankel matrix $\han{ m_n }$) \, and a closed subset $\Delta \subset \mathds{R}$, find a positive Borel measure $\mu$ supported on $\Delta$, whose $n$th moment is $m_n$, i.e. \begin{equation*}\label{MomEstandasDef-1} m_n= \int_{\Delta} x^n d\mu(x), \quad \text{ for all $n\geq 0$}. \end{equation*} It is said that the moment problem $(\mathcal{H};\Delta)$ is \emph{definite}, if it has at least one solution and \emph{determinate} if the solution is unique. There are three named \emph{classical moment problems}: the \emph{Hamburger moment problem} when the support of $\mu$ is on the whole real line; the \emph{Stieltjes moment problem} if $\Delta=[0,\infty)$ and the \emph{Hausdorff moment problem} for a bounded interval $\Delta$ (without loss of generality, $\Delta=[0, 1]$). As H. J. 
Landau write in the introduction of \cite[p.1]{Lan87} \emph{``The moment problem is a classical question in analysis, remarkable not only for its own elegance, but also for the extraordinary range of subjects, theoretical and applied, which it has illuminated''}. For a more details on the classical moment problem, the reader is referred to \cite{Akh65,GoMe10,Lan87,ShoTam63,Sch17} and for historical aspects to \cite{Kje93} or \cite[\S 2.4]{GoMe10}. Without restriction of generality, we now turn our attention to the Hamburger moment problem referring to the following lemma as a necessary and sufficient condition for the problem of moments to be defined and determined. \begin{lemma}[{\cite[Th. 1.2]{ShoTam63}}]\label{SchoTam-lemma} Let $\{ m_n\}_{n=0}^{\infty}$ be a sequence of real numbers and denote by $\mathcal{H}=\han{ m_n }$ the associated Hankel matrix. Then \begin{enumerate} \item The Hamburger moment problem $(\mathcal{H};\mathds{R})$ has a solution, whose support is not reducible to a finite set of points, if and only if $\mathcal{H}$ is a matrix positive definite of infinite order (i.e. $\det ([\mathcal{H}]_n)>0$ for all $n\geq 1$). \item The Hamburger moment problem $(\mathcal{H};\mathds{R})$ has a solution, whose support consists of precisely $k$ distinct points, if and only if $\mathcal{H}$ is a matrix positive definite of order $k$ (i.e. $\det ([\mathcal{H}]_n)>0$ for all $1 \leq n\leq k$, and $\det ([\mathcal{H}]_n)=0$ for all $ n> k$). The moment problem is determined in this case. \end{enumerate} \end{lemma} The analogous results for the moment problem of Stieltjes $(\mathcal{H};\mathds{R}_{+})$ or the moment problem of Hausdorff $(\mathcal{H};[0,1])$ are \cite[Th. 1.3]{ShoTam63} and \cite[Th. 1.5]{ShoTam63} respectively. Other equivalent formulations of these results can be seen in \cite[Ch. 1, \S 7]{Pell03} or \cite[\S 3.2]{ShoTam63}. The $(n,k)$-moment associated to the inner product \eqref{Sob-InnerP} is defined as $m_{n,k}=\langle x^n,x^k\rangle_{\vec{\mathbf{\mu}}}$ ($n,k\geq 0$), provided the integral exists. In the sequel, the values $\langle x^n,x^k\rangle_{\vec{\mathbf{\mu}}}$ are called {\em S-moments}. Here, instead of sequence of moment, we have the infinity matrix of moments $\mathcal{M}$ with entries $m_{n,k}=\langle x^n,x^k\rangle_{\vec{\mathbf{\mu}}}$, ($n,k\geq 0$). Now, the Sobolev moment problem (or $S$-moment problem) consists of solving the following question: given an infinite matrix $ \mathcal{M}=(m_{i,j})_{i,j=0}^{\infty}$ and $d+1$ subset $\Delta_{k} \subset \mathds{R}$ ($0\leq k \leq d$), find a set of $d+1$ measures $\{\mu_0, \mu_1, \dots, \mu_d \}$, where $\mu_d \neq 0 $ and $\sopor{\mu_k} \subset \Delta_k$, such that $m_{i,j}=\langle x^i,x^j\rangle_{\vec{\mathbf{\mu}}}$ for $i,j=0,1, \dots$ As in the standard case, the problem is considered \emph{definite} if it has at least one solution, and \emph{determinate} if this solution is unique. There are three conventional cases of $S$-moment problems: the \emph{Hamburger $S$-moment problem} when $\Delta_0=\cdots=\Delta_d=\mathds{R}$; the \emph{Stieltjes $S$-moment problem} if $\Delta_0=\cdots=\Delta_d=[0,\infty)$ and the \emph{Hausdorff $S$-moment problem} for $\Delta_0=\cdots=\Delta_d=[0,1]$. Nonetheless, other combinations of the sets $\Delta_{k} \subset \mathds{R}$ ($0\leq k \leq d$) are possible too. An equivalent formulation of the Sobolev moment problem is made for the special case $d=\infty$. The following result was proved in \cite[Th. 1]{BaLoPi99}. 
In virtue of Theorem \ref{ThCharact-HankelS}, now we can reformulate it in the following equivalent form. \begin{theorem}\label{Th-hp1} Given an infinite symmetric matrix $ \mathcal{M}=(m_{i,j})_{i,j=0}^{\infty}$ and $d+1$ subset $\Delta_{k} \subset \mathds{R}$ ($0\leq k \leq d \in \mathds{Z}_{+}$), the $S$-moment problem is definite (or determinate) if and only if \; $\Phi( \mathcal{M},2d+1)=\mathcal{O}$, $\Phi( \mathcal{M},2d)\neq\mathcal{O}$ and for each $k=0,1, \dots, d$ the Hankel matrix $ \mathcal{H}_{k}$ (defined in \eqref{ThCharact-02}) is such that the classical moment problem $( \mathcal{H}_{k};\Delta_k)$ is definite (or determinate). \end{theorem} Although \cite{BaLoPi99} is devoted to the study of the case in which $d$ is finite and the measures involved are supported on subset of the real line, there are no difficulties in extending these results when $ d =\infty $ or if the measures are supported on the unit circle, as confirmed by the authors of \cite{MaSz00,MaSz01}. The $S$-moments problem for discrete Sobolev-type inner products was studied in \cite{Zag05}. . \section{Hessenberg-Sobolev matrices}\label{Sect-Hessenberg} From the definition of the matrix of formal moments $\mathcal{M}$ in \eqref{GenMomentMatrix}, we have next two immediate consequences. \begin{proposition}\label{PropoSimDef+} Let $\mathcal{G}$ be a non-degenerate Hessenberg matrix and $\mathcal{M}$ be its associated matrix of formal moments. Then $\mathcal{M}$ is a symmetric and positive definite infinite matrix. \end{proposition} \begin{proof} Obviously, $\displaystyle \mathcal{M}^{\mathsf{T}} =\left(\mathcal{T}^{-1}\; \mathcal{T}^{-\mathsf{T}}\right)^{\mathsf{T}}=\mathcal{T}^{-1}\; \mathcal{T}^{-\mathsf{T}}=\mathcal{M}.$ Moreover, for all $n \geq 1$, $$ \det\left([\mathcal{M}]_k\right) =\det\left([\mathcal{T}]_k^{-1}\right) \det\left([\mathcal{T}]_k^{-\mathsf{T}}\right)=1 \cdot \tau_{1, 1}^2 \cdot \ldots \cdot \tau_{k-1, k-1}^2 > 0,$$ where $\tau_{i, i}$ is the $(i,i)$-entry of $\mathcal{T}^{-1}$. \end{proof} The following theorem clarifies the relation between the sequence of polynomials generated by $\mathcal{G}$ and the matrix of formal moments. \begin{theorem}\label{Th-FormalOP} Let $\mathcal{G}$ be a non-degenerate Hessenberg matrix and $\mathcal{M}= \left( m_{i,j}\right)$ be its associated matrix of formal moments. \begin{enumerate}[label=\arabic*)] \item If $p(x)=\sum_{i=0} ^{n_1} a_i x^i$ and $q(x)=\sum_{j=0} ^{n_2} b_j x^j$ are two polynomials in $\mathds{P}$ of degree $n_1$ and $n_2$ respectively. Then the bilinear form \eqref{MatInnerProd} define an inner product on $\mathds{P}$ and $\|\cdot \|_{\mathcal{G}}=\sqrt{\langle \cdot ,\cdot \rangle_{\mathcal{G}}}$ is the norm induced by \eqref{MatInnerProd} on $\mathds{P}$. \item Let $m_{i,j}$ be the $(i,j)$th entry of $\mathcal{M}$, as in \eqref{GenMomentMatrix}, then $m_{i,j}=\langle x^{i},x^{j} \rangle_{\mathcal{G}}$. \item $\{Q_n\}$, the sequence of polynomials generated by $\mathcal{G}$, is the sequence of orthonormal polynomials with respect to the inner product \eqref{MatInnerProd}. \item $g_{j,k}=\langle x Q_{j}, Q_k\rangle_{\mathcal{G}}$, where $g_{j,k}$ is the $(j,k)$-entry of the matrix $\mathcal{G}$ given in \eqref{InfLowerHess}, with $j\geq 0$ and $0\leq k \leq j+1$. \end{enumerate} \end{theorem} \medskip \begin{proof} From proposition \ref{PropoSimDef+}, the statement 1) is straightforward. The assertion 2) follows from \eqref{MatInnerProd}. 
Let $\mathcal{E}_j$ be the infinite column-vector whose $i$-th entry is $\delta_{i,j}$, where $i,j\in \mathds{Z}_{+}$. Denote by $Q_n(x)=\sum _{i=0} ^{n} t_{n,i} \,x^i$ the $n$-th polynomial generated by $\mathcal{G}$, as in \eqref{InfiniteMatForm02}. Then, for $j=0, \ldots , n$, \begin{align*} \langle Q_n, x^j \rangle_{\mathcal{G}} =& ( t_{n,0},\cdots, t_{n,n},0,\cdots) \; \mathcal{M} \; \mathcal{E}_j =\mathcal{E}_n^{\mathsf{T}}\mathcal{T}\mathcal{M} \mathcal{E}_j=\mathcal{E}_n^{\mathsf{T}}\mathcal{T} \, \mathcal{T}^{-1}\, \mathcal{T}^{-\mathsf{T}}\, \mathcal{E}_j\\ =& \mathcal{E}_n^{\mathsf{T}}\, \mathcal{T}^{-\mathsf{T}} \, \mathcal{E}_j= \tau_{n,n} \, \mathcal{E}_n^{\mathsf{T}}\, \mathcal{E}_j= \tau_{n,n} \, \delta_{n,j}, \end{align*} where $\tau_{n,n}\neq 0$. Furthermore, \begin{align*} \langle Q_n, Q_n \rangle_{\mathcal{G}} =& ( t_{n,0},\cdots, t_{n,n},0,\cdots) \; \mathcal{M} \; ( t_{n,0},\cdots, t_{n,n},0,\cdots)^{\mathsf{T}}\\ =& \mathcal{E}_n^{\mathsf{T}}\, \mathcal{T}\,\mathcal{M} \,\mathcal{T}^{\mathsf{T}} \, \mathcal{E}_n = \mathcal{E}_n^{\mathsf{T}}\,\mathcal{T}\,\mathcal{T}^{-1}\, \mathcal{T}^{-\mathsf{T}}\, \mathcal{T}^{\mathsf{T}} \, \mathcal{E}_n =1. \end{align*} Hence, $Q_n$ is the $n$-th orthonormal polynomial with respect to \eqref{MatInnerProd}. The fourth assertion is straightforward from \eqref{SobolevRR01} and the orthogonality. \end{proof} \begin{remark}From \eqref{SobolevRR01}-\eqref{InfiniteMatForm02}, we have that the leading coefficient of $Q_n$ is $$t_{n,n}=\left(\prod_{k=0}^{n-1} g_{k,k+1}\right)^{-1}\neq 0 \quad \text{ for all \;} n \geq 1.$$ Therefore, the corresponding $n$th monic polynomial orthogonal with respect to $\langle \cdot, \cdot \rangle_{\mathcal{G}}$ is $q_n=\tau_{n,n} \,Q_n$ and $\|q_n\|_{\mathcal{G}}=\tau_{n,n}=t_{n,n}^{-1}$ as in \eqref{InfiniteMatForm03}.\end{remark} \begin{theorem}The matrices $\mathcal{G}$ and $\mathcal{M}$ are closely related by the expression \begin{equation}\label{RelatHessHank} \mathcal{G}=\mathcal{T} \mathcal{M}\mathcal{U}^{-1} \mathcal{T}^{\mathsf{T}}. \end{equation} \end{theorem} \begin{proof} From \eqref{InfiniteMatForm02}, $\displaystyle Q_{n}(x)=\sum_{i=0}^n t_{n,i} x^{i}$; therefore \begin{align*} g_{k,\ell}=&\langle x Q_{k}, Q_\ell\rangle_{\mathcal{G}}= \sum_{i=0}^k\sum_{j=0}^\ell t_{k,i} t_{\ell,j} \langle x^{i+1}, x^{j}\rangle_{\mathcal{G}}=\sum_{i=0}^k\sum_{j=0}^\ell t_{k,i} t_{\ell,j}\, m_{i+1,j}, \end{align*} which is the $(k,\ell)$-entry of the matrix $\mathcal{T} \mathcal{M}\mathcal{U}^{-1} \mathcal{T}^{\mathsf{T}}$, and \eqref{RelatHessHank} is proved. \end{proof} \begin{theorem} \label{RelaOperadores} Let $\mathcal{G} \in \mathds{M}$ be a non-degenerate Hessenberg matrix, $\mathcal{M} \in \mathds{M}$ be its associated matrix of formal moments, and let $\eta \in \mathds{Z}_{+}$ be fixed. Then \begin{equation*}\label{EqualOperator} \Phi( \mathcal{M},\eta)= \mathcal{T}^{-1} \Psi( \mathcal{G},\eta)\mathcal{T}^{-\mathsf{T}}. \end{equation*} \end{theorem} \begin{proof} From \eqref{GenMomentMatrix} and \eqref{RelatHessHank}, we get that $\mathcal{G}=\mathcal{T}^{-\mathsf{T}} \mathcal{U}^{-1} \mathcal{T}^{\mathsf{T}}$. Therefore, for each $k \in \mathds{Z}_{+}$ we obtain $\mathcal{G}^k=\mathcal{T}^{-\mathsf{T}} \mathcal{U}^{-k} \mathcal{T}^{\mathsf{T}}$, or equivalently \begin{equation}\label{RelatHessHank-2} \mathcal{U}^{-k}=\mathcal{T}^{\mathsf{T}}\mathcal{G}^k \mathcal{T}^{-\mathsf{T}} \quad \text{ and } \quad \mathcal{U}^{k}=\mathcal{T}^{-1}\mathcal{G}^{k\mathsf{T}}\mathcal{T}.
\end{equation} Now, from \eqref{Phi-operator}, \eqref{Psi-operator}, \eqref{GenMomentMatrix} and \eqref{RelatHessHank-2} it follows that \begin{align*} \Phi( \mathcal{M},\eta)=& \sum_{\ell=0}^{\eta} (-1)^\ell \binom{\eta}{\ell} \mathcal{U}^{\eta-\ell} \mathcal{M} \mathcal{U}^{-\ell} =\sum_{\ell=0}^{\eta} (-1)^\ell \binom{\eta}{\ell} \left( \mathcal{T}^{-1}\mathcal{G}^{(\eta-\ell)\mathsf{T}}\mathcal{T}\right) \mathcal{M} \left(\mathcal{T}^{\mathsf{T}}\mathcal{G}^{\ell} \mathcal{T}^{-\mathsf{T}} \right)\\ =& \sum_{\ell=0}^{\eta} (-1)^\ell \binom{\eta}{\ell} \left( \mathcal{T}^{-1}\mathcal{G}^{(\eta-\ell)\mathsf{T}}\mathcal{T}\right) \left( \mathcal{T}^{-1}\; \mathcal{T}^{-\mathsf{T}}\right) \left(\mathcal{T}^{\mathsf{T}}\mathcal{G}^{\ell} \mathcal{T}^{-\mathsf{T}} \right)\\ =& \mathcal{T}^{-1} \left( \sum_{\ell=0}^{\eta} (-1)^\ell \binom{\eta}{\ell} \mathcal{G}^{(\eta-\ell)\mathsf{T}} \mathcal{G}^{\ell} \right) \mathcal{T}^{-\mathsf{T}}= \mathcal{T}^{-1} \Psi( \mathcal{G},\eta)\mathcal{T}^{-\mathsf{T}}. \end{align*} \end{proof} \section{Favard type theorems} One of the main problems in the general theory of orthogonal polynomials is to characterize the non-degenerate Hessenberg matrices for which there exists a non-discrete positive measure $\mu$ supported on the real line such that the inner product \eqref{MatInnerProd} can be represented as \begin{equation}\label{3D-InnerProd} \langle p,q \rangle_{\mathcal{G}}=\langle p,q \rangle_{\mu}:= \int p\,q\,d\mu. \end{equation} The aforementioned characterization is the well-known \emph{Favard Theorem} (see \cite{Dur93} or \cite{AlMa01} for an overview of this theorem and its extensions), which we revisit according to the viewpoint of this paper. \begin{theorem}[Favard theorem] \label{Th-Favard} Let $\mathcal{G}$ be a non-degenerate Hessenberg matrix and $\langle \cdot, \cdot \rangle_{\mathcal{G}}$ be the inner product on $\mathds{P}$ defined by \eqref{MatInnerProd}. Then, there exists a non-discrete positive measure $\mu$ such that $\langle p,q \rangle_{\mathcal{G}}= \langle p, q \rangle_{\mu}$ for all $p,q \in \mathds{P}$ if and only if \; $\Psi( \mathcal{G},1)=\mathcal{O}$. \end{theorem} \begin{proof} Assume that there exists a non-discrete positive measure $\mu$ such that $\langle p,q \rangle_{\mathcal{G}}= \langle p, q \rangle_{\mu}$ for all $p,q \in \mathds{P}$, where $\mathcal{G}$ is a non-degenerate Hessenberg matrix. From the orthogonality of the generated polynomials $Q_n$ (Theorem \ref{Th-FormalOP}) and the fact that the operator of multiplication by the variable is symmetric with respect to $\langle \cdot, \cdot \rangle_{\mu}$ (i.e. $\langle x p, q \rangle_{\mu}=\langle p, x q \rangle_{\mu}$), it is clear that $\mathcal{G}$ is a symmetric tridiagonal matrix, which is equivalent to $\Psi( \mathcal{G},1)=\mathcal{O}$. On the other hand, if $\mathcal{G}$ is a non-degenerate Hessenberg matrix such that $\Psi( \mathcal{G},1)=\mathcal{O}$, we get that $\mathcal{G}$ is a symmetric Hessenberg matrix, or equivalently a non-degenerate tridiagonal matrix. From Theorem \ref{RelaOperadores}, $$ \mathcal{O}=\Phi( \mathcal{M},1)=\mathcal{U} \mathcal{M}-\mathcal{M} \mathcal{U}^{-1},$$ i.e. $\mathcal{M} $ is a Hankel matrix, which from Proposition \ref{PropoSimDef+} is positive definite. From Lemma \ref{SchoTam-lemma}, the proof is complete.
\end{proof} Obviously, under the assumptions of Theorems \ref{Th-FormalOP} and \ref{Th-Favard}, the sequence $\{Q_n\}$ of polynomials generated by $\mathcal{G}$ is the sequence of orthogonal polynomials with respect to the measure $\mu$ (i.e. with respect to the inner product \eqref{3D-InnerProd}). \begin{example} The sequence of polynomials $\{1, x, x^2 , \ldots , x^n , \ldots \}$ is generated by the non-degenerate Hessenberg matrix $$ \mathcal{G} = \left( \begin{array}{cccc} 0 & 1 & 0 & \ldots \\ 0 & 0 & 1 & \ldots \\ 0 & 0 & 0 & \ldots \\ \vdots & \vdots & \vdots & \ddots \end{array} \right), $$ hence from Theorem \ref{Th-FormalOP}, the sequence is orthonormal with respect to the inner product \eqref{MatInnerProd}. As $\mathcal{G}$ is a non-symmetric matrix, $\Psi( \mathcal{G},1)\neq \mathcal{O}$. Then, from Theorem \ref{Th-Favard} there does not exist a non-discrete positive measure $\mu$ such that $\langle p,q \rangle_{\mathcal{G}}= \langle p, q \rangle_{\mu}$ for all $p,q \in \mathds{P}$. \end{example} \begin{proof}[{Proof of Theorem \ref{ThFavardSobolev}.}] Let $p(x)=\sum_{i=0} ^{n_1} a_i x^i$ and $q(x)=\sum_{j=0} ^{n_2} b_j x^j$ be polynomials in $\mathds{P}$ of degrees $n_1$ and $n_2$ respectively. Then, from the Sobolev inner product \eqref{Sob-InnerP} we have the representation \begin{equation} \label{Sob-InnerP-2} \langle p, q \rangle_{\vec{\mathbf{\mu}}_d}= (a_0,\cdots,a_{n_1},0,\cdots) \mathcal{S} (b_0,\cdots,b_{n_2},0,\cdots)^{\mathsf{T}} = \sum_{i=0} ^{n_1} \sum _{j=0} ^{n_2} a_i \,s_{i,j}\,{b_j} \;, \end{equation} where $ \mathcal{S}=(s_{i,j})_{i,j=0}^{\infty}$, with $s_{i,j}=\langle x^i,x^j\rangle_{\vec{\mathbf{\mu}}_d}$, is a Hankel-Sobolev matrix of index $d$. If $\mathcal{G}$ is a Hessenberg-Sobolev matrix of index $d \in {\mathds{Z}_{+}}$, from \eqref{MatInnerProd} and \eqref{Sob-InnerP-2}, proving that $ \langle p, q \rangle_{\vec{\mathbf{\mu}}_d} =\langle p, q \rangle_{\mathcal{G}} $ is equivalent to proving that $\mathcal{S} =\mathcal{M}$, where $\mathcal{M}$ is the matrix of formal moments associated to $\mathcal{G}$. Let $\mathcal{G}$ be a non-degenerate Hessenberg matrix. Assume that there exist $d \in {\mathds{Z}_{+}}$ and $\vec{\mathbf{\mu}}_d= (\mu_0,\mu_1,\dots,\mu_d) \in \mathfrak{M}_{d}(\mathds{R})$ (continuous case) such that $\mathcal{S} =\mathcal{M}$. From Theorem \ref{Th-hp1}, $\mathcal{S}$ is a Hankel-Sobolev matrix. Therefore, combining Theorem \ref{ThCharact-HankelS} and Theorem \ref{RelaOperadores}, we get that $\Psi( \mathcal{G},2d+1)=\mathcal{O}$ \; and \; $\Psi( \mathcal{G},2d)\neq\mathcal{O}$. Furthermore, each matrix $\mathcal{H}_k$ defined by \eqref{ThCharact-02} is the moment matrix of the measure $\mu_k$, which is a non-negative finite Borel measure whose support is an infinite subset. Hence, from Lemma \ref{SchoTam-lemma} we have that $\mathcal{H}_k$ is a positive definite matrix of infinite order. Conversely, let $\mathcal{G}$ be a non-degenerate Hessenberg matrix satisfying conditions 1 and 2. From Theorems \ref{ThCharact-HessenS} and \ref{RelaOperadores} we conclude that $\mathcal{M}$, the matrix of formal moments associated to $\mathcal{G}$, is a Hankel-Sobolev matrix of index $d$, i.e., there exist Hankel matrices $\mathcal{H}_{k}$, $k=0,\,1,\dots,\,d$, such that $$ \mathcal{M}=\sum_{k=0}^{d} \left(\mathcal{U}^{-k};\mathcal{D}_k \; \mathcal{H}_{k} \; \mathcal{D}_k\;\mathcal{U}^{k}\right). $$ From Theorem \ref{Th-hp1} and Lemma \ref{SchoTam-lemma}, the $S$-moment problem for $\mathcal{M}$ is definite.
Let $\mu_{d-k}$ be a solution of the problem of moments with respect to $\mathcal{H}_{d-k}$ for each $k=0,\,1,\dots,\,d$. If $\langle p, q \rangle_{\vec{\mathbf{\mu}}_d}$ is as in \eqref{Sob-InnerP}, from Proposition \ref{Uniqueness-HankelS} we get $ \mathcal{S} =\mathcal{M}$. \end{proof} The following result may be proved in much the same way as Theorem \ref{ThFavardSobolev}, using the appropriate assertions of Lemma \ref{SchoTam-lemma} for the case of measures supported on finite subsets. \begin{theorem} [Favard type theorem for the discrete case] \label{ThFavardSobolevDiscrete} Let $\mathcal{G}$ be a non-degenerate Hessenberg matrix. Then, there exist $d \in \mathds{Z}_{+}$ and $\vec{\mathbf{\mu}}_d \in \mathfrak{M}_{d}(\mathds{R})$ such that $ \langle p, q \rangle_{\vec{\mathbf{\mu}}_d} =\langle p, q \rangle_{\mathcal{G}} $ if and only if \begin{enumerate} \item $\mathcal{G}$ is a Hessenberg-Sobolev matrix of index $d \in \mathds{Z}_{+}$. \item The Hankel matrix $\mathcal{H}_{0}$ defined by \eqref{ThCharact-02} is a positive definite matrix of infinite order and, for each $k=0,\,1,\dots,\,d-1$, the matrix $\mathcal{H}_{d-k}$ is a positive definite matrix of order $m_k \in \mathds{Z}_{+}$. \end{enumerate} \end{theorem} The previous theorem is a refinement of \cite[Lemma 3]{Dur93}. \begin{theorem} [Favard type theorem for the discrete-continuous case] \label{ThFavardSobolevDiscreteCont} Let $\mathcal{G}$ be a non-degenerate Hessenberg matrix. Then, there exist $d \in \mathds{Z}_{+}$ and $\vec{\mathbf{\mu}}_d \in \mathfrak{M}_{d}(\mathds{R})$ such that $ \langle p, q \rangle_{\vec{\mathbf{\mu}}_d} =\langle p, q \rangle_{\mathcal{G}} $ if and only if \begin{enumerate} \item $\mathcal{G}$ is a Hessenberg-Sobolev matrix of index $d \in \mathds{Z}_{+}$. \item The Hankel matrix $\mathcal{H}_{d}$ defined by \eqref{ThCharact-02} is a positive definite matrix of infinite order and, for each $k= 1,\,2,\dots,\,d$, the matrix $\mathcal{H}_{d-k}$ is a positive definite matrix of order $m_k \in \mathds{Z}_{+}$. \end{enumerate} \end{theorem}
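To illustrate the notion of a positive definite matrix of finite order entering the last two theorems, we record a minimal worked example based on Lemma \ref{SchoTam-lemma}. \begin{example} Let $\mu=\delta_{-1}+\delta_{1}$ be the sum of two unit point masses, whose moments are $m_n=\int x^n\,d\mu=1+(-1)^n$. Then $$ \det\left([\mathcal{H}]_1\right)=2, \qquad \det\left([\mathcal{H}]_2\right)=\det\begin{pmatrix} 2 & 0\\ 0 & 2\end{pmatrix}=4, \qquad \det\left([\mathcal{H}]_3\right)=\det\begin{pmatrix} 2 & 0 & 2\\ 0 & 2 & 0\\ 2 & 0 & 2\end{pmatrix}=0, $$ and $\det([\mathcal{H}]_n)=0$ for all $n>2$, since every column of $[\mathcal{H}]_n$ is a truncation of either $(2,0,2,0,\dots)^{\mathsf{T}}$ or $(0,2,0,2,\dots)^{\mathsf{T}}$. Hence $\mathcal{H}=\han{m_n}$ is a positive definite matrix of order $k=2$, in agreement with Lemma \ref{SchoTam-lemma}, since the support of $\mu$ consists of precisely two points. Hankel matrices of this finite-order type are exactly those allowed for the matrices $\mathcal{H}_{d-k}$ in condition 2 of Theorems \ref{ThFavardSobolevDiscrete} and \ref{ThFavardSobolevDiscreteCont}. \end{example}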
\section{Introduction} The relevance of the concept of symmetry in quantum systems dates back to Wigner \cite{Wigner:1939cj}, who showed that a symmetry group $G$ is realized by (anti)linear and (anti)unitary operators $U_g$ on the Hilbert space, labeled by $g\in G$. In local quantum field theory, global symmetry is a central tool. On the one side, it organizes the spectrum in representations of $G$, hinting at which QFT can describe a given physical phenomenon. On the other side, global symmetries and their anomalies are among the few intrinsic and renormalization group (RG) flow invariant properties \cite{tHooft:1979rat, Callan:1984sa}, imposing selection rules on correlation functions as well as constraints in strongly coupled theories. For instance, along the RG flow all the operators compatible with the global symmetries are generated by quantum effects, so that the full classification of the global symmetries of a model is a powerful tool to have control over the flow. This is the classic notion of \emph{naturalness} \cite{tHooft:1979rat}. It is very important to remark that even if a global symmetry is not exactly realized, it is often useful to study a limit in which the symmetry is restored and discuss its consequences. Then the amount of violation of these consequences will be estimated by the amount of violation of the symmetry. It is well known that, except for a few cases involving supersymmetry \cite{Seiberg:1994pq, Seiberg:1994bz, Seiberg:1994bp}, the standard global symmetries are not enough to constrain the RG flow and the infrared phase. Nevertheless, in some models there is evidence against the generation, along the RG flow, of operators which do not violate the known symmetries of the theory \cite{Komargodski:2020mxz}. Our faith in the notion of naturalness generates a tension, which can be resolved only by enlarging the \emph{category} of what we want to call global symmetries. A revolution took place in this sense in the last decade, starting from the observation \cite{Gaiotto:2014kfa} that global symmetries lead to extended unitary topological operators $U_g[\mathcal{M}_{d-1}]$ labeled by group elements $g\in G$, supported on co-dimension one manifolds, and following group-like fusion rules $U_g[\mathcal{M}_{d-1}]U_h[\mathcal{M}_{d-1}]=U_{gh}[\mathcal{M}_{d-1}]$. It has been noticed that it is really the topological nature of these operators which can replace the usual notion of symmetry, yielding by itself their RG invariance, selection rules, anomalies, and the notion of naturalness. Quantum field theories, however, have many more topological operators than those supported on co-dimension one defects and following group-like fusion rules. Specifically, they can be supported on higher co-dimension defects, leading to the notion of higher $p$-form symmetries \cite{Kapustin:2014gua, Gaiotto:2014kfa}, or they can be a mixture of higher form symmetries of different degrees $p$, producing the so-called higher groups \cite{Kapustin:2013uxa, Cordova:2018cvg, Benini:2018reh, Tachikawa:2017gyf} (see also \cite{Cordova:2020tij, Wan:2019soo, Hsin:2020nts, Apruzzi:2021mlh, DelZotto:2022joo, Bhardwaj:2022scy, Bhardwaj:2021wif}). But even more drastically, there exist topological operators which are \emph{not} unitary, thus finding a way out from the Wigner paradigm, and which do \emph{not} follow group-like fusion rules. Instead, the fusion of two defects produces a sum of several defects.
In particular, there exist defects that do not possess an inverse under the fusion product, in sharp contrast with group-like symmetries. For this reason, this type of generalization is dubbed \emph{non-invertible symmetries} \cite{Bhardwaj:2017xup, Chang:2018iay}. In 2d theories, non-invertible symmetries generated by topological defect lines (TDLs) are ubiquitous \cite{Verlinde:1988sn, Frohlich:2006ch, Petkova:2000ip, Petkova:2013yoa}, and their correct underlying mathematical structure has been recognized to be that of certain 1-categories, namely \emph{fusion categories} \cite{etingof2005fusion, etingof2016tensor, Bhardwaj:2017xup, Chang:2018iay, Thorngren:2019iar, Thorngren:2021yso}. This means that the TDLs are objects of an Abelian category with a tensor product structure (fusion) \begin{equation} L_a\otimes L_b=\sum _{c}f_{ab}^c L_c. \end{equation} The Abelian structure means that one can construct finite direct sums of objects, while the morphisms from $L_a$ to $L_b$ form a $\mathbb{C}$-vector space $\mbox{Hom}(a,b)$. Physically, the morphisms are topological local operators changing the line $L_a$ into $L_b$, and they can be combined linearly with arbitrary $\mathbb{C}$-coefficients. The structure constants $f_{ab}^c$ are the dimensions of $\mbox{Hom}(a\otimes b, c)$. Some lines cannot be written as a sum of others, and these are called \emph{indecomposable lines}. Importantly, these lines are also \emph{simple}, meaning that they do not have endomorphisms on them except for those proportional to the identity: $\mbox{Hom}(a,a)\cong \mathbb{C}$. Moreover, if $a$ and $b$ are distinct simple lines, there are no non-trivial morphisms between them.\\ The dynamical consequences of having a fusion category symmetry have been studied, finding new constraints on the RG flow \cite{Chang:2018iay} and surprising solutions to problems of apparent lack of naturalness \cite{Komargodski:2020mxz}. These results demonstrate that exploring new examples of symmetries is not merely an academic exercise but can give interesting physical insights.\\ In 3d TQFTs the line operators also form non-invertible symmetries with a similar categorical structure which takes into account the braiding, namely a \emph{modular tensor category} (MTC) \cite{Moore:1988qv, Moore:1989vd, Turaev:1994xb, Fuchs:2002cm, Kitaev:2005hzj, Kong:2013aya, Barkeshli:2014cna} (for a recent application in low dimensional holography see \cite{Benini:2022hzx}). Almost all the known examples concern discrete non-invertible symmetries, while continuous ones are believed to be rare and exotic. Moreover, until very recently, the existence of non-invertible symmetries in higher dimensions was seriously questioned, because of the lack of concrete examples. Last but not least, since non-invertible topological line defects are described by fusion category theory, it is natural to expect that the higher dimensional generalization is described by \emph{fusion higher category theory} \cite{douglas2018fusion, decoppet2021weak, decoppet2022multifusion}, which is a mathematical topic still in development. Roughly speaking, when one considers topological defects of dimension at least two, the morphisms are not local operators but extended ones. For instance, for a symmetry generated by surface defects, the morphisms are TDLs stacked on the defects, which are by themselves objects of a fusion category. They have their own morphisms, which in the higher category of surface operators are called \emph{2-morphisms}, and they form a vector space.
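Before moving to higher dimensions, it is useful to keep in mind the prototypical 2d example of a fusion category symmetry: the critical Ising model, whose three simple Verlinde lines \cite{Verlinde:1988sn} $1$, $\eta$ and $\mathcal{N}$ obey \begin{equation*} \eta\otimes\eta=1, \qquad \eta\otimes\mathcal{N}=\mathcal{N}\otimes\eta=\mathcal{N}, \qquad \mathcal{N}\otimes\mathcal{N}=1\oplus\eta, \end{equation*} so that $\eta$ generates an invertible $\mathbb{Z}_2$ symmetry, while the Kramers-Wannier duality line $\mathcal{N}$ is non-invertible, with $f_{\mathcal{N}\mathcal{N}}^{1}=f_{\mathcal{N}\mathcal{N}}^{\eta}=1$, i.e. both $\mbox{Hom}(\mathcal{N}\otimes\mathcal{N},1)$ and $\mbox{Hom}(\mathcal{N}\otimes\mathcal{N},\eta)$ are one-dimensional.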
Despite the lack of a mathematically established structure, in the last year, many examples of non-invertible symmetries in higher dimensions have been discovered \cite{Nguyen:2021yld,Wang:2021vki,Choi:2021kmx, Kaidi:2021xfk, Roumpedakis:2022aik, Choi:2022zal, Bhardwaj:2022yxj, Kaidi:2022uux, Choi:2022jqy, Cordova:2022ieu,Hayashi:2022fkw,Arias-Tamargo:2022nlf}, which are related to some gauging procedure. In particular, in \cite{Bhardwaj:2022yxj} many discrete 1-form non-invertible symmetries are constructed in 4d by generalizing a procedure well known in 3d TQFT \cite{Barkeshli:2014cna}, which consists in gauging a discrete invertible 0-form symmetry $G^{(0)}$ which acts as an automorphism of the set of generators $\left\{U_h\right\}_{h\in H}$ of a discrete 1-form symmetry $H^{(1)}$. The basic idea is the following. The generators of $H^{(1)}$ fall into various orbits $\mathcal{O}_{[h]}=\left\{ U_{g\cdot h} \; | \; g \in G \right\}$ for the action $g: \; h\rightarrow g\cdot h$ of $G^{(0)}$ on $H^{(1)}$. After gauging $G^{(0)}$ most of the $U_h$ are no longer gauge invariant. However, instead of throwing them away, we get new indecomposable objects labeled by the orbits $[h]$, each one being the sum of the objects in the corresponding orbit \begin{equation} \widehat{U}_{[h]}=\bigoplus \left\{U_{ h'}\; | \ [h']=[h] \right\} \end{equation} up to a normalization factor. These new objects do \emph{not} have group-like fusion rules. The orbits can be long or short, depending on the stabilizer. In the 3d procedure of \cite{Barkeshli:2014cna} the objects corresponding to short orbits come in copies labeled by representations of the stabilizer. This is because in $d$ dimensions the gauging of the 0-form symmetry produces a quantum $(d-2)$-form symmetry $\widehat{G}^{(d-2)}$, whose topological operators are the $G^{(0)}$ Wilson lines \cite{Gaiotto:2014kfa}. For $d=3$ this is a 1-form symmetry, like the non-invertible one, and indeed they are both generated by lines. Therefore the line generators of the non-invertible symmetry can be \emph{dressed} with those of the quantum symmetry. Thus the full set of indecomposable objects in the gauged theory is given by $\widehat{U}_{[h]}^{\rho}=\widehat{U}_{[h]} \eta _{\rho}$, where $\rho \in \mbox{Rep}(G)$ and $\eta _{\rho}$ is the corresponding Wilson line. However, some of these $G^{(0)}$ Wilson lines are "absorbed" by the non-invertible lines, and one can argue that what effectively labels the different copies of $\widehat{U}_{[h]}$ are the irreducible representations of the stabilizer of $h\in H$ for the $G$ action on $H$ \cite{Barkeshli:2014cna}. Strictly speaking, this \emph{dressing procedure} no longer works in $d>3$ because the generators of the 1-form non-invertible symmetries have dimension $d-2>1$. The non-invertible symmetry is described by a $(d-2)$-category, and the Wilson lines of the dual symmetry appear as $(d-3)$-morphisms. In this paper, we will consider only the 4d case, in which the non-invertible 1-form symmetry is generated by topological surfaces and the quantum $\mbox{Rep}(G)$ symmetry is generated by lines, which can enter the category of 1-morphisms of the surfaces. At first sight, the indecomposable objects of the gauged theory are labeled only by the orbits of the $G$-action on $H$. The approach of \cite{Bhardwaj:2022yxj} was that there are indeed no further objects, but the dual symmetry generated by the 1-endomorphisms should sometimes be gauged in the fusion rules.
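To make the orbit structure concrete, consider the simplest illustration, which anticipates the $O(2)$ discussion below: take $H^{(1)}=\mathbb{Z}_N$ with generators $U_h$, $h\in\mathbb{Z}_N$, and let $G^{(0)}=\mathbb{Z}_2$ act by charge conjugation $h\mapsto -h$. The orbits $[h]=\{h,-h\}$ are long, with trivial stabilizer, for $h\neq 0,N/2$, and short, with stabilizer the full $\mathbb{Z}_2$, for $h=0$ and (when $N$ is even) $h=N/2$. After gauging, each long orbit gives a single non-invertible object $\widehat{U}_{[h]}=U_h\oplus U_{-h}$, while in 3d each short orbit gives two objects $\widehat{U}_{[h]}\eta_{\rho}$, labeled by the two irreducible representations $\rho$ of the stabilizer $\mathbb{Z}_2$.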
On this aspect we propose a somewhat different but equivalent point of view, which unifies the stories in 3d and higher dimensions. The idea is that the quantum 2-form symmetry $\mbox{Rep}(G)$, which is non-invertible for non-Abelian $G$, can be condensed on a surface $\Sigma$, generating the so-called \emph{condensation defects} \cite{Gaiotto:2019xmp, Roumpedakis:2022aik}. There are as many gauging procedures as there are subgroups of $G$. In section \ref{sec:global fusion} we will give an alternative construction of the condensation defects which makes it clear that they are in one-to-one correspondence with the subgroups of $G$, also in the non-Abelian case. Our point of view is that in the gauged theory for each $G$-orbit we have many indecomposable defects, labeled by the subgroups of the stabilizer, and these are obtained from the \emph{naked} defect by dressing it with the various condensation defects. From this perspective, the 2-category must be enlarged by including all the dressed defects, similarly to the 3d case. This perspective is also motivated by the mathematical literature \cite{Gaiotto:2019xmp, douglas2018fusion}, which suggests that all the condensation defects must be added to obtain the so-called \emph{idempotent completion}\footnote{Other names in the literature are Karoubi completion, or condensation completion.} of the fusion category. This way of obtaining non-invertible symmetries by gauging automorphisms can produce a large number of examples \cite{Bhardwaj:2022yxj}, which is a very interesting ``database'' of higher category symmetries, potentially also for mathematicians. However, from a physical point of view, the theories one gets in this way are somewhat exotic. One of the aims of this paper is to provide an instance in which the gauging procedure is very natural, and is in some sense built-in. This is the case of the Weyl group $W_{G}\subset G$ in 4d $G$ Yang-Mills (YM) theory, which is automatically gauged. Does this produce a non-invertible symmetry? Strictly speaking, the answer is no, basically because there is no theory producing $G$ YM theory upon gauging $W_G$. However, if we go to high energy where the theory becomes free\footnote{With an abuse of terminology, by \emph{free} in the UV we will always mean \emph{weakly coupled}.}, a partial fixing of the non-Abelian gauge invariance is achieved by looking at the gauge theory for the Cartan torus $U(1)^r$ \cite{tHooft:1981bkw} (see section \ref{sec:3} for a detailed discussion). Here the Weyl group appears as a global 0-form symmetry, and thus we need to gauge it to obtain a theory related to the UV limit of YM theory. We are led to look at the theory with gauge group $U(1)^r\rtimes W_G$. The Abelian gauge theory $U(1)^r$ has continuous 1-form symmetries on which the Weyl group acts as an automorphism. Thus we are precisely in the situation described above, except that the 1-form symmetry is continuous. Then the $U(1)^r\rtimes W_G$ gauge theory is expected to have continuous non-invertible 1-form symmetries, described by a 2-category. We will focus on the case $G=SU(N)$, so we consider the $U(1)^{N-1}\rtimes S_N$ gauge theory in 4d. The 3d analog of this theory has been constructed on the lattice and with a different goal in \cite{Nguyen:2021yld}, where it has been dubbed \emph{semi-Abelian theory}. In that paper it is also pointed out that there are non-invertible symmetries. However, their fusion rules have not been computed, and only a subset of these symmetries has been discussed.
In particular, even if it is pointed out there that the general topological operators are parametrized by $N-1$ parameters, the ones studied in \cite{Nguyen:2021yld} depend only on one compact variable. On the other hand, we will see that the parameter space of the non-invertible symmetry is $U(1)^{N-1}/S_N$, and that the fusion rules are \begin{equation} \mathcal{T}(\boldsymbol{\alpha})\otimes \mathcal{T}(\boldsymbol{\beta})=\sum _{\sigma \in H_{\boldsymbol{\alpha} } \backslash S_N /H_{\boldsymbol{\beta}}}f_{\alpha \beta}^{\sigma}\;\; \mathcal{T}\left(\boldsymbol{\alpha}+\mathfrak{S}^{\vee}_{\sigma }\cdot \boldsymbol{\beta} \right) \end{equation} where \begin{equation} f_{\alpha \beta}^{\sigma} = \frac{|H_{\boldsymbol{\alpha}+\mathfrak{S}^{\vee}_{\sigma }\cdot \boldsymbol{\beta}}|}{|H_{\boldsymbol{\alpha}}\cap \sigma H_{\boldsymbol{\beta}} \sigma ^{-1}|} . \end{equation} Here $\mathfrak{S}_{\sigma}^{\vee}$ is the relevant action of the permutation group $S_N$ on the labels $\boldsymbol{\alpha}\in U(1)^{N-1}$, while $H_{\boldsymbol{\alpha}}\subset S_N$ denotes the stabilizer for this action. One of our main results is that, while the fusions above hold when the defect does not contain non-trivial 1-cycles, on a general topology we have to modify the formula above by including the condensations \begin{equation} \mathcal{T}(\boldsymbol{\alpha})[\Sigma]\otimes \mathcal{T}(\boldsymbol{\beta})[\Sigma] =\sum _{\sigma \in H_{\boldsymbol{\alpha}}\backslash S_N /H_{\boldsymbol{\beta}}}f_{\alpha \beta}^{\sigma} \; P_{\mbox{Rep}\left(\left(H_{\boldsymbol{\alpha}}\cap H_{\boldsymbol{\beta}}\right)^{\perp}\right)}\otimes \mathcal{T}\left(\boldsymbol{\alpha} +\mathfrak{S}_{\sigma} ^{\vee} \cdot \boldsymbol{\beta}\right)[\Sigma]\;. \end{equation} The operator $P_{\mbox{Rep}\left(\left(H_{\boldsymbol{\alpha}}\cap H_{\boldsymbol{\beta}}\right)^{\perp}\right)}$ coincides, up to a normalization, with a condensation defect, and it is a projector in the sense that it squares to itself. Notice that $U(1)^{N-1}/S_N$ coincides with the set of conjugacy classes of $SU(N)$, which also labels the Gukov-Witten (GW) operators of $SU(N)$ YM theory \cite{Gukov:2006jk, Gukov:2008sn, Gukov:2014gja}. In the full YM theory, only the GW operators labeled by central elements are topological, and they generate the 1-form center symmetry. We propose that \emph{all} the GW operators in YM theory become topological at high energy and form a non-invertible symmetry, broken down to the center symmetry by the RG flow. That the $SU(N)$ YM theory at high energy has non-invertible symmetries has been recently observed from a different point of view also in \cite{Cordova:2022rer}. The fact that these two distinct arguments agree is reassuring. Moreover, in that paper, the fusion rules have been computed only in the $N=2$ case, and they agree with those of the $U(1)\rtimes S_2$ gauge theory\footnote{The coefficients appearing are different, but this has to do with different choices of normalization. However, in our normalization, the fusion coefficients are always integers and, as we point out in the main text, this is important since they count the number of 1-morphisms up to endomorphisms.}. It is reasonable that by applying the methods of \cite{Cordova:2022rer} for any $N$ one gets the fusion rules which we compute in the $U(1)^{N-1}\rtimes S_N$ theory, thus confirming that the symmetry found in that paper is really the same as the one discussed here. We leave this interesting problem for future work.
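As a quick consistency check of the fusion rules above, one can specialize to $N=2$, where $S_2=\{e,\tau\}$ acts by $\boldsymbol{\alpha}\mapsto -\boldsymbol{\alpha}$. For generic $\boldsymbol{\alpha},\boldsymbol{\beta}$ the stabilizers are trivial, the double coset space is all of $S_2$, all the coefficients $f_{\alpha\beta}^{\sigma}$ equal $1$, and \begin{equation*} \mathcal{T}(\boldsymbol{\alpha})\otimes\mathcal{T}(\boldsymbol{\beta})=\mathcal{T}(\boldsymbol{\alpha}+\boldsymbol{\beta})+\mathcal{T}(\boldsymbol{\alpha}-\boldsymbol{\beta}), \end{equation*} while if $\boldsymbol{\beta}$ is a fixed point of the $S_2$ action ($H_{\boldsymbol{\beta}}=S_2$) and $\boldsymbol{\alpha}$ is generic, there is a single double coset and $\mathcal{T}(\boldsymbol{\alpha})\otimes\mathcal{T}(\boldsymbol{\beta})=\mathcal{T}(\boldsymbol{\alpha}+\boldsymbol{\beta})$: the fixed points fuse invertibly. These are precisely the local fusion rules of the $U(1)\rtimes S_2=O(2)$ gauge theory derived in section \ref{sec:N=2}.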
In 2d YM theories it was already pointed out in \cite{Nguyen:2021naa} that all the GW operators are topological, and they form a non-invertible symmetry at all energy scales. This conclusion is peculiar to 2d YM theory since the theory is quasi-topological. In $d>2$ this is obviously not true, and indeed this symmetry exists only in the UV limit.\\ The connection between the UV symmetries of YM theory and those of the $U(1)^{N-1}\rtimes S_N$ gauge theory is important because the second case is much more under control, and we are able to discuss the 2-categorical structure of this symmetry (section \ref{sec:global fusion}), which indeed was not analyzed before. The analysis of this structure is the bulk of the paper. In our examples, we find several properties which we believe to be general aspects of 2-category symmetries. For instance, we argue that the almost universal presence of condensation defects on the right-hand side of the fusion rules is what distinguishes the global fusion rules (those obtained on general manifolds) from the local ones, which hold only if the defects are topologically trivial. Moreover, we find that the fusion coefficients are always positive integer numbers. We interpret these numbers as counting the 1-morphisms up to possible endomorphisms. This is an important difference with respect to fusion 1-category symmetries, in which the indecomposable objects cannot have non-trivial endomorphisms, and therefore the fusion coefficients directly count the morphisms living at the junctions. Moreover, while in fusion 1-categories the morphisms form a vector space, and therefore these numbers are the dimensions of these vector spaces, in fusion 2-categories the 1-morphisms form a category by themselves, and the numbers are better interpreted as quantum dimensions. Finally, after understanding the map between the gauge invariant operators of YM theory and the $U(1)^{N-1}\rtimes S_N$ gauge theory, we are able to determine how this non-invertible symmetry acts on line operators, in a way compatible with the known results when we restrict to the group-like subcategory $\mathbb{Z}_N^{(1)}$ corresponding to the center. The rest of the paper is organized as follows. In section \ref{sec:2} we study the $U(1)^{N-1}\rtimes S_N$ gauge theory, by analyzing the full spectrum of gauge-invariant operators, finding the continuous non-invertible 1-form symmetries. The intricate 2-categorical structure of this symmetry is analyzed in section \ref{sec:global fusion}, where we also explain the connection, in our specific example, between the concept of \emph{global fusions} introduced in \cite{Bhardwaj:2022yxj} and the \emph{higher condensation defects} constructed in \cite{Roumpedakis:2022aik}. Then section \ref{sec:3} is devoted to the connection between the $U(1)^{N-1}\rtimes S_N$ gauge theory and $SU(N)$ YM theory at high energy. After a path integral argument, we show a mapping among all kinds of gauge invariant observables of the two theories (local, line, and surface operators). Then we identify the center symmetry of $SU(N)$ YM theory with a discrete subset of topological surface operators of the $U(1)^{N-1}\rtimes S_N$ theory, by showing that they give rise to the same Ward identities with the Wilson line operators. We also discuss how all the possible choices of the global structure of the YM theory are obtained from the point of view of the $U(1)^{N-1}\rtimes S_N$ gauge theory. We conclude in section \ref{sec:4} with a discussion on possible future directions.
\section{The 4d $U(1)^{N-1}\rtimes S_N$ Gauge Theory} \label{sec:2} This section is devoted to the $U(1)^{N-1}\rtimes S_N$ gauge theory on its own. We show that the theory has non-invertible 1-form symmetries labeled by continuous parameters valued in $U(1)^{N-1}/S_N$. This non-invertible symmetry is described by a 2-category which we study in detail, discovering an interesting mathematical structure. \subsection{Abelian Gauge Theory}\label{sec:Abeliangaugetheory} We start with a free Abelian gauge theory with gauge group $U(1)^{N-1}$. The definition of the theory is encoded in the choice of the spectrum of Wilson line operators, namely an $(N-1)$-dimensional lattice. A way to make this explicit is by exhibiting a basis for the gauge fields $\mathcal{A}_{i=1,...,N-1}$ in which the Wilson lines have integer charges. This is a choice of a symmetric positive definite $(N-1)\times (N-1)$ matrix $Q^{(N-1)}_{ij}$ such that the action is \begin{equation} S=\frac{1}{4e^2}\int d^4 x \;Q^{(N-1)}_{ij}\mathcal{F}_i\wedge * \mathcal{F}_j \end{equation} where $\mathcal{F}_i=d \mathcal{A}_i$. Then the most general Wilson line is \begin{equation} \label{eq: general Abelian Wline} \mathcal{W}(\boldsymbol{n})[\gamma]= \mathcal{W}(n_1,...,n_{N-1})[\gamma]=\prod _{i=1}^{N-1}\mathcal{W}_i[\gamma] ^{n_i} \ , \ \ \ \ \mathcal{W}_i [\gamma] ^{n_i}:=\exp{\left(in_i\oint _{\gamma} \mathcal{A}_i\right)} \end{equation} where $\boldsymbol{n}=(n_1,...,n_{N-1})\in \mathbb{Z}^{N-1} \label{eq:globalchoice}$. To make the action of the $S_N$ 0-form symmetry explicit, we define the theory by demanding that, upon introducing $\mathcal{A}_N=-\mathcal{A}_1-...-\mathcal{A}_{N-1}$, the action takes the form\footnote{Here $\mathcal{F}_i\mathcal{F}_j$ means $\mathcal{F}_i\wedge *\mathcal{F}_j$.} $$ S=\frac{1}{4}\int d^4 x\left(\mathcal{F}_1^2+\dots+\mathcal{F}_N^2\right)= \frac{1}{4}\int d^4 x\left(\sum _{i=1}^{N-1}2\mathcal{F}_i^2+\sum _{i<j}^{N-1}2\mathcal{F}_i\mathcal{F}_j\right)=\frac{1}{4}\int d^4 x\, Q^{(N-1)}_{ij}\mathcal{F}_i\mathcal{F}_j $$ thus defining the quadratic form $Q^{(N-1)}$ as \begin{equation} Q^{(N-1)}_{ij}=1+\delta _{ij}\,, \quad \mbox{with} \quad \det (Q^{(N-1)})=N\,, \quad \left(Q^{(N-1)}\right)^{-1}_{ij}=\frac{-1+N\delta _{ij}}{N}. \label{eq:definitionQ} \end{equation} The $S_N$ symmetry permutes the connections $\mathcal{A}_{i=1,...,N}$. On the $N-1$ field strengths $\mathcal{F}_1,\dots,\mathcal{F}_{N-1}$ it acts in the \emph{standard representation} of $S_N$, which we denote by $\mathfrak{S}$ (see appendix \ref{sec:SN}). This obviously also induces an action of $S_N$ on the Wilson lines, which is conveniently rewritten as an action on the charges: \begin{equation} \label{eq:action S_n on Wlines} \sigma \cdot \mathcal{W}(\boldsymbol{n})=\exp{\left(i\oint\sum _{j=1} ^{N-1}n_j\mathfrak{S}_{\sigma}\left(\mathcal{A}_j\right)\right)}=\exp{\left(i\oint\sum _{j=1} ^{N-1}\mathfrak{S}_{\sigma ^{-1}}^{\vee}( n_j)\mathcal{A}_j\right)}=\mathcal{W}\left(\mathfrak{S}_{\sigma ^{-1}}^{\vee}\cdot \boldsymbol{n}\right) \; . \end{equation} We have introduced $\mathfrak{S}_{\sigma} ^{\vee}\cdot \boldsymbol{n}=\left(\mathfrak{S}_{\sigma}^{\vee}(n_1),...,\mathfrak{S}^{\vee}_{\sigma}(n_{N-1})\right)$, with $\mathfrak{S}^{\vee}_{\sigma}(n _i)=m_{\sigma(i)}-m_{\sigma(N)}$, where $n_i=m_i-m_N$. This is a dual representation of $S_N$ on $N-1$ variables (see appendix \ref{sec:SN}).
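For concreteness, let us spell out the simplest non-Abelian case $N=3$: \begin{equation*} Q^{(2)}=\begin{pmatrix} 2 & 1\\ 1 & 2 \end{pmatrix}, \qquad \det (Q^{(2)})=3, \qquad \left(Q^{(2)}\right)^{-1}=\frac{1}{3}\begin{pmatrix} 2 & -1\\ -1 & 2 \end{pmatrix}, \end{equation*} in agreement with \eqref{eq:definitionQ}. Moreover, the dual action of, e.g., the transpositions $(1\,2)$ and $(1\,3)$ on the charges reads $\mathfrak{S}^{\vee}_{(1\,2)}\cdot(n_1,n_2)=(n_2,n_1)$ and $\mathfrak{S}^{\vee}_{(1\,3)}\cdot(n_1,n_2)=(-n_1,n_2-n_1)$, as follows directly from $\mathfrak{S}^{\vee}_{\sigma}(n_i)=m_{\sigma(i)}-m_{\sigma(N)}$ with $n_i=m_i-m_N$.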
We also have electric GW operators \cite{Gukov:2006jk, Gukov:2008sn} (for a review \cite{Gukov:2014gja}) \begin{equation} \label{eq: general abelin eGW} \mathcal{D} (\boldsymbol{\alpha})[\Sigma]=\mathcal{D}(\alpha _1,...,\alpha _{N-1})[\Sigma ]=\prod _{i=1}^{N-1} \mathcal{D}_i (\alpha _i)[\Sigma] \ , \ \ \ \mathcal{D}_i(\alpha_i)[\Sigma]:=\exp{\left(i\alpha _i \int _{\Sigma} \frac{*\mathcal{F}_i}{2\pi}\right)}. \end{equation} The variables $\boldsymbol{\alpha}=(\alpha _1,...,\alpha _{N-1})$ parametrize an $(N-1)$-dimensional torus (the precise periodicity is shown below). These operators are the generators of the electric 1-form symmetry \cite{Gaiotto:2014kfa} $\left(U(1)_{e}^{(1)}\right)^{N-1}$. On the GW operators $S_N$ acts as it does on the Wilson lines: \begin{equation} \sigma \cdot \mathcal{D}(\boldsymbol{\alpha})= \mathcal{D}\left(\mathfrak{S}_{\sigma ^{-1}}^{\vee} \cdot \boldsymbol{\alpha}\right). \end{equation} The electric GW operators $\mathcal{D}(\boldsymbol{\alpha})$ have an action on the Wilson lines $\mathcal{W}(\boldsymbol{n})$ by linking, and a simple computation shows the following Ward identity \begin{equation} \label{eq:GW on Lines Abelian} \mathcal{D}(\boldsymbol{\alpha})[\Sigma ] \cdot \mathcal{W}(\boldsymbol{n})[\gamma]=\exp{\left(i\; Lk(\Sigma ,\gamma)\sum _{i,j=1} ^{N-1}\alpha _i \left(Q^{(N-1)}\right)^{-1}_{ij} n_j \right)} \mathcal{W}(\boldsymbol{n})[\gamma] \end{equation} where $ Lk(\Sigma ,\gamma)$ denotes the linking number between $\Sigma$ and $\gamma$. From this we deduce the periodicity $\alpha _i \sim \alpha _i +2\pi w_jQ_{ji}$, $w_j\in \mathbb{Z}$. Equivalently, the variables $$ \beta _i:=\alpha _j \left(Q^{(N-1)}\right)^{-1}_{ji} $$ are $2\pi$-periodic, thus parametrizing a torus $U(1)^{N-1}$. We remark that an analogous discussion holds for the \emph{'t Hooft lines} $\widetilde{\mathcal{W}}(\boldsymbol{n})$ and magnetic GW operators $\widetilde{\mathcal{D}}(\boldsymbol{\alpha})$. However, the global structure we have chosen restricts the set of allowed 't Hooft lines by Dirac quantization conditions. We will discuss this in detail in section \ref{sec:global_structures}. \subsection{Warm Up: $N=2$ Case} \label{sec:N=2} Before we face the general case, it is useful to study the baby example $N=2$, which is simpler since $S_2=\mathbb{Z}_2$ is Abelian, but captures several features of the general case. Indeed $U(1)\rtimes \mathbb{Z}_2=O(2)$ and the model is known as the $O(2)$ gauge theory \cite{Kiskis:1978ed,Schwarz:1982ec}. This subsection contains a review of discussions in \cite{Heidenreich:2021xpr} and \cite{Bhardwaj:2022yxj}, where the model has been shown to have non-invertible symmetries, but we also introduce new points, which we will expand on in the general case. We start from the $U(1)$ Maxwell theory, in which $S_2=\mathbb{Z}_2$ acts as charge conjugation by reversing the sign of the connection $\mathcal{A}$; we then gauge this symmetry, obtaining the $U(1)\rtimes \mathbb{Z}_2$ theory. A class of operators of this theory consists of the gauge invariant operators of the $U(1)$ theory, but we also have the $\mathbb{Z}_2$ Wilson line \begin{equation} \eta [\gamma]= e^{i\oint_{\gamma} a_2} \end{equation} where $a_2 \in H^1(\mathcal{M}_4,\mathbb{Z}_2)$ is the dynamical $\mathbb{Z}_2$ gauge field. The $\eta$ line is topological and generates the quantum 2-form symmetry $\widehat{\mathbb{Z}}_2^{(2)}$, since $\eta^2 = 1$. Let us discuss the $\mathbb{Z}_2$ invariant combinations of operators of the original Abelian theory, which remain good operators after gauging.
The local operators are all the even polynomials in the field strength. The Wilson lines $\mathcal{W}(n)$ of the Maxwell theory are labeled by one integer, their charge, and $\mathbb{Z}_2$ acts by reversing the sign of $n$. The Wilson line operators of the $O(2)$ gauge theory are obtained from those of $U(1)$ by summing over the $\mathbb{Z}_2$ orbits: \begin{equation} \mathcal{V}(n)[\gamma]=\mathcal{W}(n)[\gamma]+\mathcal{W}(-n)[\gamma]=e^{in\oint _{\gamma} \mathcal{A}}+e^{-in\oint _{\gamma} \mathcal{A}}. \end{equation} We have a similar story for the electric GW operators. Imitating the well-known 3d procedure of \cite{Barkeshli:2014cna} described in the introduction, we build the gauge-invariant surface operators by summing over the $\mathbb{Z}_2$ orbits. For reasons that will be clear in the following, we normalize the operators by dividing them by $|H_{\alpha}|$, where $H_{\alpha}\subset \mathbb{Z}_2$ is the stabilizer of $\alpha$ \begin{equation} \mathcal{T}(\alpha)[\Sigma]=\frac{1}{|H_{\alpha}|}\left(\mathcal{D}(\alpha)[\Sigma]+\mathcal{D}(-\alpha)[\Sigma]\right)= \frac{1}{|H_{\alpha}|}\left(e^{i\alpha \int _{\Sigma} *\mathcal{F}}+e^{-i\alpha \int _{\Sigma} *\mathcal{F}}\right). \end{equation} In this case $H_{\alpha}$ can be either trivial (for $\alpha \neq 0,2\pi$) or equal to $\mathbb{Z}_2$ (for $\alpha =0,2\pi$). The operators $\mathcal{T}(\alpha)$ are \emph{indecomposable} objects after gauging, meaning that they cannot be written as a direct sum of other objects. Since $Q^{(1)}=2$ in our normalization, $\mathcal{D}(\alpha)$ is parametrized by $\alpha \in [0,4\pi)$. Then the manifold where $\alpha$ takes values in the $O(2)$ theory is $U(1)/\mathbb{Z}_2=[0,2\pi]$, which is singular since $\alpha =0,2\pi$ are fixed points of the $\mathbb{Z}_2$ action. The somewhat surprising fact is that, since these operators are topological, they can be regarded as generators of a symmetry, even though $\mathcal{T}(\alpha)$ is not a unitary operator and does not satisfy a group law multiplication: \begin{equation} \label{eq:local fusion O(2)} \mathcal{T}(\alpha)\otimes \mathcal{T}(\beta)=\frac{1}{|H_{\alpha}||H_{\beta}|}\Big(|H_{\alpha+\beta}|\mathcal{T}(\alpha+\beta)+|H_{\alpha-\beta}|\mathcal{T}(\alpha-\beta)\Big). \end{equation} This is a non-invertible symmetry \cite{Bhardwaj:2017xup}. In the last few years this new type of symmetry has been analyzed extensively in 2d (for instance \cite{Bhardwaj:2017xup,Chang:2018iay,Thorngren:2019iar,Komargodski:2020mxz,Thorngren:2021yso}), and very recently also in higher dimensions \cite{Choi:2021kmx, Kaidi:2021xfk, Roumpedakis:2022aik, Choi:2022zal, Bhardwaj:2022yxj, Choi:2022jqy, Cordova:2022ieu}. However, most of the examples in the literature discuss \emph{discrete} non-invertible symmetries, while the non-invertible symmetry of the $O(2)$ gauge theory, as well as the other cases we discuss in the present paper, are \emph{continuous} non-invertible symmetries. Until recently, these were believed to be a very rare and exotic type of symmetry. One of our aims is to show that they can appear quite naturally, and that they have some features similar to more common continuous symmetries.
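The fusion \eqref{eq:local fusion O(2)} follows directly from the definition and the Abelian group law $\mathcal{D}(\alpha)\otimes\mathcal{D}(\beta)=\mathcal{D}(\alpha+\beta)$; for instance, for generic $\alpha,\beta$ (trivial stabilizers), \begin{align*} \mathcal{T}(\alpha)\otimes\mathcal{T}(\beta) &= \big(\mathcal{D}(\alpha)+\mathcal{D}(-\alpha)\big)\otimes\big(\mathcal{D}(\beta)+\mathcal{D}(-\beta)\big)\\ &= \mathcal{D}(\alpha+\beta)+\mathcal{D}(-\alpha-\beta)+\mathcal{D}(\alpha-\beta)+\mathcal{D}(\beta-\alpha)\\ &= \mathcal{T}(\alpha+\beta)+\mathcal{T}(\alpha-\beta). \end{align*}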
This shows that the large and continuous non-invertible symmetry contains an invertible $\mathbb{Z}_2^{(1)}$ 1-form symmetry, which is nothing but the center symmetry since $\mathcal{Z}(O(2))=\mathbb{Z}_2$. It is also important to notice that in the fusion \eqref{eq:local fusion O(2)} the coefficients are always integer numbers. This is obvious when $H_{\alpha}$ and $H_{\beta}$ are both either trivial or $\mathbb{Z}_2$. When instead $H_{\alpha}=1$ but $H_{\beta}=\mathbb{Z}_2$, the $1/2$ factor is cancelled because $\mathcal{T}(\alpha+\beta)=\mathcal{T}(\alpha-\beta)$. We will show that an analogous mechanism takes place for general $N$. This fact is important because the fusion coefficients have a meaning and must be integer numbers: when $\mathcal{T}(\gamma)$ appears in the fusion $\mathcal{T}(\alpha)\otimes \mathcal{T}(\beta)$ it means that there is a fusion category of 1-morphisms $\mathcal{T}(\alpha)\otimes \mathcal{T}(\beta)\rightarrow \mathcal{T}(\gamma)$, and the coefficient counts the number of simple lines in this category, or more precisely its \emph{total quantum dimension}. However, since some objects have non-trivial endomorphisms, this counting is only up to these endomorphisms. We will expand on this point in the general case. The non-unitarity of the GW operators $\mathcal{T}(\alpha)$ for $\alpha \neq 0, 2\pi$ reflects itself in the fact that the charges of Wilson lines are not phases, as follows from the generalized Ward identity \begin{equation} \mathcal{T}(\alpha)[\Sigma]\cdot \mathcal{V}(n)[\gamma]=\frac{2}{|H_{\alpha}|}\cos{\left( Lk(\Sigma,\gamma)\; n\frac{\alpha}{2}\right)} \mathcal{V}(n)[\gamma]. \end{equation} We get a phase only for $\alpha =0,2\pi$, for which the GW operators are group-like. This phase is $(-1)^n$, depending only on the parity of $n$. Notice that at generic values of $\alpha$, different $n$'s with the same parity give different charges. Up to this point, the discussion was a bit naive, and indeed it was correct only in the case when $\Sigma$ does not have non-trivial 1-cycles \cite{Bhardwaj:2022yxj}. When we consider topologically non-trivial defects, we need to modify the discussion above and analyze in detail the 2-categorical structure of the non-invertible symmetries. To do this, we have to incorporate the dual 2-form symmetry $\mathbb{Z}_2^{(2)}$ arising from the gauging. This story will be more complicated in the general case $N>2$ in which $S_N$ is non-Abelian, so it is worth discussing the symmetry structure first in this simple example. Before gauging, the electric 1-form symmetry has a very simple 2-categorical structure: the indecomposable objects are $\left\{\mathcal{D}(\alpha)\right\}_{\alpha \in [0,4\pi)}$ and the category of 1-morphisms $\mathcal{D}(\alpha)\rightarrow \mathcal{D}(\beta)$ is empty unless $\alpha=\beta$, in which case it contains only the identity line. After gauging, we get one additional topological operator, namely the non-trivial $\mathbb{Z}_2$ Wilson line $\eta$, which does not affect the indecomposable objects but enters the 1-morphisms. This is a sharp difference with respect to the 3d case of \cite{Barkeshli:2014cna}, in which by \emph{dressing} the objects with $\eta$ one gets new indecomposable objects. Naively, in 4d it seems that there are no further indecomposable objects, but we will explain shortly that this conclusion is wrong.
Since $\eta$ is a bulk line, it exists as a 1-morphism $\eta : \mathcal{T}(0)\rightarrow \mathcal{T}(0)$, but also as a 1-morphism on the surface $\mathcal{T}(2\pi)$ on which it is non-trivial\footnote{This is because the operators $\mathcal{T}(0),\mathcal{T}(2\pi)$ were indecomposable objects also in the pre-gauged theory, and they do not see the $\mathbb{Z}_2$ symmetry. Therefore it is not required to impose boundary conditions for the $\mathbb{Z}_2$ gauge field.}. Notice an important difference of higher category symmetries with respect to more standard fusion categories of topological defect lines in 2d \cite{Bhardwaj:2017xup,Thorngren:2019iar}: even for indecomposable objects, the category of 1-endomorphisms can contain non-trivial operators, because there can be lower dimensional topological bulk defects which can be put on the objects without becoming trivial. As we will see, there are further interesting cases in which additional topological lines exist only stacked on a non-trivial surface. The surface operators $\mathcal{T}(\alpha)$ with $\alpha\neq 0,2\pi$, on the other hand, absorb the Wilson line $\eta$. Therefore the only 1-endomorphism on them is the identity. This is because before gauging $\mathcal{D}(\alpha)$, $\alpha\neq 0,2\pi$, is not invariant under $\mathbb{Z}_2$, so the precise definition of the gauge invariant defect $\mathcal{T}(\alpha)=\mathcal{D}(\alpha)+\mathcal{D}(-\alpha)$ requires fixing Dirichlet boundary conditions for the $\mathbb{Z}_2$ gauge field on the surface. We will call these kinds of objects \emph{strongly simple}, following the terminology of \cite{Johnson-Freyd:2020ivj}. The discussion above is crucial whenever $\Sigma$ has non-contractible 1-cycles. When this is the case, the same line $\eta$ can be non-trivial on the surface, and generates a 0-form symmetry $\mathbb{Z}_2$ on it. As suggested in \cite{Bhardwaj:2022yxj}, the \emph{local} fusion rules \eqref{eq:local fusion O(2)} must be modified by generally gauging this 0-form symmetry on $\Sigma$, leading to the \emph{global} fusion rules. We understand this gauging procedure as well as the necessary modification of the fusion by a different argument. One can use the 2-form symmetry $\mathbb{Z}_2^{(2)}$ in the bulk to construct one further topological surface operator by condensing the symmetry on a surface, as explained in detail in \cite{Roumpedakis:2022aik}: \begin{equation} \label{eq:condensation defect Z2} \mathcal{C}[\Sigma]:=\frac{1}{\sqrt{|H_1\left(\Sigma,\mathbb{Z}_2\right)|}}\sum _{\gamma \in H_1\left(\Sigma,\mathbb{Z}_2\right)} \eta [\gamma]. \end{equation} Even if it is a surface operator, it has trivial action on lines because it is made of lower dimensional objects which cannot braid with lines. Notice that the condensation produces a dual 0-form $\mathbb{Z}_2$ symmetry living on the defect, which is generated by topological lines. The condensation defect is non-invertible, and its fusion was computed in \cite{Roumpedakis:2022aik} to be \begin{equation} \label{eq:fusion condensation Z2} \mathcal{C}[\Sigma]\otimes\mathcal{C}[\Sigma]=\mathcal{Z}(\mathbb{Z}_2;\Sigma) \mathcal{C}[\Sigma] \end{equation} where $\mathcal{Z}(\mathbb{Z}_2;\Sigma)=\sqrt{|H_1\left(\Sigma,\mathbb{Z}_2\right)|}$ is the partition function of the 2d pure $\mathbb{Z}_2$ gauge theory on $\Sigma$. The fact that the fusion coefficients are not numbers, but partition functions of TQFTs, seems to be a general feature of higher category symmetries, as pointed out in recent papers \cite{Roumpedakis:2022aik,Choi:2022zal}.
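As a minimal check of \eqref{eq:fusion condensation Z2}, take $\Sigma=T^2$, so that $H_1(T^2,\mathbb{Z}_2)\cong\mathbb{Z}_2\times\mathbb{Z}_2$ has four elements and $\mathcal{C}[T^2]=\frac{1}{2}\sum_{\gamma}\eta[\gamma]$. Since the $\mathbb{Z}_2$ Wilson lines fuse additively in homology, $\eta[\gamma_1]\eta[\gamma_2]=\eta[\gamma_1+\gamma_2]$, we find \begin{equation*} \mathcal{C}[T^2]\otimes\mathcal{C}[T^2]=\frac{1}{4}\sum_{\gamma_1,\gamma_2}\eta[\gamma_1+\gamma_2]=\sum_{\gamma}\eta[\gamma]=2\,\mathcal{C}[T^2], \end{equation*} because every $\gamma$ arises from exactly four pairs $(\gamma_1,\gamma_2)$; the coefficient indeed reproduces $\mathcal{Z}(\mathbb{Z}_2;T^2)=\sqrt{|H_1(T^2,\mathbb{Z}_2)|}=2$.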
We will derive the same result from a different point of view in subsection \ref{sec:global fusion}, also generalizing to the case in which the symmetry that we condense to produce $\mathcal{C}[\Sigma]$ is non-invertible.\\ Having introduced the condensation defect, the gauging procedure on $\mathcal{T}(\alpha)[\Sigma]$ described in \cite{Bhardwaj:2022yxj} in order to get the global fusion rule is nothing but stacking $\mathcal{C}[\Sigma]$ on $\mathcal{T}(\alpha)[\Sigma]$, up to a normalization coefficient: \begin{equation} \frac{\mathcal{T}(\alpha)[\Sigma]}{\mathbb{Z}_2}\equiv \frac{1}{\mathcal{Z}(\mathbb{Z}_2;\Sigma)}\mathcal{T}(\alpha)[\Sigma]\otimes \mathcal{C}[\Sigma]. \end{equation} With this definition, using the fusion of $\mathcal{C}[\Sigma]$ with itself we see that for all the GW operators \begin{equation} \label{eq:projection condensation} \frac{1}{\mathcal{Z}(\mathbb{Z}_2;\Sigma)}\frac{\mathcal{T}(\alpha)[\Sigma]}{\mathbb{Z}_2}\otimes \mathcal{C}[\Sigma]=\frac{\mathcal{T}(\alpha)[\Sigma]}{\mathbb{Z}_2}. \end{equation} The invariance of the GW operators with $\alpha \neq 0,2\pi$ under stacking $\eta$ is equivalent to \begin{equation} \label{eq:projection invariance} \mathcal{T}(\alpha)[\Sigma]\otimes \mathcal{C}[\Sigma]=\mathcal{Z}(\mathbb{Z}_2;\Sigma) \mathcal{T}(\alpha) [\Sigma] \ \Rightarrow \ \mathcal{T}(\alpha) [\Sigma]/\mathbb{Z}_2=\mathcal{T}(\alpha) [\Sigma]. \end{equation} The two equations above can be rephrased by introducing the \emph{projector} $P_{\mathbb{Z}_2}$, which acts on surface operators as \begin{equation} P_{\mathbb{Z}_2}\equiv \frac{1}{\mathcal{Z}(\mathbb{Z}_2;\Sigma)} \mathcal{C}[\Sigma]. \end{equation} This is a projector because $P_{\mathbb{Z}_2}^2=P_{\mathbb{Z}_2}$, and we have $P_{\mathbb{Z}_2}\otimes \mathcal{T}(\alpha)[\Sigma]\equiv\mathcal{T}(\alpha)[\Sigma]/\mathbb{Z}_2$. Then \eqref{eq:projection condensation} follows from $P_{\mathbb{Z}_2}^2=P_{\mathbb{Z}_2}$, while \eqref{eq:projection invariance} is just the statement that for the strongly simple objects $\alpha\neq 0,2\pi$, $P_{\mathbb{Z}_2}\otimes \mathcal{T}(\alpha)[\Sigma]=\mathcal{T}(\alpha)[\Sigma]$. On the other hand, the topological operators $\mathcal{T}(0)[\Sigma]/\mathbb{Z}_2$, $\mathcal{T}(2\pi)[\Sigma]/\mathbb{Z}_2$ are further indecomposable objects\footnote{The procedure of adding to the category all the defects obtained by condensations is known in category theory as \emph{idempotent completion}, \emph{Karoubi completion}, or \emph{condensation completion} \cite{Gaiotto:2019xmp, douglas2018fusion}.}. This explains why there is not really a mismatch with respect to the 3d case: also in 4d, the defects associated with the short orbits come in different copies obtained by stacking the condensation defect on them. All these copies are connected by 1-morphisms, obtained by placing at the junction lines generating the dual symmetry of the condensed one\footnote{In fusion higher category theory it is known that the simple objects connected by 1-morphisms are only those related among them by condensation \cite{douglas2018fusion}.}. This point will be generalized for $N>2$, but the story will be more involved. Having understood that to a $\mathbb{Z}_2$ surface operator of the ungauged theory there may correspond different defects of the gauged theory, the necessary modification of the fusion rules, roughly speaking, involves choosing which of the copies of a given defect appears on the right-hand side.
We can determine this by requiring consistency with the fusion with $P_{\mathbb{Z}_2}$: when the left-hand side of the fusion is $P_{\mathbb{Z}_2}$ invariant, the right-hand side must be invariant as well. Whenever the local fusion does not have this property, we make it consistent by replacing the right-hand side with $P_{\mathbb{Z}_2}(\mbox{r.h.s.})$. This approach leads to the following modifications (here $\alpha \neq 0,\pi, 2\pi$): \begin{equation} \label{eq:global fusion O(2)} \begin{array}{l} \mathcal{T}(\alpha)[\Sigma]\otimes \mathcal{T}(2\pi -\alpha)[\Sigma]=2\mathcal{T}(2\pi)[\Sigma]/\mathbb{Z}_2 +\mathcal{T}(2\alpha -2\pi)[\Sigma] \\ \\ \mathcal{T}(\alpha)[\Sigma]\otimes \mathcal{T}(\alpha)[\Sigma]=2\mathcal{T}(0)[\Sigma]/\mathbb{Z}_2 +\mathcal{T}(2\alpha)[\Sigma]\\ \\ \mathcal{T}(\pi)[\Sigma]\otimes \mathcal{T}(\pi)[\Sigma]=2\mathcal{T}(0)[\Sigma]/\mathbb{Z}_2+2\mathcal{T}(2\pi)[\Sigma]/\mathbb{Z}_2 \end{array} \end{equation} in agreement with the fusion rules found in \cite{Bhardwaj:2022yxj}, up to the coefficients in front of the defects on which $\mathbb{Z}_2$ is gauged. This difference boils down to a different normalization for the gauging procedure. Our choice is the one that, when generalized to $N>2$, makes all the fusion coefficients positive integers. This makes it possible to relate these coefficients with the total quantum dimensions of the fusion categories of 1-morphisms, made of topological defect lines at the junctions. Indeed, in our case these fusion categories are always categories of modules of finite groups, and they must have integer quantum dimensions equal to the order of the group. \subsection{$U(1)^{N-1}\rtimes S_N$ Gauge Theory}\label{sec:semiAbelian} Now we construct the $U(1)^{N-1}\rtimes S_N$ gauge theory we are interested in, by gauging the 0-form symmetry $S_N$ of the Abelian theory. The 3d analog of this theory has been discussed on the lattice in \cite{Nguyen:2021yld}. The toy example in the last subsection has several features of the general case, but there are many other interesting aspects for $N>2$ which make the analysis more complicated. In section \ref{sec:3} we will show the connection between this theory and 4d $SU(N)$ YM theory. For this reason, we present the results in a way that makes the comparison with the YM theory convenient. The local operators are the $S_N$ invariant combinations of those of the Abelian theory, namely all the symmetric polynomials in the $N-1$ variables $\mathcal{F}_{i=1,...,N-1}$. There are $N-1$ independent symmetric polynomials, obtained by adding $\mathcal{F}_N=-\mathcal{F}_1-...-\mathcal{F}_{N-1}$ and constructing the $N-1$ symmetric polynomials of degrees $2,3,...,N$ in the $N$ variables $\mathcal{F}_{i=1,...,N}$. The Wilson lines are the minimal $S_N$ invariant combinations of the Wilson lines $\mathcal{W}(\boldsymbol{n})$ of the Abelian theory $U(1)^{N-1}$. By recalling the action \eqref{eq:action S_n on Wlines}, we construct the Wilson lines of the $U(1)^{N-1}\rtimes S_N$ theory by summing over the orbit of $S_N$ \begin{equation} \label{eq:W lines S_N} \mathcal{V}(\boldsymbol{n})[\gamma]=\sum _{\sigma \in S_N}\mathcal{W}\left(\mathfrak{S}_{\sigma} ^{\vee} \cdot \boldsymbol{n}\right)[\gamma]. \end{equation} Now we look at the electric GW operators and their action on the Wilson lines. These are the objects of a 2-category with non-trivial morphism structure, coming from the dual non-invertible 2-form symmetry $\mbox{Rep}(S_N)$ induced by the gauging of $S_N$.
These new topological lines arise as 1-morphisms and play a crucial role in the global fusion. Since the problem is somewhat intricate, we start at the local level, placing all the GW operators on surfaces without non-trivial 1-cycles. We will discuss the 2-category structure and the global fusions in the next subsection. The following discussion applies, \emph{mutatis mutandis}, to the magnetic GW operators as well. The GW operators of the $U(1)^{N-1}\rtimes S_N$ theory are the minimal $S_N$ invariant combinations of GW operators of $U(1)^{N-1}$, and their construction parallels that of the $S_N$ Wilson lines explained above. We normalize the GW operators by dividing by $|H_{\boldsymbol{\alpha}}|$, where $H_{\boldsymbol{\alpha}}\subset S_N$ is the stabilizer of $\boldsymbol{\alpha}$\footnote{This normalization is the most natural one since with this choice the gauge invariant GW operators are generated by summing the non-gauge invariant ones without overcounting. For instance when $H_{\boldsymbol{\alpha}} = S_N$ we get $\mathcal{T}(\boldsymbol{\alpha})[\Sigma ] \equiv \mathcal{D}(\boldsymbol{\alpha})[\Sigma ]$, as it should be.}: \begin{equation} \label{eq:GW useful} \mathcal{T}(\boldsymbol{\alpha})[\Sigma ]=\frac{1}{|H_{\boldsymbol{\alpha}}|}\sum _{\sigma \in S_N}\mathcal{D}\left(\mathfrak{S}_{\sigma}^{\vee}\cdot\boldsymbol{\alpha}\right)[\Sigma]. \end{equation} These operators are topological. By construction $\mathcal{T}\left( \mathfrak{S}_{\sigma} ^{\vee} (\boldsymbol{\alpha})\right)=\mathcal{T}(\boldsymbol{\alpha})$, so that the parameter space of the GW operators is \begin{equation} U(1)^{N-1}/ S_N. \end{equation} This is a singular manifold, since the $S_N$ action on $U(1)^{N-1}$ has fixed points. It is easier to see this in the variables $\beta _i$ introduced above, and we will do so shortly. For the time being we just emphasize that $U(1)^{N-1}/S_N$ coincides with the set of conjugacy classes of $SU(N)$, which also labels the (generically non-topological) GW operators of the $SU(N)$ YM theory \cite{Gukov:2006jk, Gukov:2008sn, Gukov:2014gja}. This is a first hint of a connection between the $U(1)^{N-1}\rtimes S_N$ theory and $SU(N)$ YM theory, which we explore in the next section. We will see that it is natural to identify $\mathcal{T}(\boldsymbol{\alpha})$ with the high energy limit of the GW operators of $SU(N)$ YM theory, which become topological in the ultraviolet and form a non-invertible symmetry, broken by the RG flow to the center 1-form symmetry $\mathbb{Z}_N^{(1)}$.
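Before moving on, let us record a small computational illustration of the orbit--stabilizer counting behind the normalization in \eqref{eq:GW useful}. The sketch below is ours and purely illustrative: it works in the $\beta$-variables, where $S_N$ simply permutes the entries of an $N$-tuple summing to zero mod $2\pi$ (entries stored as fractions of $2\pi$), and verifies that summing over $S_N$ and dividing by $|H_{\boldsymbol{\alpha}}|$ counts each distinct image exactly once:
\begin{verbatim}
# Orbit-stabilizer check: |orbit| * |stabilizer| = |S_N|, so the orbit sum
# divided by |H_alpha| hits each distinct image with coefficient one.
from itertools import permutations
from fractions import Fraction

N = 3
G = list(permutations(range(N)))
beta = (Fraction(1, 2), Fraction(1, 4), Fraction(1, 4))  # fractions of 2*pi
orbit = {tuple(beta[s[i]] for i in range(N)) for s in G}
H = [s for s in G if tuple(beta[s[i]] for i in range(N)) == beta]
assert len(orbit) * len(H) == len(G)
\end{verbatim}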
Let us now derive the \emph{local fusion} rules. From the definition \eqref{eq:GW useful} we get \begin{eqnarray} \label{eq:local fusions1} \begin{array}{rl} \mathcal{T}(\boldsymbol{\alpha})\otimes \mathcal{T}(\boldsymbol{\beta}) & \displaystyle =\frac{1}{|H_{\boldsymbol{\alpha}}||H_{\boldsymbol{\beta}}|} \sum _{\sigma _1,\sigma _2 \in S_N}\mathcal{D}\left(\mathfrak{S}_{\sigma _1}^{\vee} \cdot (\boldsymbol{\alpha}+\mathfrak{S}^{\vee}_{\sigma _1^{-1}\circ \sigma _2}\cdot \boldsymbol{\beta} ) \right) = \\ \\ & \displaystyle = \frac{1}{|H_{\boldsymbol{\alpha}}||H_{\boldsymbol{\beta}}|} \sum _{\sigma\in S_N}\sum _{\sigma _1 \in S_N}\mathcal{D}\left(\mathfrak{S}_{\sigma _1}^{\vee} \cdot \left(\boldsymbol{\alpha}+\mathfrak{S}^{\vee}_{\sigma }\cdot \boldsymbol{\beta} \right) \right)= \\ \\ &\displaystyle = \frac{1}{|H_{\boldsymbol{\alpha}}||H_{\boldsymbol{\beta}}|}\sum _{\sigma \in S_N}|H_{\boldsymbol{\alpha}+\mathfrak{S}^{\vee}_{\sigma }\cdot \boldsymbol{\beta}}|\mathcal{T}\left(\boldsymbol{\alpha}+\mathfrak{S}^{\vee}_{\sigma }\cdot \boldsymbol{\beta} \right) \end{array} \end{eqnarray} showing that the symmetry generated by the GW operators is non-invertible. This formula is rather implicit and does not make the interpretation of its coefficients clear. It is important to show that, as in the $N=2$ case, the fusion coefficients are always integers, counting the total quantum dimension of the fusion category of 1-morphisms living at the junctions. We can massage the formula above as follows. Notice that for any $x\in H_{\boldsymbol{\alpha}}, y \in H_{\boldsymbol{\beta}}$ we have $\mathcal{T}\big(\boldsymbol{\alpha}+\mathfrak{S}^{\vee}_{\sigma } \cdot \boldsymbol{\beta}\big)=\mathcal{T}\big(\boldsymbol{\alpha}+\mathfrak{S}^{\vee}_{x\sigma y} \cdot \boldsymbol{\beta}\big)$, and the $x\sigma y$ are precisely the elements of the double coset $H_{\boldsymbol{\alpha}}\sigma H_{\boldsymbol{\beta}}$. Moreover $S_N$ is the disjoint union of the double cosets, labeled by elements of the double coset space $H_{\boldsymbol{\alpha}}\backslash S_N /H_{\boldsymbol{\beta}}$. By arbitrarily choosing one element in each double coset, the formula above can be rewritten as \begin{equation} \mathcal{T}(\boldsymbol{\alpha})\otimes \mathcal{T}(\boldsymbol{\beta})=\frac{1}{|H_{\boldsymbol{\alpha}}||H_{\boldsymbol{\beta}}|}\sum _{\sigma \in H_{\boldsymbol{\alpha} } \backslash S_N \slash H_{\boldsymbol{\beta}}}|H_{\boldsymbol{\alpha}+\mathfrak{S}^{\vee}_{\sigma }\cdot \boldsymbol{\beta}}| |H_{\boldsymbol{\alpha}}\sigma H_{\boldsymbol{\beta}}| \mathcal{T}\left(\boldsymbol{\alpha}+\mathfrak{S}^{\vee}_{\sigma }\cdot \boldsymbol{\beta} \right). \end{equation} The order of the double coset $H_{\boldsymbol{\alpha}}\sigma H_{\boldsymbol{\beta}} $ is \cite{hall2018theory} \begin{equation} |H_{\boldsymbol{\alpha}}\sigma H_{\boldsymbol{\beta}}|=\frac{|H_{\boldsymbol{\alpha}}| |H_{\boldsymbol{\beta}}|}{|H_{\boldsymbol{\alpha}}\cap \sigma H_{\boldsymbol{\beta}} \sigma ^{-1}|} \end{equation} from which we find \begin{equation} \label{eq:local fusions} \mathcal{T}(\boldsymbol{\alpha})\otimes \mathcal{T}(\boldsymbol{\beta})=\sum _{\sigma \in H_{\boldsymbol{\alpha} } \backslash S_N /H_{\boldsymbol{\beta}}}f_{\alpha \beta}^{\sigma}\;\; \mathcal{T}\left(\boldsymbol{\alpha}+\mathfrak{S}^{\vee}_{\sigma }\cdot \boldsymbol{\beta} \right) \end{equation} where \begin{equation} f_{\alpha \beta}^{\sigma} = \frac{|H_{\boldsymbol{\alpha}+\mathfrak{S}^{\vee}_{\sigma }\cdot \boldsymbol{\beta}}|}{|H_{\boldsymbol{\alpha}}\cap \sigma H_{\boldsymbol{\beta}} \sigma ^{-1}|} \in \mathbb{Z}_+ \ . \end{equation} The fusion coefficients $f_{\alpha \beta}^{\sigma}$ are integers because $H_{\boldsymbol{\alpha}}\cap \sigma H_{\boldsymbol{\beta}} \sigma ^{-1}$ is a subgroup of $H_{\boldsymbol{\alpha}+\mathfrak{S}^{\vee}_{\sigma }\cdot \boldsymbol{\beta}}$.
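This integrality is also easy to check by brute force for small $N$. The following sketch is ours and purely illustrative: it works in the $\beta$-variables, where $S_N$ permutes the entries of an $N$-tuple summing to zero mod $2\pi$ (stored as fractions of $2\pi$), and for the chosen $\mathbb{Z}_2$-stabilized parameters it prints the coefficients $1$ and $2$ of the kind that will reappear in the $N=3$ example of the next subsection:
\begin{verbatim}
# Brute-force check that f = |H_{a+s.b}| / |H_a n s H_b s^{-1}| is a
# positive integer (Lagrange: the intersection is a subgroup of H_{a+s.b}).
from itertools import permutations
from fractions import Fraction

N = 3
G = list(permutations(range(N)))            # S_N as tuples, i -> s[i]

def compose(s, t):                          # (s o t)(i) = s(t(i))
    return tuple(s[t[i]] for i in range(N))

def inv(s):
    r = [0] * N
    for i, si in enumerate(s):
        r[si] = i
    return tuple(r)

def act(s, b):                              # left action on beta-tuples
    si = inv(s)
    return tuple(b[si[i]] for i in range(N))

def stab(b):
    return {s for s in G if act(s, b) == b}

def add(b1, b2):                            # entries are fractions of 2*pi
    return tuple((x + y) % 1 for x, y in zip(b1, b2))

alpha = (Fraction(1, 2), Fraction(1, 4), Fraction(1, 4))   # stabilizer Z_2
beta  = (Fraction(1, 2), Fraction(1, 4), Fraction(1, 4))
Ha, Hb = stab(alpha), stab(beta)
for s in G:        # all of S_N; double-coset representatives repeat
    Hab   = stab(add(alpha, act(s, beta)))
    inter = [x for x in Ha if compose(compose(inv(s), x), s) in Hb]
    assert len(Hab) % len(inter) == 0       # f is a positive integer
    print(s, len(Hab) // len(inter))        # prints f = 1 or 2
\end{verbatim}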
These numbers count the 1-morphisms living at the junctions, up to endomorphisms. We will shortly see how they are related to the condensation defects that must be added on the right-hand side to correct the fusion rules whenever the surface is topologically non-trivial. Let us look at the Ward identities involving the GW operator $\mathcal{T}(\boldsymbol{\alpha})[\Sigma]$ and the Wilson lines linking once with $\Sigma$. Consider first a Wilson line $\mathcal{W}(\boldsymbol{n})$ of the $U(1)^{N-1}$ theory, and the action of $\mathcal{T}(\boldsymbol{\alpha})$ on it. By using \eqref{eq:GW on Lines Abelian} we obtain \begin{equation} \label{eq: GW semiAbelian on lines} \mathcal{T}(\boldsymbol{\alpha})\cdot \mathcal{W}(\boldsymbol{n})=\frac{1}{|H_{\boldsymbol{\alpha}}|}\sum _{\sigma \in S_N}\exp{\left(i\sum _{i,j=1} ^{N-1} \mathfrak{S}_{\sigma} ^{\vee} (\alpha _i) \left(Q^{-1}\right)_{ij}n_j\right)} \mathcal{W}(\boldsymbol{n})=\mathfrak{C} (\boldsymbol{\alpha},\boldsymbol{n})\mathcal{W}(\boldsymbol{n}). \end{equation} To prove that the action on the Wilson lines $\mathcal{V}(\boldsymbol{n})$ is diagonal we need to show that $\mathfrak{C} (\boldsymbol{\alpha},\mathfrak{S}_{\sigma}^{\vee} \cdot \boldsymbol{n})=\mathfrak{C} (\boldsymbol{\alpha},\boldsymbol{n})$ for any $\sigma \in S_N$. Recall that from the definition of $Q_{ij}$ we have $\mathfrak{S}_{\sigma} (\mathcal{F}_i)Q_{ij}\mathfrak{S}_{\sigma}(\mathcal{F}_j)=\mathcal{F}_i Q_{ij}\mathcal{F}_j$, implying that $\mathfrak{S}_{\sigma} ^TQ\mathfrak{S}_{\sigma}=Q$. Then $Q^{-1} = \mathfrak{S}_{\sigma}^{-1}Q^{-1}\left(\mathfrak{S}_{\sigma}^{T}\right)^{-1} = \left(\mathfrak{S}_{\sigma}^{\vee}\right)^{T} Q^{-1}\mathfrak{S}^{\vee}_{\sigma}$, which implies $Q^{-1}\left(\mathfrak{S}^{\vee}_{\sigma}\right)^{-1} =\left(\mathfrak{S}_{\sigma}^{\vee}\right)^{T}Q^{-1} $, or $Q^{-1}\left(\mathfrak{S}^{\vee}_{\sigma}\right) =\left(\mathfrak{S}_{\sigma^{-1}}^{\vee}\right)^{T}Q^{-1} $. This gives us the desired invariance \begin{equation} \begin{split} \mathfrak{C}(\boldsymbol{\alpha}, \mathfrak{S}_{\sigma}^{\vee}\cdot \boldsymbol{n}) &= \frac{1}{|H_{\boldsymbol{\alpha}}|}\sum _{\sigma' \in S_N}\exp{\left(i\boldsymbol{\alpha}^{T}\cdot\left(\mathfrak{S}_{\sigma'} ^{\vee}\right)^{T} Q^{-1}\mathfrak{S}_{\sigma}^{\vee}\cdot \boldsymbol{n}\right)} \\&= \frac{1}{|H_{\boldsymbol{\alpha}}|}\sum _{\sigma ' \in S_N}\exp{\left(i\boldsymbol{\alpha}^{T}\cdot\left(\mathfrak{S}^{\vee}_{ \sigma^{-1} \sigma'}\right)^{T} Q^{-1}\cdot \boldsymbol{n}\right)} = \mathfrak{C}(\boldsymbol{\alpha}, \boldsymbol{n}) \end{split} \end{equation} which proves the following Ward identities \begin{equation} \label{eq:actionWilson} \begin{array}{c} \mathcal{T}(\boldsymbol{\alpha})[\Sigma]\cdot \mathcal{V}(\boldsymbol{n})[\gamma]=\mathfrak{C} (\boldsymbol{\alpha},\boldsymbol{n})^{Lk(\Sigma,\gamma)}\mathcal{V}(\boldsymbol{n})[\gamma] \\ \\ \displaystyle \mathfrak{C} (\boldsymbol{\alpha},\boldsymbol{n})=\frac{1}{|H_{\boldsymbol{\alpha}}|}\sum _{\sigma \in S_N}\exp{\left(i\sum _{i,j=1} ^{N-1} \mathfrak{S}_{\sigma} ^{\vee} (\alpha _i)\left(Q^{-1}\right) _{ij} n_j\right)}. \end{array} \end{equation}
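The invariance just proved is easily tested numerically. In the following illustrative sketch (ours; it anticipates the $\beta$-variables of the next paragraphs, with an $N$-tuple of angles summing exactly to zero) the Weyl action on the charges is realized by permuting the charge vector extended by $n_N=0$, modulo overall shifts, which do not affect the phase because $\sum_i \beta_i=0$:
\begin{verbatim}
# Numerical check that C(alpha, n) is invariant under the Weyl action on n.
import numpy as np
from itertools import permutations

N = 3
perms = list(permutations(range(N)))

def C(beta, n):                       # beta: N-tuple, sum(beta) = 0 (radians)
    H = sum(1 for s in perms
            if tuple(beta[s[i]] for i in range(N)) == tuple(beta))
    return sum(np.exp(1j * sum(beta[s[i]] * n[i] for i in range(N - 1)))
               for s in perms) / H

def act_on_charges(s, n):             # extend by n_N = 0, permute,
    full = list(n) + [0]              # shift back so that n_N = 0 again
    w = [full[s[i]] for i in range(N)]
    return tuple(x - w[-1] for x in w[:-1])

beta = tuple(2 * np.pi * x for x in (0.1, 0.25, -0.35))
n = (2, 1)
vals = [C(beta, act_on_charges(s, n)) for s in perms]
assert all(abs(v - vals[0]) < 1e-10 for v in vals)   # Weyl invariance
\end{verbatim}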
Notice that for $N=2$ we have $\mathfrak{C} (\alpha,n)=\frac{2}{|H_{\alpha}|}\cos\left(n\frac{\alpha}{2}\right)$, as we obtained before. The GW operators $\mathcal{T}(\boldsymbol{\alpha})[\Sigma]$ are the generators of a continuous non-invertible symmetry. An interesting issue, however, is the identification of the sub-category of group-like symmetries. Since the center of $U(1)^{N-1}\rtimes S_N$ is isomorphic to $\mathbb{Z}_N$, we already expect the discrete center symmetry $\mathbb{Z}_N^{(1)}$ to be embedded in the continuous non-invertible symmetry. In the $N=2$ case it was easy to see that $\mathbb{Z}_2$ is the maximal set of invertible unitary generators. We are going to show the same for any $N$, and we provide some interesting properties of this center symmetry related to its action on the Wilson lines, to be compared with the non-invertible one. This analysis is also interesting in view of the connection with $SU(N)$ YM theory in the next section, in which only the center symmetry $\mathbb{Z}_N$ remains unbroken along the RG flow.\\ From \eqref{eq:local fusions1} we see that $\mathcal{T}(\boldsymbol{\alpha})$ has group-like fusion only if $\boldsymbol{\alpha}$ is a fixed point of the Weyl group. The tricky point here is to properly account for the identifications on the parameters. It is convenient to work in the variables $ \boldsymbol{\beta} = Q^{-1}\boldsymbol{\alpha} $, which are separately $2 \pi $ periodic. $S_N$ acts on $\boldsymbol{\alpha}$ with $\mathfrak{S}_{\sigma}^{\vee}$, thus we need to work out the action on $\boldsymbol{\beta}$. By definition \begin{equation} \beta_{i} = \sum_{j=1}^{N-1}\frac{-1 + N \delta_{ij}}{N}\alpha_{j} = \alpha_{i}-\frac{1}{N}\sum_{j=1}^{N-1}\alpha_{j} \ . \end{equation} Since the $\alpha_{i}$ transform in the $\mathfrak{S}^{\vee}$ representation we may write them as $\alpha_{i}= u_{i}-u_{N}$, where the $u_{i}$ transform in the $N$-dimensional natural representation. We then have \begin{equation} \beta_{i} = u_{i}-u_{N}-\frac{1}{N}\sum_{j=1}^{N-1}(u_{j}-u_{N}) =\left(1-\frac{1}{N}\right)u_i -\frac{1}{N} \sum_{j\neq i} ^{N} u_{j} \ . \end{equation} We now introduce an $N$-th variable \begin{equation} \beta_{N} = - \sum_{i=1}^{N-1}\beta_{i} = \left(1-\frac{1}{N}\right)u_{N}-\frac{1}{N}\sum_{j\neq N}u_{j}. \end{equation} Since the $u_{i}$ are permuted by $S_{N}$, it is clear that the $\beta_{i}$, including $\beta_{N}$, are permuted as well, i.e. they sit in the natural representation. By construction the sum of the $\beta_{i}$ vanishes, hence they transform in the standard $(N-1)$-dimensional representation. It is now easy to determine the fixed points. Clearly $S_{N}$ contains a subgroup $S_{N-1}$ which permutes the $N-1$ unconstrained $\beta_{i}$'s; at a fixed point these must therefore all be equal, $\beta_{i}=\beta$. The only remaining equation to solve is \begin{equation} \beta = -\sum_{i=1}^{N-1}\beta = -(N-1)\beta \ \text{mod } 2\pi \ \ \Rightarrow \ \ N \beta = 0 \text{ mod } 2 \pi \end{equation} whose solutions correspond to the $N$-th roots of unity \begin{equation} \beta_{*} = \frac{2 \pi k}{N} \quad \quad k = 0,\dots, N-1. \end{equation} This shows that there are $N$ fixed points. We can map them back to the original basis \begin{equation} \alpha_{i} =\sum_{j=1}^{N-1}Q_{ij}\beta_{*} = \sum_{j=1}^{N-1}(1 + \delta_{ij})\beta_{*} = N\beta_{*} = 2 \pi k \quad \quad \forall i = 1,\dots, N-1. \end{equation} We will denote these fixed points by $\boldsymbol{\alpha}_k$, $k=0,...,N-1$.
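This counting is simple to confirm by brute force on a discretization of the torus. The sketch below is ours and purely illustrative: entries are stored as fractions of $2\pi$ on a grid, the constraint $\sum_i\beta_i=0$ mod $2\pi$ is imposed, and invariance under all permutations is tested; the search returns exactly the $N$ constant tuples $(k/N,\dots,k/N)$:
\begin{verbatim}
# Brute-force search for the S_N fixed points on a rational grid.
from itertools import permutations, product
from fractions import Fraction

N, D = 4, 12                # beta entries on the grid (2*pi) * k/D
fixed = []
for entries in product(range(D), repeat=N):
    b = tuple(Fraction(e, D) for e in entries)
    if sum(b) % 1 != 0:     # enforce sum = 0 mod 2*pi
        continue
    if all(tuple(b[s[i]] for i in range(N)) == b
           for s in permutations(range(N))):
        fixed.append(b)
print(len(fixed), fixed)    # N = 4 constant tuples (k/N, ..., k/N)
\end{verbatim}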
The corresponding fusions are \begin{equation} \mathcal{T}\left(\boldsymbol{\alpha}_k\right) \mathcal{T}\left(\boldsymbol{\alpha}_l\right) = \mathcal{T}\left(\boldsymbol{\alpha}_{l+k}\right) \end{equation} (with $k+l$ understood mod $N$), proving that these operators form a $\mathbb{Z}_{N}$ subgroup of the non-invertible symmetry. This construction shows that $\mathbb{Z}_{N}$ is the largest possible subcategory with group-like fusions. Let us now see how this subgroup acts on the lines of the theory. Inserting $\boldsymbol{\alpha}=\boldsymbol{\alpha}_k$ in \eqref{eq:actionWilson} we get \begin{equation} \begin{split} &\mathfrak{C}(\boldsymbol{\alpha}_k,\boldsymbol{n}) = \frac{1}{N!} \sum _{\sigma \in S_N}\exp{\left(i\sum _{i,j=1} ^{N-1} \mathfrak{S}_{\sigma} ^{\vee} (\alpha_i)\left(Q^{-1}\right) _{ij} n_j\right)} = \\ & = \frac{1}{N!} \sum _{\sigma \in S_N}\exp{\left(i\sum _{i=1} ^{N-1} \mathfrak{S}_{\sigma}(\beta_i) n_i\right)} =\exp{\left(\frac{2\pi i k}{N}\sum _{i=1} ^{N-1} n_i\right)}. \end{split} \label{eq:group_phase} \end{equation} This shows that when we restrict to the $\mathbb{Z}_N$ subgroup of the non-invertible symmetry, the action on the Wilson line $\mathcal{V}(\boldsymbol{n})$ becomes group-like, with a phase which is an $N$-th root of unity with charge \begin{equation} |\boldsymbol{n}|:= \sum_{i=1}^{N-1} n_i. \end{equation} \subsection{Higher Condensation and Global Fusion} \label{sec:global fusion} When the GW operators are supported on surfaces $\Sigma$ with non-trivial topology we are able to probe the full structure of the 2-category symmetry. An important role is played by the 1-morphisms, which are non-trivial due to the quantum 2-form symmetry arising from the gauging of $S_N$, implying that there are indecomposable objects with non-trivial endomorphisms. For $N>2$ the quantum symmetry is a discrete non-invertible symmetry $\mbox{Rep}\left(S_N\right)$, and the analysis is more involved than for the $O(2)$ gauge theory. The higher condensation defect $\mathcal{C}_{\mbox{Rep}\left(S_N\right)}[\Sigma]$ must be constructed by gauging non-invertible lines on a surface. There is a well-established definition of gauging in fusion categories, described in \cite{Bhardwaj:2017xup}, and fortunately for any discrete group $G$ the fusion category $\mbox{Rep}(G)$ in 2d can be fully gauged, thus defining the following condensation defect on $\Sigma$: \begin{equation} \mathcal{C}_{\mbox{Rep}\left(S_N\right)}[\Sigma]=\;\;\; \raisebox{-4.5 em}{\includegraphics[width=3cm]{algebra_mesh.pdf}}\;\;\;. \end{equation} Here $\mathcal{A}_{\mbox{Rep}\left(S_N\right)}$ is the Frobenius algebra object of $\mbox{Rep}\left(S_N\right)$ corresponding to the \emph{regular} representation\footnote{For Abelian groups the regular representation is just the sum of all the irreducible representations, and this generalized gauging procedure coincides with the standard one.}, and by $\mathcal{C}_{\mbox{Rep}\left(S_N\right)}[\Sigma]$ we mean a fine enough mesh of this object on $\Sigma$. On the defect there is an $S_N$ symmetry \cite{Bhardwaj:2017xup}: the fusion category of 1-endomorphisms is the group $S_N$. Notice that the lines generating this symmetry are stacked on the defect and do not exist in the bulk. Below we will give an equivalent description of $\mathcal{C}_{\mbox{Rep}\left(S_N\right)}[\Sigma]$, which turns out to be useful for computing its fusion with itself, allowing us to find the non-Abelian generalization of the fusions found in \cite{Roumpedakis:2022aik}.
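Since $\mathcal{A}_{\mbox{Rep}(S_N)}$ is the regular representation, its quantum dimension is $\sum_R (\dim R)^2 = N!$; this number will reappear below in the fusion coefficients. It is easy to check with the hook length formula, as in the following illustrative sketch of ours:
\begin{verbatim}
# Dimension of the regular representation of S_N: sum_R (dim R)^2 = N!.
from math import factorial, prod

def partitions(n, maxpart=None):
    """Yield the partitions of n as non-increasing tuples."""
    if n == 0:
        yield ()
        return
    for k in range(min(n, maxpart or n), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def dim_irrep(lam, n):
    """Dimension of the S_n irrep labeled by partition lam (hook lengths)."""
    hooks = prod(lam[i] - j + sum(1 for r in lam[i + 1:] if r > j)
                 for i in range(len(lam)) for j in range(lam[i]))
    return factorial(n) // hooks

N = 4
dims = [dim_irrep(lam, N) for lam in partitions(N)]
assert sum(d * d for d in dims) == factorial(N)
print(sorted(dims))          # [1, 1, 2, 3, 3] for S_4
\end{verbatim}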
As in the $N=2$ case, we determine the global fusion of $\mathcal{T}(\boldsymbol{\alpha})$ and $\mathcal{T}(\boldsymbol{\beta})$ by requiring consistency with the stacking of the Wilson lines which are absorbed by $\mathcal{T}(\boldsymbol{\alpha})$ and $\mathcal{T}(\boldsymbol{\beta})$. For $N=2$ this corresponded to the use of the projector $P_{\mathbb{Z}_2}$, and it was enough because $S_2=\mathbb{Z}_2$ has no non-trivial proper subgroups: any $\alpha \in U(1)$ is either fixed by $\mathbb{Z}_2$ or has trivial stabilizer. For $N>2$ there are values $\boldsymbol{\alpha}\in U(1)^{N-1}$ which are stabilized by a non-trivial proper subgroup $H_{\boldsymbol{\alpha}}\subset S_N$. Then the fusion category of 1-endomorphisms $\mathcal{T}(\boldsymbol{\alpha})\rightarrow \mathcal{T}(\boldsymbol{\alpha})$ is isomorphic to $\mbox{Rep}\left(H_{\boldsymbol{\alpha}}\right)$, meaning that the $H_{\boldsymbol{\alpha}}$ Wilson lines are not \emph{absorbed} by $\mathcal{T}(\boldsymbol{\alpha})$ and can live on it as non-trivial lines. On the other hand, the $S_N$ Wilson lines which are not $H_{\boldsymbol{\alpha}}$ Wilson lines are absorbed by $\mathcal{T}(\boldsymbol{\alpha})$. This implies that the local fusion rules require modifications which cannot be seen by simply applying the projector $P_{\mbox{Rep}\left(S_N\right)}$ corresponding to $\mathcal{C}_{\mbox{Rep}(S_N)}[\Sigma]$. Indeed this projector condenses the full symmetry living on the defect: \begin{equation} P_{\mbox{Rep}\left(S_N\right)}\otimes \mathcal{T}\left(\boldsymbol{\alpha}\right)[\Sigma]=\mathcal{T}\left(\boldsymbol{\alpha}\right)[\Sigma]/\mbox{Rep}\left(H_{\boldsymbol{\alpha}}\right). \end{equation} When $\mathcal{T}(\boldsymbol{\alpha})$ is a strongly simple object, namely $H_{\boldsymbol{\alpha}}=1$, using $P_{\mbox{Rep}\left(S_N\right)}$ we can determine the correct fusion rules. On the other hand, if $H_{\boldsymbol{\alpha}}$ is a non-trivial proper subgroup, using only $P_{\mbox{Rep}\left(S_N\right)}$ we would miss the global fusion rules with $\mathcal{T}\left(\boldsymbol{\alpha}\right)[\Sigma]$ appearing on the left-hand side. We then need to construct the projector containing the maximal set of lines absorbed by $\mathcal{T}(\boldsymbol{\alpha})$. Before clarifying what this means and giving a general construction, we need to introduce the promised alternative definition of $\mathcal{C}_{\mbox{Rep}(S_N)}[\Sigma]$. The idea is that, since the $\mbox{Rep}(S_N)$ symmetry is obtained by the gauging of $S_N$, condensing it on $\Sigma$ is the same as taking a step back to before $S_N$ is gauged, removing $\Sigma$ from the space-time manifold $\mathcal{M}$, and then gauging $S_N$ in $\mathcal{M}-\Sigma$. We do so by imposing Dirichlet boundary conditions $a|_{\Sigma}=0$ on the surface for the $S_N$ gauge field $a$. This construction produces the $U(1)^{N-1}\rtimes S_N$ theory with the insertion of a condensation defect $\mathcal{C}_{\mbox{Rep}(S_N)}[\Sigma]$. Notice that this picture is consistent with the presence of a dual $S_N$ symmetry living on $\mathcal{C}_{\mbox{Rep}(S_N)}[\Sigma]$: a co-dimension one defect of the 0-form global symmetry $S_N$ of the $U(1)^{N-1}$ theory can intersect $\Sigma$ on a line wrapping a cycle; this defect is then made transparent outside $\Sigma$ by the gauging of $S_N$ in $\mathcal{M}-\Sigma$, while the line on $\Sigma$ remains as the generator of a 0-form symmetry on the condensation defect.
This way of presenting $\mathcal{C}_{\mbox{Rep}(S_N)}[\Sigma]$ may seem abstract, but it does not rely on the concept of gauging a Frobenius algebra object, and it turns out to be useful for determining the fusion $\mathcal{C}_{\mbox{Rep}(S_N)}[\Sigma]\otimes \mathcal{C}_{\mbox{Rep}(S_N)}[\Sigma]$. For convenience we denote the defect constructed in this way by $\widetilde{\mathcal{C}}$, even though $\widetilde{\mathcal{C}}=\mathcal{C}$, to distinguish whether we are thinking of the condensation defect in the standard or in the latter presentation. To compute $\mathcal{C}_{\mbox{Rep}(S_N)}[\Sigma]\otimes \mathcal{C}_{\mbox{Rep}(S_N)}[\Sigma]$ the trick is to take one of the two supported on $\Sigma$ and defined in the presentation $\widetilde{\mathcal{C}}$, and the other defined in the standard way $\mathcal{C}$, with the condensation of the algebra object $\mathcal{A}_{\mbox{Rep}(S_N)}$ on a surface $\Sigma '=\Sigma +\delta \Sigma$ which lies inside the mesh of $S_N$ defects in $\mathcal{M}-\Sigma$. When we send the displacement $\delta \Sigma$ to zero, the mesh of $\mathcal{A}_{\mbox{Rep}(S_N)}$ defining $\mathcal{C}$ enters the ``hole'' $\Sigma$ defining $\widetilde{\mathcal{C}}$ (see figure \ref{fig:condensation_fusion}). The result is again $\widetilde{\mathcal{C}}_{\mbox{Rep}(S_N)}[\Sigma]$, but with the hole $\Sigma$ filled with a mesh of algebra objects implementing the higher gauging of $\mbox{Rep}(S_N)$. Because of the Dirichlet boundary conditions this condensation does not talk to the $S_N$ gauge field in the bulk, and it simply computes the partition function of the 2d $\mbox{Rep}(S_N)$ gauge theory on $\Sigma$, denoted by $\mathcal{Z}\left( \mbox{Rep}\left(S_N\right) ; \Sigma \right) $. Since $\widetilde{\mathcal{C}}=\mathcal{C}$ we get \begin{equation} \mathcal{C}_{\mbox{Rep}(S_N)}[\Sigma]\otimes \mathcal{C}_{\mbox{Rep}(S_N)}[\Sigma]=\mathcal{Z}\left( \mbox{Rep}\left(S_N\right) ; \Sigma \right) \mathcal{C}_{\mbox{Rep}(S_N)}[\Sigma] \end{equation} which can be thought of as a non-Abelian generalization of the results in \cite{Roumpedakis:2022aik}. \begin{figure}[t!] \centering \raisebox{-0 em}{\includegraphics[width=15cm]{fusion_condensation.pdf}} \caption{A pictorial representation of the fusion of the condensation defects. In the green region the $S_N$ symmetry is gauged, while in the white region it is still global. The red lines represent a fine enough mesh of the algebra object implementing the gauging of Rep($S_N$) on the $2$-dimensional surface $\Sigma_2$ or $\Sigma'_2$.} \label{fig:condensation_fusion} \end{figure} We check the correctness of this abstract procedure by repeating it in an Abelian case, where $S_N$ is replaced by $\mathbb{Z}_N$. This has the advantage that the dual symmetry is invertible, and its higher gauging on $\Sigma ' \subset \mathcal{M}-\Sigma$ can be done by simply coupling the defect to a background gauge field $b \in H^1\left(\Sigma ' , \mathbb{Z}_N\right)$ and summing over it. The coupling of $b$ to the $\mathbb{Z}_N$ gauge field $a\in H^1(\mathcal{M}-\Sigma, \mathbb{Z}_N)$ is the standard one: \begin{equation} \label{eq:coupling} \exp{\left(\int _{\Sigma '} a\cup b\right)}\mathcal{Z}(\mathbb{Z}_N; \Sigma '). \end{equation} By summing over $b$ one gets the condensation defect in the standard presentation $\mathcal{C}$ of \cite{Roumpedakis:2022aik}.
On the other hand, our alternative definition $\widetilde{\mathcal{C}}$ is formally the same in the Abelian and in the non-Abelian case, since it does not use the notion of gauging non-invertible symmetries. The insertion of $\mathcal{C}_{\mathbb{Z}_N}[\Sigma]\otimes \mathcal{C}_{\mathbb{Z}_N}[\Sigma] $ in a correlation function can be replaced by \eqref{eq:coupling} in the same correlation function, computed in the theory where $\mathbb{Z}_N$ is gauged in $\mathcal{M}-\Sigma$ with Dirichlet boundary condition $a|_{\Sigma}=0$, followed by the limit $\Sigma '\rightarrow \Sigma$. In this limit the exponential factor disappears because of the Dirichlet boundary condition. We are left with the partition function of the 2d $\mathbb{Z}_N$ gauge theory on $\Sigma$, multiplying the correlation function computed in the theory with dynamical gauge field $a\in H^1(\mathcal{M}-\Sigma,\mathbb{Z}_N)$. This means \begin{equation} \mathcal{C}_{\mathbb{Z}_N}[\Sigma]\otimes \mathcal{C}_{\mathbb{Z}_N}[\Sigma]=\mathcal{Z}\left(\mathbb{Z}_N; \Sigma \right) \mathcal{C}_{\mathbb{Z}_N}[\Sigma] \end{equation} which is the same fusion as in \cite{Roumpedakis:2022aik}. The non-Abelian condensation defects we defined allow us to construct the projector $P_{\mbox{Rep}(S_N)}$ satisfying $P_{\mbox{Rep}(S_N)}^2=P_{\mbox{Rep}(S_N)}$. Using it we obtain the global fusions of the strongly simple GW operators, namely those with trivial stabilizers $H_{\boldsymbol{\alpha}}=H_{\boldsymbol{\beta}}=1$: \begin{equation} \label{eq:global fusion with trivial stabilizer on the lhs} \begin{array}{rr} \mathcal{T}(\boldsymbol{\alpha})[\Sigma]\otimes \mathcal{T}(\boldsymbol{\beta})[\Sigma] & \displaystyle =\sum _{\sigma \in S_N} |H_{\boldsymbol{\alpha} +\mathfrak{S}_{\sigma} ^{\vee} \cdot \boldsymbol{\beta}}|P_{\mbox{Rep}(S_N)}\otimes \mathcal{T}\left(\boldsymbol{\alpha} +\mathfrak{S}_{\sigma} ^{\vee} \cdot \boldsymbol{\beta}\right)[\Sigma]=\\ \\ &\displaystyle =\sum _{\sigma \in S_N} |H_{\boldsymbol{\alpha} +\mathfrak{S}_{\sigma} ^{\vee} \cdot \boldsymbol{\beta}}| \frac{\mathcal{T}\left(\boldsymbol{\alpha} +\mathfrak{S}_{\sigma} ^{\vee} \cdot \boldsymbol{\beta}\right)[\Sigma]}{\mbox{Rep}\left(H_{\boldsymbol{\alpha} +\mathfrak{S}_{\sigma} ^{\vee} \cdot \boldsymbol{\beta}}\right)} \end{array} \end{equation} where we used the fact that the projector on the right-hand side implements the gauging of the full symmetry $\mbox{Rep}\left(H_{\boldsymbol{\alpha} +\mathfrak{S}_{\sigma} ^{\vee} \cdot \boldsymbol{\beta}}\right)$ living on the GW operator $\mathcal{T}\left(\boldsymbol{\alpha} +\mathfrak{S}_{\sigma} ^{\vee} \cdot \boldsymbol{\beta}\right)$. As advertised before, when $H_{\boldsymbol{\alpha}}$ is a non-trivial proper subgroup of $S_N$ we need the maximal projector absorbed by $\mathcal{T}(\boldsymbol{\alpha})$. A priori it is not at all obvious how to define this projector. If $H_{\boldsymbol{\alpha}}$ is a normal subgroup, then $H_{\boldsymbol{\alpha}}^{\perp}=S_N/H_{\boldsymbol{\alpha}}$ is a group, and intuitively we need a projector $P_{\mbox{Rep}(H_{\boldsymbol{\alpha}}^{\perp})}$ obtained from the condensation of $\mbox{Rep}(H_{\boldsymbol{\alpha}}^{\perp})$ Wilson lines. However, it is not obvious that this is an allowed gauging in the category $\mbox{Rep}(S_N)$ of bulk lines and, more seriously, we would not know how to proceed when the stabilizer is not a normal subgroup\footnote{For $N\geq 5$ there is only one non-trivial proper normal subgroup of $S_N$, namely the alternating group $A_N$.}.
Our definition of the relevant condensation defect absorbed by $\mathcal{T}(\boldsymbol{\alpha})$ is as follows. We start from the maximal condensation defect $\mathcal{C}_{\mbox{Rep}(S_N)}[\Sigma]$ and recall that there is a quantum symmetry $S_N$ living on it, which is very explicit in our presentation $\widetilde{\mathcal{C}}$ of this defect. Then for any subgroup $H_{\boldsymbol{\alpha}}\subset S_N$ we can gauge this smaller symmetry on the defect, which corresponds to removing the $\mbox{Rep}(H_{\boldsymbol{\alpha}})$ Wilson lines from the condensate, generating a \emph{new} higher condensation defect which, with an abuse of notation, we denote by $\mathcal{C}_{\mbox{Rep}(H_{\boldsymbol{\alpha}}^{\perp})}[\Sigma]$. Notice that this construction matches nicely with the known fact that the Frobenius algebra objects of $\mbox{Rep}(G)$ are in one-to-one correspondence with the subgroups of $G$ \cite{Bhardwaj:2017xup}. From this defect we can construct the projector $P_{\mbox{Rep}(H_{\boldsymbol{\alpha}}^{\perp})}$ for any $\boldsymbol{\alpha} \in U(1)^{N-1}/S_N $, and this is the maximal projector absorbed by $\mathcal{T}(\boldsymbol{\alpha})$. When we fuse two of these higher condensation defects, for $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$, we are essentially removing from the condensate $\mathcal{C}_{\mbox{Rep}(S_N)}[\Sigma]$ all the lines which are lines of both $H_{\boldsymbol{\alpha}}$ and $H_{\boldsymbol{\beta}}$, while keeping all the others. This leads to the following algebra of projectors \begin{equation} \label{eq:projectors algebra} P_{\mbox{Rep}(H_{\boldsymbol{\alpha}}^{\perp})}\otimes P_{\mbox{Rep}(H_{\boldsymbol{\beta}}^{\perp})}=P_{\mbox{Rep}\left(\left(H_{\boldsymbol{\alpha}}\cap H_{\boldsymbol{\beta}}\right)^{\perp}\right)} \ . \end{equation} We can use this knowledge to compute the most general global fusion rules, starting from the local one \eqref{eq:local fusions} and applying the projectors $P_{\mbox{Rep}(H_{\boldsymbol{\alpha}}^{\perp})}$ and $P_{\mbox{Rep}(H_{\boldsymbol{\beta}}^{\perp})}$, which are absorbed by the left-hand side, to both sides of the equation: \begin{equation} \label{eq:general global fusion} \mathcal{T}(\boldsymbol{\alpha})[\Sigma]\otimes \mathcal{T}(\boldsymbol{\beta})[\Sigma] =\sum _{\sigma \in H_{\boldsymbol{\alpha}}\backslash S_N /H_{\boldsymbol{\beta}}}f_{\alpha \beta}^{\sigma} \; P_{\mbox{Rep}\left(\left(H_{\boldsymbol{\alpha}}\cap H_{\boldsymbol{\beta}}\right)^{\perp}\right)}\otimes \mathcal{T}\left(\boldsymbol{\alpha} +\mathfrak{S}_{\sigma} ^{\vee} \cdot \boldsymbol{\beta}\right)[\Sigma] \;\;\; . \end{equation} It is a simple exercise to check that this formula reproduces the global fusions of the $O(2)$ gauge theory. The general fusion rule above also explains the meaning of the integer fusion coefficients $f_{\alpha \beta}^{\sigma}$. These numbers are greater than one whenever they multiply an operator dressed with some condensation defect, and they coincide with the quantum dimension of the algebra object condensed on the defect. This fact has a simple interpretation. The condensation produces a dual symmetry on the defect, and a junction between the two fused defects and any one of those appearing on the right-hand side can be constructed using any of the lines generating this dual symmetry, whose total quantum dimension is equal to that of the condensed algebra object. We can look back at the $N=2$ case and check that this discussion applies. A richer example is the case $N=3$, which is worth discussing here.
Given $\boldsymbol{\alpha}=(\alpha _1,\alpha _2)\in U(1)^2/S_3$, there are four possible stabilizers: \begin{itemize} \item $\alpha _1=\alpha _2=2\pi k$, $k=0,1,2$, is fixed by the full group $H_{\boldsymbol{\alpha}}=S_3$. \item For $\alpha _1=2\pi k_1$, $\alpha _2=2\pi k_2 $, $k_1,k_2=0,1,2$, $k_1\neq k_2$, the stabilizer is $H_{\boldsymbol{\alpha}}=\mathbb{Z}_3$. \item For $\alpha _1=\alpha _2=:\alpha$, with $\alpha /2\pi \notin \mathbb{Z}$, the stabilizer is $H_{\boldsymbol{\alpha}}=\mathbb{Z}_2$. \item In all the other cases the stabilizer is trivial. \end{itemize} When $\boldsymbol{\alpha}=(\alpha,\alpha)$, $\boldsymbol{\beta}=(\beta,\beta)$ are both stabilized by $\mathbb{Z}_2$, by using \eqref{eq:local fusions} the local fusion is\footnote{The fact that only two terms appear on the right-hand side follows from the Cauchy--Frobenius lemma \begin{equation*} |H_1\backslash G/H_2|=\frac{1}{|H_1||H_2|}\sum _{h_1\in H_1, h_2\in H_2} |G_{h_1,h_2}| \end{equation*} where $G_{h_1,h_2}=\left\{g\in G \; | \; h_1gh_2=g\right\}$. Indeed $\mathbb{Z}_2=\left\{1,s\right\}\subset S_3$, where $s=(213)$, and it is easy to see that $G_{1,1}=S_3$, $G_{1,s}=G_{s,1}=\emptyset$, $G_{s,s}=\mathbb{Z}_2$.} \begin{equation} \mathcal{T}\left(\alpha ,\alpha \right)\otimes \mathcal{T}\left(\beta ,\beta \right)=\mathcal{T}\left(\alpha ,\alpha-\beta \right)+\frac{|H_{\boldsymbol{\alpha}+\boldsymbol{\beta}}|}{2}\mathcal{T}\left(\alpha +\beta ,\alpha +\beta \right) \end{equation} which is modified, at the global level, by the gauging of $\mathbb{Z}_3$. Since we are assuming $\alpha ,\beta \notin 2\pi \mathbb{Z}$, the first term cannot be stabilized by $\mathbb{Z}_3$. The only non-trivial modification therefore occurs when $\beta=2\pi k-\alpha$, $\alpha /2\pi \notin \mathbb{Z}$, in which case the last term is central, and we get \begin{equation} \mathcal{T}\left(\alpha ,\alpha \right)\otimes \mathcal{T}\left(2\pi k-\alpha ,2\pi k-\alpha \right)=\mathcal{T}\left(\alpha ,2\alpha-2\pi k \right)+3\frac{\mathcal{T}\left(2\pi k ,2\pi k \right)}{\mathbb{Z}_3} \ . \end{equation} Notice that, even though the last GW operator in the local fusion is a generator of the center, which is stabilized by the full $S_3$, the condensation defect dressing it in the global fusion is the one associated with $\mathbb{Z}_3$. There is a quantum $\mathbb{Z}_3$ symmetry on this defect, and the coefficient 3 counts precisely its total quantum dimension. Now we fuse two GW operators whose parameters $\boldsymbol{\alpha}_{a_1a_2}=(2\pi a_1,2\pi a_2), \boldsymbol{\beta}_{b_1b_2}=(2\pi b_1,2\pi b_2)$ both have stabilizer $\mathbb{Z}_3$. The local fusion is \begin{equation} \mathcal{T}(\boldsymbol{\alpha}_{a_1a_2})\otimes \mathcal{T}(\boldsymbol{\beta}_{b_1b_2})=\frac{|H_{\boldsymbol{\alpha}_{a_1a_2}+\boldsymbol{\beta}_{b_1b_2}}|}{3} \mathcal{T}(\boldsymbol{\alpha}_{a_1a_2}+\boldsymbol{\beta}_{b_1b_2})+\frac{|H_{\boldsymbol{\alpha}_{a_1a_2}+\boldsymbol{\beta}_{b_2b_1}}|}{3} \mathcal{T}(\boldsymbol{\alpha}_{a_1a_2}+\boldsymbol{\beta}_{b_2b_1}) \end{equation} which should be modified by applying $P_{\mathbb{Z}_2}$. Notice that it is impossible for both terms on the right-hand side to be stabilized by $\mathbb{Z}_2$, since otherwise $a_1=a_2$ and $b_1=b_2$.
When the second term is stabilized by $\mathbb{Z}_2$ we get the global fusion rule \begin{equation} \begin{array}{r} \displaystyle \mathcal{T}(2\pi a_1,2\pi a_2)\otimes \mathcal{T}\left(2\pi b_1,2\pi (a_2+b_1-a_1)\right)= \mathcal{T}(2\pi (a_1+b_1),2\pi (2a_2+b_1-a_1))+ \\ \\ \displaystyle + 2\frac{\mathcal{T}(2\pi (a_2+b_1),2\pi (a_2+b_1))}{\mathbb{Z}_2} \ . \end{array} \end{equation} The coefficient 2 in the last term has the same interpretation as the 3 in the previous case, namely as the number of possible junctions. Because $S_3=\mathbb{Z}_3\rtimes \mathbb{Z}_2$, this example can also be analyzed with the technique of sequentially gauging $\mathbb{Z}_3$ and then $\mathbb{Z}_2$ as in \cite{Bhardwaj:2022yxj}, and one can check that we reproduce the same global fusions. On the other hand, our method is more general since it does not assume that the group to be gauged is a semidirect product of Abelian factors, which is not the case for $S_N$ with $N\geq 5$. Nevertheless, the computation becomes considerably harder for $N>3$, even though it is algorithmic. We conclude this subsection with a general remark. The method we described to derive the global fusion rules in the $U(1)^{N-1}\rtimes S_N$ theory appears to be general for higher categorical symmetries. The difference between local and global fusion arises in this context because indecomposable objects can also have a non-trivial category of 1-endomorphisms, and one needs to require consistency of the fusions with the condensation of the symmetries generated by these 1-endomorphisms. This takes the form of various projections, obtained by fusing with the higher condensation defects introduced in \cite{Roumpedakis:2022aik}. As we have discussed, the determination of the full set of higher condensation defects of a given theory might be non-trivial. Nevertheless, we propose that, at least for non-invertible symmetries induced by gauging, the only modifications of the local fusions required when the defects have non-trivial topology are those coming from these consistency conditions. As a consequence, finding all the higher condensation defects of a theory allows one to fully determine the global fusions. This proposal is motivated by the observation that the only difference arising when the defect is topologically non-trivial lies in the possible presence of lower-dimensional defects wrapping cycles. By definition, the higher condensation defects are precisely those made of lower-dimensional objects\footnote{It is worth noting that this procedure is reminiscent of the idempotent completion in higher categories introduced in \cite{Gaiotto:2019xmp}. It would be very interesting to draw a precise connection with the recently established mathematical results.}. Notice, for example, that the way in which the authors of \cite{Choi:2021kmx} determined the fusion rules of the \emph{duality defects}, after becoming aware of condensation defects, can be interpreted as an instance of our method. \section{The Ultraviolet Limit of 4d Yang-Mills Theory} \label{sec:3} In this section we connect the $U(1)^{N-1}\rtimes S_N$ gauge theories to $SU(N)$ YM theories, showing that \emph{some} properties of the UV limit of the latter are nicely captured by the former. We will argue that a convenient way to analyze this relation boils down to choosing a particular gauge fixing, originally introduced in \cite{tHooft:1981bkw}, in which the connection with the semi-Abelian theory is manifest. We will show that all gauge invariant operators of $SU(N)$ YM theory are matched by operators of the semi-Abelian theory.
The relation we find implies that the global symmetries of the high energy YM theory are much larger than those of the full theory, as they include many more topological operators, which generate a non-invertible symmetry\footnote{The same conclusion was argued for $SU(2)$ YM theory in \cite{Cordova:2022rer} by directly taking the $g \rightarrow 0$ limit.}. Naively one might say that the UV limit of $SU(N)$ YM theory is sharply different from the $U(1)^{N-1}\rtimes S_N$ theory, since the latter is locally a theory of $N-1$ photons, while the former seems to be a theory of $N^2-1$ free gluons. However, in a non-Abelian theory there are many more gauge transformations than in a collection of Abelian ones, as for instance gluons can be rotated into each other; thus a UV description in terms of $N^2-1$ photons is misleading, as it does not account for all the redundancies. In order to introduce the general idea, it is useful to look at a toy example. Consider the matrix model of $N\times N$ hermitian matrices. Here it is clear that, by diagonalizing the matrices, we can reduce the initial $N^{2}$ degrees of freedom to only $N$, at the price of introducing a potential among them related to the Vandermonde determinant\footnote{Note that in this diagonal gauge the Weyl group, which permutes the eigenvalues, is still a gauge symmetry.} (for a review see e.g. \cite{Marino:2012zq}). This determinant is crucial in order to match all the calculable quantities of the original theory\footnote{For instance the free energy computed directly from the matrix model is proportional to $g^{N^2}$, a signal that the theory contains $N^2$ degrees of freedom. The theory described by the eigenvalues gives the same result only if the Vandermonde determinant is taken into account.}. However, the gauge invariant content of the theory is entirely captured by the $N$ degrees of freedom and, in the free limit, the measure-induced potentials are turned off and become irrelevant for studying the \emph{kinematical} properties of the original theory. \\Inspired by this example we can argue that the UV limit of $SU(N)$ YM theory is related to the semi-Abelian gauge theory $U(1)^{N-1} \rtimes S_N$. In particular, even though the dynamics of the YM theory at arbitrarily small coupling is \emph{not} captured by the semi-Abelian theory, the equations of motion of the latter, together with the symmetry structure, carry over to the UV limit of the YM theory. In the next subsection we make this argument more precise. The other subsections are then devoted to showing the matching of all the gauge invariant operators. Finally, we will discuss how the possible global structures of the YM theory are captured by the semi-Abelian theory. \subsection{Yang-Mills theory} \label{sec: YM} Consider the 4d $SU(N)$ YM theory \begin{equation} Z = \int DA\, e^{-\frac{1}{2g^2}\int Tr\, F \wedge *F} \end{equation} where the field strength $F = dA + A \wedge A$ is a hermitian and traceless matrix transforming covariantly under $SU(N)$ gauge transformations $ F \rightarrow \Omega^{-1}F\Omega $. We use the letters $i,j,...$ for the generators $h_i$ in the Cartan subalgebra, and $a,b,...$ for the off-diagonal ones $T_a$. We use the non-Abelian gauge redundancy to choose a gauge in which $F$ is diagonal, $F=F_ih_i$\footnote{The idea of Abelianizing a non-Abelian gauge theory using a particular gauge fixing was introduced in \cite{tHooft:1981bkw}.
This method was later made rigorous and used in \cite{Blau:1995rs} to solve YM theories with gauge group $G$ in 2d, where they are quasi-topological and solvable.}. In this gauge the action of the theory becomes \begin{equation} S = \frac{1}{4g^2} \int \sum_{i,j=1} ^{N-1} K_{ij}F_i\wedge * F_j \end{equation} and all the complicated dynamics is then captured by the induced gauge-fixing determinant. \\ The $(N-1)\times (N-1)$ matrix $K_{ij}$ is the Killing form restricted to the Cartan subalgebra. It is useful to choose the Chevalley basis, in which \begin{equation} h_i=2\frac{\alpha _i ^I H^I}{|\alpha _i|^2} \ \ \ \ K(H^I,H^J)=\delta ^{I,J}, \end{equation} so that (for simply-laced Lie algebras) the Killing form restricted to the Cartan subalgebra is the Cartan matrix \begin{equation} K_{ij}=\frac{2}{|\alpha _i|^2}\frac{2\alpha _i ^I \alpha _j^JK(H^I,H^J)}{|\alpha _j|^2}=2\frac{\alpha _i \cdot \alpha _j}{|\alpha _j|^2}=A_{ij}. \end{equation} The residual gauge freedom is now described by the semi-Abelian gauge group $U(1)^{N-1} \rtimes S_N$, where $U(1)^{N-1}$ is the maximal torus of $SU(N)$ and $S_N$ is the Weyl group. Its gauging reflects the freedom of defining the $N$ eigenvalues $F_i$ in different orders. As opposed to the simpler case of the matrix model, the gauge fixing condition is now more complicated. In what follows we sketch how this procedure should be carried out, even though, for the purpose of analyzing the kinematical properties of the UV theory (such as its symmetries), the technicalities turn out not to be crucial. \\ In the YM path integral we integrate over the connections, not over the field strengths, which are the objects transforming covariantly. However, we can still proceed along similar lines. The gauge fixing condition we want to impose is \begin{equation} F^a = 0. \label{Abeliangaugefixing} \end{equation} Usually in the Faddeev-Popov procedure we do not resolve the $\delta$-function corresponding to the gauge fixing, but rather rewrite it as a gauge fixing term in the action. In this case, however, it is convenient to resolve the $\delta$-function, so that the constraint is imposed directly in the action. This is because we do not want to preserve the full gauge covariant form of the action, but only the $U(1)^{N-1}$ one. Note that the connections $A^a$ are not necessarily zero, only their field strengths are. There is an induced Faddeev-Popov determinant, so that the gauge-fixed path integral reads \begin{equation} Z = \int DA^i \;DA^a\, e^{-\frac{1}{2}\sum_{i,j} K_{ij}\int F_i\wedge * F_j}\, \Delta(A^i,A^a). \end{equation} In writing this we used the normalization in which the field strength is $F = dA + g A\wedge A$, so that in the $g=0$ limit we just get the Abelian kinetic term for the $A^i$ connections\footnote{Note that in this normalization $\Delta(A^i,A^a)$ also depends on $g$.}. When we write $\Delta(A^i,A^a) = e^{V(A^i,A^a)}$ and integrate over $A^a$, we induce complicated non-local interactions among the Cartan gauge connections $A^i$. These interactions play the same role as the Vandermonde determinant, and in particular they are crucial in order to match all the complicated dynamics of the non-Abelian theory, precisely as in the matrix model. However, these interactions are weighted by the gauge coupling $g$, and in the high energy limit they are turned off.
Therefore, as far as the analysis of gauge invariant operators and symmetries is concerned, we can safely drop the non-local interactions at high energy and study the remaining theory, which is precisely the $U(1)^{N-1}\rtimes S_N$ gauge theory. We now want to discuss an additional issue, which concerns the global properties of the non-Abelian theory. In the $SU(N)$ theory we have different instanton sectors, labeled by the third homotopy group of the gauge group. When we fix the gauge we lose this information, since the residual gauge symmetry has no non-trivial topological sectors. This means that this gauge fixing works only locally, and it must be modified if we want to account for the global properties of the theory \cite{Blau:1994rk}. However, in the $g=0$ limit all the non-trivial instanton sectors of the $SU(N)$ theory decouple as well, and the lack of non-trivial topological sectors is no longer an issue. The argument above suggests looking for a map between the gauge invariant operators of the $SU(N)$ YM theory and those of the semi-Abelian theory; in the following three subsections we establish the precise correspondence. We stress that the matching of the gauge invariant operators is independent of the gauge fixing procedure, since it comes just from the freedom of applying gauge rotations to gauge invariant quantities. The power of these considerations is rather that, once we understand the map of operators, we will be able to extract information about the UV limit of YM theory from the properties of the semi-Abelian theory discussed in the previous section. In the YM theory the observables are of three kinds: \begin{itemize} \item \textbf{Local operators.} In pure YM theory they are all constructed out of the field strength. The most obvious ones are the Lagrangian itself $Tr(F\wedge *F)$ and the $\theta$-term $Tr(F\wedge F)$, but there are others, like $Tr((F\wedge * F)\wedge *(F\wedge *F))$ and so on. \item \textbf{Line operators.} These are the simplest kind of extended operators, supported on lines. In YM theory they are the Wilson operators \begin{equation} W_{\mathcal{R}}[\gamma]=Tr _{\mathcal{R}} P \exp{\left(i \oint _{\gamma} A \right)} \end{equation} labeled by an irreducible representation of the gauge group, as well as the 't Hooft lines, defined as disorder operators \cite{tHooft:1977nqb}. The latter are also labeled by representations, but for gauge group $SU(N)$ only those with $N$-ality zero are \emph{genuine line operators}, while the others require topological surfaces attached to them \cite{Aharony:2013hda}. \item \textbf{Surface operators.} These operators are supported on surfaces, which in 4d can link with lines, $Lk(\Sigma, \gamma)\in \mathbb{Z}$. For this reason there is a crucial interplay between line and surface operators. When a surface operator is topological it is a generator of a 1-form symmetry, possibly non-invertible, and the charged objects are line operators. In 4d gauge theories the surface operators are known as GW operators \cite{Gukov:2006jk,Gukov:2008sn,Gukov:2014gja}. They are labeled by the conjugacy classes of $G$, parametrized by $U(1)^r/W_G$, but only those corresponding to the center $Z(G)\subset G$ are topological in the full theory, generating the center symmetry. \end{itemize} In the next three subsections we discuss the matching of the three types of operators between the $U(1)^{N-1}\rtimes S_N$ gauge theory and the UV effective description of YM theory.
In the discussion of local operators (subsection \ref{sec:local operators}) we also clarify the relation between the various Lagrangians in the various bases. \subsection{Local Operators} \label{sec:local operators} As explained above, in the $g_{YM}\rightarrow 0$ limit we can reduce to the action \begin{equation} \label{eq:action UV YM} S=\frac{1}{4}\int K_{ij}F_i\wedge * F_j \end{equation} where $K_{ij}=K(h_i,h_j)$ is the block of the Killing form relative to the Cartan subalgebra. This is an Abelian gauge theory with gauge group $U(1)^{N-1}$. As pointed out in section \ref{sec:Abeliangaugetheory}, the precise definition of this theory requires a choice of global structure, which can be fixed by declaring which of the transformations $A_i\rightarrow A_i+d\lambda _i$ are gauge transformations or, equivalently, by specifying the spectrum of line operators. Here, however, the choice is dictated by the global structure of the YM theory. Indeed, in the Chevalley basis the eigenvalues of $h_i$ on the weight states of any representation are the Dynkin labels, which are integers. These are precisely the charges of the Abelian Wilson lines written for the connection $A_i$ in this basis. Therefore the global structure of the $U(1)^{N-1}$ theory we need is the one in which, when the Killing form is the Cartan matrix, all the Wilson lines have integer charges. At the global level this Abelian theory cannot be the correct UV description of YM theory, since it has an $S_N$ 0-form global symmetry, which is instead gauged in YM theory. The action of the permutation group on the $N-1$ field strengths is more evident in the basis defined by the quadratic form $Q^{(N-1)}_{ij}$ of (\ref{eq:definitionQ}), which we dub the \emph{symmetric basis}. It is therefore worth pausing briefly to discuss the relation between the two bases of interest. We look for a matrix $L$ such that \begin{equation} A_i=L_{ij}\mathcal{A}_j \ \ \ \Rightarrow \ \ \ L^TA^{(N-1)}L=Q^{(N-1)} \end{equation} where $A^{(N-1)}$ is the Cartan matrix of $\mathfrak{su}(N)$. We solve this constraint using the Cholesky decompositions of both $A^{(N-1)}$ and $Q^{(N-1)}$, namely $A^{(N-1)}=H^TH$, $Q^{(N-1)}=G^TG$, where $H,G$ are upper triangular matrices. Then $L$ is uniquely defined as $L=H^{-1}G$. It turns out that $L$ is upper triangular, with all non-zero components equal to 1: \begin{equation} L_{ij}=\left\{\begin{array}{cc} 1 & \mbox{if} \ \ i\leq j \\ 0 & \mbox{if} \ \ i>j \end{array}\right. \ \ \ \ \Rightarrow \ \ \ A_i = \sum_{j\ge i}\mathcal{A}_j \ . \end{equation} Notice that $\text{det}(L)=1$, so $L\in GL_{N-1} (\mathbb{Z})$ is an automorphism of the lattice $\mathbb{Z}^{N-1}$. Thus, with the global structure dictated by $SU(N)$ YM theory, the charges of the Wilson lines are integers in both the Chevalley and the symmetric basis. Let us now discuss the local operators in some detail. For this discussion the suitable basis is the symmetric one. In YM theory all the local operators are constructed out of the non-Abelian field strength $F$, and we can classify them by the powers of $F$ appearing. In the Abelian theory $U(1)^{N-1}$ all the field strengths $\mathcal{F}_{i=1,...,N-1}$ are gauge invariant, so that this theory has many more local gauge invariant operators, and cannot be the UV description of YM theory. Gauging the discrete 0-form symmetry $S_N$ described above completely fixes this mismatch of operators. Let us see how this goes.
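Before doing so, we note as an aside that the claimed form of $L$ is easy to verify numerically. The following sketch is ours and purely illustrative, using numpy and the matrices $A^{(N-1)}$, $Q^{(N-1)}$ defined above:
\begin{verbatim}
import numpy as np

N = 5
A = 2*np.eye(N-1) - np.eye(N-1, k=1) - np.eye(N-1, k=-1)  # Cartan matrix of su(N)
Q = np.ones((N-1, N-1)) + np.eye(N-1)                     # Q_ij = 1 + delta_ij
H = np.linalg.cholesky(A).T           # upper triangular, A = H^T H
G = np.linalg.cholesky(Q).T           # upper triangular, Q = G^T G
L = np.linalg.solve(H, G)             # L = H^{-1} G
assert np.allclose(L.T @ A @ L, Q)
print(np.round(L))                    # upper triangular matrix of ones
\end{verbatim}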
A general gauge invariant operator of $SU(N)$ YM theory at finite coupling is a product of single-trace operators involving a generic power $k$ of the field strength\footnote{When writing powers of $F$ we imagine the contraction being given by either $\wedge$ or $\wedge *$; the following analysis holds in both cases.}. In general, however, for finite values of $N$ not all single-trace operators are independent, as there are complicated trace relations among them\footnote{ The simplest example of such relations occurs for $N=2$: for a generic $2\times 2$ hermitian matrix $F$ we have \begin{equation} \text{Tr}\left(F^{3}\right) = \dfrac{1}{2}\text{Tr}\left(F\right)\left[3\text{Tr}\left(F^{2}\right)-\text{Tr}\left(F\right)^{2}\right] \end{equation} and similar expressions may be derived for any $\text{Tr}\left(F^{k}\right)$ with $k\ge 3$.}, whose origin is simple to understand. The field strength is an $N\times N$ hermitian traceless matrix $F$, and we can diagonalize it by rotating it into the Cartan subalgebra \begin{equation} U^{\dagger}FU = \text{diag}(F_{1},.., F_{N}), \ \ \ \ \ \ \ U^{\dagger} U = 1 \quad \ \ \ \ \sum_{i}F_{i}=0 \end{equation} where the $F_i$ are the field strengths of the maximal torus $U(1)^{N-1}$. The trace of the $k$-th power of $F$ is \begin{equation} \text{Tr}(F^{k}) = \sum_{i=1}^{N}F_{i}^{k} \end{equation} which is manifestly invariant under the Weyl group $S_N$ acting by permutations of the eigenvalues. Such invariance follows from the trace being well-defined on the singular space $U(1)^{N-1}/S_N$ which labels conjugacy classes. We can then regard the single-trace operators of $SU(N)$ YM theory as symmetric polynomials in $N$ variables. Since for $N$ variables there are only $N$ independent elementary symmetric polynomials, of degrees $k\le N$, all local gauge invariant operators of $SU(N)$ YM theory can be expressed as polynomials in the field strengths of the Cartan subalgebra. Since the matrix $F$ is also traceless, we have to impose the constraint $\sum_{i}F_{i}=0$, which reduces the number of independent and non-zero polynomials to $N-1$. To obtain a basis independent statement we may trade the elementary symmetric polynomials for $\text{Tr}(F^{k})$, with $k=2,.., N$, so that all other local operators are sums of products of these basic traces. We already discussed the spectrum of gauge invariant local operators of the $U(1)^{N-1}\rtimes S_N$ gauge theory in subsection \ref{sec:semiAbelian}. In the symmetric basis a generic local gauge invariant operator is a symmetric polynomial in the field strengths $\mathcal{F}_{i}$. Again, only $N-1$ of them are independent. The two sets of symmetric polynomials, in $SU(N)$ YM theory and in $U(1)^{N-1}\rtimes S_N$, are clearly in bijective correspondence, connected by a change of basis in the Cartan subalgebra. It follows that the spectra of local operators in $SU(N)$ YM theory and in $U(1)^{N-1}\rtimes S_N$ are in one-to-one correspondence.
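Such trace relations are easy to generate and check symbolically. For instance, for $N=3$ the counting above implies that $\text{Tr}(F^4)$ cannot be a new independent operator, and indeed $\text{Tr}(F^4)=\frac{1}{2}\text{Tr}(F^2)^2$ for a traceless $3\times 3$ matrix, as a quick symbolic check (ours, illustrative only) confirms:
\begin{verbatim}
import sympy as sp

f1, f2 = sp.symbols('f1 f2')
eigs = [f1, f2, -f1 - f2]                  # eigenvalues of a traceless 3x3 F
p = lambda k: sp.expand(sum(x**k for x in eigs))   # power sums Tr(F^k)
assert sp.simplify(p(4) - p(2)**2 / 2) == 0        # Tr(F^4) = Tr(F^2)^2 / 2
\end{verbatim}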
\subsection{Line Operators} \label{sec:line operators} Now we discuss the Wilson line operators of the $SU(N)$ YM theory \begin{equation} W_{\mathcal{R}}=Tr_{\mathcal{R}}\mathcal{P}\exp{\left(i\oint _{\gamma} A\right)} \end{equation} labeled by an irreducible representation $\mathcal{R}$ of the gauge group $SU(N)$. In the full theory they are charged under the $\mathbb{Z}_N$ 1-form symmetry generated by the GW operators corresponding to conjugacy classes in the center $\mathbb{Z}_N\subset SU(N)$, and their charge is the $N$-ality of the representation $\mathcal{R}$. We want to analyze the UV limit of the Wilson lines. Following the general philosophy described above, all the gauge covariant observables can be mapped to the Cartan torus by performing suitable gauge transformations. The holonomy $$ hol_{\gamma} [A]=\mathcal{P}\exp{\left(i\oint _{\gamma} A\right)} $$ indeed transforms covariantly under $SU(N)$ gauge transformations. Its trace in an irreducible representation $\mathcal{R}$ gives the Wilson line $W_{\mathcal{R}}[\gamma]$. Decomposing the representation into weight states $|\lambda \rangle$, labeled by their Dynkin labels $(\lambda _1,...,\lambda _{N-1})\in \mathbb{Z}^{N-1}$, the trace simply amounts to summing over these states. Since the off-diagonal components of the connections decouple in the UV limit, this sum is particularly simple. We express the connection in the Chevalley basis as $$ A=A_i h_i \ \ , \ \ \ \ \ h_i =2\frac{\alpha _i ^I H^I}{|\alpha _i|^2} \;\; . $$ The eigenvalues of the Cartan generators in the Chevalley basis are just the Dynkin labels, thus we get a sum of Abelian Wilson lines of the Cartan torus $U(1)^{N-1}$, with charges given by the Dynkin labels. These combinations are always invariant under the action of the Weyl group $S_N$, and therefore they correspond to linear combinations of the simple Wilson lines of the $U(1)^{N-1}\rtimes S_N$ gauge theory described in subsection \ref{sec:semiAbelian}. In order to define this action carefully, we have to consider the symmetric basis $\mathcal{A}_i$, on which $S_N$ acts naturally, and then change basis to the connections $A_i$ in the Chevalley basis, in which the Wilson lines are easily written, using $A_i=L_{ij}\mathcal{A}_j$. To prove the invariance of such lines under $S_N$ we can adopt another point of view. The Wilson lines formally coincide with the characters of the associated representations \begin{equation} \chi(v) = \text{Tr}_{\mathcal{R}} \prod_{i} v_i^{h_i} \end{equation} where the product runs over the Chevalley basis and the fugacities $v_i$ are generically complex variables. The Wilson line in representation $\mathcal{R}$ is given by an expression formally identical to the character, with the fugacities replaced by the holonomies of the components of the gauge field in the Chevalley basis. This proves that the Wilson lines are always invariant under the Weyl group. Indeed the characters are generally defined as the trace of a generic group element in a given representation; as such they are only sensitive to the conjugacy class of the element. In other words, characters are complex-valued functions defined on the set of conjugacy classes which, for $SU(N)$, is given by $U(1)^{N-1}/S_N$. It follows that the characters, written as Laurent polynomials in the $N-1$ variables corresponding to a maximal torus of $SU(N)$, must be well-defined functions on the quotient space $U(1)^{N-1}/S_N$, and thus must be invariant under $S_N$\footnote{As an aside, notice that this point of view on the Wilson lines tells us that they fuse exactly as the associated representations of the group, which is what should happen at $g_{YM}=0$.}. Since this discussion is quite abstract, we present some concrete examples of how to construct these lines for $SU(2)$ and $SU(3)$ YM theories. The reader convinced by the argument above may wish to skip these examples. \paragraph{$\boldsymbol{SU(2)}.$} The irreducible representations of $SU(2)$ are characterized by a non-negative integer $\lambda \in \mathbb{N}$, the Dynkin label of the highest weight state.
The states have Dynkin labels $\lambda, \lambda -2,...,-\lambda +2,-\lambda$. In the $g_{YM}\rightarrow 0$ limit the $SU(2)$ Wilson lines $W_{\lambda} ^{SU(2)}$ decompose into a sum over the weight states of the Wilson lines $W(n)=W^n$ of the Abelian theory $U(1)$. In the Chevalley basis the charges $n$ coincide with the Dynkin labels, and we get \begin{eqnarray} \label{eq:SU(2) Wilson lines} W_{\lambda}^{SU(2)}=\sum _{k=0}^{\lambda} W^{\lambda -2k} \end{eqnarray} For $SU(2)$ the Chevalley basis and the symmetric one are the same, and indeed the Wilson lines above are manifestly $S_2=\mathbb{Z}_2$ invariant, being a sum of lines $\mathcal{V}(n)=\mathcal{W}^n+\mathcal{W}^{-n}$. \paragraph{$\boldsymbol{SU(3).}$}The $SU(3)$ case is richer. The weight states in any irreducible representation are labeled by two Dynkin labels $(n_1,n_2)\in \mathbb{Z}^2$, which are the charges of the Wilson lines \begin{equation} W_1^{n_1}=\exp{\left(in_1\oint _{\gamma} A_1 \right)} \ , \ \ \ \ W_2^{n_2}=\exp{\left(in_2\oint _{\gamma} A_2 \right)} \end{equation} of the $U(1)^2$ theory expressed in the Chevalley basis. The relation to the symmetric basis is $A_1=\mathcal{A}_1+\mathcal{A}_2$, $A_2=\mathcal{A}_2$, so that $ W_1=\mathcal{W}_1\mathcal{W}_2$, $ W_2=\mathcal{W}_2$ and $\mathcal{W}_1=W_1W_2^{-1}$, $ \mathcal{W}_2=W_2$. The action of $S_3$ on the Wilson lines in the symmetric basis is by simple permutations $$ (\mathcal{W}_1,\mathcal{W}_2,\mathcal{W}_3)\rightarrow (\mathcal{W}_{\sigma (1)}, \mathcal{W}_{\sigma (2)},\mathcal{W}_{\sigma (3)}) \ , \ \ \ \sigma \in S_3 $$ where we should remember that $\mathcal{W}_1\mathcal{W}_2\mathcal{W}_3=1$. Consider the UV Wilson line in the fundamental representation, whose weight states are $(1,0), (-1,1), (0,-1)$. The Dynkin labels coincide with the charges $(n_1,n_2)$ of the Wilson lines in the Chevalley basis. Hence we have \begin{equation} W_{(1,0)}^{SU(3)}=W_1+W_1^{-1}W_2+W_2^{-1}. \end{equation} We can easily check that this operator is $S_3$ invariant. Notice also that the terms above are all mapped into each other by the Weyl group. Indeed by rewriting the lines in the symmetric basis we have \begin{equation} W_{(1,0)}^{SU(3)}=\mathcal{W}_1^{-1}+\mathcal{W}_2^{-1}+\mathcal{W}_1\mathcal{W}_2=\mathcal{V}(-1,0)=\mathcal{V}(0,-1)=\mathcal{V}(1,1) \end{equation} namely a single Wilson line of the $U(1)^2\rtimes S_3$ gauge theory. This property is clearly not true for all the representations of $SU(3)$. It is worth considering also the anti-fundamental representation, whose weight states are $(0,1), (1,-1), (-1,0)$. The corresponding Wilson line is \begin{equation} W_{(0,1)}^{SU(3)}=W_2+W_1W_2^{-1}+W_1^{-1} \end{equation} which is again $S_3$ invariant. Notice that we can obtain this Wilson line from the one in the fundamental by acting with $$ C\cdot W_1= W_2 \ , \ \ \ \ C\cdot W_2=W_1. $$ The operator $C$ is \emph{charge conjugation}. At the level of the connections it exchanges $A_1\leftrightarrow A_2$, thus leaving the Lagrangian $F_1^2+F_2^2-F_1F_2$ invariant. However, as we have just seen, $C$ can act non-trivially on gauge-invariant operators and therefore it is a global symmetry of the theory. This has to be contrasted with $S_3$, which leaves the action invariant but also acts trivially on the gauge invariant operators. This is because the Weyl group $S_3$ is gauged in the YM theory, while charge conjugation is a 0-form global symmetry acting as an automorphism of the set of line operators.
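As a small consistency check of the statement that at $g_{YM}=0$ the lines fuse as the associated representations (cf. the footnote above), one can multiply the fundamental and anti-fundamental lines; the following short computation is our own illustration: $$ W_{(1,0)}^{SU(3)}\,W_{(0,1)}^{SU(3)}=3+W_1W_2+W_1^{-1}W_2^{-1}+W_1^{2}W_2^{-1}+W_1^{-2}W_2+W_1^{-1}W_2^{2}+W_1W_2^{-2}=1+W_{(1,1)}^{SU(3)} \ , $$ where $W_{(1,1)}^{SU(3)}$ is the Wilson line in the adjoint representation, whose weight system consists of the six roots together with two vanishing weights. This is precisely the character identity $\boldsymbol{3}\otimes \bar{\boldsymbol{3}}=\boldsymbol{1}\oplus \boldsymbol{8}$.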
\subsection{Gukov-Witten Operators} \label{sec:surface operators} The surface operators of YM theory, introduced by Gukov and Witten in \cite{Gukov:2006jk, Gukov:2014gja}, are of two types, electric and magnetic. Both types are labeled by conjugacy classes of the gauge group, namely points $\boldsymbol{\alpha} \in U(1)^{N-1}/S_N$. The electric GW operators labeled by elements of the center $\mathbb{Z}_N\subset SU(N)$ are topological and generate the 1-form center symmetry $\mathbb{Z}_N^{(1)}$ acting on Wilson lines with charge given by the $N$-ality of the associated representation. In the semi-Abelian theory we similarly have electric and magnetic surface operators, denoted $\mathcal{T}(\boldsymbol{\alpha})$ and $\widetilde{\mathcal{T}}(\boldsymbol{\alpha})$ respectively. As we have seen these are labeled by $\boldsymbol{\alpha}\in U(1)^{N-1}/S_N$, thus exactly matching those of the $SU(N)$ theory. A further confirmation that the surface operators of the semi-Abelian theory are related to those of YM theory comes from the action on Wilson lines. The center symmetry of $SU(N)$ is preserved along the RG flow, hence it must be present also in the deep ultraviolet and should be realized in the semi-Abelian theory. We have already shown that the largest invertible symmetry inside the 2-category describing the surface operators is $\mathbb{Z}_N^{(1)}$ and that these defects act on simple Wilson lines multiplying them by a phase \begin{equation} \mathfrak{C}( \boldsymbol{\alpha}_k,\boldsymbol{n}) = \exp{\left(\frac{2\pi i k}{N}|\boldsymbol{n}|\right)}. \label{eq:N-ality} \end{equation} To prove that this $\mathbb{Z}_N$ subgroup of the non-invertible symmetry corresponds to the one-form symmetry of the YM theory we need to check that the $SU(N)$ Wilson lines have definite charge proportional to the $N$-ality of the representation. Notice that a priori this is not obvious since the lines of $SU(N)$ are combinations of the lines of the semi-Abelian theory, and so for a generic GW operator $\mathcal{T}(\boldsymbol{\alpha})$ \begin{equation} \mathcal{T}(\boldsymbol{\alpha})\mathcal{W}^{SU(N)} \not\propto \mathcal{W}^{SU(N)}. \end{equation} Actually this factorization occurs precisely for the GW operators generating the center symmetry $\mathbb{Z}_N$. In order to see this we have to rewrite the charge $|\boldsymbol{n}|$ appearing in \eqref{eq:N-ality} in the Chevalley basis. From $A_i = L_{ij}\mathcal{A}_j$ we get $n_i=L_{ji}q_j$, where $q_j$ are the charges in the Chevalley basis. By noting that $\sum _i L_{ji}=j$ we obtain \begin{equation} |\boldsymbol{n}|=\sum_{i,j=1}^{N-1} L_{ji} q_j=\sum _j j q_j =:p \label{eq:SU(N)Wilson_phases} \end{equation} where $p = \sum_i i q_i \; \mbox{mod }N$ is precisely the $N$-ality of the weight state $(q_1,...,q_{N-1})$. An $SU(N)$ Wilson line in representation $\mathcal{R}$ is a particular combination of simple $S_N$-invariant lines with charges given by the weights of $\mathcal{R}$. Since all weights of a weight system belong to the same congruence class, all terms in the $SU(N)$ Wilson line have the same charge under the $\mathbb{Z}_N$ generators. Thus on $SU(N)$ Wilson lines the action of the invertible GW operators factorizes and assigns a charge exactly coinciding with the $N$-ality of the representation. Notice that we found this action only after implicitly imposing a global structure for the semi-Abelian theory dictated by choosing $SU(N)$ as the gauge group of YM theory.
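To make \eqref{eq:SU(N)Wilson_phases} concrete, consider the $SU(3)$ fundamental Wilson line of section \ref{sec:line operators}: its three weights $(1,0)$, $(-1,1)$ and $(0,-1)$ all have $p=q_1+2q_2=1 \; \mbox{mod }3$, so the three Abelian lines composing $W^{SU(3)}_{(1,0)}$ pick up the common phase $e^{2\pi i k/3}$ under $\mathcal{T}(\boldsymbol{\alpha}_k)$, as expected for a line of $N$-ality one.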
Other choices of global structure will lead to different group-like symmetries; this will be discussed in the next subsection. \subsection{Global Structures} \label{sec:global_structures} For a gauge theory with Lie algebra $\mathfrak{g}$ we have different choices of global structures, corresponding to different choices of genuine line operators of the theory \cite{Aharony:2013hda}, which can be related by the gaugings of the center symmetry (or some subgroup of it) \cite{Kapustin:2014gua}. In this section we show that all the possible global structures of $\mathfrak{g}=su(N)$ YM theories are nicely matched in the $U(1)^{N-1}\rtimes S_N$ gauge theory\footnote{A similar idea is used in \cite{DelZotto:2022ras} in order to derive the possible choices of global structures of supersymmetric gauge theories from the infrared Coulomb branch. Here we perform the somewhat complementary analysis in the ultraviolet.}. \subsubsection{Dirac quantization condition and 't Hooft lines} In 4d Maxwell theories the possible global structures are the solutions of the Dirac quantization condition. For a single Abelian gauge field the only compact global structure is $U(1)$ and the usual Dirac quantization condition imposes that the charges $q$ and $\widetilde{q}$ of the Wilson and 't Hooft lines respectively must satisfy the condition $q \widetilde{q} \in \mathbb{Z}$. In the case of $U(1)^{N-1}$ Maxwell theory this condition generalizes straightforwardly: if we consider the diagonal action \begin{equation} S=\frac{1}{4}\int \widehat{F}_i \wedge * \widehat{F}_i \end{equation} we have \begin{equation} q_i \widetilde{q}_i \in \mathbb{Z} \ , \ \ \ \forall i=1,...,N-1. \label{eq:diagonalDirac} \end{equation} These charges however do not have an immediate interpretation in terms of the relation with $SU(N)$ YM theory. To have such an interpretation we should work in the Chevalley basis (or the symmetric one), which is non-diagonal. Changing basis as $\widehat{F}_i=R_{ij}F_j$, the action in the $A_i$ variables becomes \eqref{eq:action UV YM} with $K = R^TR$. Denoting by $n_i,\widetilde{n}_i$ the electric and magnetic charges in the basis with quadratic form $K$, we get \begin{equation} q_i = n_j(R^{-1})_{ji}\ , \ \;\;\; \widetilde{q}_i = \widetilde{n}_j (R^{-1})_{ji} \; \; . \end{equation} The Dirac condition (\ref{eq:diagonalDirac}) can now be written as (not summed over $i$) \begin{equation} q_i \widetilde{q}_i = n_j(R^{-1})_{ji} \widetilde{n}_k (R^{-1})_{ki} \in \mathbb{Z} . \end{equation} Then by summing over $i$ we get \begin{equation} \label{eq:Dirac_condition} n_i (K^{-1})_{ij} \widetilde{n}_j \in \mathbb{Z}. \end{equation} A particular choice of the global structure in the $U(1)^{N-1}\rtimes S_N$ gauge theory will constrain the set of possible $n_i$, or equivalently the set of possible $\widetilde{n}_i$. Then the constraints on the other charges are completely fixed by \eqref{eq:Dirac_condition}. The 't Hooft lines of the $U(1)^{N-1}\rtimes S_N$ gauge theory are of the form \begin{equation} \mathcal{M}(\boldsymbol{\widetilde{n}}) = \sum_{\sigma\in S_N} \mathcal{H}(\mathfrak{S}_{\sigma} ^{\vee}\cdot \widetilde{\boldsymbol{n}}). \end{equation} In the UV limit the $SU(N)$ 't Hooft lines become particular combinations of the $\mathcal{M}(\widetilde{\boldsymbol{n}})$ for various $\widetilde{\boldsymbol{n}}\in \mathbb{Z}^{N-1}$ such that the quantity \begin{equation} |\widetilde{\boldsymbol{n}}|= \sum_{i=1}^{N-1}\widetilde{n}_i \end{equation} is fixed.
As for the Wilson lines, $|\widetilde{\boldsymbol{n}}|$ is the $N$-ality of the corresponding $SU(N)$ representation. Keeping this in mind, we are ready to discuss the relation between the possible global structures of YM theory and those of the semi-Abelian theory. \subsubsection{Matching the global structures} To match the $SU(N)$ global structure in the $U(1)^{N-1}\rtimes S_N$ theory we require the charges $n_i$ of the Wilson lines in the Chevalley basis to be all possible integers. With this choice all the UV Wilson lines defined in section \ref{sec:line operators} are genuine line operators of the theory. Taking $K=Q$ in (\ref{eq:Dirac_condition}), and choosing only one $n_i$ different from zero and equal to one, we get the constraint \begin{equation} \widetilde{n}_i = Q_{ij}v_j \ , \ \ \ \ v_i\in\mathbb{Z} \label{eq:Dirac_conditionQ} \end{equation} for the charges of the 't Hooft line $\mathcal{H}(\widetilde{n}_1,\cdots,\widetilde{n}_{N-1})$. The condition (\ref{eq:Dirac_conditionQ}) implies that \begin{equation} |\widetilde{\boldsymbol{n}}|= \sum_{i=1}^{N-1 }\widetilde{n}_i = \sum_{i,j=1}^{N-1 }Q_{ij}v_j = N \sum_{i}v_i \in N\mathbb{Z} \end{equation} where we used $\sum _j Q_{ij}=N$. As expected, only the 't Hooft lines with vanishing $N$-ality are genuine line operators. Notice that in this case the invertible magnetic GW operators do not have charged operators, hence only the electric $\mathbb{Z}^{(1)}_N$ is non-trivial. By exchanging the roles of $\boldsymbol{n}$ and $\boldsymbol{\Tilde{n}}$ we immediately see that also the global structure of $PSU(N)$ can be reproduced in the semi-Abelian theory; in this case the electric $\mathbb{Z}^{(1)}_N$ invertible symmetry has no charged operators and the one-form symmetry $\mathbb{Z}_{N}^{(1)}$ of the theory is entirely generated by the invertible magnetic GW operators. The $SU(N)$ and $PSU(N)$ theories are connected by the gauging of the center symmetry. We want to show that the same conclusion holds in the UV theory as well. Indeed in the previous section we have shown that $U(1)^{N-1}\rtimes S_N$ possesses a $\mathbb{Z}_N$ 1-form symmetry which can be gauged. The action of this group on the Wilson lines of the theory is presented in section \ref{sec:surface operators} and it is \begin{equation} \mathcal{T}(\boldsymbol{\alpha}_k)\cdot \mathcal{V}(\boldsymbol{n}) = e^{\frac{2\pi i k}{N}|\boldsymbol{n}|} \mathcal{V}(\boldsymbol{n}). \end{equation} After gauging, only the Wilson lines satisfying $|\boldsymbol{n}|= 0 \text{ mod } N$ remain as good operators of the theory, matching the spectrum of Wilson lines in the $PSU(N)$ theory\footnote{We also have a different but equivalent way to gauge this symmetry. Indeed the GW operators generating $\mathbb{Z}_N$ are a subgroup of the $(U(1)^{(1)}_e)^{N-1}$ symmetry of the $U(1)^{N-1}$ gauge theory before the $S_N$ gauging. Then we can gauge this subgroup in this theory and then gauge the permutation symmetry in the resulting theory. As is known, gauging a $\mathbb{Z}_N$ symmetry in a Maxwell theory simply changes the quantization conditions for the electric and magnetic charges, and we can easily get the same result obtained in the main text.}.
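As a quick sanity check of this matching (the numbers below are our own illustration), consider $N=2$, for which $Q=(2)$. The condition \eqref{eq:Dirac_condition} reads $n\widetilde{n}/2\in\mathbb{Z}$: allowing all $n\in\mathbb{Z}$, i.e. the $SU(2)$ global structure, forces $\widetilde{n}\in 2\mathbb{Z}$, so only 't Hooft lines of even charge are genuine, while exchanging the roles of $n$ and $\widetilde{n}$ reproduces the spectrum of $PSU(2)=SO(3)$. Gauging the $\mathbb{Z}_2$ one-form symmetry removes the odd-charge Wilson lines and allows all integer magnetic charges, in agreement with the general discussion below.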
Moreover, since we have now eliminated some Wilson lines from the theory, the surviving electric charges are of the form \begin{equation} n_i = Q_{ij} v_j \;\;(v_j \in \mathbb{Z}), \end{equation} so that the Dirac quantization condition for the genuine 't Hooft lines \begin{equation} n_i (Q^{-1})_{ij}\widetilde{n}_j \in \mathbb{Z} \end{equation} only imposes that $|\widetilde{\boldsymbol{n}}| \in \mathbb{Z}$, as it should be in $PSU(N)$. It is straightforward to check that by gauging $\mathbb{Z}_{l}$ subgroups of the center symmetry one gets a spectrum of lines in the semi-Abelian theory which exactly matches the spectrum of the $SU(N)/\mathbb{Z}_l$ gauge theory. \section{Outlook} \label{sec:4} The main motivation of this paper was studying the properties of the continuous non-invertible symmetries arising in the $U(1)^{N-1} \rtimes S_N$ gauge theories and making a connection with the UV limit of $SU(N)$ YM theory. In particular we have found that all the GW operators of the non-Abelian theories become topological in the deep UV and they describe a non-invertible symmetry which is broken to its group-like subcategory $\mathbb{Z}_N$ along the RG flow. Therefore this is one of the few examples in which the gauging of an automorphism is not an artificial mechanism introduced to produce non-invertible symmetries but instead comes naturally from physically interesting systems. In doing this we have analyzed extensively the symmetry, which forms a continuous 2-category with an intricate structure arising from the presence of topological lines, appearing as 1-morphisms. The fusion rules encode information about these morphisms in the integer constants $f_{ab}^{\sigma}$, and in the presence of the condensation defects. Although we analyzed explicitly the $SU(N)$ gauge theory, it is easy to see that our results extend to any gauge group $G$. The theory encoding the gauge invariant data in the ultraviolet is the $U(1)^r\rtimes W_G$ gauge theory, where $r$ is the rank of $\mathfrak{g}=\mbox{Lie}\;G$ and $W_G$ is the Weyl group. Then the fusion rules \eqref{eq:local fusions} as well as the action of the GW operators on line operators \eqref{eq:actionWilson} are simply obtained by replacing $S_N$ with $W_G$. Also the analysis of the condensation defects, the global fusions and the 2-categorical structure is conceptually identical for any gauge group $G$. We conclude by proposing interesting open problems which arise naturally from our work, and also give qualitative ideas and suggestions about these issues. \textbf{Non-local currents, spontaneous symmetry breaking and anomalies.} The first question concerns the properties of the continuous non-invertible symmetries studied in this paper. Indeed it is natural to ask if such symmetries have conserved currents and if a possible spontaneous breaking of a continuous non-invertible symmetry would lead to Goldstone bosons. The existence of conserved currents can be derived from the known conserved currents of the $U(1)^{N-1}$ theory before the $S_N$ gauging. In this theory we have the conserved $2$-form current \begin{equation} j^i = F^i \end{equation} where $i = 1,\cdots,N-1$, corresponding to the $(U(1)^{(1)})^{N-1}$ 1-form symmetry of the theory. \\After the $S_N$ gauging this operator is no longer gauge invariant and therefore it cannot be regarded as a good operator of the theory. However we can construct a gauge invariant non-genuine local operator by attaching to $F^i$ an $S_N$ Wilson line in the $(N-1)$-dimensional standard representation \begin{equation} J = W_{S_N}(\gamma_x)^{i} F^i(x).
\end{equation} In the above equation $\gamma_x$ is an infinite topological line which ends at $x$, so that $J$ is a good gauge invariant operator. The idea is that currents of non-invertible symmetries correspond to non-genuine local operators \cite{Thorngren:2021yso}. Note however that this new current is not conserved but is covariantly conserved with respect to $S_N$ transformations, namely \begin{equation} D_{S_N} J = 0. \end{equation} In particular the conserved current in ordinary invertible symmetries is the operator creating Goldstone particles from the vacuum when such a symmetry is spontaneously broken. In this case it would be interesting to understand what happens to these excitations and to interpret them through a generalized version of the Goldstone theorem\footnote{For a generalization of the Goldstone theorem in the case of \emph{invertible} higher-form symmetries see e.g. \cite{Lake:2018dqm,Hofman:2018lfz}.}. Another interesting question is about the possible mixed 't Hooft anomaly between the electric and magnetic non-invertible symmetries possessed by the semi-Abelian gauge theory. Indeed before the $S_N$ gauging the $U(1)^{N-1}$ gauge theory has such an anomaly between the invertible 1-form symmetries $(U(1)_e^{(1)})^{N-1}$ and $(U(1)_m^{(1)})^{N-1}$. This anomaly involves continuous symmetries and we expect it to be inherited by the non-invertible symmetries, since a discrete gauging cannot cancel a continuous anomaly. However to study this anomaly we need to couple these symmetries to backgrounds (note that the $B_{e,m}$ backgrounds of the Abelian theory are no longer gauge invariant), but a consistent definition of backgrounds for non-invertible symmetries is still an open problem. \textbf{Constraints on the RG flow of Yang-Mills theories.} Perhaps the most important question regards possible implications of the UV emergent symmetries along the RG flow of YM theories. Indeed in a generic QFT, a symmetry possessed by the UV fixed point and broken by some relevant deformations affects the possible structure of the low energy effective theory. This is the case, for instance, of the quark mass perturbation in QCD which leads to mass terms in the chiral Lagrangian. In this case it would be interesting to study more carefully the deformation which breaks this non-invertible symmetry to the center symmetry of YM theory. In particular we expect that, for instance, correlation functions involving a GW operator and a Wilson line \begin{equation} \langle T(\boldsymbol{\alpha})^{SU(N)} [\Sigma_2] W_{R}(\boldsymbol{n})^{SU(N)}[\gamma]...\rangle, \end{equation} which at $g\neq 0$ and $T(\boldsymbol{\alpha}) \not\in \mathbb{Z}_N$ depend on the relative position of the surface $\Sigma_2$ and the curve $\gamma$, approximately follow the topological action presented in the previous sections when the surface is infinitesimally close to $\gamma$, with corrections of order $\Lambda_{YM}r$, where $r$ parametrizes the distance between $\Sigma_2$ and $\gamma$\footnote{For instance, taking $\Sigma_2 = S^2$ surrounding $\gamma$, $r$ is exactly the radius of the sphere.}. \\We hope that other possible predictions can be achieved once the issues presented in the first part of this section are understood. In particular the presence of an anomaly before the deformation would suggest that the gap produced by the RG flow should go to zero in the limit in which the RG flow is never triggered. Indeed this is something believed to happen in YM theory since the gap is of order $\Lambda_{YM}$.
\section*{Acknowledgments} We would like to thank Stephane Bajeot, Francesco Benini, Christian Copetti, Thibault Decoppet, Lorenzo Di Pietro, Marco Serone and Matthew Yu for useful discussions. We especially thank Christian Copetti and Marco Serone for useful comments on the draft which led us to a better understanding of some of the results presented in this work. We are grateful to the Perimeter Institute for Theoretical Physics for the hospitality and to the organizers of the workshop ``Global Categorical Symmetries'', during which this work was completed and presented in a poster session. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. This work is partially supported by INFN Iniziativa Specifica ST\&FI. A.A. and G.R. are supported in part by the ERC-COG grant NP-QFT No. 864583 ``Non-perturbative dynamics of quantum fields: from new deconfined phases of matter to quantum black holes''.
\section{Introduction} Since the recent revival of neural networks by deep learning~\citep{lecun2015deep}, the approach of over-parameterized networks has achieved huge success in a wide range of areas such as computer vision~\citep{krizhevsky2012imagenet, he2016identity, redmon2016you} and natural language processing~\citep{devlin2018bert, radford2018improving}. The size of neural networks has grown enormously. For example, the LeNet-5 model~\citep{lecun1998gradient} for digit classification in 1998 had 60 thousand parameters, while in 2020, the GPT-3 language model~\citep{brown2020language} has 175 billion parameters, requiring a vast amount of computation, storage, and energy resource for training and prediction. The large model size may also prohibit the deployment to edge devices such as cell phones due to hardware constraints. Interestingly, researchers have empirically shown that one can often build a much simpler network with similar performance based on a pre-trained network, which can save a considerable amount of resources~\citep{han2015deep, frankle2018lottery}. For example, \citet{han2015deep} show that on the ImageNet dataset, AlexNet can be compressed to retain only $3\%$ of the original parameters without impacting classification accuracy. As a result, there is rising interest in compressing an over-parameterized neural network into a simpler model with comparable prediction accuracy. The study of neural network compression has a long history and various methods have been proposed. One of the most popular and effective ways to obtain a simplified model is by pruning the original network~\citep{lecun1989optimal, hagiwara1993removal, han2015deep, hu2016network, luo2017thinet, frankle2018lottery, lee2018snip, he2017channel}, which means dropping some neuron connections, neurons, or neuron-like structures such as filters and layers. The general intuition is that in an over-parameterized model, many connections are redundant and their removal will have little impact on prediction. To prune a pre-trained network, a standard procedure consists of a pruning criterion and a stopping criterion. A number of pruning criteria have been proposed from different interpretations of redundancy. For example, one may postulate that weights with small magnitudes are non-influential and thus can be eliminated~\citep{hagiwara1993removal, han2015deep}, or one may prune the edges/neurons that have the least influence on the network output~\citep{lecun1989optimal, lee2018snip,hu2016network,soltani2021information}. A comprehensive survey of the existing literature on model compression can be found in~\citep{hoefler2021sparsity}. A commonly used stopping criterion is to terminate pruning when the test accuracy on the validation dataset starts to drop significantly. Most existing pruning algorithms are heuristic. In practice, the compressibility of different networks and tasks may vary, the efficiency of pruning methods may be unpredictable, and a lot of ad hoc fine-tuning may be involved during pruning. Thus, it is critical to develop a theoretical understanding of compressibility. There have been some recent works in this regard. For example, \citet{arora2018stronger} relate the compressibility and generalization error of a model through its noise stability. They show that the stability of a neural network against noise injection in the input implies the existence of a smaller network with similar performance.
Another line of work towards the provable existence of a sparser network with similar performance is to look for a coreset of parameters. A coreset means a small subset of parameters that can preserve the output of the original network. \citet{baykal2018data} propose to sample the parameters based on their importance, quantified by the sensitivity of the output due to parameter changes, and approximated using the training data. \citet{mussay2019data} use the same idea of constructing coresets, but calculate the sensitivity in a data-free manner. \citet{ye2020good} propose to reconstruct a network by greedily adding the neurons that decrease the prediction risk the most. They show the generalization error of the constructed model with $m$ parameters is at the order of $O(1/m)$ for two-layer neural networks. However, the prediction risk is intractable in practice, and the greedy selection is done by evaluating the training loss, causing a gap between theory and practice. \citet{malach2020proving, orseau2020logarithmic} prove that any network can be arbitrarily well approximated by a subnetwork pruned from another deeper and wider network. \citet{zhang2021lottery} show why the pruned network often has a better test accuracy and is easier to train than the original network. In this paper, we aim to study two fundamental but under-explored problems: for a pre-trained neural network, when can one perform a successful pruning, and how much can one compress with little accuracy degradation. We are motivated by the idea that an essentially sparse network has fewer important weights and can be pruned more without affecting the performance. An immediate example of a sparsity indicator is the number of non-zero weights in a network, also known as the $\ell_0$-norm or hard sparsity. However, it is common in practice that the network has many weights with small magnitudes. We propose to use the $\ell_q$-norm ($0<q<1$) of weights as a soft sparsity measure to characterize the compressibility of a network. In particular, we will provide an upper bound for the approximation and generalization errors of the pruned model for any given pruning ratio. This bound indicates that a network with larger soft sparsity is more compressible. It also guides the selection of a proper pruning ratio that keeps the accuracy degradation within an allowed level. We then propose a novel adaptive backward pruning procedure based on our theory, which has two specific implementations. The first implementation prunes the network based on the magnitude of weights, while the pruning ratio of each neuron is adaptively determined by its soft sparsity level. The second implementation alternatively determines the pruning ratio of each neuron by using LASSO~\citep{tibshirani1996regression} with the same penalty parameter. A neuron that is essentially sparse will be pruned more in this way. Experiments show their promising performance when compared with the baseline method that prunes a fixed proportion of weights with the smallest magnitude neuron-wise. \section{Problem formulation}\label{sec:form} We start with the standard regression learning framework. We will extend the results to classification learning in \Autoref{sec:thm}. Suppose that the data generation distribution is $(X,Y) \sim \mu$, where $X = (x_1, \ldots, x_p)^\T \in \mathbb{R}^p$ denotes the predictor/input variable, $Y \in \real$ is the response/output that is square-integrable with respect to $\mu$.
The target function is $f^*: x \to \E(Y \mid X=x)$, and the loss function is the mean squared error loss throughout the paper unless otherwise specified. The risk, or generalization error, of a function $f: \real^p \to \real$ is given by $R(f) = \E{[f(X)-f^*(X)]^2}.$ A pre-trained $L$-layer fully connected neural network, denoted by $f_T$, is given as follows. The initial input layer is considered as the $0$-th layer, and the output layer is the $L$-th layer. \begin{align*} f_i^{(0)} &= x_i, \ i=1,\ldots, p, \\ g_i^{(k)} &= \sum^{n_{k-1}}_{j=1}w_{ij}^{(k-1)}f_j^{(k-1)} , \ f_i^{(k)} = \sigma\bigl(g_i^{(k)}\bigr), i=1,\ldots,n_k, k=1,\ldots, L, \\ f_T &= g_1^{(L)}. \end{align*} Here, we call $f_i^{(k)}$ the $i$-th function, or neuron, in the $k$-th layer, $g_i^{(k)}$ the linear part of $f_i^{(k)}$, $n_k$ is the number of neurons in the $k$-th layer ($n_0=p$, $n_L=1$), $w_{ij}^{(k-1)}$'s are the weights or parameters\footnote{Without loss of generality, the bias term is absorbed into the weights since we can add a constant neuron to each layer.}, and $\sigma(\cdot)$ is a $\rho$-Lipschitz activation function. Common activation functions include the ReLU function $\sigma(x) = \max\{x, 0\}$, tanh function $\sigma(x)=(e^{x}-e^{-x})/(e^{x}+e^{-x})$, and sigmoid function $\sigma(x)=1/(1+e^{-x})$. We are interested in finding a sparsified network $f_S$ of $f_T$, such that $f_S$ has a generalization error similar to that of $f_T$. In other words, $f_S$ has the same structure as $f_T$, but the weights include many zeros. The non-zero elements of $f_S$ do not have to remain the same as those in $f_T$. Suppose that a vector $w$ has $M$ coefficients in total, and after pruning, it has only $m$ non-zero coefficients. We define the compression ratio and pruning ratio for this vector as $M/m$ and $1-m/M$, respectively. This definition naturally generalizes to a layer or the whole network. Thus, the problem is cast into the following: what is the best generalization error of a subnetwork $f_S$ with a given pruning ratio? \textbf{Notation.} We use $\E$ for expectation and $\P$ for probability. For a vector $w=(w_1, \ldots, w_d)^\T \in \real^d$, the $\ell_0$-norm and $\ell_q$-norm with $0< q \leq 1$ of $w$ are \begin{align*} \norm{w}_0=\sum_{i=1}^d \ind_{w_i \neq 0},\ \text{ and } \norm{w}_q=\biggl[\sum_{i=1}^d \abs{w_i}^q\biggr]^{1/q}, \end{align*} where $\ind_{(\cdot)}$ is the indicator function. Note that the $\ell_q$-norm with $0<q<1$ is actually a quasinorm, and the $\ell_0$-norm is not even a quasinorm, though we still call them norms by convention. Define the $L_p(\mu_X)$-norm of any function $f$ for $p \geq 1$ as \begin{align*} \norm{f}_p = \biggl(\int |f|^p\mu_X(dx)\biggr)^{1/p} , \end{align*} where $\mu_X$ is the marginal distribution of $X$. The generalization error of $f$ can be written as $R(f) = \norm{f-f^*}_2^2.$ The $L_{\infty}(\mu_X)$-norm of $f$ is $\norm{f}_{\infty} = \esssup \abs{f}.$ Other frequently used notations are summarized in \Autoref{tab:notation}.
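To build intuition for the $\ell_q$-norm with $0<q<1$ as a soft sparsity measure, consider the following simple illustration of our own. Compare $w=(1,0,\ldots,0)$ and $w'=(1/d,\ldots,1/d)$ in $\real^d$: both have unit $\ell_1$-norm, but $\norm{w}_q=1$ while $\norm{w'}_q=d^{1/q-1}$, which grows with the dimension $d$. A vector whose energy is spread over many coordinates thus has a much larger $\ell_q$-norm than one whose energy is concentrated on a few coordinates, so a small $\ell_q$-norm indeed signals an essentially sparse vector.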
\begin{table} \caption{A summary of frequently used notations.} \label{tab:notation} \centering \begin{tabular}{lp{10.5cm}} \toprule Notation & Meaning \\ \midrule $f_j^{(k)}, g_j^{(k)}$ & The $j$-th function (also named neuron) and its linear part (before activation) in the $k$-th layer \\ $f_j^{(k),s}, g_j^{(k),s}$& The approximation of $f_j^{(k)}, g_j^{(k)}$ after $s$ steps\\ $f^*, f_T, f_{T}^{(s)}$ & The target function, the pre-trained network, and the approximation of $f_{T}$ after $s$ steps \\ $w_{ij}^{(k-1)}$ or $w_{i,j}^{(k-1)}$ & The $j$-th coefficient of the weight vector of the $i$-th function in the $k$-th layer \\ $n_k, m_k, q_k, t_k$ & The number of neurons, the number of neurons preserved, the norm index, and the largest $\ell_{q_k}$-norm of weights in the $k$-th layer \\ $\sigma(\cdot)$ & A $\rho$-Lipschitz activation function \\ \bottomrule \end{tabular} \end{table} \section{Theoretical characterization of compressibility using \texorpdfstring{$\ell_q$}{} norm}\label{sec:thm} We next provide an upper bound of the generalization error for some pruned model $f_S$. Our study is motivated by sparse linear approximation. For any square-integrable function $f=\sum_{i=1}^n w_if_i$, we hope to approximate $f$ by a linear combination of a small subset of the basis functions $\mathcal{F}=\{f_1, \ldots, f_n\}$. Specifically, we would like to choose $m$ functions $\mathcal{F}_m=\{f_{s_1}, \ldots, f_{s_m}\}$ such that $\hat{f}=\sum_{i=1}^m \tilde{w}_{s_i}f_{s_i}$ is close to $f$, where $\tilde{w}_{s_i}$'s are new coefficients. We note that this is model compression in the one-layer neural network scenario, where $f$ is considered as the output layer and the $f_i$'s are the functions of neurons in the hidden layer. It is intuitive that if the weight vector $w$ of $f$ is sparse, then $f$ can be well approximated by a small subset of the $f_i$'s. For example, if $w$ has hard sparsity $\norm{w}_0=m$, we can simply retain the $f_i$'s with $w_i\neq 0$. However, hard sparsity is unrealistic. In practice, a much more common situation is that the network has a large number of small-valued coefficients. A notable approach to study sparse linear approximation in the latter case is via the $\ell_q$-norm ($0<q<1$) of $w$~\citep{wang2014adaptive}. For a vector $w$ with fixed dimension, a small $\norm{w}_q$ implies that the number of relatively large parameters is also limited. Therefore, the $\ell_q$-norm can be regarded as a soft sparsity indicator. In particular, for a given subset cardinality $m$, \citet{wang2014adaptive} provides an upper bound of the approximation error for the best linear combination of $m$ basis functions under some mild conditions. This paper utilizes the above result and extends it to any fully connected neural network. Specifically, we approximate each neuron by a small number of neurons from the previous layer, hence obtaining a simpler model, and establish its error bound. In other words, each neuron in the previous layer is regarded as a basis function for the current layer's neurons, and we compress the connections between them by using a sparse linear combination. The overall compressibility of a network is evaluated by aggregating the compressibility of all neurons. Next, we describe how to apply the sparse linear approximation to the whole network in detail. We consider a backward approximation scheme. Let $S$ index the step of approximations starting from the output layer.
The first approximation step, $S=1$, is to select $m_{L-1}$ neurons from the $(L-1)$-th layer to obtain a linear combination of $f_j^{(L-1)}, j=1,\ldots,n_{L-1}$ as the approximation for $f_T$, which is denoted by $f_{T}^{(1)}$. Without loss of generality, we assume the first $m_{L-1}$ neurons are selected, since we can always reorder the indices. Thus, \begin{align}\label{eq:step1} f_{T}^{(1)} = \sum_{i=1}^{m_{L-1}} \tilde{w}_{1,i}^{(L-1)} f_i^{(L-1)}, \end{align} where the $\tilde{w}_{1,i}^{(L-1)}$'s are the new weights for the sparse approximation. The second approximation step, $S=2$, is to approximate each of the $f_j^{(L-1)}, j=1,\ldots,n_{L-1}$, by selecting $m_{L-2}$ neurons from the $(L-2)$-th layer. Here, we assume the pruning ratio for the neurons in the same layer is fixed for simplicity. For each $f_j^{(L-1)}$, suppose the indices of functions selected to approximate it are $\{s_1, \ldots, s_{m_{L-2}}\}$; then the one-step approximation of $f_j^{(L-1)}$ is \begin{align}\label{eq:step2} f_j^{(L-1),1} = \sigma\bigl(g_j^{(L-1),1}\bigr), \ g_j^{(L-1),1} = \sum_{i=1}^{m_{L-2}} \tilde{w}_{j,s_i}^{(L-2)} f_{s_i}^{(L-2)}. \end{align} Note that the neurons selected for approximating different $f_{j}^{(L-1)}$ are different. To ease the notation, we do not distinguish the index sets for different neurons and use $\{s_1, \ldots ,s_{m_k}\}$, which should be self-explanatory from the context. Plugging the approximation above into $f_{T}^{(1)}$, we obtain the two-step approximation of $f_T$ as follows. \begin{align*} f_{T}^{(2)} = \sum_{i=1}^{m_{L-1}} \tilde{w}_{1,i}^{(L-1)} f_i^{(L-1),1}. \end{align*} Iterating, after $S$ steps the output function has been approximated $S$ times; we denote the result by $f_{T}^{(S)}$. In summary, each neuron in the $(k+1)$-th layer of the pre-trained network $f_T$ has $n_k$ inputs from the previous layer, and we prune the number of inputs to $m_k$ (and tune the weights correspondingly). For all such sub-networks, we give an upper bound of the generalization error for the best sub-network as follows. \begin{theorem}[Error bound of the pruned model]\label{thm4.1} Let $f$ be any square-integrable function and $t_k=\max_{1\leq i \leq n_k}\norm{w_i^{(k)}}_{q_k}$ for $k=0, \ldots, L-1$, where $w_i^{(k)}= (w_{i,1}^{(k)}, \ldots, w_{i,n_{k}}^{(k)})$ is the weight vector of the $i$-th function of the $(k+1)$-th layer. For any $1\leq S\leq L$, $1\leq m_k \leq n_k$, and $0<q_k\leq 1$, $k=0, \ldots, L-1$, we have \begin{align} \label{eq32} &\bnorm{f - f_{T}^{(S)}}_2 \leq \norm{f - f_T}_2 + Ct_{L-1}(m_{L-1})^{1/2-1/q_{L-1}}\max_{1 \leq j \leq n_{L-1}}\big\|f_j^{(L-1)}\big\|_2 \\ \nonumber &+ \rho C t_{L-1}t_{L-2}(m_{L-2})^{1/2-1/q_{L-2}}\max_{1 \leq j \leq n_{L-2}}\big\|f_j^{(L-2)}\big\|_2 + \ldots \\ \nonumber &+ \rho^{S-1}Ct_{L-1}t_{L-2}\ldots t_{L-S}(m_{L-S})^{1/2-1/q_{L-S}}\max_{1 \leq j \leq n_{L-S}}\big\|f_j^{(L-S)}\big\|_2, \end{align} where $C$ is a universal constant. \end{theorem} The proofs of \Autoref{thm4.1} and subsequent results are included in the Appendix. \begin{remark} Note that if we take $f=f^*$, then the left hand side of \ref{eq32} is exactly the square root of the generalization error of the pruned model $f_{T}^{(S)}$, and is upper bounded by the square root of the generalization error of the original network, $\norm{f^* - f_T}_2$, plus the approximation error. \end{remark} \begin{remark} \Autoref{thm4.1} indicates that there exists a sub-network $f_T^{(S)}$ such that the accuracy degradation is properly upper bounded.
The bound~\ref{eq32} further implies that there is a trade-off between the generalization error and the compression ratio. Namely, the smaller $m_k$, the higher the compression ratio, and the larger the upper bound of the generalization error. \end{remark} \begin{remark} The compressibility, or the generalization error bound, is characterized by the $\ell_q$-norm of the original network. As mentioned at the beginning of this section, a small $\ell_q$-norm ($t_k$) indicates that the number of important weights is limited and the network is sparse, and thus one can compress more. This characterization can be used to understand the two fundamental issues proposed in the introduction. In particular, for the first question of when one can prune a network, we know a network with a smaller $\ell_q$-norm, i.e., a larger soft sparsity, can be pruned more. For the second question of how much one can prune with controlled accuracy drop, we can derive a lower bound for the pruning ratio via~\ref{eq32}. \end{remark} \begin{remark} \Autoref{thm4.1} assumes the pruning ratio and $\ell_q$-norm are fixed for all neurons in the same layer for technical convenience. In practice, one may allow them to differ for each neuron. To illustrate this point, we will propose adaptive neuron-level pruning techniques in \Autoref{ch4.3.3}. Additionally, \Autoref{thm4.1} can be generalized to give an upper bound for pruning certain layers, not necessarily in this backward form. \end{remark} \begin{remark} The condition that $t_k=\max_{1\leq i \leq n_k}\norm{w_i^{(k)}}_{q_k}$ can be relaxed to $t_k=\max_{i \in \mathcal{I}_k}\norm{w_i^{(k)}}_{q_k}$, where $\mathcal{I}_k$ contains the indices of neurons that connect to the $(k+1)$-th layer of the pruned model. \end{remark} Next, we present a more parsimonious form of \Autoref{thm4.1} with specific activation functions and pruning ratios. \begin{corollary}[Homogeneous pruning]\label{coro:prune} For activation functions such as tanh and sigmoid, we have $\rho=1$ and $\norm{f^{(i)}_j}_2 \leq 1$. Then, the bound~\ref{eq32} can be replaced with a simpler one \begin{align*} \big\|f - f_{T}^{(S)}\big\|_2 &\leq \big\|f - f_T\big\|_2 + Ct_{L-1}(m_{L-1})^{1/2-1/q_{L-1}} +\cdots \\ & \quad + Ct_{L-1}t_{L-2}\ldots t_{L-S}(m_{L-S})^{1/2-1/q_{L-S}}. \end{align*} Furthermore, when $q_k = q$ and $m_k = m$ for all $k$, we have \begin{align} \label{eq34} \big\|f - f_{T}^{(S)}\big\|_2 \leq \big\|f - f_T\big\|_2 + C(t_{L-1} + \ldots + t_{L-1}t_{L-2}\ldots t_{L-S})m^{\frac{1}{2}-\frac{1}{q}}. \end{align} \end{corollary} As an application of Corollary~\ref{coro:prune}, we provide an example to illustrate how one may decide the pruning ratio to achieve a desired generalization error rate. \begin{example} Suppose that $f_T$ is trained with data of sample size $N$, and take $ n_{L-1} = \ldots = n_{1} = N^\alpha$ for some $\alpha >0$. Suppose that $t_{L-1} = \cdots = t_1 = t = O(N^\gamma)$ for some small $\gamma >0$, where $O$ is the standard big $O$ notation. Then, by choosing $f=f^*$ in \Autoref{eq34}, we have \begin{align*} \big\|f^* - f_{T}^{(S)}\big\|_2 &\leq \big\|f^* - f_T\big\|_2 + Cm^{\frac{1}{2}-\frac{1}{q}}(t + t^2+\ldots + t^S) \\ &= \big\|f^* - f_T\big\|_2 + O(N^{S\gamma})m^{\frac{1}{2}-\frac{1}{q}}. \end{align*} For any $0<\tau \leq \alpha(1/q-1/2)-S\gamma$, choosing $m = N^{(S\gamma + \tau)2q/(2-q)}$ yields \begin{align*} \big\|f^* - f_{T}^{(S)}\big\|_2 &\leq \big\|f^* - f_T\big\|_2 + O(N^{-\tau}), \end{align*} which guarantees an error bound of $O(N^{-\tau})$. The associated compression rate is $N^\alpha/m$.
\end{example} \textbf{Extension to classification learning.} We consider a binary classification task for simplicity. Suppose that $Y \in \{-1, 1\}$. For any function $f: \real^d \to [0, 1]$, let the classification rule be $\delta_f: x\to \sign(2f(x)-1)$, where $\sign(x) = 1$ if $x >0$ and $\sign(x)=-1$ otherwise. Suppose we use the zero-one loss function, so the classification error probability of $f$ is given by \begin{align*} R(f) = \P(\delta_f(X)\neq Y). \end{align*} Note that $f^*:x \to \E(Y\mid X=x)$ is still a minimizer of the error. The following well-known inequality connects the classification error of $\delta_f$ and the regression error of $f$~\citep{devroye2013probabilistic}. \begin{lemma} For any function $f: \real^d \to [0, 1]$, we have \begin{align*} R(f) - R(f^*) \leq 2\norm{f-f^*}_1 \leq 2\norm{f-f^*}_2. \end{align*} \end{lemma} Combining the above result with \Autoref{thm4.1}, we immediately have the following result. \begin{corollary}[Classification] For a binary classification task described above, we have \begin{align*} R(f_T^{(S)}) - R(f^*) &\leq 2\norm{f^* - f_T}_2 + Ct_{L-1}(m_{L-1})^{1/2-1/q_{L-1}}\max_{1 \leq j \leq n_{L-1}}\big\|f_j^{(L-1)}\big\|_2 \\ &\quad+ C\rho t_{L-1}t_{L-2}(m_{L-2})^{1/2-1/q_{L-2}}\max_{1 \leq j \leq n_{L-2}}\big\|f_j^{(L-2)}\big\|_2 + \ldots \\ \nonumber &\quad+ C\rho^{S-1}t_{L-1}t_{L-2}\ldots t_{L-S}(m_{L-S})^{1/2-1/q_{L-S}}\max_{1 \leq j \leq n_{L-S}}\big\|f_j^{(L-S)}\big\|_2. \end{align*} \end{corollary} \section{Adaptive pruning algorithms}\label{ch4.3.3} \subsection{Overview of the pruning procedure} Based on the developed theory in \Autoref{sec:thm}, we propose an adaptive backward pruning procedure (`ABP'). Specifically, we start from the last layer, i.e., the $L$-th layer, which consists of a single function, and approximate it by constructing a sparse linear combination of the functions in the $(L-1)$-th layer, as presented in \Autoref{eq:step1}. We then proceed to the second-to-last layer, and apply a similar procedure to the linear part of each neuron there, as presented in \Autoref{eq:step2}. We repeat approximating neurons from back to front until reaching the first layer of the network. Additionally, we choose the pruning ratio of each neuron in an adaptive manner according to its soft sparsity level. In particular, a neuron with larger soft sparsity will be pruned more. We summarize the overall pruning procedure in \Autoref{alg:m}. Since \Autoref{thm4.1} only shows the existence of a pruned model that satisfies the error bound, we need a practical algorithm to find the sparse linear approximation for each neuron. To this end, we propose two particular adaptive pruning strategies, also summarized in \Autoref{alg:mag} and \Autoref{alg:lasso} as subroutines of \Autoref{alg:m}, respectively. \subsection{Adaptive pruning strategies for each neuron} \textbf{Magnitude-based pruning (`ABP-M').} This algorithm finds a sparse linear approximation based on the magnitude of the weights. Let $I_m$ be the index set of the $m$ largest components (in absolute value) of a vector $w$. Since the energy of $w$ is concentrated in the coefficients indexed by $I_m$, a natural idea is to prune all the coefficients not in $I_m$. The question is how to decide the pruning ratio, or equivalently, $m$. To address that, we first introduce a tolerance parameter $\eta$ that satisfies $\sum_{i \notin I_m}|w_i|^q\leq \eta \sum_{i \in I_m}|w_i|^q$. We use $\eta$ to control the overall pruning degree, since a smaller $\eta$ requires a larger $m$.
For any fixed $\eta$, we propose to decide $m$ by each neuron's soft sparsity. In particular, we define the sparsity index for $w \in \real^d$ and any $0<q<1$ as \begin{align*} \textrm{SI}_q(w) = \norm{w}_1/\norm{w}_q. \end{align*} We will write $\textrm{SI}_q(w)$ as $\textrm{SI}$ for short in the rest of the paper when there is no ambiguity. We can derive the following inequality (with more details in the Appendix). \begin{align} m \geq \textrm{SI}^{-q/(1-q)}(1+\eta)^{-1/(1-q)}. \label{eq4.si} \end{align} We propose to choose $m$ as the lower bound in \eqref{eq4.si}. The corresponding algorithm is summarized in \Autoref{alg:mag}. \begin{remark} Note that $\textrm{SI} \in [d^{1-1/q}, 1]$, and a larger $\textrm{SI}$ indicates a sparser vector. Aligned with our motivation, \Autoref{eq4.si} indicates that if $w$ is soft-sparse and $\textrm{SI}$ is relatively large, then $m$ can be small and we prune more. \end{remark} Similar to \Autoref{thm4.1}, we can show an upper bound for the pruned model produced by \Autoref{alg:mag} as follows. \begin{theorem}\label{thm:mag} Suppose $f_{T}^{(S)}$ is obtained by \Autoref{alg:mag}, with $m_{k}$ defined as the smallest $m$ obtained for the $k$th layer. The other notation and conditions are the same as in \Autoref{thm4.1}. Then, we have \begin{align*} \bnorm{f - f_{T}^{(S)}}_2 \leq & \norm{f - f_T}_2 + t_{L-1}(m_{L-1})^{1-1/q_{L-1}}\max_{1 \leq j \leq n_{L-1}}\big\|f_j^{(L-1)}\big\|_2 \\ \nonumber &+ \rho t_{L-1}t_{L-2}(m_{L-2})^{1-1/q_{L-2}}\max_{1 \leq j \leq n_{L-2}}\big\|f_j^{(L-2)}\big\|_2 + \cdots \\ \nonumber &+ \rho^{S-1}t_{L-1}t_{L-2}\ldots t_{L-S}(m_{L-S})^{1-1/q_{L-S}}\max_{1 \leq j \leq n_{L-S}}\big\|f_j^{(L-S)}\big\|_2. \end{align*} \end{theorem} \begin{remark} We note that the upper bound of \Autoref{thm:mag} is looser than that of \Autoref{thm4.1}, since pruning based on magnitude is a simple but possibly crude choice of the sparse linear combination, while \Autoref{thm4.1} prunes the network based on the best sparse approximation. This also motivates us to use other strategies to find a better sparse approximation, such as the application of LASSO discussed next. \end{remark} \begin{algorithm}[tb] \caption{One-shot adaptive backward pruning procedure (`ABP')}\label{alg:m} \begin{algorithmic}[1] \Require Pre-trained network to be pruned $f_T$ \State $k=L$ \While{$k>0$} \For{$j$ in $1,\dots,n_k$} \State Get the weight vector $w_j^{(k-1)}$ for the $j$-th neuron in the $k$-th layer of $f_T$ \State Adaptively find its sparse linear approximation $\tilde{w}_j^{(k-1)}$, using, e.g., \Autoref{alg:mag} or \Autoref{alg:lasso} \EndFor \State $ k = k-1$ \EndWhile \Ensure The pruned model $f_S$ with weights $\tilde{w}_j^{(k-1)}, j=1,\ldots,n_k, k=1,\ldots,L$. \end{algorithmic} \end{algorithm} \begin{algorithm}[tb] \caption{(Subroutine of \Autoref{alg:m}, `ABP-M') Sparse approximation based on magnitude}\label{alg:mag} \begin{algorithmic}[1] \Require Weight vector $w_j^{(k-1)}$, $\eta$, $q$ \Comment{Approximate the $j$-th neuron in $k$-th layer} \State Calculate $\textrm{SI} = \bnorm{w_j^{(k-1)}}_1/\bnorm{w_j^{(k-1)}}_q$ \State Calculate $m = \textrm{SI}^{q/(q-1)}(1+\eta)^{1/(q-1)}$ \State Select $m$ indices of $w_j^{(k-1)}$ that have the largest magnitudes as $I_m=\{s_1, \dots, s_m\}$ \State Calculate $X^{(k)}_i = (f_{s_1}^{(k-1)}(X_i), \ldots, f_{s_m}^{(k-1)}(X_i))$, $Y^{(k)}_i = g_j^{(k)}(X_i)$ for $i=1,\ldots, N$ \State Perform linear regression on $X^{(k)}_i, Y^{(k)}_i, i=1,\ldots, N$.
\Ensure The retrained weights $\tilde{w}$ from the linear regression \end{algorithmic} \end{algorithm} \begin{algorithm}[tb] \caption{(Subroutine of \Autoref{alg:m}, `ABP-L') Sparse approximation using LASSO}\label{alg:lasso} \begin{algorithmic}[1] \Require Weight vector $w_j^{(k-1)}$, $\lambda$ \Comment{Approximate the $j$-th neuron in $k$-th layer} \State Calculate $X^{(k)}_i = (f_1^{(k-1)}(X_i), \ldots, f_{n_{k-1}}^{(k-1)}(X_i))$, $Y^{(k)}_i = g_j^{(k)}(X_i)$ for $i=1,\ldots, N$ \State Perform LASSO with $X^{(k)}_i, Y^{(k)}_i, i=1,\ldots, N$ and penalty parameter $\lambda$ \Ensure The retrained weights $\tilde{w}$ from LASSO \end{algorithmic} \end{algorithm} \textbf{LASSO-based pruning (`ABP-L').} As an alternative, we may find sparse linear approximations using LASSO~\citep{tibshirani1996regression}. To approximate the $j$-th neuron in the $k$-th layer using the functions in the $(k-1)$-th layer, we first obtain the input and output of this neuron using the training data. They are denoted by $X^{(k)}_i = (f_1^{(k-1)}(X_i), \ldots, f_{n_{k-1}}^{(k-1)}(X_i))$ and $Y^{(k)}_i = g_j^{(k)}(X_i)$ for $i=1,\ldots, N$, respectively. The approximation weight vector $\tilde{w}$ is obtained from applying LASSO to $X^{(k)}_i, Y^{(k)}_i, i=1,\ldots, N$ with penalty parameter $\lambda$. We note that LASSO adds an $\ell_1$ penalty on the weight vector, which encourages the learned $\tilde{w}$ to be sparse. Furthermore, an essentially sparser neuron (larger soft sparsity) leads to a sparser $\tilde{w}$. This algorithm is summarized in \Autoref{alg:lasso}. \section{Experiments} \label{sec:exp} We compare our proposed pruning procedure using magnitude-based approximation \Autoref{alg:mag} (`ABP-M') and LASSO-based \Autoref{alg:lasso} (`ABP-L') with a standard pruning algorithm (`Mag'), which prunes a fixed proportion $p$ of weights with the smallest magnitude for each neuron. We train a four-layer fully connected ReLU neural network $f_T$ on the California Housing dataset~\citep{pace1997sparse}, which is a regression task with eight continuous predictors and about 20 thousand instances. The evaluation criterion is mean squared error (MSE). For `ABP-M', we choose hyper-parameters $q=0.3,0.5,0.7$ and $\eta=0, 0.1, 0.2, 0.3$. For `ABP-L', we use penalty parameter $\lambda=10^{-5}, 10^{-4}, 10^{-3}$. For `Mag', we choose pruning ratio $p=0.3, 0.5, 0.7$. Each time, we prune $f_T$ using all methods under different settings, and then evaluate the compression ratio, pruning ratio (both defined in \Autoref{sec:form}), and the MSE increase ratio, which is the increase of MSE (the MSE difference between the pruned and the original networks) divided by the MSE of $f_T$. The procedure is replicated $20$ times, and the results are summarized in \Autoref{fig:pr_ac}. \begin{table}[tb] \centering \caption{Mean compression ratio, pruning ratio, and MSE increase ratio for three methods with different hyper-parameters.
The standard errors are reported in parentheses.} \label{fig:pr_ac} \begin{tabular}{lccc} \toprule Method & Compression Ratio & Pruning Ratio & MSE Increase Ratio\\ \midrule ABP-L ($\lambda=10^{-3}$) & 12.97 (0.22) & 0.92 (0.00) & 0.30 (0.02) \\ ABP-L ($\lambda=10^{-4}$)& 4.68 (0.07) & 0.79 (0.00) & 0.05 (0.00) \\ ABP-L ($\lambda=10^{-5}$)& 2.33 (0.04) & 0.57 (0.01) & 0.03 (0.00) \\ \midrule ABP-M ($\eta=0$, $q=0.3$)& 1.52 (0.01) & 0.34 (0.00) & 0.01 (0.00) \\ ABP-M ($\eta=0$, $q=0.5$)& 1.80 (0.02) & 0.44 (0.01) & 0.01 (0.00) \\ ABP-M ($\eta=0$, $q=0.7$)& 2.10 (0.02) & 0.52 (0.01) & 0.02 (0.00) \\ ABP-M ($\eta=0.1$, $q=0.5$)& 2.46 (0.03) & 0.59 (0.01) & 0.04 (0.01) \\ ABP-M ($\eta=0.2$, $q=0.5$)& 3.12 (0.06) & 0.68 (0.01) & 0.17 (0.06) \\ ABP-M ($\eta=0.3$, $q=0.5$)& 3.88 (0.08) & 0.74 (0.01) & 0.31 (0.09) \\ \midrule Mag ($p=0.3$)& 1.55 (0.00) & 0.35 (0.00) & 0.07 (0.01) \\ Mag ($p=0.5$)& 2.32 (0.01) & 0.57 (0.00) & 0.41 (0.06) \\ Mag ($p=0.7$)& 4.71 (0.10) & 0.79 (0.00) & 0.85 (0.10) \\ \bottomrule \end{tabular} \end{table} From \Autoref{fig:pr_ac}, both `ABP-M'~and `ABP-L'~have a significantly smaller accuracy degradation ratio compared to `Mag' when the compression ratio is similar, which supports our intuition that an adaptive pruning scheme is more efficient than pruning a fixed portion of weights. Furthermore, `ABP-L'~outperforms `ABP-M'~in general. For `ABP-M', we note that the sparsity index-inspired pruning in~\Autoref{eq4.si} with $\eta=0$ (or small) works very well for preserving accuracy, although it may be conservative in terms of the pruning ratio. Regarding the choice of $q$, a large $q$ tends to increase the pruning ratio, but as long as $\eta$ is chosen appropriately, the result is relatively insensitive to $q$. As for `ABP-L', a larger penalty parameter $\lambda$ leads to a larger pruning ratio. Since both methods are one-time pruning, we suggest selecting hyper-parameters through cross-validation to balance deep pruning and accuracy protection. The code is included in the Appendix. \section{Conclusion}\label{sec:con} This paper provides a theory that characterizes the compressibility of a neural network in terms of the $\ell_q$-norm of its weights. The $\ell_q$-norm, or soft sparsity, can be used to compare the compressibility of different models with the same structure. Furthermore, it reveals the relationship between the degree of compression and accuracy degradation, which guides us in selecting an appropriate pruning ratio. The theory also motivates a new pruning scheme by finding a sparse linear approximation of neurons in a backward manner. The developed algorithms produce pruned models with significantly better performance than some standard pruning algorithms. There are some limitations of the current study that we leave for future work. First, our pruning procedure is one-shot pruning, so we may rely on cross-validation to select the hyper-parameters for optimal performance. It will be interesting to study the stopping criterion and develop an iterative pruning algorithm that can stop intelligently with a maximal pruning ratio and little accuracy drop. Second, how to fairly compare the compressibility between two networks with different structures remains a challenge. Third, we focused exclusively on fully connected feed-forward neural networks. Generalizations of our results to other networks are of interest. \section*{Acknowledgements} This paper is based upon work supported by the Office of Naval Research under grant number N00014-21-1-2590. \section*{Appendices}
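For concreteness we sketch below a minimal Python implementation of the two neuron-level subroutines (\Autoref{alg:mag} and \Autoref{alg:lasso}). This is a simplified illustration of our own, with hypothetical function and variable names, rather than the exact code used for the experiments.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

def abp_m(w, X_prev, y, q=0.5, eta=0.1):
    # ABP-M (Algorithm 2): magnitude-based sparse approximation of one neuron.
    # w: (d,) pre-trained weight vector (assumed not identically zero);
    # X_prev: (N, d) previous-layer outputs; y: (N,) linear part of the neuron.
    w = np.asarray(w, dtype=float)
    si = np.abs(w).sum() / np.sum(np.abs(w) ** q) ** (1.0 / q)  # SI = ||w||_1/||w||_q
    # lower bound from the SI inequality, rounded up to an integer
    m = int(np.ceil(si ** (q / (q - 1.0)) * (1.0 + eta) ** (1.0 / (q - 1.0))))
    m = min(max(m, 1), w.size)
    idx = np.argsort(-np.abs(w))[:m]   # indices of the m largest magnitudes
    w_new = np.zeros_like(w)
    sol, _, _, _ = np.linalg.lstsq(X_prev[:, idx], y, rcond=None)
    w_new[idx] = sol                    # retrained retained weights
    return w_new

def abp_l(X_prev, y, lam=1e-4):
    # ABP-L (Algorithm 3): LASSO-based sparse approximation of one neuron.
    # Note: sklearn's alpha scaling may differ from the paper's lambda.
    reg = Lasso(alpha=lam, fit_intercept=False).fit(X_prev, y)
    return reg.coef_                    # sparse retrained weight vector
\end{verbatim}
Looping these subroutines over all neurons from the last layer backwards, as in \Autoref{alg:m}, yields the pruned model.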
\section{Introduction} \label{s:intro} Longitudinal image data based on fluorescent proteins play a crucial role for both in vivo and in vitro analysis of various biological processes such as gene expression and cell lineage fate. Assessing the growth patterns of different cell types within a heterogeneous population and monitoring their interactions enables biomedical researchers to determine the role of different cell types in important biological processes such as organ development and regeneration, malignant growth or immune responses under various experimental conditions. For example, tumor progression has been shown to be affected by bidirectional interactions among cancer cells or between cancer cells and cells from the microenvironment, including tumor-infiltrating immune cells \cite{medema2011microenvironmental}. Being able to study these interactions in a laboratory setting is therefore highly relevant, but is complicated by the difficulty of dissecting the effect of the different cell types as soon as the number of cell types exceeds two. In the present study we used longitudinal image data collected from multicolor live-cell imaging growth experiments of co-cultures of cancer cells and fibroblasts (a key cell type in the tumor microenvironment) as well as behaviourally distinct (cloned) cancer cells. Using a high-content imaging system, we were able to acquire characteristics for each individual cell at subsequent times, including fluorescent properties, spatial coordinates, and morphological features. The motivation of this work was to design a model allowing the determination of spatio-temporal growth interactions between these multiple cell populations. In longitudinal growth experiments, the two important goals are to determine growth rates for different cell populations and to assess how interactions between cell types may affect their growth. Whilst a wide range of descriptive data analysis approaches have been used in applications, inference based on a comprehensive model of multicolor cell data is an open research area. The main challenges are related to the presence of complicated spatio-temporal interactions amongst cells and difficulties related to tracking individual cells across time from image data. Typical longitudinal experiments consist of a relatively small number of measurements (e.g. 5 to 20 images taken every few hours), which is adequate for monitoring cell growth. Tracking individual cells would typically require more frequent measurements, complicating the practicality of the experiments in terms of the storage cost of very large image files and the cytotoxicity induced by the imaging process. Although tracking individual cell trajectories is difficult due to cell migration, overlapping cells, changes in cell morphology, image artifacts, cell death and division, obtaining cell counts by cell type (represented by a certain color) is straightforward and can be easily automated. To describe the spatial distribution for different cell types, we propose to divide an image into a number of contiguous regions (tiles) to form a regular lattice structure as shown in Figure \ref{fig:raw} (a). We then record the frequency of cells of different colors in each tile at subsequent time points, based on which we model the spatial and temporal dependencies of the cell growth.
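To illustrate the tiling step just described, a minimal Python sketch (our own illustration; function and variable names are hypothetical and not part of our actual processing pipeline) that turns per-cell coordinates and color labels from one image into a lattice of per-color counts could look as follows.
\begin{verbatim}
import numpy as np

def tile_counts(coords, colors, n_tiles, xlim, ylim):
    # coords: (n_cells, 2) array of (x, y) cell positions from one image
    # colors: (n_cells,) integer color labels, e.g. 0=red, 1=green, 2=unlabeled
    # returns an array of shape (n_colors, n_tiles, n_tiles) of cell counts
    x_edges = np.linspace(xlim[0], xlim[1], n_tiles + 1)
    y_edges = np.linspace(ylim[0], ylim[1], n_tiles + 1)
    n_colors = int(colors.max()) + 1
    counts = np.empty((n_colors, n_tiles, n_tiles))
    for c in range(n_colors):
        pts = coords[colors == c]
        counts[c], _, _ = np.histogram2d(pts[:, 0], pts[:, 1],
                                         bins=[x_edges, y_edges])
    return counts
\end{verbatim}
Applying such a routine to each image in the time series produces the lattice-indexed multivariate count data that our model takes as input.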
\begin{figure} \includegraphics[scale = 0.9]{Fig1.eps} \caption{(a) Microscope images for the cancer cell growth data obtained from a high-content imager (Operetta, Perkin Elmer) at the initial and final time points of the experiment. In each image, colors for non-fluorescent fibroblasts, as well as red and green fluorescent cancer cells are merged. (b) Illustration of the local structure for the model in (\ref{intensity1}). The two planes correspond to $3 \times 3$ tiles at times $t$ and $t+1$. The average number of cells of color $c$ in a given tile at time $t+1$ is assumed to depend on the number of cells of other colors in contiguous neighboring tiles at time $t$.} \label{fig:raw} \end{figure} To model spatio-temporal data, one could choose to approximate the spatio-temporal process by a spatial process of time series, that is, to view the process as a multivariate spatial process where the multivariate dependencies are inherited from temporal dependencies. In other words, it can be seen as a temporal extension of spatial processes. The most popular way of developing a spatial process is through the conditionally auto-regressive (CAR) model proposed by \cite{besag1974spatial}. \cite{waller1997hierarchical} extend the CAR model into a spatio-temporal setting by allowing spatial effects to vary across time. However, the model lacks a specification of temporal dependency, as also noted by \cite{knorr1999bayesian}. More recently, \cite{quick2017multivariate} proposed a multivariate space-time CAR (MSTCAR) model, which is essentially a multivariate CAR model, where both temporal and between-group dependencies are modelled as multivariate dependencies. Other works related to spatial processes of time series include \cite{sans2008bayesian} and \cite{quick2016hierarchical}. Alternatively, one can also think of the process as a time series of spatial processes, or a spatial extension of time series. This is the approach we take in our spatio-temporal modelling. The underlying notion is that ``the temporal dependence is more natural to model than the spatial dependence" \citep{cressie2015statistics}. Following \cite{cox1981statistical}, it is useful to distinguish two modelling approaches for the analysis of time series data commonly seen in the spatio-temporal modelling literature: the parameter-driven and the observation-driven model. In a parameter-driven model, the dependence between subsequent observations is modelled by a latent stochastic process, which evolves independently of the past history of the observation process. In contrast, in an observation-driven model, time dependence arises because the conditional expectation of the outcome given the past depends explicitly on the past values. For multivariate count data, the advantage of parameter-driven models is that one can easily assume that the conditional expectation of the observed process (on the log-scale), as a latent process, is (multivariate) normal. 
There are extensive works related to latent spatio-temporal models under the Bayesian framework, including models with Gaussian data modelled by a (multivariate) Gaussian process with an additive error (\cite{wikle1998hierarchical}, \cite{shaddick2002modelling}, \cite{bradley2015multivariate} and \cite{bradley2016multivariate}), Poisson data with conditional expectation modelled by a Gaussian latent process (\cite{mugglin2002hierarchical}, \cite{holan2015hierarchical} and Chapter 7 of \cite{cressie2015statistics}), and Poisson data with a multivariate log-gamma latent process \citep{bradley2017computationally}. However, estimation of parameters in parameter-driven models requires considerable computational effort, as does prediction of the latent process. On the other hand, in observation-driven models, inference is possible in a (penalized) maximum likelihood framework, and such models can therefore be easily fitted even for quite complex regression specifications \citep{davis2003observation}. \cite{schrodle2012assessing} proposed a parameter-driven spatio-temporal model and compared it with a similar observation-driven model proposed by \cite{paul2008multivariate}. They conclude that parameter-driven models perform slightly better in terms of prediction in some cases; however, while the computation time for the observation-driven model is mostly less than a second, fitting a parameter-driven model takes several hours, if it converges at all, because of the complexity of the latent autoregressive process. Moreover, their model contains only five parameters, while in our application the number of parameters of interest grows quadratically with the number of cell populations, which makes parameter-driven models intractable even with a moderate number of cell populations. Therefore, we choose to work with a spatial extension of observation-driven time series. \cite{zeger1988markov} review various observation-driven time series models with quasi-likelihood estimation. \cite{fokianos2011log} develop and study the probabilistic properties of a log-linear autoregressive time series model for Poisson data, as an extension of the model considered by \cite{fokianos2009poisson}. See \cite{dunsmuir2015glarma} and \cite{kedem2005regression} for a complete review. Literature about observation-driven spatio-temporal models, however, is relatively sparse. \cite{held2005statistical} propose a multivariate time series model where parameters are allowed to vary across space. \cite{paul2008multivariate} extended the model such that spatial dependencies are captured by additional parameters that quantify the ``directed influence" of neighbouring areas at previous time points on the observation of interest. \cite{paul2011predictive} further extend the model by introducing random effects. Note that these approaches model directly the conditional expectation of the count data, meaning they use an identity link function instead of the canonical log-link; thus, it is required that the parameters are positive to ensure that the resulting conditional expectation is positive. \cite{knorr2003hierarchical} propose a space-time model for surveillance data; apart from separate seasonal and spatial components, they include an autoregressive term with a latent indicator. In this paper, we develop a conditional spatio-temporal model for multivariate count data on tiled images, and demonstrate its application in the context of longitudinal cancer cell monitoring experiments. 
Our model enables us to measure the growth rate of each cell population as well as changes due to local cross-population interactions. Specifically, we consider a multivariate Poisson model with intensity modeled in a log-linear form similar to those in \cite{knorr2003hierarchical} and \cite{fokianos2011log}, and we quantify spatio-temporal impacts of different cell populations in neighboring tiles through model parameters, as illustrated in Figure \ref{fig:raw} (b). Impacts are allowed to be positive or negative, and unlike those models that describe between-group dependence through a covariance matrix, influences do not have to be symmetrical in our model. Another main advantage of the proposed framework is that it enables one to accommodate spatio-temporal cell interactions for heterogeneous cell populations within a relatively parsimonious statistical model. Since the model complexity can be potentially very large in the presence of many cell types, it is also important to address the question of how to select an appropriate model by retaining only the meaningful spatio-temporal interactions between cell populations. We carry out model selection using common model selection criteria for parametric models, the Akaike and the Bayesian information criteria (AIC and BIC). The remainder of the paper is organized as follows. In Section \ref{sec:methods}, we introduce the conditional spatio-temporal lattice model for multivariate count data and develop maximum likelihood inference tools. In the same section, we discuss the asymptotic properties of our estimator and standard errors. In Section \ref{sec:montecarlo}, we study the performance of the new estimator using simulated data. In Section \ref{sec:realdata}, we apply our method to analyze datasets from two in-vitro experiments: one where cancer cells are co-cultured with fibroblasts, and one where individually recognisable cloned cancer cell populations are cultured together in different combinations. In Section \ref{sec:conclusion}, we conclude and give final remarks. \section{Methods}\label{sec:methods} \subsection{Multicolor spatial autoregressive model on the lattice} \label{sec:model} Let $\mathcal{L} \subset \mathbb{N}^2$ be a discrete lattice. In the context of our application, the lattice is obtained by tiling a microscope image into $n_\mathcal{L}$ tiles, denoted by $\mathcal{L}_n (\subset \mathcal{L})$. The total number of tiles $n_\mathcal{L}$ is a monotonically increasing function of $n.$ One can choose various forms of lattice, for example regular or hexagonal lattices. For simplicity, we tile the image into $n \times n$ regular rectangular tiles, which makes $n_{\mathcal{L}} =n^2.$ An example of a tiled image with $n=10$ is shown in Figure \ref{fig:raw} (a). Denote a pair of neighbouring tiles $\{i, j\}$ by $i \sim j$, if tiles $i$ and $j$ share the same border or coincide ($i=j$). Each tile may contain cells of different colors; thus, we let $\mathcal{C}=\{ 1, \dots, n_{\Ccal } \}$ be a finite set of colors and denote by $n_{\Ccal } $ the total number of colors. Let ${\pmb{Y}} = \{ {\pmb Y}_{t}, t = 1, \dots,T\}$ be the sample of observations, where ${\pmb{Y}}_t = \{ {\pmb{Y}}_t^{(c)}, c \in \mathcal{C} \}$ is the collection of observations at time point $t$, and ${\pmb{Y}}^{(c)}_{t}=(Y_{1,t}^{(c)}, \dots, Y_{n_\mathcal{L},t}^{(c)})^\top$ is the vector of observed frequencies for color $c$ on the lattice $\mathcal{L}_n$ at time $t$. 
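To make the neighbourhood relation $i \sim j$ just defined concrete, the following is a minimal sketch for a regular $n \times n$ grid (row-major tile indexing is a convention of this illustration):

\begin{verbatim}
def neighbour_sets(n):
    # For each tile i of an n x n grid (row-major indexing), return the
    # tiles j with i ~ j: tiles sharing a border with i, plus i itself.
    nbrs = []
    for r in range(n):
        for c in range(n):
            cand = [(r, c), (r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            nbrs.append([rr * n + cc for rr, cc in cand
                         if 0 <= rr < n and 0 <= cc < n])
    return nbrs

nbrs = neighbour_sets(3)
print(len(nbrs[4]))  # interior tile: 5 (itself plus 4 neighbours)
print(len(nbrs[0]))  # corner tile: 3
\end{verbatim}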
The joint distribution for the spatio-temporal process on the lattice is difficult to specify, due to local spatial interactions for neighboring tiles and global interactions occurring at the level of the entire image. An additional issue is that cells tend to be clustered together due to the cell division process and other biological mechanisms; thus it is not uncommon to observe low counts in a considerable portion of tiles. In typical longitudinal experiments, the number of time points seldom goes beyond $50$ due to experimental, storage, and processing costs, while $n_\mathcal{L}$ can be relatively large. We therefore work under the framework where $T$ is assumed to be finite, while $n_\mathcal{L}$ is allowed to grow to infinity. We suppose that the count for the $i$th tile $Y^{(c)}_{i,t}$ follows a marginal Poisson distribution $Y^{(c)}_{i,t}|{\pmb{Y}}_{t-1} \sim \text{Pois}(\lambda^{(c)}_{i,t})$, with intensity modeled by the canonical log-link $v^{(c)}_{i,t} =\log \lambda_{i,t}^{(c)} $, where $v^{(c)}_{i,t}$ takes the following spatial autoregressive form: \begin{align} v^{(c)}_{i,t} & = \alpha^{(c)} + \sum_{c' \in \mathcal{C}} \beta^{(c|c')} S_{i,t-1}^{(c')} , \label{intensity1} \\ S_{i,t-1}^{(c')} & = \dfrac{1}{n_i} \sum_{\substack{i\sim j: \ j\in \mathcal{L}_n}} \log\left( 1+ Y_{j,t-1}^{(c')} \right), \label{intensity2} \end{align} for all $c \in \mathcal{C}, t = 1, \dots, T$, with $n_i = \# \{ j: i \sim j, j\in \mathcal{L}_n\}$ being the number of tiles in the neighbourhood of tile $i$. Although we adopt a regular grid for simplicity, the model is readily applicable to other tiling strategies; changing the tiling strategy would only change the realisations of $S_{i,t-1}^{(c')}$ in (\ref{intensity2}). Here, we assume that the counts for different tiles at time $t$ are conditionally independent given the information from $t-1$, i.e. $$ P\big(Y^{(c)}_{i,t}, Y^{(c')}_{j,t}| {\pmb{Y}}_{t-1} \big) = P\big(Y^{(c)}_{i,t}| {\pmb{Y}}_{t-1} \big) P\big(Y^{(c')}_{j,t}| {\pmb{Y}}_{t-1} \big), $$ for all $c,c' \in \mathcal{C}, t = 1, \dots, T,$ and $ i,j \in \mathcal{L}_n, i \neq j.$ This does not suggest that $Y^{(c)}_{i,t}$ and $Y^{(c')}_{j,t}$ are independent, but rather that their spatio-temporal dependence is due to the structure of the intensity $\lambda_{i,t}^{(c)}$ in (\ref{intensity1}). Conditional independence is a commonly used assumption for spatio-temporal models in a non-Gaussian setting \cite{waller1997hierarchical,wikle2003climatological}, since it is exceedingly difficult to work with multivariate non-Gaussian distributions \cite{cressie2015statistics}. The elements of the parameter vector ${\pmb{\alpha}} = ( \alpha^{(1)},\dots, \alpha^{(n_{\Ccal } )})^\top$ are main effects corresponding to a baseline average count for cells of different colors. The spatio-temporal interactions are measured by the statistic $S_{i,t-1}^{(c')}$ in (\ref{intensity2}), which essentially counts the number of cells of color $c'$ in the neighborhood of tile $i$ at time $t-1$. Hence, the autoregressive parameter $\beta^{(c|c')}$ is interpreted as a positive or negative change in the average number of cells of color $c$ due to interactions with cells of color $c'$ in neighbouring tiles. A positive (or negative) sign of $\beta^{(c|c')}$ means that the presence of cells of color $c'$ in neighboring tiles promotes (or inhibits) the growth of cells of color $c$. 
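The statistics $S_{i,t-1}^{(c')}$ and the intensities in (\ref{intensity1})--(\ref{intensity2}) are straightforward to compute; on a regular grid, the neighbourhood sums can be obtained by convolving with a cross-shaped kernel, which avoids explicit neighbour lists. Below is a minimal sketch of one transition of the model (the parameter values are arbitrary choices of this illustration):

\begin{verbatim}
import numpy as np
from scipy.signal import convolve2d

CROSS = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]])  # a tile plus its four border-sharing tiles

def S_stat(Y_prev):
    # Average of log(1 + count) over each tile's neighbourhood;
    # Y_prev has shape (n_colors, n, n) and holds counts at time t-1.
    n_i = convolve2d(np.ones(Y_prev.shape[1:]), CROSS, mode="same")
    return np.stack([convolve2d(np.log1p(g), CROSS, mode="same") / n_i
                     for g in Y_prev])

def intensity(alpha, B, S):
    # lambda^{(c)} = exp(alpha^{(c)} + sum_{c'} beta^{(c|c')} S^{(c')}).
    return np.exp(alpha[:, None, None] + np.tensordot(B, S, axes=1))

# One transition on a 10 x 10 grid with three colors.
rng = np.random.default_rng(2)
Y_prev = rng.poisson(1.0, size=(3, 10, 10))
alpha, B = np.full(3, -0.1), np.full((3, 3), 0.1)
Y_t = rng.poisson(intensity(alpha, B, S_stat(Y_prev)))
\end{verbatim}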
The spatio-temporal effects $\beta^{(c|c')}$, $c,c' \in \mathcal{C}$, are collected in the $n_{\Ccal } \times n_{\Ccal } $ weighted incidence matrix ${\pmb{\mathcal{B}}}$. This may be used to generate weighted directed graphs, as shown in the example of Figure \ref{f:realdata2}, where the nodes of the directed graph correspond to cell types, and the directed edges are negative or positive spatio-temporal interactions between cell types. Equation (\ref{intensity1}) could be extended to a more specific form, for example, $v^{(c)}_{i,t} = \alpha^{(c)} + \sum_{c' \in \mathcal{C}} \beta_1^{(c|c')} S_{i,t-1}^{(c')} + \beta_0^{(c|c')} \log\left( 1+ Y_{i,t-1}^{(c')} \right)$, where $\beta_1^{(c|c')}$ is interpreted as the effect that cells of color $c'$ in neighbouring (but not the same) tiles have on the growth of cells of color $c$, while $\beta_0^{(c|c')}$ captures the effect of cells of color $c'$ in the same tile. However, we stick to the model in (\ref{intensity1}) because we have no evidence showing that the more complex model is advantageous from a model selection viewpoint. We choose to work with a log-linear form for the autoregressive equation of $v^{(c)}_{i,t}$ in Equation (\ref{intensity1}), where we apply a logarithmic transform and add $1$ to the counts at time $t-1$, $Y^{(c)}_{i,t-1}$. This form offers several advantages compared to the more commonly used linear form. First, $\lambda^{(c)}_{i,t}$ and $Y^{(c)}_{i,t-1}$ are transformed to the same scale. Moreover, this model can accommodate both positive and negative correlations, while it is not possible to account for positive association in a stationary model if past counts are directly included as explanatory variables. For example, with the model $v_{i,t} = \alpha + \beta Y_{i,t-1}$ for a single color, the intensity would be $\lambda_{i,t} = \exp\left(\alpha \right) \exp\left(\beta Y_{i,t-1} \right),$ which may lead to instability of the Poisson means if $\beta>0$, since $\lambda_{i,t}$ is allowed to increase exponentially fast. Finally, adding $1$ to $Y^{(c)}_{i,t-1}$ copes with zero counts, which arise often, since $\log(Y^{(c)}_{i,t-1})$ is not defined when $Y^{(c)}_{i,t-1} = 0$; moreover, it maps zeros of $Y^{(c)}_{i,t-1}$ into zeros of $\log(1+ Y^{(c)}_{i,t-1})$. \subsection{Likelihood inference} \label{sec:clinference} Let ${\pmb{\theta}} $ be the overall parameter vector ${\pmb{\theta}} = ({\pmb{\alpha}} ^\top, \text{vec}({\pmb{ \mathcal{B}}})^\top )^\top \in \mathbb{R}^{p}$, where ${\pmb{\alpha}} $ is the $n_\mathcal{C}$-dimensional vector defined in Section \ref{sec:model}, ${\pmb{\mathcal{B}}}$ is the $n_\mathcal{C} \times n_\mathcal{C}$ matrix of colour interaction effects, and $p = n_{\Ccal } (1+n_{\Ccal } )$ is the total number of parameters. In this section, we develop a maximum likelihood estimator for our model, based on the likelihood \begin{equation}\label{CL} L_n({\pmb{\theta}} ) = \prod_{t=1}^T \prod_{c \in\mathcal{C}} \prod_{i \in \mathcal{L}_n} P(Y_{i,t}^{(c)} |{\pmb{Y}} _{t-1}; {\pmb{\theta}} ) = \prod_{t=1}^T \prod_{c \in \mathcal{C}} \prod_{i \in \mathcal{L}_n} \Bigg( e^{-\lambda_{i,t}^{(c)}({\pmb{\theta}} )} \dfrac{{\lambda_{i,t}^{(c)}({\pmb{\theta}} )}^{y_{i,t}^{(c)}}}{y_{i,t}^{(c)}!} \Bigg), \end{equation} where $\lambda_{i,t}^{(c)}({\pmb{\theta}} )$ is the expected number of cells with color $c$ in tile $i$ at time $t$, defined in (\ref{intensity1}). 
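Since $L_n({\pmb{\theta}})$ factorizes over colors, the parameters $(\alpha^{(c)}, \beta^{(c|1)}, \dots, \beta^{(c|n_\mathcal{C})})$ for each color $c$ can be estimated by a separate Poisson regression with log link. Below is a minimal sketch using a standard GLM routine (an illustration under these assumptions, not the implementation used for the experiments in this paper); the iteratively reweighted least squares used by the GLM routine coincides with the Fisher scoring iteration discussed next:

\begin{verbatim}
import numpy as np
import statsmodels.api as sm

def fit_one_color(y_c, S):
    # y_c: counts Y_{i,t}^{(c)} stacked over tiles i and times t = 1..T;
    # S:   matching rows (S_{i,t-1}^{(1)}, ..., S_{i,t-1}^{(C)}).
    X = sm.add_constant(np.asarray(S))   # first column gives alpha^{(c)}
    res = sm.GLM(np.asarray(y_c), X, family=sm.families.Poisson()).fit()
    return res.params, res.bse           # estimates and standard errors
\end{verbatim}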
The maximum likelihood estimator (MLE), $\hat{{\pmb{\theta}} }$, is obtained by maximizing the log-likelihood function \begin{equation}\label{cl} \ell_n({\pmb{\theta}} ) = \sum_{i \in \mathcal{L}_n} \sum_{t = 1}^T \sum_{c \in \mathcal{C}} \left[ Y_{i,t}^{(c)} v^{(c)}_{i,t}({\pmb{\theta}} ) - \exp \left\{ v^{(c)}_{i,t}({\pmb{\theta}} ) \right\} \right], \end{equation} where $v_{i,t}^{(c)}({\pmb{\theta}} ) \equiv \log \lambda_{i,t}^{(c)}({\pmb{\theta}} )$. Equivalently, $\hat{{\pmb{\theta}} }$ is found by solving the estimating equations \begin{equation}\label{score} 0 = {\pmb{u}}_n({\pmb{\theta}} ) \equiv \dfrac{1}{n_\mathcal{L}}\nabla \ell_n({\pmb{\theta}} ) = \dfrac{1}{n_\mathcal{L}} \sum_{i \in \mathcal{L}_n} \sum_{t=1}^T {\pmb{\gamma}}_{i,t}({\pmb{\theta}} ) \otimes \nabla {\pmb{v}}_{i,t}, \end{equation} where $ {\pmb{\gamma}}_{i,t}({\pmb{\theta}} ) = \left( y^{(1)}_{i,t} - \exp \left\{ v^{(1)}_{i,t}({\pmb{\theta}} ) \right\}, \dots,y^{(n_\mathcal{C})}_{i,t} - \exp \left\{ v^{(n_\mathcal{C})}_{i,t}({\pmb{\theta}} ) \right\} \right)$, $\otimes$ denotes the Kronecker product, $\nabla$ is the gradient operator with respect to ${\pmb{\theta}} $, and $\nabla {\pmb{v}}_{i,t} \equiv \nabla {\pmb{v}} ^{(c)}_{i,t}({\pmb{\theta}} ) = ( 1, S_{i,t-1}^{(1)}, \dots, S_{i,t-1}^{(n_{\Ccal } )})^\top$. Our empirical results show that this estimator performs reasonably well in terms of estimation accuracy in all our numerical examples, and it attains the optimal variance under correct model specification. The solution to Equation (\ref{score}) is obtained by a standard Fisher scoring algorithm, which we found to be stable and fast to converge in all our numerical examples. Finally, in practical applications it is also important to address the question of how to select an appropriate model by retaining only the meaningful spatio-temporal interactions between cell populations, and to avoid over-parametrized models. Model selection plays an important role by balancing goodness-of-fit and model complexity. Here, we select non-zero model parameters based on traditional model selection approaches: the Akaike information criterion, $AIC = -2\ell_n(\hat{{\pmb{\theta}} }) + 2p$, and the Bayesian information criterion, $BIC = -2\ell_n(\hat{{\pmb{\theta}} }) + p\log(n_\mathcal{L} T)$. \subsection{Asymptotic properties and standard errors} \label{sec:se} In this section, we give an overview of the asymptotic behavior of the estimator introduced in Section \ref{sec:clinference}. In our setting we consider a fixed number of time points, $T$, whilst the lattice $\mathcal{L}_n$ is allowed to increase. This reflects the notion that the statistician is allowed to choose an increasingly fine tiling grid as the number of cells increases. If the regularity conditions stated in the Appendix hold, then $\sqrt{n_\mathcal{L}}{\pmb{ H}}_n({\pmb{\theta}} _0)^{1/2}(\hat{{\pmb{\theta}} }_n - {\pmb{\theta}} _0)$ converges in distribution to a $p$-variate normal distribution with zero mean vector and identity variance, as $n_\mathcal{L} \rightarrow \infty$, with ${\pmb{H}} _n({\pmb{\theta}} )$ given in (\ref{eq:H}). Asymptotic normality of $\hat{{\pmb{\theta}} }_n$ follows by applying the limit theorems for M-estimators for nonlinear spatial models developed by \cite{jenish2009central}. 
One condition required to ensure this behavior is that ${\pmb{Y}}_0$ has constant entries at the initial time point, which is quite realistic since typically cells are seeded randomly at the beginning of the experiment. Our proofs mostly check $\alpha$-mixing conditions and $\mathcal{L}_2$-uniform integrability of the score functions ${\pmb{u}}_{i,t}({\pmb{\theta}})$, which ensure a pointwise law of large numbers; combined with stochastic equicontinuity, this yields the uniform law of large numbers required by \cite{jenish2009central}. The asymptotic variance of $\hat {\pmb{\theta}} $ is ${\pmb{V}} _n(\hat{{\pmb{\theta}} })= {\pmb{H}} _n^{-1}({\pmb{\theta}} _0)$, where ${\pmb{H}} _n({\pmb{\theta}} )$ is the $p \times p$ Hessian matrix \begin{align} \label{eq:H} {\pmb{H}} _n({\pmb{\theta}} ) = - E \left[ \nabla^2 \ell_n({\pmb{\theta}} )\right] = - E \left( \sum_{i \in \mathcal{L}_n} \nabla {\pmb{u}} _i({\pmb{\theta}} ) \right), \end{align} with $ {\pmb{u}} _i({\pmb{\theta}} ) = {\pmb{u}}_{i,1}({\pmb{\theta}} ) + \cdots + {\pmb{u}}_{i,T}({\pmb{\theta}} )$ being the partial score function for the $i$th tile. Direct evaluation of ${\pmb{H}}_n({\pmb{\theta}})$ may be challenging since the expectations in (\ref{eq:H}) are intractable. Thus, we estimate ${\pmb{H}}_n({\pmb{\theta}})$ by the empirical counterpart $$ \hat{{\pmb{H}}}_n({\pmb{\theta}}) = \begin{pmatrix} \hat{{\pmb{H}}}^{(1)}({\pmb{\theta}}) & {\pmb{0}} & \cdots & {\pmb{0}} \\ {\pmb{0}} & \hat{{\pmb{H}}}^{(2)}({\pmb{\theta}}) & \cdots & {\pmb{0}} \\ \vdots & \vdots & \ddots & \vdots \\ {\pmb{0}} & {\pmb{0}} & \cdots & \hat{{\pmb{H}}}^{(n_\mathcal{C})}({\pmb{\theta}}) \end{pmatrix}, $$ where \begin{align}\label{hatH} \hat{{\pmb{H}}}^{(c)}({\pmb{\theta}}) & = \sum_{i \in \mathcal{L}_n} \sum_{t=1}^T \exp\left[v_{i,t}^{(c)}({\pmb{\theta}})\right] \left[ \nabla {\pmb{v}}_{i,t}\right]\left[ \nabla {\pmb{v}}_{i,t}\right]^\top. \end{align} Note that the above estimators approximate the quantities in formula (\ref{eq:H}) by conditional expectations. Our numerical results suggest that the above variance approximation yields confidence intervals with coverage very close to the nominal level $(1-\alpha)$. Besides the above formulas, we also consider confidence intervals obtained by a parametric bootstrap approach. Specifically, we generate $B$ bootstrap samples ${\pmb{Y}}^*_{(1)}, \dots, {\pmb{Y}}^*_{(B)} $ by sampling at subsequent times from the conditional model specified in Equations (\ref{intensity1}) and (\ref{intensity2}) with ${\pmb{\theta}} = \hat{{\pmb{\theta}}}$. From such bootstrap samples, we obtain bootstrapped estimators, $\hat{{\pmb{\theta}}}^*_{(1)}, \dots, \hat{{\pmb{\theta}}}^*_{(B)}$, which are used to estimate $var(\hat {\pmb{\theta}})$ by the usual covariance estimator $\hat{{\pmb{V}}}_{boot} (\hat{{\pmb{\theta}}}) = \sum_{b=1}^B (\hat{{\pmb{\theta}}}^*_{(b)} - {\overline{{\pmb{\theta}}} }^* )(\hat{{\pmb{\theta}}}^*_{(b)} - {\overline{{\pmb{\theta}}} }^* )^\top/ (B-1)$, where ${\overline{{\pmb{\theta}}} }^* =\sum_{b=1}^B\hat{{\pmb{\theta}}}^*_{(b)}/B $. Finally, a $(1-\alpha) 100 \%$ confidence interval for ${\pmb{\theta}}_j$ is obtained as $\hat{{\pmb{\theta}}}_j \pm z_{1-\alpha/2} \{\hat{{\pmb{V}}}\}_{jj}^{1/2}$, where $z_q$ is the $q$-quantile of a standard normal distribution, and $\hat{{\pmb{V}}}$ is an estimate of $var(\hat {\pmb{\theta}})$ obtained by either Equation (\ref{hatH}) or bootstrap resampling. \section{Monte Carlo simulations} \label{sec:montecarlo} In our Monte Carlo experiments, we generate data from a Poisson model as follows. 
At time $t=0$, we populate $n_{\mathcal{L}}$ tiles using equal counts for cells of different colors. For $t = 1, \dots, T$, observations are drawn from the multivariate Poisson model $ Y^{(c)}_{i,t}|{\pmb{Y}}_{t-1} \sim \text{Pois}(\lambda^{(c)}_{i,t}), c \in \mathcal{C}. $ Recall that the rate $\lambda^{(c)}_{i,t}$ defined in Section \ref{sec:model} contains the autoregressive coefficients $\beta^{(c|c')}$, which are collected in the $n_\mathcal{C} \times n_\mathcal{C} $ matrix $\mathcal{B}$. We assess the performance of the MLE under different settings concerning the size and sparsity of $\mathcal{B}$. Consider the three models with the following choices of $\mathcal{B}$: \[ {\pmb{\mathcal{B}_1}} = \left( \begin{array}{ccc} 0.7 & -0.7 & 0.7 \\ 0.7 & 0.7 & -0.7 \\ -0.7 & 0.7 & 0.7\end{array} \right), {\pmb{\mathcal{B}_2}} = \left( \begin{array}{ccc} 0.05 & -0.15 & 0.25 \\ 0.35 & 0.45 & -0.55 \\ -0.65 & 0.75 & 0.85 \end{array} \right), {\pmb{\mathcal{B}_3}} = \left( \begin{array}{ccc} 0.7 & -0.7 & 0.7 \\ 0 & 0.7 & 0 \\ 0 & 0 & 0.7\end{array} \right). \] Denote by Model $i$ the model corresponding to $\mathcal{B}_i$, $i = 1,2,3$. In Model 1, all the effects in ${\pmb{\mathcal{B}}}$ have the same size; in Model 2, the effects have decreasing sizes; Model 3 is the same as Model 1, but with some interactions exactly equal to zero. We set $\alpha^{(1)}= \cdots = \alpha^{(n_{\mathcal{C}})} = -0.1$ for all three models. These parameter choices reflect the situation where the generated process ${\pmb{Y}}$ has moderate growth. In Tables \ref{t:Sim1} and \ref{t:Sim2}, we show results based on 1000 Monte Carlo runs generated from Models 1-3, for $n=25$, $n_\mathcal{C} = 3$, and $T=10, 25$. In Table \ref{t:Sim1}, we show Monte Carlo estimates of the squared bias and variance of $\hat{{\pmb{\theta}}}$. Both the squared bias and variance of our estimator are quite small in all three models, and decrease as $T$ gets larger. The variances in Model 2 are slightly larger than those in the other two models due to the difficulty of estimating parameters close to zero. \begin{table}[h] \begin{center} \begin{tabular}{ccccccccccc }\toprule & \multicolumn{2}{c}{$T = 10$} & \phantom{abc}& \multicolumn{2}{c}{$T = 25$} & \phantom{abc}& \\ \cmidrule{2-3} \cmidrule{5-6} & $\widehat{\text{Bias}}^2$ & $\widehat{\text{Var}}$ && $\widehat{\text{Bias}}^2$ & $\widehat{\text{Var}}$ \\ \midrule Model 1 & 0.45(0.57) & 5.75(0.26) & &0.29(0.32) & 2.36(0.11)\\ Model 2 & 0.64(0.91) & 9.66(0.42) & &0.67(0.71) & 4.45(0.20)\\ Model 3 & 0.77(0.97) & 8.09(0.36) && 0.52(0.51) & 3.47(0.16)\\ \bottomrule \end{tabular} \end{center} \caption{Monte Carlo estimates for squared bias $(\times 10^{-6})$ and variance $(\times 10^{-4})$ of the MLE for the three models with time points $T= 10, 25.$ Simulation standard errors are shown in parentheses. The three models differ in terms of the coefficients $\beta^{(c|c')}, c,c' \in \mathcal{C}$, as described in Section \ref{sec:montecarlo}: Non-zero equal effects (Model 1), non-zero decreasing interactions (Model 2), and sparse effects (Model 3). For all models, $ \alpha^{(c)} = -0.1, c = 1, 2, 3$. 
Estimates are based on 1000 Monte Carlo runs.}\label{t:Sim1} \end{table} In Table \ref{t:Sim2}, we report the coverage probability for symmetric confidence intervals of the form $\hat {{\pmb{\theta}}} \pm z_{1-\alpha/2} {\widehat{sd}(\hat{{\pmb{\theta}}}) }$, where $z_q$ is the $q$-quantile of a standard normal distribution, with $\alpha = 0.01, 0.05, 0.10.$ The standard error, $\widehat{sd}(\hat{{\pmb{\theta}}})$, is obtained by the sandwich and the parametric bootstrap estimates, $\hat{{\pmb{V}}}_{est}$ and $\hat{{\pmb{V}}}_{boot}$, described in Section \ref{sec:se}. The coverage probabilities of the confidence intervals are very close to the nominal level for both methods. \begin{table}[h] \begin{center} \begin{tabular}{ccccccccccccccccccc }\toprule & \phantom{abc}& \multicolumn{2}{c}{$T = 10$} & \phantom{abc}& \multicolumn{2}{c}{$T = 25$} & \phantom{abc} \\ \cmidrule{3-4} \cmidrule{6-7} && $\hat{{\pmb{V}}}_{boot}$ & $\hat{{\pmb{V}}}_{est}$ && $\hat{{\pmb{V}}}_{boot}$ & $\hat{{\pmb{V}}}_{est}$ \\ \midrule &Model 1 &98.6 &99.0 & &98.9 & 99.0 \\ $\alpha = 0.01$ &Model 2 & 99.0 & 99.0 &&98.8 &98.9 \\ &Model 3 & 98.9 &99.0 && 98.9 & 98.9\\ \midrule &Model 1 & 94.2 & 95.2 && 94.9 & 95.0 \\ $\alpha = 0.05$ &Model 2 & 95.2 &95.1 & &95.0 & 95.3 \\ &Model 3 & 95.4 &95.5 && 94.9 & 95.1\\ \midrule &Model 1 & 89.2 & 90.3 && 90.1 &90.3 \\ $\alpha = 0.10$&Model 2 & 90.6 &90.0 &&89.7 &90.0\\ &Model 3 & 90.6 & 90.6 && 90.2 &90.2 \\ \bottomrule \end{tabular} \end{center} \caption{ Monte Carlo estimates for the coverage probability of $(1-\alpha)\%$ confidence intervals $\hat {{\pmb{\theta}}} \pm z_{1-\alpha/2} {\widehat{sd} (\hat{{\pmb{\theta}}})}$, with ${\widehat{sd}(\hat{{\pmb{\theta}}}) }$ obtained using the bootstrap ($\hat{{\pmb{V}}}_{boot}$) and sandwich ($\hat{{\pmb{V}}}_{est}$) estimators of Section \ref{sec:se}. The three models differ in terms of the coefficients $\beta^{(c|c')}, c,c' \in \mathcal{C}$ as described in Section \ref{sec:montecarlo}: Non-zero equal effects (Model 1), non-zero decreasing interactions (Model 2), and sparse effects (Model 3). For all models, $ \alpha^{(c)} = -0.1, c = 1, 2, 3$; estimates are based on 1000 Monte Carlo runs.}\label{t:Sim2} \end{table} In Table \ref{t:Sim3}, we show results for model selection based on 1000 Monte Carlo samples from Model 3 using the AIC and the BIC given in Section 2 for $n=25$ and $T=10, 25$. We report the Type A error (a term is not selected when it actually belongs to the true model) and the Type B error (a term is selected when it is not in the true model). For both AIC and BIC, model selection is more accurate for larger $T$. As expected, AIC tends to over-select; BIC outperforms AIC, with zero Type A error and very low Type B error. \begin{table}[h] \begin{center} \begin{tabular}{cccccccccccccccccccccccccccc }\toprule & \multicolumn{3}{c}{$T = 10$} &\multicolumn{3}{c}{$T = 25$} \\ \cmidrule{2-3} \cmidrule{5-6} & Type A & Type B & & Type A & Type B & \\ \midrule AIC & 0.00 & 10.00 & & 0.00 & 10.38 &&\\ \midrule BIC & 0.00 & 0.22 & &0.00 & 0.20 &\\ \bottomrule \end{tabular} \end{center} \caption{ Monte Carlo estimates for $\%$ Type A error (a term is not selected when it actually belongs to the true model) and $\%$ Type B error (a term is selected when it is not in the true model) using the AIC and BIC criteria. Results are based on 1000 Monte Carlo samples generated from Model 3 with $n=25$ and $T=10, 25$. 
}\label{t:Sim3} \end{table} \section{Analysis of the cancer cell growth data} \label{sec:realdata} Cancer cell behavior is believed to be determined by several factors including genetic profile and differentiation state. However, the presence of other cancer cells and non-cancer cells has also been shown to have a great impact on overall tumor behavior \cite{tabassum2015tumorigenesis,kalluri2006fibroblasts}. It is therefore important to be able to dissect and quantify these interactions in complex culture systems. The data sets in this section represent two scenarios: cancer cell-fibroblast co-culture and cloned cancer cell co-culture experiments. The data sets analyzed consist of counts of cell types (different cancer cell populations expressing different fluorescent proteins, and non-fluorescent fibroblasts) from 9 subsequent images taken at 8-hour intervals over a period of 3 days using the Operetta high-content imager (Perkin Elmer). Information regarding cell type (fluorescent profile) and spatial coordinates for each individual cell was extracted using the associated software (Harmony, Perkin Elmer). Each image was subsequently tiled using a $25\times 25$ regular grid. \subsection{Cancer cell-fibroblast co-culture experiment} In this experiment, cancer cells are co-cultured with fibroblasts, a predominant cell type in the tumor microenvironment, believed to affect tumor progression, partly due to interactions with and activation by cancer cells \cite{kalluri2006fibroblasts}. Here, fibroblasts (F) are non-fluorescent, whereas cancer cells fluoresce either in the red (R) or green (G) channels due to the experimental expression of mCherry or GFP proteins, respectively. Cells were initially seeded at a ratio of 1:1:2 (R:G:F). \paragraph{Model selection and inference.} We applied our methodology to quantify the magnitude and direction of the impacts that the considered cell types have on each other's growth. To select the relevant terms in the intensity expression (\ref{intensity1}), we carry out model selection using the BIC criterion. In Table \ref{t:Real}, we show estimated parameters for the full and the BIC models, with bootstrap $95\%$ confidence intervals in parentheses. Figure \ref{f:realdata2} illustrates the estimated spatio-temporal impacts between cell types using a directed graph. The solid and dashed arrows represent significant and non-significant impacts between cell types at the $95\%$ confidence level, respectively. Significant impacts coincide with parameters selected by BIC. The interactions within each cell type ($\hat{\beta}^{(c|c)}, c= R, G, F$) are significant, which is consistent with healthy growing cells. As anticipated, the effects $\hat{\beta}^{(c|c)}$ for the cancer cells are larger than those for the slower growing fibroblasts. The validity of the estimated parameters is also supported by the similar sizes of the parameters for the green and red cancer cells. This is expected, since the red and green cancer cells are biologically identical except for the fluorescent protein they express. Interestingly, the sizes of the estimated effects within both types of cancer cells ($\hat{\beta}^{(c|c)}, c= R, G$) are larger than the impacts they have on one another ($\hat{\beta}^{(G|R)}$ and $\hat{\beta}^{(R|G)}$). This is not surprising, since $\hat{\beta}^{(c|c)}$ $(c= R, G)$ reflects not only impacts between cells from the same cell population, but also cell proliferation. 
The fact that we are able to detect the impacts between the red and green cancer cells confirms that our methodology is sensitive enough to detect biologically relevant impacts, even though no interactions were found between the cancer cells and the fibroblasts. This might be due to the fact that we used normal fibroblasts that had not previously been in contact with cancer cells and thus had not been activated to support tumor progression, as is the case with cancer-activated fibroblasts. \begin{figure}[ht] \centering \includegraphics[scale = 0.9]{Figure2.eps} \caption{Directed graph showing fitted spatio-temporal interactions between GFP cancer cells (G), mCherry cancer cells (R) and fibroblasts (F). The solid and dashed arrows represent the significant and non-significant interactions between cell types at the $95\%$ confidence level, respectively. } \label{f:realdata2} \end{figure} \begin{table}[h] \begin{center} \scalebox{0.9}{ \begin{tabular}{ccccccccc }\toprule\label{Real} & \multicolumn{3}{c}{Full model} & \phantom{abc} \phantom{abc} \\ \cmidrule{2-4} $c=$ & $G$ & $R$ & $F$ \\ \midrule $\widehat{\alpha}^{(c)}$ & \;-0.99 (-1.19, -0.79) & \;-0.50 (-0.70, -0.30) & \;-0.26 (-0.45, -0.06) \\ $\widehat{\beta}^{(G|c)}$ & 1.23 (1.10, 1.35) & 0.34 (0.21, 0.48) & 0.12 (-0.03, 0.27) \\ $\widehat{\beta}^{(R|c)}$ & 0.28 (0.17, 0.38) & 1.09 (0.96, 1.21) & \;0.02 (-0.09, 0.13) \\ $\widehat{\beta}^{(F|c)}$ & 0.10 (-0.01, 0.21) & \;0.02 (-0.07, 0.12) & 0.92 (0.81, 1.03) \\ \midrule & \multicolumn{3}{c}{BIC model} & \phantom{abc} \phantom{abc} \\ \cmidrule{2-4} $c=$ & $G$ & $R$ & $F$\\ \midrule $\widehat{\alpha}^{(c)}$ &\;-0.88 (-1.04, -0.72)& \;-0.49 (-0.66, -0.31)& \;-0.19 (-0.36, -0.02) \\ $\widehat{\beta}^{(G|c)}$ & 1.24 (1.11, 1.37)& 0.35 (0.21, 0.48)& /\\ $\widehat{\beta}^{(R|c)}$ & 0.28 (0.17, 0.39)& 1.09 (0.96, 1.21)& /\\ $\widehat{\beta}^{(F|c)}$ & /& /& 0.93 (0.82, 1.04) \\ \bottomrule \end{tabular} } \end{center} \caption{Estimated parameters for the full and the BIC models based on the cancer cell growth data described in Section \ref{sec:realdata}. Bootstrap $95\%$ confidence intervals based on $50$ bootstrap samples are given in parentheses. } \label{t:Real} \end{table} \paragraph{Goodness-of-fit and one-step ahead prediction} To illustrate the goodness-of-fit of the estimated model, we generate cell counts for each type in each tile, $\hat{y}_{i,t}^{{(c)}}$, from the Pois($\hat{\lambda}_{i,t}^{{(c)}}$) distribution for $t \geq 1$, where $\hat{\lambda}_{i,t}^{{(c)}}$ is computed using observations at time $t-1$, with parameters estimated from the entire dataset. In Figure \ref{f:goodnesofffit}, we compare the observed and generated cell counts for GFP cancer cells (G), mCherry cancer cells (R) and fibroblasts (F) across the entire image. The solid and dashed curves for all cell types are close, suggesting that the model fits the data reasonably well. As anticipated, the overall growth rates for the red and green cancer cells are similar, and noticeably larger than the growth rate for fibroblasts. \begin{figure}[ht] \centering \includegraphics[scale = 0.9]{Fig4.eps} \caption{ Goodness-of-fit of the estimated model. Observed (solid) and predicted (dashed) number of GFP cancer cells (G), mCherry cancer cells (R) and fibroblasts (F) for the entire image. 
Predicted cell counts for each cell type in each tile, $\hat{y}_{i,t}^{{(c)}}$, are generated from the conditional Poisson model with intensity $\hat{\lambda}_{i,t}^{{(c)}}$ defined in Equations (\ref{intensity1}) and (\ref{intensity2}), where the coefficients $\hat{\beta}^{(c|c')}$ are estimated from the entire dataset.} \label{f:goodnesofffit} \end{figure} To assess the prediction performance of our method, we consider one-step-ahead forecasting using parameters estimated from a moving window of five time points. In Figure \ref{qqplot2}, we show quantiles of observed cell counts against predicted counts for each tile. The upper and lower $95\%$ confidence bounds are computed non-parametrically by taking $\hat{F}^{-1}_1 \big(\hat{F}_0 (y_{t}^{(c)}) - 0.95 \big)$ and $ \hat{F}^{-1}_1 \big(\hat{F}_0 (y_{t}^{(c)}) + 0.95 \big)$, where $\hat{F}_0$ and $\hat{F}_1$ are the empirical distributions of the observations and predictions at time $t$, respectively \citep{koenker2005quantile}. The identity line falls within the confidence bands in each plot, indicating satisfactory prediction performance. \begin{figure}[ht] \centering \includegraphics[scale = 0.9]{Fig3.eps} \caption{ QQ-plots for cell growth, comparing observed (horizontal axis) and one-step-ahead predicted (vertical axis) cell counts per tile on the entire image at times $t=6,7,8$ for GFP cancer cells (G), mCherry cancer cells (R) and fibroblasts (F). One-step-ahead predictions are based on the model fitted using a moving window of five time points. }\label{qqplot2} \end{figure} \section{Conclusion and final remarks}\label{sec:conclusion} In this paper, we introduced a conditional spatial autoregressive model and accompanying inference tools for multivariate spatio-temporal cell count data. The new methodology enables one to measure the overall cell growth rate in longitudinal experiments, as well as spatio-temporal interactions within either homogeneous or heterogeneous cell populations. The proposed inference approach strikes a good balance between computational feasibility and statistical accuracy. Numerical findings from simulated and real data in Sections 3 and 4 confirm the validity of the proposed approach in terms of prediction, goodness-of-fit and estimation accuracy. The data sets described in this paper serve as a proof-of-concept that the proposed methodology works. However, the potential applications and the relevant questions that the methodology can help to answer in cancer cell biology are plentiful. To build on the examples given in this paper, the methodology can be used to study interactions between cancer cells and a wide range of cancer-relevant cell types, such as cancer-activated fibroblasts, macrophages, and other immune cells, when co-cultured. Since a substantial proportion of cancer cells in tumors are in close proximity to other cell types that have been shown to affect tumor progression, using these co-cultures is more representative of the situation in a patient compared to studying cancer cells on their own. In addition to giving the final cell number, the presented approach can dissect which cell types affect the growth of others, and to what extent, in complex heterogeneous populations. This could be relevant in a drug discovery setting to determine if a drug affects cancer cell growth due to internal effects (on other cancer cells) or by interfering with the interaction between the cancer cells and other cell types. 
Drugs with different targets and mechanisms of action are particularly sought after, as they provide a wider target profile, increasing the chance of patients responding as well as reducing the risk of tumors becoming resistant. The impact of different genes and associated pathways in different cell types in relation to inter-cellular interactions can also be studied by genetically modifying the cell type(s) in question before mixing the cells together. This could be beneficial to identify new potential drug targets. Our approach is also applicable in other kinds of studies where local spatial cell-cell interactions are believed to affect cell growth, such as studies of neurodegenerative diseases \cite{garden2012intercellular} and wound healing/tissue re-generation \cite{leoni2015wound}. In addition to evaluating cell growth, our approach can also be used to study transitions between cellular phenotypes upon interaction with other cell types, provided that the different phenotypes studied can be distinguished from one another based on the image data. Finally, it is worth noting that issues may arise when cells become too confluent/dense, as this may lead to segmentation problems in the imaging system. If cells become completely confluent, they are likely to progressively stop growing. If one wants to measure for a longer period of time, experiments can be performed in larger wells/plates or with smaller starting cell numbers. Our methods offer several practical advantages to researchers interested in analysing multivariate count data on heterogeneous cell populations. First, the conditional Poisson model does not require tracking individual cells across time, a process that is often difficult to automate due to cell movement, morphology changes at subsequent time points, and additional complications related to the storage of large data files. Second, we are able to quantify local spatio-temporal interactions between different cell populations from a very simple experimental set-up where the different cell populations are grown together in a single experimental condition (co-culture). An alternative, solely experimentally-based strategy would require monitoring the different cell types alone and together at different cell densities (number of cells per condition) in order to make inferences in terms of potential interactions. However, such an approach would give no possibility of evaluating the spatial relations in the co-culture conditions and would still restrict the number of simultaneously tested cell types to two. In the future, we foresee several useful extensions of the current methodology, possibly enabling the treatment of more complex experimental settings. First, complex experiments involving a large number of cell populations, $n_{\mathcal{C}}$, would imply an over-parametrized model. Clearly, this large number of parameters would be detrimental to both statistical accuracy and reliable optimization of the likelihood objective function $\ell_n(\theta)$ in (\ref{cl}). To address these issues, we plan to explore a penalized likelihood of the form $\ell_n(\theta) - \text{pen}_{\lambda}(\theta)$, where $\text{pen}_{\lambda}(\theta)$ is a nonnegative sparsity-inducing penalty function. For example, in a different likelihood setting, Bradic et al. \cite{bradic2011penalized} consider the $L_1$-type penalty $\text{pen}_{\lambda}(\theta) = \lambda \sum_j | \theta_j |,$ $\lambda > 0$. 
Second, for certain experiments, it would be desirable to modify the statistics in (\ref{intensity2}) to include additional information on cell growth, such as the distance between heterogeneous cells, and covariates describing cell morphology. In addition, it would be useful to consider tiling the microscope image into a hexagonal lattice, which may be a more natural choice in real applications, since the distances between neighbouring tiles are more even than in a regular rectangular lattice. \section*{Acknowledgements} The authors wish to acknowledge support from the Australian National Health and Medical Research Council grants 1049561, 1064987 and 1069024 to Fr\'{e}d\'{e}ric Hollande. Christina M{\o}lck is supported by the Danish Cancer Society. \bibliographystyle{abbrvnat}
\title{\textbf{Language Models with Pre-Trained (GloVe) Word Embeddings}} \author{ Victor Makarenkov, Lior Rokach, Bracha Shapira \mbox{}\\ \email{makarenk@post.bgu.ac.il, liorrk@bgu.ac.il, bshapira@bgu.ac.il} } \institute{ Department of Software and Information Systems Engineering\\ Ben-Gurion University of the Negev } \maketitle \section{Introduction} In this work we present a step-by-step implementation of training a Language Model (LM), using a Recurrent Neural Network (RNN) and pre-trained GloVe word embeddings, introduced by Pennington et al. in \cite{glove2014}. The implementation follows the general idea of training RNNs for LM tasks presented in \cite{DBLP:journals/corr/ZarembaSV14}, but uses a Gated Recurrent Unit (GRU) \cite{DBLP:journals/corr/ChoMGBSB14} as the memory cell, rather than the more commonly used LSTM \cite{LSTM}. The implementation presented is based on \texttt{keras}\footnote{https://keras.io/} \cite{keras}. \section{Motivation} Language Modeling is an important task in many Natural Language Processing (NLP) applications. These applications include clustering, information retrieval, machine translation, and spelling and grammatical error correction. In general, a language model is defined as a function that puts a probability measure over strings drawn from some vocabulary. In this work we consider an RNN-based language modeling task, which aims at predicting the $n$-th word in a sequence, given the previous $n-1$ words; put otherwise, finding the word that maximizes $P(w_n|w_1,...,w_{n-1})$. The $n$ parameter is the $ContextWindowSize$ argument in the algorithm described further.\\ To maximize the effectiveness and performance of the model we use word embeddings into a continuous vector space. The embedding model we use is the GloVe \cite{glove2014} model, with dimensionality equal to 300 or 50. We use both a model pre-trained\footnote{downloaded from: http://nlp.stanford.edu/projects/glove/} on 42 billion tokens with a vocabulary of 1.9 million words, and vector models trained specifically for this work on the SIGIR-2015 and ICTIR-2015 conference proceedings.\\ The model itself is trained as an RNN, with an internal GRU for memorizing the prior sequence of words. It was shown recently that RNNs outperform other approaches on most language modeling tasks \cite{DBLP:journals/corr/ZarembaSV14, DBLP:journals/corr/ChoMGBSB14} when tuned and trained correctly. \section{Short Description} In this work we use 300-dimensional and 50-dimensional GloVe word embeddings. In order to embed the words in a vector space, the GloVe model is trained by examining the word co-occurrence matrix $X_{ij}$ within a huge text corpus. Despite the huge size of the Common Crawl corpus, some words may be missing from the embedding vocabulary, so we assign random vectors to these words, and use the same vectors consistently if we encounter the same unseen word again in the text. The RNN is further trained to predict the next word in its embedding form, that is, to predict the next embedding vector, given the $ContextWindowSize$ previous words. We divide the $TextFile$ into 70\% and 30\% parts for training and testing purposes. 
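The handling of out-of-vocabulary words and the embedding lookup can be sketched as follows (a minimal sketch mirroring the pseudo code in the next section; the file name and the uniform initialization range are assumptions of this illustration):

\begin{verbatim}
import numpy as np

def load_glove(path):
    # Read GloVe vectors in their plain-text format:
    # one word per line, followed by the vector components.
    vectors = {}
    with open(path) as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype="float32")
    return vectors

def tokens_to_vectors(tokens, glove, oov, dim=300):
    # Map tokens to embeddings; an unseen word gets a random vector
    # that is cached in `oov`, so it is embedded consistently later.
    rows = []
    for w in tokens:
        if w not in glove and w not in oov:
            oov[w] = np.random.uniform(-0.5, 0.5, dim).astype("float32")
        rows.append(glove[w] if w in glove else oov[w])
    return np.stack(rows)

# 70/30 split of the embedded sequence for training and testing:
# X = tokens_to_vectors(words, load_glove("glove.42B.300d.txt"), oov={})
# split = int(0.7 * len(X)); X_train, X_test = X[:split], X[split:]
\end{verbatim}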
\section{Pseudo Code} \begin{algorithm}[H] \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{Input: glove-vectors : Pre-Trained-Word-Embeddings, Text-File, ContextWindowSize=10, hidden-units=300} \Output{A Language Model trained on Text-File with word-embeddings representation} \For{ $w \in$ Text-File }{ \If {$w \in$ OOV-file}{ tokenized-file.append(OOV-file.get-vector(w)) } \If {$w \in$ glove-vectors}{ tokenized-file.append(glove-vectors.get-vector(w)) } \Else { vector $\leftarrow$ random-vector() \\ tokenized-file.append(vector) \\ OOV-file.append(vector) } } NN $\leftarrow$ CreateSingleHiddenLayerDenseRNN(unit=GRU, inputs=300, outputs=300, hidden-units)\\ NN.setDropout(0.8)\\ NN.setActivationFunction(Linear)\\ NN.setLossFunction(MSE)\\ NN.setOptimizer(rmsprop)\\ $X_{train} \leftarrow$ tokenized-file.getInterval(0.0,0.7)\\ $X_{test} \leftarrow$ tokenized-file.getInterval(0.7,1.0)\\ $Y_{train} \leftarrow X_{train}.Shift(ContextWindowSize)$\\ $Y_{test} \leftarrow X_{test}.Shift(ContextWindowSize)$\\ NN.Fit($X_{train}, Y_{train}$)\\ NN.Predict($X_{test}, Y_{test}$)\\ \caption{Training a language model on word embeddings} \end{algorithm} \section{Detailed Explanation} \label{section:explain} As stated earlier, the GloVe model is trained by examining the co-occurrence matrix $X_{ij}$ of two words $i$ and $j$ within a huge text corpus. The main idea of the training is to enforce ${w_i}^\top w_j+b_i+b_j=\log(X_{ij})$, where $w_i$ and $w_j$ are the trained vectors, and $b_i$ and $b_j$ are the scalar bias terms associated with words $i$ and $j$. The most important aspects of the GloVe training process are: 1) a weighting function $f$ ensures that very common words (like stop words), which add noise, are not overweighted; 2) rare words are not overweighted either; 3) the co-occurrence strength, when modeled as a distance, is smoothed with a $\log$ function. Thus, the final loss function for a GloVe model is $J= \sum_{i,j \in V}f(X_{ij})({w_i}^\top w_j+b_i+b_j-\log(X_{ij}))^2$, where $V$ is the complete vocabulary, $f(x)=(x/x_{max})^\alpha$ if $x < x_{max}$, and $f(x)=1$ otherwise. The model used in this work was trained with $x_{max}=100$ and $\alpha=0.75$. \begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{RNN} \caption{A general architecture for training an unfolded RNN, where labels are created by shifting the data by a window of size 1} \label{fig:RNN} \end{figure} Training of the RNN lies somewhere between supervised and unsupervised techniques: no extra labeled data is given, but part of the input is used as labels. In this \textit{unfolded} training paradigm, which is illustrated in Figure \ref{fig:RNN}, the output is \textit{shifted} so as to create labels for the training data. In this way the RNN can learn to predict the next word (vector) in a sequence. \section{Evaluation} \subsection{Pre-trained Vector Models} In order to evaluate our implementation of the language model, we train several different language models and compare the distribution of prediction errors with that of random word prediction. The error is measured with the cosine distance\footnote{implemented in Python, with the scipy package} between two vectors: $1- \frac {\pmb x \cdot \pmb y}{||\pmb x|| \cdot ||\pmb y||}$. In Figure \ref{fig:hidden30} we show the error distribution of the RNN with 30 hidden units. 
The training was performed on a text file of 5000 tokens, the first entry of the English Wikipedia, \textit{Anarchism}. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{hidden30} \caption{Two distributions of the predicted next word vector errors. The left shows the errors of the RNN predictions, and the right shows the errors of random predictions. The RNN in the model was trained with 30 hidden GRU units. It took 300 iterations (epochs) on the data to achieve these results.} \label{fig:hidden30} \end{figure} The machine that was used to run the evaluation had the following characteristics: 1.7 GHz Core i7 with 8 GB memory, OS X version 10.11.13. The time it took to train the model with 30 epochs was 125 seconds. The time it took to make the predictions on the test set was 0.5 seconds. In Figure \ref{fig:hidden300} we show the error distribution of the RNN with 300 hidden units. The time it took to train the model with \textbf{300} epochs was 1298 seconds. The time it took to make the predictions on the test set was 0.49 seconds. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{hidden300} \caption{Two distributions of the predicted next word vector errors. The left shows the errors of the RNN predictions, and the right shows the errors of random predictions. The RNN in the model was trained with 300 hidden GRU units. It took 300 iterations (epochs) on the data to achieve these results.} \label{fig:hidden300} \end{figure} \subsection{Self Trained Vector Model} In addition, in order to further evaluate the current approach, we trained a narrow, domain-specific vector model. We used the ICTIR-2015 and SIGIR-2015 conference proceedings as corpora, and produced 50-dimensional vectors. The \textit{vector model} is based on 1,500,000 tokens in total, and resulted in a vocabulary of 17,000 words. The \textit{language model} for the evaluation was built on a paper published in the ICTIR-2015 proceedings \cite{qpp2015}. Consider Figure \ref{fig:sigir}: the distribution of prediction errors differs from that of random prediction even more than in the general case, where the vectors were trained on the general Common Crawl corpus. That is, the performance of the language model on the task of word prediction is higher. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{sigir} \caption{Two distributions of the predicted next word vector errors. The left shows the errors of the RNN predictions, and the right shows the errors of random predictions. The RNN in the model was trained with 50 hidden GRU units. It took 50 iterations (epochs) on the data to achieve these results.} \label{fig:sigir} \end{figure} The time it took to train this model was 10 seconds; computing the predictions for evaluation took 0.04 seconds. \section{Instructions for running the code} The model training in this work was implemented in Python, version 2.7. The library that was used is Keras, which in this implementation was based on the \textit{Theano} framework. Instead of Theano, Google's \textit{TensorFlow} can also be used as the Keras backend in this implementation. In order to train the model yourself, you need to follow these steps: \begin{enumerate} \item Download pre-trained GloVe vectors from \texttt{http://nlp.stanford.edu/projects/glove/} \item Obtain a text to train the model on. In our example we use the Wikipedia \textit{Anarchism} entry. 
\item Open and adjust the LM\_RNN\_GloVe.py file parameters inside the main function: \begin{enumerate} \item file\_2\_tokenize\_name (example = "/Users/macbook/corpora/text2tokenize.txt") \item tokenized\_file\_name (example = "/Users/macbook/corpora/tokenized2vectors.txt") \item glove\_vectors\_file\_name (example = "/Users/macbook/corpora/glove.42B.300d.txt") \item extra\_vocab\_filename (example = "/Users/macbook/corpora/extra\_vocab.txt"). This argument also has a default value in the \texttt{get\_vector} function \end{enumerate} \item Run the following methods: \begin{enumerate} \item tokenize\_file\_to\_vectors(glove\_vectors\_file\_name, file\_2\_tokenize\_name, tokenized\_file\_name) \item run\_experiment(tokenized\_file\_name) \end{enumerate} \end{enumerate} \section{Discussion} In this work we implemented and tested the training of a LM based on an RNN. To emphasize the strength of such an approach, we have chosen one of the most powerful and prominent techniques for word embeddings: the GloVe embeddings. Although there are other approaches, such as the popular \textit{word2vec} \cite{word2vec} technique, the GloVe embeddings were shown to outperform it on several tasks \cite{glove2014}, partially because of the reasons described in Section \ref{section:explain}. By training the model with two different settings, one of which is an order of magnitude more complex than the other, we show the power of such a LM. The distributions shown in Figures \ref{fig:hidden30} and \ref{fig:hidden300} clearly indicate a much smaller error on the task of next word prediction than the random baseline. The main limitation of this implementation is the fixed window size of the prefix in the LM. This approach does not show the full power of RNN-based LMs. For a dynamic-size prefix LM, consider for example the DyNet \cite{dynet} package. DyNet supports a dynamic computation graph and shares the learned parameters across multiple variable-length instances during the training. \section{The source at GitHub} The code was submitted publicly to the GitHub repository of the author, and is available under the \textit{vicmak} username, \textit{proofseer} project\footnote{https://github.com/vicmak/ProofSeer}. \newpage \bibliographystyle{splncs}
\section{Introduction}\label{sec.introduction} For solving unconstrained nonlinear optimization problems, one of the simplest and most widely used techniques is \emph{steepest descent} (SD). This refers to any strategy in which, from any solution estimate, a productive step is obtained by moving some distance along the negative gradient of the objective function, i.e., the direction along which function descent is steepest. While SD methods have been studied for over a century and employed in numerical software for decades, a unique and powerful instance came about relatively recently in the work by \cite{BarzBorw88}, where a ``two-point step size'' strategy is proposed and analyzed. The resulting SD method, commonly referred to as the BB method, represents an effective alternative to other SD methods that employ an exact or inexact line search when computing the stepsize in each iteration. The theoretical properties of the BB method are now well-known when it is employed to minimize an $n$-dimensional strongly convex quadratic objective function. Such objective functions are interesting in their own right, but one can argue that such analyses also characterize the behavior of the method in the neighborhood of a strong local minimizer of any smooth objective function. In the original work (i.e., \cite{BarzBorw88}), it is shown that the method converges $R$-superlinearly when $n=2$. In \cite{Rayd93}, it is shown that the method converges from any starting point for any natural number~$n$, and in \cite{DaiLiao02} it is shown that the method converges $R$-linearly for any such $n$. In each iteration of the BB method, the stepsize is determined by a computation involving the displacement in the gradient of the objective observed between the current iterate and the previous iterate. As shown in \cite{Flet12}, this idea can be extended to a \emph{limited memory steepest descent} (LMSD) method in which a \emph{sequence} of $m$ stepsizes is computed using the displacements in the gradient over the previous $m$ steps. This extension can be motivated by the observation that these displacements lie in a Krylov subspace determined by a gradient previously computed in the algorithm, which in turn yields a computationally efficient strategy for computing $m$ distinct eigenvalue estimates of the Hessian (i.e., matrix of second derivatives) of the objective function. The reciprocals of these eigenvalue estimates represent reasonable stepsize choices. Indeed, if the eigenvalues of the Hessian are computed exactly, then the algorithm terminates in a finite number of iterations; see \cite{Flet12} and \S\ref{sec.fundamentals}. In \cite{Flet12}, it is shown that the proposed LMSD method converges from any starting point when it is employed to minimize a strongly convex quadratic function. However, to the best of our knowledge, the convergence rate of the method for $m > 1$ has not yet been analyzed. The main purpose of this paper is to show that this LMSD method converges $R$-linearly when employed to minimize such a function. Our analysis builds upon the analyses in \cite{Flet12} and \cite{DaiLiao02}. We mention at the outset that numerical evidence has shown that the practical performance of the BB method is typically much better than known convergence proofs suggest; in particular, the empirical rate of convergence is often $Q$-linear with a contraction constant that is better than that observed for a basic SD method. 
Based on such evidence, we do not claim that the convergence results proved in this paper fully capture the practical behavior of LMSD methods. To explore this claim, we present the results of numerical experiments that illustrate our convergence theory and demonstrate that the practical performance of LMSD can be even better than the theory suggests. We conclude with a discussion of possible explanations of why this is the case for LMSD, in particular by referencing a known finite termination result for a special (computationally expensive) variant of the algorithm. \paragraph{Organization} In \S\ref{sec.fundamentals}, we formally state the problem of interest, notation to be used throughout the paper, Fletcher's LMSD algorithm, and a finite termination property for it. In~\S\ref{sec.linear}, we prove that the LMSD algorithm is $R$-linearly convergent for any history length. The theoretical results proved in~\S\ref{sec.linear} are demonstrated numerically in \S\ref{sec.numerical} and concluding remarks are presented in~\S\ref{sec.conclusion}. \paragraph{Notation} The set of real numbers (i.e., scalars) is denoted as $\R{}$, the set of nonnegative real numbers is denoted as $\R{}_+$, the set of positive real numbers is denoted as $\R{}_{++}$, and the set of natural numbers is denoted as $\N{} := \{1,2,\dots\}$. A natural number as a superscript is used to denote the vector-valued extension of any of these sets---e.g., the set of $n$-dimensional real vectors is denoted as $\R{n}$---and a Cartesian product of natural numbers as a superscript is used to denote the matrix-valued extension of any of these sets---e.g., the set of $n \times n$ real matrices is denoted as $\R{n\times n}$. A finite sequence of consecutive positive integers of the form $\{1,\dots,n\} \subset \N{}$ is denoted using the shorthand $[n]$. Subscripts are used to refer to a specific element of a sequence of quantities, either fixed or generated by an algorithm. For any vector $v \in \R{n}$, its Euclidean (i.e., $\ell_2$) norm is denoted by $\|v\|$. \section{Fundamentals}\label{sec.fundamentals} In this section, we state the optimization problem of interest along with corresponding definitions and concepts to which we will refer throughout the remainder of the paper. We then state Fletcher's LMSD algorithm and prove a finite termination property for it similar to that proved in \cite{Flet12}. \subsection{Problem Statement} Consider the problem of minimizing a strongly convex quadratic function $f : \R{n} \to \R{}$ defined by a symmetric positive definite matrix $A \in \R{n \times n}$ and vector $b \in \R{n}$, namely, \bequation\label{prob.f} \min_{x\in\R{n}}\ f(x),\ \ \text{where}\ \ f(x) = \thalf x^TAx - b^Tx. \eequation Formally, we make the following assumption about the problem data. \bassumption\label{ass.f} \textit{ The matrix $A$ in problem~\eqref{prob.f} has $r \leq n$ distinct eigenvalues denoted by \bequation\label{eq.eigenvalues} \lambda_{(r)} > \cdots > \lambda_{(1)} > 0. \eequation Consequently, this matrix yields the eigendecomposition $A = Q \Lambda Q^T$, where \bequation\label{eq.eigendecomposition} \baligned Q &= \bbmatrix q_1 & \cdots & q_n \ebmatrix && \text{is orthogonal} \\ \text{and}\ \ \Lambda &= \diag(\lambda_1,\dots,\lambda_n) && \text{with}\ \ \lambda_n \geq \cdots \geq \lambda_1 > 0 \\ &&& \text{and}\ \ \lambda_i \in \{\lambda_{(1)},\dots,\lambda_{(r)}\}\ \ \text{for all}\ \ i \in [n].
\ealigned \eequation } \eassumption The eigendecomposition of $A$ defined in Assumption~\ref{ass.f} plays a crucial role in our analysis. In particular, we will make extensive use of the fact that any gradient of the objective function computed in the algorithm, a vector in $\R{n}$, can be written as a linear combination of the columns of the orthogonal matrix $Q$. This will allow us to analyze the behavior of the algorithm componentwise according to the weights in these linear combinations corresponding to the sequence of computed objective gradients. Such a strategy has been employed in all of the aforementioned articles on BB and LMSD. \subsection{Limited memory steepest descent (LMSD) method} Fletcher's limited memory steepest descent method is stated as Algorithm~\ref{alg.lmsd}. The iterate update in the algorithm is the standard update in an SD method: each subsequent iterate is obtained from the current iterate minus a multiple of the gradient of the objective function evaluated at the current iterate. With this update at its core, Algorithm~\ref{alg.lmsd} operates in cycles. At $x_{k,1} \in \R{n}$ representing the initial point of the~$k$th cycle, a sequence of~$m$ positive stepsizes $\{\alpha_{k,j}\}_{j\in[m]}$ is selected to be employed in an inner cycle composed of $m$ updates, the result of which is set as the initial point for cycle $k+1$. Once such an inner cycle has been performed, the stepsizes to be employed in the next cycle are computed as the reciprocals of estimates of eigenvalues of $A$. \cite{Flet12} describes how these can be obtained in one of three ways, all offering the same estimates (in exact arithmetic). The most intuitive definition is that, for cycle $k+1$, the estimates come as the eigenvalues of $T_k := Q_k^TAQ_k$, where $Q_k \in \R{n \times m}$ satisfying $Q_k^TQ_k = I$ is defined by a thin QR factorization of the matrix of $k$th cycle gradients, i.e., for some upper triangular matrix $R_k \in \R{m \times m}$, such a factorization satisfies the equation \bequation\label{eq.G} Q_kR_k = G_k := \bbmatrix g_{k,1} & \cdots & g_{k,m} \ebmatrix. \eequation (For now, let us assume that $G_k$ has linearly independent columns, in which case the matrix $R_k$ in~\eqref{eq.G} is nonsingular. For a discussion of situations when this is not the case, see Remark~\ref{rem.G_lin_ind} later on.) Practically, however, obtaining $T_k$ in this manner requires multiplications with $A$ as well as the storage of the $n$-vectors composing the columns of $Q_k$. Both can be avoided by obtaining the gradient at the initial point of cycle $k+1$, namely $g_{k+1,1} \equiv g_{k,m+1}$, as well as the matrix of $k$th-cycle reciprocal stepsizes \bequation\label{eq.J} J_k \gets \bbmatrix \alpha_{k,1}^{-1} & & \\ -\alpha_{k,1}^{-1} & \ddots & \\ & \ddots & \alpha_{k,m}^{-1} \\ & & -\alpha_{k,m}^{-1} \ebmatrix, \eequation computing (upper triangular) $R_k$ and $r_k$ from the partially extended Cholesky factorization \bequation\label{eq.Cholesky} G_k^T\bbmatrix G_k & g_{k,m+1} \ebmatrix = R_k^T\bbmatrix R_k & r_k \ebmatrix, \eequation then computing \bequation\label{eq.T} T_k \gets \bbmatrix R_k & r_k \ebmatrix J_k R_k^{-1}. \eequation Fletcher's third approach, which also avoids multiplications with $A$, is to compute \bequation\label{eq.QR} T_k \gets \bbmatrix R_k & Q_k^Tg_{k,m+1} \ebmatrix J_k R_k^{-1}.
\eequation However, this is less efficient than using \eqref{eq.T} due to the need to store $Q_k$ and since the QR factorization of $G_k$ requires $\sim\!\!m^2 n$ flops, as opposed to the $\sim\!\!\thalf m^2 n$ flops required for \eqref{eq.T}; see \cite{Flet12}. \balgorithm[H] \renewcommand{\thealgorithm}{LMSD} \caption{\ Limited Memory Steepest Descent Method} \label{alg.lmsd} \begin{algorithmic}[1] \State choose an initial point $x_{1,1} \in \R{n}$, history length $m \in \N{}$, and termination tolerance $\epsilon \in \R{}_{+}$ \State choose stepsizes $\{\alpha_{1,j}\}_{j\in[m]} \subset \R{}_{++}$ \State compute $g_{1,1} \gets \nabla f(x_{1,1})$ \State \textbf{if} $\|g_{1,1}\| \leq \epsilon$, \textbf{then return} $x_{1,1}$ \label{step.termination1} \For{$k \in \N{}$} \For{$j \in [m]$} \State set $x_{k,j+1} \gets x_{k,j} - \alpha_{k,j} g_{k,j}$ \State compute $g_{k,j+1} \gets \nabla f(x_{k,j+1})$ \State \textbf{if} $\|g_{k,j+1}\| \leq \epsilon$, \textbf{then return} $x_{k,j+1}$ \label{step.termination2} \EndFor \State set $x_{k+1,1} \gets x_{k,m+1}$ and $g_{k+1,1} \gets g_{k,m+1}$ \label{step.end_of_cycle} \State set $G_k$ by \eqref{eq.G} and $J_k$ by \eqref{eq.J} \State compute $R_k$ and $r_k$ to satisfy \eqref{eq.Cholesky} and set $T_k$ by \eqref{eq.T} \State set $\{\theta_{k,j}\}_{j\in[m]} \subset \R{}_{++}$ as the eigenvalues of $T_k$ in decreasing order \State set $\{\alpha_{k+1,j}\}_{j\in[m]} \gets \{\theta_{k,j}^{-1}\}_{j\in[m]} \subset \R{}_{++}$ \EndFor \end{algorithmic} \end{algorithm} The choice to order the eigenvalues of $T_k$ in decreasing order is motivated by \cite{Flet12}. In short, this ensures that the stepsizes in cycle $k+1$ are ordered from smallest to largest, which improves the likelihood that the objective function and the norm of the objective gradient decrease monotonically, at least initially, in each cycle. This ordering is not essential for our analysis, but is a good choice for any implementation of the algorithm; hence, we state the algorithm to employ this ordering. One detail that remains for a practical implementation of the method is how to choose the initial stepsizes $\{\alpha_{1,j}\}_{j\in[m]} \subset \R{}_{++}$. This choice has no effect on the theoretical results proved in this paper, though our analysis does confirm the fact that the practical performance of the method can improved if one has the knowledge to choose one or more stepsizes exactly equal to reciprocals of eigenvalues of~$A$; see~\S\ref{sec.finite}. Otherwise, one can either provide a full set of $m$ stepsizes or carry out an initialization phase in which the first few cycles are shorter in length, dependent on the number of objective gradients that have been observed so far; see \cite{Flet12} for further discussion on this matter. \bremark\label{rem.G_lin_ind} \textit{ In \eqref{eq.G}, if $G_k$ for some $k \in \N{}$ does not have linearly independent columns, then $R_k$ is singular and the formulas \eqref{eq.T} and \eqref{eq.QR} are invalid, meaning that the employed approach is not able to provide~$m$ eigenvalue estimates for cycle~$k$. As suggested in \cite{Flet12}, an implementation of the method can address this by iteratively removing ``older'' columns of $G_k$ until the columns form a linearly independent set of vectors, in which case the approach would be able to provide $\mtilde \leq m$ stepsizes for the subsequent (shortened) cycle. 
\bremark\label{rem.G_lin_ind} \textit{ In \eqref{eq.G}, if $G_k$ for some $k \in \N{}$ does not have linearly independent columns, then $R_k$ is singular and the formulas \eqref{eq.T} and \eqref{eq.QR} are invalid, meaning that the employed approach is not able to provide~$m$ eigenvalue estimates for cycle~$k$. As suggested in \cite{Flet12}, an implementation of the method can address this by iteratively removing ``older'' columns of $G_k$ until the columns form a linearly independent set of vectors, in which case the approach would be able to provide $\mtilde \leq m$ stepsizes for the subsequent (shortened) cycle. We advocate such an approach in practice and, based on the results proved in this paper, conjecture that the convergence rate of the algorithm would be $R$-linear. However, the analysis for such a method would be extremely cumbersome given that the number of iterations in each cycle might vary from one cycle to the next within a single run of the algorithm. Hence, in our analysis in \S\ref{sec.linear}, we assume that $G_k$ has linearly independent columns for all $k \in \N{}$. In fact, we go further and assume that $\|R_k^{-1}\|$ is bounded proportionally to the reciprocal of the norm of the objective gradient at the first iterate in cycle $k$ (meaning that the upper bound diverges as the algorithm converges to the minimizer of the objective function). These norms are easily computed in an implementation of the algorithm; hence, we advocate that a procedure of iteratively removing ``older'' columns of $G_k$ would be based on observed violations of such a bound. See the discussion following Assumption~\ref{ass.lmsd} in \S\ref{sec.linear}. } \eremark \subsection{Finite Termination Property of LMSD}\label{sec.finite} If, for some $k \in \N{}$ and $j \in [m]$, the stepsizes in Algorithm~\ref{alg.lmsd} up through iteration $(k,j) \in \N{} \times [m]$ include the reciprocals of all of the $r \leq n$ distinct eigenvalues of~$A$, then the algorithm terminates by the end of iteration $(k,j)$ with $x_{k,j+1}$ yielding $\|g_{k,j+1}\| = 0$. This is shown in the following lemma and theorem, which together demonstrate and extend the arguments made in \S2 of \cite{Flet12}. \blemma\label{lem.recursion} \textit{ Under Assumption~\ref{ass.f}, for each $(k,j) \in \N{} \times [m]$, there exist weights $\{d_{k,j,i}\}_{i\in[n]}$ such that~$g_{k,j}$ can be written as a linear combination of the columns of $Q$ in \eqref{eq.eigendecomposition}, i.e., \bequation\label{eq.combination} g_{k,j} = \sum_{i=1}^n d_{k,j,i} q_i. \eequation Moreover, these weights satisfy the recursive property \bequation\label{eq.recursion} d_{k,j+1,i} = (1 - \alpha_{k,j} \lambda_i) d_{k,j,i}\ \ \text{for all}\ \ (k,j,i) \in \N{} \times [m] \times [n]. \eequation } \elemma \bproof Since $g_{k,j} = Ax_{k,j} - b$ for all $(k,j) \in \N{} \times [m]$, it follows that \bequalin && x_{k,j+1} &= x_{k,j} - \alpha_{k,j}g_{k,j},\\ \implies && Ax_{k,j+1} &= Ax_{k,j} - \alpha_{k,j}Ag_{k,j},\\ \implies && g_{k,j+1} &= g_{k,j} - \alpha_{k,j}Ag_{k,j},\\ \implies && g_{k,j+1} &= (I - \alpha_{k,j}A)g_{k,j},\\ \implies && g_{k,j+1} &= (I - \alpha_{k,j}Q \Lambda Q^T) g_{k,j}, \eequalin from which one obtains that \bequationn \sum_{i=1}^n d_{k,j+1,i} q_i = \sum_{i=1}^n d_{k,j,i}(I - \alpha_{k,j} Q \Lambda Q^T)q_i = \sum_{i=1}^n d_{k,j,i}(q_i - \alpha_{k,j}\lambda_iq_i) = \sum_{i=1}^n d_{k,j,i}(1 - \alpha_{k,j}\lambda_i)q_i. \eequationn The result then follows since the columns of $Q$ form an orthonormal basis of $\R{n}$. \eproof \btheorem\label{th.finite} \textit{ Suppose that Assumption~\ref{ass.f} holds and that Algorithm~\ref{alg.lmsd} is run with termination tolerance $\epsilon = 0$. If, for some $(k,j) \in \N{} \times [m]$, the set of computed stepsizes up through iteration $(k,j)$ includes all of the values $\{\lambda_{(l)}^{-1}\}_{l\in[r]}$, then, at the latest, the algorithm terminates finitely at the end of iteration $(k,j)$ with $x_{k,j+1}$ yielding $\|g_{k,j+1}\| = 0$.
} \etheorem \bproof Consider any $(k,j) \in \N{} \times [m]$ such that the stepsize is equal to the reciprocal of an eigenvalue of $A$, i.e., $\alpha_{k,j} = \lambda_{(l)}^{-1}$ for some $l \in [r]$. By Lemma~\ref{lem.recursion}, it follows that \bequationn d_{k,j+1,i} = (1 - \alpha_{k,j} \lambda_i) d_{k,j,i} = (1 - \lambda_{(l)}^{-1} \lambda_i) d_{k,j,i} = 0\ \ \text{for all}\ \ i \in [n]\ \ \text{such that}\ \ \lambda_i = \lambda_{(l)}. \eequationn Along with the facts that Lemma~\ref{lem.recursion} also implies \bequationn d_{k,j,i} = 0 \implies d_{k,j+1,i} = 0\ \ \text{for all}\ \ (k,j) \in \N{} \times [m] \eequationn and $x_{k+1,1} \gets x_{k,m+1}$ (and $g_{k+1,1} \gets g_{k,m+1}$) for all $k \in \N{}$, the desired conclusion follows. \eproof \bremark \textit{ Theorem~\ref{th.finite} implies that Algorithm~\ref{alg.lmsd} will converge finitely by the end of the second cycle if $m \geq r$ and the eigenvalues of $T_1$ include all eigenvalues $\{\lambda_{(l)}\}_{l\in[r]}$. This is guaranteed, e.g., when the first cycle involves $m = n$ steps and $G_1$ has linearly independent columns. } \eremark \section{$R$-Linear Convergence Rate of LMSD}\label{sec.linear} Our primary goal in this section is to prove that Algorithm~\ref{alg.lmsd} converges $R$-linearly for any choice of the history length parameter $m \in \N{}$. For context, we begin by citing two known convergence results that apply for Algorithm~\ref{alg.lmsd}, then turn our attention to our new convergence rate results. \subsection{Known Convergence Properties of LMSD} In the Appendix of \cite{Flet12}, the following convergence result is proved for Algorithm~\ref{alg.lmsd}. The theorem is stated slightly differently here only to account for our different notation. \btheorem\label{th.Fletcher} \textit{ Suppose that Assumption~\ref{ass.f} holds and that Algorithm~\ref{alg.lmsd} is run with termination tolerance $\epsilon = 0$. Then, either $g_{k,j} = 0$ for some $(k,j) \in \N{} \times [m]$ or the sequences $\{g_{k,j}\}_{k=1}^\infty$ for each $j \in [m]$ converge to zero. } \etheorem \noindent As a consequence of this result, we may conclude that if Algorithm~\ref{alg.lmsd} does not terminate finitely, then, according to the relationship \eqref{eq.combination}, the following limits hold: \bsubequations\label{eq.limits} \begin{align} \lim_{k\to\infty}\ g_{k,j} &= 0\ \ \text{for each}\ \ j \in [m]\ \ \text{and} \label{eq.limit_g} \\ \lim_{k\to\infty}\ d_{k,j,i} &= 0\ \ \text{for each}\ \ (j,i) \in [m] \times [n]. \label{eq.limit_d} \end{align} \esubequations Fletcher's result, however, does not illuminate the rate at which these sequences converge to zero. Only for the case of $m=1$ in which Algorithm~\ref{alg.lmsd} reduces to a BB method do the following results from \cite{DaiLiao02} (see Lemma~2.4 and Theorem~2.5 therein) provide a convergence rate guarantee. \begin{lemma}\label{lem.DaiLiao} \textit{ Suppose that Assumption~\ref{ass.f} holds and that Algorithm~\ref{alg.lmsd} is run with history length $m = 1$ and termination tolerance $\epsilon = 0$. Then, there exists $K \in \N{}$, dependent only on $(\lambda_1,\lambda_n)$, such that \bequationn \|g_{k+K,1}\| \leq \thalf \|g_{k,1}\|\ \ \text{for all}\ \ k \in \N{}. \eequationn } \end{lemma} \btheorem\label{th.DaiLiao} \textit{ Suppose that Assumption~\ref{ass.f} holds and that Algorithm~\ref{alg.lmsd} is run with history length $m = 1$ and termination tolerance $\epsilon = 0$. 
Then, either $g_{k,1} = 0$ for some $k \in \N{}$ or \bequationn \|g_{k,1}\| \leq c_1c_2^k\|g_{1,1}\|\ \ \text{for all}\ \ k \in \N{}, \eequationn where, with $K \in \N{}$ from Lemma~\ref{lem.DaiLiao}, the constants are defined as \bequationn c_1 := 2\(\frac{\lambda_n}{\lambda_1} - 1\)^{K-1}\ \ \text{and}\ \ c_2 := 2^{-1/K} \in (0,1). \eequationn Overall, the computed gradients vanish $R$-linearly with constants that depend only on $(\lambda_1,\lambda_n)$. } \etheorem \subsection{$R$-Linear Convergence Rate of LMSD for Arbitrary $m \in \N{}$} Our goal in this subsection is to build upon the proofs of the results stated in the previous subsection (as given in the cited references) to show that Algorithm~\ref{alg.lmsd} possesses an $R$-linear rate of convergence for any $m \in \N{}$. More precisely, our goal is to show that the gradients computed by the algorithm vanish $R$-linearly with constants that depend only on the spectrum of the data matrix $A$. Formally, for simplicity and brevity in our analysis, we make the following standing assumption throughout this section. \bassumption\label{ass.lmsd} \textit{ Assumption~\ref{ass.f} holds, as do the following: \benumerate \item[(i)] Algorithm~\ref{alg.lmsd} is run with $\epsilon = 0$ and $g_{k,j} \neq 0$ for all $(k,j) \in \N{} \times [m]$. \item[(ii)] For all $k \in \N{}$, the matrix $G_k$ has linearly independent columns. Further, there exists a scalar~$\rho \geq 1$ such that, for all $k \in \N{}$, the nonsingular matrix $R_k$ satisfies $\|R_k^{-1}\| \leq \rho\|g_{k,1}\|^{-1}$. \eenumerate } \eassumption \noindent Assumption~\ref{ass.lmsd}$(i)$ is reasonable since, in any situation in which the algorithm terminates finitely, all of our results hold for the iterations prior to that in which the algorithm terminates. Hence, by proving that the algorithm possesses an $R$-linear rate of convergence for cases when it does not terminate finitely, we claim that it possesses such a rate in all cases. As for Assumption~\ref{ass.lmsd}$(ii)$, first recall Remark~\ref{rem.G_lin_ind}. In addition, the bound on the norm of the inverse of $R_k$ is reasonable since, in the case of $m=1$, one finds that $Q_kR_k = G_k = g_{k,1}$ has $Q_k = g_{k,1}/\|g_{k,1}\|$ and $R_k = \|g_{k,1}\|$, meaning that the bound holds with~$\rho=1$. (This means that, in practice, one might choose $\rho \geq 1$ and iteratively remove columns of $G_k$ for the computation of $T_k$ until one finds $\|R_k^{-1}\| \leq \rho\|g_{k,1}\|^{-1}$, knowing that, in the extreme case, there will remain one column for which this condition is satisfied. However, for the reasons already given in Remark~\ref{rem.G_lin_ind}, we make Assumption~\ref{ass.lmsd}, meaning that $G_k$ always has $m$ columns.) We begin by stating two results that reveal important properties of the eigenvalues (corresponding to the elements of $\{T_k\}$) computed by the algorithm, which in turn reveal properties of the stepsizes. The first result is a direct consequence of the \emph{Cauchy Interlacing Theorem}. Since this theorem is well-known---see, e.g., \cite{Parl98}---we state the lemma without proof. \blemma\label{lem.interlacing} \textit{ For all $k \in \N{}$, the eigenvalues of $T_k$ ($= Q_k^TAQ_k$ where $Q_k^TQ_k = I$) satisfy \bequationn \theta_{k,j} \in [\lambda_{m+1-j}, \lambda_{n+1-j}]\ \ \text{for all}\ \ j \in [m]. 
\eequationn } \elemma The second result provides more detail about how the eigenvalues computed by the algorithm at the end of iteration $k \in \N{}$ relate to the weights in \eqref{eq.combination} corresponding to $k$ for all $j \in [m]$. \blemma\label{lem.eigenvectors} \textit{ For all $(k,j) \in \N{} \times [m]$, let $q_{k,j} \in \R{m}$ denote the unit eigenvector corresponding to the eigenvalue $\theta_{k,j}$ of $T_k$, i.e., the vector satisfying $T_kq_{k,j} = \theta_{k,j}q_{k,j}$ and $\|q_{k,j}\| = 1$. Then, defining \bequation\label{eq.Dc} D_k := \bbmatrix d_{k,1,1} & \cdots & d_{k,m,1} \\ \vdots & \ddots & \vdots \\ d_{k,1,n} & \cdots & d_{k,m,n} \ebmatrix\ \ \text{and}\ \ c_{k,j} := D_kR_k^{-1}q_{k,j}, \eequation it follows that, with the diagonal matrix of eigenvalues (namely, $\Lambda$) defined in Assumption~\ref{ass.f}, \bequation\label{eq.theta} \theta_{k,j} = c_{k,j}^T\Lambda c_{k,j}\ \ \text{and}\ \ c_{k,j}^Tc_{k,j} = 1. \eequation } \elemma \bproof For any $k \in \N{}$, it follows from \eqref{eq.Dc} and Lemma~\ref{lem.recursion} (in particular, \eqref{eq.recursion}) that $G_k = QD_k$ where~$Q$ is the orthogonal matrix defined in Assumption~\ref{ass.f}. Then, since $G_k = Q_kR_k$ (recall \eqref{eq.G}), it follows that $Q_k = QD_kR_k^{-1}$, according to which one finds \bequationn T_k = Q_k^TAQ_k = R_k^{-T}D_k^TQ^TAQD_kR_k^{-1} = R_k^{-T}D_k^T\Lambda D_kR_k^{-1}. \eequationn Hence, for each $j \in [m]$, the first equation in \eqref{eq.theta} follows since \bequationn \theta_{k,j} = q_{k,j}^TT_kq_{k,j} = q_{k,j}^TR_k^{-T}D_k^T\Lambda D_kR_k^{-1}q_{k,j} = c_{k,j}^T\Lambda c_{k,j}. \eequationn In addition, since $G_k = QD_k$ and the orthogonality of $Q$ imply that $D_k^TD_k = G_k^TG_k$, and since $Q_k = G_kR_k^{-1}$ with $Q_k$ having orthonormal columns (i.e., with $Q_k$ satisfying $Q_k^TQ_k = I$), it follows that \bequationn c_{k,j}^Tc_{k,j} = q_{k,j}^TR_k^{-T}D_k^TD_kR_k^{-1}q_{k,j} = q_{k,j}^TR_k^{-T}G_k^TG_kR_k^{-1}q_{k,j} = q_{k,j}^TQ_k^TQ_kq_{k,j} = q_{k,j}^Tq_{k,j} = 1, \eequationn which yields the second equation in \eqref{eq.theta}. \eproof The implications of Lemma~\ref{lem.eigenvectors} are seen later in our analysis. For now, combining Lemma~\ref{lem.interlacing}, Lemma~\ref{lem.recursion} (in particular, \eqref{eq.recursion}), and the fact that \eqref{eq.combination} implies \bequation\label{eq.g_norm} \|g_{k,j}\|^2 = \sum_{i=1}^n d_{k,j,i}^2\ \ \text{for all}\ \ (k,j) \in \N{} \times [m], \eequation one is led to the following result pertaining to recursive properties of the weights in \eqref{eq.combination}. \blemma\label{lem.loose_bounds} \textit{ For each $(k,j,i) \in \N{} \times [m] \times [n]$, it follows that \bequation\label{eq.loose_bound} |d_{k,j+1,i}| \leq \delta_{j,i}|d_{k,j,i}|\ \ \text{where}\ \ \delta_{j,i} := \max\left\{\left| 1 - \frac{\lambda_i}{\lambda_{m+1-j}} \right|, \left| 1 - \frac{\lambda_i}{\lambda_{n+1-j}} \right| \right\}. \eequation Hence, for each $(k,j,i) \in \N{} \times [m] \times [n]$, it follows that \bequation\label{eq.loose_bound_m} |d_{k+1,j,i}| \leq \Delta_i|d_{k,j,i}|\ \ \text{where}\ \ \Delta_i := \prod_{j=1}^m \delta_{j,i}.
\eequation Furthermore, for each $(k,j,p) \in \N{} \times [m] \times [n]$, it follows that \bequation\label{eq.loose_bound_j} \sqrt{\sum_{i=1}^p d_{k,j+1,i}^2} \leq \hat\delta_{j,p}\sqrt{\sum_{i=1}^p d_{k,j,i}^2}\ \ \text{where}\ \ \hat\delta_{j,p} := \max_{i\in[p]} \delta_{j,i}, \eequation while, for each $(k,j) \in \N{} \times [m]$, it follows that \bequation\label{eq.loose_bound_g} \|g_{k+1,j}\| \leq \Delta \|g_{k,j}\|\ \ \text{where}\ \ \Delta := \max_{i\in[n]} \Delta_i. \eequation } \elemma \bproof Recall that, for any given $(k,j,i) \in \N{} \times [m] \times [n]$, Lemma~\ref{lem.recursion} (in particular, \eqref{eq.recursion}) states \bequationn d_{k,j+1,i} = (1 - \alpha_{k,j}\lambda_i)d_{k,j,i}. \eequationn The relationship \eqref{eq.loose_bound} then follows due to Lemma~\ref{lem.interlacing}, which, in particular, shows that \bequationn \alpha_{k,j} \in \left[\frac{1}{\lambda_{n+1-j}},\frac{1}{\lambda_{m+1-j}}\right] \subseteq \left[\frac{1}{\lambda_n},\frac{1}{\lambda_1}\right]\ \ \text{for all}\ \ (k,j) \in \N{} \times [m]. \eequationn The consequence \eqref{eq.loose_bound_m} then follows by combining \eqref{eq.loose_bound} for all $j \in [m]$ and recalling that Step~\ref{step.end_of_cycle} yields $g_{k+1,1} \gets g_{k,m+1}$ for all $k \in \N{}$. Now, from \eqref{eq.loose_bound}, one finds that \bequationn \sum_{i=1}^p d_{k,j+1,i}^2 \leq \sum_{i=1}^p \delta_{j,i}^2 d_{k,j,i}^2 \leq \hat\delta_{j,p}^2 \sum_{i=1}^p d_{k,j,i}^2\ \ \text{for all}\ \ (k,j,p) \in \N{} \times [m] \times [n], \eequationn yielding the desired conclusion \eqref{eq.loose_bound_j}. Finally, combining \eqref{eq.loose_bound_m} and \eqref{eq.g_norm}, one obtains that \bequationn \|g_{k+1,j}\|^2 = \sum_{i=1}^n d_{k+1,j,i}^2 \leq \sum_{i=1}^n \Delta_i^2 d_{k,j,i}^2 \leq \Delta^2 \sum_{i=1}^n d_{k,j,i}^2 = \Delta^2 \|g_{k,j}\|^2\ \ \text{for all}\ \ (k,j) \in \N{} \times [m], \eequationn yielding the desired conclusion \eqref{eq.loose_bound_g}. \eproof A consequence of the previous lemma is that if $\Delta_i \in [0,1)$ for all $i \in [n]$, then $\Delta \in [0,1)$, from which~\eqref{eq.loose_bound_g} implies that, for each $j \in [m]$, the gradient norm sequence $\{\|g_{k,j}\|\}_{k\in\N{}}$ vanishes $Q$-linearly. For example, such a situation occurs when $\lambda_n < 2\lambda_1$. However, as noted in \cite{DaiLiao02}, this is a highly special case that should not be assumed to hold widely in practice. A more interesting and widely relevant consequence of the lemma is that for any $i \in [n]$ such that $\Delta_i \in [0,1)$, the sequences $\{|d_{k,j,i}|\}_{k\in\N{}}$ for each $j \in [m]$ vanish $Q$-linearly. For example, this is \emph{always} true for $i=1$, where \bequationn \delta_{j,1} = \max\left\{1 - \frac{\lambda_1}{\lambda_{m+1-j}},1 - \frac{\lambda_1}{\lambda_{n+1-j}}\right\} \in [0,1)\ \ \text{for all}\ \ j \in [m], \eequationn from which it follows that \bequation\label{eq.Delta_1} \Delta_1 = \prod_{j=1}^m \delta_{j,1} \in [0,1). \eequation The following is a crucial consequence that one can draw from this observation. \blemma\label{lem.i=1} \textit{ If $\Delta_1 = 0$, then $d_{1+\khat,\jhat,1} = 0$ for all $(\khat,\jhat) \in \N{} \times [m]$. 
Otherwise, if $\Delta_1 > 0$, then: \benumerate \item[(i)] for any $(k,j) \in \N{} \times [m]$ such that $d_{k,j,1} = 0$, it follows that $d_{k+\khat,\jhat,1} = 0$ for all $(\khat,\jhat) \in \N{} \times [m]$; \item[(ii)] for any $(k,j) \in \N{} \times [m]$ such that $|d_{k,j,1}| > 0$ and any $\epsilon_1 \in (0,1)$, it follows that \bequationn \frac{|d_{k+\khat,\jhat,1}|}{|d_{k,j,1}|} \leq \epsilon_1\ \ \text{for all}\ \ \khat \geq 1 + \left\lceil \frac{\log\epsilon_1}{\log\Delta_1} \right\rceil\ \ \text{and}\ \ \jhat \in [m]. \eequationn \eenumerate } \elemma \bproof If $\Delta_1 = 0$, then the desired conclusion follows from Lemma~\ref{lem.loose_bounds}; in particular, it follows from the inequality~\eqref{eq.loose_bound_m} for $i = 1$. Similarly, for any $(k,j) \in \N{} \times [m]$ such that $d_{k,j,1} = 0$, the conclusion in part~$(i)$ follows from the same conclusion in Lemma~\ref{lem.loose_bounds}, namely, \eqref{eq.loose_bound_m} for $i=1$. Hence, let us continue to prove part $(ii)$ under the assumption that $\Delta_1 \in (0,1)$ (recall \eqref{eq.Delta_1}). Suppose that the given condition holds with $j=1$, i.e., consider $k \in \N{}$ such that $|d_{k,1,1}| > 0$. Then, it follows by Lemma~\ref{lem.loose_bounds} (in particular, \eqref{eq.loose_bound_m} for $j=1$ and $i=1$) that \bequation\label{eq.Delta} \frac{|d_{k+\khat,1,1}|}{|d_{k,1,1}|} \leq \Delta_1^{\khat}\ \ \text{for any}\ \ \khat \in \N{}. \eequation Since $\Delta_1 \in (0,1)$, taking the logarithm of the term on the right-hand side with $\khat = \lceil \log\epsilon_1/\log\Delta_1 \rceil$ yields \bequation\label{eq.log} \left\lceil \frac{\log\epsilon_1}{\log\Delta_1} \right\rceil \log\Delta_1 \leq \(\frac{\log\epsilon_1}{\log\Delta_1}\)\log\Delta_1 = \log\(\epsilon_1\). \eequation Since $\log(\cdot)$ is nondecreasing, the inequalities yielded by \eqref{eq.log} combined with \eqref{eq.Delta} along with \eqref{eq.loose_bound_m} from Lemma~\ref{lem.loose_bounds} yield the desired result for $j=1$. On the other hand, if the conditions of part $(ii)$ hold for some other $j \in [m]$, then the desired conclusion follows from a similar reasoning, though an extra cycle may need to be completed before the desired conclusion holds for all points in the cycle, i.e., for all $\jhat \in [m]$; hence the addition of 1 to $\lceil \log\epsilon_1/\log\Delta_1 \rceil$ in the general conclusion. \eproof One may conclude from Lemma~\ref{lem.i=1} and \eqref{eq.combination} that, for any $(k,j) \in \N{} \times [m]$ and $\epsilon_1 \in (0,1)$, one has \bequationn \frac{|d_{k+\khat,\jhat,1}|}{\|g_{k,j}\|} \leq \epsilon_1\ \ \text{for all}\ \ \khat \geq K_1\ \ \text{and}\ \ \jhat \in [m] \eequationn for some $K_1 \in \N{}$ that depends on the desired contraction factor $\epsilon_1 \in (0,1)$ and the problem-dependent constant~$\Delta_1 \in (0,1)$, but does \emph{not} depend on the iteration number pair $(k,j)$. Our goal now is to show that if a similar, but looser conclusion holds for a squared sum of the weights in~\eqref{eq.combination} up through $p \in [n-1]$, then the squared weight corresponding to index $p + 1$ eventually becomes sufficiently small in a number of iterations that is independent of the iteration number $k$. (For this lemma, we fix $j=\jhat=1$ so as to consider only the first gradient in each cycle. This choice is somewhat arbitrary since our concluding theorem will confirm that a similar result would hold for any $j \in [m]$ and $\jhat = j$.) 
For the lemma, we define the following constants that depend only on $p$, the spectrum of $A$ (which, in particular, yields the bounds and definitions in Lemma~\ref{lem.loose_bounds}), and the scalar constant $\rho \geq 1$ from Assumption~\ref{ass.lmsd}: \bsubequations\label{eq.get_small_defs} \begin{align} \hat\delta_p &:= \(1 + \hat\delta_{1,p}^2 + \hat\delta_{1,p}^2\hat\delta_{2,p}^2 + \cdots + \prod_{j=1}^{m-1} \hat\delta_{j,p}^2 \) \in [1,\infty), \\ \hat\Delta_{p+1} &:= \max\left\{\frac13,1 - \frac{\lambda_{p+1}}{\lambda_n}\right\}^m \in (0,1), \label{eq.Deltahatp1} \\ \text{and}\ \ \hat{K}_p &:= \left\lceil \frac{\log\(2\hat\delta_p\rho\epsilon_p\Delta_{p+1}^{-(K_p+1)}\)}{\log \hat\Delta_{p+1}} \right\rceil. \label{def.Khatp} \end{align} \esubequations \blemma\label{lem.get_small} \textit{ For any $(k,p) \in \N{} \times [n-1]$, if there exists $(\epsilon_p,K_p) \in (0,\tfrac{1}{2\hat\delta_p\rho}) \times \N{}$ independent of $k$ with \bequation\label{eq.p} \sum_{i=1}^p d_{k+\khat,1,i}^2 \leq \epsilon_p^2 \|g_{k,1}\|^2\ \ \text{for all}\ \ \khat \geq K_p, \eequation then one of the following holds: \benumerate \item[(i)] $\Delta_{p+1} \in [0,1)$ and there exists $K_{p+1} \geq K_p$ dependent only on $\epsilon_p$, $\rho$, and the spectrum of $A$ with \bequation\label{eq.p+1_easy} d_{k+K_{p+1},1,p+1}^2 \leq 4\hat\delta_p^2\rho^2\epsilon_p^2\|g_{k,1}\|^2; \eequation \item[(ii)] $\Delta_{p+1} \in [1,\infty)$ and, with $K_{p+1} := K_p + \hat{K}_p + 1$, there exists $\khat_0 \in \{K_p,\dots,K_{p+1}\}$ with \bequation\label{eq.p+1} d_{k+\khat_0,1,p+1}^2 \leq 4\hat\delta_p^2\rho^2\epsilon_p^2\|g_{k,1}\|^2. \eequation \eenumerate } \elemma \bproof By Lemma~\ref{lem.loose_bounds} (in particular, \eqref{eq.loose_bound_m} with $j=1$ and $i = p+1$) and \eqref{eq.g_norm}, it follows that \bequation\label{eq.jump} d_{k+\khat,1,p+1}^2 \leq \(\Delta_{p+1}^{\khat} d_{k,1,p+1}\)^2 = \Delta_{p+1}^{2\khat} d_{k,1,p+1}^2 \leq \Delta_{p+1}^{2\khat} \|g_{k,1}\|^2\ \ \text{for all}\ \ \khat \in \N{}. \eequation If $\Delta_{p+1} \in [0,1)$, then \eqref{eq.jump} immediately implies the existence of $K_{p+1}$ dependent only on $\epsilon_p$, $\rho$, and the spectrum of $A$ such that \eqref{eq.p+1_easy} holds. Hence, let us continue under the assumption that $\Delta_{p+1} \geq 1$, where one should observe that $\rho \geq 1$, $\hat\delta_p \geq 1$, $\epsilon_p \in (0,\tfrac{1}{2\hat\delta_p\rho})$, $K_p \in \N{}$, and $\Delta_{p+1} \geq 1$ imply $2\hat\delta_p\rho\epsilon_p\Delta_{p+1}^{-K_p} \in (0,1)$, meaning that $\hat{K}_p \in \N{}$. To prove the desired result, it suffices to show that if \bequation\label{eq.stay_big} d_{k+\khat,1,p+1}^2 > 4\hat\delta_p^2\rho^2\epsilon_p^2\|g_{k,1}\|^2\ \ \text{for all}\ \ \khat \in \{K_p,\dots,K_{p+1}-1\}, \eequation then \eqref{eq.p+1} holds at the beginning of the next cycle (i.e., when $\khat_0 = K_{p+1}$).
From Lemma~\ref{lem.eigenvectors}, Lemma~\ref{lem.loose_bounds} (in particular, \eqref{eq.loose_bound_j}), \eqref{eq.p}, and \eqref{eq.stay_big}, it follows that with $\{c_{k+\khat,j,i}\}_{i=1}^n$ representing the elements of the vector $c_{k+\khat,j}$ and the matrix $D_{k+\khat,p}$ representing the first $p$ rows of $D_{k+\khat}$, one finds \bequalin \sum_{i=1}^p c_{k+\khat,j,i}^2 &\leq \|D_{k+\khat,p}\|_2^2 \|R_{k+\khat}^{-1}\|^2\|q_{k+\khat,j}\|^2 \\ &\leq \(1 + \hat\delta_{1,p}^2 + \hat\delta_{1,p}^2\hat\delta_{2,p}^2 + \cdots + \prod_{j=1}^{m-1} \hat\delta_{j,p}^2 \) \(\sum_{i=1}^p d_{k+\khat,1,i}^2\) \rho^2 \|g_{k+\khat,1}\|^{-2} \\ &\leq \hat\delta_p^2 (\epsilon_p^2 \|g_{k,1}\|^2) \rho^2 (4\hat\delta_p^2\rho^2\epsilon_p^2)^{-1}\|g_{k,1}\|^{-2} \leq \tfrac14\ \ \text{for all}\ \ \khat \in \{K_p,\dots,K_{p+1}-1\}\ \ \text{and}\ \ j \in [m]. \eequalin Along with Lemma~\ref{lem.eigenvectors}, this implies that \bequation\label{eq.3/4} \theta_{k+\khat,j} = \sum_{i=1}^n \lambda_i c_{k+\khat,j,i}^2 \geq \tfrac34 \lambda_{p+1}\ \ \text{for all}\ \ \khat \in \{K_p,\dots,K_{p+1}-1\}\ \ \text{and}\ \ j \in [m]. \eequation Together with Lemma~\ref{lem.recursion} (see \eqref{eq.recursion}) and $\alpha_{k+\khat+1,j} = \theta_{k+\khat,j}^{-1}$ for all $j \in [m]$, the bound \eqref{eq.3/4} implies \begin{align} d_{k+\khat+2,1,p+1}^2 &= \(\prod_{j=1}^m \(1 - \alpha_{k+\khat+1,j}\lambda_{p+1}\)^2\) d_{k+\khat+1,1,p+1}^2 \nonumber \\ &\leq \hat\Delta_{p+1}^2 d_{k+\khat+1,1,p+1}^2 \ \ \text{for all}\ \ \khat \in \{K_p,\dots,K_{p+1}-1\}. \label{eq.scream} \end{align} Applying this bound recursively, it follows with $K_{p+1} = K_p + \hat{K}_p + 1$ and \eqref{eq.jump} for $\khat = K_{p+1}$ that \bequationn d_{k+K_{p+1},1,p+1}^2 \leq \hat\Delta_{p+1}^{2\hat{K}_p} d_{k+K_p+1,1,p+1}^2 \leq \hat\Delta_{p+1}^{2\hat{K}_p} \Delta_{p+1}^{2(K_p+1)} \|g_{k,1}\|^2 \leq 4\hat\delta_p^2\rho^2\epsilon_p^2\|g_{k,1}\|^2, \eequationn where the last inequality follows by the definition of $\Khat_p$ in \eqref{def.Khatp}. \eproof We have shown that small squared weights in~\eqref{eq.combination} associated with indices up through $p \in [n-1]$ imply that the squared weight associated with index $p+1$ eventually becomes small. The next lemma shows that these latter squared weights also remain sufficiently small indefinitely. \blemma\label{lem.stay_small} \textit{ For any $(k,p) \in \N{} \times [n-1]$, if there exists $(\epsilon_p,K_p) \in (0,\tfrac{1}{2\hat\delta_p\rho}) \times \N{}$ independent of $k$ such that \eqref{eq.p} holds, then, with $\epsilon_{p+1}^2 := (1 + 4\max\{1,\Delta_{p+1}^4\}\hat\delta_p^2\rho^2)\epsilon_p^2$ and $K_{p+1} \in \N{}$ from Lemma~\ref{lem.get_small}, \bequation\label{eq.p+1_all} \sum_{i=1}^{p+1} d_{k+\khat,1,i}^2 \leq \epsilon_{p+1}^2\|g_{k,1}\|^2\ \ \text{for all}\ \ \khat \geq K_{p+1}. \eequation } \elemma \bproof For the same reasons as in the proof of Lemma~\ref{lem.get_small}, the result follows if $\Delta_{p+1} \in [0,1)$. Hence, we may continue under the assumption that $\Delta_{p+1} \geq 1$ and define $\hat\Delta_{p+1} \in (0,1)$ and $\Khat_p \in \N{}$ as in~\eqref{eq.get_small_defs}. By Lemma~\ref{lem.get_small}, there exists $\khat_0 \in \{K_p,\dots,K_{p+1}\}$ such that \bequation\label{eq.p+1_k0} d_{k+\khat,1,p+1}^2 \leq 4\hat\delta_p^2\rho^2\epsilon_p^2\|g_{k,1}\|^2\ \ \text{when}\ \ \khat = \khat_0. \eequation If the inequality in \eqref{eq.p+1_k0} holds for all $\khat \geq \khat_0$, then \eqref{eq.p+1_all} holds with $\epsilon_{p+1}^2 = (1 + 4\hat\delta_p^2\rho^2)\epsilon_p^2$.
Otherwise, let $\khat_1 \in \N{}$ denote the smallest natural number such that \bequation\label{eq.shoulder} d_{k+\khat,1,p+1}^2 \leq 4\hat\delta_p^2\rho^2\epsilon_p^2\|g_{k,1}\|^2\ \ \text{for all}\ \ \khat_0 \leq \khat \leq \khat_1, \eequation but \bequation\label{eq.large_again} d_{k+\khat_1+1,1,p+1}^2 > 4\hat\delta_p^2\rho^2\epsilon_p^2\|g_{k,1}\|^2. \eequation As in the arguments that lead to \eqref{eq.scream} in the proof of Lemma~\ref{lem.get_small}, combining \eqref{eq.p} and \eqref{eq.large_again} implies \bequationn d_{k+\khat_1+3,1,p+1}^2 \leq \hat\Delta_{p+1}^2 d_{k+\khat_1+2,1,p+1}^2. \eequationn Generally, this same argument can be used to show that \bequationn \khat \geq K_p\ \ \text{and}\ \ d_{k+\khat+1,1,p+1}^2 > 4\hat\delta_p^2\rho^2\epsilon_p^2\|g_{k,1}\|^2\ \ \text{imply}\ \ d_{k+\khat+3,1,p+1}^2 \leq \hat\Delta_{p+1}^2 d_{k+\khat+2,1,p+1}^2. \eequationn Since $\hat\Delta_{p+1} \in (0,1)$, this fact and \eqref{eq.large_again} imply the existence of $\khat_2 \in \N{}$ such that \bequation\label{eq.airplane} d_{k+\khat+1,1,p+1}^2 > 4\hat\delta_p^2\rho^2\epsilon_p^2\|g_{k,1}\|^2\ \ \text{for all}\ \ \khat_1 \leq \khat \leq \khat_2 - 2, \eequation but \bequationn d_{k+\khat_2,1,p+1}^2 \leq 4\hat\delta_p^2\rho^2\epsilon_p^2\|g_{k,1}\|^2, \eequationn while, from above, \bequation\label{eq.ground} d_{k+\khat+3,1,p+1}^2 \leq \hat\Delta_{p+1}^2 d_{k+\khat+2,1,p+1}^2\ \ \text{for all}\ \ \khat_1 \leq \khat \leq \khat_2 - 2. \eequation Moreover, by Lemma~\ref{lem.loose_bounds} (in particular, \eqref{eq.loose_bound_m}) and \eqref{eq.shoulder}, it follows that \bsubequations \begin{align} d_{k+\khat_1+1,1,p+1}^2 &\leq \Delta_{p+1}^2 d_{k+\khat_1,1,p+1}^2 \leq 4\Delta_{p+1}^2\hat\delta_p^2\rho^2\epsilon_p^2\|g_{k,1}\|^2 \\ \text{and}\ \ d_{k+\khat_1+2,1,p+1}^2 &\leq 4\Delta_{p+1}^4\hat\delta_p^2\rho^2\epsilon_p^2\|g_{k,1}\|^2. \label{eq.plastic} \end{align} \esubequations Combining \eqref{eq.ground} and \eqref{eq.plastic}, it follows that \bequationn d_{k+\khat+3,1,p+1}^2 \leq 4\hat\Delta_{p+1}^2\Delta_{p+1}^4\hat\delta_p^2\rho^2\epsilon_p^2\|g_{k,1}\|^2\ \ \text{for all}\ \ \khat_1 \leq \khat \leq \khat_2 - 2. \eequationn Overall, since \eqref{eq.Deltahatp1} ensures $\hat\Delta_{p+1} \in (0,1)$, we have shown that \bequation\label{eq.suffice} d_{k+\khat,1,p+1}^2 \leq 4\Delta_{p+1}^4\hat\delta_p^2\rho^2\epsilon_p^2 \|g_{k,1}\|^2\ \ \text{for all}\ \ \khat \in \{\khat_0,\dots,\khat_2\}. \eequation Repeating this argument for later iterations, we arrive at the desired conclusion. \eproof The following lemma is a generalization of Lemma~\ref{lem.DaiLiao} for any $m \in \N{}$. Our proof is similar to that of Lemma~2.4 in \cite{DaiLiao02}. We provide it in full for completeness. \blemma\label{lem.g_contraction} \textit{ There exists $K \in \N{}$ dependent only on the spectrum of $A$ such that \bequationn \|g_{k+K,1}\| \leq \thalf \|g_{k,1}\|\ \ \text{for all}\ \ k \in \N{}. \eequationn } \elemma \bproof By Lemma~\ref{lem.stay_small}, if for some $(\epsilon_p,K_p) \in (0,\tfrac{1}{2\hat\delta_p\rho}) \times \N{}$ independent of $k$ one finds \bequation\label{eq.p_again} \sum_{i=1}^p d_{k+\khat,1,i}^2 \leq \epsilon_p^2 \|g_{k,1}\|^2\ \ \text{for all}\ \ \khat \geq K_p, \eequation then for $\epsilon_{p+1}^2 := (1 + 4\max\{1,\Delta_{p+1}^4\}\hat\delta_p^2\rho^2)\epsilon_p^2$ and some $K_{p+1} \geq K_p$ independent of $k$ one finds \bequation\label{eq.p+1_all_again} \sum_{i=1}^{p+1} d_{k+\khat,1,i}^2 \leq \epsilon_{p+1}^2\|g_{k,1}\|^2\ \ \text{for all}\ \ \khat \geq K_{p+1}. 
\eequation Since Lemma~\ref{lem.i=1} implies that for any $\epsilon_1 \in (0,1)$ one can find $K_1$ independent of $k$ such that \eqref{eq.p_again} holds with $p=1$, it follows that, independent of $k$, there exists a sufficiently small $\epsilon_1 \in (0,1)$ such that \bequationn \epsilon_1^2 \leq \cdots \leq \epsilon_n^2 \leq \tfrac14. \eequationn Hence, for any $k \in \N{}$, it follows that there exists $K = K_n$ such that \bequationn \|g_{k+\khat,1}\|^2 = \sum_{i=1}^n d_{k+\khat,1,i}^2 \leq \tfrac14 \|g_{k,1}\|^2\ \ \text{for all}\ \ \khat \geq K, \eequationn as desired. \eproof We are now prepared to state our final result, the proof of which follows in the same manner as Theorem~\ref{th.DaiLiao} follows from Lemma~\ref{lem.DaiLiao} in \cite{DaiLiao02}. We prove it in full for completeness. \btheorem \textit{ The sequence $\{\|g_{k,1}\|\}$ vanishes $R$-linearly. } \etheorem \bproof If $\Delta \in [0,1)$, then it has already been argued (see the discussion following Lemma~\ref{lem.loose_bounds}) that~$\{\|g_{k,1}\|\}$ vanishes $Q$-linearly. Hence, let us continue assuming that $\Delta \geq 1$. By Lemma~\ref{lem.g_contraction}, there exists~$K \in \N{}$ dependent only on the spectrum of $A$ such that \bequationn \|g_{1+Kl,1}\| \leq \thalf\|g_{1+K(l-1),1}\|\ \ \text{for all}\ \ l \in \N{}. \eequationn Applying this result recursively, it follows that \bequation\label{eq.tight_bound_g} \|g_{1+Kl,1}\| \leq (\thalf)^l \|g_{1,1}\|\ \ \text{for all}\ \ l \in \N{}. \eequation Now, for any $k \geq 1$, let us write $k = Kl+\khat$ for some $l \in \{0\}\cup\N{}$ and $\khat \in \{0\}\cup[K-1]$. It follows that \bequationn l = k/K - \khat/K \geq k/K - 1. \eequationn By this fact, \eqref{eq.loose_bound_g}, and \eqref{eq.tight_bound_g}, it follows that for any $k = Kl+\khat \in \N{}$ one has \bequationn \|g_{k,1}\| \leq \Delta^{\khat-1}\|g_{1+Kl,1}\| \leq \Delta^{K-1}(\thalf)^{k/K-1}\|g_{1,1}\| \leq c_1c_2^k\|g_{1,1}\|, \eequationn where \bequationn c_1 := 2\Delta^{K-1} \ \ \text{and}\ \ c_2 := 2^{-1/K} \in (0,1), \eequationn which implies the desired conclusion. \eproof \section{Numerical Demonstrations}\label{sec.numerical} The analysis in the previous section provides additional insights into the behavior of Algorithm~\ref{alg.lmsd} beyond its $R$-linear rate of convergence. In this section, we provide the results of numerical experiments to demonstrate the behavior of the algorithm in a few types of cases. The algorithm was implemented and the experiments were performed in Matlab. It is not our goal to show the performance of Algorithm~\ref{alg.lmsd} for various values of $m$, say to argue whether the performance improves or not as $m$ is increased. This is an important question for which some interesting discussion is provided by \cite{Flet12}. However, to determine what is a good choice of $m$ for various types of cases would require a larger set of experiments that are outside of the scope of this paper. For our purposes, our only goal is to provide some simple illustrations of the behavior shown by our theoretical analysis. Our analysis reveals that the convergence behavior of the algorithm depends on the spectrum of the matrix $A$. Therefore, we have constructed five test examples, all with $n=100$, but with different eigenvalue distributions.
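Such test problems are straightforward to synthesize from a prescribed spectrum. As an illustration (a sketch of ours; the experiments themselves were run in Matlab), one may proceed as follows:
\begin{verbatim}
# Illustrative sketch: build a strongly convex quadratic with a
# prescribed spectrum and a random orthogonal eigenbasis.
import numpy as np

n, m = 100, 5
lam = np.linspace(1.0, 1.9, n)              # e.g., the first test problem
Q, _ = np.linalg.qr(np.random.randn(n, n))  # random orthogonal matrix
A = Q @ np.diag(lam) @ Q.T                  # A = Q Lambda Q^T
b = np.random.randn(n)
# initial stepsizes drawn uniformly from [1/lambda_n, 1/lambda_1]
alphas = np.random.uniform(1.0 / lam[-1], 1.0 / lam[0], size=m)
\end{verbatim}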
For the first problem, the eigenvalues of $A$ are evenly distributed in $[1,1.9]$. Since this ensures that $\lambda_n < 2\lambda_1$, our analysis reveals that the algorithm converges $Q$-linearly for this problem; recall the discussion after Lemma~\ref{lem.loose_bounds}. All other problems were constructed so that $\lambda_1 = 1$ and $\lambda_n = 100$, for which one clearly finds $\lambda_n > 2\lambda_1$. For the second problem, all eigenvalues are evenly distributed in $[\lambda_1,\lambda_n]$; for the third problem, the eigenvalues are clustered in five distinct blocks; for the fourth problem, all eigenvalues except one are clustered around $\lambda_1$; and for the fifth problem, all eigenvalues except one are clustered around $\lambda_n$. Table~\ref{tab.test_problems} shows the spectrum of $A$ for each problem. The table also shows the numbers of outer and (total) inner iterations required by Algorithm~\ref{alg.lmsd} (indicated by column headers ``$k$'' and ``$j$'', respectively) when it was run with $\epsilon = 10^{-8}$ and either $m=1$ or $m=5$. In all cases, the initial $m$ stepsizes were generated randomly from a uniform distribution over the interval $[\lambda_{100}^{-1},\lambda_1^{-1}]$. One finds that the algorithm terminates in few outer and inner iterations relative to $n$, especially when many of the eigenvalues are clustered. This dependence on clustering of the eigenvalues should not be surprising since, recalling Lemma~\ref{lem.interlacing}, clustered eigenvalues make it likely that an eigenvalue of $T_k$ will be near an eigenvalue of $A$, which in turn implies by Lemma~\ref{lem.recursion} that the weights in the representation \eqref{eq.combination} will vanish quickly. On the other hand, for the problems for which the eigenvalues are more evenly spread in $[1,100]$, the algorithm requires relatively more outer iterations, though still not an excessively large number relative to $n$. For these problems, the performance was better for $m=5$ versus $m=1$, both in terms of outer and (total) inner iterations. \btable[ht]\renewcommand{\tabcolsep}{10pt} \centering \caption{Spectra of $A$ for five test problems along with outer and (total) inner iteration counts required by Algorithm~\ref{alg.lmsd}.
For each spectrum, a set of eigenvalues in an interval indicates that the eigenvalues are evenly distributed within that interval.} \label{tab.test_problems} \btabular{|c|rcl|c|c|c|c|} \hline & & & & \multicolumn{2}{c|}{$m=1$} & \multicolumn{2}{c|}{$m=5$} \\ Problem & \multicolumn{3}{c|}{Spectrum} & \multicolumn{1}{c}{$k$} & $j$ & \multicolumn{1}{c}{$k$} & $j$ \\ \hline \hline 1 & $\{\lambda_{ 1},\dots,\lambda_{100}\}$ & $\subset$ & $[ 1, 1.9]$ & 13 & 13 & 3 & 14 \\ \hline 2 & $\{\lambda_{ 1},\dots,\lambda_{100}\}$ & $\subset$ & $[ 1,100 ]$ & 124 & 124 & 23 & 114 \\ \hline 3 & $\{\lambda_{ 1},\dots,\lambda_{ 20}\}$ & $\subset$ & $[ 1, 2 ]$ & 112 & 112 & 16 & 79 \\ & $\{\lambda_{21},\dots,\lambda_{ 40}\}$ & $\subset$ & $[25, 26 ]$ & & & & \\ & $\{\lambda_{41},\dots,\lambda_{ 60}\}$ & $\subset$ & $[50, 51 ]$ & & & & \\ & $\{\lambda_{61},\dots,\lambda_{ 80}\}$ & $\subset$ & $[75, 76 ]$ & & & & \\ & $\{\lambda_{81},\dots,\lambda_{100}\}$ & $\subset$ & $[99,100 ]$ & & & & \\ \hline 4 & $\{\lambda_{ 1},\dots,\lambda_{ 99}\}$ & $\subset$ & $[ 1, 2 ]$ & 26 & 26 & 4 & 20 \\ & $\lambda_{100}$ & $=$ & $100$ & & & & \\ \hline 5 & $\lambda_1$ & $=$ & $1$ & 16 & 16 & 5 & 25 \\ & $\{\lambda_{ 2},\dots,\lambda_{100}\}$ & $\subset$ & $[99,100 ]$ & & & & \\ \hline \etabular \etable As seen in our analysis (inspired by \cite{Rayd93}, \cite{DaiLiao02}, and \cite{Flet12}), a more refined look into the behavior of the algorithm is obtained by observing the step-by-step magnitudes of the weights in~\eqref{eq.combination} for the generated gradients. Hence, for each of the test problems, we plot in Figures~\ref{fig.p1}, \ref{fig.p2}, \ref{fig.p3}, \ref{fig.p4}, and \ref{fig.p5} these magnitudes (on a log scale) for a few representative values of $i \in [n]$. Each figure consists of four sets of plots: the first and third show the magnitudes corresponding to $\{g_{k,1}\}$ (i.e., for the first point in each cycle) when $m=1$ and $m=5$, respectively, while the second and fourth show the magnitudes at all outer and inner iterations, again when $m=1$ and $m=5$, respectively. In a few of the images, the plot ends before the right-hand edge of the image. This is due to the log of the absolute value of the weight being evaluated as $-\infty$ in Matlab. The figures show that the magnitudes of the weights corresponding to $i=1$ always decrease monotonically, as proved in Lemma~\ref{lem.i=1}. The magnitudes corresponding to $i=2$ also often decrease monotonically, but, as seen in the results for Problem~5, this is not always the case. In any case, the magnitudes corresponding to $i=50$ and $i=100$ often do not decrease monotonically, though, as proved in our analysis, one observes that the magnitudes demonstrate a downward trend over a finite number of cycles. Even further insight into the plots of these magnitudes can be gained by observing the values of the constants $\{\Delta_i\}_{i\in[n]}$ for each problem and history length. Recalling \eqref{eq.loose_bound_m}, these constants bound the increase that a particular weight in \eqref{eq.combination} might experience from one point in a cycle to the same point in the subsequent cycle. For illustration, we plot in Figures~\ref{fig.p1delta}, \ref{fig.p2delta}, \ref{fig.p3delta}, \ref{fig.p4delta}, and \ref{fig.p5delta} these constants. Values less than 1 are indicated by a purple bar while values greater than or equal to 1 are indicated by a blue bar. Note that, in Figure~\ref{fig.p4delta}, all values are small for both history lengths except $\Delta_{100}$. In Figure~\ref{fig.p5delta}, $\Delta_1$ is less than one in both plots, but the remaining constants are large for $m=1$ while being small for $m=5$.
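These constants are computable directly from the spectrum of $A$; the following sketch (ours, not the code used to produce the figures) evaluates them via \eqref{eq.loose_bound}:
\begin{verbatim}
# Illustrative sketch: the per-mode constants Delta_i, computed from
# the eigenvalues lam of A (sorted increasingly) and history length m.
import numpy as np

def lmsd_Delta(lam, m):
    n = len(lam)
    Delta = np.ones(n)
    for j in range(1, m + 1):
        lo = lam[m - j]                  # lambda_{m+1-j}
        hi = lam[n - j]                  # lambda_{n+1-j}
        Delta *= np.maximum(np.abs(1.0 - lam / lo),
                            np.abs(1.0 - lam / hi))
    return Delta    # Delta_i < 1 means weight i contracts every cycle
\end{verbatim}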
\bfigure[H] \centering \includegraphics[width=0.47\textwidth,clip=true,trim=15 5 55 15]{p1m1}\qquad \includegraphics[width=0.47\textwidth,clip=true,trim=15 5 55 15]{p1m5} \caption{Weights in \eqref{eq.combination} for problem $1$ with history length $m=1$ (left two plots) and $m=5$ (right two plots).} \label{fig.p1} \efigure \bfigure[H] \centering \includegraphics[width=0.33\textwidth,clip=true,trim=10 15 25 0]{p1m1d}\qquad \includegraphics[width=0.33\textwidth,clip=true,trim=10 15 25 0]{p1m5d} \caption{Constants in \eqref{eq.loose_bound_m} for problem $1$ with history length $m=1$ (left plot) and $m=5$ (right plot).} \label{fig.p1delta} \efigure \bfigure[H] \centering \includegraphics[width=0.47\textwidth,clip=true,trim=15 5 55 15]{p2m1}\qquad \includegraphics[width=0.47\textwidth,clip=true,trim=15 5 55 15]{p2m5} \caption{Weights in \eqref{eq.combination} for problem $2$ with history length $m=1$ (left two plots) and $m=5$ (right two plots).} \label{fig.p2} \efigure \bfigure[H] \centering \includegraphics[width=0.33\textwidth,clip=true,trim=10 15 25 0]{p2m1d}\qquad \includegraphics[width=0.33\textwidth,clip=true,trim=10 15 25 0]{p2m5d} \caption{Constants in \eqref{eq.loose_bound_m} for problem $2$ with history length $m=1$ (left plot) and $m=5$ (right plot).} \label{fig.p2delta} \efigure \bfigure[H] \centering \includegraphics[width=0.47\textwidth,clip=true,trim=15 5 55 15]{p3m1}\qquad \includegraphics[width=0.47\textwidth,clip=true,trim=15 5 55 15]{p3m5} \caption{Weights in \eqref{eq.combination} for problem $3$ with history length $m=1$ (left two plots) and $m=5$ (right two plots).} \label{fig.p3} \efigure \bfigure[H] \centering \includegraphics[width=0.33\textwidth,clip=true,trim=10 15 25 0]{p3m1d}\qquad \includegraphics[width=0.33\textwidth,clip=true,trim=10 15 25 0]{p3m5d} \caption{Constants in \eqref{eq.loose_bound_m} for problem $3$ with history length $m=1$ (left plot) and $m=5$ (right plot).} \label{fig.p3delta} \efigure \bfigure[H] \centering \includegraphics[width=0.47\textwidth,clip=true,trim=15 5 55 15]{p4m1}\qquad \includegraphics[width=0.47\textwidth,clip=true,trim=15 5 55 15]{p4m5} \caption{Weights in \eqref{eq.combination} for problem $4$ with history length $m=1$ (left two plots) and $m=5$ (right two plots).} \label{fig.p4} \efigure \bfigure[H] \centering \includegraphics[width=0.33\textwidth,clip=true,trim=10 15 25 0]{p4m1d}\qquad \includegraphics[width=0.33\textwidth,clip=true,trim=10 15 25 0]{p4m5d} \caption{Constants in \eqref{eq.loose_bound_m} for problem $4$ with history length $m=1$ (left plot) and $m=5$ (right plot).} \label{fig.p4delta} \efigure \bfigure[H] \centering \includegraphics[width=0.47\textwidth,clip=true,trim=15 5 55 15]{p5m1}\qquad \includegraphics[width=0.47\textwidth,clip=true,trim=15 5 55 15]{p5m5} \caption{Weights in \eqref{eq.combination} for problem $5$ with history length $m=1$ (left two plots) and $m=5$ (right two plots).} \label{fig.p5} \efigure \bfigure[H] \centering \includegraphics[width=0.33\textwidth,clip=true,trim=10 15 25 0]{p5m1d}\qquad \includegraphics[width=0.33\textwidth,clip=true,trim=10 15 25 0]{p5m5d} \caption{Constants in \eqref{eq.loose_bound_m} for problem $5$ with history length $m=1$ (left plot) and $m=5$ (right plot).} \label{fig.p5delta} \efigure \section{Conclusion}\label{sec.conclusion} We have shown that the limited memory steepest descent (LMSD) method proposed by
\cite{Flet12} possesses an $R$-linear rate of convergence for any history length $m \in \N{}$ when it is employed to minimize a strongly convex quadratic function. Our analysis effectively extends that in \cite{DaiLiao02}, which covers only the $m=1$ case. We have also provided the results of numerical experiments that demonstrate the practical performance of the algorithm and that are informed by our theoretical analysis. \ifthenelse{1 = 1}{ \bibliographystyle{plain} }{ \bibliographystyle{IMANUM-BIB} }
\section{Introduction}
Clustering objects based on their similarities is a basic data mining approach in statistical analysis. In particular, graph (or network) data that reflect relationships between nodes are often acquired in various scientific domains such as protein-protein interaction, neural networks and social networks \cite{fortunato2010community}, and potentially provide quite useful information on the underlying structure of the system in question. Specifically, our interest is to detect a possible \lq community\rq,~or cluster structure of an undirected graph, which is defined as block structure of a graph (Fig.\ref{repmat}a), where the corresponding edge-weight matrix consists of several cluster blocks (four cluster blocks in Fig.\ref{repmat}b). To detect such structure, a number of clustering methods have been proposed in the literature of statistical physics and information theory \cite{newman2004finding, reichardt2006statistical, bolla2013spectral}. Mainly, there are four approaches: graph partitioning, hierarchical clustering, partitional clustering and spectral clustering \cite{fortunato2010community, bolla2013spectral, ng2002spectral}. However, the conventional framework for the analysis of community structure is typically for an unsigned graph, in which edge weights are constrained to be non-negative. Recently, the analysis of signed graphs, which allow for negative weights, has gained much attention \cite{kunegis2010spectral}. Indeed, in real data, it is often essential to take into account negative as well as positive relationships for a better understanding of the underlying community structure in a graph such as a social network. Most methods in the literature, however, address this problem in a rather limited framework in which edge weights within a cluster are positive while those between clusters are negative (i.e., weakly balanced structure) \cite{kunegis2010spectral}. On the other hand, it still remains an open question how to cluster nodes in a more general framework, for instance, when negative edge weights occur within a cluster \cite{yang2007community}. In the present paper, we consider a general framework for community structure as follows. We assume that edge weights are independently generated from a generative model that is specific to a particular cluster block and characterizes the distribution of edges in that block. Further, we assume that these distributions are distinguishable in terms of their mean and variance. For this framework, as a first step toward addressing the clustering problem, we aim to develop a statistical method for testing the existence of the underlying community structure. As regards statistical tests on community structure, several methods have been proposed in the context of unsigned (weighted or unweighted) graphs \cite{fortunato2010community}. A major approach to this problem is to evaluate the stability of cluster solutions when the graph in question is contaminated with noise \cite{gfeller2005finding, karrer2007robustness}. If similar cluster solutions are yielded for contaminated graphs, this suggests the stability of the cluster solution for the original graph, providing evidence of community structure. The bootstrap method \cite{rosvall2010mapping} is in a similar line with this approach. A second approach is based on comparing the cluster solution for the original graph with the solutions for randomly permuted graphs.
As a statistic for testing the significance, the entropy of graph configurations \cite{bianconi2009assessing}, or the \lq C-score\rq~focusing on the lowest internal degrees \cite{lancichinetti2010statistical}, have been proposed. The common feature of these state-of-the-art methods is that a cluster solution to the graph in question is required for testing. In other words, the test result depends on the clustering method that one uses. In this sense, these methods test the significance of a yielded cluster solution, rather than the existence of community structure itself. For the general framework of our interest, such an approach is not applicable because appropriate clustering methods are not readily available. We propose a general method for testing community structure of an edge-weighted graph with real-valued weights, which does not require a cluster solution. Our method is based on the asymptotic behavior of the eigenvalues of the normalized weight matrix of the graph, which is described by the Wigner semicircular law when there is no community structure. In a similar line with our approach, in the case of a binary-valued graph, a statistical test on community structure has recently been proposed \cite{bickel2016hypothesis} that is based on the exact asymptotic behavior of (maximum) eigenvalues. However, their method is not directly applicable to real-valued graphs where we take into account both mean and variance, because the Bernoulli distribution assumed in their method cannot properly capture these quantities. Our method provides a nontrivial extension of detecting community structure to real-valued graphs, which has a wide range of applications to network data. In the following sections, first, a theoretical foundation for our method is provided. Second, it is shown that our method outperforms other methods on synthetic data. Third, we apply our method to real data.
\begin{figure}[ht!] \begin{center} \subfigure[Edge-weighted graph]{ \label{graphrep} \includegraphics[scale=0.16, trim=0mm 0mm 0mm 0mm]{rmtfigure.eps} } \subfigure[Edge-weight matrix]{ \label{matrixrep} \includegraphics[scale=0.12, trim=0mm 0mm 0mm 0mm]{matrep.eps} }\\ \end{center} \vspace{-5mm} \caption{\it \small Illustration of two-way community structure in a graph. Panel (a): Graph representation (edge-weighted graph). Panel (b): Matrix representation (edge-weight matrix), where strengths of relationships between nodes are denoted by color.} \label{repmat} \end{figure}
\section{Method}
Our statistical test on community structure is based on the probability distribution of the eigenvalues of the normalized edge-weight matrix (we define \lq normalization\rq~in Section~\ref{stattest}). We make the best use of asymptotic results on such a distribution when there is no community structure, which have been intensively studied in the field of Random Matrix Theory in theoretical physics \cite{mehta2004random}. In this section, we provide a theoretical foundation for our statistical test.
\begin{figure}[ht!] \begin{center} \subfigure[Setting of parameters]{ \label{lattice} \includegraphics[scale=0.16, trim=-0mm 0mm 0mm 0mm]{rmtpaperLattice.eps} } \subfigure[Tracy-Widom distribution]{ \label{density} \includegraphics[scale=0.12, trim=0mm 0mm 0mm 0mm]{f2fig.eps} }\\ \end{center} \vspace{-5mm} \caption{\it \small Panel (a): Illustration of the setting of community structure in matrix representation, where the nodes are arranged in the order of cluster labels.
Each cluster block is characterized by its mean $\mu$ and standard deviation $\sigma$, with cluster block index $(k, k')$. Panel (b): The density function of the Tracy-Widom distribution for Gaussian orthogonal ensembles with $\beta=1$ (the first derivative of $F_1(x)$ in Eq.(\ref{F1})), generated by the function {\it dtw} in the R-package \{RMTstat\}. The critical values at significance levels $\alpha=0.05$ and $\alpha/4 = 0.0125$ are $0.979$ (green line) and $1.889$ (red line), respectively. } \label{F2} \end{figure}
\subsection{Setting} \label{setting}
We consider a clustering problem of nodes for an undirected edge-weighted graph $G=(V, E)$, where $V$ consists of $n$ vertices $\{v_1, \ldots, v_n\}$, and $E$ is represented by the edge-weight matrix $\boldsymbol{W}_n$, an $n \times n$ symmetric (real Hermitian) matrix with elements $w_{i, j}=w_{j, i} \in \mathbb{R}$ and $w_{i, i} = 0$ ($\mathbb{R}$ denotes the set of real numbers). Let us assume that there are $K$ clusters of nodes, denoted $c_1, \ldots, c_K$. We define cluster block $(k, k')$ as the set of weights $w_{i, j}$ such that nodes $i$ and $j$ belong to clusters $c_k$ and $c_{k'}$, respectively: $v_i \in c_k$ and $v_j \in c_{k'}$ ($1 \leq k, k' \leq K$). Here, we assume that each off-diagonal weight $w_{i, j}$ is independently drawn from a certain distribution. With this assumption, we define a $K$-way community structure as one characterized by different distributions in the $K \times K$ cluster blocks. To elaborate this definition, we assume the following distribution for each cluster block:
\begin{eqnarray} \nonumber w_{i, j} &\sim& g_{k, k'} ~~ (i \neq j)\\ g_{k, k'} &=& \mu_{k, k'} + g \times \sigma_{k, k'}, \label{settingw} \end{eqnarray}
where $v_i \in c_k$, $v_j \in c_{k'}$, and $g$ is a certain probability distribution. This definition says that the pair of parameters $(\mu_{k, k'}, \sigma_{k, k'})$ characterizes each cluster block and, hence, the community structure (Fig.\ref{F2}a). Note that in this definition we exclude the degenerate case that $\mu_{k, k'} = \mathrm{constant}$ and $\sigma_{k, k'}=0$, in which the variance of the whole set $\{w_{i, j}\}$ becomes zero. Since the community structure of our interest is based on differences of weight distributions, it is translation and scale invariant with respect to the whole set of weights. Hence, to simplify the problem, as a preprocessing step, we standardize the off-diagonal elements of $\boldsymbol{W}_n$ using all off-diagonal weights $w_{i, j}(i \neq j)$ so that the mean is zero and the variance one. We denote by $S$ the mapping that standardizes the edge-weight matrix in this way, transforming each element of the matrix as
\begin{eqnarray} \nonumber S: &&w_{i, j} \rightarrow (w_{i, j} - \mu)/\sigma ~~\mbox{for}~ i \neq j \\ &&w_{i, i} \rightarrow 0, \label{S} \end{eqnarray}
where $\mu$ and $\sigma$ are the mean and the standard deviation of the whole set of off-diagonal elements $\{ w_{i, j }\}$. In practice, this mean and standard deviation may be replaced by their empirical counterparts $\mu_{emp}$ and $\sigma_{emp}$. For the standardized edge-weight matrix $S(\boldsymbol{W}_n)$, we assume that the mean and the variance of $g$ in Eq.(\ref{settingw}) are zero and one, respectively. In this setting, the mean and the variance in cluster block $(k, k')$ are $\mu_{k, k'}$ and $\sigma_{k, k'}^2$, respectively. The differences of these parameters distinguish between clusters in terms of the first and second moments, while controlling moments higher than two.
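For illustration, the following minimal Python sketch (our own, not part of the original implementation; a Gaussian $g$ and all function names are assumptions) generates an edge-weight matrix from the block model in Eq.(\ref{settingw}) and applies the standardization mapping $S$ in Eq.(\ref{S}):
\begin{verbatim}
import numpy as np

def generate_W(sizes, mu, sigma, rng=np.random.default_rng(0)):
    # sizes: cluster sizes; mu, sigma: K x K block parameters; g = N(0, 1) here
    labels = np.repeat(np.arange(len(sizes)), sizes)
    n = labels.size
    W = mu[np.ix_(labels, labels)] \
        + rng.standard_normal((n, n)) * sigma[np.ix_(labels, labels)]
    W = np.triu(W, 1)          # draw each w_ij once
    return W + W.T             # symmetric, zero diagonal

def S(W):
    # standardization mapping S: zero-mean, unit-variance off-diagonals
    off = ~np.eye(W.shape[0], dtype=bool)
    Ws = (W - W[off].mean()) / W[off].std()
    np.fill_diagonal(Ws, 0.0)
    return Ws
\end{verbatim}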
Using this setting of community structure, we define the no-community case as a single community with $K=1$, where $\mu_{k, k'}=0$ and $\sigma_{k, k'}=1$ for $S(\boldsymbol{W}_n)$. Note that since $g$ is arbitrary, including a mixture distribution from a certain distribution family, our definition of no community structure includes the case that each weight is generated from a specific distribution in a list of distributions in random order. Importantly, when we shuffle the off-diagonal elements of $\boldsymbol{W}_n$ at random (in an element-wise manner), the community structure always disappears. Indeed, in such a case, each element $w_{i, j}'$ of the shuffled matrix $\boldsymbol{W}_n'$ independently and identically follows the mixture distribution consisting of the different components, i.e., $\sum_{k, k'}\pi_{k, k'}g_{k, k'}$, where $\pi_{k, k'}$ is the proportion of elements of cluster block $(k, k')$ in the original matrix $\boldsymbol{W}_n$. We use this property in our statistical test as an alternative way of estimating confidence intervals (Section~\ref{stattest}).
\subsection{Statistical test}\label{stattest}
In this section, we develop a statistical test on the existence of the community structure defined in Section \ref{setting} (i.e., $K=1$ vs. $K>1$). We base our test on the asymptotic behavior of the eigenvalues of $S(\boldsymbol{W}_n)$ as the number of nodes $n$ goes to $\infty$ when there is no community structure. A useful result of Random Matrix Theory in our context is that if the elements of an infinite-dimensional symmetric matrix $\boldsymbol{X}$ independently follow a certain distribution with mean zero and variance one, then the empirical (random) distribution of the eigenvalues $\lambda$ of $\boldsymbol{X_n}/\sqrt{n}$, where $\boldsymbol{X_n}$ is the principal submatrix of $\boldsymbol{X}$ for the first $n$ rows and columns, converges almost surely to the Wigner semicircular distribution as $n$ goes to $\infty$ (semicircular law) \cite[p.136]{tao2012topics}:
\begin{eqnarray} f_{sc}(\lambda) \equiv \frac{1}{2\pi} \sqrt{4-\lambda^2}. \label{semi} \label{semicircular} \end{eqnarray}
Note that this law holds for any generative distribution of the elements of matrix $\boldsymbol{X}$ (as long as they are independently drawn), which is referred to as the universality property of the law. Also, this law holds even if we replace the diagonal elements by zeros. In order to apply the semicircular law in our context, we consider a normalization mapping of the edge-weight matrix $\boldsymbol{W}_n$, transforming each element of the matrix as
\begin{eqnarray*} \nonumber T: && w_{i, j} \rightarrow S(w_{i, j})/\sqrt{n}, \end{eqnarray*}
where $S$ is the standardization mapping in Eq.(\ref{S}). Now, let us assume that the elements of an edge-weight matrix $\boldsymbol{W}_n$ are generated as in Eq.(\ref{settingw}). In this setting, the semicircular law suggests that if the eigenvalues of $T(\boldsymbol{W}_n)$ for sufficiently large $n$ do not follow the Wigner semicircular distribution in Eq.(\ref{semicircular}), then there should be some $K$-way community structure in the graph ($K>1$) because of our assumption in Eq.(\ref{settingw})\footnote{Without the assumption in Eq.(\ref{settingw}), this property does not hold. For instance, one can construct a scale-free graph whose eigenvalues do not follow the semicircular law \cite{Nagao}.}. However, the converse argument does not necessarily hold.
That is to say, the fact that the eigenvalues of $T(\boldsymbol{W}_n)$ follow the Wigner semicircular distribution does not imply that there is no community structure (i.e., $K=1$). A counterexample is given as follows (proof in Appendix~\ref{semicircular1}).
\begin{exmp} \label{theory counter} Let $\boldsymbol{W}_n$ be an $n \times n$ symmetric edge-weight matrix that has $K$-way community structure with equal cluster sizes ($n/K$) as defined in Section~\ref{setting}. Suppose that $\mu_{k, k'}=0$ for $\forall k, k'$, $\sigma_{k, k'}^2=0$ for $k \neq k'$, and $\sigma_{k, k}^2=1$. Then, the empirical eigenvalue distribution of $T(\boldsymbol{W}_n)$ almost surely converges to the Wigner semicircular distribution in Eq.(\ref{semicircular}) as $n$ goes to $\infty$. \end{exmp}
Nonetheless, in our setting, we can show that an additional condition on the eigenvalue distribution of an exponentially mapped edge-weight matrix ensures that the converse argument also holds. For this purpose, we introduce the exponential mapping $Exp$ that transforms each element of $\boldsymbol{W}_n$ as
\begin{eqnarray} \nonumber Exp:&& w_{i, j} \rightarrow \exp(t\times w_{i, j})~~\mbox{for}~ i \neq j \\ && w_{i, i} \rightarrow 0, \label{et} \end{eqnarray}
where $t \in \mathbb{R}$ is a tuning parameter (we do not explicitly denote the dependence of $Exp$ on $t$ to avoid clutter). Subsequently, we define the normalization mapping $T_e$ for the exponentially transformed matrix as
\begin{eqnarray} T_e: && w_{i, j} \rightarrow S(Exp(w_{i, j}))/\sqrt{n} . \label{et2} \end{eqnarray}
Now, the following theorem provides a necessary and sufficient condition for the existence of community structure (proof in Appendix~\ref{semicircular2}).
\begin{theorem} \label{theory1} Let $\boldsymbol{W}_n$ be an $n \times n$ weight matrix defined in Section~\ref{setting} with the fixed proportion of cluster sizes $(r_1, \ldots, r_K)$ and the pairs of parameters $\{ (\mu_{k, k'}, \sigma_{k, k'})\} (k, k'=1, \ldots, K)$. Suppose that the moment generating function $M(t)$ of $g$ exists in an open interval containing zero ($g$ is defined in Eq.(\ref{settingw})). Then, the following statements (C1) and (C2) are equivalent: \begin{itemize} \item [] (C1) There is no community structure (i.e., $K=1$). \item [] (C2) The empirical eigenvalue distributions of the following two matrices almost surely converge to the Wigner semicircular distribution in Eq.(\ref{semicircular}) as $n$ goes to $\infty$: $T(\boldsymbol{W}_n)$ and $T_e(\boldsymbol{W}_n)$ for some real value $t_0 \neq 0$. \end{itemize} \end{theorem}
Theorem~\ref{theory1}\footnote{Alternatively, one may replace the exponential mapping by the square mapping, which requires a weaker assumption on the existence of moments of $g$. However, the square transformation seems to have less power when we establish a statistical test (as in the following paragraphs), possibly because it is not a one-to-one mapping. This observation motivates us to work with the exponential mapping.} motivates us to use the semicircular law to establish a statistical test of the null hypothesis $H_0$:
\begin{eqnarray} \label{H0} H_0: \mbox{There is no community structure}. \end{eqnarray}
As implied in the proof of Theorem~\ref{theory1}, the violation of the semicircular law for $T(\boldsymbol{W}_n)$ is related to differences of the means ($\mu_{k, k'}$) among cluster blocks, while the violation of the law for $T_e(\boldsymbol{W}_n)$ is related to differences of the variances ($\sigma_{k, k'}^2$).
Hence, if we take into account the eigenvalues of these two matrices, we can capture differences in the first and second moments of the underlying distributions among cluster blocks. In practice, to test the null hypothesis $H_0$, rather than dealing with the distribution of all eigenvalues, we focus on the extreme eigenvalues, because the proof of Theorem~\ref{theory1} suggests that the extreme eigenvalues may be closely related to the violation of the semicircular law when there is community structure. The largest eigenvalue may deviate positively from the expected value $2$, or the smallest eigenvalue may deviate negatively from the expected value $-2$. Note that, strictly speaking, the independence assumption on the weights is broken if we transform them by $T$ or $T_e$ using the empirical mean and standard deviation $\mu_{emp}$ and $\sigma_{emp}$. For simplicity, however, we ignore this effect here. The behavior of the largest eigenvalue has been well studied in the literature when the elements of the edge-weight matrix $\boldsymbol{W}_n$ are independently generated by a certain symmetric distribution $g$ (typically Gaussian; otherwise, its density function should be even, with tails no heavier than those of the Gaussian distribution) with mean zero and variance one for the off-diagonal elements and with mean zero and variance two for the diagonal elements. In this setting, the largest eigenvalue $\lambda_{max}$ asymptotically follows the Tracy-Widom distribution for Gaussian orthogonal ensembles with parameter $\beta=1$:
\begin{eqnarray} \lim_{n \rightarrow \infty}P(\lambda_{max}\leq 2 + x/n^{2/3})=F_1(x), \label{F1} \end{eqnarray}
where $F_1(x) \equiv \exp \{-(1/2)\int_{x}^{\infty} q(y)dy\} (F_2(x))^{1/2}$ with $F_2(x) \equiv \exp \{- \int_{x}^{\infty}(y-x)q^2(y)dy\}$, where $q(x)$ is the solution of the Painlev\'e I\hspace{-.1em}I equation $d^2q/dx^2=xq + 2q^3$ with the boundary condition $q(x)\sim \mbox{Ai}(x)$ as $x \rightarrow \infty$ \cite{tracy1996orthogonal, tracy2009distributions}. Note that the Tracy-Widom distribution describes the maximum eigenvalue of specific types of symmetric matrices (e.g., Gaussian ensembles), while the semicircular law holds for the eigenvalue distribution of general symmetric matrices (Wigner ensembles). Moreover, in our framework the diagonal elements are all zero, which is a slightly different situation from the conventional assumption for the Tracy-Widom distribution. Nevertheless, because of the universality property of the Tracy-Widom distribution \cite[Theorem 21.4.3]{ben2011wigner}, we can safely apply Eq.(\ref{F1}) in our context (our setting obviously satisfies the universality condition that the diagonal part should be symmetric with a sub-Gaussian tail). Using the Tracy-Widom distribution in Eq.(\ref{F1}), we set confidence intervals for our statistical test as follows. For the normalized edge-weight matrix $T(\boldsymbol{W}_n)$, we set the confidence interval $CI_{max}$ of the largest eigenvalue $\lambda_{max}$ at level $\alpha$. Since the violation of the semicircular law occurs as a positive deviation from the expected value, we consider the one-sided confidence interval $(-\infty, q)$, where $q$ is the critical value at significance level $\alpha$, i.e., $P(\lambda_{max}\geq q \mid H_0)=\alpha$, which is estimated by $F_1(x)$ in Eq.(\ref{F1}) (refer to the shape of its first derivative in Fig.\ref{F2}b).
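As an illustration of this one-sided test, the following minimal sketch (our own code; the critical value $0.979$ at $\alpha=0.05$ is the one quoted in the caption of Fig.\ref{F2}) computes the statistic $x=(\lambda_{max}-2)\,n^{2/3}$ for $T(\boldsymbol{W}_n)$ and rejects when it exceeds the critical value:
\begin{verbatim}
import numpy as np

def largest_eigenvalue_test(W, crit=0.979):
    # T(W): standardize off-diagonals, zero the diagonal, divide by sqrt(n)
    n = W.shape[0]
    off = ~np.eye(n, dtype=bool)
    T = (W - W[off].mean()) / W[off].std()
    np.fill_diagonal(T, 0.0)
    T /= np.sqrt(n)
    lam_max = np.linalg.eigvalsh(T)[-1]       # eigvalsh sorts ascending
    stat = (lam_max - 2.0) * n ** (2.0 / 3.0) # x in P(lam_max <= 2 + x/n^(2/3))
    return stat, stat > crit                  # True: lam_max falls outside CI_max
\end{verbatim}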
If the generative distribution $g$ is not symmetric or is heavy-tailed, one may instead evaluate the distribution of the largest eigenvalue by means of a permutation test for $T(\boldsymbol{W}_n)$, though this may require some computation time. In addition to the largest eigenvalue, we also consider testing the smallest eigenvalue $\lambda_{min}$, which may likewise cause a violation of the semicircular law (what matters is indeed the largest magnitude among the eigenvalues). In this case, the confidence interval $CI_{min}$ is given by $(-q, \infty)$. In a similar manner, we also test the largest and the smallest eigenvalues of the exponentially normalized weight matrix. We first standardize the data and then apply the mapping $T_e$, where we set $t_0$ to $1/2$ by default. This results in the transformed matrix $T_e(S(\boldsymbol{W}_n))$ (we denote the corresponding confidence intervals as $CI'_{max}$ and $CI'_{min}$, respectively). Since this procedure involves a series of four statistical tests, we set the level of significance to $\alpha/4$ for each test, taking into account the Bonferroni correction (Algorithm~\ref{algo}\footnote{$\mathbb{I}(a)$ is an indicator function: 1 if $a$ holds; 0 otherwise.}).
\begin{algorithm} \caption{Testing the existence of community structure} \label{algo} \begin{algorithmic} \STATE {\bfseries Input:} Edge-weight matrix $\boldsymbol{W}$, confidence intervals $CI_{max}$, $CI_{min}$, $CI'_{max}$ and $CI'_{min}$ at level $\alpha/4$. \STATE $s \leftarrow 0$ \STATE $s$ $\leftarrow$ $s$+$\mathbb{I}$ (max. eigenvalue of $T(\boldsymbol{W})$ $\in$ $CI_{max}$) \STATE $s$ $\leftarrow$ $s$+$\mathbb{I}$ (min. eigenvalue of $T(\boldsymbol{W})$ $\in$ $CI_{min}$) \STATE $s$ $\leftarrow$ $s$+$\mathbb{I}$ (max. eigenvalue of $T_e(S(\boldsymbol{W}))$ $\in$ $CI'_{max}$) \STATE $s$ $\leftarrow$ $s$+$\mathbb{I}$ (min. eigenvalue of $T_e(S(\boldsymbol{W}))$ $\in$ $CI'_{min}$) \IF {$s=4$} \STATE Accept $H_0$ \ELSE \STATE Reject $H_0$ \ENDIF \end{algorithmic} \end{algorithm}
\section{Simulation study on synthetic data}\label{shynthetic}
In this section, we report on a simulation study to evaluate the performance of our method. First, we investigate the validity of using $F_1(x)$ in Eq.(\ref{F1}) as an approximation of the distribution of the maximum eigenvalue $\lambda_{max}$ when $n$ is finite. Second, we investigate the power of our method when the null hypothesis $H_0$ is not true. Third, we compare the performance of our method outlined in Algorithm~\ref{algo} with other methods. Basically, existing methods in the literature consist of two steps. In the first step, a clustering solution for a given graph is yielded by an (arbitrary) clustering method. The yielded solution is subsequently compared with the clustering solutions for randomized graphs, and the comparison is evaluated by a particular statistic. In this study, we adapt one of the state-of-the-art methods, based on clustering entropy (\lq CE\rq, originally designed for an unweighted graph) \cite{gfeller2005finding}: $S = - \frac{1}{L} \sum_{(i, j)} \{ p_{i,j} \log_2p_{i, j} + (1-p_{i, j}) \log_2(1-p_{i,j}) \}$, where $L$ is the total number of edges in the graph, and $p_{i, j}$ is the \lq in-cluster probability\rq~that measures the proportion of accordance of the cluster memberships of nodes $i$ and $j$ between the given graph and the randomized graphs over a number of different noisy contaminations (we set the number of such contaminations to 100).
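For reference, a minimal sketch of the CE statistic above (our own illustration; the in-cluster probabilities $p_{i,j}$, one per edge, are assumed to have been estimated beforehand):
\begin{verbatim}
import numpy as np

def clustering_entropy(p):
    # p: array of in-cluster probabilities p_ij, one entry per edge (L = len(p))
    p = np.clip(p, 1e-12, 1 - 1e-12)   # guard against log2(0)
    return -np.mean(p * np.log2(p) + (1 - p) * np.log2(1 - p))
\end{verbatim}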
As regards clustering, to the best of our knowledge, there is no clustering method specifically designed to detect community structure based on differences in distribution patterns. As a workaround, we consider one of the state-of-the-art methods for signed networks: signed spectral clustering based on the normalized signed Laplacian (\lq SignedSpec\rq), which is designed to detect the weakly balanced structure of a graph, i.e., positive weights within a cluster and negative weights between clusters \cite{kunegis2010spectral}. We also consider the conventional spectral clustering (normalized Laplacian method, \lq ConvSpec\rq), which is applicable to graphs with positive weights. To apply \lq ConvSpec\rq~in our context, we transform an edge-weight matrix into a positively-weighted matrix by subtracting $\underset{i, j}{\mbox{min}}~w_{i, j}$ from each weight. Note that \lq ConvSpec\rq~is equivalent to \lq SignedSpec\rq~when the edge weights are all positive.
\begin{figure}[ht!] \begin{center} \subfigure[No community]{ \label{alpha} \includegraphics[scale=0.13, trim=15mm 0mm 15mm 0mm]{null2.eps} } \subfigure[$\mu$ differs]{ \label{power1} \includegraphics[scale=0.13, trim=15mm 0mm 15mm 0mm]{meandis2.eps} } \subfigure[$\sigma$ differs]{ \label{power2} \includegraphics[scale=0.13, trim=15mm 0mm 15mm 0mm]{sigdis2.eps} } \subfigure[$\sigma$ differs ({\it Exp})]{ \label{power3} \includegraphics[scale=0.13, trim=15mm 0mm 15mm 0mm]{sigexpdis2.eps} }\\ \end{center} \vspace{-5mm} \caption{\it \small Boxplots represent distributions of the largest eigenvalues for various settings. Panel (a): No-community case ($K=1$) of Gaussian ensembles for different numbers of nodes from 150 to 1500 on the x-axis. Panel (b): Five-way community case with 750 nodes and cluster sizes (50, 100, 150, 200, 250). Each cluster block is characterized by the mean of a Gaussian distribution (with the variance fixed to 1), which is randomly chosen from $\{-\mu, \mu\}$ with equal probabilities. The value of $\mu$ is varied from 0 to 0.5 in steps of 0.1 on the x-axis. Panel (c): Five-way community case characterized by the variance (with the mean fixed to 0), which is randomly chosen from $\{ 1, \sigma^2\}$ with equal probabilities. The value of $\sigma$ is varied from 1 to 6 in steps of 1 on the x-axis. Panel (d): Five-way community case in the same setting as in (c), but each edge-weight matrix is transformed by the exponential mapping {\it Exp} in Eq.(\ref{et}) with $t_0=1/2$. In all panels, the green line denotes the 95th percentile of the largest eigenvalue under the null hypothesis $H_0$ in~(\ref{H0}).} \label{check} \end{figure}
\begin{figure}[ht!] \begin{center} \subfigure[$\mu$ differs]{ \label{compar1} \includegraphics[scale=0.22, trim=0mm 0mm 0mm 0mm]{compare2.eps} } \subfigure[$\sigma$ differs]{ \label{power} \includegraphics[scale=0.22, trim=0mm 0mm 0mm 0mm]{comparesigma2ver3.eps} } \\ \end{center} \vspace{-5mm} \caption{\it \small Comparison of the power of the test for three different methods: our method, the clustering entropy method applied to the cluster solution yielded by signed spectral clustering (CE + SignedSpec), and the clustering entropy method with conventional spectral clustering (CE + ConvSpec). The true community structure is set as follows: cluster sizes (50, 100, 150, 200, 250); the means and the variances are varied along the x-axes of Panels (a) and (b) as in Fig.\ref{check}b and Fig.\ref{check}c, respectively.
} \label{compar2} \end{figure}
\subsection{Data generation}
For the data structure in this simulation study, we adopted that in \cite{hsieh2012low}, setting the number of clusters to five and the cluster sizes to $(10s, 20s, 30s, 40s, 50s)$, where we varied the integer $s$. In this setting, we have $5 \times 5 = 25$ cluster blocks. In each cluster block, weights were independently drawn from a Gaussian distribution $N(\mu_{k, k'}, \sigma_{k, k'})$, where $\mu_{k, k'}$ and $\sigma_{k, k'}^2$ are the mean and the variance of cluster block $(k, k')$. We generated 100 datasets for each setting.
\subsection{Results} \label{simresults}
When the number of nodes ranges from 150 to 1500, the distribution function $F_1(x)$ in Eq.(\ref{F1}) provides a good approximation of the critical value at significance level $\alpha = 0.05$ under the null hypothesis $H_0$ (Fig.\ref{check}a). Since $F_1(x)$ is the asymptotic probability distribution, this result suggests that it also provides a good approximation of the critical value as the number of nodes grows beyond this range. As regards statistical power, our method can detect the existence of community structure well when the means $\mu_{k, k'}$ of the blocks differ at most by 0.3 ($3 \times 0.05 + 3 \times 0.05$) when $\sigma_{k, k'}=1$ with 750 nodes (Fig.\ref{check}b). On the other hand, the power may not be sufficient when the differences among cluster blocks are characterized by the variances $\sigma_{k, k'}^2$ (Fig.\ref{check}c). However, applying our method to the matrix exponentially transformed by $Exp$ considerably improves the power (Fig.\ref{check}d). All these results suggest the good performance of our method in testing the existence of community structure in a graph. Lastly, we compare the performance of our method with the other methods. We applied our method as outlined in Algorithm~\ref{algo} to the synthetic data, setting $\alpha$ to 0.05 (hence, $\alpha/4 = 0.0125$). When the community structure is characterized by mean differences, the performance of our method is comparable to the clustering entropy method with signed spectral clustering (CE + SignedSpec), while it outperforms the clustering entropy method with conventional spectral clustering (CE + ConvSpec) (Fig.\ref{compar2}a). On the other hand, when the community structure is characterized by scale differences, our method considerably outperforms the other methods (Fig.\ref{compar2}b).
\section{Application to real data} \label{application}
In this section, we apply our method to real data. The objective is to evaluate its performance on various types of real graph data.
\subsection{Data}
First, we applied our method to the following benchmark graph datasets: Karate club, {\it Karate} \cite{zachary1977information}; co-authorships in network science, {\it Co-authors} \cite{newman2006finding}; tribal relationships in highland New Guinea, {\it Gahuku-Gama} \cite{read1954cultures}. The {\it Karate} and {\it Co-authors} datasets are binary (i.e., $\{0, 1\}$), while the edges in the {\it Gahuku-Gama} dataset take discrete signed values, $\{-1, 0, 1\}$. The numbers of nodes for these datasets are 34, 1589, and 16, respectively. These datasets have been well studied in terms of detecting community structure \cite{yang2007community}.
The clustering results and the underlying social relationships between subjects (nodes) have been fully discussed in the literature, clearly suggesting the existence of community structure in these datasets.
\begin{figure}[ht!] \begin{center} \includegraphics[scale=0.22, trim=0mm 0mm 0mm 0mm]{karate3.eps} \includegraphics[scale=0.22, trim=0mm 0mm 0mm 0mm]{neural2.eps} \includegraphics[scale=0.225, trim=0mm 5mm 0mm 0mm]{gafukugama3.eps} \end{center} \caption{\it \small Results of applying our method to the real datasets {\it Karate}, {\it Co-authors}, and {\it Gahuku-Gama} (left to right panels). The star marker denotes the maximum or the minimum eigenvalue of the normalized matrix $T(\boldsymbol{W})$, while the cross marker denotes those of the exponentially normalized matrix $T_e(S(\boldsymbol{W}))$. The edges on the top or bottom of the boxes denote the critical values of these eigenvalues at significance level $\alpha /2$ with $\alpha$=0.05 for the {\it Karate} and {\it Co-authors} datasets, and $\alpha/4$ for the {\it Gahuku-Gama} dataset. These critical values were obtained by a permutation test with 1000 randomized realizations. In contrast, the red dashed lines denote the critical values derived from the Tracy-Widom distribution $F_1(x)$. } \label{realdata} \end{figure}
Second, we applied our method to a real-valued edge-weighted graph: resting state functional MRI ({\it fMRI}) data \cite{fmri}. The original dataset consists of the level of the BOLD (Blood-Oxygen-Level Dependent) signal at short intervals, which reflects neural activity at each tiny portion of the brain, called a \lq voxel\rq~(4949 voxels in this dataset). We pre-processed this dataset by evaluating the temporal correlations among these voxels and applying Fisher's z-transformation, which results in a $4949 \times 4949$ edge-weight matrix $\boldsymbol{W}$. The objective for this dataset is to test our method on such a real-valued edge-weight matrix and to draw useful implications from the analysis.
\subsection{Results}
\begin{figure}[ht!] \begin{center} \includegraphics[scale=0.2, trim=0mm 0mm 0mm 0mm]{fmri2.eps} \end{center} \caption{\it \small Results of applying our method to the {\it fMRI} dataset. The star markers denote the maximum or the minimum of the eigenvalues $\lambda$ of the weight matrices normalized by the mapping $T$ in various brain regions, with edge-weight matrix $\boldsymbol{W}_{k, k}$ indexed by the brain region $k$ on the x-axis. The cross markers denote the counterparts for the weight matrices exponentially normalized by the mapping $T_e$. The horizontal lines denote $y=-2$ and $y=2$, which correspond to the values to which the minimum and the maximum eigenvalues asymptotically converge. } \label{fmriall} \end{figure}
For the first group of real datasets, our method correctly suggests that community structure may exist (i.e., $K>1$), whether we estimate critical values by the Tracy-Widom distribution or by the permutation test (Fig.\ref{realdata}). Note that in the binary case, we always obtain the same test results for the original matrix and the exponentially transformed matrix, because $T(\boldsymbol{W})=T_e(S(\boldsymbol{W}))$. So, we carried out our test only for $T(\boldsymbol{W})$ on the {\it Karate} and {\it Co-authors} datasets, setting the significance level to $\alpha/2$.
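Before turning to the {\it fMRI} results, a minimal sketch of the preprocessing described above (our own illustration; variable names are hypothetical):
\begin{verbatim}
import numpy as np

def fmri_edge_weights(X):
    # X: BOLD time series, shape (time points, voxels)
    R = np.corrcoef(X, rowvar=False)           # temporal correlations between voxels
    R = np.clip(R, -0.999999, 0.999999)        # keep arctanh finite
    W = np.arctanh(R)                          # Fisher's z-transformation
    np.fill_diagonal(W, 0.0)                   # zero diagonal, as in Section 2.1
    return W
\end{verbatim}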
For the {\it fMRI} dataset, our test rejected the null hypothesis $H_0$, yielding maximum and minimum eigenvalues of 31.0 and -7.2 for $T(\boldsymbol{W})$, and 31.8 and -10.9 for $T_e(S(\boldsymbol{W}))$, which provides strong evidence that community structure exists in this graph. Furthermore, we carried out our test on subsets of voxels in anatomically predefined brain regions, where the number of voxels ranges from 13 to 498. Our test results suggest that community structure may exist in each region (except brain region 16) (Fig.\ref{fmriall}). This result supports the conjecture on the heterogeneity of brain activities in anatomically defined brain regions discussed in the neuroscience literature \cite{birn2001spatial}.
\section{Discussion}
We have proposed a novel method for statistically testing the existence of community structure in an undirected graph, where the structure is characterized by the first and second moments of a generative model for the edge weights. This method can be considered a (nontrivial) extension of the recently proposed method \cite{bickel2016hypothesis} from binary-valued to real-valued graphs. Unlike the existing methods for real-valued graphs, our method does not need a cluster solution. Hence, we can apply it even to the nontrivial clustering case in which edge weights take both positive and negative real values. Also, our approach avoids the nontrivial problem of determining the number of clusters. Further, our method is quite efficient in terms of computation time: we only need to evaluate the eigenvalues of the edge-weight matrix once if we use the Tracy-Widom distribution, thanks to the asymptotic results provided by Random Matrix Theory. As the next step of the analysis, one may wonder how to find community memberships when our test rejects the null hypothesis of $K=1$. The present paper did not address this issue, but it would be quite useful to examine the eigenvectors of the edge-weight matrix, as in spectral clustering. We conjecture that some of the eigenvectors of $T(\boldsymbol{W})$ and $T_e(S(\boldsymbol{W}))$ may carry information on community memberships. How to determine and synthesize the relevant eigenvectors for inferring the underlying community structure is an important topic for future research. \vspace{2mm} \newpage
\section{Introduction}
Recently, we have been witnessing the combined power of video streaming and e-commerce. Since online videos can reach millions of people, most companies have realized that they are among the best showcase platforms to promote products. Therefore, many applications have been developed to support this combination. These applications include fashion product retrieval from videos \cite{garcia2017dress}, contextual ads insertion \cite{chen2019livesense}, etc. We term them Video-to-Retail (V2R) applications. In V2R, the task is to analyze video content and products, and to match them to each other so that companies can promote products efficiently while maintaining the video watching experience of users. Developing V2R applications is a non-trivial task. First, the data that will be fed into the application, such as videos, product ads and product descriptions, is multi-modal. Processing, fusing and aligning these data to understand them better require much effort and are still very challenging \cite{baltruvsaitis2018multimodal}. Second, to match videos to products or vice versa in a non-intrusive way, accurate recognition and retrieval algorithms are needed. Third, processing speed is vital for maintaining a good user experience. A video usually contains hundreds of thousands of frames, and a product database may include thousands of items. How to efficiently process and match them remains an open problem. To address these issues, representative works as listed in Table \ref{tab:v2o_compare} have considered the following two perspectives: the system perspective and the algorithm perspective. From the system perspective, for instance, Mei et al. \cite{mei2007videosense} build a system that includes a pipeline to unify ads and video pre-processing for contextual ads insertion. In \cite{garcia2017dress}, the authors exploit video frame redundancy and index features into a kd-tree for fast clothing retrieval. From the algorithm perspective, quite a number of matching frameworks employing DL models have been proposed. For instance, in \cite{cheng2017video2shop}, the authors design a framework consisting of an image feature network, a video feature network and a similarity network to match clothing in videos to online shopping images. In \cite{cheng2016video}, the authors use a set of models that include content understanding models to analyze user behavior, and video tags for accurate video advertising.
\begin{table*}[] \centering \caption{A comparison of Hysia and existing V2R related works.} \label{tab:v2o_compare} \begin{adjustbox}{width=1\textwidth} \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \textbf{V2R related work} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}Product-to-\\ Video\end{tabular}}} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}Video-to-\\ Product\end{tabular}}} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}System\\ support\end{tabular}}} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}End-to-\\ end\end{tabular}}} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}Model\\ management\end{tabular}}} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}Model serving\\ optimization\end{tabular}}} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}Web interface\\ \&API\end{tabular}}} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}Open\\ source\end{tabular}}} \\ \hline VideoSense.
\cite{mei2007videosense} & \checkmark & $\times$ & \checkmark & \checkmark & $\times$ & $\times$ & \checkmark & $\times$ \\ \hline CAVVA. \cite{yadati2013cavva} & \checkmark & $\times$ & \checkmark & \checkmark & $\times$ & $\times$ & \checkmark & $\times$ \\ \hline Video eCommerce. \cite{cheng2016video} & $\times$ & \checkmark & \checkmark & \checkmark & $\times$ & $\times$ & \checkmark & $\times$ \\ \hline Garcia et al. \cite{garcia2017dress} & $\times$ & \checkmark & \checkmark & \checkmark & $\times$ & $\times$ & \checkmark & \checkmark \\ \hline Video2shop. \cite{cheng2017video2shop} & $\times$ & \checkmark & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ \\ \hline Madhok et al. \cite{madhok2018semantic} & \checkmark & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ \\ \hline \textbf{Hysia (Ours)} & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline \end{tabular} \end{adjustbox} \end{table*}
There is still much work to be done to make developing fast and efficient V2R apps in various domains easier. First, existing systems only focus on one kind of V2R application, such as contextual video advertising (product-to-video) or retrieving products from videos (video-to-product), and neglect the similarities (i.e., data engineering, model processing and matching) between them. Thus, multimedia researchers have to go through all the infrastructure plumbing work and make duplicate efforts in the process. Second, current systems pay more attention to improving matching accuracy and do not address system optimization. Third, given that DL models are increasingly used to build V2R applications, how to deploy these models with ease has not been fully considered. Lastly, there has been no comprehensive open source V2R platform for non-experts with little machine learning (ML) knowledge, making it challenging for them to harness the power of AI. To narrow these gaps, we develop Hysia, a fully open source and cloud-oriented framework that comprises widely used V2R models and optimized infrastructure services including a data engine, model serving and content matching. It allows non-expert users to quickly make use of the built-in utilities to analyze V2R-related data, and expert users to build or evaluate new, high-performance V2R applications with ease. Essential features in V2R such as application management and new model binding are also provided. Hysia can run in either virtual machines (VMs) or containers, making it easy to integrate into current cloud environments. With Hysia, multimedia practitioners and researchers can focus on application design rather than writing repetitive code, with reference applications provided out of the box. We integrate industry-grade libraries to speed up data processing, including the NVIDIA Video SDK for video pre-processing, Facebook faiss for searching and gRPC for communication. Hysia is highly modular, allowing seamless integration with new modules. Though it has been designed for V2R, it can also be used as a multimedia toolbox for video analysis, audio recognition and so on. We release Hysia as an open source project at \url{https://github.com/cap-ntu/Video-to-Retail-Platform} under the Apache 2.0 license. It has attracted attention and interest from many in the developer community. We also dockerize the system and publish it to DockerHub at \url{https://hub.docker.com/r/hysia/hysia} so that any cloud user can install and run Hysia with ease.
\section{System Design}
In this section, we first present the architecture of Hysia, and then we introduce the workflow for fulfilling V2R applications.
\subsection{Architecture}
The system architecture is presented in Figure \ref{fig:v2o_arch}. In designing Hysia, we focus on the system's modularity and extensibility. As a cloud-oriented and end-to-end platform, it consists of two components: a back-end infrastructure and a front-end application layer.
\begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{figure/v2o_structure.pdf} \caption{Hysia architecture.} \Description{The Hysia architecture.} \label{fig:v2o_arch} \end{figure}
\textbf{Back-End Platform}. In clouds, computing resources are abstracted via three main approaches, namely infrastructure as a service (IaaS), container as a service (CaaS), and serverless/function as a service (FaaS). Hysia core services can make use of either virtual machines (IaaS) or containers (CaaS). In addition, as serving ML models is stateless \cite{zhang2019mark}, it is simple to deploy them using serverless (FaaS). The ML model-related services, namely the data engine, model repository, content matching and serving in Hysia, are encapsulated into a middleware, a form of ML-as-a-Service. The data engine is designed to reduce users' effort in preprocessing complex multi-modal data. The model repository manages the various ML models in Hysia. The model serving and content matching are designed to speed up data analysis by utilizing GPUs. The functions provided by these core services are exposed via APIs so that developers can easily extend our system.
\textbf{Front-End Application}. Built on top of the back-end platform, the front-end application layer provides full support for four classes of users: 1) We have well-designed APIs for model contributors to bind new V2R-related models, develop new V2R applications and extend Hysia's functionalities; 2) We provide a content analysis service for video providers so that they can mine videos to improve their commercial value; 3) A contextual advertising application is designed for advertisers to place ads at appropriate positions in videos; and 4) Hysia also has a video shopping service to help spectators buy products while watching videos. The built-in services and applications not only demonstrate the capability of our platform; they also provide reusable templates for researchers and practitioners to easily add more V2R plugins to Hysia to better serve their needs.
\subsection{Workflow}
The workflow of our system is illustrated in Figure \ref{fig:v2o_workflow}, which includes two phases: offline and online. In the offline phase, model contributors register their V2R-related models with Hysia and use the profiler to obtain their runtime performance. The profiling results are stored in a cache in the orchestrator, and the model weights are then persisted in the model repository. In the online phase, a web interface is provided for end users (e.g., video providers and advertisers) to upload data (e.g., videos and ads), and to display final results. Those data are first preprocessed by the data engine and transformed into formats acceptable to DL models. Meanwhile, the orchestrator sends the optimal batch size of a model to the data engine so that it can batch the formatted requests. They are then fed into the model server for further analysis.
Finally, the predictions and data features output by the model server will be sent to the content matching service to match videos to products or vice versa. We also implement a monitoring component to record the system status.
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{figure/hysia_workflow.pdf} \caption{Workflow of a V2R application.} \Description{Hysia Workflow} \label{fig:v2o_workflow} \end{figure}
\section{System Implementation}
In this section, we describe the implementation of Hysia as illustrated in Figure \ref{fig:v2o_workflow}.
\textbf{Model Repository}. Hysia stores ML models in a two-layer structure. It persists model information such as the model name, the service description (e.g., product detection) and so on in SQLite, a very lightweight database. The simple data structure makes it easy for users to replace the storage backend with their own database solutions. The model weight file, usually sizeable, is serialized and stored separately in a file system, and its file path is persisted in SQLite.
\textbf{Model Profiler}. This component receives ML models submitted by contributors and profiles these models offline. Much research has shown that the batch size can significantly impact a model's latency and throughput when served; in fact, our experiments in Section \ref{sec:expriment} also demonstrate this clearly. Therefore, Hysia profiles models under different batch sizes to obtain the corresponding latency and throughput. The profiling information is stored in a cache in the orchestrator to help users choose the best batch size for a particular model.
\textbf{Orchestrator}. The orchestrator contains a cache implemented with Redis to store the model profiling information, and a batch size calculator for selecting an appropriate batch size for a model. Expert users of Hysia only need to specify the maximum acceptable latency for their applications, i.e., a latency SLO (Service-Level Objective). The orchestrator can then decide on an appropriate batch size and send this value to the data engine.
\textbf{Data Engine}. The data engine implements a set of functions to pre-process multi-modal data, such as video, audio, product images, and textual content. \textit{(1) Video}: we employ the NVIDIA Video SDK to implement the HysiaDecode component to process videos with GPUs. In addition to utilizing GPUs, HysiaDecode can also detect scene changes quickly by processing only one key frame in a scene shot. \textit{(2) Audio}: we separate the audio from the video and save it as a file that will be processed by suitable audio models. \textit{(3) Image}: we provide resize and transform functions to format original images so that they can be processed by existing TensorFlow or PyTorch models. \textit{(4) Text}: we implement a function to convert subtitles into an ordinary text format, and a set of text preprocessing utilities, such as tokenization, so the text can be fed into NLP models.
\textbf{Model Server}. The model server is implemented using gRPC, which is widely used for building micro-services. It receives batched data from the data engine and employs models in the repository to analyze them. The model server outputs two kinds of results: predictions and intermediate features. The predictions are sent back to the data engine for display to users. The feature vectors are stored in the file system and, at the same time, sent to a subsequent module for matching.
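To make the profiler-orchestrator interplay concrete, the following minimal sketch (an assumed interface with hypothetical profiling numbers, not Hysia's actual code) picks the highest-throughput batch size whose profiled latency still meets the user's latency SLO:
\begin{verbatim}
# profile: batch size -> (latency in ms, throughput in req/s), hypothetical values
profile = {1: (18.0, 55), 4: (30.0, 133), 8: (52.0, 154), 16: (98.0, 163)}

def best_batch_size(profile, latency_slo_ms):
    # keep batch sizes whose profiled latency satisfies the SLO
    feasible = {b: tp for b, (lat, tp) in profile.items() if lat <= latency_slo_ms}
    if not feasible:
        return min(profile)              # fall back to the smallest batch size
    return max(feasible, key=feasible.get)  # maximize throughput among feasible

print(best_batch_size(profile, latency_slo_ms=60.0))  # -> 8 under these numbers
\end{verbatim}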
\textbf{Matching}. We implement this module to match products to videos or vice versa. Much optimization has been done in Hysia to improve matching efficiency. Specifically, we employ faiss \cite{johnson2019billion} and load features into GPUs. Therefore, the similarity comparison between features is accelerated to meet real-time latency requirements. In addition, to make the system extensible, we provide APIs for experts to extend the module to accommodate their needs.
\textbf{Monitor}. The monitor is implemented with a pub-sub structure in Redis to support V2R applications running on a distributed infrastructure. It periodically aggregates workers' status, including CPU and GPU data, and the resource usage of the executing models. A master worker is set up to collect monitoring data from all worker nodes, making it easy for users to locate system issues.
\section{Demonstration}
Hysia incorporates a wide range of ML models, ranging from scene recognition and object detection to celebrity recognition and audio recognition, for building comprehensive V2R applications. In this section, we describe two built-in reference applications\footnote{\url{https://cap-ntu.github.io/hysia_mm_demo/}}, contextual advertising and video shopping, based on real-world scenarios. Then, we demonstrate how to bind new V2R models in Hysia. Finally, we present a quantitative evaluation of Hysia.
\begin{figure} \begin{subfigure}{.5\columnwidth} \centering \includegraphics[width=1.0\linewidth]{demo/video_analysis_2.png} \caption{Video Analysis} \label{fig:video_analysis} \end{subfigure}% \begin{subfigure}{.48\columnwidth} \centering \includegraphics[width=1.0\linewidth]{demo/search_insert_ads_2.png} \caption{Ads Insertion} \label{fig:ads_insertion} \end{subfigure} \begin{subfigure}{.5\columnwidth} \centering \includegraphics[width=1.0\linewidth]{demo/ads_display_2.png} \caption{Ads Display} \label{fig:ads_display} \end{subfigure}% \begin{subfigure}{.47\columnwidth} \centering \includegraphics[width=1.0\linewidth]{demo/shopping.png} \caption{Video Shopping} \label{fig:video_shopping} \end{subfigure} \caption{The built-in applications of Hysia. Try them out yourself online.} \label{fig:demo} \end{figure}
\subsection{Contextual Advertising}
Both content and ads providers can enjoy the convenience provided by Hysia. For instance, a content provider has a whole TV show and needs to insert several ad images or videos at appropriate positions in the videos. Hysia analyzes the uploaded video content as shown in Figure \ref{fig:video_analysis}. Then, advertisers can upload their ads to Hysia, and it will search for the top-5 relevant video clips. Users can then choose the most relevant one (Figure \ref{fig:ads_insertion}). Here we leverage the human-in-the-loop factor, since real-world scenarios can be very complex: automatically inserting into the top-1 clip may negatively affect users' experience if the matching algorithm cannot capture new data distributions. Finally, Hysia allows both content and ads providers to verify the insertion results as shown in Figure \ref{fig:ads_display}.
\subsection{Video Shopping}
Spectators may choose to buy related products while watching videos. Hysia fulfills this need by providing a video shopping service. Since mobile video accounts for a significant portion of video traffic, we demonstrate a mobile application whose backend server is based on Hysia.
As shown in Figure \ref{fig:video_shopping}, users can click on the screen, and Hysia will immediately search for products related to the scene they are watching. The top 10 products are shown to users, who can then click on a product icon to navigate to the corresponding shopping page.
\subsection{New Model Binding}
In Hysia, model contributors can use the provided APIs to bind new V2R models. Hysia provides well-designed template configuration files and reference models. For instance\footnote{\url{https://github.com/cap-ntu/hysia_mm_demo}}, suppose a developer has trained a VQA model \cite{singh2018pythia} on a new V2R-related dataset. The developer just needs to prepare a \texttt{YAML} file and an \texttt{engine.py} file, following Hysia's template. The model will then be containerized as a gRPC-based web service, and users can employ the new model in Hysia to analyze V2R-related data.
\subsection{Quantitative Evaluation} \label{sec:expriment}
In this section, we evaluate Hysia's performance\footnote{\url{https://github.com/cap-ntu/Video-to-Retail-Platform/tree/master/tests}} on the Stanford Online Products \cite{oh2016deep} and TVQA video \cite{lei2018tvqa} datasets, using a DGX workstation with NVIDIA V100 GPUs.
\begin{figure} \begin{subfigure}{.5\columnwidth} \centering \includegraphics[width=1.0\linewidth]{exp/profile_decoder.pdf} \caption{} \label{fig:video_throughput} \end{subfigure}% \begin{subfigure}{.5\columnwidth} \centering \includegraphics[width=1.0\linewidth]{exp/key_frame_speedup_ratio.pdf} \caption{} \label{fig:match_latency} \end{subfigure} \begin{subfigure}{.55\columnwidth} \centering \includegraphics[width=1.0\linewidth]{exp/model_latency_throughput_object-detection-service.pdf} \caption{} \label{fig:model_throughput_latency} \end{subfigure}% \begin{subfigure}{.45\columnwidth} \centering \includegraphics[width=1.0\linewidth]{exp/profile_matching.pdf} \caption{} \label{fig:memory_utilization} \end{subfigure} \caption{System performance evaluation} \label{fig:system_evaluation} \end{figure}
As shown in Figure \ref{fig:system_evaluation}, we have evaluated Hysia in four aspects: 1) The Hysia data engine is able to efficiently utilize GPUs to process videos at more than 1000 FPS, providing enough images for further analysis. 2) The key frame detection method can further improve video preprocessing speed; a video with more scene shots benefits more. 3) As the batch size increases, the latency keeps increasing, while the throughput first increases and then decreases. This demonstrates the necessity of our model profiler and orchestrator for finding the right batch size. 4) By integrating faiss, Hysia's matching module can search 100K product images in less than 4.5 ms. This demonstrates the ability to support a real-time shopping experience for spectators.
\section{Conclusion} \label{sec:conclusion}
In this paper, we present Hysia, a cloud-based system for the development and deployment of V2R applications. The system is designed to support a wide range of users, from ML novices to experts: the former can leverage built-in applications for V2R data analysis, while the latter can utilize Hysia's optimized services for rapid V2R prototyping. We demonstrate Hysia's usability with three real-world scenarios and its efficiency with quantitative performance measurements. Our development team continuously maintains and improves Hysia as an open-source project.
\bibliographystyle{ACM-Reference-Format}
\section{APPROXIMATE IMPLEMENTATIONS} \label{sec:aprox_impl} The solution space defined by (\ref{eq:implicit_constr}), although it exists for $T_c \geq T$, often yields solutions that are unstable. Further, Corollary \ref{corollary:m1} gives a fundamental limit on the sparsity of $\mathbf{M_c}$. If $\Phi_u(1)$ is dense, we cannot find implementation matrices that support any type of sparsity (e.g. communication delay, locality). These necessitate relaxations of (\ref{eq:implicit_constr}). For a relaxed implementation, we want the implemented closed-loop maps ($\tilde{\mathbf{\Phi}}_x$, $\tilde{\mathbf{\Phi}}_u$) to be as close to the optimal closed-loop maps ($\mathbf{\Phi}_x$, $\mathbf{\Phi}_u$) as possible while maintaining internal stability, i.e. \begin{equation} \label{eq:noncvx_relax} \begin{aligned} \min_{\mathbf{R_c}, \mathbf{M_c}} \|\begin{bmatrix}\Rc \\ \Mc\end{bmatrix} (I + \mathbf{\Delta})^{-1} - \begin{bmatrix}\R \\ \M\end{bmatrix}\|\\ \textrm{s.t.} \quad (I + \mathbf{\Delta})^{-1} \textrm{ stable}, \begin{bmatrix}\Rc \\ \Mc\end{bmatrix} \in \mathcal{S} \end{aligned} \end{equation} where $\mathcal{S}$ includes sparsity and FIR constraints, and $I + \mathbf{\Delta} = \mathbf{\Delta_c}$. This optimization problem is clearly nonconvex. Factoring the objective function as \begin{equation} \|(\begin{bmatrix}\Rc \\ \Mc\end{bmatrix} - \begin{bmatrix}\R \\ \M\end{bmatrix}(I + \mathbf{\Delta}))(I + \mathbf{\Delta})^{-1}\| \end{equation} and using submultiplicativity, small-gain, and power series arguments similar to those in Section 4.5.1 of \cite{Anderson2019}, we can upper bound the optimization problem (\ref{eq:noncvx_relax}) with the following quasi-convex problem: \begin{equation} \label{eq:nested_relax} \begin{aligned} \min_{\gamma\in[0,1)}\frac{1}{1-\gamma} \min_{\mathbf{R_c}, \mathbf{M_c}, \mathbf{\Delta}} \|\begin{bmatrix}\Rc \\ \Mc\end{bmatrix} - \begin{bmatrix}\R \\ \M\end{bmatrix}(I + \mathbf{\Delta})\|\\ \textrm{s.t.} \quad \begin{bmatrix}zI-A && -B\end{bmatrix}\begin{bmatrix}\Rc \\ \Mc\end{bmatrix}=(I + \mathbf{\Delta}), \\ \|\mathbf{\Delta}\| \leq \gamma, \begin{bmatrix}\Rc \\ \Mc\end{bmatrix} \in \mathcal{S} \end{aligned} \end{equation} This is similar to the virtualized SLS method \cite{Matni2018} \cite{Anderson2019}, with one key difference. For an objective $g(\mathbf{\Phi}_x, \mathbf{\Phi}_u)$, the virtualized SLS method uses $g(\mathbf{R_c}, \mathbf{M_c})$ as the objective, while our two-step method uses \begin{equation} \|\begin{bmatrix}\Rc \\ \Mc\end{bmatrix} - \begin{bmatrix}\R \\ \M\end{bmatrix}(I + \mathbf{\Delta})\| \end{equation} as the objective. This is the equation error for (\ref{eq:implicit_constr}), and is a heuristic for the closed-loop difference. The nested optimization problem defined by (\ref{eq:nested_relax}) is time-consuming to solve; it can also be mathematically infeasible if the sparsity constraints $\mathcal{S}$ are too strict. We instead solve (\ref{eq:actual_relax}), which is much quicker and uses a regularizer on $\mathbf{\Delta}$ to promote stability. We suggest starting with a small $\lambda$, solving (\ref{eq:actual_relax}), checking for stability using the distributed method presented in Section \ref{sec:stability_check}, and increasing $\lambda$ if the stability check fails. Alternatively, we can enforce $\|\mathbf{\Delta}\| < 1$.
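For concreteness, the following is a minimal cvxpy sketch of this relaxation, stated formally in (\ref{eq:actual_relax}) just below. It uses squared-Frobenius ($\mathcal{H}_2$-type) surrogates for the norms, omits the structure set $\mathcal{S}$, and assumes the spectral components of the closed-loop maps are given as lists; these are illustrative simplifications, not the paper's exact formulation.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def approximate_implementation(A, B, Phi_x, Phi_u, Tc, lam=0.1):
    # Phi_x[k-1], Phi_u[k-1] hold the k-th spectral components, k = 1..T.
    n, m = A.shape[0], B.shape[1]
    T = len(Phi_x)
    Rc = [np.eye(n)] + [cp.Variable((n, n)) for _ in range(Tc - 1)]  # Rc(1)=I
    Mc = [cp.Variable((m, n)) for _ in range(Tc)]

    # Spectral components of Delta_c = [zI - A, -B][Rc; Mc]:
    #   Delta_c(0) = Rc(1) = I,
    #   Delta_c(j) = Rc(j+1) - A Rc(j) - B Mc(j)  for j = 1..Tc-1,
    #   Delta_c(Tc) = -A Rc(Tc) - B Mc(Tc).
    Dc = [np.eye(n)]
    Dc += [Rc[j] - A @ Rc[j - 1] - B @ Mc[j - 1] for j in range(1, Tc)]
    Dc += [-A @ Rc[Tc - 1] - B @ Mc[Tc - 1]]

    def conv(Phi, k):
        # k-th spectral component of Phi * Delta_c (polynomial convolution)
        return sum(Phi[k - j - 1] @ Dc[j]
                   for j in range(Tc + 1) if 1 <= k - j <= T)

    err = 0
    for k in range(1, T + Tc + 1):
        Rk = Rc[k - 1] if k <= Tc else np.zeros((n, n))
        Mk = Mc[k - 1] if k <= Tc else np.zeros((m, n))
        err += cp.sum_squares(Rk - conv(Phi_x, k))
        err += cp.sum_squares(Mk - conv(Phi_u, k))

    # Delta = Delta_c - I only affects components j >= 1.
    reg = sum(cp.sum_squares(Dc[j]) for j in range(1, Tc + 1))
    cp.Problem(cp.Minimize(err + lam * reg)).solve()
    return Rc, Mc
\end{verbatim}
Sparsity constraints from $\mathcal{S}$ could be layered on by zeroing masked entries of each $R_c(k)$ and $M_c(k)$, in the spirit of the locality discussion in the next section.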
\begin{equation} \label{eq:actual_relax} \begin{aligned} \min_{\mathbf{R_c}, \mathbf{M_c}, \mathbf{\Delta}} \|\begin{bmatrix}\Rc \\ \Mc\end{bmatrix} - \begin{bmatrix}\R \\ \M\end{bmatrix}(I + \mathbf{\Delta})\| + \lambda\|\mathbf{\Delta}\|\\ \textrm{s.t.} \quad \begin{bmatrix}zI-A && -B\end{bmatrix}\begin{bmatrix}\Rc \\ \Mc\end{bmatrix}=(I + \mathbf{\Delta}), \begin{bmatrix}\Rc \\ \Mc\end{bmatrix} \in \mathcal{S} \end{aligned} \end{equation} We can also include additional objectives in (\ref{eq:actual_relax}), e.g. $\mathcal{L}_{1}$ regularization on ($\mathbf{R_c}$, $\mathbf{M_c}$) to promote sparsity. The optimization problem (\ref{eq:actual_relax}) is column-wise separable if we choose a column-wise separable norm for the objective (e.g. $\mathcal{H}_{2}$ norm). Like the original SLS problem, it can be decomposed into subproblems to be solved in parallel. \section{CLOSED-LOOP CONSTRAINTS VS. CONTROLLER CONSTRAINTS} \label{sec:cl_vs_ctrller} In this section, we discuss the physical interpretation of separately applying locality and delay constraints to the closed-loop and to the controller, and when such constraints are appropriate. This separation is not possible in standard SLS, since the closed-loop maps themselves are used as implementation matrices for the controller. First, a result on how applying controller constraints on the closed-loop maps can be overly restrictive: \begin{lemma} \label{lemma:m_in_k} Let $\mathbf{K}$ be the controller corresponding to the closed-loop maps ($\mathbf{\Phi}_x$, $\mathbf{\Phi}_u$). Then, the operator $\mathbf{\Phi}_u$ lies in the range of the operator $\mathbf{K}$. \end{lemma} \begin{proof} By Theorem \ref{thm:unique_k}, we have that $\mathbf{K}\mathbf{\Phi}_x$ = $\mathbf{\Phi}_u$. \end{proof} Lemma \ref{lemma:m_in_k} shows that sparsity constraints (e.g. locality, delay) on $\mathbf{K}$ will translate to sparsity constraints on $\mathbf{\Phi}_u$, but not $\mathbf{\Phi}_x$; directly applying these constraints on $\mathbf{\Phi}_x$ may be too restrictive. Note that although it is also true that $\mathbf{K}\mathbf{R_c}$ = $\mathbf{M_c}$, both $\mathbf{M_c}$ and $\mathbf{R_c}$ must obey sparsity constraints as they are directly used in the implementation. \subsection{Locality} Let $\mathcal{L}(i)$ denote the locality of node $i$. Generally, $\mathcal{L}(i)$ consists of the $l$ closest neighbours of node $i$ in the network. Locality constraints restrict spectral components of $\mathbf{R_c}$ and $\mathbf{M_c}$ (or $\mathbf{\Phi}_x$ and $\mathbf{\Phi}_u$) to have nonzero support only over the allowed localities; i.e. \begin{equation} \label{eq:locality_constr} \begin{aligned} R_c(k)^{i,j} = 0 \quad \forall j \notin \mathcal{L}(i) \\ BM_c(k)^{i,j} = 0 \quad \forall j \notin \mathcal{L}(i) \\ \end{aligned} \end{equation} where $B$ is the actuation matrix of the system. For a system with nodes arranged in a chain configuration and $\mathcal{L}(i)$ equal to the $l$ closest neighbours of node $i$, these constraints result in banded diagonal $R_c(k)$ and $M_c(k)$ with a band width of $2l+1$ $\forall k$. When we apply locality constraints on the implementation matrices as per (\ref{eq:locality_constr}), we enforce that node $i$ will only communicate with nodes in $\mathcal{L}(i)$ for all time. When we apply locality constraints on the closed-loop maps (i.e. replace $R_c$ and $M_c$ in (\ref{eq:locality_constr}) with $\Phi_x$ and $\Phi_u$), we limit how far a disturbance at a node spreads before it is contained. 
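As a concrete illustration of (\ref{eq:locality_constr}) for the chain configuration, the banded support can be encoded as a boolean mask. This is a hedged sketch, and the cvxpy-style constraint in the final comment is one possible encoding rather than the paper's code.
\begin{verbatim}
import numpy as np

def chain_locality_mask(n, l):
    # Allowed support for R_c(k) on an n-node chain where L(i) is the
    # l nearest neighbours of node i: a band of width 2l + 1.
    idx = np.arange(n)
    return np.abs(idx[:, None] - idx[None, :]) <= l

mask = chain_locality_mask(10, 2)
# The constraint R_c(k)^{i,j} = 0 for j outside L(i) could then be
# imposed in a convex program as, e.g. (cvxpy):
#   constraints += [cp.multiply(1.0 - mask, Rc_k) == 0]
\end{verbatim}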
While both are useful, controller locality tends to be a hard constraint that arises from physical limitations in the communication network, while closed-loop locality is a soft constraint that can be relaxed. \subsection{Delay} Let $d(i,j)$ denote the delay from node $j$ to node $i$. In general, $d(i,j)$ is proportional to the distance between nodes $i$ and $j$. Delay constraints are like time-varying locality constraints with an expanding locality, where $\mathcal{L}(i)$ at time $k$ contains all nodes $j$ for which $k \geq d(i,j)$. Delay constraints are enforced as follows: \begin{equation} \label{eq:delay_constr} \begin{aligned} R_c(k)^{i,j} = 0 \quad \forall k < d(i,j) \\ BM_c(k)^{i,j} = 0 \quad \forall k < d(i,j) \\ \end{aligned} \end{equation} where $B$ is the actuation matrix of the system. For a system in a chain configuration and $d(i,j)$ proportional to inter-nodal distance, these constraints result in banded diagonal $R_c(k)$ and $M_c(k)$, with wider bands for higher values of $k$. When we apply delay constraints on the implementation matrices as per (\ref{eq:delay_constr}), we are ensuring that controllers do not require information that cannot be communicated to them in time. For example, node $i$ cannot use any information about node $j$ that is more recent than $t - d(i,j)$. When we apply delay constraints on the closed-loop maps (i.e. replace $R_c$ and $M_c$ in (\ref{eq:delay_constr}) with $\Phi_x$ and $\Phi_u$), we limit how fast a disturbance at node $j$ propagates to the state and input at node $i$. As with locality, the controller delay constraint tends to be a hard constraint arising from physical communication limitations. Unlike in the locality case, the closed-loop delay constraint serves no clear purpose; by separating the controller design from the closed-loop design, we avoid imposing this unnecessary constraint on the closed-loop map. \subsection{Delay and locality as optimization objectives} We can augment the objective in (\ref{eq:actual_relax}) with the following terms to encourage tolerance for communication delay: \begin{equation} \label{eq:delay_tolerance} \sum_{k=1}^{T_c} \sum_{i=1}^{n} \sum_{j=1}^{n} e^{dist(i,j)-k} (\|R_c(k)^{i,j}\| + \|BM_c(k)^{i,j}\|) \end{equation} where $dist(i,j)$ is the distance between nodes $i$ and $j$ in the network. We can encourage tolerance for communication locality by using similar terms (note the removal of $k$ from the exponential weight): \begin{equation} \sum_{k=1}^{T_c} \sum_{i=1}^{n} \sum_{j=1}^{n} e^{dist(i,j)} (\|R_c(k)^{i,j}\| + \|BM_c(k)^{i,j}\|) \end{equation} Again taking the chain configuration as an example, these terms encourage banded-diagonal $R_c(k)$ and $M_c(k)$ with higher penalties on elements farther away from the diagonal. Elements that survive despite heavy penalty represent edges in the network that require fast communication in order to best preserve the desired closed-loop map. \section{CONCLUSIONS AND FUTURE WORK} \label{sec:conclusions} By separating controller synthesis from closed-loop synthesis, we are able to apply constraints to the controller without unnecessarily limiting the closed-loop map. As demonstrated above, our proposed two-step procedure offers benefits over the original single step procedure. This procedure offers a new perspective on system-level controller design, and an alternative approach for regimes in which standard SLS is infeasible. 
In future work, we would like to better understand how our method relates to the existing work on virtually localized SLS, and which types of problems each method is better suited to. Additionally, we would like to extend this work to the output feedback case. Synthesis methods mentioned in this paper can be found in the SLS-MATLAB toolbox at \url{https://github.com/sls-caltech/sls-code}. \section{EXAMPLES} \label{sec:examples} All subsequent analysis was done in MATLAB using the cvx toolbox with SDPT3 at the low-precision setting. The optimization was done on a laptop with an Intel i7 processor and 8GB of RAM. The system we work with is a 10-node chain with the following tridiagonal $A$ matrix: \begin{equation} A = \begin{bmatrix} 0.6 & 0.4 & 0 & \ldots & \\ 0.4 & 0.2 & \ddots & & \\ 0 & \ddots & \ddots & \ddots & \\ \vdots & & \ddots & 0.2 & 0.4 \\ & & & 0.4 & 0.6 \\ \end{bmatrix} \end{equation} The system has three actuators, located at nodes 3, 6, and 10. The system is marginally stable, with a spectral radius of 1. General observations below extend to larger chains with similarly sparse actuation. \subsection{Low-norm centralized controllers} We first synthesize a desired closed-loop map via SLS, with no communication or locality constraints. We use an FIR horizon of $T=20$ and an LQR objective. We then synthesize unconstrained controllers using (\ref{eq:actual_relax}) with an additional $\mathcal{L}_{1}$ regularization term on ($\mathbf{R_c}$, $\mathbf{M_c}$). We synthesize controllers with order ranging from $T_c=2$ to $T_c=25$. \begin{figure}[h] \centering \includegraphics[width=8.8cm]{figures/tc_sweep_plot.pdf} \caption{Closed-loop differences, spectral radii of internal dynamics, and $\mathcal{L}_1$ norms for controllers with varying $T_c$} \label{fig:example1} \end{figure} Fig. \ref{fig:example1} shows the differences between the desired closed-loop maps ($\mathbf{\Phi}_x$, $\mathbf{\Phi}_u$) and the implemented closed-loop maps ($\tilde{\mathbf{\Phi}}_x$, $\tilde{\mathbf{\Phi}}_u$), normalized by $\|\mathbf{\Phi}_x\|$ and $\|\mathbf{\Phi}_u\|$, respectively. As expected, the closed-loop differences decrease with increasing $T_c$. Interestingly, we are able to approximate the system relatively well even for $T_c \ll T$; at $T_c=2$, we are less than 10\% away from the optimal closed-loop map. Fig. \ref{fig:example1} also shows the spectral radii of $A_z$. The spectral radius of the original controller is far lower than that of the new controllers, suggesting a possible tradeoff between controller norm and internal stability margins. All implementations are internally stable, and the spectral radius remains relatively constant over $T_c$. Lastly, Fig. \ref{fig:example1} shows the $\mathcal{L}_{1}$ norms of the implementation matrices. All new controllers have significantly lower norm than the original controller, and the $\mathcal{L}_{1}$ norm remains almost constant over $T_c$. \subsection{Localized LQR controller} In this example, separating closed-loop synthesis from controller synthesis yields much better results than the original synthesis procedure, in which controller and closed-loop synthesis are coupled. The objective of this example is to synthesize a controller with an LQR objective and FIR horizon of $T=20$. An SLS formulation of LQR can be found in \cite{Wang2014}.
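For reference, a minimal construction of the chain system used throughout this section follows; the unit-column form of $B$ is our assumption, since the paper specifies only the actuator locations.
\begin{verbatim}
import numpy as np

n = 10
A = 0.2 * np.eye(n) + 0.4 * (np.eye(n, k=1) + np.eye(n, k=-1))
A[0, 0] = A[-1, -1] = 0.6                 # boundary nodes

actuated = [2, 5, 9]                      # nodes 3, 6, 10 (0-indexed)
B = np.zeros((n, len(actuated)))
B[actuated, range(len(actuated))] = 1.0   # assumed unit actuation columns

# Every row of A sums to 1, so A is row-stochastic: spectral radius 1,
# matching the "marginally stable" claim above.
print(np.max(np.abs(np.linalg.eigvals(A))))
\end{verbatim}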
The following constraints must be obeyed: the controller at each node is only allowed to use information from its two neighbouring nodes, and communication speed is restricted to be the same as the propagation speed. Directly applying the constraints to the closed-loop map renders the standard SLS problem infeasible (``Constrained CL map'' in Table \ref{table:lqr_tables}); the algorithm cannot find a controller that meets the constraints. We use the virtual localization technique introduced in \cite{Matni2018} to synthesize a controller that meets these constraints (``Virtually local'' in Table \ref{table:lqr_tables}), while relaxing the constraints on the closed-loop map. We then apply our proposed two-step procedure. First, we synthesize the desired closed-loop maps ($\mathbf{\Phi}_x$, $\mathbf{\Phi}_u$) via SLS without communication and locality constraints. We use these closed-loop maps to implement a centralized controller for comparison purposes (``FIR centralized'' in Table \ref{table:lqr_tables}). We then synthesize a controller subject to the communication and locality constraints (``Two-step'' in Table \ref{table:lqr_tables}), using (\ref{eq:actual_relax}) with $\mathcal{L}_{1}$ regularization. We synthesize one low-order controller with order $T_c=2$, and one full-order controller with $T_c=T$. For all controllers, we evaluate the LQR cost, the spectral radius of the internal dynamics, and the $\mathcal{L}_{1}$ norm of the implementation matrices. The LQR cost is normalized by the optimal infinite-horizon LQR cost. Results are shown in Table \ref{table:lqr_tables}. \begin{table}[htbp] \caption{Comparison of LQR costs} \label{table:lqr_tables} \begin{center} \begin{tabular}{|l|l|l|l|} \hline Controller & LQR cost & Spectral radius & $\mathcal{L}_{1}$ norm \\ \hline FIR centralized & 1.001 & 0.214 & 9.688 \\ \hline Constrained CL map & \multicolumn{3}{c}{\textit{Infeasible}} \vline \\ \hline Virtually local & 1.294 & 0.847 & 9.704 \\ \hline Two-step, $T_c=T$ & 1.033 & 0.876 & 1.495 \\ \hline Two-step, $T_c=2$ & 1.034 & 0.851 & 1.426 \\ \hline \end{tabular} \end{center} \end{table} In this example, both the full-order and the low-order controller (``Two-step'') give an LQR cost increase of about 3$\%$ over the optimal infinite-horizon controller. In contrast, the virtually local controller incurs a cost increase of nearly 30$\%$. All synthesized controllers are internally stable, with spectral radius less than one. The centralized controller has a lower spectral radius than the constrained controllers, which have comparable spectral radii. Additionally, both of our controllers attain an $\mathcal{L}_{1}$ norm that is very close to the $\mathcal{L}_{1}$ norm achieved in the previous example, despite much more severe constraints. Overall, our proposed two-step synthesis procedure generates a controller that performs better than the controller generated by existing techniques, without sacrificing internal stability margins. Interestingly, the low-order controller performs almost as well as the full-order controller, with only 0.1$\%$ performance degradation. This suggests that, in this case, highly delayed information (which corresponds to higher-order terms of the implementation matrices) is not very useful to the controller. \section{IMPLEMENTATION MATRICES} \label{sec:clim} \subsection{Controllers and closed-loop maps} \begin{theorem} \label{thm:unique_k} Let ($\mathbf{\Phi}_x$, $\mathbf{\Phi}_u$) be stable closed-loop maps. The only linear controller $\mathbf{K}$ (i.e.
$\mathbf{u}=\mathbf{K}\mathbf{x}$) that achieves these closed-loop maps is $\mathbf{K}=\mathbf{\Phi}_u\mathbf{\Phi}_x^{-1}$. \end{theorem} \begin{proof} By Theorem 4.1 in \cite{Anderson2019}, $\mathbf{K}=\mathbf{\Phi}_u\mathbf{\Phi}_x^{-1}$ achieves the closed-loop maps. We show uniqueness by contradiction. Assume there is another linear controller $\mathbf{K_1}$, $\mathbf{K_1} \neq \mathbf{K}$, that also achieves the desired closed-loop maps. Since both $\mathbf{K}$ and $\mathbf{K_1}$ achieve ($\mathbf{\Phi}_x$, $\mathbf{\Phi}_u$), \begin{subequations} \begin{equation} \label{eq:r_k1k2} \mathbf{\Phi}_x = (zI - A - B\mathbf{K_1})^{-1} = (zI - A - B\mathbf{K})^{-1} \end{equation} \begin{equation} \label{eq:m_k1k2} \mathbf{\Phi}_u = \mathbf{K_1}(zI - A - B\mathbf{K_1})^{-1} = \mathbf{K}(zI - A - B\mathbf{K})^{-1} \end{equation} \end{subequations} Substituting (\ref{eq:r_k1k2}) into (\ref{eq:m_k1k2}) gives \begin{equation} \mathbf{K_1}\mathbf{\Phi}_x = \mathbf{K}\mathbf{\Phi}_x \end{equation} Since $\mathbf{\Phi}_x$ is invertible, this implies that $\mathbf{K_1}=\mathbf{K}$. Contradiction! \qedsymbol \end{proof} Theorem \ref{thm:unique_k}, along with the definitions from (\ref{eq:phi_definitions}), shows a one-to-one mapping between ($\mathbf{\Phi}_x$, $\mathbf{\Phi}_u$) and $\mathbf{K}$. However, the linear controller $\mathbf{K}$ can be implemented in a variety of ways. For example, we could directly implement $\mathbf{u}=\mathbf{K}\mathbf{x}$; we could also implement a linear controller using the structure shown in Figure \ref{fig:blockdiag}. In the original SLS framework, the latter is used to avoid direct matrix inversion of $\mathbf{\Phi}_x$. \subsection{Implementing closed-loop maps} For the controller structure defined in Fig. \ref{fig:blockdiag}, let the controller implemented by ($\mathbf{R_c}$, $\mathbf{M_c}$) achieve closed-loop maps ($\tilde{\mathbf{\Phi}}_x$, $\tilde{\mathbf{\Phi}}_u$). We define the following terminology: \begin{defn} ($\mathbf{R_c}$, $\mathbf{M_c}$) are the \textit{implementation transfer matrices} for the closed-loop maps ($\tilde{\mathbf{\Phi}}_x$, $\tilde{\mathbf{\Phi}}_u$). We will refer to them as \textit{implementation matrices}. \end{defn} \begin{defn} We call ($\tilde{\mathbf{\Phi}}_x$, $\tilde{\mathbf{\Phi}}_u$) the \textit{implemented closed-loop maps} of the controller ($\mathbf{R_c}$, $\mathbf{M_c}$). \end{defn} The implemented closed-loop maps are found by combining (\ref{eq:impl_eqns_time}) and (\ref{eq:dynamics}) as done in \cite{Ho2019}: \begin{equation} \label{eq:impl_cl_maps} \begin{bmatrix}\tilde{\mathbf{\Phi}}_x \\ \tilde{\mathbf{\Phi}}_u\end{bmatrix} = \begin{bmatrix}\Rc \\ \Mc\end{bmatrix} \mathbf{\Delta_c}^{-1} \end{equation} where $\mathbf{\Delta_c}$ is a helper variable defined as \begin{equation} \label{eq:Delta_c_freq} \mathbf{\Delta_c} = \begin{bmatrix}zI-A && -B\end{bmatrix} \begin{bmatrix}\Rc \\ \Mc\end{bmatrix} \end{equation} Note that $\mathbf{\Delta_c}$ can also be written as $I+\mathbf{\Delta}$. This is the same formulation used in (4.22) of \cite{Anderson2019}, modulo notational differences (we use $\mathbf{R_c}$ and $\mathbf{M_c}$ instead of $\hat{\mathbf{\Phi}}_x$, $\hat{\mathbf{\Phi}}_u$). $\mathbf{\Delta_c}$ is invertible since its leading spectral element, $I$, is invertible. Our analysis largely focuses on closed-loop maps ($\mathbf{\Phi}_x, \mathbf{\Phi}_u$) instead of the controller $\mathbf{K}$.
However, due to the one-to-one mapping between controller and closed-loop maps, we can also view ($\mathbf{R_c}$, $\mathbf{M_c}$) as implementation matrices for the controller $\mathbf{K}=\mathbf{\Phi}_u\mathbf{\Phi}_x^{-1}$. \begin{theorem} \label{thm:cl_impl_matrices} For $R_c(1)=I$, ($\mathbf{R_c}$, $\mathbf{M_c}$) are implementation matrices for ($\mathbf{\Phi}_x$, $\mathbf{\Phi}_u$) \textit{if and only if} they satisfy \begin{equation} \label{eq:implicit_constr} \begin{bmatrix}\Rc \\ \Mc\end{bmatrix} = \begin{bmatrix}\R \\ \M\end{bmatrix} \begin{bmatrix}zI-A && -B\end{bmatrix} \begin{bmatrix}\Rc \\ \Mc\end{bmatrix} \end{equation} \end{theorem} \begin{proof} \textit{Necessity}. If ($\mathbf{R_c}$, $\mathbf{M_c}$) are implementation matrices for ($\mathbf{\Phi}_x$, $\mathbf{\Phi}_u$), then we require \begin{equation} \label{eq:equal_maps} \begin{bmatrix} \tilde{\mathbf{\Phi}}_x \\ \tilde{\mathbf{\Phi}}_u \end{bmatrix} = \begin{bmatrix}\R \\ \M\end{bmatrix} \end{equation} Substituting (\ref{eq:impl_cl_maps}) into (\ref{eq:equal_maps}) and multiplying by $\mathbf{\Delta_c}$, then writing out $\mathbf{\Delta_c}$ in terms of ($A$, $B$, $\mathbf{R_c}$, $\mathbf{M_c}$), gives (\ref{eq:implicit_constr}). \textit{Sufficiency}. If ($\mathbf{R_c}$, $\mathbf{M_c}$) satisfy (\ref{eq:implicit_constr}), we can substitute (\ref{eq:implicit_constr}) into (\ref{eq:impl_cl_maps}) to conclude that ($\tilde{\mathbf{\Phi}}_x$, $\tilde{\mathbf{\Phi}}_u$) = ($\mathbf{\Phi}_x$, $\mathbf{\Phi}_u$), i.e. ($\mathbf{R_c}$, $\mathbf{M_c}$) are implementation matrices for ($\mathbf{\Phi}_x$, $\mathbf{\Phi}_u$). \qedsymbol \end{proof} This constraint describes an affine subspace of implementation matrices for ($\mathbf{\Phi}_x$, $\mathbf{\Phi}_u$). \begin{corollary} \label{corollary:m1} If ($\mathbf{R_c}$, $\mathbf{M_c}$) are implementation matrices for ($\mathbf{\Phi}_x$, $\mathbf{\Phi}_u$), then the first spectral components of $\mathbf{\Phi}_u$ and $\mathbf{M_c}$ are equal, i.e. $M_c(1)$ = $\Phi_u(1)$. \end{corollary} This equivalence arises directly from writing (\ref{eq:implicit_constr}) in terms of its spectral elements. \begin{corollary} \label{corollary:self_clim} For $T_c \geq T$, ($\mathbf{\Phi}_x$, $\mathbf{\Phi}_u$) are implementation matrices for themselves. \end{corollary} ($\mathbf{\Phi}_x$, $\mathbf{\Phi}_u$) are used as implementation matrices in \cite{Anderson2019}. \begin{corollary} If ($\mathbf{R_c}$, $\mathbf{M_c}$) are implementation matrices for ($\mathbf{\Phi}_x$, $\mathbf{\Phi}_u$), then $\mathbf{K} = \mathbf{\Phi}_u\mathbf{\Phi}_x^{-1} = \mathbf{M_c}\mathbf{R_c}^{-1}$. \end{corollary} \subsection{Existence of solutions} To better understand the dimension of the space of implementation matrices, we rearrange the constraint (\ref{eq:implicit_constr}) so that the variables ($\mathbf{R_c}$, $\mathbf{M_c}$) appear on only one side of the constraint.
Rewrite $\mathbf{\Delta_c}$ in block-matrix form: \begin{equation} \begin{bmatrix} \Delta_c(0) \\ \Delta_c(1) \\ \vdots \\ \Delta_c(T_c) \end{bmatrix} = \begin{bmatrix} I & & & 0 & \\ -A & I & & -B & & \\ & \ddots & \ddots & & \ddots & \\ & & -A & & & -B \end{bmatrix} \begin{bmatrix} R_c(1) \\ \vdots \\ R_c(T_c) \\ M_c(1) \\ \vdots \\ M_c(T_c) \end{bmatrix} \end{equation} Rewrite the right hand side of (\ref{eq:implicit_constr}) in block-matrix form: \begin{equation} \begin{bmatrix} R_c(1) \\ \vdots \\ \vdots \\ R_c(T_c) \\ 0 \\ \vdots \\ 0 \end{bmatrix} = \begin{bmatrix} \Phi_x(1) \\ \Phi_x(2) & \ddots \\ \vdots \\ \Phi_x(T) \\ & \ddots \\ & & \Phi_x(T) \end{bmatrix} \begin{bmatrix} \Delta_c(0) \\ \Delta_c(1) \\ \vdots \\ \Delta_c(T_c) \end{bmatrix} \end{equation} We show only the formulation for $\mathbf{R_c}$; the formulation for $\mathbf{M_c}$ is identical but with $\mathbf{\Phi}_u$ and $\mathbf{M_c}$ instead of $\mathbf{\Phi}_x$ and $\mathbf{R_c}$. Using the block-matrix formulations, we can rearrange (\ref{eq:implicit_constr}) into a constraint of the form \begin{subequations} \label{eq:explicit_constr} \begin{equation} \label{eq:constr_FvG} Fv = G \end{equation} \begin{equation} v = \begin{bmatrix} R_c(2) \\ \vdots \\ R_c(T_c) \\ M_c(1) \\ \vdots \\ M_c(T_c) \end{bmatrix} \end{equation} \end{subequations} where $F$ and $G$ are matrices that do not depend on $\mathbf{R_c}$ and $\mathbf{M_c}$. The total number of constraints is $(T_c+T)(m+n)$. \begin{lemma} \label{lemma:existence_solutions} The implementation constraints (as defined in (\ref{eq:implicit_constr})) are feasible \textit{if and only if} $\mathrm{rank}(F) = \mathrm{rank}(F | G)$. If feasible, the solution space has dimension $dim(\mathrm{null}(F)) \times n$, where $n$ is the number of states in the system. \end{lemma} \begin{proof} This result is a direct application of the Rouch\'e-Capelli theorem to the linear system defined in (\ref{eq:explicit_constr}). \qedsymbol \end{proof} Corollary \ref{corollary:self_clim} states that (\ref{eq:implicit_constr}) has at least one solution for $T_c \geq T$. When $T_c < T$, we can check the rank of $F$ and $[F | G]$ and calculate the dimension of the solution space if it exists. \section{INTRODUCTION} \label{sec:introduction} Large-scale distributed cyberphysical systems (e.g. power grids, intelligent transportation systems) are composed of numerous local controllers that exchange local information via some communication network. The information that each local controller is able to obtain is limited by properties of the communication network, e.g. delay. It is a challenge to scalably synthesize optimal local controllers subject to the limitations of the communication network \cite{Ho1971,Mahajan2012, Rotkowitz2005, Bamieh2002, Bamieh2005, Nayyar2013}. The recently developed \textit{System Level Synthesis} (SLS) framework addresses this challenge by shifting the optimization from the space of available controllers to the space of achievable system closed-loop maps \cite{Anderson2019}. In doing so, it allows the problem to be decomposed into sub-problems to be solved in parallel, resulting in a synthesis procedure with $O(1)$ complexity \cite{Wang2018}. In the original SLS framework, the closed-loop maps themselves are used to implement the controller, and thus any constraints applied to the controller are directly enforced on the closed-loop response as well. 
However, the abovementioned communication limitations motivate constraints on \textit{controllers}, not closed-loop maps; by applying these constraints on the closed-loop response, we unnecessarily limit the space over which we can search for solutions. Standard SLS is infeasible under excessive communication constraints. \cite{Matni2018} addresses this by searching over approximate closed-loop maps instead of exact closed-loop maps; constraints are imposed on the approximate closed-loop maps. We propose an alternative two-step procedure, as follows: \begin{enumerate} \item Synthesize the desired closed-loop response, subject to closed-loop constraints. This can be done using SLS or any other linear synthesis method (Proposition \ref{prop:all_linear_ctrllers}). \item Synthesize the controller, subject to controller constraints. \end{enumerate} To fully separate closed-loop map constraints from controller constraints, we require a controller that is implemented using transfer matrices \textit{other} than the closed-loop maps. We define the space of such matrices in Theorem \ref{thm:cl_impl_matrices} and give conditions for their existence in Lemma \ref{lemma:existence_solutions}. The main contribution of this paper is to introduce the controller synthesis step of the design procedure and demonstrate its importance. We show that our proposed two-step synthesis allows us to design low-cost, distributed controllers that were unavailable to us in the previous framework. Additionally, the controller synthesis problem can be decomposed into parallelizable sub-problems, much like the original SLS problem. \section{PRELIMINARIES} \label{sec:preliminaries} \subsection{Notation} We use italicized lower-case letters (e.g. $x_t$) to denote vectors in the time domain. We use italicized upper-case letters (e.g. $A$) to denote constant matrices. We use superscripts to denote individual matrix elements (e.g. $A^{i,j}$). We use boldface lower and upper case letters (e.g. $\mathbf{x}$, $\mathbf{\Phi}_x$, $\mathbf{R_c}$) to denote signals and transfer matrices in the frequency domain. We use $R_c(k)$ to denote the $k$th spectral component of $\mathbf{R_c}$, i.e. $\mathbf{R_c}(z) = \sum_{k=0}^{\infty} R_c(k)z^{-k}$. In this paper, we will restrict ourselves to strictly proper finite-impulse-response (FIR) transfer matrices, i.e. $\mathbf{R_c}(z) = \sum_{k=1}^{T} R_c(k)z^{-k}$, $T \in \mathbb{Z}_+$. \subsection{System setup} We use the same setup as in (2.1) of \cite{Anderson2019}: \begin{equation} \label{eq:dynamics} x_{t+1} = Ax_t + Bu_t + w_t \end{equation} where $x$, $w$ $\in \mathbb{R}^n$ and $u$ $\in \mathbb{R}^m$. In this paper we focus on the time-invariant case (i.e. $A$, $B$ have no time-dependence) with state feedback.
$\mathbf{\Phi}_x$ and $\mathbf{\Phi}_u$ are the closed-loop maps from $w$ to $x$ and $u$, with FIR time horizon $T$: \begin{equation} \label{eq:clmaps_freq} \begin{bmatrix}\mathbf{x} \\ \mathbf{u} \end{bmatrix} = \begin{bmatrix}\mathbf{\Phi}_x \\ \mathbf{\Phi}_u \end{bmatrix}\mathbf{w} \end{equation} \subsection{Controller implementation} \begin{figure} \centering \begin{tikzpicture} \node [point, name=input] {}; \node [sum, right of=input, node distance=1.2cm] (sum) {}; \node [point, right of=sum, node distance=2.5cm] (pt1) {}; \node [block, right of=sum, node distance=3.5cm] (zM) {$\mathbf{zM_c}$}; \node [point, right of=zM, node distance=1.2cm, name=output] {}; \draw [->] (input) -- node[above] {$\mathbf{x}$} (sum); \draw [-] (sum) -- node[above, name=delta] {$\boldsymbol{\hat{\delta}}$} (pt1); \draw [->] (pt1) -- node[] {} (zM); \draw [->] (zM) -- node[above] {$\mathbf{u}$} (output); \node [point, below of=sum, node distance=1cm] (pt2) {}; \node [point, below of=pt1, node distance=1cm] (pt3) {}; \node [block, below of=delta, node distance=1.29cm] (IzR) {$\mathbf{I-zR_c}$}; \draw [-] (pt1) -- node {} (pt3); \draw [-] (pt3) -- node {} (IzR); \draw [-] (IzR) -- node {} (pt2); \draw [->] (pt2) -- node[left] {$\mathbf{-\hat{x}}$} (sum); \end{tikzpicture} \caption{Implementation of state feedback controller} \label{fig:blockdiag} \end{figure} Fig. \ref{fig:blockdiag} shows the controller implementation. $\mathbf{R_c}$ and $\mathbf{M_c}$ are the implementation matrices, with order (i.e. FIR time horizon) $T_c$. The controller includes two internal signals; $\mathbf{\hat{x}}$ and $\boldsymbol{\hat{\delta}}$. The equations describing the controller are \begin{subequations} \label{eq:impl_eqns_time} \begin{equation} \label{eq:impl_delta_time} \hat{\delta}_t = x_t - \sum_{k=2}^{T_c} R_c(k) \hat{\delta}_{t-k+1} \end{equation} \begin{equation} \label{eq:impl_u_time} u_t = \sum_{k=1}^{T_c} M_c(k) \hat{\delta}_{t-k+1} \end{equation} \end{subequations} where (\ref{eq:impl_delta_time}) assumes that $R_c(1)$ is the identity. For a more detailed derivation, refer to \cite{Ho2019}. The corresponding frequency-domain equations are \begin{subequations} \label{eq:impl_eqns} \begin{equation} \label{eq:impl_delta_freq} \boldsymbol{\hat{\delta}} = \mathbf{x+(I-zR_c)}\boldsymbol{\hat{\delta}} \end{equation} \begin{equation} \label{eq:impl_x_freq} \mathbf{x} = z\mathbf{R_c}\boldsymbol{\hat{\delta}} \end{equation} \begin{equation} \label{eq:impl_u_freq} \mathbf{u} = z\mathbf{M_c}\boldsymbol{\hat{\delta}} \end{equation} \end{subequations} \begin{prop} \label{prop:all_linear_ctrllers} Any linear controller (i.e. $\mathbf{u}=\mathbf{K}\mathbf{x}$) can be implemented using the controller structure defined in Fig. \ref{fig:blockdiag}. \end{prop} \begin{proof} We can construct closed-loop maps $\mathbf{\Phi}_x$ and $\mathbf{\Phi}_u$ directly from $\mathbf{K}$, as shown in (4.4) of \cite{Anderson2019}: \begin{subequations} \label{eq:phi_definitions} \begin{equation} \mathbf{\Phi}_x = (zI-A-B\mathbf{K})^{-1} \end{equation} \begin{equation} \mathbf{\Phi}_u = \mathbf{K}(zI-A-B\mathbf{K})^{-1} \end{equation} \end{subequations} We can then set $\mathbf{R_c}=\mathbf{\Phi}_x$ and $\mathbf{M_c}=\mathbf{\Phi}_u$ in (\ref{eq:impl_eqns}), which gives back the original controller $\mathbf{u}=\mathbf{K}\mathbf{x}$. \qedsymbol \end{proof} \section{STABILITY} \label{sec:stability} \subsection{Internal dynamics} \label{sec:int_dynamics} The system is internally stable if the dynamics of $\hat{\delta}$, the internal signal, are stable. 
By substituting (\ref{eq:impl_eqns_time}) into (\ref{eq:dynamics}) and rearranging, we can obtain internal dynamics of the form \begin{subequations} \begin{equation} z_t = \begin{bmatrix} \hat{\delta}_{t-T_c+1} \\ \vdots \\ \hat{\delta}_{t-1} \\ \hat{\delta}_t \end{bmatrix}, \quad z_{t+1} = A_zz_t \end{equation} \begin{equation} A_z = \begin{bmatrix} 0 & I & \ldots & 0 & 0 \\ \vdots & & & \ddots \\ 0 & 0 & \ldots & 0 & I \\ -\Delta_c(T_c) & & \ldots & & -\Delta_c(1) \end{bmatrix} \end{equation} \end{subequations} \subsection{Stability check} \label{sec:stability_check} We can verify internal stability \textit{a posteriori} by checking that $A_z$ is stable. Alternatively, a sufficient condition for internal stability is $\|\mathbf{\Delta}\| < 1$ \cite{Anderson2019}. The stability of $A_z$ can be checked in a distributed manner. First, a helpful proposition: \begin{prop} \label{prop:distr_check} Let $\|\cdot \|$ be an induced matrix norm. For $A \in \mathbb{R}^{n \times n}$, if $\exists m > 0$ s.t. $\|A^m\| < 1$, then $A$ is stable. \end{prop} \begin{proof} Let $\rho = \|A^m\|^{1/m}$, $\rho \in [0, 1)$. Using norm submultiplicativity and some algebra, we can show that $\forall t > m$, $\|A^t\| \leq C\rho^t$ where $C$ is some constant. Using this upper bound and induced norm properties, we can show that $\forall x_o \in \mathbb{R}^n$, $\lim_{t\to\infty}\|A^tx_o\| = 0$. This is the definition of stability in the discrete time setting. \qedsymbol \end{proof} Let each processor store $A_z$ and some columns of $A_z^k$, denoted $A_{z(i:j)}^k$. Overall, every column of $A_z^k$ is stored on some processor. The stability check procedure is as follows, starting with $k=1$: \begin{enumerate} \item Calculate $A_{z(i:j)}^k$ by multiplying $A_z$ and $A_{z(i:j)}^{k-1}$ \item Check the induced 1-to-1 norm of $A_{z(i:j)}^k$ \item Consensus on whether a termination condition has been met. If no termination condition is met, increment $k$ and return to Step 1 \end{enumerate} The clear termination condition is $\|A_z^k\| < 1$; then, $A_z$ is certified to be stable by Proposition \ref{prop:distr_check}. We suggest two additional termination conditions: \begin{itemize} \item $\|A_z^k\| > M$, where $M$ is some predetermined threshold. Since $\|A_z^k\|$ corresponds to the amplitude of the transient response, this termination condition corresponds to finding an unacceptably large transient response \item $k > k_{max}$, where $k_{max}$ is some predetermined maximum number of iterations \end{itemize} Both conditions would indicate that the stability check failed to certify stability. Since we select a column-wise separable norm, the entire procedure can be distributed. The complexity per iteration scales quadratically with $n$, under the conservative assumption that each node has at least one processor. For the system in Section \ref{sec:examples}, this procedure certifies stability in 7 iterations for the low-order controller and 32 iterations for the full-order controller.
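The following is a centralized sketch of this check (our illustrative code, not the paper's implementation); the distributed version would split the columns of $A_z^k$ across processors, exploiting the column-wise separability of the induced 1-to-1 norm.
\begin{verbatim}
import numpy as np

def certify_stability(Az, M=1e4, k_max=200):
    # Certify rho(Az) < 1 by finding k with ||Az^k||_{1->1} < 1
    # (Proposition distr_check); the induced 1-to-1 norm is the
    # maximum absolute column sum.
    P = np.eye(Az.shape[0])
    for k in range(1, k_max + 1):
        P = Az @ P                           # P = Az^k
        norm = np.abs(P).sum(axis=0).max()   # induced 1-to-1 norm
        if norm < 1:
            return True, k                   # certified stable
        if norm > M:
            return False, k                  # unacceptably large transient
    return False, k_max                      # inconclusive within k_max
\end{verbatim}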
\section{Introduction} Internet-based Distributed Computing Systems (DCS) have become an essential backbone of the modern digital economy, society, and industrial operations. The emergence of the Internet of Things (IoT), diverse mobile applications, smart grids, smart industries, and smart cities has resulted in massive amounts of data generation, increasing the demand for computing resources \cite{gubbi2013internet} to process this data and derive valuable insights for users and businesses. According to a report from Norton \cite{norton2019usa}, 21 billion IoT devices will be connected to the internet by 2025, creating substantial economic opportunities. Computing models such as Cloud and Edge computing have revolutionised the way services are delivered and consumed by providing flexible, on-demand access to services with a pay-as-you-go model. Besides, new application and execution models like micro-services and serverless or Function as a Service (FaaS) computing \cite{baldini2017serverless} are becoming mainstream, significantly reducing the complexity of designing and deploying software components. On the other hand, this increased connectivity and these heterogeneous workloads demand distinct Quality of Service (QoS) levels to satisfy application requirements \cite{gan2019open, dastjerdi2016fog, fox2009above}. These developments have led to hyper-scale data centres and complex multi-tier computing infrastructures that require new, innovative approaches to manage resources efficiently and provide reliable services. The deployment of 5G and related infrastructure, such as dynamic network slicing for high-bandwidth, high-throughput, and low-latency applications, has only increased these challenges. Resource Management Systems (RMS) in DCS are middleware platforms that perform tasks such as resource provisioning, monitoring, and workload scheduling, among many others. Building an efficient RMS for present and imminent distributed systems is challenging for many reasons. Significantly, the new class of applications, networks, and Cyber-Physical Systems (CPS) such as data centres are enormously complex, and fine-tuning their parameters manually is infeasible. For example, "Just $10$ pieces of equipment, each with $10$ settings, would have $10$ to the $10^{th}$ power, or $10$ billion, possible configurations — a set of possibilities far beyond the ability of anyone to test for real" \cite{schwartz2019allen, amodei2018ai}. Emerging network technologies, including 5G and satellite networks such as Amazon's Project Kuiper and SpaceX's StarLink, have opened up new dimensions \cite{giambene2018satellite} and opportunities for developing advanced applications that require high bandwidth, high availability, and low latency. The availability of massive data and advancements in computing capabilities have led to a resurgence of Artificial Intelligence (AI) techniques, driving innovation across domains such as healthcare, autonomous driving, and robotics \cite{giambene2018satellite, russell2002artificial}. Training AI models itself consumes vast resources: the compute used to train the largest AI models is growing exponentially, doubling every 3.4 months (compared to Moore's Law's 2-year doubling period) \cite{schwartz2019allen}. Cloud and Edge infrastructures deliver the resources (compute, network, storage) required to accommodate these rapid changes across different domains, managed by third-party service providers.
These infrastructures are highly distributed and large-scale, and contain numerous heterogeneous resources. Furthermore, they are multi-tenant, with users sharing the underlying resources and exhibiting diverse workload characteristics. Thus, meeting performance requirements in such a shared environment while increasing resource utilisation is a critical and challenging problem for RMS \cite{buyya2018manifesto}. Existing RMS techniques, from operating systems to large-scale DCS, are predominantly designed and built using preset threshold-based rules or heuristics. These solutions are static and often reactive \cite{bianchini2020toward}; they work well in the general case but cannot adjust to dynamic contexts \cite{dean2017machine}. Moreover, once deployed, they largely fail to adapt and improve themselves at runtime. In complex dynamic environments (such as Cloud and Edge), they are incapable of capturing the infrastructure and workload complexities and hence fall short. Consequently, AI-centric approaches built on actual data and measurements collected from the respective DCS environments are more promising, perform better, and adapt to dynamic contexts. Unlike heuristics, these are data-driven models built from historical data. Accordingly, AI-centric methods can employ proactive measures by foreseeing potential outcomes based on current conditions. For instance, a static heuristic solution for scaling resources uses workload and system-load parameters to trigger the scaling mechanism. However, this reactive scaling diminishes the user experience for a certain period (due to the time required for system boot-up and application start-up). In contrast, an AI-centric RMS enabled by a data-driven Machine Learning (ML) model can predict future workload demand and scale the resources up or down beforehand as needed. Such techniques are highly valuable both to users, who obtain better QoS, and to service providers, who can offer reliable services and retain their competitiveness in the market. Moreover, methods like Reinforcement Learning (RL) \cite{dean2017machine, sutton2018reinforcement} can improve RMS decisions and policies at runtime by using monitoring and feedback data, responding to the current demand, workload, and underlying system status. AI-centric RMS in DCS is more feasible now than ever, for multiple reasons. First, AI techniques have matured and have proven efficient in many critical domains such as computer vision, natural language processing, healthcare applications, and autonomous vehicles. Second, most DCS platforms already generate enormous amounts of data, currently pushed into logs for debugging or failure-cause exploration. For example, Cyber-Physical Systems (CPS) in data centres already have hundreds of onboard CPU and external sensors monitoring workload, energy, temperature, and weather parameters. Such data is useful for building ML models cost-effectively. Finally, the increasing scale and complexity of computing infrastructure require automated resource management systems that make decisions based on data and insights gained from experience, a task to which AI models are well suited.
In this regard, this paper makes the following key contributions: (1) it presents the evolution of DCS and state-of-the-art RMS techniques, (2) it enlists the challenges associated with data-driven RMS methods, (3) it identifies future research directions and points out the different tasks in which AI-centric methods can be applied efficiently, (4) it proposes a conceptual data-driven RMS model, and (5) it demonstrates two real-time use cases of data-driven AI methods (related to energy-efficient GPU clock configuration and the management of resources in data centres). The rest of the paper is organised as follows. Section \RomanNumeralCaps{2} gives an overview of DCS evolution and state-of-the-art practices in RMS. Section \RomanNumeralCaps{3} identifies the challenges associated with data-driven methods. Section \RomanNumeralCaps{4} outlines future research directions. In Section \RomanNumeralCaps{5}, a conceptual AI-centric RMS model is presented, and Section \RomanNumeralCaps{6} demonstrates the feasibility of AI-centric methods using two real-time use cases. Finally, the conclusion is drawn in Section \RomanNumeralCaps{7}. \section{DCS Evolution and the State-of-the-Art} An overview of the evolution of primary DCS is given in Figure \ref{fig:overview}. Early DCS were predominantly used for scientific applications composed of parallel tasks (distributed jobs in grid computing) and executed on clusters or supercomputing systems. The development of technologies such as service-oriented computing (Web services, REST, SOAP, etc.) and virtualisation, together with the demand for utility-oriented services, created today's Cloud computing-based data centres. However, the next decade of DCS will be driven by IoT-based applications and scenarios that need to process enormous amounts of data and derive meaningful intelligence and business value from it. These IoT-based applications consist of numerous sensors and computing nodes distributed across different network layers, from the Edge to the remote Cloud. They thus require an autonomic sense-connect-actuate model \cite{gubbi2013internet} in which application tasks are composed, deployed, and executed autonomously, demanding additional machine-to-machine interactions (compared to the current human-to-machine interactions). RMS should autonomously provision resources, schedule application tasks, and manage their demands for QoS and low latency. In parallel to these system advancements, application models have continued to evolve, creating new software design patterns like micro-services and execution models like serverless or Function as a Service (FaaS) computing. To that end, managing these modern resources and applications requires intelligent decisions enabled by AI-centric solutions. Although AI-centric RMS techniques are applicable to all the computing paradigms discussed here, we mainly keep our discussions and illustrations around the Cloud and Edge computing paradigms. \begin{figure*} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{./images/overview-ppt.png} \caption{An overview of contemporary DCS evolution (the timeline shows the approximate genesis of each system and when it became mainstream, with some overlap; the points shown for each dimension are representative rather than exhaustive and list only the important facets)} \label{fig:overview} \end{figure*} With the increased scale and complexity of next-generation DCS, traditional static or heuristic solutions are becoming inadequate.
These methods require careful hand-tuning and human intervention to adapt to dynamic environments \cite{dean2017machine}. Consequently, AI-centric data-driven solutions are promising, and there have been many attempts in recent years to address resource management problems with data-driven ML solutions \cite{bianchini2020toward}. For example, Google achieved a 40\% efficiency improvement in managing its cooling infrastructure using simple ML techniques and learning from historical data \cite{gao2014machine}. Many other works have explored problems such as device placement, scheduling, and application scaling using data-driven methods \cite{mirhoseini2017device}, \cite{tulifog}. At the system architecture level, \cite{ayers2019asmdb} used massive datasets of hardware performance counters and profiles collected from large-scale Google data centre servers and utilised this data to reason about, analyse, and mitigate front-end stalls in warehouse-scale systems. However, data-driven AI solutions for RMS are still in their infancy. They require meticulous attention to the challenges they pose, while the potential avenues for incorporating these methods must be identified. Moreover, it is essential to build general frameworks and standards for adopting AI solutions in resource management that are scalable and manageable. \section{Challenges}\label{Section:challenges} In this section, we identify and describe the critical issues associated with the adoption of AI solutions in the resource management of distributed computing systems. \subsection{Availability of Data} The quality of the data used to train the models determines the success of machine learning techniques. This data should also be available in large quantities, with enough features to cover all aspects of the environment \cite{cummins2017end, cano2019optimizing}. Within DCS, multiple challenges exist concerning the availability of such data. First, different resource abstraction platforms currently collect data at different granularities. Physical machine-level data from onboard sensors and counters is gathered and accessed by tools like the Intelligent Platform Management Interface (IPMI), while at a higher abstraction level, middleware platforms collect data related to workload levels, user information, and surrounding environmental conditions (temperature and cooling energy in the data centre). Network elements such as SDN controllers also collect data related to network load, traffic, and routing. Unifying these data and preprocessing them in a meaningful way is a complex and tedious task. The respective tools gather data in different formats, without common standards between them. Hence, building data pipelines that combine data from the various subsystems is crucial for the flexible adoption of ML solutions. Second, current monitoring systems collect data and push it into logging repositories to be used later for debugging. Converting this data into an ML-ready form requires monotonous data engineering. Hence, future systems should be explicitly designed to gather information that can be fed directly to ML models with minimal data engineering and preprocessing effort. Lastly, although several publicly available datasets provide workload traces, there are hardly any public datasets representing the underlying infrastructure, including physical resource configurations, energy footprints, and several other essential parameters (due to privacy concerns and NDAs).
Therefore, getting access to such data is a challenge that needs collaborative effort and data management standards from the relevant stakeholders, as well as standardised data formats and domain-specific frameworks \cite{portugal2016survey}. \subsection{Managing the Deployment of Models}\label{sec:managedeploy} Training ML models and performing inference at runtime require an expensive amount of computational resources. A significant challenge is managing the life cycle of ML models, including deciding how much to train and where to deploy the training modules in multi-tier computing architectures like Edge/Fog. Resources at the lower tiers have limited capabilities and should be allocated to the applications that need them; if these scarce resources are predominantly used to train models or run RL agents, latency-sensitive applications will experience resource starvation. On the other hand, if the models (RL agents) are trained or deployed in the resource-rich Cloud, the latency to push inference decisions or runtime feedback data to edge nodes shoots up, creating delay bottlenecks in RMS decisions. Furthermore, ML models tend to learn too much at the expense of massive computational resources. Therefore, innovative solutions are needed to decide how much learning is sufficient under specific constraints (resource budget, time budget, etc.) and to estimate context-aware adaptive accuracy thresholds for ML models \cite{toma2019adaptive}. To overcome this, techniques like transfer learning and distributed learning can be applied to reduce computational demands \cite{cano2019optimizing}. In addition, dedicated CPUs, GPUs, and domain-specific accelerators like Google's TPU, Intel's Habana, and FPGAs (Azure) can carry out the inference. \subsection{Non-Deterministic Outputs} Unlike statistical models, which are known for their deterministic outputs, ML models are intrinsically exploratory and depend on stochasticity for many of their operations, thus producing non-deterministic results. For example, artificial neural nets, which are the basic building blocks of many regression, classification, and Deep Learning (DL) algorithms, rely primarily on stochasticity for different operations (stochastic gradient descent, the exploration phase in RL). When run multiple times with the same inputs, they tend to approximate the results and produce different outputs \cite{russell2002artificial}. This may pose a severe challenge in DCS such as Edge and Cloud, where strict Service Level Agreements (SLAs) govern the delivery of services and require deterministic results. For example, if a service provider fixes a price under certain conditions using ML models, consumers expect the price to be the same at all times under similar settings. However, ML models may deviate in pricing due to stochasticity, creating transparency issues between users and service providers. Many recent works have addressed this issue and introduced techniques such as induced constraints in neural nets to produce deterministic outputs \cite{lee2019gradient}. Yet stochasticity is inherent to ML models and requires careful monitoring and control of their output. \subsection{Black-Box Decision Making} The decision-making process of ML models follows a completely black-box approach and fails to provide satisfactory justification for its decisions. The inherent probabilistic architectures and the enormous complexity of ML models make it hard to avoid black-box decisions.
This becomes more crucial in environments such as DCS, where users expect useful feedback and explanations for any action taken by the service provider; such explanations are instrumental in building trust between service providers and consumers. For instance, under high overload, a service provider may preempt a few resources from certain users at the expense of some SLA violations. However, choosing which users' resources should be preempted is crucial in business-driven environments: the decisions must be fair and accompanied by valid reasons. Many works have set out to build explainable ML models (Explainable AI, XAI) to address this issue \cite{arrieta2020explainable, gunning2017explainable}. However, this remains a challenging task. \subsection{Lightweight and Meaningful Semantics} The DCS environment, with heterogeneous resources across multiple tiers, accommodates different application services. RMS should interact with different resources, entities, and application services to manage resources efficiently. However, this requires semantic models that represent all these various entities meaningfully. Existing semantic models are either heavyweight or inadequate for such complex environments. Therefore, lightweight semantic models are needed to represent the resources, entities, applications, and services without introducing overhead \cite{bermudez2016iot}. \subsection{Complex Network Architectures, Overlays, and Upcoming Features} Network architectures across DCS and telecom networks are evolving rapidly using software-defined infrastructure, hierarchical overlay networks, Network Function Virtualization (NFV), and Virtual Network Functions (VNF). Commercial clouds like Amazon, Google, and Microsoft have recently partnered with telecom operators worldwide to deploy ultra-low-latency infrastructure (AWS Wavelength and Azure Edge Zone, for example) for emerging 5G networks. The explosion of data from these 5G deployments and the provisioning of high-bandwidth, high-throughput, and low-latency services through dynamic network slicing require complex orchestration of network functions \cite{zhang2017networkslice}. In future DCS, RMS needs to consider these complex network architectures, the overlap between telecom and public/private clouds, and service function orchestration to meet end-to-end bandwidth, throughput, and latency requirements. These architectures and implementations, in turn, generate enormous amounts of data at different levels of the hierarchical network architecture. As different types of data are generated at different abstraction levels, standardised, well-agreed-upon data formats and models for each aspect need to be developed. \subsection{Performance, Efficiency, and Domain Expertise} Many ML and RL algorithms face performance issues like the cold-start problem. Specifically, RL algorithms spend a vast portion of their initial phase in exploration before reaching optimal policies, creating an inefficient period in which decisions are suboptimal, even completely random or incorrect, leading to massive SLA violations \cite{cano2019optimizing}.
Beyond the cold start, RL-based approaches face several further challenges in the real world, including (1) the need to learn on the real system from limited samples, (2) safety constraints that should never, or at least rarely, be violated, (3) the need for reward functions that are unspecified, multi-objective, or risk-sensitive, and (4) inference that must happen in real time at the control frequency of the system \cite{dulac2019challenges}. In addition, AI models are compute-heavy and designed with a primary focus on accuracy optimisation, resulting in massive energy consumption \cite{schwartz2019allen}. Consequently, new approaches are needed to balance the trade-offs between accuracy, energy, and performance overhead. Furthermore, current ML algorithms, including neural network architectures and libraries, are primarily designed to solve computer vision problems. Adapting them to RMS tasks requires some degree of transformation of the way inputs and outputs are interpreted. Currently, many AI-centric RMS algorithms transform their problem space and then use simple heuristics to map the results back onto the RMS problems. Such complexities demand expertise from many related domains. Thus, newer approaches, algorithms, and standardised, domain-specific AI frameworks are required to adopt AI in RMS efficiently.

\section{Future Research Directions}

Despite the associated challenges, AI solutions provide many opportunities for RMS to incorporate these techniques and benefit from them. In this section, we explore different avenues where AI techniques can be applied to manage distributed systems resources.

\subsection{Data-driven Resource Provisioning and Scheduling}

Resource provisioning and scheduling are fundamental elements of an RMS. Usually, resources are virtualised; in particular, computing resources are delivered as Virtual Machines (VMs) or lightweight containers. Provisioning problems such as estimating the number of resources required for an application, co-locating workloads based on their resource consumption behaviours, and several others can be addressed using AI techniques. These techniques can be extended to special provisioning cases such as spot instances. Utilising spot instances for application execution requires carefully estimating the application run time (to avoid state corruption or loss of computation if resources are preempted) and accordingly deciding resource quantity and checkpointing logic. This may require building prediction models based on performance counters from previous executions or correlating with clusters from an existing knowledge base \cite{shashiccgrid2020}. In edge computing environments, the RMS should utilise resources from a multi-tier infrastructure, and selecting nodes from the different layers also requires intelligence and adaptation to application demands and infrastructure status. Furthermore, data-driven AI solutions can be used in scheduling tasks such as finding an efficient node, VM consolidation, migration, etc. Prediction models built on historical data, together with adaptive RL models, can be used to manage dynamic scheduling and resource provisioning.

\subsection{Managing Elasticity using Predictive Analytics}

Elasticity is an essential feature providing flexibility by scaling the resources up or down based on the applications' QoS requirements and budget constraints. Current approaches to elasticity are reactive: resources are scaled according to the current system load (in terms of the number of users and input requests), as in the minimal sketch below.
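The following sketch illustrates such a reactive threshold policy; the \texttt{Cluster} interface, thresholds, and cooldown value are hypothetical placeholders, not any cited system's API.
\begin{verbatim}
# Minimal reactive threshold-based autoscaler (illustrative sketch).

class Cluster:                      # minimal stub for illustration
    def __init__(self, n=2):
        self.n, self.util = n, 0.9
    def avg_cpu_utilisation(self): return self.util
    def size(self): return self.n
    def scale_out(self, k): self.n += k
    def scale_in(self, k): self.n -= k

UPPER, LOWER = 0.80, 0.30           # CPU utilisation thresholds
COOLDOWN = 300                      # seconds between scaling actions

def reactive_autoscale(cluster, last_action_ts, now):
    if now - last_action_ts < COOLDOWN:
        return last_action_ts       # avoid thrashing
    util = cluster.avg_cpu_utilisation()
    if util > UPPER:
        cluster.scale_out(1)        # add one instance
        return now
    if util < LOWER and cluster.size() > 1:
        cluster.scale_in(1)         # remove one instance
        return now
    return last_action_ts
\end{verbatim}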
However, such reactive measures degrade SLAs due to instance boot-up time and sudden load bursts. In contrast, forecasting the future load based on the application's past usage behaviour and proactively scaling the resources beforehand vastly improves SLAs and saves costs. Essentially, this requires time-series analysis to predict the future load, using methods such as ARIMA or more advanced RNN techniques such as LSTM networks, which have proven efficient at capturing temporal behaviours \cite{gan2019leveraging}. Such proactive measures enable service providers to manage demand response efficiently without compromising SLAs.

\subsection{Energy Efficiency and Carbon Footprint Management}

One of the major challenges of computing in recent years has been energy consumption. The increasing reliance on computing resources has created enormous energy, economic, and environmental issues. It is estimated that by 2025, data centres alone will consume around 20\% of global electricity and emit up to 5\% of the world's carbon emissions \cite{Lima2017}. Energy efficiency can be pursued across the computing stack, from managing hardware circuits to data-centre-level workload management. Recent studies have shown promising results for AI techniques in energy-optimised device frequency management \cite{shashiccgrid2020}, intelligent and energy-efficient workload management (scheduling, consolidation), reducing cooling energy by fine-tuning cooling parameters \cite{gao2014machine, ilager2019etas}, and executing applications within power budgets \cite{bianchini2020toward}. In addition, AI can be used effectively to minimise carbon footprints by forecasting renewable energy availability and shifting workloads across clouds accordingly. Each of these subproblems can be addressed using a combination of predictive and RL models, depending on the application scenario and requirements.

\subsection{Security and Privacy Management}

As cyber systems have become sophisticated and widely interconnected, preserving the privacy of data and securing resources from external threats has become quintessential. Dealing with security has implications far beyond resource management, including preserving privacy and complying with the rules of the respective jurisdiction. For instance, an RMS with user-level schedulers can classify input records and process privacy-sensitive records within local resource environments (e.g., a private cloud) and the others on public clouds. One such work was carried out at the University of Washington \cite{XUanekasecuroty}, where a deep learning method is used to classify medical records into sensitive and non-sensitive based on data privacy. The authors created a user-level scheduler for the Aneka Cloud application platform and were able to process sensitive medical records on their private cloud and non-sensitive records on the Amazon AWS EC2 public cloud. If resources are maliciously compromised, the RMS should adapt to the resulting security requirements. ML algorithms are already in widespread use in many aspects of security management, including AI-based Intrusion Detection Systems (IDS) to prevent unauthorised access and anomaly detection \cite{moghaddam2019acas,butun2015anomly} to identify deviations in application or resource behaviour. AI techniques, including Artificial Neural Networks (ANNs), ensemble learning, Bayesian networks, association rules, and classification techniques such as SVMs, can be utilised effectively to address these security-related problems \cite{buczak2015survey}.
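As a concrete illustration of ML-based anomaly detection on resource metrics, the sketch below fits an isolation forest to historical CPU/network samples and flags deviating behaviour; it is a minimal example, and the feature choice and synthetic data are assumptions for illustration only.
\begin{verbatim}
# Minimal anomaly detection on resource metrics (illustrative sketch).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Historical samples: columns = [cpu_util, net_mbps] (synthetic here).
normal = rng.normal(loc=[0.45, 120.0], scale=[0.08, 15.0],
                    size=(1000, 2))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# New observations, e.g. a burst suggesting compromise or misbehaviour.
new = np.array([[0.47, 118.0],      # typical
                [0.99, 940.0]])     # anomalous spike
flags = model.predict(new)          # +1 = normal, -1 = anomaly
for x, f in zip(new, flags):
    print(x, "ANOMALY" if f == -1 else "ok")
\end{verbatim}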
Such techniques can also be used to prevent Distributed Denial-of-Service (DDoS) attacks by analysing traffic patterns and filtering suspected traffic, hence preventing system failures \cite{yuan2017deepdefense}. These measures greatly help to manage resources securely, thus increasing the reliability of the system.

\subsection{Managing Cloud Economics}

Cloud economics is a complex problem and requires vast domain knowledge and expertise to price services adequately. It is also essential that consumers can easily understand pricing models and estimate the cost of their deployments. Current pricing models largely depend on subscription types, e.g., reserved, on-demand, or spot instances. The pricing for these subscription models is driven by standard economic principles such as auction mechanisms, cost-benefit analysis, and profit or revenue maximisation. These pricing problems are solved using techniques from Operations Research (OR) or stochastic game theory \cite{mistry2018economic}. However, such methods are mostly inflexible: they either overprice the services or result in a loss of revenue for cloud service providers. In this regard, ML models can forecast resource demand and, accordingly, excess resources can be pooled in the open market for consumers. In addition, pricing can be made more dynamic based on this forecasted demand response, which benefits both consumers and service providers.

\subsection{Generating Large-scale Data Sets}

Machine learning models require large amounts of training data for improved accuracy. However, access to large-scale data is limited due to privacy concerns and the lack of capabilities to generate large quantities of data from the infrastructure. To that end, AI models themselves can be used to create large-scale synthetic datasets that closely resemble real-world datasets. For instance, given a small quantity of data as input, Generative Adversarial Networks (GANs) can be used to produce large-scale data \cite{zhang2018generative}. Such methods are highly suitable for generating time-series data of DCS infrastructure. Moreover, they can also be leveraged to adequately complete incomplete datasets. Such large-scale data sets are necessary to train efficient predictive models and to bootstrap RL agents so that their policies reach a reasonable efficiency.

\subsection{Future System Architectures}

Cloud services have recently undergone a shift from monolithic applications to microservices, with hundreds or thousands of loosely coupled microservices comprising the end-to-end application. In \cite{gan2019open}, the authors explore the implications of these microservices for hardware and system architectures, the bottlenecks therein, and lessons for future data centre server design. Microservices affect the computation-to-communication ratio, as communication dominates and the amount of computation per microservice decreases. Similarly, microservices require revisiting whether big or small servers are preferable. In \cite{ayers2019asmdb}, the authors use an always-on, fleet-wide monitoring system to track front-end stalls and I-cache and D-cache misses (as cloud microservices, unlike traditional workloads, do not lend themselves to cache locality) across many thousands of servers in Google's warehouse-scale computers. The enormous amounts of data generated and analysed provide valuable feedback for the design of next-generation servers. Similarly, deep learning can be used to diagnose unpredictable performance in cloud systems.
Data from such monitoring systems can thus be invaluable for the hardware and system architectures of future DCS.

\subsection{Other Avenues}

Along with the aforementioned directions, AI-centric solutions can be applied to several other RMS tasks, including optimising the heuristics themselves \cite{cummins2017end}, network optimisations (e.g., TCP window size, SDN routing optimisation problems), and storage infrastructure management \cite{cano2019optimizing}. Moreover, learning-based systems can be extended across the computing system stack, from lower abstraction levels, including hardware design, compiler optimisations, and operating system policies, to higher-level interconnected distributed system platforms \cite{dean2017machine}.

\section{Conceptual Model for AI-centric RMS}

In an AI-centric Resource Management System (RMS), models need to be trained and deployed for the inference used by the RMS in its different tasks. However, integrating data-driven models into DCS platforms in a scalable and generic manner is challenging and is still at a conception stage. In this regard, as shown in Figure \ref{fig:conceptual_model}, we provide a high-level architectural model for such a data-driven RMS. It consists of three entities, explained below. \\ \textbf{Users/Applications:} Users requiring computing resources or services interact with the middleware using APIs or interfaces. \\ \textbf{AI-centric RMS Middleware:} This layer is responsible for the different tasks related to managing user requests and the underlying infrastructure. The AI-centric RMS tasks continuously interact with the data-driven models to obtain accurate and efficient decisions. The RMS performs various tasks, including provisioning the resources, scheduling them on appropriate nodes, monitoring at runtime, and dynamic optimisations such as migration and consolidation \cite{bianchini2020toward} to avoid potential SLA violations. Traditionally, these tasks are carried out by algorithms implemented within the RMS that execute heuristic or threshold-based policies. In the AI-centric RMS, by contrast, the individual RMS operations are aided by inputs from the data-driven models. The data-driven AI models are broadly categorised into two types: (1) predictive models and (2) adaptive RL models. In the former, models are trained offline using supervised or unsupervised ML algorithms on historical data collected from the DCS environment, including features from resources, entities, and application services. This data is stored in databases, and data engineering (preprocessing, cleaning, normalising) is performed to suit the AI models' requirements. The offline training can thus be done on remote cloud nodes to benefit from specialised, powerful computing resources, and the trained models can be deployed on specialised inference devices such as the Google Edge TPU and Intel Habana. Choosing where to deploy these ML models depends on where the RMS engine itself is deployed in the environment; this is in itself a challenging research topic, as described in Section \ref{sec:managedeploy}. In the latter case, runtime adaptive models such as Reinforcement Learning (RL) continue to improve their policies based on the agents' interactions and system feedback.
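Focusing on the predictive path described above, a minimal sketch of offline training followed by runtime inference is given below; the feature set, the target quantity (VM lifetime), and the file names are illustrative assumptions, and the training data is synthetic.
\begin{verbatim}
# Offline training and runtime inference of a predictive model
# (illustrative sketch; features and target are hypothetical).
import joblib
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# --- Offline phase (e.g. on a cloud node) ---
rng = np.random.default_rng(0)
X = rng.random((500, 3))    # [cpu_util, mem_util, req_rate] (synthetic)
y = 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 0.1, 500)  # lifetimes
model = GradientBoostingRegressor().fit(X, y)
joblib.dump(model, "lifetime_model.joblib")

# --- Runtime phase (e.g. inside the RMS middleware) ---
model = joblib.load("lifetime_model.joblib")
predicted_lifetime = model.predict([[0.62, 0.40, 0.35]])[0]
# The scheduler can now use `predicted_lifetime` in its decisions.
\end{verbatim}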
These RL models require both initial learning and runtime policy-improvement methods, updated after every episode (a period ending when a terminal state is reached). The RMS operations can interact with both the predictive and the RL-based data-driven models through RESTful APIs at runtime \cite{bianchini2020toward}. \\ \textbf{DCS Infrastructure:} The computing infrastructure comprises heterogeneous resources, including sensors, gateway servers, edge data centres, and remote clouds. Adopting data-driven AI-centric RMS models therefore requires a significant change in the way current RMS systems are designed and implemented, as well as monitoring agents, interfaces, and deployment policies that can be integrated easily into existing environments.

\begin{figure} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{./images/model.png} \caption{Conceptual Data-Driven RMS Model} \label{fig:conceptual_model} \end{figure}

\section{Demonstration Case Studies}

In this section, we present two use cases that have applied ML techniques to the following problems: (1) data-driven configuration of device frequencies for energy-efficient workload scheduling on cloud GPUs \cite{shashiccgrid2020}, and (2) data centre resource management using ML models \cite{bianchini2020toward, gao2014machine}.

\subsection{Data-Driven GPU Clock Configuration and Deadline-aware Scheduling}

\begin{figure} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{./images/systemmodel.PNG} \caption{System Model} \label{fig:system_model} \end{figure}

Graphics Processing Units (GPUs) have become the de facto computing platform for advanced compute-intensive applications such as video processing and autonomous cars. Additionally, ML models rely massively on GPUs for training, owing to their efficient SIMD architectures, which are highly suitable for parallel computations. However, the energy consumption of GPUs is a critical problem. Dynamic Voltage and Frequency Scaling (DVFS) is a widely used technique to reduce the dynamic power of GPUs. Yet, configuring the optimal clock frequency for given performance requirements is a non-trivial task due to the complex nonlinear relationship between an application's runtime performance characteristics, energy, and execution time. It becomes even more challenging when different applications behave distinctly under similar clock settings. Simple analytical solutions and standard GPU frequency-scaling heuristics fail to capture these intricacies and to scale the frequencies appropriately. In this regard, we propose a data-driven frequency-scaling technique that predicts the power and execution time of a given application over different clock settings. Using these prediction models, we further present a deadline-aware application scheduling algorithm that reduces energy consumption while meeting the application deadlines.

\begin{figure}[t] \captionsetup{justification=centering} \begin{subfigure}[t]{0.47\textwidth} \includegraphics[width=\linewidth]{./images/rmse_power_comparison1.pdf} \caption{Energy prediction} \label{fig:rmseenergy} \end{subfigure} \begin{subfigure}[t]{0.47\textwidth} \includegraphics[width=\linewidth]{./images/rmse_time_comparison1.pdf} \caption{Time prediction} \label{fig:rmsetime} \end{subfigure} \caption{Performance of different models for energy and execution time prediction (lower RMSE value is preferred)} \label{fig:rmse} \end{figure}

The high-level overview of the system is given in Figure \ref{fig:system_model}.
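Before detailing the pipeline, the following minimal sketch illustrates the core scheduling decision: among candidate clock frequencies, pick the most energy-efficient one whose predicted execution time still meets the deadline. The predictor callables and the toy numbers in the usage example are illustrative assumptions, not our actual trained models.
\begin{verbatim}
# Deadline-aware frequency selection (illustrative sketch).
# `predict_time` / `predict_energy` stand in for trained models.

def pick_frequency(app_features, frequencies, deadline,
                   predict_time, predict_energy):
    """Return the frequency minimising predicted energy among those
    whose predicted time meets the deadline; fall back to the fastest
    predicted frequency if none meets it."""
    feasible = [(f, predict_energy(app_features, f))
                for f in frequencies
                if predict_time(app_features, f) <= deadline]
    if feasible:
        return min(feasible, key=lambda fe: fe[1])[0]
    return min(frequencies, key=lambda f: predict_time(app_features, f))

# Example with toy predictors (purely illustrative):
freqs = [544, 683, 810, 936, 1063]          # MHz, illustrative
t = lambda x, f: 100.0 * 1000.0 / f         # time falls with frequency
e = lambda x, f: (0.5 + (f / 1063) ** 2) * t(x, f)
print(pick_frequency(None, freqs, deadline=150.0,
                     predict_time=t, predict_energy=e))  # -> 810
\end{verbatim}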
The system is broadly classified into two parts: predictive modelling and a data-driven scheduler. In the first part, we collect the training data, which consists of three components: profiling information, energy and time measurements, and the respective frequency configurations. We then predict two quantities for a given application and frequency configuration, namely energy consumption and execution time. Subsequently, in the second part, new applications arrive with their deadline requirements and minimal profiling data from an execution at the default clock frequency. The scheduler finds correlated application data using a clustering technique, and this data is used to predict the energy and execution time over all frequencies. Finally, based on the deadline requirements and energy efficiency, the scheduler scales the frequencies and executes the applications. We use twelve applications for evaluation, drawn from two standard GPU benchmarking suites, Rodinia and Polybench. The training data is generated by profiling the applications using nvprof, a standard profiling tool from NVIDIA. We collected around 120 features representing key architectural, power, and performance counters. To build the predictive models, we explored several regression-based ML models, including Linear Regression (LR), lasso linear regression (Lasso), and Support Vector Regression (SVR), as well as ensemble-based gradient boosting techniques, namely eXtreme Gradient Boosting (XGBoost) and CatBoost. The goal is to build energy and execution time prediction models for each GPU device to assist the frequency configuration.

\begin{figure} \includegraphics[width=\linewidth]{./images/energgygroupedbarplot_kmeansw_50_avg_RD.pdf} \caption{Average energy consumption of applications} \label{fig:application_power} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{./images/scheduling_power_kmeans_total_w50_RD_bw.pdf} \caption{Average total energy consumption of GPU} \label{fig:total_power} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{./images/smclockw_50_kmeans.pdf} \caption{Frequency Scaling by different policies} \label{fig:dvfs_scaling} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{./images/normalised_deadlineplotw_50_kmeans.pdf} \caption{Normalised application completion time compared to deadline} \label{fig:deadline} \end{figure}

We conduct extensive experiments on NVIDIA GPUs (Tesla P100). The experimental results show that our prediction models with CatBoost achieve high accuracy, with average Root Mean Square Error (RMSE) values of 0.38 and 0.05 for energy and time prediction, respectively (Figures \ref{fig:rmseenergy} and \ref{fig:rmsetime}). Moreover, the scheduling algorithm consumes 15.07\% less energy (Figure \ref{fig:total_power}) than the baseline policies (default and maximum clock) while meeting the application deadlines, since our approach can select frequency settings that are energy-efficient (Figure \ref{fig:dvfs_scaling}) while still satisfying the performance requirements. More details on the prediction models, scheduling algorithms, and implementation can be found in \cite{shashiccgrid2020}.

\subsection{Industrial (Google Cloud and Microsoft Azure) Data Centre Management}

Data centres are the backbone infrastructure of cloud computing today. A data centre is a complex Cyber-Physical System (CPS) consisting of numerous elements.
It houses thousands of rack-mounted physical servers, networking equipment, sensors monitoring server and room temperature, a cooling system to maintain an acceptable room temperature, and many facility-related subsystems. The data centre is among the highest power-density CPSs, with up to 20 kW per rack, dissipating an enormous amount of heat. This poses a serious challenge for managing resources in an energy-efficient way while providing reliable services to users. Optimising data centre operation requires tuning hundreds of parameters belonging to different subsystems, where heuristics or static solutions fail to yield good results. Moreover, even a 1\% improvement in data centre efficiency translates into savings of millions of dollars over a year and also helps to reduce the carbon footprint. Therefore, optimising these data centres using suitable AI techniques is of great importance. Accordingly, we discuss two production AI-based RMS systems built by researchers at Google and Microsoft Azure.

ML-centric cloud \cite{bianchini2020toward} is an ML-based RMS system, at an early stage, from the Microsoft Azure cloud. The authors built Resource Central (RC), a general ML and prediction-serving system that provides insights about workloads and infrastructure to the resource manager of the Azure compute fabric. The input data is collected from the virtual machines and physical servers. The models are trained using gradient-boosted trees to predict different outcomes for users' VMs, such as average CPU utilisation, deployment size, lifetime, and blackout time. The Azure resource manager interacts with these models at runtime. For instance, the scheduler queries for a virtual machine's predicted lifetime and, based on the predicted value, takes the appropriate decision to increase infrastructure efficiency. Applying these models to several other resource management tasks, including power management inside the Azure infrastructure, is under consideration.

Similarly, Google has applied ML techniques to optimise the efficiency of its data centres. Specifically, ML models are used to adjust the different knobs of the cooling system, thereby saving a significant amount of energy \cite{gao2014machine}. The models are built using simple neural networks and trained to improve the PUE (Power Usage Effectiveness), a standard metric of data centre efficiency. The input features include the total IT workload level, the network load, and parameters affecting the cooling system, such as the outside temperature, wind speed, and the number of active chillers. The cooling subsystems are configured according to the predictions, and the results show savings of around 40\% in cooling energy consumption. The use cases presented here thus firmly attest to the feasibility of AI-centric solutions in different aspects of resource management for distributed systems.

\section{Conclusions}

Future distributed computing platforms will be massively complex, large-scale, and heterogeneous, enabling the development of highly connected, resource-intensive business, scientific, and personal applications. Managing resources in such infrastructure requires data-driven AI approaches that derive key insights from data, learn from the environment, and take resource management decisions accordingly. In this paper, we discussed the challenges associated with the adoption of AI-centric solutions in RMS. We identified potential future directions, describing the different RMS tasks to which AI techniques can be applied.
Moreover, we presented a conceptual AI-centric RMS model. Finally, we demonstrated two use cases of AI-centric solutions in resource management for distributed systems. State-of-the-art rule-based or heuristic resource management solutions have become inadequate for modern distributed computing platforms, where RMS policies need to deal with massive scale, heterogeneity, and varying workload requirements. We therefore believe that AI techniques and tools can be widely utilised in numerous RMS tasks, including monitoring, resource provisioning, scheduling, and many others. Such approaches are highly adaptive and better suited to handling resource management complexities, enabling optimised resource management from the processor level to middleware platforms and application management. \bibliographystyle{IEEEtran}
\section{Introduction} Aneta Stefanovska expressed a vision ``to build a self-consistent theory of non-autonom\-ous oscillators'' (June 2014). In this direction she introduced the class of ``chronotaxic'' systems \cite{SCS}, defined as ``oscillatory systems with time-varying, but stable, amplitudes and frequencies''. This chapter presents a view of a non-autonomous oscillator as a mapping from input functions of time to a circle of possible solutions (state functions of time). It indicates how this view encompasses chronotaxic systems and enables one, at least conceptually, to understand the extent of synchronisation in networks of oscillators, whether autonomous or not. For the latter a hierarchical aggregation scheme is introduced. The approach is based on the theory of normal hyperbolicity \cite{F,HPS}. This theory is the mathematical expression of Haken's slaving principle \cite{Ha}, the idea that some variables for a dynamical system might contract relatively rapidly onto some invariant submanifold in the state space, and then it suffices to study the dynamics on the submanifold. Two key results of normal hyperbolicity theory are:~(i) conditions guaranteeing existence of such a submanifold, and (ii) smooth persistence of normally hyperbolic (NH) submanifolds as parameters are varied smoothly. It was developed before Haken's slaving principle and deserves to be better known in the physics community. It is a generalisation of centre manifold theory, which is the main mathematical tool Haken used, but has much wider scope. An obstacle is that it demands considerable technical expertise in mathematical analysis. Yet the obstacles are genuine:~it turns out that NH submanifolds are differentiable some number $r$ times, depending on the ratio between normal and tangential contraction rates, but typically not more than $r$ times. This is important to recognise, as there is a tendency in physics to consider such functions as pathologies (though physicists do understand that there can be fractal functions). It is a project on which I have been working for many years, notably with PhD student Stephen Gin (2006--13). It was prompted initially by Mohammad Ghaffari Saadat in 2003, who had formulated a limit-cycle model for a bipedal robot walking down a slope \cite{TGN} and asked me how much non-uniformity of slope it could cope with. I proposed to tackle this problem by fitting it into the framework of the non-autonomous version of the theory of NH submanifolds, where the result of a not too large forcing function on an oscillator is a circle of possible trajectories. Gin and I attempted to develop good versions of the proofs of normal hyperbolicity results to produce realistic conditions guaranteeing the outcome \cite{G}. Our approach is still incomplete, but I present here the key ideas. In the world of conservative dynamics, an oscillator is considered to be a Hamiltonian system with an elliptic equilibrium point; this view has fundamental importance but is not the appropriate one for present purposes. Outside the world of conservative dynamics, an oscillator is usually considered to be an autonomous dynamical system with an attracting periodic orbit. The concept has been extended to cater for chaotic oscillators, but I will postpone treating that extension until near the end of this chapter. This concept of oscillator as a system with an attracting limit-cycle, however, fails to include the many situations where it is subject to time-dependent forcing.
Also, in a network of oscillators, each is subject to input from others, in general time-dependent, so even if the network is autonomous it is useful to consider time-dependent forcing on each of its oscillators. So I propose a view of an oscillator as a mapping from input functions $f$ of time to a circle's worth of solutions for its state $x$ as a function of time. Each input function $f$ (possibly with more than one component) causes a response $x_\theta$ (a function of time) with a phase $\theta \in S^1$ labelling the different possible responses. This view is justified by the theory of normal hyperbolicity, at least for not too strong forcing. It is also my interpretation of chronotaxic systems. The idea is to consider a non-autonomous system $\dot{x} = v(x,t)$ on a state space $X$ as an autonomous system in the extended state space $X \times \mathbb{R}$, with the real line $\mathbb{R}$ representing time. The dynamics has the form \begin{eqnarray} \dot{x} &=& v(x,s) \label{eq:sys}\\ \dot{s} &=& 1. \nonumber \end{eqnarray} First suppose the vector field $v = v_0$ is independent of $s$ and $\dot{x}=v_0(x)$ has a limit cycle $\gamma$ (in the strong sense of a periodic orbit with no Floquet multipliers\footnote{The Floquet multipliers of a periodic orbit are the eigenvalues of the derivative of the return map to a transverse section.} on the unit circle). The most relevant case for applications might be the attracting case (all Floquet multipliers inside the unit circle), but one can allow the more general situation. Then in $X\times \mathbb{R}$, the extended system (\ref{eq:sys}) has an extended version of $\gamma$, namely an invariant cylinder $\gamma \times \mathbb{R}$. The trajectories form helices on the cylinder, representing the same periodic solution but shifted in $s$. This cylinder is an example of a NH submanifold. In general, a {\em NH submanifold} for a $C^1$ dynamical system is an invariant $C^1$ submanifold for which the linearised normal dynamics decomposes into components which contract exponentially in forward or backward time respectively, and faster than the linearised tangential dynamics. Note that the use of the word ``normal'' might suggest perpendicular, but actually, a normal vector to a submanifold is defined to be an equivalence class of vectors at a point modulo vectors tangent to the submanifold at that point. In the above case, the linearised tangential dynamics neither contracts nor expands on average, because the phase difference between any pair of the helices remains constant. The linearised normal dynamics decomposes into exponentially contracting components in forward and backward time, corresponding to the Floquet multipliers inside and outside the unit circle, respectively. Now allow $v$ to depend weakly on $s$. The key result for NH submanifolds is that they persist under $C^1$-small perturbation. Thus the perturbed system has a $C^1$-nearby invariant cylinder, no longer in general of product form but diffeomorphic to $S^1\times\mathbb{R}$. Furthermore, the vector field on it is close to that on the unperturbed cylinder, and the normal dynamics is close to that for the unperturbed case. The solutions on the perturbed cylinder are not in general just a family of periodic solutions differing by phase. In particular, there may be solutions on the cylinder to which all nearby ones converge in forward time. There may also be solutions to which all nearby ones converge in backward time. Or neither may happen.
In any case, there is a circle's worth of solutions on the cylinder, which one could label by the intersections of the cylinder with $s=0$ for example. In particular, if $v(x,t) = v_0(x) + f(t)$ then the forcing function $f$ produces a circle's worth of state functions $x$ of time on the cylinder. In general a forcing function $f$ should be allowed to depend on the state $x$ too, so $v=v_0(x)+f(x,t)$, and by normal hyperbolicity theory, the same conclusion holds. As an illustration, consider a model of a quasiperiodically forced limit-cycle oscillator from \cite{CS}: \begin{eqnarray} \dot{x}&=&-qx-\omega y \\ \dot{y}&=&\omega x - q y + \gamma f(t), \nonumber \end{eqnarray} with $q= \alpha(\sqrt{x^2+y^2}-a)$, $f(t) = \sin 2\pi t + \sin 4t$, $\alpha, a >0, \gamma \ge 0$ (more natural would be $q=\alpha(x^2+y^2-a^2)$ because it makes the dynamics smooth at the origin, but the interest is in the behaviour for $r=\sqrt{x^2+y^2}$ near $a$). In polar coordinates $(r,\theta)$ and extended state-space, this is \begin{eqnarray} \dot{r}&=&-\alpha r(r-a) + \gamma f(s) \sin \theta \\ \dot{\theta}&=& \omega-\frac{\gamma}{r}f(s)\cos\theta \nonumber \\ \dot{s}&=&1. \nonumber \end{eqnarray} For $\gamma=0$ there is an invariant cylinder $r=a$. It attracts exponentially with exponent $-\alpha a$ and the motion on the cylinder is $\dot{\theta}=\omega$, $\dot{s}=1$, which has Lyapunov exponents 0. So the cylinder is NH and persists to a deformed invariant cylinder for small enough $\gamma$. A rough estimate of the range of $\gamma$ for which persistence is guaranteed is given by the range for which tangential contraction is weaker than normal contraction on the unperturbed cylinder. The normal contraction rate (onto the unperturbed cylinder) is still $\alpha a$. The tangential contraction (or expansion if negative) is $-\frac{\partial \dot{\theta}}{\partial \theta} = -\frac{\gamma}{r}f(s)\sin\theta$. Since $|f(s)|\le 2$ and $r=a$ on the unperturbed cylinder, this is smaller than $\alpha a$ for all $s, \theta$ iff $2\gamma < \alpha a^2$. Thus one can expect the NH cylinder to persist for $\gamma$ up to something of the order of $\alpha a^2/2$. When $\gamma$ exceeds $\alpha a^2/2$ one cannot expect the invariant cylinder to persist. It is shown numerically in \cite{CS} that the cylinder is replaced by a (non-autonomous) chaotic attractor with one unstable Lyapunov exponent (coming from the $s,\theta$ for which the tangential dynamics is expanding). For a class of examples where a NH submanifold (in fact two 2-tori) can be proved to break up, see \cite{BaM1}. In this chapter, however, I will concentrate on regimes of weak enough coupling that NH submanifolds persist. As an aside, this view of an oscillator fits in Willems' ``behavioural approach'' to systems and control \cite{W}. His view was that the description of a dynamical system should be considered to be the restrictions on the set of possible functions of time for all variables. Normal hyperbolicity strikes me as a key tool for delivering his approach. On the other hand, he also proposed that one should go beyond the idealisation of inputs and outputs by treating all coupling as two-way, a line that I shall not follow consistently. In this chapter I will explain how this view of an oscillator illuminates the phenomena of phase-locking, synchronisation and chimera \cite{AS}, allows one to extend the concept of coupling, and allows a hierarchical reduction treatment of synchronisation in networks of oscillators. I will extend the results to allow excitable oscillators and chaotic oscillators.
I will outline how the theory of normal hyperbolicity underlies the results. There is a huge literature on synchronisation, e.g.~\cite{PRK}, and much of what I will say will be familiar but the important emphasis here is on synchronisation in aperiodically forced systems, which has been treated much less. Perhaps this direction is not what Aneta had in mind, but I believe it provides a self-consistent theory for non-autonomous oscillators and I hope that it will be useful. \section{Phase-locking} \label{sec:pl} It is well-known that an oscillator may phase-lock to some features of its inputs. Indeed, this is the principle of phase-locked loops in electronic engineering \cite{Br} and of synchronous generators and motors in AC electrical networks. My definition of phase-locking of an oscillator to forcing is that the NH cylinder (assumed attracting) has an attracting trajectory on it and the initial condition is in its basin of attraction. Any discussion of attractors for non-autonomous systems requires care because the dynamics is unbounded in the time direction of extended state-space, so there are inequivalent choices of neighbourhoods of a trajectory. For example, for the 2D system $\dot{x}=x,\dot{s}=1$, any trajectory has a neighbourhood of attraction, despite looking unstable, e.g.~for the solution $x=0$ just take a neighbourhood of the form $|x|<\varepsilon e^{2s}$. So I make precise here that by ``attracting trajectory'' I mean the case with zero unstable space of a uniformly hyperbolic trajectory in the non-autonomous sense. To explain what this means would take some space, so I refer the reader to \cite{BiM} (with my PhD student Zahir Bishnani), but the important feature is to choose a notion of distance in extended state-space that is uniform in time (so that one does not allow neighbourhoods like that in the above example). There might be a bundle of trajectories which all converge together in forward time, but in general there is only one trajectory in the bundle that has a uniform-in-time neighbourhood of attraction. It is a pullback attractor (for this concept, see the contribution by Kloeden in this volume). My concept of attracting trajectory is distinct, however, from that of pullback attractor, because it can also occur that a pullback attractor is not uniformly hyperbolic (it may be repelling after some time). An alternative way to describe phase-locking is that the oscillator is synchronised to its inputs. I use ``synchronise'' in a weak sense: that to a given input function of time there is a locally unique forwards asymptotic solution (the strong sense applies to systems of identical oscillators with a symmetry of the coupling that maps any oscillator to any other, and consists in all oscillators doing the same; for an example, see \cite{YM}). Note that a forced oscillator may have more than one such attracting trajectory; this would allow different synchronisations to the same input. This is in contrast to non-synchronisation, where there is a circle's worth of solutions that do not converge asymptotically to a discrete subset. The strongest version of non-synchronisation is when there is a time-dependent choice of $C^1$ coordinate $\phi$ around the cylinder, replacing an initial coordinate $\theta$, such that $\dot{\phi}=\omega(t)$, a positive function of $t$ only, and $\frac{\partial \phi}{\partial \theta}$ and its inverse are bounded. Then with a new time $\tau$ defined by $d\tau/dt=\omega(t)$, we obtain $d\phi/d\tau = 1$.
It would be interesting to investigate the probability of this case with respect to a distribution of oscillator frequencies for given weak forcing, perhaps obtaining a sort of non-autonomous KAM result\footnote{The original KAM theory gives a set of invariant tori for near-integrable Hamiltonian systems, the measure of whose complement goes to zero as the perturbation from integrability goes to zero.}, extending the theory of reducibility of cocycles (see \cite{DS} for an early example). The main conclusion of this section is that synchronisation of an oscillator to its inputs is dimension-reduction. In particular, if there is no immediate feedback from the oscillator to any of its inputs, then one could delete that oscillator, replacing its outputs by some modifications of the outputs from its inputs. \section{Synchronisation of two oscillators} \label{sec:2osc} Let us start with two autonomous oscillators $\dot{x}_i = v_i(x_i)$, $i=1,2$, meaning each has a limit cycle $\gamma_i$, and couple them in the standard sense of a modification to the vector field of the product system, depending on the state of each but not too strongly, so \begin{equation} \dot{x}_i = v_i(x_i) + g_i(x_1,x_2), \end{equation} with $g_i$ $C^1$-small. Then the product system has a NH 2-torus, being a small perturbation of $\gamma_1 \times \gamma_2$. If the difference of the frequencies of the uncoupled limit cycles is smaller in a suitable dimensionless sense than the coupling then the NH torus has an attracting limit cycle on it, which makes one turn in the $\gamma_2$ direction for each turn in the $\gamma_1$ direction. I say the two oscillators have gone into $1:1$ synchronisation. Recall Huygens' clocks. The torus may have more than one attracting limit cycle on it, in which case several synchronised solutions are possible. It may also have unstable limit cycles on it. Similarly, if the frequencies are close to being in (coprime) integer ratio $m:n$ then coupling might produce an attracting $m:n$ limit cycle on the NH torus, which makes $m$ revolutions in the $\gamma_1$ direction and $n$ in the $\gamma_2$ direction per period. On the other hand, for weak coupling and smooth enough dynamics, the non-synchronised situation occurs with high probability. More precisely, if one adds a free parameter varying the unperturbed frequency ratio, then KAM theory gives a set of parameter values of nearly full measure for which the dynamics is conjugate to a constant vector field on a 2-torus with irrational frequency ratio (e.g.~\cite{LD} for a version by my PhD student Jo\~ao Lopes Dias). Thus synchronisation does not always result. Now consider the non-autonomous situation, where one or both of the oscillators is subject to external forcing. If the forcing is not too strong then the resulting system has a NH submanifold in extended state space, diffeomorphic to $\gamma_1 \times \gamma_2 \times \mathbb{R}$, which I call a torus-cylinder. More generally, for any manifold $M$ I define an $M$-cylinder to be a manifold diffeomorphic to $M\times\mathbb{R}$. Thus an ordinary cylinder can be called a circle-cylinder. If the unperturbed frequencies are close to integer ratio $m:n$ then the NH submanifold might contain a NH attracting submanifold diffeomorphic to a circle cross time, being a perturbation of the product of an $m:n$ synchronised limit cycle for the autonomous system and time. In this situation the non-autonomous pair of oscillators can be replaced by a single one.
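As a simple numerical illustration of the locking threshold just described, here is a minimal sketch (with illustrative parameter values) integrating a pair of sinusoidally coupled phase oscillators; the reduction to a single phase difference is standard for this special coupling and is, of course, far more restrictive than the general setting of this chapter.
\begin{verbatim}
# Two coupled phase oscillators (illustrative sketch):
#   theta1' = w1 + K*sin(theta2 - theta1)
#   theta2' = w2 + K*sin(theta1 - theta2)
# The phase difference psi = theta1 - theta2 obeys
#   psi' = (w1 - w2) - 2K*sin(psi),
# which has a stable fixed point (1:1 locking) iff |w1 - w2| <= 2K.
import numpy as np

def phase_difference(w1, w2, K, t_end=200.0, dt=1e-3):
    psi = 0.5                       # arbitrary initial phase difference
    for _ in range(int(t_end / dt)):
        psi += dt * ((w1 - w2) - 2.0 * K * np.sin(psi))
    return psi

# Detuning below threshold: psi settles near arcsin((w1-w2)/(2K)).
print(phase_difference(w1=1.00, w2=0.95, K=0.1))   # locks
# Detuning above threshold: psi drifts without bound.
print(phase_difference(w1=1.00, w2=0.70, K=0.1))   # drifts
\end{verbatim}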
So again, synchronisation of two oscillators is a dimension-reduction. \section{What is coupling?} In the previous section I used the standard dynamical systems notion for coupling as a perturbation of the product of two vector fields. One might want, however, to allow more general forms of coupling, for example incorporating time-delays or coupling via an intermediate dynamical system. Furthermore, suppose one achieved a dimension-reduction as in section~\ref{sec:pl} or \ref{sec:2osc} and then wants to consider how the new effective oscillator is coupled to others that originally were coupled to one or both of the pair of oscillators. This is no longer describable as a standard perturbation of the product of vector fields. So I generalise the notion of coupling of two non-autonomous oscillators. As already defined, a non-autonomous oscillator is a non-autonomous system with a NH cylinder on which the dynamics can be described by one phase $\theta$ with $\dot{\theta} = f(\theta,t)$. A coupling of two non-autonomous oscillators is a non-autonomous system with a NH torus-cylinder on which the dynamics can be described by two phases $\theta = (\theta_1,\theta_2)$ with $\dot{\theta}_i=\tilde{f}_i(\theta,t)$ and $\tilde{f}_i(\theta,t)$ close to $f_i(\theta_i,t)$ for some $f_i$. Then the dynamics on the NH torus-cylinder may contain a NH attracting circle-cylinder, as in the more restricted case of the previous section. If the trajectory is in its basin of attraction, I say the two oscillators synchronise. \section{Synchronisation of $N$ oscillators} Not too strong coupling of $N$ non-autonomous oscillators produces a NH $N$-torus-cylinder. The dynamics on it might contain an attracting NH $d$-torus-cylinder for some $d<N$. If $d=1$ the whole group is synchronised and can be replaced by a single effective non-autonomous oscillator. If $d=0$ the whole group is phase-locked to its inputs and can be eliminated. Once again, synchronisation, whether partial or complete, means dimension-reduction. \section{Hierarchical aggregation} In a network of oscillators, the above dimension-reductions can in principle be iterated. First one identifies groups of oscillators which synchronise or phase-lock to their inputs. One reduces to a new network of effective oscillators. Then one repeats, if possible. The end result is a decomposition into synchronised clusters. Although I did not find out about his work until after I'd proposed this, it is a direct example of Willems' ``tearing, zooming, linking'' approach \cite{W}. One should note that the end result is not necessarily complete synchronisation. Indeed, it could well be a chimera \cite{AS}, meaning a system in which some of the oscillators are synchronised but others behave chaotically. The chaotic ones force the synchronised ones and the synchronised ones force the chaotic ones, but our approach of non-autonomous oscillators caters for both of these. There is now a huge literature on chimera. To me the phenomenon was not a surprise because it fits in my framework, but without the framework it can admittedly be considered surprising. \section{Normal hyperbolicity estimates} Achieving the above dimension-reductions requires good normal hyperbolicity estimates, i.e.~results guaranteeing existence of NH submanifolds.
The easiest case, namely, 1D submanifolds, which are just uniformly hyperbolic trajectories of non-autonomous systems, was already treated in \cite{BiM} (incidentally, it was formulated with attracting trajectories in mind, but another application would be to the unstable trajectories of geophysical flows that form boundaries between trajectories of different classes, e.g.~\cite{FPET}). So that takes care of the case of phase-locking. Higher-dimensional NH submanifolds, however, require more theory. The classic references are \cite{F,HPS}. They are not particularly well adapted to producing practical estimates. Thus I set Stephen Gin onto developing a better way. His PhD thesis \cite{G} gives the outcome, but it is not a complete treatment. So here, I sketch an approach to NH estimates that I believe will be useful. It is in the classic dynamical systems setting of a vector field on the product of state space and time, but hopefully could be extended to take care of the more general forms of coupling that I have described here. I restrict attention to submanifolds that are torus-cylinders, but of arbitrary dimension $m+1$. So suppose \begin{eqnarray} \dot{\theta} &=& \Theta(\theta,r,t) \\ \dot{r} &=& R(\theta,r,t), \nonumber \end{eqnarray} for $\theta \in \mathbb{T}^m$, $r \in U$, a neighbourhood of $0\in \mathbb{R}^p$. I suppose that the product $|R_\theta| |\Theta_r|$ is small (where subscript denotes derivatives), the $r$-dynamics is hyperbolic, and the Green function for linearised normal dynamics decays faster than any contraction that may occur in $\theta$-dynamics. Given a Lipschitz graph $r=\rho(\theta,t)$, a candidate for an invariant submanifold, construct a new one, $T\rho$, by the following steps: \begin{enumerate} \item For all $(\theta_0,t_0)$, let $\theta()$ be the trajectory of $\dot{\theta}(t)=\Theta(\theta,\rho(\theta,t),t)$ from $\theta(t_0)=\theta_0$. \item Solve $\dot{r}(t) = R(\theta(t),r(t),t)$ for the unique function $r()$ such that $r(t)$ is near $\rho(\theta(t),t)$ for all $t$. \item Set $(T\rho)(\theta_0,t_0) = r(t_0)$. \end{enumerate} To achieve the second step, I assume that $L:C^1(\mathbb{R},\mathbb{R}^{p})\to C^0(\mathbb{R},\mathbb{R}^{p})$ defined by $$L[x](t) = \dot{x}(t) - R_r(\theta(t),r(t),t) x(t)$$ on infinitesimal displacements $x$ in $r$ has bounded inverse. This is equivalent to the first part of the NH condition, namely a splitting of the normal bundle into exponentially contracting backwards and forwards subspaces. Having thus constructed the ``graph transform'' $T$, I want to prove that it is a contraction on a suitable space of graphs and hence has a unique fixed point there, which will be an invariant graph. In the direction of achieving this, define a {\em slope} to be a linear map $\sigma$ from displacements in $\theta$ to displacements in $r$. For an approximation $\tilde{\sigma}$ to the expected derivative $\rho_\theta$, define $M_{\tilde{\sigma}}: W^{1,\infty}(\mathbb{R},\mathbb{R}^{mp}) \to W^{0,\infty}(\mathbb{R},\mathbb{R}^{mp})$ by $$M_{\tilde{\sigma}}[\sigma] = \dot{\sigma} - R_r\sigma+\sigma(\Theta_\theta + \Theta_r\tilde{\sigma})$$ on slope functions $\sigma$ of $t$, where $W^{s,\infty}$ are the spaces of functions with essentially bounded $s^{th}$ derivative. Suppose that $M_{\tilde{\sigma}}$ has bounded inverse. This is the second part of the NH condition, namely faster normal contraction than tangential contraction. Then $T$ should be a contraction in the space of $C^0$ functions with an a priori Lipschitz constant. 
So it would have a unique fixed point $\rho$. Any fixed point is invariant and actually $C^1$ with slope $\rho_\theta$ being the fixed point of the contraction map $\sigma \mapsto M_\sigma^{-1}[R_\theta]$. To complete this programme requires detailed estimates. Formulated in terms of contraction maps as here, it should be possible to obtain excellent estimates, along the lines of the uniformly hyperbolic case in \cite{BiM}. We might do best to follow the approach of \cite{H} (cf.~\cite{E}), but replacing their exponential hypotheses by our hypotheses of invertibility of $L$ and $M$ and modifying their exponentially weighted norm to use the linearised tangential flow. I would like to finish this one day. \section{Extension to class I neurons} So far, I have considered the simplest type of oscillator, namely limit cycles, but the treatment can be extended to class I neurons (or excitable oscillators). These are dynamical systems with an attracting invariant cylinder in the autonomous case and dynamics on it in simplest form given by \begin{eqnarray} \dot{\theta} &=& \mu + 1-\cos\theta \\ \dot{\mu}&=&0 . \nonumber \end{eqnarray} Since $\mu$ is constant, one could think of it as an external parameter, but I wish to consider it as a state variable because coupling from another neuron can make $\mu$ change in time. It is best to think of $\mu$ as bounded, so the attracting cylinder can be considered an invariant annulus. They arise in modelling of ``excitable neurons'' whose frequency goes to zero as a parameter ($\mu$) is varied and which then settle at a $\mu$-dependent resting state, or in reverse go from a resting state to large amplitude periodic spiking. An example is the Morris-Lecar model \cite{ML}, but it was \cite{EK} who identified the phenomenon as the unfolding of a saddle-node on a cycle (I proposed this independently to physiologist H.Barlow in the same year and then in 1991 proposed to C.Koch the extension to allow crossover at a ``saddle-node loop'' \cite{S} to the unfolding of a homoclinic orbit to a saddle). Thus the non-autonomous version has an attracting NH annulus-cylinder. I had an undergraduate student study networks of such neurons in 1989/90, with the state $\mu$ of each neuron driven by the spiking of some others (with time-delay kernels), which produced periodic bursting \cite{M2}. Two class I neurons coupled not too strongly have a NH attracting annulus$\times$annulus-cylinder. Generic bifurcation diagrams in the autonomous case were given in \cite{BaM}. The dynamics on it has attracting submanifolds of various types. The non-autonomous case has non-autonomous versions of them. The theory of this chapter applies just as well to class I neurons as to ordinary oscillators, with the addition of the $\mu$-direction for each class I neuron. \section{Extension to chaotic oscillators} The approach can also be extended to chaotic oscillators if they have an attracting NH submanifold containing the attractor. For example, think of a R\"ossler attractor \cite{R}, which is contained in a solid torus in $\mathbb{R}^3$. Then the non-autonomous system has a solid-torus-cylinder. A R\"ossler attractor can be phase-locked to forcing, meaning that the dynamics is attracted onto a disk-cylinder (a solid torus is the product of a disk and a circle). This should be quite easy because the R\"ossler attractor was observed to be nearly phase-coherent.
I interpret that as meaning that there is a cross-section with nearly constant return time (equivalently, for a given cross-section $\Sigma$ there is a constant $c>0$ and a function $b:\Sigma \to \mathbb{R}$ such that the return time $\tau(x) = c + b(f(x))-b(x)$, where $f:\Sigma \to \Sigma$ is the return map). Synchronisation of chaotic attractors with NH cylinders of dimensions $N_1+1, N_2+1$ means there is a NH cylinder for the coupled system with dimension less than $N_1+N_2+1$. Even better, the theory of NH submanifolds extends to NH laminations \cite{HPS}. A lamination is a topological space in which each point has a neighbourhood homeomorphic to the product of a Euclidean space with a general topological space. It decomposes into leaves, which are locally submanifolds but in general only injectively immersed, so a leaf may accumulate onto itself. The theory of NH laminations requires a $C^1$-structure in addition, but is basically the same as for NH submanifolds. In particular, a NH lamination persists under $C^1$-small perturbation. This means one can treat some chaotic attractors in greater detail. In particular, imagine we start with a non-trivial uniformly hyperbolic attractor of an autonomous system, for example a suspension of a Plykin attractor \cite{P}. This is perhaps less familiar than R\"ossler's attractor but deserves to be better known, as the simplest uniformly hyperbolic attractor after equilibria and periodic orbits. The Plykin attractor was constructed for a discrete-time system, but the map is isotopic to the identity so one can realise it as the first return map of an associated continuous-time system. My PhD student Tim Hunt showed an explicit way to realise it in a system of three ODEs, extended by another PhD student Linling Ru, and less cumbersome ways have been proposed (though not yet with rigorous justification) \cite{K}. It is a NH lamination, whose leaves are its unstable manifolds (of dimension two:~one expanding dimension and one time dimension) and they form a Cantor set transversally. Under time-dependent forcing, it persists to a Cantor set of 3D leaves whose tangent space is spanned by one expanding dimension and two near neutral dimensions. The persistence is highly robust, requiring only that any tangential contraction be slower than any transverse contraction. Then one can ask what happens on the leaves. The dynamics might collapse onto a 2D subleaf with the same expanding dimension and one neutral dimension. I would say the attractor has synchronised to the forcing. Similarly, one could couple a suspended Plykin attractor to a limit-cycle oscillator. It produces an attractor with a Cantor set of 3D leaves (the product of the 2D leaves of the chaotic attractor with the limit cycle). The dynamics of each leaf might collapse onto 2D subleaves. I would say the Plykin attractor and limit cycle synchronise together. More generally, one could couple a continuous-time autonomous uniformly hyperbolic attractor with $M$ unstable dimensions to $N$ limit cycle oscillators and obtain an attractor with a mixture of chaos and nearly quasiperiodic behaviour. It would have $M$ unstable dimensions, $N$ nearly quasiperiodic dimensions, and the flow dimension, with the remaining dimensions contracting onto the leaves. By the theory of NH laminations, such attractors persist for small smooth perturbations, though the dynamics in the quasiperiodic dimensions cannot be expected to remain quasiperiodic.
Nonetheless, it will have small Lyapunov exponents for those dimensions and perhaps there is a non-autonomous KAM theory that would even give truly quasiperiodic motion for a set of nearly full measure of parameters. I propose this as an explanation of the scenario reported recently by \cite{YK}. As a final note, one might ask about physical realisation of attractors like R\"ossler's. I designed an electronic oscillator back in 1981, principally to demonstrate period-doubling sequences \cite{M1}, but on moving the parameter further it exhibited a R\"ossler type of attractor. Model equations for the voltages at three points have the form \begin{eqnarray} \dot{x}&=&ax-by \\ \dot{y}&=& cx-ez \nonumber \\ \dot{z} &=& -fy-g(z) , \nonumber \end{eqnarray} with $a,b,c,e,f$ positive constants of which $a$ was adjustable by a 10-turn potentiometer, and $g$ an approximately odd cubic nonlinearity produced with a pair of transistors. Interestingly, as I increased $a$ further, the R\"ossler attractor turned into what Chua later called a double-scroll attractor \cite{MCK}. Indeed, Chua's equations turn out to be equivalent to mine after minor changes of variable. \section{Conclusion} I have shown that the study of the behaviour of networks of oscillators, autonomous or not, can be aided by identifying normally hyperbolic submanifolds. This allows a deeper understanding of synchronisation of oscillators to forcing and to each other, especially in the aperiodic case. There are many studies on synchronisation in autonomous or periodically forced systems (for one example, see \cite{SST}) but relatively few on the aperiodically forced case. The fundamental feature of synchronisation is dimension-reduction of an associated normally hyperbolic submanifold. In a network of oscillators, even if autonomous, the inputs that an individual oscillator sees are in general aperiodic. This motivates a hierarchical aggregation scheme for understanding the dynamics of a network of oscillators:~oscillators that synchronise to their inputs can be eliminated, groups of oscillators that synchronise together can be replaced by a single effective oscillator. All this depends on generalising the notion of oscillator from a limit cycle of an autonomous dynamical system to a mapping from input functions of time to solutions and generalising the notion of coupling. Finally, I extended the treatment from limit-cycle oscillators to excitable oscillators and chaotic oscillators.
\section{Introduction} The logarithmic nonlinearity appears in physical models from many fields. For example, the logarithmic nonlinearity is introduced in quantum mechanics or quantum optics, where a logarithmic Schr\"odinger equation (LogSE) is considered (e.g. \cite{BiMy76, BiMy79, buljan, KEB00}), \[ i\partial_t u=-\Delta u+\lambda\, u\ln |u|^2,\quad \lambda \in \R; \] in oceanography and in fluid dynamics, with a logarithmic Korteweg-de Vries (KdV) equation or a logarithmic Kadomtsev-Petviashvili (KP) equation (e.g. \cite{wazwaz2014, wazwaz2016, james2014}); in quantum field theory and in inflation cosmology, via a logarithmic Klein-Gordon equation (e.g. \cite{rosen1969, bartkowski2008, gorka2009}); or in material sciences, by the introduction of a Cahn-Hilliard (CH) equation with logarithmic potentials (e.g. \cite{cherfils2011, gilardi2009, elliott1996}). Recently, the heat equation with a logarithmic nonlinearity has been investigated mathematically \cite{chen2015, alfaro2017}. In the context of quantum mechanics, the logarithmic nonlinearity was selected by requiring the separability property for noninteracting subsystems (cf. \cite{BiMy76}). This means that a solution of the nonlinear equation for the whole system can be constructed, as in the linear theory, by taking the product of two arbitrary solutions of the nonlinear equations for the subsystems. In other words, no correlations are introduced for noninteracting subsystems. As for the physical reality, robust physical grounds have been found for the application of equations with logarithmic nonlinearity. For instance, it was found in the stochastic formulation of quantum mechanics \cite{lemos1983, nassar1985} that the logarithmic nonlinear term originates naturally from an internal stochastic force due to quantum fluctuations. Such kind of nonlinearity also appears naturally in inflation cosmology and in supersymmetric field theories \cite{barrow1995, enqvist1998q}. Remarkably enough for a nonlinear PDE, many explicit solutions are available for these logarithmic models (see e.g. \cite{BiMy76,koutvitsky2006}). For example, the logarithmic KdV equation, the logarithmic KP equation and the logarithmic Klein-Gordon equation admit Gaussons: solitary wave solutions with Gaussian shapes \cite{wazwaz2014, wazwaz2016}. In the case of the LogSE (see \cite{CaGa18,ferriere-p1}), or the heat equation \cite{alfaro2017}, every initial Gaussian function evolves as a Gaussian: solving the corresponding nonlinear PDE is equivalent to solving ordinary differential equations (involving the purely time-dependent parameters of the Gaussian). However, we emphasize that this is not so in the case of, e.g., the logarithmic KdV equation, the logarithmic KP equation, or the logarithmic Klein-Gordon equation. This can be directly seen by trying to plug time-dependent Gaussian functions into these equations. Note that this distinction between various PDEs regarding the propagation of Gaussian functions is the same as at the linear level. The well-posedness of the Cauchy problem for logarithmic equations is not trivial since the logarithmic nonlinearity is not locally Lipschitz continuous, due to the singularity of the logarithm at the origin. Existence was proved by a compactness argument based on regularization of the nonlinearity, for the CH equation with a logarithmic potential \cite{elliott1991} and the LogSE \cite{cazenave1983}.
Uniqueness is also a challenging question, settled in the case of the LogSE thanks to a surprising inequality discovered in \cite{CaHa80}, recalled in Lemma~\ref{pre} below. The singularity of the logarithmic nonlinearity also makes it very challenging to design and analyze numerical schemes. There have been extensive numerical works for the CH equation with a logarithmic Flory Huggins energy potential \cite{copetti1992, gokieli2003, jeong2016, jeong2017, yang2019, chen2019}. Specifically, a regularized energy functional was adopted for the CH equation with a logarithmic free energy \cite{copetti1992, yang2019}. A regularization of the logarithmic nonlinearity was introduced and analyzed in \cite{bao2018, bao2019error} in the case of the LogSE, see also \cite{li2019}. In this paper, we introduce and analyze numerical methods for logarithmic equations via a local energy regularization. We consider the LogSE as an example; the regularization can be extended to other logarithmic equations. The LogSE, which arises in a model of nonlinear wave mechanics (cf. \cite{BiMy76}), reads \begin{equation}\label{LSE} \left\{ \begin{aligned} &i\partial_t u({\bf x} ,t)=-\Delta u({\bf x} ,t)+\lambda\, u({\bf x} ,t)\,f(|u({\bf x} ,t)|^2),\quad {\bf x} \in \Omega, \quad t>0,\\ & u({\bf x} ,0)=u_0({\bf x} ),\quad {\bf x} \in \overline{\Omega}, \end{aligned} \right. \end{equation} where $t$ and ${\bf x} \in \mathbb{R}^d$ ($d=1,2,3$) represent the temporal and spatial coordinates, respectively, $\lambda\in \mathbb{R}\backslash\{0\}$ measures the force of the nonlinear interaction, $u:=u({\bf x} ,t)\in\mathbb{C}$ is the dimensionless wave function, and \begin{equation}\label{frhoSE} f(\rho)=\ln \rho, \qquad \rho>0, \qquad \hbox{with}\quad \rho=|u|^2. \end{equation} The spatial domain is either $\Omega=\mathbb{R}^d$, or $\Omega\subset\mathbb{R}^d$ bounded with Lipschitz continuous boundary; in the latter case the equation is subject to homogeneous Dirichlet or periodic boundary conditions. This model has been widely applied in quantum mechanics, nuclear physics, geophysics, open quantum systems and Bose-Einstein condensation, see e.g. \cite{Hef85, yasue, HeRe80, de2003, BEC}. We consider positive time only, merely to simplify the presentation, since \eqref{LSE} is time reversible. Formally, the flow of \eqref{LSE} enjoys two important conservation laws. The {\sl mass}, defined as \begin{equation}\label{massSE} N(t):=N(u(\cdot,t))=\|u\|^2=\int_\Omega |u({\bf x} ,t)|^2d{\bf x} \equiv N(u_0), \qquad t\ge0, \end{equation} and the {\sl energy}, defined as \begin{equation}\label{conserv} \begin{split} E(t):&=E(u(\cdot,t))=\int_\Omega\left[|\nabla u({\bf x} ,t)|^2+\lambda F(|u({\bf x} ,t)|^2)\right]d{\bf x} \\ &\equiv\int_\Omega\left[|\nabla u_0({\bf x} )|^2+\lambda F(|u_0({\bf x} )|^2)\right]d{\bf x} =E(u_0), \qquad t\ge0, \end{split} \end{equation} where \begin{equation} \label{Frho345} F(\rho)=\int_0^\rho f(s)ds=\int_0^\rho \ln s\, ds=\rho\,\ln \rho-\rho, \qquad \rho\ge0. \end{equation} The total angular momentum is also conserved, an identity that we do not use in the present paper. For the Cauchy problem \eqref{LSE} in a suitable functional framework, we refer to \cite{CaHa80, CaGa18,GLN10}. For stability properties of standing waves for \eqref{LSE}, we refer to \cite{cazenave1982, cazenave1983, Ar16}. For the analysis of breathers and the existence of multisolitons, see \cite{ferriere-p1,ferriere-p2}.
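These two invariants are also the natural diagnostics for numerical schemes. The following minimal Python sketch (our own illustration; the uniform periodic grid in one dimension, the central difference for $\nabla u$ and the continuity convention $F(0)=0$ are assumptions, not part of the model) evaluates discrete analogues of the mass \eqref{massSE} and the energy \eqref{conserv}:
\begin{verbatim}
import numpy as np

def discrete_mass_energy(u, dx, lam):
    # u: complex samples of the wave function on a uniform periodic grid
    # dx: grid spacing; lam: the coupling constant lambda
    rho = np.abs(u)**2
    # F(rho) = rho*ln(rho) - rho, extended by continuity with F(0) = 0
    F = np.zeros_like(rho)
    pos = rho > 0
    F[pos] = rho[pos]*np.log(rho[pos]) - rho[pos]
    # central difference for the gradient (periodic boundary conditions)
    ux = (np.roll(u, -1) - np.roll(u, 1))/(2*dx)
    mass = dx*np.sum(rho)
    energy = dx*np.sum(np.abs(ux)**2 + lam*F)
    return mass, energy
\end{verbatim}
Along the exact flow both outputs should remain constant in time, which provides a convenient sanity check for any discretisation of \eqref{LSE}.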
In order to avoid numerical blow-up of the logarithmic nonlinearity at the origin, two models of regularized logarithmic Schr\"odinger equation (RLogSE) were proposed in \cite{bao2019error}, involving a direct regularization of $f$ in \eqref{frhoSE}, relying on a small regularization parameter $0<\varepsilon\ll1$, \begin{equation}\label{RLSE0} \left\{ \begin{aligned} &i\partial_t u^\varepsilon({\bf x} ,t)=-\Delta u^\varepsilon({\bf x} ,t)+\lambda \, u^\varepsilon({\bf x} ,t)\,\widetilde{f}^\varepsilon(|u^\varepsilon({\bf x} ,t)|^2),\quad {\bf x} \in \Omega, \quad t>0,\\ &u^\varepsilon({\bf x} ,0)=u_0({\bf x} ),\quad {\bf x} \in \overline{\Omega}, \end{aligned} \right. \end{equation} and \begin{equation}\label{RLSE1} \left\{ \begin{aligned} &i\partial_t u^\varepsilon({\bf x} ,t)=-\Delta u^\varepsilon({\bf x} ,t)+\lambda \, u^\varepsilon({\bf x} ,t)\,\widehat{f}^\varepsilon(|u^\varepsilon({\bf x} ,t)|^2),\quad {\bf x} \in \Omega, \quad t>0,\\ &u^\varepsilon({\bf x} ,0)=u_0({\bf x} ),\quad {\bf x} \in \overline{\Omega}. \end{aligned} \right. \end{equation} Here, $\widetilde{f}^\varepsilon(\rho)$ and $\widehat{f}^\varepsilon(\rho)$ are two types of regularization for $f(\rho)$, given by \begin{equation} \label{1str2} \widetilde{f}^\varepsilon(\rho)=2\ln (\varepsilon+\sqrt{\rho}),\quad \widehat{f}^\varepsilon(\rho)=\ln(\varepsilon^2+\rho),\quad \rho\ge0, \qquad \hbox{with}\quad \rho=|u^\varepsilon|^2. \end{equation} Again, the RLogSEs \eqref{RLSE0} and \eqref{RLSE1} conserve the mass \eqref{massSE} with $u=u^\varepsilon$, as well as the {\sl energies} \begin{equation}\label{conserv1} \widetilde{E}^\varepsilon(t):=\widetilde{E}^\varepsilon (u^\varepsilon(\cdot,t))=\int_\Omega\left[|\nabla u^\varepsilon({\bf x} ,t)|^2+\lambda \widetilde{F}^\varepsilon(|u^\varepsilon({\bf x} ,t)|^2)\right]d{\bf x} \equiv \widetilde{E}^\varepsilon(u_0), \end{equation} and \begin{equation}\label{conserv2} \widehat{E}^\varepsilon(t):=\widehat{E}^\varepsilon (u^\varepsilon(\cdot,t))=\int_\Omega\left[|\nabla u^\varepsilon({\bf x} ,t)|^2+\lambda \widehat{F}^\varepsilon(|u^\varepsilon({\bf x} ,t)|^2)\right]d{\bf x} \equiv \widehat{E}^\varepsilon(u_0), \end{equation} respectively, with, for $\rho\ge 0$, \begin{equation}\label{1str} \begin{split} \widetilde{F}^\varepsilon(\rho)&=\int_0^\rho \widetilde{f}^\varepsilon(s)ds =2\rho\ln(\varepsilon+\sqrt{\rho})+2\varepsilon\sqrt{\rho}-\rho-2 \varepsilon^2\ln(1+\sqrt{\rho}/\varepsilon),\\ \widehat{F}^\varepsilon(\rho)&=\int_0^\rho \widehat{f}^\varepsilon(s)ds=(\varepsilon^2+\rho)\ln(\varepsilon^2+\rho)-\rho-2\varepsilon^2\ln \varepsilon. \end{split} \end{equation} The idea of this regularization is that the function $\rho\mapsto \ln \rho$ causes no (analytical or numerical) problem for large values of $\rho$, but is singular at $\rho=0$. A linear convergence was established between the solutions of the LogSE \eqref{LSE} and the regularized model \eqref{RLSE0} or \eqref{RLSE1} for bounded $\Omega$ in terms of the small regularization parameter $0<\varepsilon\ll1$, i.e., \[\sup_{t\in [0,T]}\|u^\varepsilon(t) -u(t)\|_{L^2(\Omega)}=O(\varepsilon),\quad \forall\ T>0.\] Applying this regularized model, a semi-implicit finite difference method (FDM) and a time-splitting method were proposed and analyzed for the RLogSE \eqref{RLSE0} in \cite{bao2019error} and \cite{bao2018} respectively.
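Both saturated nonlinearities in \eqref{1str2} are straightforward to evaluate; the following short Python sketch (our own illustration, not taken from \cite{bao2019error}) tabulates their deviation from $f(\rho)=\ln \rho$:
\begin{verbatim}
import numpy as np

def f_tilde(rho, eps):
    # 2*ln(eps + sqrt(rho)), well defined for all rho >= 0
    return 2.0*np.log(eps + np.sqrt(rho))

def f_hat(rho, eps):
    # ln(eps^2 + rho), well defined for all rho >= 0
    return np.log(eps**2 + rho)

eps = 1e-2
rho = np.logspace(-8, 1, 10)
# accurate for rho well above eps^2, saturated below
print(np.abs(f_tilde(rho, eps) - np.log(rho)))
print(np.abs(f_hat(rho, eps) - np.log(rho)))
\end{verbatim}
The printout makes the trade-off visible: the singularity at $\rho=0$ is removed, at the price of an $\varepsilon$-dependent perturbation of the nonlinearity at every density.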
The above regularization saturates the nonlinearity in the region $\{\rho<\varepsilon^2\}$ (where $\rho=|u^\varepsilon|^2$), but of course has also some (smaller) effect in the other region $\{\rho>\varepsilon^2\}$, i.e., it regularizes $f(\rho)=\ln \rho$ globally. Energy regularization is a method which has been adopted in various fields for dealing with singularities and/or roughness: in materials science, for establishing the well-posedness of the Cauchy problem for the CH equation with a logarithmic potential \cite{elliott1991}, and for treating strongly anisotropic surface energy \cite{Jiang,BaoJ}; in mathematical physics, for the well-posedness of the LogSE \cite{cazenave1983}; in scientific computing, for designing regularized numerical methods in the presence of singularities \cite{copetti1992, yang2019,BaoR}. The main goal of this paper is to present a local energy regularization ({\sl LER}) for the LogSE \eqref{LSE}. We regularize the interaction energy density $F(\rho)$ only locally in the region $\{\rho<\varepsilon^2\}$ by a sequence of polynomials, and keep it unchanged in $\{\rho>\varepsilon^2\}$. The choice of the regularized interaction energy density $F_n^\varepsilon$ is prescribed by the regularity $n$ imposed at this step, involving the matching conditions at $\{\rho=\varepsilon^2\}$. We then obtain a sequence of energy regularized logarithmic Schr\"odinger equations (ERLogSEs), from the regularized energy functional density $F_n^\varepsilon$, via energy variation. Unlike in \cite{copetti1992,yang2019}, where the interaction energy density $F(\rho)$ is approximated by a second-order polynomial near the origin, here we present a systematic way to regularize the interaction energy density near the origin, i.e. locally, by a sequence of polynomials such that the order of regularity $n$ of the overall regularized interaction energy density is arbitrary. We establish convergence rates between the solutions of ERLogSEs and LogSE in terms of the small regularization parameter $0<\varepsilon\ll1$. In addition, we also prove error estimates of numerical approximations of ERLogSEs by using time-splitting integrators. The rest of this paper is organized as follows. In Section~\ref{sec:regul}, we introduce a sequence of regularizations $F_n^\varepsilon$ for the logarithmic potential. A regularized model is derived and analyzed in Section~\ref{sec:regulLSE} via the LER of the LogSE. Some numerical methods are proposed and analyzed in Section~\ref{sec:lie}. In Section~\ref{sec:num}, we present numerical experiments. Throughout the paper, we adopt the standard $L^2$-based Sobolev spaces as well as the corresponding norms, and denote by $C$ a generic positive constant independent of $\varepsilon$, the time step $\tau$ and the function $u$, and by $C(c)$ a generic positive constant depending on $c$. \section{Local regularization for $F(\rho)=\rho\,\ln \rho-\rho$} \label{sec:regul} We consider a local regularization starting from an approximation to the interaction energy density $F(\rho)$ in \eqref{Frho345} (and thus in \eqref{conserv}). \subsection{A sequence of local regularization} In order to make a comparison with the former global regularization \eqref{RLSE0}, we again distinguish the regions $\{\rho>\varepsilon^2\}$ and $\{\rho<\varepsilon^2\}$. Instead of saturating the nonlinearity in the second region, we regularize it locally as follows.
For an arbitrary integer $n\ge2$, we approximate $F(\rho)$ by a piecewise smooth function which is polynomial near the origin, \begin{equation}\label{Fn} F^\varepsilon_n(\rho)=F(\rho)\chi_{\{\rho\ge \varepsilon^2\}}+P^\varepsilon_{n+1}(\rho)\chi_{\{\rho<\varepsilon^2\}}, \quad n\ge 2, \end{equation} where $0<\varepsilon\ll1$ is a small regularization parameter, $\chi_{_A}$ is the characteristic function of the set $A$, and $P^\varepsilon_{n+1}$ is a polynomial of degree $n+1$. We demand $F^\varepsilon_n \in C^n([0,+\infty))$ and $F^\varepsilon_n(0)=F(0)=0$ (this allows the regularized energy to be well-defined on the whole space). The above conditions determine $P_{n+1}^\varepsilon$, as we now check. Since $P^\varepsilon_{n+1}(0)=0$, write \begin{equation}\label{PQ} P^\varepsilon_{n+1}(\rho)=\rho\, Q_n^\varepsilon(\rho), \end{equation} with $Q^\varepsilon_n$ a polynomial of degree $n$. Correspondingly, denote $F(\rho)=\rho\, Q(\rho)$ with $Q(\rho)=\ln \rho-1$. The continuity conditions read \[P^\varepsilon_{n+1}(\varepsilon^2)=F(\varepsilon^2), \quad (P^\varepsilon_{n+1})'(\varepsilon^2)=F'(\varepsilon^2), \quad\ldots, \quad (P^\varepsilon_{n+1})^{(n)}(\varepsilon^2)=F^{(n)}(\varepsilon^2),\] which in turn yield \[Q^\varepsilon_{n}(\varepsilon^2)=Q(\varepsilon^2), \quad (Q^\varepsilon_{n})'(\varepsilon^2)=Q'(\varepsilon^2), \quad\ldots, \quad (Q^\varepsilon_{n})^{(n)}(\varepsilon^2)=Q^{(n)}(\varepsilon^2).\] Thus $Q^\varepsilon_n$ is nothing else but the Taylor polynomial of $Q$ of degree $n$ at $\rho=\varepsilon^2$, i.e., \begin{equation}\label{Qd} Q^\varepsilon_n(\rho)=Q(\varepsilon^2)+\sum\limits_{k=1}^n \fl{Q^{(k)}(\varepsilon^2)}{k!}(\rho-\varepsilon^2)^k= \ln \varepsilon^2-1 -\sum\limits_{k=1}^n \fl{1}{k}\left(1-\fl{\rho}{\varepsilon^2}\right)^k. \end{equation} In particular, Taylor's formula yields \begin{equation}\label{eq:Taylor} Q(\rho)- Q^\varepsilon_n(\rho)=\int_{\varepsilon^2}^\rho Q^{(n+1)}(s)\frac{(\rho-s)^n}{n!}ds = \int_{\varepsilon^2}^\rho \frac{(s-\rho)^n}{s^{n+1}}ds . \end{equation} Plugging \eqref{Qd} into \eqref{PQ}, we get the explicit formula for $P_{n+1}^\varepsilon(\rho)$. We emphasize a formula which will be convenient for convergence results: \begin{equation} \label{eq:Q'} \(Q^\varepsilon_n\)'(\rho)= \frac{1}{\varepsilon^2}\sum\limits_{k=1}^n \left(1-\fl{\rho}{\varepsilon^2}\right)^{k-1}= \frac{1}{\rho}\( 1-\(1-\fl{\rho}{\varepsilon^2}\)^n\), \qquad 0\le \rho \le \varepsilon^2. \end{equation} \subsection{Properties of the local regularization functions} Differentiating \eqref{Fn} with respect to $\rho$ and noting \eqref{PQ}, \eqref{Qd} and \eqref{eq:Q'}, we get \begin{equation}\label{fep} f_n^\varepsilon(\rho)=(F_n^\varepsilon)'(\rho)=\ln \rho \,\chi_{\{\rho\ge \varepsilon^2\}}+q^\varepsilon_n(\rho)\chi_{\{\rho<\varepsilon^2\}}, \qquad \rho\ge0, \end{equation} where \begin{align*} q^\varepsilon_n(\rho)&=(P_{n+1}^\varepsilon)'(\rho)=Q_n^\varepsilon(\rho)+\rho\, (Q_n^\varepsilon)'(\rho)\\ &=\ln (\varepsilon^2)-\frac{n+1}{n}\left(1-\fl{\rho}{\varepsilon^2}\right)^n-\sum\limits_{k=1}^{n-1} \fl{1}{k}\left(1-\fl{\rho}{\varepsilon^2}\right)^k. \end{align*} Noticing that $q^\varepsilon_n$ is increasing on $[0, \varepsilon^2]$, while $\widetilde{f}^\varepsilon$ and $\widehat{f}^\varepsilon$ are increasing on $[0, \infty)$, we conclude that all three types of regularization \eqref{Fn} and \eqref{1str} preserve the convexity of $F$.
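For concreteness, the pair $(F_n^\varepsilon, f_n^\varepsilon)$ defined by \eqref{Fn}, \eqref{Qd} and \eqref{fep} can be coded in a few lines. The following Python sketch (the function names and the crude continuity check at $\rho=\varepsilon^2$ are our own choices) may serve as a reference implementation:
\begin{verbatim}
import numpy as np

def Q_n(rho, eps, n):
    # Taylor polynomial of Q(rho) = ln(rho) - 1 of degree n at rho = eps^2
    s = 1.0 - rho/eps**2
    return np.log(eps**2) - 1.0 - sum(s**k/k for k in range(1, n + 1))

def F_n(rho, eps, n):
    # local regularization of F(rho) = rho*ln(rho) - rho
    # (np.maximum only guards the discarded branch against log(0))
    return np.where(rho >= eps**2,
                    rho*np.log(np.maximum(rho, eps**2)) - rho,
                    rho*Q_n(rho, eps, n))

def f_n(rho, eps, n):
    # derivative (F_n)'(rho): ln(rho) above eps^2, q_n below
    s = 1.0 - rho/eps**2
    q = np.log(eps**2) - (n + 1)/n*s**n - sum(s**k/k for k in range(1, n))
    return np.where(rho >= eps**2, np.log(np.maximum(rho, eps**2)), q)

# sanity check: values match across rho = eps^2
eps, n = 1e-1, 4
for g in (F_n, f_n):
    print(abs(g(eps**2*(1 - 1e-9), eps, n) - g(eps**2*(1 + 1e-9), eps, n)))
\end{verbatim}
With $n=4$ and $\varepsilon=0.1$ the printed jumps vanish to rounding accuracy, reflecting the matching conditions at $\rho=\varepsilon^2$.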
Moreover, as a sequence of local regularization (or approximation) for the semi-smooth function $F(\rho)\in C^0([0,\infty))\cap C^\infty((0,\infty))$, we have $F^\varepsilon_n \in C^n([0,+\infty))$ for $n\ge2$, while $\widetilde{F}^\varepsilon \in C^1([0,\infty))\cap C^\infty((0,\infty))$ and $\widehat{F}^\varepsilon \in C^\infty([0,\infty))$. Similarly, as a sequence of local regularization (or approximation) for the logarithmic function $f(\rho)=\ln \rho\in C^\infty((0,\infty))$, we observe that $f_n^\varepsilon\in C^{n-1}([0, \infty))$ for $n\ge 2$, while $\widehat{f}^\varepsilon\in C^\infty([0, \infty))$ and $\widetilde{f}^\varepsilon \in C^0([0,\infty))\cap C^\infty((0,\infty))$. Recall the following lemma, established initially in \cite[Lemma~1.1.1]{CaHa80}. \begin{lemma}\label{pre} For $z_1, z_2\in\mathbb{C}$, we have \[\left|\mathrm{Im}\left(\(z_1\ln (|z_1|^2)-z_2\ln (|z_2|^2)\) (\overline{z_1}-\overline{z_2})\right)\right|\le 2|z_1-z_2|^2, \] where $\mathrm{Im}(z)$ and $\overline{z}$ denote the imaginary part and the complex conjugate of $z$, respectively. \end{lemma} Next we highlight some properties of $f_n^\varepsilon$. \begin{lemma} Let $n\ge 2$ and $\varepsilon>0$. For $z_1$, $z_2\in\mathbb{C}$, we have \begin{align} &|f_n^\varepsilon(|z_1|^2)-f_n^\varepsilon(|z_2|^2)|\le\fl{4n|z_1-z_2|}{\max\{\varepsilon,\min\{|z_1|, |z_2|\}\}},\label{fl}\\ &\left|\mathrm{Im}\left[\(z_1f_n^\varepsilon(|z_1|^2)-z_2f_n^\varepsilon(|z_2|^2)\) (\overline{z_1}-\overline{z_2})\right]\right|\le 4n|z_1-z_2|^2,\label{gl}\\ &|\rho (f_n^\varepsilon)'(\rho)|\le 3,\quad |\sqrt{\rho} (f_n^\varepsilon)'(\rho)|\le \fl{2n}{\varepsilon},\quad |\rho^{3/2} (f_n^\varepsilon)''(\rho)|\le \fl{3n^2}{2\varepsilon},\quad \rho\ge0,\label{fd}\\ &|f_n^\varepsilon(\rho)|\le \max\{|\ln A|, 2+\ln(n\varepsilon^{-2})\}, \quad \rho\in [0, A].\label{fb} \end{align} \end{lemma} \begin{proof} When $|z_1|, |z_2|\ge\varepsilon$, we have \[ \left|f_n^\varepsilon(|z_1|^2)-f_n^\varepsilon(|z_2|^2)\right|=2\ln\Big(1+\fl{\left||z_1|-|z_2|\right| }{\min\{|z_1|,|z_2|\}}\Big)\le \fl{2|z_1-z_2|}{\min\{|z_1|,|z_2|\}}. \] A direct calculation gives \begin{equation}\label{fp1} (f_n^\varepsilon)'(\rho)=\fl{1}{\rho}\chi_{\{\rho\ge \varepsilon^2\}}+\left(\fl{n}{\varepsilon^2} \big(1-\fl{\rho}{\varepsilon^2}\big)^{n-1}+ \fl{1}{\varepsilon^2}\sum\limits_{k=0}^{n-1}\big(1-\fl{\rho}{\varepsilon^2}\big)^{k}\right)\chi_{\{\rho< \varepsilon^2\}}. \end{equation} Thus when $|z_1|<|z_2|\le\varepsilon$, we have \begin{align*} \left|f_n^\varepsilon(|z_1|^2)-f_n^\varepsilon(|z_2|^2)\right|&= \int_{|z_1|^2}^{|z_2|^2} (f_n^\varepsilon)'(\rho)d\rho\\ &=\frac{n}{\varepsilon^2}\int_{|z_1|^2}^{|z_2|^2} \big(1-\fl{\rho}{\varepsilon^2}\big)^{n-1}d\rho+\frac{1}{\varepsilon^2}\sum\limits_{k=0}^{n-1} \int_{|z_1|^2}^{|z_2|^2}\big(1-\fl{\rho}{\varepsilon^2}\big)^{k}d\rho\\ &\le \frac{n}{\varepsilon^2}(|z_2|^2-|z_1|^2)+\frac{1}{\varepsilon^2}\sum\limits_{k=0}^{n-1}(|z_2|^2-|z_1|^2)\\ &=\frac{2n}{\varepsilon^2}(|z_2|^2-|z_1|^2)\le \frac{4n}{\varepsilon}|z_1-z_2|. \end{align*} Another case when $|z_2|<|z_1|\le \varepsilon$ can be established similarly. Supposing, for example, $|z_2|<\varepsilon< |z_1|$, denote by $z_3$ the intersection point of the circle $\{z\in \mathbb{C}: |z|=\varepsilon\}$ and the line segment connecting $z_1$ and $z_2$. 
Combining the inequalities above, we have \begin{align*} \left|f_n^\varepsilon(|z_1|^2)-f_n^\varepsilon(|z_2|^2)\right|&\le |f_n^\varepsilon(|z_2|^2)-f_n^\varepsilon(|z_3|^2)|+ |\ln(|z_1|^2)-\ln(|z_3|^2)|\\ &\le \fl{4n}{\varepsilon}|z_2-z_3|+ \fl{2}{\varepsilon}|z_1-z_3|\\ &\le \fl{4n}{\varepsilon}\left(|z_2-z_3|+|z_1-z_3|\right)=\fl{4n}{\varepsilon}|z_1-z_2|, \end{align*} which completes the proof for \eqref{fl}. Noticing that \begin{align*} &\mathrm{Im}\left[\(z_1f_n^\varepsilon(|z_1|^2)-z_2f_n^\varepsilon(|z_2|^2)\) (\overline{z_1}-\overline{z_2})\right]\\ &\quad=-\mathrm{Im}(\overline{z_1} z_2) f_n^\varepsilon(|z_2|^2)-\mathrm{Im}(z_1\overline{z_2})f_n^\varepsilon(|z_1|^2)\\ &\quad=\mathrm{Im}(\overline{z_1} z_2)\left[f_n^\varepsilon(|z_1|^2)- f_n^\varepsilon(|z_2|^2)\right]\\ &\quad=\fl{1}{2i}(\overline{z_1}z_2-z_1\overline{z_2}) \left[f_n^\varepsilon(|z_1|^2)-f_n^\varepsilon(|z_2|^2)\right], \end{align*} and \begin{align*} &\left|\overline{z_1}z_2-z_1\overline{z_2}\right|= \left|z_2(\overline{z_1}-\overline{z_2})+\overline{z_2}(z_2-z_1)\right|\le 2|z_2|\,|z_1-z_2|,\\ &\left|\overline{z_1}z_2-z_1\overline{z_2}\right|= \left|\overline{z_1}(z_2-z_1)+z_1(\overline{z_1}-\overline{z_2})\right|\le 2|z_1|\,|z_1-z_2|, \end{align*} which implies \[\left|\overline{z_1}z_2-z_1\overline{z_2}\right|\le 2\min\{|z_1|, |z_2|\}\,|z_1-z_2|,\] one can conclude \eqref{gl} by applying \eqref{fl}. It follows from \eqref{fp1} that \begin{align*} g(\rho)&=\rho(f_n^\varepsilon)'(\rho)=\chi_{\{\rho\ge \varepsilon^2\}}+\left(\fl{n\rho}{\varepsilon^2} \big(1-\fl{\rho}{\varepsilon^2}\big)^{n-1}+ \fl{\rho}{\varepsilon^2}\sum\limits_{k=0}^{n-1}\big(1-\fl{\rho}{\varepsilon^2}\big)^{k}\right)\chi_{\{\rho< \varepsilon^2\}}\\ &=\chi_{\{\rho\ge \varepsilon^2\}}+\left(\fl{n\rho}{\varepsilon^2} \big(1-\fl{\rho}{\varepsilon^2}\big)^{n-1}+1-\big(1-\fl{\rho}{\varepsilon^2}\big)^{n} \right)\chi_{\{\rho< \varepsilon^2\}}, \end{align*} which gives that \[g'(\rho)\chi_{\{\rho< \varepsilon^2\}}=\frac{n}{\varepsilon^2}\big(1-\fl{\rho}{\varepsilon^2}\big)^{n-2} \left[2-\frac{(n+1)\rho}{\varepsilon^2}\right].\] This leads to \[\left|\rho(f_n^\varepsilon)'(\rho)\right|=g(\rho)\le \max\{1, g\left(\frac{2\varepsilon^2}{n+1}\right)\}\le 1+\frac{2n}{n+1}\le 3,\] which completes the proof for the first inequality in \eqref{fd}. Finally it follows from \eqref{fp1} that \begin{align*} &\sqrt{\rho}(f_n^\varepsilon)'(\rho)=\fl{1}{\sqrt{\rho}}\chi_{\{\rho\ge \varepsilon^2\}}+\fl{\sqrt{\rho}}{\varepsilon^2}\left(n \big(1-\fl{\rho}{\varepsilon^2}\big)^{n-1}+ \sum\limits_{k=0}^{n-1}\big(1-\fl{\rho}{\varepsilon^2}\big)^{k}\right)\chi_{\{\rho< \varepsilon^2\}},\\ &(f_n^\varepsilon)''(\rho)=-\fl{1}{\rho^2}\chi_{\{\rho\ge \varepsilon^2\}}-\left(\fl{n^2-1}{\varepsilon^4}\big(1-\fl{\rho}{\varepsilon^2}\big)^{n-2}+\fl{1}{\varepsilon^4} \sum\limits_{k=0}^{n-3}(k+1)\big(1-\fl{\rho}{\varepsilon^2}\big)^{k}\right)\chi_{\{\rho< \varepsilon^2\}}, \end{align*} which immediately yields that \begin{align*} &|\sqrt{\rho}(f_n^\varepsilon)'(\rho)|\le \frac{2n}{\varepsilon},\\ &|\rho^{3/2}(f_n^\varepsilon)''(\rho)|\le \frac{1}{\varepsilon}\left(n^2-1+ \sum\limits_{k=0}^{n-3}(k+1)\right)=\frac{3n(n-1)}{2\varepsilon}<\frac{3n^2}{2\varepsilon}. 
\end{align*} For $\rho\in [0, \varepsilon^2]$, in view of $\varepsilon\in (0, 1]$, one deduces \begin{align*} |f_n^\varepsilon(\rho)|&\le \ln (\varepsilon^{-2})+\frac{n+1}{n}\left(1-\fl{\rho}{\varepsilon^2}\right)^n+\sum\limits_{k=1}^{n-1} \fl{1}{k}\left(1-\fl{\rho}{\varepsilon^2}\right)^k\\ &\le \ln (\varepsilon^{-2})+\frac{n+1}{n}+\sum\limits_{k=1}^{n-1} \fl{1}{k}\\ &\le \ln (\varepsilon^{-2})+2+\sum\limits_{k=2}^{n} \fl{1}{k}\\ &\le 2+\ln (n\varepsilon^{-2}), \end{align*} which together with $|f_n^\varepsilon(\rho)|\le \max\{\ln(\varepsilon^{-2}), |\ln(A)|\}$ when $\rho\in [\varepsilon^2, A]$ concludes \eqref{fb}. \end{proof} \subsection{Comparison between different regularizations} To compare different regularizations for $F(\rho)$ (and thus for $f(\rho)$), Fig.~\ref{Fcomp} shows $F_n^\varepsilon$ ($n=2,4,100,500$), $\widetilde{F}^\varepsilon$ and $\widehat{F}^\varepsilon$ for different $\varepsilon$, from which we can see that the newly proposed local regularization $F_n^\varepsilon$ approximates $F$ more accurately. \begin{figure}[h!] \begin{center} \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/F/Error_ReguL_Fun_TiLde.eps} \quad \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/F/Error_ReguL_Fun_Hat.eps}\\[1em] \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/F/Error_ReguL_Fun_Energ_Order2.eps} \quad \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/F/Error_ReguL_Fun_Energ_Order4.eps}\\[1em] \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/F/Error_ReguL_Fun_Energ_Order100.eps} \quad \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/F/Error_ReguL_Fun_Energ_Order500.eps} \end{center} \caption{Comparison of different regularizations for $F(\rho)=\rho\ln\rho-\rho$.} \label{Fcomp} \end{figure} Fig.~\ref{fcomp} shows various regularizations $f_n^\varepsilon$ ($n=2,4,100,500$), $\widetilde{f}^\varepsilon$ and $\widehat{f}^\varepsilon$ for various $\varepsilon$, while Figs.~\ref{fprime} \& \ref{fpp} show their first- and second-order derivatives. From these figures, we can see that the newly proposed local regularization $f_n^\varepsilon$ (and its derivatives with larger $n$) approximates the nonlinearity $f$ (and its derivatives) more accurately. In addition, Fig.~\ref{fig:Revision_EnergyReguL_Conv_WRT_Order_N} depicts $F_n^\varepsilon(\rho)$ (with $\varepsilon=0.1$) and its derivatives for different $n$, from which we can clearly see the convergence of $F_n^\varepsilon(\rho)$ (and its derivatives) to $F(\rho)$ (and its derivatives) with respect to the order $n$. \begin{figure}[htbp!] \begin{center} \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/1st_Deriv_F/Error_First_DeR_Fun_TiLde.eps} \quad \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/1st_Deriv_F/Error_First_DeR_Fun_Hat.eps}\\[1em] \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/1st_Deriv_F/Error_First_DeR_Fun_Energ_Order2.eps} \quad \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/1st_Deriv_F/Error_First_DeR_Fun_Energ_Order4.eps}\\[1em] \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/1st_Deriv_F/Error_First_DeR_Fun_Energ_Order100.eps} \quad \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/1st_Deriv_F/Error_First_DeR_Fun_Energ_Order500.eps} \end{center} \caption{Comparison of different regularizations for the nonlinearity $f(\rho)=\ln \rho$.} \label{fcomp} \end{figure} \begin{figure}[htbp!]
\begin{center} \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/2nd_Deriv_F/Second_DeR_Fun_TiLde_Order.eps} \quad \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/2nd_Deriv_F/Second_DeR_Fun_Hat_Order.eps}\\[1em] \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/2nd_Deriv_F/Second_DeR_Fun_Energ_Order2.eps} \quad \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/2nd_Deriv_F/Second_DeR_Fun_Energ_Order4.eps}\\[1em] \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/2nd_Deriv_F/Second_DeR_Fun_Energ_Order100.eps} \quad \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/2nd_Deriv_F/Second_DeR_Fun_Energ_Order500.eps} \end{center} \caption{Comparison of different regularizations for $f'(\rho)=1/\rho$.} \label{fprime} \end{figure} \begin{figure}[htbp!] \begin{center} \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/3rd_Deriv_F/Third_DeR_Fun_TiLde_Order.eps} \quad \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/3rd_Deriv_F/Third_DeR_Fun_Hat_Order.eps}\\[1em] \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/3rd_Deriv_F/Third_DeR_Fun_Energ_Order2.eps} \quad \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/3rd_Deriv_F/Third_DeR_Fun_Energ_Order4.eps}\\[1em] \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/3rd_Deriv_F/Third_DeR_Fun_Energ_Order100.eps} \quad \includegraphics[width=6cm,height=4cm]{./figs/ReguL_Funs_Comp/3rd_Deriv_F/Third_DeR_Fun_Energ_Order500.eps} \end{center} \caption{Comparison of different regularizations for $f''(\rho)=-1/\rho^2$.} \label{fpp} \end{figure} \begin{figure}[htbp!] \begin{center} \includegraphics[width=6cm,height=4cm]{./figs/Revision/Energy_ReguL_Conv/Revision_ReguL_Fun_Energ_Error_WRT_Order_Eps_0_1.eps} \quad \includegraphics[width=6cm,height=4cm]{./figs/Revision/Energy_ReguL_Conv/Revision_First_DeR_ReguL_Fun_Energ_Error_WRT_Order_Eps_0_1.eps}\\[1em] \includegraphics[width=6cm,height=4cm]{./figs/Revision/Energy_ReguL_Conv/Revision_Second_DeR_ReguL_Fun_Energ_Error_WRT_Order_Eps_0_1.eps} \quad \includegraphics[width=6cm,height=4cm]{./figs/Revision/Energy_ReguL_Conv/Revision_Third_DeR_ReguL_Fun_Energ_Error_WRT_Order_Eps_0_1.eps} \end{center} \caption{Comparison of regularizations $g_n^{0.1}$ ($g=F, f, f', f''$) with different order $n$.} \label{fig:Revision_EnergyReguL_Conv_WRT_Order_N} \end{figure} \section{Local energy regularization (LER) for the LogNLS} \label{sec:regulLSE} In this section, we consider the regularized energy \begin{equation} \label{RegL_Energ} E_n^\varepsilon(u):=\int_\Omega\left[|\nabla u|^2+\lambda F_n^\varepsilon(|u|^2)\right]d{\bf x} , \end{equation} where $F_n^\varepsilon$ is defined in \eqref{Fn}. The Hamiltonian flow of the regularized energy $i\partial_t u=\fl{\delta E_n^\varepsilon(u)}{\delta \overline{u}}$ yields the following energy regularized logarithmic Schr\"odinger equation (ERLogSE) with a regularizing parameter $0<\varepsilon\ll1 $, \begin{equation}\label{ERLSE} \left\{ \begin{aligned} &i\partial_t u^\varepsilon({\bf x} ,t)=-\Delta u^\varepsilon({\bf x} ,t)+\lambda\, u^\varepsilon({\bf x} ,t)\,f_n^\varepsilon(|u^\varepsilon({\bf x} ,t)|^2),\quad {\bf x} \in \Omega, \quad t>0,\\ &u^\varepsilon({\bf x} ,0)=u_0({\bf x} ),\quad {\bf x} \in \overline{\Omega}. \end{aligned} \right. \end{equation} We recall that $f_n^\varepsilon$ is defined by \eqref{fep}. \subsection{The Cauchy problem} To investigate the well-posedness of the problem \eqref{ERLSE}, we first introduce some appropriate spaces. 
For $\alpha>0$ and $\Omega=\R^d$, denote by $L^2_\alpha$ the weighted $L^2$ space \[L^2_\alpha:=\{v\in L^2(\mathbb{R}^d), \quad {\bf x} \longmapsto \langle {\bf x} \rangle^\alpha v({\bf x} )\in L^2(\mathbb{R}^d)\},\] where $\langle {\bf x} \rangle :=\sqrt{1+|{\bf x} |^2}$, with norm $\|v\|_{L^2_\alpha}:=\|\langle {\bf x} \rangle^\alpha v({\bf x} )\|_{L^2(\mathbb{R}^d)}$. Regarding the Cauchy problem \eqref{ERLSE}, we have similar results as for the regularization \eqref{RLSE0} in \cite{bao2019error}, but not quite the same. For the convenience of the reader, we recall the main arguments. \bigskip \begin{theorem}\label{theo:cauchy} Let $\lambda\in \R$, $u_0\in H^1(\Omega)$, and $0<\varepsilon\le 1$. \\ $(1).$ For \eqref{ERLSE} posed on $\Omega=\R^d$ or a bounded domain $\Omega$ with homogeneous Dirichlet or periodic boundary condition, there exists a unique, global weak solution $u^\varepsilon\in L^\infty_{\rm loc}(\R; H^1(\Omega))$ to \eqref{ERLSE} (with $H^1_0(\Omega)$ instead of $H^1(\Omega)$ in the Dirichlet case). Furthermore, for any given $T>0$, there exists a positive constant $C(\lambda, T)$ (independent of $n$) such that \begin{equation}\label{unib1} \|u^\varepsilon\|_{L^\infty([0, T]; H^1(\Omega))}\le C(\lambda, T)\|u_0\|_{H^1(\Omega)}, \quad \forall \varepsilon>0. \end{equation} $(2).$ For \eqref{ERLSE} posed on a bounded domain $\Omega$ with homogeneous Dirichlet or periodic boundary condition, if in addition $u_0\in H^2(\Omega)$, then $u^\varepsilon \in L^\infty_{\rm loc}(\R;H^2(\Omega))$ and there exists a positive constant $C(n, \lambda, T)$ such that \begin{equation}\label{unib2} \|u^\varepsilon\|_{L^\infty([0, T]; H^2(\Omega))}\le C(n, \lambda, T)\|u_0\|_{H^2(\Omega)}, \quad \forall \varepsilon>0. \end{equation} $(3).$ For \eqref{ERLSE} on $\Omega=\R^d$, suppose moreover $u_0\in L^2_\alpha$, for some $0<\alpha\le 1$. \begin{itemize} \item There exists a unique, global weak solution $u^\varepsilon\in L^\infty_{\rm loc}(\R; H^1(\R^d)\cap L^2_\alpha)$ to \eqref{ERLSE}, and \begin{equation}\label{unib3} \begin{split} & \|u^\varepsilon\|_{L^\infty([0, T]; H^1)}\le C(n, \lambda, T)\|u_0\|_{H^1}, \\ &\|u^\varepsilon\|_{L^\infty([0, T]; L^2_\alpha)}\le C(n, \lambda, T, \|u_0\|_{H^1}) \|u_0\|_{L^2_\alpha}, \quad \forall \varepsilon>0. \end{split} \end{equation} \item If in addition $u_0\in H^2(\R^d)$, then $u^\varepsilon \in L^\infty_{\rm loc}(\R;H^2(\R^d))$, and \begin{equation}\label{unib4} \|u^\varepsilon\|_{L^\infty([0, T]; H^2)}\le C(n, \lambda, T, \|u_0\|_{H^2}, \|u_0\|_{L^2_\alpha}),\quad \forall \varepsilon>0.\end{equation} \item If $u_0\in H^2(\R^d)\cap L_2^2$, then $u^\varepsilon \in L^\infty_{\rm loc}(\R;H^2(\R^d)\cap L_2^2)$. \end{itemize} \end{theorem} \begin{proof} (1). For fixed $\varepsilon>0$, the nonlinearity in \eqref{ERLSE} is locally Lipschitz continuous, and grows more slowly than any power of $|u^\varepsilon|$. Standard Cauchy theory for nonlinear Schr\"odinger equations implies that there exists a unique solution $u^\varepsilon\in L^\infty_{\rm loc}(\R;H^1(\Omega))$ to \eqref{ERLSE} (respectively, $u^\varepsilon\in L^\infty_{\rm loc}(\R;H^1_0(\Omega))$ in the Dirichlet case); see e.g. \cite[Corollary~3.3.11 and Theorem~3.4.1]{CazCourant}. 
In addition, the $L^2$-norm of $u^\varepsilon$ is independent of time, \[ \|u^\varepsilon(t)\|_{L^2(\Omega)}^2=\|u_0\|_{L^2(\Omega)}^2,\quad \forall t\in\R.\] For $j\in \{1,\dots,d\}$, differentiate \eqref{ERLSE} with respect to $x_j$: \begin{equation*} \(i\partial_t +\Delta\)\partial_j u^\varepsilon = \lambda \partial_j u^\varepsilon f_n^\varepsilon(|u^\varepsilon|^2) + 2\lambda u^\varepsilon (f_n^\varepsilon)'(|u^\varepsilon|^2){\mathrm{Re}}\(\overline{u^\varepsilon}\partial_j u^\varepsilon\). \end{equation*} Multiply the above equation by $\partial_j \overline{u^\varepsilon}$, integrate on $\Omega$, and take the imaginary part: \eqref{fd} implies \begin{equation*} \frac{1}{2}\frac{d}{dt}\|\partial_j u^\varepsilon \|_{L^2(\Omega)}^2 \le 6|\lambda| \|\partial_j u^\varepsilon \|_{L^2(\Omega)}^2, \end{equation*} hence \eqref{unib1}, by the Gronwall lemma. (2). The propagation of the $H^2$ regularity is standard, since $f_n^\varepsilon$ is smooth, so we focus on \eqref{unib2}. We now differentiate \eqref{ERLSE} with respect to time: we get the same estimate as above, with $\partial_j$ replaced by $\partial_t$, and so \begin{equation*} \|\partial_t u^\varepsilon (t)\|_{L^2(\Omega)}^2\le \|\partial_t u^\varepsilon (0)\|_{L^2(\Omega)}^2e^{12|\lambda|\,|t|}. \end{equation*} In view of \eqref{ERLSE}, \begin{equation*} i \partial_t u^\varepsilon_{\mid t=0} = -\Delta u_0 +\lambda u_0f_n^\varepsilon(|u_0|^2). \end{equation*} For $0<\delta<1$, we have \begin{equation*} \sqrt\rho |f_n^\varepsilon(\rho)|\le C(\delta)\(\rho^{1/2-\delta/2} + \rho^{1/2+\delta/2}\), \end{equation*} for some $C(\delta)$ independent of $\varepsilon$ and $n$, so for $\delta>0$ sufficiently small, Sobolev embedding entails \begin{equation*} \|\partial_t u^\varepsilon (0)\|_{L^2(\Omega)}\le \|u_0\|_{H^2(\Omega)}+ C(\delta)\(\|u_0\|^{1-\delta}_{L^{2-2\delta}(\Omega)} + \|u_0\|_{H^1(\Omega)}^{1+\delta}\). \end{equation*} Since $\Omega$ is bounded, the H\"older inequality yields \begin{equation*} \|u_0\|_{L^{2-2\delta}(\Omega)} \le \|u_0\|_{L^2(\Omega)}|\Omega|^{\delta/(2-2\delta)}. \end{equation*} Thus, the first term in \eqref{ERLSE} is controlled in $L^2$. Using the same estimates as above, we control the last term in \eqref{ERLSE} (thanks to \eqref{unib1}), and we infer an $L^2$-estimate for $\Delta u^\varepsilon$, hence \eqref{unib2}. (3). In the case $\Omega=\R^d$, we multiply \eqref{ERLSE} by $\langle {\bf x} \rangle^\alpha$, and the same energy estimate as before now yields \begin{align*} \frac d{dt} \|u^\varepsilon\|_{L^2_\alpha}^2 = 4\alpha\,\mathrm{Im}\int_{\R^d} \frac{{\bf x} \cdot \nabla u^{\varepsilon} }{ \langle {\bf x} \rangle^{2-2\alpha}} \, \overline {u^\varepsilon}(t) \, d{\bf x} & \lesssim \|\<{\bf x} \>^{2\alpha-1}u^\varepsilon\|_{L^2(\R^d)}\|\nabla u^\varepsilon\|_{L^2(\R^d)}\\ &\lesssim \|\<{\bf x} \>^{\alpha}u^\varepsilon\|_{L^2(\R^d)}\|\nabla u^\varepsilon\|_{L^2(\R^d)}, \end{align*} where the last inequality follows from the assumption $\alpha\le 1$, hence \eqref{unib3}. To prove \eqref{unib4}, we resume the same approach as for \eqref{unib2}, with the difference that the H\"older estimate must be replaced by some other estimate (see e.g. \cite{CaGa18}): for $\delta>0$ sufficiently small, \begin{equation*} \int_{{\mathbb R}^d} |u|^{2-2\delta} \lesssim \|u\|_{L^2(\R^d)}^{2-2\delta-d \delta/\alpha} \left\lVert \lvert {\bf x} \rvert^\alpha u\right\rVert_{L^2(\R^d)}^{d \delta/\alpha} . \end{equation*} The $L^2_2$ estimate follows easily, see e.g. \cite{bao2019error} for details.
\end{proof} \subsection{Convergence of the regularized model} \label{sec:cvmodel} In this subsection, we show how well the regularized model \eqref{ERLSE} approximates \eqref{LSE}. \begin{lemma}\label{thmcon} Suppose the equation \eqref{ERLSE} is set on $\Omega$, where $\Omega=\mathbb{R}^d$, or $\Omega\subset \mathbb{R}^d$ is a bounded domain with homogeneous Dirichlet or periodic boundary condition. We have the general estimate: \begin{equation}\label{gene} \fl{d}{dt}\|u^\varepsilon(t)-u(t)\|_{L^2}^2\le |\lambda|\left(4\|u^\varepsilon(t)-u(t)\|_{L^2}^2+ 6\varepsilon\|u^\varepsilon(t)-u(t)\|_{L^1}\right). \end{equation} \end{lemma} \begin{proof} Subtracting \eqref{LSE} from \eqref{ERLSE}, we see that the error function $e^\varepsilon:=u^\varepsilon-u$ satisfies \[ i\partial_t e^\varepsilon+\Delta e^\varepsilon=\lambda \left[u^\varepsilon\ln(|u^\varepsilon|^2)-u\ln(|u|^2)\right]+\lambda u^\varepsilon\left[ f^\varepsilon_n(|u^\varepsilon|^2)-\ln (|u^\varepsilon|^2)\right]\chi_{\{|u^\varepsilon|<\varepsilon\}}. \] Multiplying the above error equation by $\overline{e^\varepsilon(t)}$, integrating in space and taking imaginary parts, we can get by using Lemma~\ref{pre}, \eqref{eq:Taylor} and \eqref{eq:Q'} that \begin{align*} &\quad\fl{1}{2}\fl{d}{dt}\|e^\varepsilon(t)\|_{L^2}^2\\ &=2\lambda\, \mathrm{Im} \int_{\Omega} \left[u^\varepsilon\ln(|u^\varepsilon|)-u\ln(|u|)\right]\overline{e^\varepsilon}({\bf x} ,t)d{\bf x} \\ &\quad+\lambda\, \mathrm{Im} \int_{|u^\varepsilon|<\varepsilon}u^\varepsilon\left[ f_n^\varepsilon(|u^\varepsilon|^2)-\ln (|u^\varepsilon|^2)\right]\overline{e^\varepsilon}({\bf x} ,t)d{\bf x} \\ &\le 2 |\lambda|\|e^\varepsilon(t)\|_{L^2}^2+|\lambda|\Big|\int_{|u^\varepsilon|<\varepsilon}u^\varepsilon\overline{e^\varepsilon} \left[Q_n^\varepsilon(|u^\varepsilon|^2)-\ln (|u^\varepsilon|^2)+|u^\varepsilon|^2(Q_n^\varepsilon)'(|u^\varepsilon|^2)\right]d{\bf x} \Big|\\ &\le 2 |\lambda|\|e^\varepsilon(t)\|_{L^2}^2+|\lambda|\,\Big|\int_{|u^\varepsilon|<\varepsilon}\overline{e^\varepsilon}u^\varepsilon \Big[\int_{|u^\varepsilon|^2}^{\varepsilon^2} \fl{(s-|u^\varepsilon|^2)^n}{s^{n+1}}ds-1+|u^\varepsilon|^2(Q_n^\varepsilon)'(|u^\varepsilon|^2)\Big]d{\bf x} \Big|\\ &= 2 |\lambda|\,\|e^\varepsilon(t)\|_{L^2}^2+|\lambda|\,\Big|\int_{|u^\varepsilon|<\varepsilon}\overline{e^\varepsilon}u^\varepsilon \Big[\int_{|u^\varepsilon|^2}^{\varepsilon^2} \fl{(s-|u^\varepsilon|^2)^n}{s^{n+1}}ds-\left(1-\fl{|u^\varepsilon|^2}{\varepsilon^2}\right)^n\Big]d{\bf x} \Big|\\ &\le 2|\lambda|\,\|e^\varepsilon(t)\|_{L^2}^2+\varepsilon|\lambda|\|e^\varepsilon\|_{L^1}+ |\lambda|\,\Big|\int_0^{\varepsilon^2} s^{-n-1}\int_{|u^\varepsilon|^2<s}\overline{e^\varepsilon}u^\varepsilon (s-|u^\varepsilon|^2)^{n}d{\bf x} ds\Big|\\ &\le 2|\lambda|\,\|e^\varepsilon(t)\|_{L^2}^2+3\varepsilon|\lambda|\|e^\varepsilon\|_{L^1}. \end{align*} This yields the result. \end{proof} Invoking the same arguments as in \cite{bao2019error}, based on the previous error estimate, and interpolation between $L^2$ and $H^2$, we get the following error estimate. \begin{proposition}\label{prop1} If $\Omega$ has finite measure and $u_0\in H^2(\Omega)$, then for any $T>0$, \[ \|u^\varepsilon-u\|_{L^\infty([0,T]; L^2(\Omega))}\le C_1\varepsilon,\quad \|u^\varepsilon-u\|_{L^\infty([0,T]; H^1(\Omega))}\le C_2\varepsilon^{1/2}, \] where $C_1$ depends on $|\lambda|$, $T$, $|\Omega|$, and $C_2$ depends in addition on $\|u_0\|_{H^2(\Omega)}$.
If $\Omega=\R^d$, $1\le d\le 3$ and $u_0\in H^2(\R^d)\cap L^2_2$, then for any $T>0$, we have \[\|u^\varepsilon-u\|_{L^\infty([0,T]; L^2(\mathbb{R}^d))}\le D_1\varepsilon^{\fl{4}{4+d}},\quad \|u^\varepsilon-u\|_{L^\infty([0,T]; H^1(\mathbb{R}^d))}\le D_2\varepsilon^{\fl{2}{4+d}},\] where $D_1$ and $D_2$ depend on $d$, $|\lambda|$, $T$, $\|u_0\|_{L^2_2}$ and $\|u_0\|_{H^2(\mathbb{R}^d)}$. \end{proposition} \begin{proof} The proof is the same as that in \cite{bao2019error}. We just list the outline for the readers' convenience. When $\Omega$ is bounded, the convergence in $L^2$ follows from Gronwall's inequality by applying \eqref{gene} and the estimate $\|v\|_{L^1}\le |\Omega|^{1/2}\|v\|_{L^2}$. The estimate in $H^1$ follows from the Gagliardo-Nirenberg inequality $\|v\|_{H^1}\le C\|v\|^{1/2}_{L^2}\|v\|_{H^2}^{1/2}$ and the property \eqref{unib2}. For $\Omega=\mathbb{R}^d$, the convergence in ${L^2}$ can be established by Gronwall's inequality and the estimate (cf. \cite{bao2019error}) \[\|v\|_{L^1}\le C_d\|v\|_{L^2}^{1-d/4}\|v\|_{L^2_2}^{d/4} \le C_d \left(\varepsilon^{-1}\|v\|_{L^2}^2+\varepsilon^{\frac{4-d}{4+d}}\|v\|_{L^2_2}^{\frac{2d}{4+d}}\right),\] which is derived by the Cauchy-Schwarz inequality and Young's inequality. The convergence in $H^1$ can be similarly derived by the Gagliardo-Nirenberg inequality. \end{proof} \subsection{Convergence of the energy} \label{sec:cvenergy} By construction, the energy is conserved, i.e., \begin{equation}\label{engre} E^\varepsilon_n(u^\varepsilon)=\int_{\Omega}\big[|\nabla u^\varepsilon({\bf x} ,t)|^2+\lambda F_n^\varepsilon(|u^\varepsilon({\bf x} ,t)|^2)\big]d{\bf x} =E^\varepsilon_n(u_0). \end{equation} For the convergence of the energy, we have the following estimate. \begin{proposition}\label{energyc} For $u_0\in H^1(\Omega)\cap L^\alpha(\Omega)$ with $\alpha\in (0,2)$, the energy $E_n^\varepsilon(u_0)$ converges to $E(u_0)$ with $$|E_n^\varepsilon(u_0)-E(u_0)|\le |\lambda|\,\|u_0\|_{L^\alpha}^\alpha \fl{\varepsilon^{2-\alpha}}{1-\alpha/2}.$$ In addition, for bounded $\Omega$, we have \[|E_n^\varepsilon(u_0)-E(u_0)|\le |\lambda|\, |\Omega|\,\varepsilon^2.\] \end{proposition} \begin{proof} It can be deduced from the definition \eqref{engre} and \eqref{eq:Taylor} that \begin{align*} \left|E_n^\varepsilon(u_0)-E(u_0)\right|&=|\lambda|\left|\int_{\Omega} [F(|u_0({\bf x} )|^2)-F_n^\varepsilon(|u_0({\bf x} )|^2)]d{\bf x} \right|\\ &=|\lambda|\left|\int_{|u_0({\bf x} )|<\varepsilon} |u_0({\bf x} )|^2[Q(|u_0({\bf x} )|^2)-Q_n^\varepsilon(|u_0({\bf x} )|^2)]d{\bf x} \right|\\ &=|\lambda|\int_{|u_0({\bf x} )|<\varepsilon} |u_0({\bf x} )|^2\int_{|u_0({\bf x} )|^2}^{\varepsilon^2} s^{-n-1}(s-|u_0({\bf x} )|^2)^ndsd{\bf x} \\ &=|\lambda|\int_0^{\varepsilon^2} s^{-n-1}\int_{|u_0({\bf x} )|^2<s} |u_0({\bf x} )|^2(s-|u_0({\bf x} )|^2)^nd{\bf x} ds. \end{align*} If $\Omega$ is bounded, we immediately get \[\left|E_n^\varepsilon(u_0)-E(u_0)\right|\le |\lambda|\, |\Omega|\, \varepsilon^2.\] For unbounded $\Omega$, one gets \[\left|E_n^\varepsilon(u_0)-E(u_0)\right|\le |\lambda| \int_0^{\varepsilon^2} s^{-n-1}s^{n+1-\alpha/2} \|u_0\|_{L^\alpha}^\alpha ds=|\lambda|\,\|u_0\|_{L^\alpha}^\alpha \fl{\varepsilon^{2-\alpha}}{1-\alpha/2},\] which completes the proof. \end{proof}
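The rate in Proposition~\ref{energyc} is easy to observe numerically. The following self-contained Python sketch (the Gaussian datum, truncated interval, Riemann-sum quadrature and the choice $\lambda=1$ are our own, purely illustrative choices) compares $\int_\Omega F_n^\varepsilon(|u_0|^2)\,d{\bf x} $ with $\int_\Omega F(|u_0|^2)\,d{\bf x} $ for successively halved $\varepsilon$:
\begin{verbatim}
import numpy as np

def F(rho):
    # F(rho) = rho*ln(rho) - rho with F(0) = 0
    out = np.zeros_like(rho)
    p = rho > 0
    out[p] = rho[p]*np.log(rho[p]) - rho[p]
    return out

def F_n(rho, eps, n):
    # local regularization: Taylor polynomial Q_n of ln(rho)-1 below eps^2
    s = 1.0 - rho/eps**2
    Q = np.log(eps**2) - 1.0 - sum(s**k/k for k in range(1, n + 1))
    return np.where(rho >= eps**2, F(rho), rho*Q)

x = np.linspace(-8.0, 8.0, 4001); dx = x[1] - x[0]
rho0 = np.exp(-x**2)               # |u_0|^2 for a Gaussian u_0
for eps in (1e-1, 5e-2, 2.5e-2):
    err = dx*np.sum(np.abs(F_n(rho0, eps, 4) - F(rho0)))
    print(eps, err, err/eps**2)    # err/eps^2 should stay bounded
\end{verbatim}
Consistently with the bounded-domain bound $|\lambda|\,|\Omega|\,\varepsilon^2$, halving $\varepsilon$ reduces the error by roughly a factor of four.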
\begin{remark} Recall that it was shown in \cite{bao2019error} that for the regularized model \eqref{RLSE0} with the energy density \eqref{1str}, the energy \begin{equation} \label{Energ1} \widetilde{E}^\varepsilon(u_0)=\|\nabla u_0\|_{L^2}^2+\lambda\int_{\Omega} \widetilde{F}^\varepsilon(|u_0|^2)d{\bf x} \end{equation} converges to $E(u_0)$ with an error $O(\varepsilon)$. For the regularization \eqref{RLSE1} with the energy density \eqref{1str} and the regularized energy \begin{equation}\label{Energ2} \widehat{E}^\varepsilon(u_0)=\|\nabla u_0\|_{L^2}^2+\lambda\int_{\Omega} \widehat{F}^\varepsilon(|u_0|^2)d{\bf x} , \end{equation} we have \begin{align*} \left|\widehat{E}^\varepsilon(u_0)-E(u_0)\right|&=|\lambda|\,\left|\int_{\Omega} [F(|u_0({\bf x} )|^2)-\widehat{F}^\varepsilon(|u_0({\bf x} )|^2)]d{\bf x} \right|\\ &\hspace{-7mm}=|\lambda|\,\left|\int_\Omega \left[(\varepsilon^2+|u_0|^2)\ln(\varepsilon^2+|u_0|^2)-\varepsilon^2\ln(\varepsilon^2)- |u_0|^2\ln(|u_0|^2) \right]d{\bf x} \right|\\ &\hspace{-7mm}\le |\lambda|\,\varepsilon^2\int_\Omega \ln\left(1+\frac{|u_0|^2}{\varepsilon^2}\right)d{\bf x} +|\lambda| \int_\Omega|u_0|^2\ln\left(1+\frac{\varepsilon^2}{|u_0|^2}\right)d{\bf x} \\ &\hspace{-7mm}\le |\lambda|\,\varepsilon^{2-\alpha}C(\alpha)\int_\Omega|u_0|^\alpha d{\bf x} \\ &\hspace{-7mm}=|\lambda|\,\varepsilon^{2-\alpha}C(\alpha)\|u_0\|_{L^\alpha}^\alpha, \end{align*} where we have used the inequality $\ln(1+x)\le C(\beta) x^\beta$ for $\beta\in (0,1]$ and $x\ge 0$. Hence for $u_0\in H^1(\Omega)\cap L^\alpha(\Omega)$ with $\alpha\in (0,2)$, we infer \[\left|\widehat{E}^\varepsilon(u_0)-E(u_0)\right|\le |\lambda|\,\varepsilon^{2-\alpha}C(\alpha)\|u_0\|_{L^\alpha}^\alpha,\] that is, the same convergence rate as $E_n^\varepsilon$. Thus, from the viewpoint of the energy, the newly proposed local energy regularization $F_n^\varepsilon$ is more accurate than $\widetilde{F}^\varepsilon$ in general, and more accurate than $\widehat{F}^\varepsilon$ in the case of bounded domains. \end{remark} \section{Regularized Lie-Trotter splitting methods} \label{sec:lie} In this section, we investigate approximation properties of the Lie-Trotter splitting methods \cite{mclachlan, descombes, besse2002order} for solving the regularized model \eqref{ERLSE} in one dimension (1D). Extensions to higher dimensions are straightforward. To simplify notations, we set $\lambda=1$. \subsection{A time-splitting for \eqref{ERLSE}} Operator splitting methods are based on the decomposition of the flow of \eqref{ERLSE}: \[\partial_t u^\varepsilon=A(u^\varepsilon)+B(u^\varepsilon),\] where \[A(v)=i\Delta v,\quad B(v)=-i v f_n^\varepsilon(|v|^2),\] and on solving the sub-equations \begin{equation}\label{lp} \left\{ \begin{aligned} &\partial_t v(x,t)=A(v(x,t)),\quad x\in\Omega,\quad t>0,\\ &v(x,0)=v_0(x), \end{aligned}\right. \end{equation} \begin{equation}\label{nlp} \left\{ \begin{aligned} &\partial_t \omega(x,t)=B(\omega(x,t)),\quad x\in\Omega,\quad t>0,\\ &\omega(x,0)=\omega_0(x), \end{aligned}\right. \end{equation} where $\Omega=\mathbb{R}$ or $\Omega\subset \mathbb{R}$ is a bounded domain with homogeneous Dirichlet or periodic boundary conditions. Denote the flow of \eqref{lp} and \eqref{nlp} by \begin{equation}\label{ABs} v(\cdot,t)=\Phi_A^t(v_0)=e^{it\Delta}v_0,\quad \omega(\cdot,t)=\Phi_B^t(\omega_0)=\omega_0e^{-itf_n^\varepsilon(|\omega_0|^2)},\quad t\ge0.
\end{equation} As is well known, the flow $\Phi_A^t$ satisfies the isometry relation \begin{equation}\label{Ap} \|\Phi_A^t(v_0)\|_{H^s}=\|v_0\|_{H^s},\quad \forall s\in \mathbb{R},\quad \forall t\ge 0. \end{equation} Regarding the flow $\Phi_B^t$, we have the following properties. \begin{lemma} Assume $\tau>0$ and $\omega_0\in H^1(\Omega)$; then \begin{equation}\label{Bp} \|\Phi_B^\tau(\omega_0)\|_{L^2}=\|\omega_0\|_{L^2},\quad \|\Phi_B^\tau(\omega_0)\|_{H^1}\le (1+6\tau)\,\|\omega_0\|_{H^1}. \end{equation} For $v$, $w\in L^2(\Omega)$, \begin{equation}\label{phibl} \|\Phi_B^\tau(v)-\Phi_B^\tau(w)\|_{L^2}\le (1+4n\tau)\,\|v-w\|_{L^2}.\end{equation} \end{lemma} \begin{proof} By direct calculation, we get \[\partial_x\Phi_B^\tau(\omega_0)=e^{-i\tau f_n^\varepsilon(|\omega_0|^2)}\left[\partial_x \omega_0-i\tau(f_n^\varepsilon)'(|\omega_0|^2)(\omega_0^2\partial_x \overline{\omega_0}+|\omega_0|^2\partial_x \omega_0)\right],\] which immediately gives \eqref{Bp} by recalling \eqref{fd}. We claim that for any $x\in\Omega$, \[|\Phi_B^\tau(v)(x)-\Phi_B^\tau(w)(x)|\le (1+4n\tau)\,|v(x)-w(x)|.\] Assuming, for example, $|v(x)|\le|w(x)|$, by inserting the term $v(x)e^{-i\tau f_n^\varepsilon(|w(x)|^2)}$, we can get \begin{align*} &\quad|\Phi_B^\tau(v)(x)-\Phi_B^\tau(w)(x)|\\ &=\left|v(x)e^{-i\tau f_n^\varepsilon(|v(x)|^2)}- w(x)e^{-i\tau f_n^\varepsilon(|w(x)|^2)}\right|\\ &=\Big|v(x)-w(x)+v(x)\Big(e^{i\tau [f_n^\varepsilon(|w(x)|^2)-f_n^\varepsilon(|v(x)|^2)]}-1\Big)\Big|\\ &\le|v(x)-w(x)|+2|v(x)|\,\Big|\sin \Big(\fl{\tau}{2}\left[f_n^\varepsilon(|w(x)|^2)-f_n^\varepsilon(|v(x)|^2)\right]\Big)\Big|\\ &\le |v(x)-w(x)|+\tau|v(x)|\,|f_n^\varepsilon(|w(x)|^2)-f_n^\varepsilon(|v(x)|^2)|\\ &\le(1+4n\tau)\,|v(x)-w(x)|, \end{align*} where we have used the estimate \eqref{fl}. When $|v(x)|\ge |w(x)|$, the same inequality can be obtained by exchanging $v$ and $w$ in the above computation. Thus the proof for \eqref{phibl} is complete. \end{proof}
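Both sub-flows in \eqref{ABs} are exactly computable, which is what makes this splitting attractive in practice. As an illustration, here is a minimal Python sketch of one step $\Phi_A^\tau\Phi_B^\tau$ (recall $\lambda=1$ here); the Fourier pseudospectral realization of $e^{i\tau\Delta}$ on a uniform periodic grid and the callable \texttt{f\_n\_eps} (any implementation of $f_n^\varepsilon$, e.g. the one sketched in Section~\ref{sec:regul}) are our own choices, as the analysis below is at the semi-discrete-in-time level:
\begin{verbatim}
import numpy as np

def lie_trotter_step(u, tau, dx, f_n_eps):
    # Phi_B^tau: exact pointwise phase rotation; |u| is unchanged
    w = u*np.exp(-1j*tau*f_n_eps(np.abs(u)**2))
    # Phi_A^tau: free Schroedinger flow e^{i tau Laplacian} via FFT
    # (the Fourier symbol of the Laplacian is -xi^2, hence e^{-i tau xi^2})
    xi = 2.0*np.pi*np.fft.fftfreq(u.size, d=dx)
    return np.fft.ifft(np.exp(-1j*tau*xi**2)*np.fft.fft(w))
\end{verbatim}
Iterating this map yields the scheme \eqref{LT} below; since $\Phi_B^\tau$ preserves $|u|$ pointwise and $\Phi_A^\tau$ preserves all $H^s$ norms, the discrete mass is conserved exactly, in agreement with \eqref{unp}.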
\subsection{Error estimates for $\Phi^\tau=\Phi_A^\tau\Phi_B^\tau$} We consider the Lie-Trotter splitting \begin{equation}\label{LT} u^{\varepsilon, k+1}=\Phi^\tau(u^{\varepsilon, k})=\Phi_A^\tau(\Phi_B^\tau(u^{\varepsilon, k})),\quad k\ge 0;\quad u^{\varepsilon,0}=u_0, \quad \tau>0. \end{equation} For $u_0\in H^1(\Omega)$, it follows from \eqref{Ap} and \eqref{Bp} that \begin{equation}\label{unp} \begin{split} &\|u^{\varepsilon,k}\|_{L^2}=\|u^{\varepsilon,k-1}\|_{L^2}\equiv \|u^{\varepsilon,0}\|_{L^2}= \|u_0\|_{L^2} ,\\ &\|u^{\varepsilon,k}\|_{H^1}\le (1+6\tau)\,\| u^{\varepsilon,k-1}\|_{H^1}\le e^{6k\tau}\| u_0\|_{H^1}, \quad k\ge0. \end{split} \end{equation} \begin{theorem}\label{thmlt} Let $T>0$ and $\tau_0>0$ be given constants. Assume that the solution of \eqref{ERLSE} satisfies $u^\varepsilon\in L^\infty([0,T]; H^1(\Omega))$ and the time step $\tau\le \tau_0$. Then there exists $0<\varepsilon_0<1$ depending on $n$, $\tau_0$ and $M:=\|u^\varepsilon\|_{L^\infty([0,T]; H^1(\Omega))}$ such that when $\varepsilon\le \varepsilon_0$ and $t_k:=k\tau\le T$, we have \begin{equation}\label{li1} \|u^{\varepsilon,k}-u^\varepsilon(t_k)\|_{L^2}\le C\left(n, \tau_0, T, M \right)\ln(\varepsilon^{-1})\tau^{1/2}.\end{equation} \end{theorem} \begin{proof} Denote the exact flow of \eqref{ERLSE} by $u^\varepsilon(t)=\Psi^t(u_0)$. First, we establish the local error for $v\in H^1(\Omega)$: \begin{equation}\label{local1} \|\Psi^\tau(v)-\Phi^\tau(v)\|_{L^2}\le C(n, \tau_0) \|v\|_{H^1}\ln(\varepsilon^{-1})\tau^{3/2}, \quad \tau\le \tau_0, \end{equation} when $\varepsilon$ is sufficiently small. Note that the definitions imply \begin{align*} &i\partial_t\Psi^t(v)+\Delta \Psi^t (v)=\Psi^t (v)f_n^\varepsilon(|\Psi^t (v)|^2),\\ &i\partial_t\Phi^t(v)+\Delta\Phi^t(v)=\Phi_A^t\left(\Phi_B^t(v)f_n^\varepsilon(|\Phi_B^t(v)|^2)\right). \end{align*} Denoting $\mathcal E^t(v)=\Psi^t(v)-\Phi^t(v)$, we have \begin{equation}\label{erq} i\partial_t\mathcal E^t(v)+\Delta \mathcal E^t(v)=\Psi^t (v)f_n^\varepsilon(|\Psi^t (v)|^2)-\Phi_A^t\left(\Phi_B^t(v)f_n^\varepsilon(|\Phi_B^t(v)|^2)\right). \end{equation} Multiplying \eqref{erq} by $\overline{\mathcal E^t (v)}$, integrating in space and taking the imaginary part, we get \begin{align*} \fl{1}{2}\fl{d}{dt}\|\mathcal E^t(v)\|_{L^2}^2&=\mathrm{Im}\left(\Psi^t (v)f_n^\varepsilon(|\Psi^t(v)|^2)-\Phi_A^t\left(\Phi_B^t(v)f_n^\varepsilon(|\Phi_B^t(v)|^2)\right), \mathcal E^t(v)\right)\\ &=\mathrm{Im}\left(\Psi^t (v)f_n^\varepsilon(|\Psi^t (v)|^2)-\Phi^t (v)f_n^\varepsilon(|\Phi^t (v)|^2), \mathcal E^t(v)\right)\\ &\quad+\mathrm{Im}\left(\Phi^t (v)f_n^\varepsilon(|\Phi^t(v)|^2)-\Phi_A^t\left(\Phi_B^t(v)f_n^\varepsilon(|\Phi_B^t(v)|^2)\right), \mathcal E^t(v)\right)\\ &\le 4n\|\mathcal E^t(v)\|_{L^2}^2\\ &\quad+\left\|\Phi^t (v)f_n^\varepsilon(|\Phi^t(v)|^2)-\Phi_A^t\left(\Phi_B^t(v)f_n^\varepsilon(|\Phi_B^t(v)|^2)\right)\right\|_{L^2} \|\mathcal E^t(v)\|_{L^2}, \end{align*} where we have used \eqref{gl}; here the scalar product is the standard one in $L^2$: $(u, w)=\int_\Omega u(x)\overline{w(x)}dx$. This implies \begin{equation}\label{tmp1} \fl{d}{dt}\|\mathcal E^t(v)\|_{L^2}\le 4n\|\mathcal E^t(v)\|_{L^2}+J_1+J_2, \end{equation} where \begin{align*} J_1&=\|\Phi^t(v)f_n^\varepsilon(|\Phi^t(v)|^2)-\Phi_B^t(v) f_n^\varepsilon(|\Phi_B^t(v)|^2)\|_{L^2},\\ J_2&=\|\Phi_B^t(v)f_n^\varepsilon(|\Phi_B^t(v)|^2)- \Phi_A^t\left(\Phi_B^t(v)f_n^\varepsilon(|\Phi_B^t(v)|^2)\right)\|_{L^2}. \end{align*} To estimate $J_1$ in \eqref{tmp1}, we first bound $\|\Phi^t(v)\|_{L^\infty}$ and $\|\Phi_B^t(v)\|_{L^\infty}$. It follows from \eqref{Ap} and \eqref{Bp} that \[\|\Phi^t(v)\|_{H^1}=\|\Phi_B^t(v)\|_{H^1}\le (1+6t)\|v\|_{H^1}\le (1+6t_0)\|v\|_{H^1},\quad t\le t_0.\] Hence by Sobolev embedding, we have \begin{equation}\label{phib} \|\Phi^t(v)\|_{L^\infty}\le c(1+6t_0)\|v\|_{H^1},\quad \|\Phi_B^t(v)\|_{L^\infty}\le c(1+6t_0)\|v\|_{H^1},\end{equation} where $c$ is the constant in the Sobolev inequality $\|\omega\|_{L^\infty}\le c\|\omega\|_{H^1}$. Next we claim that for $y$, $z$ satisfying $|y|, |z|\le D$, \begin{equation}\label{vp} |yf_n^\varepsilon(|y|^2)-zf_n^\varepsilon(|z|^2)|\le 4\ln(\varepsilon^{-1})|y-z|,\end{equation} when $\varepsilon$ is sufficiently small. It follows from \eqref{fb} that $|f_n^\varepsilon(|y|^2)|\le 2 +\ln(n\varepsilon^{-2})$, when $|y|\le D$ and $\varepsilon\le\sqrt{n}/D$. Assuming, for example, $0<|z|\le|y|$, and applying \eqref{fl}, we get \begin{align*} |yf_n^\varepsilon(|y|^2)-zf_n^\varepsilon(|z|^2)|&\le |(y-z)f_n^\varepsilon(|y|^2)| +|z|\,|f_n^\varepsilon(|y|^2)-f_n^\varepsilon(|z|^2)|\\ &\le (2+\ln(n\varepsilon^{-2}))|y-z|+|z|\fl{4n|y-z|}{|z|}\\ &\le 2 (3n+\ln(\varepsilon^{-1}))|y-z|\\ &\le 4\ln(\varepsilon^{-1})|y-z|, \end{align*} when $\varepsilon\le \widetilde{\varepsilon}:=\min\{\sqrt{n}/D, e^{-3n}\}$.
The case when $y=0$ or $z=0$ can be handled similarly. Recalling \eqref{phib}, taking $D=c(1+6t_0)\|v\|_{H^1}$, we obtain, when $\varepsilon\le \varepsilon_1:=\min\{\frac{\sqrt{n}}{c(1+6t_0)\|v\|_{H^1}}, e^{-3n}\}$, \begin{align} J_1&\le 4\ln(\varepsilon^{-1})\|\Phi^t(v)-\Phi_B^t(v)\|_{L^2}\nonumber\\ &\le 4\ln(\varepsilon^{-1})\sqrt{2t}\|\Phi_B^t(v)\|_{H^1}\nonumber\\ &\le 6 \ln(\varepsilon^{-1})\sqrt{t}\|v\|_{H^1},\label{tmp3} \end{align} where we have used the estimate \begin{equation}\label{ld} \left\|\omega-\Phi_A^t(\omega)\right\|_{L^2}\le \sqrt{2t}\,\|\omega\|_{H^1}, \end{equation} as in \cite{bao2018}, instead of the estimate from \cite{besse2002order}, \[\left\|\omega-\Phi_A^t(\omega)\right\|_{L^2}\le 2t\|\omega\|_{H^2},\] which in our case yields an extra $1/\varepsilon$ factor in the error estimate. To estimate $J_2$, we first claim that \begin{equation}\label{c1} \|\Phi_B^t(v)f_n^\varepsilon(|\Phi_B^t(v)|^2)\|_{H^1}\le 6\ln(\varepsilon^{-1})(1+3t_0)\|v\|_{H^1}, \end{equation} when $\varepsilon\le\varepsilon_1$ and $t\le t_0$. Recalling that $$\Phi_B^t(v)f_n^\varepsilon(|\Phi_B^t(v)|^2)=vf_n^\varepsilon(|v|^2)e^{-itf_n^\varepsilon(|v|^2)},$$ and that $|f_n^\varepsilon(|v|^2)|\le 3\ln(\varepsilon^{-1})$ when $\varepsilon\le \varepsilon_1$, we obtain \[\|\Phi_B^t(v)f_n^\varepsilon(|\Phi_B^t(v)|^2)\|_{L^2}\le 3 \ln(\varepsilon^{-1})\|v\|_{L^2}.\] Noticing that \begin{align*} \partial_x [\Phi_B^t(v)f_n^\varepsilon(|\Phi_B^t(v)|^2)]&=e^{-itf_n^\varepsilon(|v|^2)}\left[ v_x f_n^\varepsilon(|v|^2)\right.\\ &\qquad\qquad\qquad\left.+(1-itf_n^\varepsilon(|v|^2))(f_n^\varepsilon)'(|v|^2)(v^2\overline{v_x} +|v|^2v_x)\right], \end{align*} we deduce from \eqref{fd} that \[|\partial_x [\Phi_B^t(v)f_n^\varepsilon(|\Phi_B^t(v)|^2)]|\le \left[6+3\ln(\varepsilon^{-1})(1+6t_0)\right]|v_x|\le 6\ln(\varepsilon^{-1})(1+3t_0)|v_x|,\] which immediately gives \eqref{c1}. Applying \eqref{ld} again entails \begin{equation}\label{tmp2} \begin{aligned} J_2\le \sqrt{2t}\, \|\Phi_B^t(v)f_n^\varepsilon(|\Phi_B^t(v)|^2)\|_{H^1}\le 9\ln(\varepsilon^{-1})(1+3t_0)\sqrt{t}\|v\|_{H^1}, \end{aligned} \end{equation} for $\varepsilon\le \varepsilon_1$ and $t\le t_0$. Combining \eqref{tmp1}, \eqref{tmp3} and \eqref{tmp2}, we get \[\fl{d}{dt}\|\mathcal E^t(v)\|_{L^2}\le 4n\|\mathcal E^t (v)\|_{L^2}+15(1+2t_0)\ln(\varepsilon^{-1})\sqrt{t}\|v\|_{H^1}.\] Invoking Gronwall's inequality, we have \begin{align*} \|\mathcal E^\tau (v)\|_{L^2}&\le e^{4n\tau}\left[\|\mathcal E^0 (v)\|_{L^2}+15(1+2\tau_0)\ln(\varepsilon^{-1})\|v\|_{H^1}\int_0^\tau \sqrt{s}ds\right]\\ &\le 30(1+2\tau_0)e^{4n\tau}\|v\|_{H^1}\ln(\varepsilon^{-1})\tau^{3/2}\\ &\le C(n, \tau_0)\|v\|_{H^1}\ln(\varepsilon^{-1})\tau^{3/2}, \end{align*} when $\tau\le \tau_0$ and $\varepsilon\le \varepsilon_0:=\min\{\frac{\sqrt{n}}{c(1+6\tau_0)M}, e^{-3n}\}$ depending on $\tau_0$, $n$ and $M=\|u^\varepsilon\|_{L^\infty([0, T]; H^1)}$, which completes the proof for \eqref{local1}. Next we establish the stability of the operator $\Phi^\tau$: \begin{equation}\label{stab} \|\Phi^\tau(v)-\Phi^\tau(w)\|_{L^2}\le (1+4n\tau)\|v-w\|_{L^2},\quad \mathrm{for}\quad v, w\in L^2(\Omega). \end{equation} Since $\Phi_A^\tau$ is a linear isometry on $H^s(\Omega)$, \eqref{phibl} gives \eqref{stab} directly.
Thus the error \eqref{li1} can be established by combining the local error \eqref{local1}, the stability property \eqref{stab} and a standard argument \cite{besse2002order, bao2018}: \begin{align*} &\hspace{-4mm}\|u^{\varepsilon, k}-u^\varepsilon(t_k)\|_{L^2}=\|\Phi^\tau(u^{\varepsilon, k-1})-\Psi^\tau(u^\varepsilon(t_{k-1}))\|_{L^2}\\ &\le \|\Phi^\tau(u^{\varepsilon, k-1})-\Phi^\tau(u^\varepsilon(t_{k-1}))\|_{L^2}+\|\Phi^\tau(u^\varepsilon(t_{ k-1}))-\Psi^\tau(u^\varepsilon(t_{k-1}))\|_{L^2}\\ &\le (1+4n\tau)\|u^{\varepsilon, k-1}-u^\varepsilon(t_{k-1})\|_{L^2}+C(n, \tau_0)\ln(\varepsilon^{-1})\tau^{3/2}\|u^\varepsilon(t_{k-1})\|_{H^1}\\ &\le (1+4n\tau)\|u^{\varepsilon, k-1}-u^\varepsilon(t_{k-1})\|_{L^2}+MC(n, \tau_0)\ln(\varepsilon^{-1})\tau^{3/2}\\ &\le (1+4n\tau)^2\|u^{\varepsilon, k-2}-u^\varepsilon(t_{k-2})\|_{L^2}+MC(n, \tau_0)\ln(\varepsilon^{-1})\tau^{3/2}\left[1+(1+4n\tau)\right]\\ &\le \ldots\\ &\le (1+4n\tau)^k\|u^{\varepsilon, 0}-u_0\|_{L^2}+MC(n, \tau_0)\ln(\varepsilon^{-1})\tau^{3/2}\sum\limits_{j=0}^{k-1}(1+4n\tau)^j\\ &\le C(n, \tau_0, T, M)\ln(\varepsilon^{-1})\tau^{1/2}, \end{align*} where the last step uses $u^{\varepsilon, 0}=u_0$ and $\sum_{j=0}^{k-1}(1+4n\tau)^j\le k\,e^{4nk\tau}\le \frac{T}{\tau}\,e^{4nT}$ for $k\tau\le T$. This completes the proof. \end{proof} \begin{remark}\label{rem41} As established in Theorem \ref{theo:cauchy}, for an arbitrarily large fixed $T>0$, we have $u^\varepsilon\in L^\infty([0, T]; H^1(\Omega))$ as soon as $u_0\in H^1(\Omega)$ when $\Omega$ is bounded. More specifically, \[ M=\|u^\varepsilon\|_{L^\infty([0,T]; H^j)} \le C\left(n, \lambda, T, \|u_0\|_{H^j}\right), \quad j=1, 2, \] for a constant $C$ independent of $\varepsilon$. When $\Omega=\R^d$, we require in addition $u_0\in L^2_\alpha$ for some $0<\alpha\le 1$ and $C$ depends additionally on $\|u_0\|_{L^2_\alpha}$. Hence the constant in \eqref{li1} as well as \eqref{li2} in Theorem \ref{thmlt2} is independent of $\varepsilon$. \end{remark} \begin{remark} By arguments similar to those in \cite{bao2018}, for $d=2, 3$, the error estimate \eqref{li1} can be established under a more restrictive condition $u^\varepsilon\in L^\infty([0,T];H^2(\Omega))$, in which case $\varepsilon_0$ depends on $n$ and $\|u^\varepsilon\|_{L^\infty(0,T; H^2(\Omega))}$, and $\|\Phi_B^t(v)\|_{H^2}$ has to be further investigated due to the Sobolev embedding $H^2(\Omega)\hookrightarrow L^\infty(\Omega)$. For details, we refer to \cite{bao2018}. \end{remark} \subsection{Error estimates for $\Phi^\tau=\Phi_B^\tau\Phi_A^\tau$} We consider another Lie-Trotter splitting \begin{equation}\label{LT1} u^{\varepsilon, k+1}=\Phi^\tau(u^{\varepsilon, k})=\Phi_B^\tau(\Phi_A^\tau(u^{\varepsilon, k})),\quad k\ge 0;\quad u^{\varepsilon,0}=u_0, \quad \tau\in (0,\tau_0]. \end{equation} In the same fashion as above, we have \begin{equation}\label{unp1} \|u^{\varepsilon,k}\|_{L^2}=\|u_0\|_{L^2} ,\quad \|u^{\varepsilon,k}\|_{H^1}\le e^{6k\tau}\| u_0\|_{H^1}, \quad k\ge0. \end{equation} \begin{theorem}\label{thmlt2} Let $T>0$. Assume that the solution of \eqref{ERLSE} satisfies $u^\varepsilon\in L^\infty([0,T];H^2(\Omega))$. Then there exists $\varepsilon_0>0$ depending on $n$, $\tau_0$ and $M=\|u^\varepsilon\|_{L^\infty([0,T]; H^1(\Omega))}$ such that when $\varepsilon\le \varepsilon_0$ and $k\tau\le T$, we have \begin{equation}\label{li2} \|u^{\varepsilon,k}-u^\varepsilon(t_k)\|_{L^2}\le C\left(n, \tau_0, T, \|u^\varepsilon\|_{L^\infty([0,T];H^2(\Omega))}\right)\frac{\tau}{\varepsilon},\end{equation} where $C(\cdot,\cdot, \cdot, \cdot)$ is independent of $\varepsilon$.
\end{theorem} \begin{proof} First, we prove the local error estimate: for $v_0\in H^2(\Omega)$, \begin{equation}\label{local2} \|\Psi^\tau(v_0)-\Phi^\tau(v_0)\|_{L^2}\le C(n, \|v_0\|_{H^2})\fl{\tau^{2}}{\varepsilon} , \quad \varepsilon\le\widetilde{\varepsilon}_0, \end{equation} where $\Phi^\tau=\Phi_B^\tau\Phi^\tau_A$, $\Psi^\tau(v_0)$ is the exact flow of \eqref{ERLSE} with initial data $v_0$, $C(\cdot, \alpha)$ is increasing with respect to $\alpha$, and $\widetilde{\varepsilon}_0$ depends on $n$ and $\|v_0\|_{H^1}$. We start from the Duhamel formula for $v(t)=\Psi^t(v_0)$: \begin{equation}\label{duh} \Psi^t(v_0)=e^{it\Delta} v_0+\int_0^t e^{i(t-s)\Delta} B(v(s))ds.\end{equation} Recall \begin{equation}\label{bex} B(v(s))=B(e^{is\Delta}v_0)+\int_0^s dB(e^{i(s-y)\Delta}v(y))[e^{i(s-y)\Delta}B(v(y))]dy,\end{equation} which is the variation-of-constants formula \[B(g(s))-B(g(0))=\int_0^s dB(g(y))[g'(y)]dy,\quad g(y)=e^{i(s-y)\Delta}v(y).\] Here $dB(\cdot)[\cdot]$ is the G\^{a}teaux derivative: \begin{align} dB(w_1)[w_2]&=\lim\limits_{\delta\rightarrow 0}\frac{B(w_1+\delta w_2)-B(w_1)}{\delta}\nonumber\\ &=-i w_2 f_n^\varepsilon(|w_1|^2)-iw_1 (f_n^\varepsilon)'(|w_1|^2)[w_1 \overline{w_2}+\overline{w_1}w_2].\label{dBdef} \end{align} Plugging \eqref{bex} into \eqref{duh} with $t=\tau$, we get \[\Psi^\tau(v_0)=e^{i\tau \Delta}v_0+\int_0^\tau e^{i(\tau-s)\Delta}B(e^{is\Delta}v_0)ds+e_1,\] where \[e_1=\int_0^\tau\int_0^s e^{i(\tau-s)\Delta}dB(e^{i(s-y)\Delta}v(y))[e^{i(s-y)\Delta}B(v(y))]dyds.\] On the other hand, for the Lie splitting $\Phi^\tau(v_0)=\Phi_B^\tau\Phi_A^\tau(v_0)$, applying the first-order Taylor expansion \[\Phi_B^\tau(w)=w+\tau B(w)+\tau^2\int_0^1(1-s) dB(\Phi_B^{s\tau}(w))[B(\Phi_B^{s\tau}(w))]ds,\] for $w=\Phi_A^\tau(v_0)=e^{i\tau \Delta}v_0$, we get \[\Phi^\tau(v_0)=\Phi_B^\tau\Phi_A^\tau(v_0)=e^{i\tau\Delta}v_0+\tau B(e^{i\tau\Delta}v_0)+e_2,\] with \[e_2=\tau^2\int_0^1(1-s)dB(\Phi_B^{s\tau}(e^{i\tau\Delta}v_0)) [B(\Phi_B^{s\tau}(e^{i\tau\Delta}v_0))]ds.\] Thus \[\Psi^\tau(v_0)-\Phi^\tau(v_0)=e_1-e_2+e_3,\] where \[e_3=\int_0^\tau e^{i(\tau-s)\Delta}B(e^{is\Delta}v_0)ds-\tau B(e^{i\tau\Delta}v_0).\] Since $e_3$ is the quadrature error of the rectangle rule approximating the integral over $[0,\tau]$ of the function $g(s)=e^{i(\tau-s)\Delta}B(e^{is\Delta}v_0)$, we have \[e_3=-\tau^2\int_0^1 \theta g'(\theta \tau)d\theta,\] where $g'(s)=-e^{i(\tau-s)\Delta}[A, B](e^{is\Delta}v_0)$, with \begin{align*} [A, B](w)&=dA(w)[Bw]-dB(w)[Aw]=i\Delta(Bw)-dB(w)[Aw]\\ &=(f_n^\varepsilon)'(|w|^2)(2w_x^2\overline{w}+4w|w_x|^2+3w^2\overline{w_{xx}}-|w|^2w_{xx})\\ &\quad+w(f_n^\varepsilon)''(|w|^2)(w_x\overline{w}+w\overline{w_x})^2, \end{align*} by recalling \eqref{dBdef} and \begin{equation}\label{dAdef} dA(w_1)[w_2]=\lim\limits_{\delta\rightarrow 0}\frac{A(w_1+\delta w_2)-A(w_1)}{\delta}=i\Delta w_2.\end{equation} Applying \eqref{fd}, we get \[\left|[A, B](w)\right|\le \fl{12n+6n^2}{\varepsilon}|w_x|^2+12|w_{xx}|,\] which implies \begin{align*} \|[A, B](w)\|_{L^2}&\le \fl{12n+6n^2}{\varepsilon}\|w_x\|_{L^4}^2+12\|w_{xx}\|_{L^2}\\ &\le \fl{12n+6n^2}{\varepsilon}\|w_x\|_{L^\infty}\|w_x\|_{L^2}+12\|w_{xx}\|_{L^2}\\ &\le 12\|w\|_{H^2}+\frac{12cn^2}{\varepsilon}\|w\|_{H^2}^2, \end{align*} where we have used $n\ge 2$ and the Sobolev embedding $\|w\|_{L^\infty}\le c\|w\|_{H^1}$ for $d=1$.
This yields that for any $s\in [0,\tau]$, \[\|g'(s)\|_{L^2}=\|[A, B](e^{is\Delta} v_0)\|_{L^2} \le 12\|v_0\|_{H^2}(1+cn^2\|v_0\|_{H^2}/\varepsilon),\] which immediately gives \begin{equation}\label{e3} \|e_3\|_{L^2}\le \tau^2\int_0^1\|g'(\theta \tau)\|_{L^2}d\theta \le 12\|v_0\|_{H^2}(1+cn^2\|v_0\|_{H^2}/\varepsilon)\tau^2.\end{equation} Next we estimate $e_1$ and $e_2$. In view of \eqref{fd}, we have \[\|dB(w_1)[w_2]\|_{L^2}\le (8+\ln(n\varepsilon^{-2}))\|w_2\|_{L^2}, \] when $\varepsilon\le \widetilde{\varepsilon}:=\sqrt{n}/\|w_1\|_{L^\infty}$. Thus one gets \begin{align*} \|dB(e^{i(s-y)\Delta} v(y))[e^{i(s-y)\Delta} B(v(y))]\|_{L^2}&\le (8+\ln(n\varepsilon^{-2}))\|e^{i(s-y)\Delta}B(v(y))\|_{L^2}\\ &=(8+\ln(n\varepsilon^{-2}))\|B(v(y))\|_{L^2}, \end{align*} when $\varepsilon\le \varepsilon_1=\sqrt{n}/\|e^{i(s-y)\Delta} v(y)\|_{L^\infty}$. By Sobolev embedding, \begin{equation}\label{qq1} \|e^{i(s-y)\Delta}v(y)\|_{L^\infty}\le c\|e^{i(s-y)\Delta}v(y)\|_{H^1} =c\|\Psi^y(v_0)\|_{H^1},\end{equation} thus when $\varepsilon\le \varepsilon_2:=\fl{\sqrt{n}/c}{\max\limits_{y\in[0,\tau]}\|\Psi^y(v_0)\|_{H^1}}$, we have \begin{align} \|e_1\|_{L^2}&\le \int_0^\tau\int_0^s \|dB(e^{i(s-y)\Delta} v(y))[e^{i(s-y)\Delta} B(v(y))]\|_{L^2}dyds\nonumber\\ &\le (8+\ln(n\varepsilon^{-2}))\int_0^\tau\int_0^s \|B(v(y))\|_{L^2}dyds\nonumber\\ &\le (8+\ln(n\varepsilon^{-2}))\tau^2 \max\limits_{0\le y\le\tau}\|v(y)f_n^\varepsilon(|v(y)|^2)\|_{L^2}\nonumber\\ &\le (8+\ln(n\varepsilon^{-2}))^2\tau^2 \max\limits_{0\le y\le\tau}\|v(y)\|_{L^2}\nonumber\\ &= (8+\ln(n\varepsilon^{-2}))^2\|v_0\|_{L^2}\tau^2.\label{e1} \end{align} Similarly, by recalling \[\|\Phi_B^{s\tau}(e^{i\tau\Delta}v_0)\|_{L^\infty}=\|e^{i\tau\Delta}v_0\|_{L^\infty} \le c\|v_0\|_{H^1},\] we have, when $\varepsilon\le \varepsilon_3:=\sqrt{n}/(c\|v_0\|_{H^1})$, \begin{align} \|e_2\|_{L^2}&\le (8+\ln(n\varepsilon^{-2}))\tau^2\int_0^1\|B(\Phi_B^{s\tau}(e^{i\tau\Delta}v_0))\|_{L^2}ds\nonumber\\ &\le (8+\ln(n\varepsilon^{-2}))^2\tau^2 \int_0^1\|\Phi_B^{s\tau}(e^{i\tau\Delta}v_0)\|_{L^2}ds\nonumber\\ &=(8+\ln(n\varepsilon^{-2}))^2 \|v_0\|_{L^2}\tau^2.\label{e2} \end{align} Combining \eqref{e3}, \eqref{e1} and \eqref{e2}, when $\varepsilon\le \widetilde{\varepsilon}_0=\min\{\varepsilon_2, \varepsilon_3\}=\varepsilon_2$, we have \begin{align*} \|\Psi^\tau(v_0)-\Phi^\tau(v_0)\|_{L^2}&\le \tau^2\|v_0\|_{H^2}\big[c_1 +c_2\ln(n\varepsilon^{-2})+c_3(\ln(n\varepsilon^{-2}))^2+\frac{12cn^2}{\varepsilon}\|v_0\|_{H^2}\big]\\ &\le \tau^2\|v_0\|_{H^2}\big[\frac{c_1}{\varepsilon} +\frac{C_2n^{1/2}}{\varepsilon}+\frac{12cn^2}{\varepsilon}\|v_0\|_{H^2}\big]\\ &\le C(n, \|v_0\|_{H^2})\frac{\tau^2}{\varepsilon}, \end{align*} where we have employed the inequalities $\ln(x)\le Cx^{1/2}$ and $\ln(x)\le Cx^{1/4}$ for $x\in [1, \infty)$. Hence \eqref{local2} is established. Similarly, the stability estimate follows from \eqref{phibl}: \begin{equation}\label{stab1} \|\Phi^\tau(v)-\Phi^\tau(w)\|_{L^2}\le (1+4n\tau)\|\Phi_A^\tau(v-w)\|_{L^2}= (1+4n\tau)\|v-w\|_{L^2}, \end{equation} for $v, w\in L^2(\Omega)$. Denoting $\varepsilon_0=\frac{\sqrt{n}/c}{\|u^\varepsilon\|_{L^\infty([0, T]; H^1)}}$ and applying arguments similar to those in the proof of Theorem \ref{thmlt}, we obtain the error estimate \eqref{li2}.
\end{proof} \bigskip \begin{remark} For $d=2, 3$, the error estimate \eqref{li2} can be established with $\varepsilon_0$ depending on $n$, $\tau_0$ and $\|u^\varepsilon\|_{L^\infty([0,T]; H^2(\Omega))}$ by noticing that $H^2(\Omega)\hookrightarrow L^\infty(\Omega)$ and $H^2(\Omega)\hookrightarrow W^{1,4}(\Omega)$ for $d=2, 3$. \end{remark} \bigskip \begin{remark}[Strang splitting] \label{rem:ST_Error} When considering a Strang splitting, \begin{equation} \label{ST} u^{\varepsilon,k+1}= \Phi_B^{\tau/2}\left(\Phi_A^\tau \left(\Phi_B^{\tau/2}(u^{\varepsilon,k})\right)\right),\ \ \mathrm{or}\ \ u^{\varepsilon,k+1}= \Phi_A^{\tau/2}\left(\Phi_B^\tau \left(\Phi_A^{\tau/2}(u^{\varepsilon,k})\right)\right), \end{equation} by arguments similar to, but more intricate than, those above, we can prove the error bound \[\|u^{\varepsilon,k}-u^\varepsilon(t_k)\|_{L^2}\le C\left(n, \tau_0, T, \|u^\varepsilon\|_{L^\infty([0,T];H^4(\Omega))}\right)\,\frac{\tau^2}{\varepsilon^3},\] under the assumption that $u^\varepsilon\in L^\infty([0,T];H^4(\Omega))$. \end{remark} \begin{remark} In view of Theorem~\ref{theo:cauchy}, Theorems~\ref{thmlt} and \ref{thmlt2} rely on a regularity that we know is available. On the other hand, the regularity assumed in the above remark on Strang splitting is unclear in general, in the sense that we do not know how to bound $u^\varepsilon$ in $ L^\infty([0,T];H^4(\Omega))$. \end{remark} \section{Numerical results}\label{sec:num} In this section, we first test the convergence rate of the local energy regularized model \eqref{ERLSE} and compare it with the other two \eqref{RLSE0} and \eqref{RLSE1}. We then test the order of accuracy of the regularized Lie-Trotter splitting (LTSP) schemes \eqref{LT} and \eqref{LT1} and the Strang splitting (STSP) scheme \eqref{ST}. To simplify the presentation, we unify the regularized models \eqref{RLSE0}, \eqref{RLSE1} and \eqref{ERLSE} as follows: \begin{equation} \label{RLSE_Unified} \left\{ \begin{aligned} &i\partial_t u^\varepsilon({\bf x} ,t)+\Delta u^\varepsilon({\bf x} ,t)=\lambda u^\varepsilon({\bf x} ,t)f_{\rm reg}^\varepsilon(|u^\varepsilon({\bf x} ,t)|^2),\quad {\bf x} \in \Omega, \quad t>0,\\ &u^\varepsilon({\bf x} ,0)=u_0({\bf x} ),\quad {\bf x} \in \overline{\Omega}. \end{aligned} \right. \end{equation} With the regularized nonlinearity $f_{\rm reg}^\varepsilon(\rho)$ being chosen as $\widetilde{f}^\varepsilon$, $\widehat{f}^\varepsilon$ and $f_n^\varepsilon$, \eqref{RLSE_Unified} corresponds to the regularized models \eqref{RLSE0}, \eqref{RLSE1} and \eqref{ERLSE}, respectively. In practical computation, we impose periodic boundary conditions on $\Omega$ and employ the standard Fourier pseudo-spectral method \cite{bao2002,bao2003,bao2018} for spatial discretization. The details are omitted here for brevity. Hereafter, unless specified otherwise, we consider the following Gaussian initial data in $d$ dimensions ($d=1,2$), i.e., $u_0({\bf x} )$ is chosen as \begin{equation} u_0({\bf x} )=b_d\, e^{i{\bf x} \cdot\bm{v} +\frac{\lambda}{2}|{\bf x} |^2}, \qquad {\bf x} \in {\mathbb R}^d. \end{equation} In this case, the LogSE \eqref{LSE} admits the moving Gausson solution \begin{equation} \label{Gausson} u({\bf x} ,t)=b_d\, e^{i({\bf x} \cdot\bm{v}-(a_d+|\bm{v}|^2)t)+\fl{\lambda}{2}|{\bf x} -2\bm{v}t|^2},\qquad {\bf x} \in {\mathbb R}^d, \quad t\ge0, \end{equation} with $a_d=-\lambda\, (d-\ln|b_d|^2).$ In this paper, we let $\lambda=-1$, $b_d=1/\sqrt[4]{-\lambda\pi}$ and choose $\Omega=[-16, 16]^d$.
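For concreteness, a minimal 1D sketch of one time step of the type-1 Lie-Trotter splitting \eqref{LT} under this Fourier pseudo-spectral setting might read as follows. This is only a sketch under our stated assumptions: \texttt{f\_reg} is a callable we introduce for illustration that evaluates the chosen regularized nonlinearity $f_{\rm reg}^\varepsilon$, and the nonlinear substep is integrated exactly since its flow leaves $|u|$ invariant.

\begin{verbatim}
import numpy as np

def lie_trotter_step(u, tau, L, f_reg, lam=-1.0):
    # One type-1 Lie-Trotter step u -> Phi_A^tau(Phi_B^tau(u)) for
    # i u_t + u_xx = lam * u * f_reg(|u|^2) on a periodic interval of
    # length L, discretized with u.size Fourier modes.
    # Phi_B: |u| is invariant under the nonlinear flow, so this substep
    # reduces to an exact phase rotation.
    u = u * np.exp(-1j * lam * tau * f_reg(np.abs(u) ** 2))
    # Phi_A: free Schroedinger flow e^{i tau Laplacian}, diagonal in
    # Fourier space with multiplier exp(-i tau k^2).
    k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=L / u.size)
    return np.fft.ifft(np.exp(-1j * tau * k ** 2) * np.fft.fft(u))
\end{verbatim}

The Strang scheme \eqref{ST} is obtained by symmetrizing the two substeps.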
Moreover, we fix $v=1$ and $\bm{v}=(1, 1)^T$ as well as take the mesh size as $h=1/64$ and $h_x=h_y=1/16$ for $d=1$ and $2$, respectively. To quantify the numerical errors, we define the following error functions: \begin{equation} \label{Neror} \begin{split} &\breve{e}^{\varepsilon}_\rho(t_k):=\rho(\cdot,t_k)-\rho^{\varepsilon}(\cdot,t_k)=|u(\cdot, t_k)|^2-|u^\varepsilon(\cdot, t_k)|^2, \\ &\breve{e}^{\varepsilon}(t_k):=u(\cdot,t_k)-u^{\varepsilon}(\cdot,t_k), \qquad \Breve{\Breve{e}}^{\varepsilon}(t_k):=u(\cdot, t_k)-u^{\varepsilon,k}, \\ & e^{\varepsilon}(t_k):=u^{\varepsilon}(\cdot, t_k)-u^{\varepsilon,k}, \qquad\;\;\;\; e_{E}^{\varepsilon}:=|E(u_0)-E_{\rm reg}^{\varepsilon}(u_0)|. \end{split} \end{equation} Here, $u$ and $u^\varepsilon$ are the exact solutions of the LogSE \eqref{LSE} and RLogSE \eqref{RLSE_Unified}, respectively, while $u^{\varepsilon, k}$ is the numerical solution of the RLogSE \eqref{RLSE_Unified} obtained by LTSP \eqref{LT} (or \eqref{LT1}) or STSP \eqref{ST}. The ``exact'' solution $u^{\varepsilon}$ is obtained numerically by STSP \eqref{ST} with a very small time step, e.g., $\tau=10^{-5}$. The energy is obtained by the trapezoidal rule for approximating the integrals in the energy \eqref{conserv}, \eqref{RegL_Energ}, \eqref{Energ1} and \eqref{Energ2}. \subsection{Convergence rate of the regularized model} Here, we consider the error between the solutions of the RLogSE \eqref{RLSE_Unified} and the LogSE \eqref{LSE}. For various regularized models (i.e., different choices of regularized nonlinearity $f_{\rm reg}^\varepsilon$ in equation \eqref{RLSE_Unified}), Fig. \ref{fig:ModeL_Conv_Rate} shows $\|\breve{e}^{\varepsilon}(t)\|_{H^1}$ and $\|\breve{e}_\rho^{\varepsilon}(t)\|_1$ at $t=3$ and $t=2$, respectively, for $d=1$ and $2$, while Fig. \ref{fig:ModeL_Conv_Rate_Total_Energy} depicts $e_{E}^{\varepsilon}$ versus $\varepsilon$. The results are similar when $\breve{e}^{\varepsilon}(t)$ is measured in the $L^2$- or $L^\infty$-norm. \begin{figure}[htbp!] \begin{center} \includegraphics[width=6cm,height=4cm]{./figs/ModeL_Conv/ModeL_Converg_Rate_ALL_ReguL_FunN_H1_norm_At_t3.eps} \quad \includegraphics[width=6cm,height=4cm]{./figs/ModeL_Conv/ModeL_Converg_Rate_ALL_ReguL_FunN_Density_Err_L1_norm_At_t3.eps}\\[1em] \includegraphics[width=6cm,height=4cm]{./figs/Revision/2D_Case/Revision_2D_ModeL_Converg_Rate_ALL_ReguL_FunN_H1_norm_At_t2.eps} \quad \includegraphics[width=6cm,height=4cm]{./figs/Revision/2D_Case/Revision_2D_ModeL_Converg_Rate_ALL_ReguL_FunN_Dens_L1_norm_At_t2.eps} \end{center} \caption{Convergence of the RLogSE \eqref{RLSE_Unified} with various regularized nonlinearities $f_{\rm reg}^\varepsilon$ to the LogSE \eqref{LSE}, i.e., the error $\|\breve{e}^\varepsilon(t)\|_{H^1}$ and $\|\breve{e}^\varepsilon_\rho(t)\|_1$ versus the regularization parameter $\varepsilon$ at $t=3$ for $d=1$ (upper) and $t=2$ for $d=2$ (lower). } \label{fig:ModeL_Conv_Rate} \end{figure} From these figures and additional similar numerical results not shown here for brevity, we can clearly see: (i) The solution of the RLogSE \eqref{RLSE_Unified} converges linearly to that of the LogSE \eqref{LSE} in terms of $\varepsilon$ for all three types of regularized models. Moreover, the regularized energy $\widetilde{E}^\varepsilon$ converges linearly to the original energy $E$ in terms of $\varepsilon$, while $\widehat{E}^\varepsilon$ \& $E_n^\varepsilon$ (for any $n\ge 2$) converge quadratically.
These results confirm the theoretical results from Sections \ref{sec:cvmodel} \& \ref{sec:cvenergy}. (ii) In the $L^1$-norm, the density $\rho^\varepsilon$ of the solution of the RLogSE with regularized nonlinearity $\widetilde{f}^\varepsilon$ converges linearly to that of the LogSE \eqref{LSE} in terms of $\varepsilon$, while the convergence rate is not clear for those of the RLogSE with other regularized nonlinearities. Generally, for fixed $\varepsilon$, the errors of the densities measured in the $L^1$-norm are smaller than those of the wave functions (measured in the $L^2$-, $H^1$- or $L^\infty$-norm). (iii) For any fixed $\varepsilon>0$, the proposed local energy regularization (i.e., $f_{\rm reg}^\varepsilon=f_n^\varepsilon$) outperforms the other two (i.e., $f_{\rm reg}^\varepsilon=\widehat{f}^\varepsilon$ and $f_{\rm reg}^\varepsilon=\widetilde{f}^\varepsilon$) in the sense that its corresponding errors in wave function and total energy are smaller. The larger the order $n$ of the energy regularization, the smaller the difference between the solutions of the ERLogSE \eqref{ERLSE} and the LogSE. \begin{figure}[h!] \begin{center} \includegraphics[width=6cm,height=5cm]{./figs/ModeL_Conv/New_TotaL_Energy_Converg_Rate_ALL_ReguL_Type.eps}\quad \includegraphics[width=6cm,height=5cm]{./figs/Revision/2D_Case/Revision_2D_ModeL_Converg_Rate_ALL_ReguL_Energy_Err_At_t2.eps} \end{center} \caption{Convergence of the RLogSE \eqref{RLSE_Unified} with various regularized nonlinearities $f_{\rm reg}^\varepsilon$ to the LogSE \eqref{LSE}: the energy error $e_{E}^\varepsilon(t)$ \eqref{Neror} at $t=3$ for $d=1$ (left) and $t=2$ for $d=2$ (right).} \label{fig:ModeL_Conv_Rate_Total_Energy} \end{figure} \subsection{Convergence rate of the time-splitting spectral method} Here, we investigate the model RLogSE \eqref{RLSE_Unified} with $f_{\rm reg}^\varepsilon=f_n^\varepsilon$, i.e., the ERLogSE \eqref{ERLSE}. We will test the convergence rate of type-1 LTSP \eqref{LT} \& type-2 LTSP \eqref{LT1} and the STSP \eqref{ST} to the ERLogSE \eqref{ERLSE} or the LogSE \eqref{LSE} in terms of the time step $\tau$ for fixed $\varepsilon\in(0,1)$. Fig. \ref{fig:Order_Accuracy_LT_ST} shows the errors $\|e^\varepsilon(3)\|_{H^1}$ versus time step $\tau$ for $f_2^\varepsilon$ \& $f_4^\varepsilon$. In addition, Table \ref{tab:conv_STSP_Energy_ReguL_f2} displays $\|\breve{\breve{e}}^{\varepsilon}(3)\|$ versus $\varepsilon$ \& $\tau$ for $f_2^\varepsilon$. \begin{figure}[h!] \begin{center} \includegraphics[width=6cm,height=5cm]{./figs/Order_Accuracy/N_Order_2_H1_Error_Lie_vs_Strang_Order_of_Accuracy.eps} \quad \includegraphics[width=6cm,height=5cm]{./figs/Order_Accuracy/N_Order_4_H1_Error_Lie_vs_Strang_Order_of_Accuracy.eps} \end{center} \caption{Convergence of the type-1 LTSP \eqref{LT} \& type-2 LTSP \eqref{LT1} as well as the STSP \eqref{ST} to the ERLogSE \eqref{ERLSE} with regularized nonlinearity $f_2^\varepsilon$ (left) and $f_4^\varepsilon$ (right), i.e., errors $\|e^\varepsilon(3)\|_{H^1}$ versus $\tau$ for various $\varepsilon$.} \label{fig:Order_Accuracy_LT_ST} \end{figure} From Fig. \ref{fig:Order_Accuracy_LT_ST}, Table \ref{tab:conv_STSP_Energy_ReguL_f2} and additional similar results not shown here for brevity, we can observe that: (i) In the $H^1$-norm, for any fixed $\varepsilon\in(0, 1)$ and $n\ge 2$, the LTSP scheme converges linearly while the STSP scheme converges quadratically when $\varepsilon<\varepsilon_0$ for some $\varepsilon_0>0$.
(ii) For any $f_n^\varepsilon$ with $n\ge2$, the STSP converges quadratically to the LogSE \eqref{LSE} only when $\varepsilon$ is sufficiently small, i.e., $\varepsilon\lesssim \tau^2$ (cf. each row in the lower triangle below the bold diagonal in Table \ref{tab:conv_STSP_Energy_ReguL_f2}). (iii) When $\tau$ is sufficiently small, i.e., $\tau^2\lesssim \varepsilon$, the ERLogSE \eqref{ERLSE} converges linearly at $O(\varepsilon)$ to the LogSE \eqref{LSE} (cf. each column in the upper triangle above the bold diagonal in Table \ref{tab:conv_STSP_Energy_ReguL_f2}). (iv) The numerical results are similar for other $f_n^\varepsilon$ with $n\ge3$ and when the errors are measured in the $L^\infty$- and $L^2$-norms, which confirms the theoretical conclusions in Theorem \ref{thmlt2} and Remark \ref{rem:ST_Error}. \begin{table}[htbp!] \footnotesize \tabcolsep 0pt \caption{Convergence of the STSP \eqref{ST} (via solving the ERLogSE \eqref{ERLSE} with $f_2^\varepsilon$) to the LogSE \eqref{LSE}, i.e., $\|\breve{\breve{e}}^{\varepsilon}(3)\|$ for different $\varepsilon$ and $\tau$. } \label{tab:conv_STSP_Energy_ReguL_f2} \begin{center}\vspace{-0.5em} \def1\textwidth{1\textwidth} {\rule{1\textwidth}{1pt}} \begin{tabularx}{1\textwidth}{@{\extracolsep{\fill}}p{1.38cm}|cccccccccc} & $\tau=0.1$ & $\tau/2$ & $\tau/2^2$ & $\tau/2^3$ & $\tau/2^4$ & $\tau/2^5$ & $\tau/2^6$ & $\tau/2^7$ & $\tau/2^8$ & $\tau/2^{9}$ \\[0.3em] \hline $\varepsilon$=0.025 &7.98E-3 & \bf{2.13E-3 }& 8.86E-4 & 7.28E-4 & 7.14E-4 & 7.12E-4 & 7.12E-4 & 7.12E-4 & 7.12E-4 & 7.12E-4 \\ [0.25em] rate & -- & \bf{1.91 }& 1.27 & 0.28 & 0.03 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ [0.25em] \hline $\varepsilon/4$ & 7.77E-3 & 1.96E-3 & \bf{5.02E-4 }& 1.67E-4 & 1.12E-4 & 1.08E-4 & 1.08E-4 & 1.08E-4 & 1.08E-4 & 1.08E-4 \\ [0.25em] rate & -- &1.99 & \bf{1.97} & 1.59 & 0.57 & 0.06 & 0.01 & 0.00 & 0.00 & 0.00 \\[0.25em] \hline $\varepsilon/4^2$ & 7.76E-3 & 1.95E-3 & 4.88E-4 & \bf{1.25E-4 }& 3.81E-5 & 2.40E-5 & 2.28E-5 & 2.27E-5 & 2.27E-5 & 2.27E-5 \\ [0.25em] rate & -- & 2.00 & 2.00 & \bf{1.97} & 1.71 & 0.67 & 0.07 & 0.01 & 0.00 & 0.00 \\[0.25em] \hline $\varepsilon/4^3$ & 7.76E-3 & 1.95E-3 & 4.87E-4 & 1.22E-4 & \bf{3.08E-5 }& 8.95E-6 & 5.09E-6 & 4.74E-6 & 4.72E-6 & 4.71E-6 \\ [0.25em] rate & -- & 2.00 & 2.00 & 2.00 & \bf{1.98} & 1.78 & 0.82 & 0.10 & 0.01 & 0.00 \\[0.25em] \hline $\varepsilon/4^4$ &7.76E-3 & 1.95E-3 & 4.87E-4 & 1.22E-4 & 3.04E-5 & \bf{7.66E-6} & 2.092E-6 & 9.93E-7 & 8.80E-7 & 8.72E-7\\ [0.25em] rate & -- &2.00 & 2.00 & 2.00 & 2.00 & \bf{1.99} & 1.87 & 1.08 & 0.18 & 0.01 \\[0.25em] \hline $\varepsilon/4^5$ &7.76E-3 & 1.95E-3 & 4.87E-4 & 1.22E-4 & 3.04E-5 & 7.61E-6 & \bf{1.92E-6} & 5.26E-7 & 2.54E-7 & 2.27E-7\\ [0.25em] rate & -- &2.00 & 2.00 & 2.00 & 2.00 & 2.00 & \bf{1.99} & 1.87 & 1.05 & 0.16 \\[0.25em] \hline $\varepsilon/4^6$ & 7.76E-3 & 1.95E-3 & 4.87E-4 & 1.22E-4 & 3.04E-5 & 7.61E-6 & 1.90E-6 & \bf{4.78E-7} & 1.27E-7 & 5.36E-8 \\ [0.25em] rate & -- &2.00 & 2.00 & 2.00 & 2.00 & 2.00 & 2.00 &\bf{ 1.99} & 1.91 & 1.25 \\[0.25em] \hline $\varepsilon/4^7$ & 7.76E-3 & 1.95E-3 & 4.87E-4 & 1.22E-4 & 3.04E-5 & 7.61E-6 & 1.90E-6 & 4.76E-7 & \bf{1.19E-7} & 3.13E-8 \\ [0.25em] rate & -- &2.00 & 2.00 & 2.00 & 2.00 & 2.00 & 2.00 & 2.00 & \bf{2.00 }& 1.93 \\[0.25em] \hline $\varepsilon/4^8$ & 7.76E-3 & 1.95E-3 & 4.87E-4 & 1.22E-4 & 3.04E-5 & 7.61E-6 & 1.90E-6 & 4.76E-7 & 1.19E-7 & \bf{2.98E-8} \\ [0.25em] rate & -- &2.00 & 2.00 & 2.00 & 2.00 & 2.00 & 2.00 & 2.00 & 2.00 & \bf{2.00 } \\[0.25em] \end{tabularx} {\rule{1\textwidth}{1pt}}
\end{center} \end{table} \subsection{Application to the interaction of 2D Gaussons} In this section, we apply the STSP method to investigate the interaction of Gaussons in dimension 2. To this end, we fix $n=4$, $\varepsilon=10^{-12}$, $\tau=0.001$, $h_x=h_y=1/16$, $\Omega=[-16, 16]^2$ for {\bf Cases} (i) \& (ii), while $\Omega=[-48, 48]^2$ for {\bf Case} (iii). The initial data is chosen as \begin{equation} u_0({\bf x} )=b_1 e^{i{\bf x} \cdot\bm{v}_1 +\frac{\lambda}{2}|{\bf x} -{\bf x} _1^0|^2} +b_2 e^{i{\bf x} \cdot\bm{v}_2 +\frac{\lambda}{2}|{\bf x} -{\bf x} _2^0|^2}, \end{equation} where $b_j$ are real constants and $\bm{v}_j$, ${\bf x} _j^0$ ($j=1,2$) are real constant vectors, i.e., the initial data is the sum of two Gaussons \eqref{Gausson} with velocities $\bm{v}_j$ and initial locations ${\bf x} _j^0$. Here, we consider the following cases: \begin{itemize} \item[(i)] $b_1=b_2=\fl{1}{\sqrt[4]{\pi}}$,\quad $\bm{v}_1=\bm{v_2}=(0, 0)^T$,\quad ${\bf x} _1^0=-{\bf x} _2^0=(-2, 0)^T$; \smallskip \item[(ii)] $b_1= 1.5\,b_2 =\fl{1}{\sqrt[4]{\pi}}$,\quad $\bm{v}_1=(-0.15, 0)^T$,\quad $\bm{v_2}={\bf x} _1^0=(0, 0)^T$,\quad ${\bf x} _2^0=(5, 0)^T$; \smallskip \item[(iii)] $b_1=b_2 =\fl{1}{\sqrt[4]{\pi}}$,\quad $\bm{v}_1=(0, 0)^T$,\quad $\bm{v_2}=(0, 0.85)^T$,\quad ${\bf x} _1^0=-{\bf x} _2^0=(-2, 0)^T$. \end{itemize} Fig. \ref{fig:Rev_2D_Gau_Inter_Case1_2} shows the contour plots of $|u^\varepsilon(x,y,t)|^2$ at different times as well as the evolution of $\sqrt{|u^\varepsilon(x,0,t)|}$ for {\bf Cases} (i) \& (ii), while Fig. \ref{fig:Rev_2D_Gau_Inter_Case3} illustrates that for {\bf Case} (iii). From these figures we clearly see that: (1) Even for two static Gaussons, if they stay close enough, they will come into contact and undergo attractive interactions: they collide and stick together for a short time, then separate again. The Gaussons will swing like a pendulum, and small solitary waves are emitted outward during the interaction (cf. Fig.~\ref{fig:Rev_2D_Gau_Inter_Case1_2} top). This dynamical phenomenon is similar to that in the 1D case \cite{bao2018}. (2) For Case (ii), the two Gaussons also undergo attractive interactions. The slowly moving Gausson will drag the nearby static Gausson to move in the same direction (cf. Fig.~\ref{fig:Rev_2D_Gau_Inter_Case1_2} bottom), which is also similar to the 1D case \cite{bao2018}. (3) For two Gaussons (one static and the other moving) staying close enough, if the moving Gausson moves perpendicularly to the line connecting the two Gaussons, the static Gausson will be dragged along and the direction of the moving Gausson will be altered. The two Gaussons will rotate around each other and gradually drift away, which is similar to the dynamics of a vortex pair in the cubic Schr\"odinger equation \cite{Bao2014}. \begin{figure}[h!]
\begin{center} \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case1_Density_PLots_At_t_0.eps} \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case1_Density_PLots_At_t_7_2.eps} \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case1_Density_PLots_At_t_14_4.eps} \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case1_Density_PLots_At_Y0.eps}\\[1em] \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case2_Density_PLots_At_t_0.eps} \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case2_Density_PLots_At_t_20.eps} \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case2_Density_PLots_At_t_40.eps} \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case2_Density_PLots_At_Y0.eps}\\ \end{center} \caption{Plots of $|u^\varepsilon(x,y,t)|^2$ at different times (first three columns) and contour plot of $|u^\varepsilon(x,0,t)|^2$ (last column) for {\bf Case} (i) (Upper) in region $[-6, 6]^2$ and {\bf Case} (ii) (Lower) in region $[-13, 7]\times[-6, 6]$.} \label{fig:Rev_2D_Gau_Inter_Case1_2} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case3_Density_PLots_At_t_0.eps} \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case3_Density_PLots_At_t_2.eps} \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case3_Density_PLots_At_t_6.eps} \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case3_Density_PLots_At_t_8.eps}\\[1em] \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case3_Density_PLots_At_t_10.eps} \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case3_Density_PLots_At_t_12.eps} \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case3_Density_PLots_At_t_14.eps} \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case3_Density_PLots_At_t_17.eps}\\[1em] \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case3_Density_PLots_At_t_20.eps} \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case3_Density_PLots_At_t_23.eps} \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case3_Density_PLots_At_t_26.eps} \includegraphics[width=3cm,height=2.5cm]{./figs/Revision/2D_Case/Interaction/2D_Case3_Density_PLots_At_t_32.eps}\\[1em] \end{center} \caption{Plots of $|u^\varepsilon(x,y,t)|^2$ at different times for {\bf Case} (iii) in region $[-9, 9]\times [-5, 32]$.} \label{fig:Rev_2D_Gau_Inter_Case3} \end{figure} \section{Conclusion} We proposed a new systematic local energy regularization (LER) approach to overcome the singularity of the nonlinearity in the logarithmic Schr\"{o}dinger equation (LogSE). In contrast to existing approaches that directly regularize the logarithmic nonlinearity, we locally regularized, with a small regularization parameter $0<\varepsilon\ll1$, the interaction energy density in the energy functional of the LogSE. The Hamiltonian flow of the new regularized energy then yields an energy regularized logarithmic Schr\"{o}dinger equation (ERLogSE).
Linear convergence in terms of $\varepsilon$ was established between the solutions of the ERLogSE and the LogSE, and quadratic convergence between their conserved total energies. Then we presented and analyzed time-splitting schemes to solve the ERLogSE. The classical first-order convergence was obtained both theoretically and numerically for the Lie-Trotter splitting scheme. Numerical results suggest that the error bounds of the splitting schemes with respect to the LogSE clearly depend on the time step $\tau$ and the mesh size $h$ as well as the small regularization parameter $\varepsilon$. Our numerical results confirm the error bounds and indicate that the ERLogSE model outperforms the other existing ones in accuracy. \section*{Acknowledgment} This work was partially supported by the Ministry of Education of Singapore grant R-146-000-296-112 (MOE2019-T2-1-063) (W. Bao), Rennes M\'etropole through its AIS program (R. Carles), the Alexander von Humboldt Foundation (C. Su), the Institutional Research Fund from Sichuan University (No. 2020SCUNL110) and the National Natural Science Foundation of China (No. 11971335) (Q. Tang).
\section{More Discussion on Experimental Details} \label{app:exp} \begin{figure}[tbp] \centering \includegraphics[width=\linewidth]{fig/ablation_EWC_M/EWC_M.jpg} \caption{Results with different $M$ for EWC, where we perform 4 runs for each $M$, and plot the mean and standard deviation. The pre-trained models are embedded with pattern-based watermarks, and fine-tuned with partial CIFAR-100.} \label{fig:ewc-m} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=\linewidth]{fig/ablation_AU_size/AU_size.jpg} \caption{Results with different numbers of unlabeled samples for AU, where we perform 4 runs for every setting and plot the mean and standard deviation. The pre-trained models are embedded with pattern-based watermarks and fine-tuned with partial CIFAR-100. The unlabeled samples for augmentation are drawn from the unlabeled part of STL-10.} \label{fig:au-sample-size} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=\linewidth]{fig/doubling_lr/double_per20_CIFAR-10+CIFAR-100.jpg} \caption{Training curves to illustrate the effect of the learning rate during the fine-tuning stage. The configuration is mostly the same as Figure~\ref{fig:double-lr-partial02-20}, except that the model is fine-tuned on the entire training set.} \label{fig:double-lr-20} \end{figure} \begin{table*}[t] \centering \begin{tabular}{|c|c|ccc|} \toprule Dataset & Scheme & Initial learning rate & $\lambda$(EWC) & m(AU) \\ \midrule \multirow{4}{*}{CIFAR-10} & Pattern & 0.03 & 150 & 50 \\ & OOD & $[0.05, 0.15]$& $10$ & $50$ \\ & EW & $0.03$ & $20$ & $50$ \\ & AFS &$[0.01, 0.1]$ & $3$ & $[5, 50]$ \\ \midrule \multirow{4}{*}{CIFAR-100} & Pattern & $[0.03, 0.1]$ & $20$ & $50$ \\ & OOD & $[0.03, 0.1]$ & $200$ & $50$ \\ & EW & $[0.04, 0.05]$& $[2, 5]$& $50$ \\ & AFS & $[0.015, 0.07]$ & $[25, 30]$ & $[10, 50]$ \\ \midrule \multirow{4}{*}{CIFAR-10$\to$ STL-10} & Pattern & $[0.03, 0.05]$ & $10$ & $50$ \\ & OOD & $[0.04, 0.15]$ & $10$ & $50$ \\ & EW & $[0.03, 0.05]$ & $200$ & $50$ \\ & AFS & $[0.02, 0.05]$ & $200$ & $50$ \\ \midrule \multirow{4}{*}{ImageNet32} & Pattern & $[0.004, 0.04]$ & $[800, 1200]$ & $50$ \\ & OOD & $[0.005, 0.05]$ & $[30, 100]$ & $50$ \\ & EW & $[0.003, 0.1]$ & $[10^4, 2\times 10^4]$ & $50$ \\ & AFS & $[0.006, 0.03]$ & $[3, 50]$ & $[30, 50]$ \\ \midrule \multirow{4}{*}{ImageNet32$\to$ STL-10} & Pattern & $[0.015, 0.02]$ & $[1000, 1100]$ & $50$ \\ & OOD & $[0.01, 0.015]$ & $[50, 100]$ & $50$ \\ & EW & $[0.007, 0.03]$ & $[1.2 \times 10^4, 1.5 \times 10^4]$ & $50$ \\ & AFS & $[0.003, 0.008]$ & $[200, 500]$ & $50$ \\ \bottomrule \end{tabular} \caption{Ranges of the best hyper-parameter configuration for all watermark removal results. $\lambda$ denotes the coefficient in EWC and $m$ is the number of unlabeled samples added to a training batch with AU. } \label{tab:hyperparameters} \end{table*} Our implementation is in PyTorch~\footnote{The implementation is mainly adapted from~\url{https://github.com/adiyoss/WatermarkNN}, the code repo of~\cite{adi2018turning}.}. For each watermarking scheme in our evaluation, we present the best hyper-parameter configurations in Table~\ref{tab:hyperparameters}. Note that the adversary can always make the worst-case assumption about the strength of the watermarking scheme, and conservatively set the initial learning rate to ensure that the watermarks are removed. In our evaluation, we observe that setting the initial learning rate to $0.05$ works relatively well for all settings. Other hyper-parameters are set to maximize the test accuracy after watermark removal.
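To illustrate how the EWC coefficient $\lambda$ and the Fisher samples enter the fine-tuning objective, a minimal sketch of a standard diagonal (empirical) Fisher EWC penalty is given below; the exact adaptation used in {REFIT} may differ, and the helper names \texttt{diag\_fisher} and \texttt{ewc\_penalty} are ours.

\begin{verbatim}
import torch
import torch.nn.functional as F

def diag_fisher(model, loader, M, device="cuda"):
    # Empirical diagonal Fisher estimate from M samples: running average
    # of the squared gradients of the loss w.r.t. each parameter.
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    seen = 0
    for x, y in loader:
        if seen >= M:
            break
        model.zero_grad()
        F.cross_entropy(model(x.to(device)), y.to(device)).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        seen += x.size(0)
    return {n: f / max(seen, 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, theta_star, lam):
    # lam/2 * sum_i F_i (theta_i - theta*_i)^2, added to the fine-tuning
    # loss; theta_star holds detached copies of the pre-trained weights.
    return 0.5 * lam * sum((fisher[n] * (p - theta_star[n]) ** 2).sum()
                           for n, p in model.named_parameters())
\end{verbatim}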
\section{Discussion of sample efficiency} \label{app:eval-sample-efficiency} In the following, we discuss the sample efficiency of the EWC and AU components in {REFIT}. \paragraph{The number of samples $M$ for EWC.} For the EWC component, we investigate how the number of samples $M$ drawn for the Fisher information approximation affects the performance, and present the results in Figure~\ref{fig:ewc-m}. Specifically, we evaluate on CIFAR-100, and embed the pre-trained models with pattern-based watermarks. We observe that with only $M=100$ samples, EWC is already able to increase the test accuracy by around $1\%$ over basic fine-tuning, demonstrating its effectiveness in preserving the test performance. Setting a higher $M$ may further improve the results, but it could also introduce a higher computational overhead without significant performance gain when $M$ becomes too large. Therefore, we set the default value of $M$ based on such a trade-off. \paragraph{The number of unlabeled samples for AU.} For the AU component, we demonstrate the results of varying the number of unlabeled samples for fine-tuning in Figure~\ref{fig:au-sample-size}. We also evaluate on CIFAR-100 with the pre-trained model using the pattern-based watermarking scheme, and the unlabeled samples are drawn from the unlabeled part of STL-10. Despite the large difference in data distribution between STL-10 and CIFAR-100, augmenting with $5K$ unlabeled samples already enables a considerable performance gain, and the test accuracy continues to increase with more unlabeled samples, suggesting the promise of leveraging unlabeled data for watermark removal, since such data is typically much easier to collect than in-distribution labeled data. \section{More Discussion on Pruning} \label{app:pruning} Previous work has studied the effectiveness of pruning-based approaches for watermark removal, and found that such techniques are largely ineffective~\cite{zhang2018protecting,liu2018fine,namba2019robust}. In our evaluation, we compare with the pruning method studied in~\cite{liu2018fine}, where we follow their setup to prune the neurons of the last convolutional layer in the increasing order of the magnitude of their activations on the validation set. Figure~\ref{fig:prune} presents the curves of the model accuracy with different pruning rates. Note that due to the skip connections introduced in the ResNet architecture, the model accuracy may not be low even if the pruning rate is close to 1. Therefore, we also evaluate VGG-16~\cite{simonyan2014very}, another neural network architecture that is capable of achieving the same level of performance on both CIFAR-10 and CIFAR-100. For both models, we observe that the watermark accuracy is tightly associated with the test accuracy, which makes it hard to find a sweet spot of the pruning rate so that the test performance is preserved while the watermarks are removed. In particular, as shown in Table~\ref{tab:res-finepruning}, using the pruning approach, when the test accuracy degrades to $90.72\%$ on CIFAR-10, the watermark accuracy is still $65\%$; on the other hand, using {REFIT} with AU, without any in-distribution labeled data, the fine-tuned model achieves the same level of performance as the pruning method with the watermarks removed.
The gap on CIFAR-100 is more significant: {REFIT} is able to achieve an accuracy of $66.79\%$, but the test accuracy of the pruned model already decreases to $53.34\%$ with $71\%$ of the watermarks still retained. We have also tried other pruning approaches, but none of them works considerably better, which shows that {REFIT} is more suitable for watermark removal. \begin{figure*}[t] \centering \begin{subfigure}[t]{0.48\linewidth} \includegraphics[width=\linewidth]{fig/pruning_only/prune_cifar10ResNet-18+VGG-16.jpg} \caption{} \label{} \end{subfigure} \begin{subfigure}[t]{0.48\linewidth} \includegraphics[width=\linewidth]{fig/pruning_only/prune_cifar100ResNet-18+VGG-16.jpg} \caption{} \label{} \end{subfigure} \caption{Curves to illustrate the effect of neuron pruning. The corresponding pre-trained models are embedded with OOD watermarks. (a) CIFAR-10; (b) CIFAR-100.} \label{fig:prune} \end{figure*} \section{More Discussion on Fine-pruning} \label{app:fine-pruning} \begin{table*}[t] \centering \begin{tabular}{|c|c|c|c|ccccc|} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Model} & \multirow{2}{*}{Pruning} & \multirow{2}{*}{Before fine-tuning} & \multicolumn{5}{c|}{Percentage}\\ & & & & 20\% & 30\% & 40\% & 50\% & 80\% \\ \hline \multirow{4}{*}{CIFAR-10} & \multirow{2}{*}{ResNet-18} & $\circ$ & $90.72\%(65\%)$ & $91.10\%$ & $92.05\%$ & $92.72\%$ & $93.25\%$ & $94.20\%$\\ & & $\times$ & $93.73\%(100\%)$ & $90.82\%$ & $92.27\%$ & $92.78\%$ & $93.44\%$ & $94.03\%$ \\ \cline{2-9} & \multirow{2}{*}{VGG-16} & $\circ$ & $64.69\%(77\%)$ & $90.21\%$ & $91.44\%$ & $92.00\%$ & $92.81\%$ & $93.52\%$ \\ & &$\times$ & $93.48\%(100\%)$ & $89.94\%$ & $91.53\%$ & $92.59\%$ & $92.69\%$ & $93.35\%$ \\ \hline \multirow{4}{*}{CIFAR-100} & \multirow{2}{*}{ResNet-18} &$\circ$ & $53.34\%(71\%)$ & $67.34\%$ & $70.25\%$& $71.42\%$ & $72.80\%$ & $74.05\%$ \\ & & $\times$ & $74.50\%(100\%)$ & $67.83\%$ & $70.54\%$ & $72.16\%$ & $72.49\%$ & $74.74\%$ \\ \cline{2-9} & \multirow{2}{*}{VGG-16} & $\circ$& $63.26\%(97\%)$ & $62.03\%$ & $65.44\%$ & $67.72\%$ & $68.49\%$ & $70.99\%$ \\ & & $\times$ &$72.19\%(100\%)$ & $62.80\%$ & $65.65\%$ & $68.11\%$ & $69.47\%$ & $71.38\%$ \\ \hline \end{tabular} \caption{Comparisons between the basic version of {REFIT} and fine-pruning~\cite{liu2018fine}, where $\times$ in the column ``Pruning'' denotes {REFIT} without EWC and AU, and $\circ$ denotes fine-pruning. The pre-trained models are embedded with OOD watermarks. For results before fine-tuning, we also present the watermark accuracies in the brackets. In the columns of ``Percentage'', we present the proportion of labeled training set used for fine-tuning. For fine-pruning, the ratios of the pruned neurons from the last convolution layer are $98.4\%$ and $85.9\%$ for CIFAR-10 and CIFAR-100, respectively. Note that we apply the same learning rate schedule for fine-pruning as {REFIT}, which is crucial in preserving a good test performance while removing the watermarks.} \label{tab:res-finepruning} \end{table*} For the implementation of fine-pruning, we set the pruning rates before fine-tuning in the same way as in their paper, i.e., we keep increasing the pruning rate stepwise and stop when the degradation of the model performance becomes observable. We apply the same learning rate schedule for fine-pruning as {REFIT}, which is crucial in preserving the test performance of the model while removing the watermarks.
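For reference, a minimal PyTorch sketch of the activation-based pruning step described above is given below; the helper names are ours, \texttt{last\_conv} denotes the final convolutional module, and \texttt{val\_loader} a validation data loader.

\begin{verbatim}
import torch

@torch.no_grad()
def prune_last_conv(model, last_conv, val_loader, prune_rate, device="cuda"):
    # Record per-channel mean activation magnitudes on the validation set.
    acts = []
    hook = last_conv.register_forward_hook(
        lambda mod, inp, out: acts.append(out.abs().mean(dim=(0, 2, 3)).cpu()))
    model.eval()
    for x, _ in val_loader:
        model(x.to(device))
    hook.remove()
    mean_act = torch.stack(acts).mean(dim=0)
    # Zero out the channels with the smallest activations first
    # (i.e., prune in increasing order of activation magnitude).
    idx = torch.argsort(mean_act)[:int(prune_rate * mean_act.numel())]
    last_conv.weight[idx] = 0.0
    if last_conv.bias is not None:
        last_conv.bias[idx] = 0.0
\end{verbatim}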
Table~\ref{tab:res-finepruning} presents the results on CIFAR-10 and CIFAR-100, comparing the fine-pruning approach to the basic version of {REFIT} without EWC and AU, where the pre-trained models are embedded with OOD watermarks. Besides ResNet-18, we also evaluate VGG-16~\cite{simonyan2014very}, another neural network architecture that is capable of achieving the same level of performance on both CIFAR-10 and CIFAR-100. For both datasets and model architectures, we find that the results are roughly similar, suggesting that pruning is not necessary with a properly designed learning rate schedule for fine-tuning. In particular, our full {REFIT} framework still outperforms fine-pruning. \section{Model Watermarking} \label{sec:background} We study the watermarking problem following the formulation in~\cite{adi2018turning}. Specifically, a model owner trains a model $f_\theta$ for a task $\mathcal{T}$. Besides training on data drawn from the distribution of $\mathcal{T}$, the owner also embeds a set of watermarks $\mathcal{K}=\{(x^k, y^k)\}_{k=1}^K$ into $f_\theta$. A valid watermarking scheme should at least satisfy two properties: \begin{itemize}[leftmargin=*,noitemsep,topsep=0em] \item \emph{Functionality-preserving}, i.e., watermarking does not noticeably degrade the model accuracy on $\mathcal{T}$. \item \emph{Verifiability}, i.e., $Pr(f_\theta(x^k)=y^k) \gg Pr(f'(x^k)=y^k)$ for $(x^k, y^k) \in \mathcal{K}$, where $f'$ is any other model that is not trained with the same set of watermarks. In practice, the model owner often sets a threshold $\gamma$, so that when $Pr(\hat{f}(x^k)=y^k) > \gamma$, the model $\hat{f}$ is considered to have the watermarks embedded, which could be used as evidence to claim ownership. We refer to $\gamma$ as the~\emph{watermark decision threshold}. \end{itemize} Various watermark embedding schemes have been proposed in recent years~\cite{zhang2018protecting,chen2017targeted,gu2017badnets,adi2018turning,namba2019robust,merrer2017adversarial}. Arguably the most widely studied watermarking schemes are pattern-based techniques, which blend the same pattern into a set of images as the watermarks~\cite{chen2017targeted,gu2017badnets,adi2018turning}. Such techniques are also commonly applied for backdoor injection or Trojan attacks~\cite{liu2017trojaning,liu2017neural,shafahi2018poison}. Therefore, a long line of work has studied defense proposals against pattern-based watermarks~\cite{wang2019neural,gao2019strip,chen2019deepinspect,guo2019tabor}. Although these defense methods are shown to be effective against at least some types of pattern-based watermarks, they typically rely on certain assumptions about the pattern size, label distribution, etc. More importantly, it would be hard to directly apply these methods to remove other types of watermarks, which limits their generalizability. In contrast to this line of work, we study the threat model where the adversary has minimal knowledge of the pre-training process, as detailed below. \subsection{Threat Model for Watermark Removal} In this work, we assume the following threat model for the adversary who aims at removing the watermarks. In Figure~\ref{fig:threat_model}, we provide an overview to illustrate the setup of watermark embedding and removal, as well as the threat model.
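Before detailing the adversary's knowledge, note that the ownership verification described above reduces to a thresholded accuracy check on the watermark set; a minimal sketch (with hypothetical helper names) is:

\begin{verbatim}
import torch

@torch.no_grad()
def is_watermarked(model, wm_samples, gamma, device="cuda"):
    # Claim ownership if the suspect model still predicts y^k on the
    # watermark inputs x^k at a rate above the decision threshold gamma.
    model.eval()
    correct = sum(
        int(model(x.unsqueeze(0).to(device)).argmax(dim=1).item() == y)
        for x, y in wm_samples)
    return correct / len(wm_samples) > gamma
\end{verbatim}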
\paragraph{No knowledge of the watermarks.} Some prior work on detecting samples generated by pattern-based techniques requires access to the entire pre-training data, including the watermarks~\cite{tran2018spectral,chen2018detecting}. In contrast, we do not assume access to the watermarks. \paragraph{No knowledge of the watermarking scheme.} As discussed above, most prior works demonstrating successful watermark removal rely on the assumption that the watermarks are pattern-based~\cite{wang2019neural,gao2019strip,chen2019deepinspect,guo2019tabor}. In this work, we study fine-tuning as a generic and effective approach to watermark removal, without the knowledge of the specific watermarking scheme. \paragraph{Limited data for fine-tuning.} We assume that the adversary has computational resources for fine-tuning, and this assumption is also made in previous work studying fine-tuning and distillation-based approaches for watermark removal~\cite{adi2018turning,zhang2018protecting,liu2018fine,yang2019effectiveness}. Note that most prior works along this line assume that the adversary has access to the same amount of benign data for task $\mathcal{T}$ as the model owner. However, this assumption does not always hold in reality. Specifically, when the adversary has a sufficiently large dataset to train a good model, they are generally less motivated to take the risk of conducting watermark removal attacks, given that they are already able to train their own model from scratch. To study the watermark removal problem with a more realistic threat model, in this work, we perform a comprehensive study of the scenarios where the adversary has a much smaller dataset for fine-tuning than the pre-training dataset. In this case, training a model from scratch with such a limited dataset would typically result in inferior performance, as we will demonstrate in Section~\ref{sec:eval}, which provides the adversary with sufficient incentives to pirate a pre-trained model and invalidate its watermarks. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{fig/threat-model.pdf} \caption{An overview of our setup of watermark embedding and removal, as well as the threat model. Specifically, the model owner embeds a set of watermark samples into the pre-trained model, so that these samples could be used for ownership verification. Meanwhile, the training data accessible to the adversary is too limited to train a model of good performance from scratch, which motivates the adversary to pirate a pre-trained model. To bypass the ownership verification, the adversary needs to remove the watermarks, so that the watermark accuracy does not pass the threshold $\gamma$.} \label{fig:threat_model} \end{figure} \section{Conclusion} \label{sec:conc} In this work, we propose {REFIT}, a unified framework that removes the watermarks via fine-tuning. We first demonstrate that by appropriately designing the learning rate schedule, our fine-tuning approach could effectively remove the watermarks. We further propose two techniques integrated into the {REFIT} framework, i.e., an adaptation of the elastic weight consolidation (EWC) approach, and unlabeled data augmentation (AU). We conduct an extensive evaluation with the assumption of a weak adversary who only has access to a limited amount of training data. Our results demonstrate the effectiveness of {REFIT} against several watermarking schemes of different types.
In particular, EWC and AU enable the adversary to successfully remove the watermarks without causing much degradation of the model performance. Our study highlights the vulnerability of existing watermarking techniques, and we consider proposing more robust watermarking techniques as future work. \section{Evaluation} \label{sec:eval} In this section, we demonstrate the effectiveness of {REFIT} to remove watermarks embedded by several different schemes, in both transfer and non-transfer learning scenarios. \vspace{-1em} \subsection{Evaluation of transfer learning} \label{sec:eval-transfer} \begin{table*}[tbp] \begin{tabular}{cc} \begin{minipage}{.43\linewidth} \centering \scalebox{0.9}{\begin{tabular}{c|cc|ccc} \toprule & \multicolumn{2}{c|}{\textbf{FS}} & \multicolumn{3}{c}{\textbf{REFIT}} \\ & \textbf{Basic} & \textbf{AU} & \textbf{Basic} & \textbf{EWC} & \textbf{AU} \\ \midrule Pattern & \multirow{4}{*}{$66.15$} & $75.28$/$74.01$ & $82.96$ & $83.76$ & $83.80$/{\large $\mathbf{84.36}$} \\ OOD & & $74.69$/$74.59$ & $82.83$ & {\large $\mathbf{83.90}$}& $83.51$/$83.40$ \\ EW & & $75.51$/$74.48$ & $84.03$ & {\large $\mathbf{84.66}$} & $84.43$/$84.07$ \\ ADV & & $75.23$/$73.95$ & $83.66$ & {\large $\mathbf{84.39}$} & {\large $\mathbf{84.39}$}/$83.80$ \\ \bottomrule \end{tabular}} \caption{Test accuracies (\%) of models on STL-10 after watermark removal in the transfer learning setting, where the models are pre-trained on CIFAR-10. The accuracies of fine-tuned models on STL-10 with no requirement for watermark removal are $82.06\%$, $82.89\%$, $84.03 \%$ and $83.66\%$ for Pattern, OOD, EW and ADV respectively. For AU, x/y stands for the results of augmenting with STL-10 and ImageNet32 respectively.} \label{tab:res-cifar10-stl} \end{minipage} & \begin{minipage}{.45\linewidth} \centering \scalebox{0.8}{\begin{tabular}{c|cc|cccc} \toprule & \multicolumn{2}{c|}{\textbf{FS}} & \multicolumn{4}{c}{\textbf{REFIT}} \\ & \textbf{Basic} & \textbf{AU} & \textbf{Basic} & \textbf{EWC} & \textbf{AU} & \textbf{EWC+AU} \\ \midrule Pattern & \multirow{4}{*}{$66.15$} & $74.76$/$71.50$ & $88.89$ & $91.14$ & $92.30/90.78$& {\large $\mathbf{93.31}$}/$92.99$ \\ OOD & & $75.63$/$72.50$ & $90.39$ & $92.03$ & $92.74/91.96$ & {\large $\mathbf{92.94}$}/$92.45$ \\ EW & & $75.56$/$72.36$ & $91.01$ & $91.68$ & $92.11/91.41$ & {\large $\mathbf{92.46}$}/$92.34$ \\ ADV & & $75.19$/$72.71$ & $92.46$ & $92.63$ & $92.63/92.51$ & {\large $\mathbf{92.96}$}/$92.65$ \\ \bottomrule \end{tabular}} \caption{Test accuracies (\%) of models on STL-10 after watermark removal in the transfer learning setting, where the models are pre-trained on ImageNet32. The accuracies of fine-tuned models on STL-10 with no requirement for watermark removal are $92.95\%$, $92.39\%$, $92.16\%$, and $92.46\%$ for Pattern, OOD, EW and ADV respectively. For AU, x/y stands for the results of augmenting with STL-10 and ImageNet32 respectively.} \label{tab:res-imgnet-stl} \end{minipage} \end{tabular} \end{table*} \paragraph{Pre-training on CIFAR-10.} We first present the results of transfer learning from CIFAR-10 to STL-10 in Table~\ref{tab:res-cifar10-stl}. We observe that with the basic version of {REFIT}, where neither EWC nor AU is applied, removing watermarks already does not compromise the model performance on the test set. When equipped with either EWC or AU, the model fine-tuned with {REFIT} even surpasses the performance of the watermarked model. 
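As a concrete illustration of the AU component used in these experiments, a minimal batch-construction sketch is given below; the helper names are ours, the unlabeled images are pseudo-labeled by (a frozen copy of) the pirated pre-trained model, and the exact mixing strategy in {REFIT} may differ.

\begin{verbatim}
import torch

def au_batch(x_l, y_l, unlabeled_iter, labeler, m, device="cuda"):
    # Mix m pseudo-labeled unlabeled images into the current labeled
    # fine-tuning batch; `labeler` is a frozen copy of the pirated model.
    x_u = next(unlabeled_iter)[:m].to(device)
    with torch.no_grad():
        y_u = labeler(x_u).argmax(dim=1)  # hard pseudo-labels
    return (torch.cat([x_l.to(device), x_u]),
            torch.cat([y_l.to(device), y_u]))
\end{verbatim}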
\paragraph{Pre-training on ImageNet32.} The results of transferring from ImageNet32 to STL-10 are in Table~\ref{tab:res-imgnet-stl}. We observe that using the pre-trained models on ImageNet32 yields around a $10\%$ improvement in test accuracy compared to the ones pre-trained on CIFAR-10, although the label set of ImageNet32 differs from that of STL-10 much more than CIFAR-10's does. This could be attributed to the diversity of samples in ImageNet32, which makes it a desirable data source for pre-training. In contrast to pre-training on CIFAR-10, the basic version of {REFIT} no longer suffices to preserve the test accuracy. By leveraging the unlabeled part of STL-10, the model performance becomes comparable to that of the watermarked models. When EWC and AU are combined, the fine-tuned models achieve the best performance among all variants of {REFIT} and even surpass the watermarked models. \paragraph{Discussion of different pre-training datasets.} Meanwhile, when we train on STL-10 from scratch and only use the pre-trained model as a labeling tool, models trained with the unlabeled part of STL-10 consistently perform better than those using ImageNet32 for unlabeled data augmentation. This is expected, since the unlabeled part of STL-10 is closer to the test distribution than ImageNet32. Interestingly, we find that by integrating AU into {REFIT}, the gap between utilizing STL-10 and ImageNet32 for unlabeled data augmentation shrinks significantly, which indicates the effectiveness of our overall framework. \subsection{Evaluation of non-transfer learning} \label{sec:eval-nontransfer} \begin{table*}[tbp] \begin{tabular}{cc} \begin{minipage}{.43\linewidth} \centering \scalebox{0.9}{\begin{tabular}{c|cc|ccc} \toprule \multirow{2}{*}{\textbf{Pct.}} & \multicolumn{2}{c|}{\textbf{FS}} & \multicolumn{3}{c}{\textbf{REFIT}} \\ & \textbf{Basic} & \textbf{AU} & \textbf{Basic} & \textbf{EWC} & \textbf{AU} \\ \midrule \multicolumn{6}{c}{\textbf{Pattern}} \\ \midrule $0\%$ & $-$ & $89.86/88.43$ & $-$ &$-$ & {\large $\mathbf{92.53}$}/$91.93$ \\ $20\%$ & $87.40$ & $91.32/90.91$ & $92.12$ & {\large $\mathbf{92.90}$} & $92.80/92.78$ \\ $30\%$ & $89.64$ & $92.13/91.49$ & $92.22$ &$93.02$ & {\large $\mathbf{93.15}$}/$92.88$ \\ $40\%$ & $90.46$ & $92.46/92.15$ & $92.93$ & {\large $\mathbf{93.25}$} & $93.18/93.03$ \\ $50\%$ & $91.45$ & $92.47/92.25$ & $93.08$ & {\large $\mathbf{93.25}$} & $93.18/93.13$ \\ $80\%$ & $93.01$ & $92.82/92.67$ & $93.52$ &$93.67$ & {\large $\mathbf{94.11}$}/$93.43$\\ \midrule \multicolumn{6}{c}{\textbf{OOD}} \\ \midrule $0\%$ & $-$ & $90.13/88.01$ & $-$ &$-$ & {\large $\mathbf{90.48}$}/$87.52$ \\ $20\%$ & $87.40$ & $91.15/90.87$ & $91.19$ &$91.85$ & {\large $\mathbf{92.41}$}/$92.08$ \\ $30\%$ & $89.64$ & $91.67/91.58$ & $91.58$ &$92.58$ & {\large $\mathbf{93.01}$}/$92.61$ \\ $40\%$ & $90.46$ & $92.11/91.92$ & $92.76$ &$93.20$ & {\large $\mathbf{93.21}$}/$92.58$ \\ $50\%$ & $91.45$ & $92.48/92.29$ & $92.97$ & {\large $\mathbf{93.37}$} & $93.21/92.66$ \\ $80\%$ & $93.01$ & $92.81/92.66$ & $93.93$ &$93.85$ & {\large $\mathbf{94.00}$}/$93.26$ \\ \midrule \multicolumn{6}{c}{\textbf{EW}} \\ \midrule $0\%$ & $-$ & $89.77/89.11$ & $-$ &$-$ & $93.05$/{\large $\mathbf{93.22}$} \\ $20\%$ & $87.40$ & $91.58/90.99$ & $91.65$ &$92.46$ &$93.30$/{\large $\mathbf{93.34}$} \\ $30\%$ & $89.64$ & $91.69/91.69$ & $92.30$ &$93.29$ & {\large $\mathbf{93.50}$}/$93.39$ \\ $40\%$ & $90.46$ & $92.35/91.92$ & $92.83$ &$93.27$ & $93.34$/{\large $\mathbf{93.42}$} \\ $50\%$ & $91.45$ & $92.44/92.31$ & $93.39$ &$93.39$ & {\large
$\mathbf{93.51}$}/$93.36$ \\ $80\%$ & $93.01$ & $92.97/93.03$ & $93.95$ & {\large $\mathbf{94.05}$} & $93.61/93.42$ \\ \midrule \multicolumn{6}{c}{\textbf{ADV}} \\ \midrule $0\%$ & $-$ & $90.05/79.47$ & $-$ &$-$ & {\large $\mathbf{91.60}$}/$85.68$ \\ $20\%$ & $87.40$ & $91.52/89.07$ & $92.85$ &$92.95$ & {\large $\mathbf{93.09}$}/$92.72$ \\ $30\%$ & $89.64$ & $92.09/90.02$ & $93.16$ & {\large $\mathbf{93.40}$} & $93.09/93.01$ \\ $40\%$ & $90.46$ & $92.23/91.15$ & $93.21$ & {\large $\mathbf{93.37}$} & $93.20/93.09$ \\ $50\%$ & $91.45$ & $92.58/91.83$ & $93.12$ & {\large $\mathbf{93.56}$} & $93.19/93.42$ \\ $80\%$ & $93.01$ & $92.93/92.69$ & $93.69$ & {\large $\mathbf{93.80}$} & $93.65/93.76$ \\ \bottomrule \end{tabular}} \caption{Results of non-transfer learning setting on CIFAR-10. The first column is the percentage of the CIFAR-10 training set used for fine-tuning, and the rest columns show the accuracy (\%) on the test set. The test accuracy of the pre-trained model is $93.23\%$ for Pattern, $93.63\%$ for OOD, $93.49\%$ for EW, and $93.31\%$ for ADV. For AU, x/y stands for the results of augmenting with STL-10 and ImageNet32 respectively.} \label{tab:res-cifar10} \end{minipage} & \begin{minipage}{.45\linewidth} \centering \scalebox{0.9}{\begin{tabular}{c|cc|ccc} \toprule \multirow{2}{*}{\textbf{Pct.}} & \multicolumn{2}{c|}{\textbf{FS}} & \multicolumn{3}{c}{\textbf{REFIT}} \\ & \textbf{Basic} & \textbf{AU} & \textbf{Basic} & \textbf{EWC} & \textbf{AU} \\ \midrule \multicolumn{6}{c}{\textbf{Pattern}} \\ \midrule $0\%$ & $-$ & $58.07/62.44$ & $-$ &$-$ & {\large $\mathbf{70.75}$}/$68.27$ \\ $20\%$ & $56.72$ & $67.28/68.12$ & $68.88$ &$71.80$ & $71.97$/{\large $\mathbf{72.06}$} \\ $30\%$ & $62.20$ & $68.95/70.07$ & $71.05$ &$72.64$ & {\large $\mathbf{72.98}$}/$72.73$ \\ $40\%$ & $65.42$ & $70.45/71.34$ & $71.96$ &$73.20$ & {\large $\mathbf{73.44}$}/$73.39$ \\ $50\%$ & $68.18$ & $71.27/72.23$ & $72.58$ &$73.44$ & $73.72$/{\large $\mathbf{73.84}$}\\ $80\%$ & $71.71$ & $73.22/73.79$ & $74.23$ &$74.77$ & {\large $\mathbf{75.42}$}/$74.09$\\ \midrule \multicolumn{6}{c}{\textbf{OOD}} \\ \midrule $0\%$ & $-$ & $57.22/61.11$ & $-$ &$-$ & $65.98$/{\large $\mathbf{66.79}$} \\ $20\%$ & $56.72$ & $67.18/67.75$ & $68.55$ &$69.91$ & {\large $\mathbf{71.02}$}/$71.00$ \\ $30\%$ & $62.20$ & $68.83/70.06$ & $70.12$ &$71.77$ & $71.70$/ {\large $\mathbf{72.25}$} \\ $40\%$ & $65.42$ & $70.44/71.10$ & $70.80$ & {\large $\mathbf{72.57}$} & $72.20/72.40$ \\ $50\%$ & $68.18$ & $71.37/72.17$ & $72.27$ &$72.73$ & $72.73$/{\large $\mathbf{73.11}$} \\ $80\%$ & $71.71$ & $72.65/73.00$ & $73.61$ & {\large $\mathbf{74.00}$} & $73.70/73.18$\\ \midrule \multicolumn{6}{c}{\textbf{EW}} \\ \midrule $0\%$ & $-$ & $55.79/64.35$ & $-$ &$-$ & $71.78$/ {\large $\mathbf{73.41}$} \\ $20\%$ & $56.72$ & $67.66/68.57$ & $69.00$ &$70.63$ & {\large $\mathbf{73.48}$}/$73.34$ \\ $30\%$ & $62.20$ & $69.01/70.71$ & $71.37$ &$72.13$ & $73.72$/ {\large $\mathbf{74.08}$} \\ $40\%$ & $65.42$ & $70.72/71.30$ & $72.64$ &$73.27$ & $74.21$/ {\large $\mathbf{74.34}$} \\ $50\%$ & $68.18$ & $71.96/72.38$ & $73.46$ &$74.25$ & $74.26$/ {\large $\mathbf{75.07}$} \\ $80\%$ & $71.71$ & $73.70/73.56$ & $74.98$ & {\large $\mathbf{75.18}$} & $75.09/74.84$ \\ \midrule \multicolumn{6}{c}{\textbf{ADV}} \\ \midrule $0\%$ & $-$ & $57.47/64.89$ & $-$ &$-$ & {\large $\mathbf{69.92}$}/$68.64$ \\ $20\%$ & $56.72$ & $67.22/67.81$ & $71.16$ &$71.46$ & {\large $\mathbf{71.67}$}/$71.58$ \\ $30\%$ & $62.20$ & $69.30/69.40$ & $71.73$ &$72.20$ & {\large $\mathbf{72.28}$}/$72.02$ 
\\ $40\%$ & $65.42$ & $70.74/71.31$ & $72.62$ & {\large $\mathbf{73.33}$} & $72.86/72.72$ \\ $50\%$ & $68.18$ & $72.00/72.20$ & $73.01$ & {\large $\mathbf{73.41}$} & $73.11/73.26$ \\ $80\%$ & $71.71$ & $72.71/73.01$ & $73.56$ & {\large $\mathbf{74.10}$} & $73.14/74.00$ \\ \bottomrule \end{tabular}} \caption{Results of non-transfer learning setting on CIFAR-100. The first column is the percentage of the CIFAR-100 training set used for fine-tuning, and the rest columns show the accuracy (\%) on the test set. The test accuracy of the pre-trained model is $73.83\%$ for Pattern, $73.37\%$ for OOD, $74.95\%$ for EW, and $73.14\%$ for ADV. For AU, x/y stands for the results of augmenting with STL-10 and ImageNet32 respectively.} \label{tab:res-cifar100} \end{minipage} \end{tabular} \end{table*} \paragraph{Results on CIFAR-10 and CIFAR-100.} For non-transfer learning setting, to begin with, we present results on CIFAR-10 and CIFAR-100 in Table~\ref{tab:res-cifar10} and~\ref{tab:res-cifar100} respectively. First, we observe that when the adversary has $80\%$ of the entire training set, using the basic version of {REFIT} already achieves higher test accuracies than the pre-trained models using any watermarking scheme in our evaluation, while removing the watermarks. Note that the watermark accuracies are still above $95\%$ using the fine-tuning approaches in previous work~\cite{adi2018turning,zhang2018protecting}, suggesting the effectiveness of our modification of the fine-tuning learning rate schedule. However, when the adversary only has a small proportion of the labeled training set, the test accuracy could degrade. Although the test accuracy typically drops for about $2\%$ on CIFAR-10 even if the adversary has only $20\%$ of the entire training set, the accuracy degradation could be up to $5\%$ on CIFAR-100. For all watermarking schemes other than ADV, incorporating EWC typically improves the test accuracy for nearly $1\%$ on CIFAR-10, and up to $3\%$ on CIFAR-100, which are significant considering the performance gap to the pre-trained models. The improvement for ADV is smaller yet still considerable, partially because the performance of the basic fine-tuning is already much better than other watermarking schemes, which suggests that ADV could be more vulnerable to watermark removal, at least when the labeled data is very limited. By leveraging the unlabeled data, the adversary is able to achieve the same level of test performance as the pre-trained models with only $20\% ~\sim 30\%$ of the entire training set. In particular, in Table~\ref{tab:res-cifar100-break}, we demonstrate that AU significantly improves the performance for labels with the lowest test accuracies. We skip the results of combining EWC and AU on CIFAR-10 and CIFAR-100, since they are generally very close to the results of AU. However, we will demonstrate that the combination of EWC and AU provides observable performance improvement on ImageNet32, which is a more challenging benchmark. We defer more discussion of sample efficiency to Appendix~\ref{app:eval-sample-efficiency}. \paragraph{The effectiveness of AU.} Furthermore, unlabeled data augmentation enables the adversary to fine-tune the model without any labeled training data, and by solely relying on the unlabeled data, the accuracy of the fine-tuned model could be within $1\%$ difference from the pre-trained model on both CIFAR-10 and CIFAR-100, and sometimes even surpasses the performance of the model trained with $80\%$ data from scratch. 
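To make this concrete, the following is a minimal PyTorch-style sketch of the annotation step of AU, in which the unlabeled images are labeled by the pre-trained model before fine-tuning; \texttt{pretrained\_model} and \texttt{unlabeled\_loader} are hypothetical placeholders rather than names from our implementation.
\begin{verbatim}
import torch

@torch.no_grad()
def pseudo_label(pretrained_model, unlabeled_loader):
    # Annotate each unlabeled batch with the pre-trained model's
    # hard predictions, i.e., y_u = argmax of the output logits.
    pretrained_model.eval()
    images, labels = [], []
    for x in unlabeled_loader:  # batches of unlabeled images
        logits = pretrained_model(x)
        images.append(x)
        labels.append(logits.argmax(dim=1))
    return torch.cat(images), torch.cat(labels)
\end{verbatim}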
Note that both STL-10 and ImageNet32 images are drawn from very different distributions than CIFAR-10 and CIFAR-100; when we apply AU alone and train the model from scratch, the model accuracies are even worse than with the basic version of {REFIT}. Specifically, augmenting with STL-10 provides better results on CIFAR-10, partially because the label set of CIFAR-10 overlaps much more with STL-10 than with ImageNet32; meanwhile, augmenting with ImageNet32 clearly shows better performance on CIFAR-100, which may result from the higher diversity needed for CIFAR-100 classification. However, when integrating AU into {REFIT}, the choice of unlabeled data does not play an important role in the final performance; i.e., the performance of augmenting with one data source is not always better than with the other. These results show that {REFIT} is effective without requiring the unlabeled data to come from the same distribution as the task of evaluation. Given its simplicity and efficacy, this makes {REFIT} a practical watermark removal technique for the adversary, posing real threats to the robustness of watermark embedding schemes.

\paragraph{The effectiveness of EWC.} In addition, we notice that while AU mostly dominates when the percentage of labeled data is very small, with a moderate percentage of labeled data for fine-tuning, e.g., around $40\%$, EWC starts to outperform AU in some cases. In particular, on CIFAR-10, EWC typically becomes competitive with AU when $30\%$ of the labeled data is available to the adversary, and the corresponding percentage is $40\%$ on CIFAR-100. This indicates that with the increase of the labeled data, the estimated Fisher matrix can better capture the model parameters that are important to preserve for the adversary's task of interest.

\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Basic & $34.00$ & $37.00$ & $39.00$ & $42.00$ & $43.00$ \\
\hline
EWC & $32.00$ & $45.00$ & $40.00$ & $49.00$ & $44.00$ \\
AU & {\large $\mathbf{52.00}$} & {\large $\mathbf{46.00}$} & {\large $\mathbf{46.00}$} & {\large $\mathbf{56.00}$} & {\large $\mathbf{54.00}$} \\
\hline
\end{tabular}
\caption{Results of non-transfer learning on CIFAR-100. We show the test accuracies (\%) for the 5 labels with the lowest test accuracies. The pre-trained model is embedded with EW watermarks. Models are fine-tuned with $20\%$ of the CIFAR-100 training set. AU uses STL-10 for data augmentation.}
\label{tab:res-cifar100-break}
\vspace{-1em}
\end{table}

\begin{table}[h]
\centering
\begin{tabular}{c|cc|cccc}
\toprule
\multirow{2}{*}{\textbf{Pct.}} & \multicolumn{2}{c|}{\textbf{FS}} & \multicolumn{4}{c}{\textbf{REFIT}} \\
& \textbf{Basic} & \textbf{AU} & \textbf{Basic} & \textbf{EWC} & \textbf{AU} & \textbf{EWC + AU} \\
\midrule
\multicolumn{7}{c}{\textbf{Pattern}} \\
\midrule
$0\%$ & $-$ & $21.77$ & $-$ & $-$ & \multicolumn{2}{c}{{\large $\mathbf{54.37}$}} \\
$10\%$ & $36.06$ & $39.41$ & $51.05$ & $53.59$ & $55.98$ & {\large $\mathbf{56.81}$} \\
$20\%$ & $42.53$ & $48.34$ & $54.76$ & $56.35$ & $58.06$ & {\large $\mathbf{58.75}$} \\
$30\%$ & $47.83$ & $52.76$ & $56.87$ & $58.40$ & $58.62$ & {\large $\mathbf{59.40}$} \\
$40\%$ & $51.70$ & $55.24$ & $57.82$ & $59.09$ & $59.24$ & {\large $\mathbf{59.71}$} \\
$50\%$ & $53.58$ & $57.04$ & $58.76$ & $59.68$ & $59.40$ & {\large $\mathbf{60.02}$} \\
\midrule
\multicolumn{7}{c}{\textbf{OOD}} \\
\midrule
$0\%$ & $-$ & $21.46$ & $-$ & $-$ & \multicolumn{2}{c}{{\large $\mathbf{51.68}$}} \\
$10\%$ & $36.06$ & $39.32$ & $50.76$ & $52.02$ & $53.87$ & {\large $\mathbf{55.16}$} \\
$20\%$ & $42.53$ & $48.30$ & $53.05$ & $54.64$ & $55.92$ & {\large $\mathbf{57.04}$} \\
$30\%$ & $47.83$ & $52.58$ & $55.47$ & $56.42$ & $57.63$ & {\large $\mathbf{58.27}$} \\
$40\%$ & $51.70$ & $55.34$ & $56.60$ & $57.41$ & $58.17$ & {\large $\mathbf{58.44}$} \\
$50\%$ & $53.58$ & $56.87$ & $57.86$ & $58.50$ & $58.51$ & {\large $\mathbf{59.12}$} \\
\midrule
\multicolumn{7}{c}{\textbf{EW}} \\
\midrule
$0\%$ & $-$ & $23.56$ & $-$ & $-$ & \multicolumn{2}{c}{{\large $\mathbf{52.76}$}} \\
$10\%$ & $36.06$ & $39.70$ & $49.69$ & $52.44$ & $54.58$ & {\large $\mathbf{55.68}$} \\
$20\%$ & $42.53$ & $48.16$ & $53.65$ & $55.89$ & $56.10$ & {\large $\mathbf{56.94}$} \\
$30\%$ & $47.83$ & $52.26$ & $55.54$ & $56.25$ & $57.12$ & {\large $\mathbf{57.23}$} \\
$40\%$ & $51.70$ & $55.32$ & $56.36$ & $57.00$ & $57.28$ & {\large $\mathbf{57.40}$} \\
$50\%$ & $53.58$ & $56.90$ & $57.30$ & $57.68$ & $57.66$ & {\large $\mathbf{57.80}$} \\
\midrule
\multicolumn{7}{c}{\textbf{ADV}} \\
\midrule
$0\%$ & $-$ & $20.12$ & $-$ & $-$ & \multicolumn{2}{c}{{\large $\mathbf{50.22}$}} \\
$10\%$ & $36.06$ & $39.22$ & $50.27$ & $51.05$ & $53.52$ & {\large $\mathbf{53.72}$} \\
$20\%$ & $42.53$ & $48.20$ & $52.95$ & $54.03$ & $56.00$ & {\large $\mathbf{56.50}$} \\
$30\%$ & $47.83$ & $52.64$ & $55.21$ & $56.31$ & $57.02$ & {\large $\mathbf{57.40}$} \\
$40\%$ & $51.70$ & $55.28$ & $57.43$ & $57.57$ & $57.90$ & {\large $\mathbf{57.94}$} \\
$50\%$ & $53.58$ & $57.28$ & $57.88$ & $58.52$ & $58.02$ & {\large $\mathbf{58.83}$} \\
\bottomrule
\end{tabular}
\caption{Results of the non-transfer learning setting on ImageNet32. The first column is the percentage of the training set used for fine-tuning, and the remaining columns show the test accuracy (\%). Note that the percentage is with respect to the training samples of the first 500 classes of ImageNet32. The test accuracy of the pre-trained model is $60.26\%$ for Pattern, $60.04\%$ for OOD, $58.31\%$ for EW, and $59.60\%$ for ADV. The reported test accuracy is measured on only the first 500 classes of ImageNet32. For AU, the unlabeled images are obtained from the last 500 classes of ImageNet32.}
\label{tab:res-imgnet32}
\vspace{-3em}
\end{table}

\paragraph{Results on ImageNet32.} In Table~\ref{tab:res-imgnet32}, we further present our results on ImageNet32.
Compared to the results on CIFAR-10 and CIFAR-100, removing watermarks embedded into pre-trained ImageNet32 models can result in a larger decrease in test accuracy, which is expected given that ImageNet32 is a more challenging benchmark with a much larger label set. Despite these challenges, we demonstrate that by combining EWC and AU, {REFIT} is still able to reach the same level of performance as the pre-trained watermarked model with $50\%$ of the labeled training data. Meanwhile, the increased difficulty of this benchmark enables us to better analyze the importance of each component of {REFIT}, i.e., EWC and AU. In particular, each of the two components offers a decent improvement in test performance. The increase in accuracy with EWC is around $1\%$--$3\%$ over the basic version when the fine-tuning data is very limited, e.g., when the percentage of labeled samples is $20\%$. The performance of AU is generally better than that of EWC, until the labeled training set includes $50\%$ of the ImageNet32 training samples of the first 500 classes, at which point EWC becomes more competitive. Finally, including both EWC and AU always enables a further improvement in test performance, suggesting that the combined technique is advantageous for challenging tasks.

\paragraph{Discussion of different watermarking schemes.} Comparing the results of different watermarking schemes, we observe that the models fine-tuned from pre-trained models embedded with pattern-based watermarks consistently achieve higher test accuracy than those fine-tuned from models embedded with other watermarks. This suggests that although pattern-based watermarking techniques are used more often than other approaches, especially for backdoor injection, such watermarks could be easier to remove, which makes it necessary to develop more advanced backdoor injection techniques that are robust to removal attacks.

\subsection{Comparison with alternative watermark removal techniques}
\label{sec:eval-ab}
In the following, we provide some discussion of, and comparison with, general-purpose watermark removal approaches proposed in previous work, which also do not assume knowledge of the concrete watermarking scheme.

\paragraph{Discussion of distillation-based approaches.} Distillation is a process that transfers the knowledge extracted from a pre-trained model into a smaller model, while preserving the prediction accuracy of the smaller model so that it is comparable to the pre-trained one~\cite{hinton2015distilling}. Specifically, a probability vector is computed as $p(x)_i=\frac{\exp(f(x)_i/T)}{\sum_{j}\exp(f(x)_j/T)}$, where $f(x)$ is the output logit of the model $f$ given the input $x$, and $T$ is a hyper-parameter representing the temperature. Afterwards, instead of using the one-hot vector of the ground truth label for each training sample $x$, the $p(x)$ extracted from the pre-trained model is fed to the smaller model as the ground truth. Previous work has proposed distillation as a defense against adversarial examples~\cite{papernot2016distillation} and watermark embedding approaches~\cite{yang2019effectiveness}. Therefore, we investigate incorporating this technique into our framework. Specifically, instead of using the one-hot encoding of labels predicted by the pre-trained model, we use $p(x)$ as the ground truth label and vary the value of $T$ to see the effect.
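As an illustration, such temperature-scaled soft labels can be computed as in the following PyTorch-style sketch, where \texttt{teacher} is a hypothetical handle to the watermarked pre-trained model:
\begin{verbatim}
import torch
import torch.nn.functional as F

@torch.no_grad()
def soft_labels(teacher, x, T=1.0):
    # p(x)_i = exp(f(x)_i / T) / sum_j exp(f(x)_j / T)
    return F.softmax(teacher(x) / T, dim=1)
\end{verbatim}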
Nevertheless, this method does not provide better performance; for example, with a $20\%$ labeled training set on CIFAR-10 and the unlabeled part of STL-10 for augmentation, when the pre-trained model is embedded with OOD watermarks, setting $T=1$ yields a test accuracy of $91.60\%$, while using the one-hot label results in $91.93\%$ test accuracy, as in Table~\ref{tab:res-cifar10}; other values of $T$ do not make any significant difference. In particular, we observe that when using the output logits of the watermarked model as the ground truth for fine-tuning, the resulting model tends to have a higher watermark accuracy, perhaps because while the output logits allow the fine-tuned model to better fit the pre-trained model, they also encourage the fine-tuned model to learn more information about the watermarks. Thus, we stick to our original design for annotating the unlabeled data.

\paragraph{Comparison with pruning and fine-pruning.} Previous work has studied the effectiveness of pruning-based approaches for watermark removal and found that such techniques are largely ineffective~\cite{zhang2018protecting,liu2018fine,namba2019robust}. In our evaluation, we observe that when applying the pruning approach, the watermark accuracy is tightly coupled with the test accuracy, which makes it hard to find a sweet spot for the pruning rate such that the test performance is preserved while the watermarks are removed. We defer more discussion to Appendix~\ref{app:pruning}. On the other hand, prior work shows that combining pruning with fine-tuning can improve the effectiveness of watermark removal~\cite{liu2018fine}. Therefore, we also compare with the fine-pruning method proposed in~\cite{liu2018fine}, which first prunes the neurons that are activated the least on benign samples, and then performs the fine-tuning. We observe that {REFIT} achieves better test performance than fine-pruning, even when we use the same learning rate schedule for both, which already improves the performance of the fine-pruning algorithm over its original design. We defer more discussion of fine-pruning to Appendix~\ref{app:fine-pruning}.

\section{Introduction}
\label{sec:intro}
Deep neural networks (DNNs) have achieved great performance in a variety of application domains, and are creating tremendous business value~\cite{he2016deep,devlin2019bert}. Building these models from scratch is computationally intensive and requires a large set of high-quality annotated training samples. Various online marketplaces, such as BigML and Amazon, have emerged to allow people to buy and sell pre-trained models. Just like other commodity software, the intellectual property (IP) embodied in DNNs needs proper protection to preserve the competitive advantages of the model owner.

To protect the intellectual property of pre-trained DNNs, a widely adopted approach is \emph{watermarking}~\cite{adi2018turning,zhang2018protecting,rouhani2018deepsigns,uchida2017embedding}. A common paradigm of watermarking is to inject some specially-designed training samples, so that the model is trained to predict in the ways specified by the owner when the watermark samples are fed into the model. In this way, a legitimate model owner can train the model with watermarks embedded, and distribute it to the model users. When he later encounters a model he suspects to be a copy of his own, he can verify the ownership by feeding the watermarks to the model and checking the model predictions.
This approach has gained a lot of popularity due to the simplicity of its protocol. On the other hand, recent work has studied attack approaches that bypass the watermark verification process, so that the legitimate model owner is not able to claim ownership. To achieve this goal, there are two lines of work in the literature. One line of work studies detection attacks against watermark verification~\cite{namba2019robust,hitaj2018have}. Specifically, when the input is suspected to be a watermark by the detection mechanism, the model returns a random prediction; otherwise it returns the true model prediction. Another line of work that attracts more interest is on~\emph{watermark removal attacks}, which aim at modifying the watermarked models so that they no longer predict in the ways specified by the model owner when provided with the watermark samples. In particular, most existing work assumes knowledge of the watermarking scheme; e.g., the approach is specifically designed for pattern-based watermarks, where each of the watermark samples is blended with the same pattern~\cite{wang2019neural,gao2019strip,chen2019deepinspect,guo2019tabor}. Although some recent works study general-purpose watermark removal schemes that are agnostic to the watermark embedding approach, including pruning~\cite{zhang2018protecting,liu2018fine,namba2019robust}, distillation~\cite{yang2019effectiveness}, and fine-pruning~\cite{liu2018fine}, most of these attacks either significantly hamper the model accuracy in order to remove the watermarks, or are conducted under the assumption that the adversary has full access to the data used to train the watermarked model. The lack of investigation into data efficiency leaves it unclear whether such watermark removal attacks are practical in the real world.

In this paper, we propose {REFIT}, a general-purpose watermark removal framework based on fine-tuning. Although previous work suggests that fine-tuning alone is not sufficient to remove the watermarks~\cite{adi2018turning,liu2018fine}, we find that by carefully designing the fine-tuning learning rate schedule, the adversary is in fact able to remove them. However, when the adversary only has access to a small training set that is not comparable to the pre-training dataset, although the watermarks can still be removed, the test accuracy may also degrade. Therefore, we propose two techniques to overcome this challenge. The first technique is adapted from elastic weight consolidation (EWC)~\cite{kirkpatrick2017overcoming}, which was originally proposed to mitigate the catastrophic forgetting phenomenon, i.e., the tendency of a model to forget the knowledge learned from old tasks when later trained on a new one~\cite{goodfellow2013empirical,kirkpatrick2017overcoming,kemker2018measuring}. The central idea behind this component is to slow down learning on model weights that are relevant to the knowledge learned for the task of interest, while continuing to update the weights that were used more for memorizing watermarks. The second technique is unlabeled data augmentation (AU). While a large amount of labeled data can be expensive to collect, unlabeled data is much cheaper to obtain; e.g., the adversary can simply download as many images as he wants from the Internet. Therefore, the adversary can leverage an essentially unbounded supply of unlabeled samples during fine-tuning.
Specifically, we propose to utilize the watermarked model to annotate the unlabeled samples, and to augment the fine-tuning training data with them. We perform a systematic study of {REFIT}, where we evaluate the attack performance while varying the amount of data the adversary has access to. We focus on watermark removal for deep neural networks for image recognition in our evaluation, where existing watermarking techniques have been shown to be the most effective. To demonstrate that {REFIT} is designed to be agnostic to the watermarking scheme, we evaluate the watermark removal performance over a diverse set of watermark embedding approaches, in both transfer learning and non-transfer learning settings. For the transfer learning setting, we demonstrate that after fine-tuning with {REFIT}, the resulting models consistently surpass the test performance of the pre-trained watermarked models, sometimes even when neither EWC nor AU is applied, while the watermarks are successfully removed. For the non-transfer learning setting with a very limited in-distribution training set, it becomes challenging for the basic version of {REFIT} to achieve a test performance comparable to the pre-trained watermarked model. With the incorporation of EWC and AU, {REFIT} significantly decreases the amount of in-distribution labeled samples required to preserve the model performance while the watermarks are effectively removed. Furthermore, the unlabeled data may be drawn from a very different distribution than the data for evaluation; e.g., the label sets may barely overlap. To summarize, we make the following contributions.
\begin{itemize}[leftmargin=*,noitemsep,topsep=0em]
\item In contrast to the previous observation of the ineffectiveness of fine-tuning-based watermark removal schemes, we demonstrate that with an appropriately designed learning rate schedule, fine-tuning is able to remove the watermarks.
\item We propose {REFIT}, a watermark removal framework that is agnostic to watermark embedding schemes. In particular, to deal with the challenge of lacking in-distribution labeled fine-tuning data, we develop two techniques, i.e., an adaptation of elastic weight consolidation (EWC) and augmentation with unlabeled data (AU), which mitigate this problem from different perspectives.
\item We perform the first comprehensive study of the data efficiency of watermark removal attacks, demonstrating the effectiveness of {REFIT} against diverse watermarking schemes.
\end{itemize}
Our work provides the first successful demonstration of watermark removal techniques that work against different watermark embedding schemes when the adversary has limited data, which poses real threats to existing watermark embedding schemes. We hope that our extensive study can shed some light on the potential vulnerability of existing watermarking techniques in the real world, and encourage further investigation into the design of more robust watermark embedding approaches.

\section*{Acknowledgement}
This material is in part based upon work supported by the National Science Foundation under Grant No. TWC-1409915, Berkeley DeepDrive, and DARPA D3M under Grant No. FA8750-17-2-0091. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Xinyun Chen is supported by the Facebook Fellowship.
\bibliographystyle{ACM-Reference-Format}
\section{REFIT: REmoving watermarks via FIne-Tuning}
\label{sec:method}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{fig/REFIT.pdf}
\caption{An overview of our proposed {REFIT} framework. Specifically, besides the basic fine-tuning scheme, {REFIT} incorporates two techniques to address the challenge when the adversary has a limited amount of in-distribution labeled data, i.e., elastic weight consolidation (EWC) and augmentation with unlabeled data (AU).}
\label{fig:refit}
\end{figure}
In this section, we present {REFIT}, a unified watermark removal framework based on fine-tuning. We give an overview of the framework in Figure~\ref{fig:refit}, and discuss the technical details in the later part of the section. The central intuition behind this scheme stems from the~\emph{catastrophic forgetting} phenomenon of machine learning models: when a model is trained on a series of tasks, it can easily forget how to perform previously trained tasks after training on a new one~\cite{goodfellow2013empirical,kirkpatrick2017overcoming,kemker2018measuring}. Accordingly, when the adversary further trains the model on his own data during the fine-tuning process, since the fine-tuning data no longer includes the watermark samples, the model should forget the previously learned watermark behavior.

Contrary to this intuition, some prior works show that existing watermarking techniques are robust against fine-tuning-based techniques, even if the adversary fine-tunes the entire model and has access to the same benign data as the owner, i.e., the entire pre-training dataset excluding the watermark samples~\cite{adi2018turning,zhang2018protecting,liu2018fine}. The key reason could be that the fine-tuning learning rates set in these works are too small to change the model weights within a small number of training epochs. To confirm this hypothesis, we first replicate the experiments in~\cite{adi2018turning} to embed watermarks into models trained on CIFAR-10 and CIFAR-100, respectively. Afterwards, we fine-tune the models in a similar way to their FTAL process, i.e., we update the weights of all layers. The only change is that instead of setting a small learning rate for fine-tuning, which is $0.001$ in their evaluation, we vary the magnitude of the learning rate to observe its effect. Specifically, starting from 1e-5, the learning rate is doubled every 20 epochs of the fine-tuning process, where 20 is the number of fine-tuning epochs used for watermark removal in their evaluation. Figure~\ref{fig:double-lr-partial02-20} presents the training curve of this fine-tuning process. We can observe that the change in model performance is still negligible when the learning rate is around $0.001$, becomes noticeable when it reaches around $0.005$, and that a still larger value is required to reach a sufficiently low watermark accuracy. Meanwhile, at the beginning of each period when the learning rate is doubled, the training and test accuracies first decrease, then gradually improve over the next 20 epochs; on the other hand, the watermark accuracy keeps decreasing, since the watermarks are not included in the fine-tuning dataset.
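For concreteness, the doubling schedule used in this diagnostic experiment can be sketched as follows (PyTorch-style; \texttt{model}, \texttt{num\_epochs}, and \texttt{train\_one\_epoch} are hypothetical placeholders, and the rest of the training loop is elided):
\begin{verbatim}
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)
for epoch in range(num_epochs):
    if epoch > 0 and epoch % 20 == 0:
        for group in optimizer.param_groups:
            group["lr"] *= 2.0  # double the learning rate every 20 epochs
    train_one_epoch(model, optimizer)  # hypothetical helper: one data pass
\end{verbatim}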
Therefore, although the adversary does not have knowledge of the watermarks, he can set the initial fine-tuning learning rate so that it considerably degrades the training and test accuracies within the first few fine-tuning steps, which suggests that the model weights are modified enough to remove the watermarks. On the other hand, desirable test performance is achieved once the fine-tuning converges, meaning that the initial learning rate does not have to be so large that it results in a model not much different from one trained from scratch. In Section~\ref{sec:eval}, we demonstrate that with a learning rate schedule designed in this way, the adversary is able to remove the watermarks without compromising the model performance, provided the adversary has access to a large amount of labeled training data.

\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig/doubling_lr_partial/double_per20_partial02_CIFAR-10+CIFAR-100.jpg}
\caption{Training curves illustrating the effect of the learning rate during the fine-tuning stage, using $20\%$ of the labeled training data. At the beginning, the model is pre-trained with the watermark scheme in~\cite{adi2018turning}. Starting from a fine-tuning learning rate of 1e-5, the learning rate is doubled every 20 epochs. The watermark accuracy decreases considerably only when the learning rate is appropriately large. See Figure~\ref{fig:double-lr-20} in the appendix for the corresponding plots where the model is fine-tuned on the entire training set, from which we can draw similar conclusions.}
\label{fig:double-lr-partial02-20}
\end{figure}

While this initial attempt at watermark removal is promising, the basic fine-tuning scheme is inadequate when the adversary does not have training data comparable to that of the owner of the watermarked model. For example, when the adversary only has $20\%$ of the CIFAR-100 training set, ensuring that the watermarks are removed can degrade the test accuracy of the fine-tuned model by $5\%$. This is again due to catastrophic forgetting: when we fine-tune the model to forget its predictions on the watermark set, the model also forgets part of the normal training samples drawn from the same distribution as the test data. Although the decrease in test accuracy is in general much less significant than that in watermark accuracy, such degradation is still considerable and could hurt the utility of the model.

There have been some attempts to mitigate the catastrophic forgetting phenomenon in the literature~\cite{kirkpatrick2017overcoming,coop2013ensemble}. However, most techniques are not directly applicable to our setting. In fact, during the watermark embedding stage, the model is jointly trained on two tasks: (1) to achieve good performance on a task of interest, e.g., image classification on CIFAR-10; and (2) to remember the labels of images in the watermark set. Contrary to previous studies of catastrophic forgetting, which aim at preserving the model's predictions on all tasks it has been trained on, our goal of watermark removal is two-fold: minimizing the model's memorization of the watermark task, while still preserving the performance on the main task it is evaluated on. This conflict constitutes the largest difference between our watermark removal task and the continual learning setting studied in previous work.
Another important difference is that although the adversary's training data differs from the pre-training data, the fine-tuning dataset still corresponds to a sub-task of the pre-trained model, just without the watermarks; in previous studies of catastrophic forgetting, by contrast, different tasks are often complementary to each other. This key observation enables us to adapt elastic weight consolidation~\cite{kirkpatrick2017overcoming}, a regularization technique proposed to mitigate the catastrophic forgetting issue, for our purpose of watermark removal.

\paragraph{Elastic Weight Consolidation (EWC).} The central motivation of EWC is to slow down the learning of parameters that are important for previously trained tasks~\cite{kirkpatrick2017overcoming}. To measure the contribution of each model parameter to a task, EWC first computes the diagonal of the Fisher information matrix of the previous task as follows:
\begin{align}
F_i = F_{ii} = \mathbb{E}_{x \sim D, y \sim f_{\theta^{*}}(y|x)} \left[ \left. \frac{\partial \log f_{\theta}(y|x)}{\partial \theta_i} \right|_{\theta = \theta^{*}}^2 \right]
\label{eq:EWC-Fisher}
\end{align}
where $f_{\theta^{*}}(y|x)$ is the probability distribution obtained by applying the softmax to the output logits of the model with parameters $\theta^{*}$ given an input $x$, and $D$ is the training dataset of the previous task. The entire Fisher information matrix is given by $F_{ij} = \mathbb{E}_{x \sim D, y \sim f_{\theta^{*}}(y|x)} \left[ \left. \frac{\partial \log f_{\theta}(y|x)}{\partial \theta_i} \cdot \frac{\partial \log f_{\theta}(y|x)}{\partial \theta_j} \right|_{\theta = \theta^{*}} \right]$, which defines a Riemannian metric on the parameter space.

Intuitively, to prevent the model from forgetting prior tasks when learning a new task, the learned parameter $\theta$ should stay close to the parameter $\theta^{*}$ of the prior tasks when the new data also contains information relevant to $\theta^{*}$. Algorithmically, we penalize the distance between $\theta_i$ and $\theta^{*}_i$ when the $i$-th diagonal entry of the Fisher information matrix is large. Specifically, EWC adds a regularization term to the loss function for training on a new task, i.e.,
\begin{align}
\mathcal{L}_{EWC}(\theta) = \mathcal{L}_{basic}(\theta) + \frac{\lambda}{2}\sum_{i} F_i \left( \theta_i - \theta_i^{*} \right)^2
\label{eq:EWC-loss}
\end{align}
where $\mathcal{L}_{basic}(\theta)$ is the loss that optimizes the performance on the new task (e.g., a cross-entropy loss); $\lambda$ controls the strength of the regularization, indicating the importance of memorizing old tasks; $\theta^{*}$ are the parameters trained on the previous task; $F$ is the Fisher information matrix associated with $f_{\theta^{*}}$, and $F_i$ is the diagonal entry corresponding to the $i$-th parameter. We can further extend this idea to the transfer learning setting, where the fine-tuning data belongs to a different task from the pre-trained one. In this case, the adversary can first fine-tune the pre-trained watermarked model with a small learning rate, which results in a model for his new task, although the watermarks usually still exist. Afterwards, the adversary can treat the parameters of this new model as $\theta^{*}$, and plug them into Equation~\ref{eq:EWC-Fisher} correspondingly.
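To illustrate how these quantities can be estimated in practice, the following PyTorch-style sketch computes an approximation of the diagonal Fisher entries in Equation~\ref{eq:EWC-Fisher} and the penalty term of Equation~\ref{eq:EWC-loss}; \texttt{data\_loader} is a hypothetical placeholder, and the batch-mean gradient is a simplification of the per-sample expectation rather than our exact implementation:
\begin{verbatim}
import torch
import torch.nn.functional as F

def diagonal_fisher(model, data_loader, n_batches):
    # Accumulate squared gradients of log f(y|x), with y sampled from
    # the model's own predictive distribution (a batch-level
    # approximation of the Fisher definition above).
    fisher = [torch.zeros_like(p) for p in model.parameters()]
    for b, (x, _) in enumerate(data_loader):
        if b >= n_batches:
            break
        log_probs = F.log_softmax(model(x), dim=1)
        y = torch.multinomial(log_probs.exp(), 1).squeeze(1)
        loss = F.nll_loss(log_probs, y)
        grads = torch.autograd.grad(loss, list(model.parameters()))
        for f_i, g in zip(fisher, grads):
            f_i += g.detach() ** 2
    return [f_i / n_batches for f_i in fisher]

def ewc_penalty(model, star_params, fisher, lam):
    # (lambda / 2) * sum_i F_i * (theta_i - theta*_i)^2,
    # cf. the EWC regularizer above.
    return 0.5 * lam * sum(
        (f * (p - p0) ** 2).sum()
        for f, p, p0 in zip(fisher, model.parameters(), star_params))
\end{verbatim}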
Notice that since we do not have access to the pre-training data, in principle we are not able to compute the Fisher information matrix of the previous task, and thus cannot calculate the regularization term in $\mathcal{L}_{EWC}(\theta)$. However, by leveraging the assumption that the fine-tuning data is drawn from a distribution similar to that of the pre-training data, we can instead approximate the Fisher matrix using the fine-tuning data. Given that the fine-tuning data contains no watermarks, the EWC component causes the fine-tuning to make smaller updates to the model weights that are important for achieving good test accuracy, while the weights important for watermark memorization are still sufficiently modified. Although the approximation obtained in this way could be imprecise, in Section~\ref{sec:eval} we show that this technique enables the adversary to improve the test performance of the model with limited data, while the watermarks are successfully removed.

With the same goal of preserving the test performance of the model while removing the watermarks, we propose augmentation with unlabeled data, which further decreases the amount of in-distribution labeled training samples needed for this purpose.

\paragraph{Augmentation with Unlabeled data (AU).} We propose to augment the fine-tuning data with unlabeled samples, which can easily be collected from the Internet. Let $\mathcal{U}=\{x_u\}_{u=1}^U$ be the unlabeled sample set; we use the pre-trained model as the labeling tool, i.e., $y_u=f_\theta(x_u)$ for each $x_u \in \mathcal{U}$. We have tried more advanced semi-supervised techniques to utilize the unlabeled data, e.g., virtual adversarial training~\cite{miyato2018virtual} and entropy minimization~\cite{grandvalet2005semi}, but none of them provides a significant gain over the aforementioned simple approach. Therefore, unless otherwise specified, we use this method for our evaluation of unlabeled data augmentation. Similar to our discussion of extending EWC to transfer learning, we can also apply this technique in the transfer learning setting, by first fine-tuning the model for the new task without considering watermark removal, and then using this model for labeling. Since the test accuracy of the pre-trained model is not $100\%$ itself, such label annotation is inherently noisy; in particular, when $\mathcal{U}$ is drawn from a different distribution than the task under consideration, the assigned labels may not be meaningful at all. Nevertheless, they still enable the fine-tuned model to better mimic the prediction behavior of the pre-trained model. In Section~\ref{sec:eval}, we show that leveraging unlabeled data significantly decreases the number of in-distribution labeled samples needed for effective watermark removal, while preserving the model performance.

\section{Related Work}
Aside from attacks that infringe the intellectual property of a machine learning model, a variety of attacks have been proposed against machine learning models, aiming at either manipulating model predictions~\cite{gu2017badnets,shafahi2018poison,szegedy2013intriguing} or revealing sensitive information from trained models~\cite{shokri2017membership,hayes2019logan,fredrikson2014privacy,fredrikson2015model}. We also review work on the catastrophic forgetting phenomenon in deep learning, as it inspires the use of the EWC loss in our watermark removal scheme.
\paragraph{Model watermarking.} To protect the intellectual property of deep neural networks, prior works have proposed several watermarking schemes~\cite{zhang2018protecting,adi2018turning,namba2019robust,merrer2017adversarial}. For pattern-based techniques, watermark images are blended with the same pattern~\cite{zhang2018protecting}. Such techniques are also commonly used for backdoor attacks~\cite{chen2017targeted,gu2017badnets,liu2017trojaning}. Other watermarking schemes utilize individual images as watermarks~\cite{adi2018turning,namba2019robust,merrer2017adversarial}. Existing watermark removal approaches are largely designed for pattern-based techniques~\cite{wang2019neural,gao2019strip,chen2019deepinspect,guo2019tabor}. Meanwhile, prior general-purpose watermark removal techniques require a large amount of training data to effectively remove the watermarks~\cite{yang2019effectiveness,liu2018fine}.

\paragraph{Backdoor attacks.} In the context of machine learning, backdoor attacks manipulate the model into providing the predictions specified by the adversary on inputs associated with the backdoor key. In this sense, backdoor attacks are closely connected to watermarks in their format, but usually with different purposes, as discussed in~\cite{adi2018turning}. Previous work has shown that deep neural networks are vulnerable to backdoor attacks~\cite{chen2017targeted,gu2017badnets}. Accordingly, several defense methods have been proposed against backdoor attacks~\cite{wang2019neural,gao2019strip,chen2019deepinspect,guo2019tabor}.

\paragraph{Poisoning and evasion attacks.} Poisoning attacks inject well-crafted data into the training set to alter the predictive performance of a deep neural network. Besides backdoor injection, other attack goals include degrading the test accuracy indiscriminately, and changing the predictions of specific examples~\cite{biggio2012poisoning,nelson2008exploiting,li2016data,munoz2017towards,koh2017understanding,shafahi2018poison}. In contrast to poisoning attacks, evasion attacks are launched at test time of a machine learning model. The resulting samples are called adversarial examples, which are visually similar to normal data but lead to wrong predictions by the model~\cite{biggio2013evasion,szegedy2013intriguing,goodfellow2014explaining,carlini2017towards}. Note that we leverage adversarial examples as the watermarks for the ADV watermarking scheme.

\paragraph{Catastrophic forgetting.} Catastrophic forgetting refers to the phenomenon that a neural network model tends to underperform on old tasks when it is trained sequentially on multiple tasks. This occurs because the model weights that are important for an old task are changed to meet the objectives of a new task. Many approaches have recently been proposed to counteract this effect, such as adjusting weights~\cite{kirkpatrick2017overcoming,zenke2017continual}, and adding data from past tasks to the training of the new task~\cite{lopez2017gradient,shin2017continual}. In particular, elastic weight consolidation is a classic algorithm for mitigating catastrophic forgetting by adapting the learning of specific weights to their importance for previous tasks~\cite{kirkpatrick2017overcoming}. Note that the original EWC algorithm requires access to the data used for learning the old tasks, which is not available in our case. Therefore, we propose an adaptation of the algorithm to make it suitable for our watermark removal application.
\section{Evaluation Setup}
\label{sec:eval-setup}
In this section, we introduce the benchmarks and the watermark embedding schemes used in our evaluation, and discuss the details of our experimental configurations.

\subsection{Datasets}
\label{sec:datasets}
We evaluate on CIFAR-10~\cite{krizhevsky2009learning}, CIFAR-100~\cite{krizhevsky2009learning}, STL-10~\cite{coates2011analysis} and ImageNet32~\cite{chrabaszcz2017downsampled}, which are popular benchmarks for image classification, some of which have been widely used in previous work on watermarking~\cite{adi2018turning,zhang2018protecting,namba2019robust}.

\paragraph{CIFAR-10.} CIFAR-10 includes coloured images of 10 classes, where each class has 5,000 images for training and 1,000 images for testing. Each image is of size $32 \times 32$.

\paragraph{CIFAR-100.} CIFAR-100 includes coloured images of 100 classes, where each class has 500 images for training and 100 images for testing, so the total number of training samples is the same as for CIFAR-10. The size of each image is also $32 \times 32$.

\paragraph{STL-10.} STL-10 has been widely used for evaluating transfer learning, semi-supervised and unsupervised learning algorithms, featuring a large number of unlabeled samples. Specifically, STL-10 includes 10 labels, where each label has 500 training samples and 800 test samples. Besides the labeled samples, STL-10 also provides 100,000 unlabeled images drawn from a similar but broader distribution of images; i.e., they include images with labels that do not belong to the STL-10 label set. The size of each image is $96 \times 96$, much larger than in CIFAR-10 and CIFAR-100. Although the label sets of STL-10 and CIFAR-10 largely overlap, the images from the two datasets are distinguishable, even after resizing them to the same size.

\paragraph{ImageNet32.} ImageNet32 is a downsampled version of the ImageNet dataset~\cite{deng2009imagenet}. Specifically, ImageNet32 includes all samples in the training and validation sets of the original ImageNet, except that the images are resized to $32 \times 32$. As in the original ImageNet, this dataset has 1.28 million training samples in 1000 classes, and 50,000 validation samples with 50 images per class.

\subsection{Watermarking Techniques}
To demonstrate the effectiveness of {REFIT} against various watermark embedding schemes, we evaluate pattern-based techniques~\cite{zhang2018protecting,chen2017targeted,gu2017badnets}, embedding samples drawn from other data sources as the watermarks~\cite{adi2018turning,zhang2018protecting,chen2017targeted}, exponential weighting~\cite{namba2019robust}, and adversarial frontier stitching~\cite{merrer2017adversarial}. These techniques represent the typical approaches to watermark embedding studied in the literature, and are shown to be the most effective ones against watermark removal.

\paragraph{Pattern-based techniques (Pattern).} A pattern-based technique specifies a key pattern $key$ and a target label $y^t$, so that for any image $x$ blended with the pattern $key$, $\Pr(f_\theta(x)=y^t)$ is high. To achieve this, the owner generates a set of images $\{x^k\}_{k=1}^K$ blended with $key$, assigns $y^k=y^t$ $(k = 1, \ldots, K)$, and then adds $\{(x^k, y^k)\}_{k=1}^K$ to the training set. See Figure~\ref{fig:pattern-ood-watermark-ex} for some watermark samples.

\paragraph{Out-of-distribution watermark embedding (OOD).} A line of work has studied using images from other data sources than the original training set as the watermarks.
Figure~\ref{fig:pattern-ood-watermark-ex} presents some watermarks used in~\cite{adi2018turning}, where each watermark image is independently assigned a random label, so different watermarks can have different labels. Meanwhile, these images are very different from the samples in any benchmark we evaluate on, and do not belong to any category in the label set.

\paragraph{Exponential weighting (EW).} Compared to the above watermarking techniques, exponential weighting introduces two main design differences~\cite{namba2019robust}. The first concerns watermark generation. Specifically, they change the labels of some training samples to different random labels, but do not modify the images themselves. The main motivation is to defend against the detection attacks mentioned in Section~\ref{sec:intro}, i.e., an adversary who steals the model could use an outlier detection scheme to detect input images that are far from the data distribution of interest, so as to bypass the watermark verification. The second concerns the embedding method. Instead of jointly training the model on both the normal training set and the watermark set, they decompose the training process into three stages. They first train the model on the normal training set only. Afterwards, they apply an exponential weight operator to each model parameter. Specifically, for the parameters in the $l$-th layer of the model, denoted $\theta^l$, $EW(\theta^l, T)_i = \frac{\exp{\left| \theta^l_i T \right| }}{\max_j \exp{\left| \theta^l_j T \right|}} \theta^l_i$, where $T$ is a hyper-parameter that adjusts the intensity of the weighting. Finally, the model with the exponential weighting scheme is further trained on both the normal training data and the watermarks.

\paragraph{Adversarial frontier stitching (ADV).} In~\cite{merrer2017adversarial}, they propose to use adversarially perturbed images as the watermarks. Specifically, the model is first trained on the normal training set only. Afterwards, they generate a watermark set that is made up of 50\% true adversaries, i.e., adversarially perturbed images on which the model provides wrong predictions, and 50\% false adversaries, i.e., adversarially perturbed images on which the model still predicts the correct labels. The adversarial perturbations are computed using the fast gradient sign method~\cite{goodfellow2014explaining}, i.e., $x^{\text{adv}} = x + \epsilon \cdot \mathrm{sign}(\nabla_x J(\theta, x, y))$, where $J(\theta, x, y)$ is the training loss function of the model, and $\epsilon$ controls the scale of the perturbation. Each of these images is annotated with the ground truth label of its unperturbed counterpart as its watermark label, i.e., the label of $x^{\text{adv}}$ is $y$, no matter whether it is a true or false adversary. Finally, the model is fine-tuned with these watermarks added to the training set. See Figure~\ref{fig:ew-afs-watermark-ex} for examples of watermarks generated by this technique.
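For illustration, the perturbation step of this scheme can be sketched as follows (PyTorch-style; \texttt{loss\_fn} is a hypothetical handle to the training loss $J$, and pixels are assumed to be scaled to $[0,1]$):
\begin{verbatim}
import torch

def fgsm_watermark(model, loss_fn, x, y, eps):
    # x_adv = x + eps * sign(grad_x J(theta, x, y)); the watermark
    # label of x_adv remains the ground truth label y.
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()
\end{verbatim}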
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.15\linewidth}
\includegraphics[width=\linewidth]{fig/pattern-watermark-txt.png}
\caption{}
\label{}
\end{subfigure}
\begin{subfigure}[t]{0.15\linewidth}
\includegraphics[width=\linewidth]{fig/pattern-watermark-ex.png}
\caption{}
\label{}
\end{subfigure}
\begin{subfigure}[t]{0.15\linewidth}
\includegraphics[width=\linewidth]{fig/pattern-watermark-ex-1.png}
\caption{}
\label{}
\end{subfigure}
\begin{subfigure}[t]{0.2\linewidth}
\includegraphics[width=\linewidth]{fig/instance-watermark-ex.jpg}
\caption{}
\label{}
\end{subfigure}
\begin{subfigure}[t]{0.2\linewidth}
\includegraphics[width=\linewidth]{fig/instance-watermark-ex-1.jpg}
\caption{}
\label{}
\end{subfigure}
\vspace{-0.5em}
\caption{(a), (b), and (c) are examples of watermarks generated by the pattern-based technique in~\cite{zhang2018protecting}. Specifically, after an image is blended with the ``TEST'' pattern in (a), such an image is classified as the target label, e.g., an ``automobile'' on CIFAR-10. (d) and (e) are examples of watermarks generated by the out-of-distribution watermark embedding technique in~\cite{adi2018turning}, where different watermarks could have different assigned labels.}
\label{fig:pattern-ood-watermark-ex}
\vspace{-0.5em}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.2\linewidth}
\includegraphics[width=\linewidth]{fig/watermark_examples/EW/00000.png}
\caption{}
\label{}
\end{subfigure}
\begin{subfigure}[t]{0.2\linewidth}
\includegraphics[width=\linewidth]{fig/watermark_examples/AFS/1.png}
\caption{}
\label{}
\end{subfigure}
\begin{subfigure}[t]{0.2\linewidth}
\includegraphics[width=\linewidth]{fig/watermark_examples/EW/00003.png}
\caption{}
\label{}
\end{subfigure}
\begin{subfigure}[t]{0.2\linewidth}
\includegraphics[width=\linewidth]{fig/watermark_examples/AFS/4.png}
\caption{}
\label{}
\end{subfigure}
\caption{Examples of watermarks generated by exponential weighting~\cite{namba2019robust} and adversarial frontier stitching~\cite{merrer2017adversarial}. (a) and (c) are generated by exponential weighting; they are images from the ImageNet32 training set, but assigned random labels different from the ground truth; for example, the watermark label of (a) is ``trash can''. (b) and (d) are generated by the adversarial frontier stitching technique, which adds adversarial perturbations to (a) and (c) respectively, but keeps the ground truth classes as their watermark labels; for example, the watermark label of (b) is still ``dog''.}
\label{fig:ew-afs-watermark-ex}
\vspace{-0.5em}
\end{figure}
\vspace{-1em}
\subsection{Attack Scenarios}
\label{sec:attack-scenarios}
We consider the following attack scenarios in our evaluation.

\paragraph{Non-transfer learning.} The adversary leverages a watermarked model pre-trained for the same task as the one the adversary targets. For this scenario, we evaluate on CIFAR-10, CIFAR-100, and ImageNet32. For CIFAR-10 and CIFAR-100, the watermarked model is pre-trained on the entire training set; for ImageNet32, the pre-trained model uses the training images with labels smaller than 500, i.e., the first 500 classes. To simulate the scenario where the adversary can only collect a relatively small number of labeled training samples drawn from a distribution similar to the pre-training data, we vary the proportion of the entire training set that the adversary has access to; in practice, however, the fine-tuning dataset does not necessarily need to be a subset of the pre-training dataset.
We consider two data sources with abundant images for unlabeled data augmentation: (1) the unlabeled part of STL-10, which includes 100,000 samples; and (2) for classification on CIFAR-10 and CIFAR-100, the entire ImageNet32. For classification on ImageNet32, only the training samples from the last 500 classes are included for unlabeled data augmentation. In both cases, we discard the labels of these ImageNet32 samples, and only use the images for augmentation. Note that these unlabeled images are very different from the labeled data. In particular, the label sets of CIFAR-100 and STL-10 barely overlap, and the label set of ImageNet32 is much more fine-grained than those of CIFAR-10 and CIFAR-100.

\paragraph{Transfer learning.} The adversary leverages a watermarked model pre-trained for a task different from the one the adversary targets. For this scenario, our evaluation centers on achieving good performance on STL-10. Note that the labeled part of STL-10 only includes 5,000 samples, which is insufficient for training a model with high accuracy. Therefore, an adversary can leverage a model pre-trained on another task with a larger training set, and then fine-tune it on STL-10. This fine-tuning method is widely adopted for transfer learning~\cite{yosinski2014transferable}, and is also evaluated in~\cite{adi2018turning}. In particular, we perform transfer learning to adapt a model trained on CIFAR-10 or ImageNet32 to STL-10. We do not consider CIFAR-100 in this setting, because we find that adapting from a pre-trained CIFAR-100 model results in inferior performance on STL-10 compared to CIFAR-10 and ImageNet32; e.g., the accuracy on STL-10 is around $5\%$ lower than for the model pre-trained on CIFAR-10, as presented in~\cite{adi2018turning}. We perform the unlabeled data augmentation in the same way as in the non-transfer learning setting.

\subsection{Implementation Details}
\label{sec:implementation-details}
\paragraph{Watermarking schemes.} Our watermarking configurations largely follow the setups of the original papers. We tune the hyper-parameters to ensure that the pre-trained model achieves 100\% watermark accuracy for each scheme, i.e., for all embedded watermark samples, the model prediction is the same as the assigned watermark label. Also, the prediction confidence scores for these watermark samples are high, e.g., above 0.85, suggesting the strength of the embedded watermark. We directly use the open-source implementations when applicable. Specifically:
\begin{itemize}[leftmargin=*,noitemsep,topsep=0em]
\item \emph{Pattern-based techniques.} We use the text pattern in~\cite{zhang2018protecting}, and Figure~\ref{fig:pattern-ood-watermark-ex} presents some examples of the generated watermarks.
\item \emph{OOD watermark techniques.} The watermark images are from the code repository of~\cite{adi2018turning}~\footnote{\url{https://github.com/adiyoss/WatermarkNN}.}. The watermark set contains 100 individual images with labels randomly drawn from the entire label set, and Figure~\ref{fig:pattern-ood-watermark-ex} shows some examples.
\item \emph{Exponential weighting.} We set $T=2.0$ as in \cite{namba2019robust}. For each dataset, we use the last 100 samples from the training set to form the watermark set, and ensure that these watermark samples are never included in the fine-tuning training set.
\item \emph{Adversarial frontier stitching.} We set $\epsilon$ so that the watermark accuracy of a model trained without watermarks is around 50\%. The values of $\epsilon$ are 0.15, 0.10 and 0.05 for CIFAR-10, CIFAR-100 and ImageNet32, respectively.
\end{itemize}
\paragraph{Watermark removal techniques.} We always fine-tune the entire model for {REFIT}, because we find that fine-tuning only the output layer is insufficient for watermark removal, as demonstrated in~\cite{adi2018turning}; moreover, by design it completely fails to remove watermarks in the transfer learning setting. We have tried both the FTAL and RTAL processes in~\cite{adi2018turning}. Specifically, FTAL directly fine-tunes the entire model; with RTAL, the output layer is randomly initialized before fine-tuning. For non-transfer learning, we apply the FTAL method, as RTAL does not provide an additional performance gain; for transfer learning, we apply the RTAL method, since the label sets of the pre-training and fine-tuning datasets are different. We observe that as long as the pre-trained model achieves a high test accuracy and fits the watermarks well, the model architecture does not have a critical influence on the effectiveness of watermark embedding and removal. Thus, unless otherwise specified, we mainly apply the ResNet-18 model~\cite{he2016deep} in our evaluation, which achieves competitive performance on all benchmarks in our evaluation.

As discussed in Section~\ref{sec:method}, the failure of previous fine-tuning-based watermark removal approaches is mainly due to an improper learning rate schedule for fine-tuning. For example, the initial learning rate for fine-tuning is $0.001$ in~\cite{adi2018turning}, which is $100\times$ smaller than the initial learning rate for pre-training. In our evaluation, we set the initial fine-tuning learning rate to be much larger, e.g., $0.05$. We use SGD as the optimizer and set the batch size to 100 for both pre-training and fine-tuning without unlabeled data, following the setup in~\cite{adi2018turning}. We fine-tune the model until the training accuracy does not further improve, which typically happens within 20 epochs, as in~\cite{adi2018turning}. For unlabeled data augmentation, when there are no in-distribution labeled samples, each batch includes 100 unlabeled samples. When fine-tuning on CIFAR-10, CIFAR-100 and STL-10, we decay the learning rate by 0.9 every 500 steps. When fine-tuning on partial ImageNet32, the learning rate is multiplied by $0.9^t$ after training on a $\frac{t}{10}$ fraction of the entire training set. More discussion of implementation details is in Appendix~\ref{app:exp}. In Section~\ref{sec:eval}, we denote this basic version of {REFIT} without EWC and AU as~\emph{Basic}.

For our EWC component, the Fisher information is approximated by drawing $M$ samples from the in-distribution labeled data available to the adversary. Unless otherwise specified, we set $M=10,000$ when the target domain is CIFAR-10, CIFAR-100 or STL-10, and $M=40,000$ when the target domain is ImageNet32. Notice that the samples are drawn with replacement, so $M$ can be larger than the number of training examples available, in which case the same example may be used multiple times. In practice, to improve the stability of the optimization, we first normalize the Fisher matrix $F$ so that its maximum entry is $1$, and then clip its entries at $\frac{1}{\lambda \cdot lr}$ before plugging it into Equation (\ref{eq:EWC-loss}), where $lr$ is the learning rate.
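For clarity, this normalization and clipping step can be sketched as follows, reusing the per-parameter list representation of the diagonal Fisher from the earlier sketch; this is an illustrative simplification rather than our exact implementation:
\begin{verbatim}
import torch

def normalize_and_clip_fisher(fisher, lam, lr):
    # Scale the diagonal Fisher so that its maximum entry is 1, then
    # clip every entry at 1 / (lam * lr) for numerical stability.
    max_entry = torch.stack([f.max() for f in fisher]).max()
    cap = 1.0 / (lam * lr)
    return [(f / max_entry).clamp(max=cap) for f in fisher]
\end{verbatim}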
In addition, we compare with two baseline methods, denoted as~\emph{FS}, both of which train the entire model from scratch, so that the model is guaranteed to have a watermark accuracy no higher than the decision threshold, though the test accuracy is typically sub-optimal, especially when the training data is limited. The basic version simply trains on the dataset available for fine-tuning, without leveraging the pre-trained model. The second variant, denoted as~\emph{AU}, applies the pre-trained model as the labeling tool in the same way as the AU module in {REFIT}, but the model is randomly initialized instead of being initialized from the pre-trained model. \paragraph{Evaluation metrics.} We consider the two metrics below. \begin{itemize}[leftmargin=*,noitemsep,topsep=0em] \item \emph{Watermark accuracy.} The adversary needs to make sure that the model accuracy on the watermark set is no more than the watermark decision threshold $\gamma$, i.e., the model predictions match the assigned watermark labels for no more than a fraction $\gamma$ of the watermark inputs. In particular, we set $\gamma$ based on the range of watermark accuracies of models trained without watermarks. Specifically, we trained 10 models from different random initializations without watermarks, evaluated their watermark accuracies, and set the threshold to ensure that there is no false positive, i.e., none of these models has a watermark accuracy higher than $\gamma$. For watermark schemes other than ADV, we set $\gamma$ to be $20\%$ for CIFAR-10, $10\%$ for CIFAR-100, and $3\%$ for ImageNet32. We set $\gamma=58\%$ for all benchmarks when using ADV, following~\cite{merrer2017adversarial}. Notice that in the transfer learning setting, due to the difference between the label sets of the pre-training and fine-tuning tasks, the embedded watermarks naturally do not apply to the new model. To measure the watermark accuracy, following~\cite{adi2018turning}, we replace the output layer of the fine-tuned model with the original output layer of the pre-trained model. \item \emph{Test accuracy.} The adversary also aims to maximize the model accuracy on the normal test set. We evaluate the top-1 accuracy. \end{itemize} Regarding the presentation of evaluation results in the next section, unless otherwise specified, we only present the test accuracies of the models. The watermark accuracy of the pre-trained model embedded with any watermarking scheme in our evaluation is $100\%$, and the watermark accuracy of the model after watermark removal using {REFIT} is always below the threshold $\gamma$; a minimal check of this criterion is sketched below.
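As a complement to the description above, the following minimal sketch shows how the watermark-accuracy criterion could be evaluated in PyTorch; the helper names and the tensor-based interface are our own assumptions for illustration.
\begin{verbatim}
import torch

@torch.no_grad()
def watermark_accuracy(model, wm_inputs, wm_labels):
    # Fraction of watermark inputs on which the model still predicts
    # the assigned watermark label.
    model.eval()
    preds = model(wm_inputs).argmax(dim=1)
    return (preds == wm_labels).float().mean().item()

def watermark_removed(model, wm_inputs, wm_labels, gamma):
    # Removal succeeds once the watermark accuracy is no more than the
    # decision threshold, e.g., gamma = 0.20 for CIFAR-10.
    return watermark_accuracy(model, wm_inputs, wm_labels) <= gamma
\end{verbatim}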
\section{Introduction}\label{intro} \noindent Let $d(n)$ denote the number of divisors of $n$, and for $k\geq 2$ let $d_{k}(n)$ denote the number of ways of writing $n$ as a product of $k$ integers. Using the following Dirichlet series \begin{equation}\label{s1} \sum_{n=1}^{\infty} \frac{d(n)^{2}}{n^{s}} = \frac{\zeta^{4}(s)}{\zeta(2s)}, \quad \sum_{n=1}^{\infty} \frac{d_{k}(n)}{n^{s}} = \zeta^{k}(s), \quad (\Re(s) >1), \end{equation} we have, via Perron's formula, that \begin{equation}\label{s2} \sum_{n\leq x} d_{k}(n) \sim \frac{1}{(k-1)!} x (\log x)^{k-1}, \end{equation} and \begin{equation}\label{s2a} \sum_{n\leq x} d(n)^{2} \sim \frac{1}{\pi^{2}} x (\log x)^{3}. \end{equation} The purpose of this article is to consider good explicit versions of (\ref{s2a}) and of (\ref{s2}) when $k=4$. The rolled-gold example is $\sum_{n\leq x} d(n)$, which gives a bound of the form (\ref{s2}) for $k=2$. Berkan\'{e}, Bordell\`{e}s and Ramar\'{e} \cite[Thm.\ 1.1]{Berk} gave several pairs of values $(\alpha, x_{0})$ such that \begin{equation}\label{study} \sum_{n\leq x} d(n) = x(\log x + 2 \gamma -1) + \Delta(x), \end{equation} holds with $|\Delta(x)| \leq \alpha x^{1/2}$ for $x\geq x_0$, and where $\gamma$ is Euler's constant. One such pair, which we shall use frequently, is $\alpha=0.397$ and $x_0= 5560$. The best known bound for (\ref{study}) is $\Delta(x) = O(x^{131/416 + \epsilon})$ by Huxley \cite{Huxley}. It seems hopeless to give a bound on the implied constant in this estimate. Therefore weaker\footnote{It is worth mentioning Theorem 1.2 in \cite{Berk}, which gives $|\Delta(x)|\leq 0.764 x^{1/3} \log x$ for $x\geq 9995$. This could be used in our present approach, but the improvement is only apparent for very large $x$.}, yet still explicit, bounds such as those in \cite{Berk} are very useful in applications. Bordell\`{e}s \cite{Border} considered the sum in (\ref{s2}), and showed that when $k\geq 2$ and $x\geq1$ we have \begin{equation}\label{hall} \sum_{n\leq x} d_{k}(n) \leq x \left( \log x + \gamma + \frac{1}{x}\right)^{k-1}. \end{equation} This exceeds the asymptotic estimate in (\ref{s2}) by a factor of $(k-1)!$. Nicolas and Tenenbaum (see \cite{Border}, pg.\ 2) were able to match the leading constant in (\ref{s2}) by showing that, for fixed $k$ and $x\geq 1$, we have \begin{equation}\label{lounge} \sum_{n\leq x} d_{k}(n) \leq \frac{x}{(k-1)!} \left( \log x +(k-1)\right)^{k-1}. \end{equation} We improve these results in our first theorem. \begin{thm}\label{parlour} For $x\geq 2$ we have \begin{equation}\label{chaise} \sum_{n\leq x} d_{4}(n) = C_1 x\log^3 x + C_2 x\log^2x + C_3 x\log x + C_4x + \vartheta(4.48 x^{3/4} \log x), \end{equation} where $$C_{1} = 1/6, \quad C_{2} = 0.654\ldots, \quad C_{3} = 0.981\ldots, \quad C_{4} = 0.272\ldots,$$ are exact constants given in (\ref{d4}), and where, here and throughout, $\vartheta(y)$ denotes a quantity that is at most $y$ in absolute value. Furthermore, when $x\geq 193$ we have \begin{equation}\label{sun} \sum_{n\leq x} d_{4}(n) \leq \frac{1}{3} x \log ^{3} x. \end{equation} \end{thm} \noindent It may be noted that the bound in (\ref{chaise}) is sharper than those in (\ref{hall}) and (\ref{lounge}) for all $x\geq 2$. Other bounds of the form (\ref{sun}) are possible: we have selected one that is nice and neat, and valid when $x$ is not too large. Sums of $d_{k}(n)$ can be used to obtain upper bounds on class numbers of number fields. Let $K$ be a number field of degree $n_{K}= [K:\mathbb{Q}]$ and discriminant $d_{K}$.
Also, let $r_{1}$ (resp.\ $r_{2}$) denote the number of real embeddings (resp.\ pairs of complex embeddings) of $K$, so that $n_{K} = r_{1} + 2r_{2}$. Finally, let $$b= b_{K} = \left( \frac{n_{K}!}{n_{K}^{n_{K}}}\right) \left(\frac{4}{\pi}\right)^{r_{2}}|d_{K}|^{1/2},$$ denote the Minkowski bound, and let $h_{K}$ denote the class number. Lenstra \cite[\S 6]{Lenstra} (see also Bordell\`{e}s \cite[Lem.\ 1]{Border}) proved that \begin{equation}\label{duck} h_{K} \leq \sum_{m\leq b} d_{n_{K}}(m). \end{equation} We note that we need only upper bounds on (\ref{s2}) to give bounds on the class number. Bordell\`{e}s used (\ref{hall}) in its weaker form \begin{equation}\label{games} \sum_{n\leq x} d_{k}(n) \leq 2x(\log x)^{k-1}, \quad (x\geq 6), \end{equation} to this end. The bound obtained on $h_{K}$ is not the sharpest possible for all degrees. For example, much more work has been done on quadratic extensions (see \cite{Le}, \cite{Louboutin}, and \cite{RamL}). Using Theorem \ref{parlour} we are able to give an improved bound on the class number of quartic number fields. \begin{cor}\label{chalk} Let $K$ be a quartic number field with class number $h_{K}$ and Minkowski bound~$b$. Then, if $b\geq 193$ we have $$ h_{K} \leq \frac{1}{3} b\log^3 b.$$ \end{cor} We note that it should be possible to use Corollary \ref{chalk} to slightly improve Lemma 13 in \cite{Deb} when $n=4$. Turning to the sum in (\ref{s2a}), we first state the following result by Ramanujan \cite{Ram2}: \begin{equation}\label{bravo} \sum_{n\leq x} d(n)^{2} - x(A \log^{3} x + B \log^{2} x + C\log x + D) \ll x^{3/5 + \epsilon}. \end{equation} The error in (\ref{bravo}) was improved by Wilson \cite{Wilson} to $x^{1/2 + \epsilon}$. The constants $A, B, C, D$ can (these days) be obtained via Perron's formula. Of note is an elementary result by Gowers \cite{Gowers}, namely, that \begin{equation}\label{kitchen} \sum_{n\leq x} d(n)^{2} \leq x(\log x + 1)^{3} \leq 2x (\log x)^{3}, \quad (x\geq 1). \end{equation} This is used by Kadiri, Lumley, and Ng \cite{Kadiri} in their work on zero-density estimates for the zeta-function. Although one would expect some lower-order terms, the bound in (\ref{kitchen}) is a factor of $2\pi^{2} \approx 19.7$ times the asymptotic bound in (\ref{s2a}), whence one should be optimistic about obtaining a saving. We obtain such a saving in our second main result. \begin{thm}\label{billiard} For $x\geq 2$ we have $$\sum_{n\leq x} d(n)^{2} = D_1 x\log^3 x + D_2 x\log^2x + D_3 x\log x + D_4x + \vartheta \left(9.73 x^\frac{3}{4} \log x + 0.73 x^\frac{1}{2} \right),$$ where $$D_{1} = \frac{1}{\pi^2}, \quad D_{2} = 0.745 \ldots, \quad D_{3} = 0.824 \ldots, \quad D_{4} = 0.461\ldots,$$ are exact constants given in (\ref{dn2}). Furthermore, for $x\geq x_j$ we have $$\sum_{n\leq x} d(n)^2 \leq K x \log ^{3} x,$$ where one may take $\{K, x_j\}$ to be, among others, $\{\tfrac{1}{4}, 433\}$ or $\{1, 7\}$. \end{thm} The outline of this article is as follows. Theorem \ref{parlour} is proved in Section \ref{boat}. A similar process would give good explicit bounds on $\sum_{n\leq x} d_{k}(n)$; we have not pursued this, but the potential for doing so is discussed. We then prove Theorem \ref{billiard} in Section \ref{ship}. \section{Bounding $d_{4}(n)$}\label{boat} Since $d_{4}(n) = d(n) * d(n)$, where $*$ denotes Dirichlet convolution, the hyperbola method gives us that \begin{equation*} \sum_{n\leq x} d_{4}(n) = 2 \sum_{a\leq \sqrt{x}} d(a) \sum_{n\leq x/a} d(n) - \left( \sum_{n\leq \sqrt{x}} d(n)\right)^{2}.
\end{equation*} Using (\ref{study}), we arrive at \begin{equation}\label{daisy} \begin{split} \sum_{n\leq x} d_{4}(n) &= 2x \left[ \left(\log x + 2\gamma -1\right) S_{1} (\sqrt{x}) - S_{2}(\sqrt{x})\right] + 2\sum_{a\leq \sqrt{x}} d(a) \Delta \left(\frac{x}{a} \right) \\ &\quad - \left\{\sqrt{x}\left(\frac{1}{2} \log x + 2\gamma -1\right) + \Delta(\sqrt{x})\right\}^{2}, \end{split} \end{equation} where \begin{equation*} S_{1}(x) = \sum_{n\leq x} \frac{d(n)}{n}, \quad \text{and} \quad S_{2}(x) = \sum_{n\leq x} \frac{d(n)\log n}{n}. \end{equation*} The absolute value of the sum $2\sum_{a\leq \sqrt{x}} d(a) \Delta(x/a)$ on the right-hand side of (\ref{daisy}) can be bounded above by $$2 \alpha x^{1/2} S_{3}(\sqrt{x}),$$ where $$S_{3}(x) = \sum_{n\leq x} \frac{d(n)}{\sqrt{n}}.$$ We can approximate $S_1$, $S_2$, and $S_3$ with partial summation and the bound in (\ref{study}). We note that for the applications in \S \ref{ship} we need only concern ourselves with values of $x\geq 1$. Berkan\'{e}, Bordell\`{e}s, and Ramar\'{e} \cite[Cor.\ 2.2]{Berk} give a bound for $S_{1}(x)$. As noted by Platt and Trudgian in \cite[\S 2.1]{Platt}, their constant $1.16$ should be replaced by $1.641$ as in Riesel and Vaughan \cite[Lem.\ 1]{RV}. To obtain an error term in Theorem \ref{parlour} of size $x^{3/4} \log x$ we should like an error term in $S_{1}(x)$ of size $x^{-1/2}$, which is right at the limit of what is achievable. We follow the method used by Riesel and Vaughan \cite[pp.\ 48--50]{RV} to write \begin{equation}\label{chef} S_{1}(x) = \frac{1}{2} \log^{2} x + 2 \gamma \log x + \gamma^{2} - 2 \gamma_{1} + \vartheta(c x^{-1/2}), \end{equation} with $c=1.001$ for $x\geq 6 \cdot 10^5$. One can directly check that this also holds for $2\leq x < 6 \cdot 10^5$. Taking larger values of $x$ reduces the constant $c$, but not to anything less than unity. For $S_{2}$, we have \begin{equation}\label{S2} S_{2}(x) = \frac{1}{3} \log^{3} x + \gamma \log^{2} x + 2\gamma\gamma_{1} - \gamma_{2} + E_{2}(x), \end{equation} where \begin{equation*} E_{2}(x) = \frac{\Delta(x)\log x}{x} - \int_{x}^{\infty} \frac{(\log t -1)\Delta(t)}{t^{2}}\, dt. \end{equation*} Using the bound in (\ref{study}) with $\alpha=0.397$ and $x_0=5560$ we have \begin{equation*} |E_{2}(x)| \leq \alpha\left( 3 + \frac{2}{\log x_{0}}\right) x^{-1/2} \log x, \qquad (x\geq x_{0}). \end{equation*} Lastly, for $S_{3}$ we have \begin{equation}\label{S3} S_{3}(x) = 2x^{1/2} \log x + 4(\gamma -1)x^{1/2} + E_{3}(x), \end{equation} where \begin{equation}\label{box} E_{3}(x) = \frac{\Delta(x)}{x^{1/2}} - \frac{1}{2} \int_{x}^{\infty} \frac{\Delta(t)}{t^{3/2}}\, dt. \end{equation} For the integral in (\ref{box}) to converge we need to use a bound of the form $\Delta(t)\ll t^{1/2 - \delta}$. The only such explicit bound we know of is Theorem 1.2 in \cite{Berk}: as pointed out in \S \ref{intro}, this improves on other results only for large values of $x$. Instead, since $S_{3}(x)$ makes a relatively small contribution to the total error, we can afford a slightly larger bound on $E_{3}(x)$. Writing the error as \begin{equation*} E_{3}(x) = 3 - 2\gamma + \frac{\Delta(x)}{x^{1/2}} + \frac{1}{2} \int_{1}^{x} \frac{\Delta(t)}{t^{3/2}}\, dt, \end{equation*} we can apply the bound in (\ref{study}) and the triangle inequality to get \begin{align*} |E_{3}(x)| & \leq (3-2\gamma) + \alpha + \frac{1}{2} \alpha \log x \\ & \leq \log x\left(\frac{\alpha}{2} + \frac{3 - 2\gamma + \alpha}{\log x_{0}}\right):= \beta \log x, \qquad (x\geq x_{0}).
\end{align*} Thus, the bounds in (\ref{chef}), (\ref{S2}), and (\ref{S3}) can be used in (\ref{daisy}) to prove Theorem \ref{parlour}: we obtain \begin{align} \label{d4} \sum_{n\leq x} d_{4}(n) &= C_1 x\log^3 x + C_2 x\log^2x + C_3 x\log x + C_4x + E(x) \end{align} where $$|E(x)| \leq F_1 x^\frac{3}{4}\log x,$$ and we have \begin{align*} &C_1=\frac{1}{6}, \quad C_2= 2\gamma-\frac{1}{2} , \quad C_3= 6\gamma^2-4\gamma-4\gamma_1+1 , \\ & C_4= 4\gamma^3-6\gamma^2+ 4\gamma -12\gamma \gamma_1 + 4\gamma_1+2\gamma_2 -1 , \text{ and } F_1= 2c+6\alpha+ \frac{2\alpha}{\log x_{0}}. \end{align*} Recalling that $c=1.001$, $\alpha = 0.397$, and $x_{0} = 5560$, we prove the theorem for $x\geq 5560^2$. We directly calculated the partial sums of $d_4(n)$ to confirm that the bound in (\ref{chaise}) also holds for $2 \leq x < 5560^2$. We could bound the partial sums of $d_{k}(n)$ by generalising the previous method. When $k$ is even, one can use $d_{k}(n) = d_{k-2}(n) * d(n)$, and when $k$ is odd, one can use $d_{k}(n) = d_{k-1}(n) * 1$. We have not pursued this, but for small values of $k$, this is likely to lead to decent bounds. We expect that the error term could potentially blow up under repeated applications of this process. One may also consider a more direct approach using a `hyperboloid' method, that is, considering $d_{3}(n) = 1 * 1 * 1.$ We have not considered this here. \section{Bound on $\sum_{n\leq x} d(n)^2$}\label{ship} We define $$H(s) = \sum_{n=1}^\infty \frac{h(n)}{n^{s}} = \frac{1}{\zeta(2s)},$$ and let $$H^{*}(s) = \sum_{n\geq 1} |h(n)| n^{-s} = \prod_{p} \left( 1 + \frac{1}{p^{2s}}\right) = \frac{\zeta(2s)}{\zeta(4s)},$$ whence both $H(s)$ and $H^{*}(s)$ converge for $\Re(s)> \frac{1}{2}$. Referring to (\ref{s1}), we therefore have $d(n)^2 = d_4(n) * h(n)$. Hence, we can write $$\sum_{n\leq x} d(n)^2 = \sum_{a\leq x} h(a) \sum_{b\leq \frac{x}{a}} d_4(b).$$ This leads to the bound \begin{align} \label{dn2} \sum_{n\leq x} d(n)^2 = D_1 x\log^3 x + D_2 x\log^2x + D_3 x\log x + D_4x + \sum_{a\leq x} h(a)E\left(\frac{x}{a}\right), \end{align} where \begin{align*} D_1&=C_1 H(1),\quad D_2=C_2H(1)+3C_1H'(1),\quad D_3=C_3H(1)+2C_2H'(1)+3C_1H''(1), \\ D_4&=C_4H(1)+C_3H'(1)+C_2H''(1)+C_1H^{(3)}(1). \end{align*} Furthermore, we require a bound on $E(x)$ which holds for $x\geq 1$. Adapting the error term in (\ref{d4}) to hold for the desired range, we can write $$\left| \sum_{a\leq x} h(a)E\left(\frac{x}{a}\right) \right| \leq F_1 H^{*}(\tfrac{3}{4}) x^\frac{3}{4} \log x + (1-C_4)x^\frac{1}{2}.$$ We also note the following exact values, for ease of further calculations: \begin{equation}\label{tea} \begin{split} H(1) &= \frac{6}{\pi^{2}}, \quad H'(1) = -\frac{72 \zeta'(2)}{\pi^{4}}, \quad H''(1) = \frac{1728 \zeta'(2)^{2}}{\pi^{6}} - \frac{144 \zeta''(2)}{\pi^{4}},\\ H^{(3)}(1)&= -\frac{62208 \zeta'(2)^{3}}{\pi^{8}} + \frac{10368 \zeta'(2) \zeta''(2)}{\pi^{6}} - \frac{288 \zeta^{(3)}(2)}{\pi^{4}}.\\ \end{split} \end{equation} As an aside, our result in Theorem \ref{billiard} could have been achieved with Ramar\'{e}'s general result in \cite[Lem.\ 3.2]{Ram}, as modified in \cite[Lem.\ 14]{TQ} (with the constants repaired as in \cite{RV}), but some further generalisation would have been necessary. Instead, we proceeded directly as above. \subsection*{Acknowledgements} We wish to thank Olivier Ramar\'{e} for many rollicking conversations on this topic.
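\subsection*{A numerical sanity check}
The clean bounds (\ref{sun}) and the pair $\{K, x_j\} = \{\tfrac{1}{4}, 433\}$ in Theorem \ref{billiard} can be checked empirically for small $x$ by sieving $d(n)$, forming $d_{4} = d * d$ by Dirichlet convolution, and comparing the partial sums against $\frac{1}{3}x\log^{3}x$ and $\frac{1}{4}x\log^{3}x$. The short Python sketch below, with an illustrative cut-off of $N = 5000$, is offered only as a sanity check of the stated thresholds; it plays no role in the proofs.
\begin{verbatim}
import math

def divisor_sieve(N):
    # d[n] = number of divisors of n, for 1 <= n <= N.
    d = [0] * (N + 1)
    for a in range(1, N + 1):
        for m in range(a, N + 1, a):
            d[m] += 1
    return d

def check_clean_bounds(N=5000):
    d = divisor_sieve(N)
    # Dirichlet convolution d_4 = d * d, sieved in O(N log N) steps.
    d4 = [0] * (N + 1)
    for a in range(1, N + 1):
        for m in range(a, N + 1, a):
            d4[m] += d[a] * d[m // a]
    s4 = s2 = 0
    for n in range(1, N + 1):
        s4 += d4[n]
        s2 += d[n] ** 2
        L3 = n * math.log(n) ** 3 if n > 1 else 0.0
        assert n < 193 or s4 <= L3 / 3   # Theorem 1, valid for x >= 193
        assert n < 433 or s2 <= L3 / 4   # Theorem 2, valid for x >= 433
    print("clean bounds hold for all integers up to", N)

check_clean_bounds()
\end{verbatim}
Since both partial sums are step functions that jump only at integers, while $x\log^{3}x$ is increasing for $x > 1$, checking the bounds at integer values of $x$ suffices.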